**Meet the editor**

Dr. Mueller received his Diploma degree in mathematics from the University of Applied Sciences, Mittweida, Germany, in 1997, a Diploma degree in electrical engineering from the University of Northumbria, Newcastle, U.K., in 1998, and Diploma and Ph.D. degrees in mechanical engineering from the Technical University Chemnitz, Chemnitz, Germany, in 2001 and 2004, respectively. From 1989 to 1993, he worked professionally as an electronics technician. From 1998 to 2008, he was a research assistant with the Institute of Mechatronics, Chemnitz. Since 2008, he has been a Lecturer with the Chair of Mechanics and Robotics, University Duisburg-Essen, Germany. His research interests include dynamics and control of non-linear systems, mechanisms, robotics, parallel mechanisms, singularities, differential geometric methods, advanced simulation methods, biomechanics, microelectromechanical systems, and chaotic systems.

## Contents

**Preface IX**

**Part 1 Novel Approaches in Robust Control 1**

Chapter 1 **Robust Stabilization by Additional Equilibrium 3**  Viktor Ten

Chapter 2 **Robust Control of Nonlinear Time-Delay Systems via Takagi-Sugeno Fuzzy Models 21**  Hamdi Gassara, Ahmed El Hajjaji and Mohamed Chaabane

Chapter 3 **Observer-Based Robust Control of Uncertain Fuzzy Models with Pole Placement Constraints 39**  Pagès Olivier and El Hajjaji Ahmed

Chapter 4 **Robust Control Using LMI Transformation and Neural-Based Identification for Regulating Singularly-Perturbed Reduced Order Eigenvalue-Preserved Dynamic Systems 59**  Anas N. Al-Rabadi

Chapter 5 **Neural Control Toward a Unified Intelligent Control Design Framework for Nonlinear Systems 91**  Dingguo Chen, Lu Wang, Jiaben Yang and Ronald R. Mohler

Chapter 6 **Robust Adaptive Wavelet Neural Network Control of Buck Converters 115**  Hamed Bouzari, Miloš Šramek, Gabriel Mistelbauer and Ehsan Bouzari

Chapter 7 **Quantitative Feedback Theory and Sliding Mode Control 139**  Gemunu Happawana

Chapter 8 **Integral Sliding-Based Robust Control 165**  Chieh-Chuan Feng

Chapter 9 **Self-Organized Intelligent Robust Control Based on Quantum Fuzzy Inference 187**  Ulyanov Sergey

Chapter 10 **New Practical Integral Variable Structure Controllers for Uncertain Nonlinear Systems 221**  Jung-Hoon Lee

Chapter 11 **New Robust Tracking and Stabilization Methods for Significant Classes of Uncertain Linear and Nonlinear Systems 247**  Laura Celentano

**Part 2 Special Topics in Robust and Adaptive Control 271**

Chapter 12 **Robust Feedback Linearization Control for Reference Tracking and Disturbance Rejection in Nonlinear Systems 273**  Cristina Ioana Pop and Eva Henrietta Dulf

Chapter 13 **Robust Attenuation of Frequency Varying Disturbances 291**  Kai Zenger and Juha Orivuori

Chapter 14 **Synthesis of Variable Gain Robust Controllers for a Class of Uncertain Dynamical Systems 311**  Hidetoshi Oya and Kojiro Hagino

Chapter 15 **Simplified Deployment of Robust Real-Time Systems Using Multiple Model and Process Characteristic Architecture-Based Process Solutions 341**  Ciprian Lupu

Chapter 16 **Partially Decentralized Design Principle in Large-Scale System Control 361**  Anna Filasová and Dušan Krokavec

Chapter 17 **A Model-Free Design of the Youla Parameter on the Generalized Internal Model Control Structure with Stability Constraint 389**  Kazuhiro Yubai, Akitaka Mizutani and Junji Hirai

Chapter 18 **Model Based *μ*-Synthesis Controller Design for Time-Varying Delay System 405**  Yutaka Uchimura

Chapter 19 **Robust Control of Nonlinear Systems with Hysteresis Based on Play-Like Operators 423**  Jun Fu, Wen-Fang Xie, Shao-Ping Wang and Ying Jin

Chapter 20 **Identification of Linearized Models and Robust Control of Physical Systems 439**  Rajamani Doraiswami and Lahouari Cheded


## Preface

This two-volume book 'Recent Advances in Robust Control' covers a selection of recent developments in the theory and application of robust control. The first volume focuses on recent theoretical developments in the area of robust control and applications to robotic and electromechanical systems. The second volume is dedicated to special topics in robust control and problem-specific solutions. It comprises 20 chapters divided into two parts.

The first part of this second volume focuses on novel approaches and the combination of established methods.

Chapter 1 presents a novel approach to robust control adopting ideas from catastrophe theory. The proposed method augments the control system with nonlinear terms so that the augmented system possesses equilibrium states that guarantee robustness.

Fuzzy system models allow representing complex and uncertain control systems. The design of controllers for such systems is addressed in Chapters 2 and 3. Chapter 2 addresses the control of systems with variable time-delay by means of Takagi-Sugeno (T-S) fuzzy models. In Chapter 3 the pole placement constraints are studied for T-S models with structured uncertainties in order to design robust controllers for T-S fuzzy uncertain models with specified performance.

Artificial neural networks (ANN) are ideal candidates for model-free representation of dynamical systems in general and control systems in particular. A method for system identification using recurrent ANN and the subsequent model reduction and controller design is presented in Chapter 4.

In Chapter 5 a hierarchical ANN control scheme is proposed. It is shown how this scheme may accommodate different control purposes.

An alternative robust control method based on adaptive wavelet-based ANN is introduced in Chapter 6. Its basic design principle and its properties are discussed. As an example this method is applied to the control of an electrical buck converter.

Sliding mode control is known to achieve good performance, but at the expense of chattering in the control variable. It is shown in Chapter 7 that combining quantitative feedback theory and sliding mode control can alleviate this phenomenon.


An integral sliding mode controller is presented in Chapter 8 to account for the sensitivity of the sliding mode controller to uncertainties. The robustness of the proposed method is proven for a class of uncertainties.

Chapter 9 attacks the robust control problem from the perspective of quantum computing and self-organizing systems. It is outlined how the robust control problem can be represented in an information theoretic setting using entropy. A toolbox for the robust fuzzy control using self-organizing features and quantum arithmetic is presented.

Integral variable structure control is discussed in Chapter 10.

In Chapter 11 novel robust control techniques are proposed for linear and pseudolinear SISO systems. In this chapter several statements are proven for PD-type controllers in the presence of parametric uncertainties and external disturbances.

The second part of this volume is reserved for problem-specific solutions tailored to specific applications.

In Chapter 12 the feedback linearization principle is applied to robust control of nonlinear systems.

The control of vibrations of an electric machine is reported in Chapter 13. A robust controller design is presented that is able to tackle frequency-varying disturbances.

In Chapter 14 the uncertainty problem in dynamical systems is approached by means of a variable gain robust control technique.

The applicability of multi-model control schemes is discussed in Chapter 15.

Chapter 16 addresses the control of large systems by application of partially decentralized design principles. This approach aims at partitioning the overall design problem into a number of constrained controller design problems.

Generalized internal model control has been proposed to tackle the performance-robustness dilemma. Chapter 17 proposes a method for the design of the Youla parameter, which is an important variable in this concept.

In Chapter 18 the robust control of systems with variable time-delay is addressed with the help of *μ*-theory. The *μ*-synthesis design concept is presented and applied to a geared motor.

The presence of hysteresis in a control system is always challenging, and its adequate representation is vital. In Chapter 19 a new hysteresis model is proposed and incorporated into a robust backstepping control scheme.


The identification and *H*∞ controller design of a magnetic levitation system is presented in Chapter 20.

**Andreas Mueller**  
University Duisburg-Essen, Chair of Mechanics and Robotics  
Germany

**Part 1**

**Novel Approaches in Robust Control**


## **Robust Stabilization by Additional Equilibrium**

Viktor Ten  
*Center for Energy Research, Nazarbayev University, Kazakhstan*

## **1. Introduction**

There is a huge number of developed methods for the design of robust control, and some of them have even become classical. Commonly, all of them are dedicated to defining the ranges of parameters (if parameter uncertainty is present) within which the system will function with the desired properties and, first of all, will be stable. Thus there are many works which successfully attenuate uncertain changes of parameters within small ranges (relative to the magnitudes of their nominal values). But no existing method can guarantee the stability of the designed control system over arbitrarily large ranges of uncertainly changing plant parameters. The offered approach originates from the study of the results of catastrophe theory, where nonlinear structurally stable functions are called 'catastrophes'. It is known that catastrophe theory deals with several functions which are characterized by their stable structure. Today there are many classifications of these functions, but originally they were discovered as seven basic nonlinearities named 'catastrophes':

$$\begin{aligned}
& x^3 + k_1 x \quad \text{(fold)};\\
& x^4 + k_2 x^2 + k_1 x \quad \text{(cusp)};\\
& x^5 + k_3 x^3 + k_2 x^2 + k_1 x \quad \text{(swallowtail)};\\
& x^6 + k_4 x^4 + k_3 x^3 + k_2 x^2 + k_1 x \quad \text{(butterfly)};\\
& x_2^3 + x_1^3 + k_1 x_2 x_1 - k_2 x_2 + k_3 x_1 \quad \text{(hyperbolic umbilic)};\\
& x_2^3 - 3 x_2 x_1^2 + k_1 \left(x_1^2 + x_2^2\right) - k_2 x_2 - k_3 x_1 \quad \text{(elliptic umbilic)};\\
& x_2^2 x_1 + x_1^4 + k_1 x_2^2 + k_2 x_1^2 - k_3 x_2 - k_4 x_1 \quad \text{(parabolic umbilic)}.
\end{aligned}$$

Studying the dynamical properties of these catastrophes has prompted the development of a design method for a nonlinear controller, a continuously differentiable function, that brings the following properties to the new dynamical system:

1. a new equilibrium point (one or several) appears, so there are at least two equilibrium points in the newly designed system;
2. these equilibrium points are stable, but not simultaneously, i.e. if one exists (is stable) then the other does not exist (is unstable);
3. the stability of the equilibrium points is determined by the values, or the relations between the values, of the parameters of the system;
4. whatever the values or relations of the parameters turn out to be, at any time there will be one and only one stable equilibrium point to which the system will tend, and thus it will be stable.


Based on these conditions, the given approach focuses on generating the equilibria to which the system will tend in the case that a perturbed parameter takes a value from ranges that are unstable for the original system. In contrast to classical methods of control theory, instead of adding zeros and poles, the approach adds equilibria to increase the stability, and sometimes the performance, of the control system.

Another benefit of the method is that in some cases of plant nonlinearity we do not need to linearize, but can use the nonlinear term to generate the desired equilibria. The efficiency of the method can be proved analytically for simple mathematical models, as in section 2 below, and by simulation when the dynamics of the plant is quite complicated.

Nowadays there are many studies on the interplay of control theory and catastrophe theory that are very close to the offered approach or have similar ideas for stabilizing the uncertain dynamical plant. The main distinctions of the offered approach are the following:


Further, in section 2 we consider second-order systems as the justification of the presented method of additional equilibria. In section 3 we consider different applications taken from well-known examples to show the control design technique. As a classic academic example we consider the stabilization of a mass-damper-spring system with unknown stiffness coefficient. As a high-order SISO system we consider the positioning of the center of oscillations of the ACC Benchmark. As an alternative opportunity we consider the stabilization of a submarine's angle of attack.

## **2. SISO systems with control plant of second order**

Let us consider the cases of two integrator blocks in series, the canonical controllable form, and the Jordan form. In the first case we use one of the catastrophe functions, and in the other two cases we offer two nonlinear functions of our own as the controller.

### **2.1 Two integrator blocks in series**

Let us suppose that the control plant is represented by two integrator blocks in series (Fig. 1) and described by equations (2.1).

Fig. 1. Two integrator blocks in series: the input *u* passes through 1/(*T2s*) to give *x2*, which passes through 1/(*T1s*) to give *x1 = y*.


$$\begin{cases} \dfrac{dx_1}{dt} = \dfrac{1}{T_1}x_2, \\[1mm] \dfrac{dx_2}{dt} = \dfrac{1}{T_2}u. \end{cases} \tag{2.1}$$

Let us use one of the catastrophe functions as the controller:

$$u = -x_2^3 + 3x_2x_1^2 - k_1\left(x_1^2 + x_2^2\right) + k_2x_2 + k_3x_1, \tag{2.2}$$

and in order to study stability of the system let us suppose that there is no input signal in the system (equal to zero). Hence, the system with proposed controller can be presented as:

$$\begin{cases} \dfrac{dx_1}{dt} = \dfrac{1}{T_1}x_2, \\[1mm] \dfrac{dx_2}{dt} = \dfrac{1}{T_2}\left(-x_2^3 + 3x_2x_1^2 - k_1\left(x_1^2 + x_2^2\right) + k_2x_2 + k_3x_1\right), \end{cases} \qquad y = x_1. \tag{2.3}$$

The system (2.3) has the following equilibrium points:

$$x_{1s}^{1} = 0, \quad x_{2s}^{1} = 0; \tag{2.4}$$

$$x_{1s}^{2} = \frac{k_3}{k_1}, \quad x_{2s}^{2} = 0. \tag{2.5}$$
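As a quick sanity check (our own sketch, not part of the original text; the default gains below are the values used later in the Fig. 3 experiment), one can verify that the right-hand side of (2.3) vanishes at both (2.4) and (2.5):

```python
# Right-hand side of system (2.3); default gains are the Fig. 3 values
# k1=2, k2=-3, k3=-1, while T1 and T2 are arbitrary nonzero choices.
def rhs(x1, x2, k1=2.0, k2=-3.0, k3=-1.0, T1=10.0, T2=5.0):
    dx1 = x2 / T1
    dx2 = (-x2**3 + 3*x2*x1**2 - k1*(x1**2 + x2**2) + k2*x2 + k3*x1) / T2
    return dx1, dx2

print(rhs(0.0, 0.0))   # equilibrium (2.4): both derivatives vanish
print(rhs(-0.5, 0.0))  # equilibrium (2.5): x1 = k3/k1 = -0.5, vanishes again
```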

Equilibrium (2.4) is the origin, typical for all linear systems. Equilibrium (2.5) is additional, generated by the nonlinear controller, and provides stable motion of the system (2.3) towards it. The stability conditions for equilibrium point (2.4), obtained via linearization, are

$$\begin{cases} -\frac{k\_2}{T\_2} > 0, \\\\ \frac{k\_3}{T\_1 T\_2} < 0. \end{cases} \tag{2.6}$$

The stability conditions of the equilibrium point (2.5) are

$$\begin{cases} -\frac{3k\_3^2 + k\_2k\_1^2}{k\_1^2T\_2} > 0, \\\\ \frac{k\_3}{T\_1T\_2} > 0. \end{cases} \tag{2.7}$$

By comparing the stability conditions given by (2.6) and (2.7) we find that the signs of the expressions in the second inequalities are opposite. We can also see that the signs of the expressions in the first inequalities can be made opposite, due to the squares of the parameters *k1* and *k3*, if we set their values properly.
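These linearized conditions are easy to check numerically. The sketch below (our own illustration, using the gain values of the Fig. 2 experiment, *k1=1*, *k2=-5*, *k3=-2*, *T1=100*) evaluates the Jacobian of (2.3) at the two equilibria and confirms that exactly one of them is stable for each sign of *T2*:

```python
import numpy as np

# Jacobian of system (2.3): dx1/dt = x2/T1,
# dx2/dt = (-x2^3 + 3*x2*x1^2 - k1*(x1^2 + x2^2) + k2*x2 + k3*x1)/T2
def jacobian(x1, x2, k1, k2, k3, T1, T2):
    return np.array([
        [0.0, 1.0 / T1],
        [(6.0*x2*x1 - 2.0*k1*x1 + k3) / T2,
         (-3.0*x2**2 + 3.0*x1**2 - 2.0*k1*x2 + k2) / T2],
    ])

k1, k2, k3, T1 = 1.0, -5.0, -2.0, 100.0  # gains of the Fig. 2 experiment

for T2 in (1000.0, -1000.0):
    stable_origin = np.linalg.eigvals(jacobian(0.0, 0.0, k1, k2, k3, T1, T2)).real.max() < 0
    stable_added = np.linalg.eigvals(jacobian(k3/k1, 0.0, k1, k2, k3, T1, T2)).real.max() < 0
    # For T2 > 0 only the origin (2.4) is stable; for T2 < 0 only (2.5) is.
    print(T2, stable_origin, stable_added)
```

This is exactly the hand-over of stability between the two equilibria that conditions (2.6) and (2.7) predict.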



Let us suppose that the parameter *T1* can be perturbed but remains positive. If we set *k2* and *k3* both negative and such that $3k_3^2 > -k_2k_1^2$, then the value of the parameter *T2* is irrelevant: it can take any value, positive or negative (except zero), and the system given by (2.3) remains stable. If *T2* is positive then the system converges to the equilibrium point (2.4) (which becomes stable). Likewise, if *T2* is negative then the system converges to the equilibrium point (2.5), which appears (becomes stable); at this moment the equilibrium point (2.4) becomes unstable (disappears).

Now let us suppose that *T2* is positive, or can be perturbed while staying positive. If we set *k2* and *k3* both negative and such that $3k_3^2 < -k_2k_1^2$, then it does not matter what value (negative or positive, except zero) the parameter *T1* takes; in any case the system (2.3) will be stable. If *T1* is positive then equilibrium point (2.4) appears (becomes stable) and equilibrium point (2.5) becomes unstable (disappears); vice versa, if *T1* is negative then equilibrium point (2.5) appears (becomes stable) and equilibrium point (2.4) becomes unstable (disappears).

Results of a MatLab simulation for the first and second cases are presented in Figs. 2 and 3, respectively. In both cases we see how the phase trajectories converge to the equilibrium points $(0,0)$ and $\left(\dfrac{k_3}{k_1};\,0\right)$.

In Fig. 2 the phase portrait of the system (2.3) is shown at constant *k1=1*, *k2=-5*, *k3=-2*, *T1=100* and various (perturbed) *T2* (from *-4500* to *4500* with step *1000*), with initial condition *x=(-1;0)*. In Fig. 3 the phase portrait of the system (2.3) is shown at constant *k1=2*, *k2=-3*, *k3=-1*, *T2=1000* and various (perturbed) *T1* (from *-450* to *450* with step *100*), with initial condition *x=(-0.25;0)*.
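For readers without the original figures, the Fig. 2 experiment can be re-created numerically. The following sketch is our own illustration: the `simulate` routine, its step size, and the horizon are our choices, not from the chapter; it integrates (2.3) with a fixed-step Runge-Kutta scheme and reports where the trajectory from *x=(-1;0)* settles:

```python
# Numerical re-creation of the Fig. 2 experiment: integrate system (2.3)
# with k1=1, k2=-5, k3=-2, T1=100 from x=(-1;0) and observe which
# equilibrium the trajectory approaches for each sign of T2.
def f(x1, x2, k1, k2, k3, T1, T2):
    dx1 = x2 / T1
    dx2 = (-x2**3 + 3*x2*x1**2 - k1*(x1**2 + x2**2) + k2*x2 + k3*x1) / T2
    return dx1, dx2

def simulate(T2, k1=1.0, k2=-5.0, k3=-2.0, T1=100.0,
             x1=-1.0, x2=0.0, dt=0.5, t_end=40000.0):
    for _ in range(int(t_end / dt)):
        # classical fourth-order Runge-Kutta step
        a1, a2 = f(x1, x2, k1, k2, k3, T1, T2)
        b1, b2 = f(x1 + dt/2*a1, x2 + dt/2*a2, k1, k2, k3, T1, T2)
        c1, c2 = f(x1 + dt/2*b1, x2 + dt/2*b2, k1, k2, k3, T1, T2)
        d1, d2 = f(x1 + dt*c1, x2 + dt*c2, k1, k2, k3, T1, T2)
        x1 += dt/6*(a1 + 2*b1 + 2*c1 + d1)
        x2 += dt/6*(a2 + 2*b2 + 2*c2 + d2)
    return x1, x2

print(simulate(T2=1500.0))   # T2 > 0: approaches the origin (2.4)
print(simulate(T2=-1500.0))  # T2 < 0: approaches (k3/k1, 0) = (-2, 0), eq. (2.5)
```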

Fig. 2. Behavior of designed control system in the case of integrators in series at various *T2*.


Fig. 3. Behavior of designed control system in the case of integrators in series at various *T1*.

### **2.2 Canonical controllable form**

Let us suppose that the control plant is presented (or reduced) in the canonical controllable form:

$$\begin{cases} \frac{d\mathbf{x}\_1}{dt} = \mathbf{x}\_2, \\ \frac{d\mathbf{x}\_2}{dt} = -a\_2\mathbf{x}\_1 - a\_1\mathbf{x}\_2 + \mathbf{u}. \end{cases}$$

$$\mathbf{y} = \mathbf{x}\_1 \tag{2.8}$$

Let us choose the controller in the following parabolic form:

$$
u = -k_1 x_1^2 + k_2 x_1 \tag{2.9}
$$

Thus, the new control system becomes nonlinear:

$$\begin{cases} \frac{dx_1}{dt} = x_2, \\ \frac{dx_2}{dt} = -a_2 x_1 - a_1 x_2 - k_1 x_1^2 + k_2 x_1, \end{cases}$$

$$y = x_1. \tag{2.10}$$

and has the following two equilibrium points:

$$\mathbf{x}\_{1s}^{1} = \mathbf{0} \; \; \; \mathbf{x}\_{2s}^{1} = \mathbf{0} \; \; \; \; \tag{2.11}$$

$$\mathbf{x}\_{1s}^{2} = \frac{k\_{2} - a\_{2}}{k\_{1}}, \ \mathbf{x}\_{2s}^{2} = \mathbf{0} \ ; \tag{2.12}$$


Stability conditions for equilibrium points (2.11) and (2.12) respectively are

$$\begin{cases} a\_1 > 0, \\ a\_2 > k\_2. \end{cases}$$

$$\begin{cases} a\_1 > 0, \\ a\_2 < k\_2 \end{cases}$$


Here equilibrium (2.12) is additional and provides stability to the system (2.10) in the case when *k2* is negative.
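This mechanism can be checked numerically. The sketch below integrates (2.10) for two signs of *a2*; the values *a1 = 2*, *k1 = 1*, *k2 = -0.5* and the perturbed *a2* are illustrative choices, not the chapter's simulation parameters.

```python
import numpy as np

# System (2.10): dx1/dt = x2, dx2/dt = -a2*x1 - a1*x2 - k1*x1**2 + k2*x1.
# Gains below are illustrative, not taken from the chapter.
A1, K1, K2 = 2.0, 1.0, -0.5          # a1 > 0, k1 > 0, k2 < 0

def simulate(a2, x0, t_end=60.0, dt=0.01):
    """Integrate (2.10) with classical RK4 and return the final x1."""
    def f(x):
        return np.array([x[1], -a2*x[0] - A1*x[1] - K1*x[0]**2 + K2*x[0]])
    x = np.array(x0, float)
    for _ in range(int(t_end/dt)):
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x += dt*(k1 + 2*k2 + 2*k3 + k4)/6
    return x[0]

# a2 > k2: conditions for (2.11) hold, the origin attracts the trajectory.
x_origin = simulate(a2=1.0, x0=[0.5, 0.0])
# a2 < k2 (a2 has flipped sign): the additional equilibrium (2.12)
# x1 = (k2 - a2)/k1 = 1.5 becomes stable and catches the trajectory.
x_added = simulate(a2=-2.0, x0=[1.0, 0.0])
```

Whichever sign *a2* takes, one of the two equilibria is stable, which is the point of the added parabolic term.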

### **2.3 Jordan form**

Let us suppose that the dynamical system is presented in Jordan form and described by the following equations:

$$\begin{cases} \frac{d\mathbf{x}\_1}{dt} = \rho\_1 \mathbf{x}\_{1\prime} \\ \frac{d\mathbf{x}\_2}{dt} = \rho\_2 \mathbf{x}\_2. \end{cases} \tag{2.13}$$

Here we can use the fact that the states are decoupled from each other and add three equilibrium points. Hence, the control law is chosen in the following form:

$$
\mu\_1 = -k\_a \mathbf{x}\_1^2 + k\_b \mathbf{x}\_1, \ \mu\_2 = -k\_a \mathbf{x}\_2^2 + k\_c \mathbf{x}\_2 \tag{2.14}
$$

Hence, the system (2.13) with set control (2.14) is:

$$\begin{cases} \frac{dx_1}{dt} = \rho_1 x_1 - k_a x_1^2 + k_b x_1, \\ \frac{dx_2}{dt} = \rho_2 x_2 - k_a x_2^2 + k_c x_2. \end{cases} \tag{2.15}$$

In total, due to the designed control (2.14) we have four equilibria:

$$x_{1s}^{1} = 0, \; x_{2s}^{1} = 0 \; ; \tag{2.16}$$

$$\mathbf{x}\_{1s}^{2} = \mathbf{0} \,, \; \mathbf{x}\_{2s}^{2} = \frac{\rho\_{2} + k\_{c}}{k\_{a}} \; ; \tag{2.17}$$

$$x_{1s}^{3} = \frac{\rho_1 + k_b}{k_a}, \; x_{2s}^{3} = 0 \; ; \tag{2.18}$$

$$\mathbf{x}\_{1s}^{4} = \frac{\rho\_1 + k\_b}{k\_a} \quad \mathbf{x}\_{2s}^{4} = \frac{\rho\_2 + k\_c}{k\_a} \; ; \tag{2.19}$$

Stability conditions for the equilibrium point (2.16) are:


$$\begin{cases} \rho_1 + k_b < 0, \\ \rho_2 + k_c < 0. \end{cases}$$

Stability conditions for the equilibrium point (2.17) are:

$$\begin{cases} \rho_1 + k_b < 0, \\ \rho_2 + k_c > 0. \end{cases}$$

Stability conditions for the equilibrium point (2.18) are:

$$\begin{cases} \rho_1 + k_b > 0, \\ \rho_2 + k_c < 0. \end{cases}$$

Stability conditions for the equilibrium point (2.19) are:

$$\begin{cases} \rho_1 + k_b > 0, \\ \rho_2 + k_c > 0. \end{cases}$$

These four equilibria provide stable motion of the system (2.15) at any values of the unknown parameters *ρ1* and *ρ2*, positive or negative. By the parameters *ka*, *kb*, *kc* we can set the coordinates of the added equilibria; hence the trajectory of the system's motion will be globally bounded within a rectangle whose corners are the equilibria coordinates (2.16), (2.17), (2.18), (2.19) themselves.
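Since the two states in (2.15) are decoupled, each coordinate obeys a scalar logistic-type equation, and which equilibrium attracts can be checked directly. The gains *ka = kb = kc = 1* and the ρ values below are assumed for illustration.

```python
import numpy as np

KA, KB, KC = 1.0, 1.0, 1.0           # illustrative control gains for (2.14)

def settle(rho1, rho2, x0=(0.5, 0.5), t_end=40.0, dt=0.01):
    """Integrate the decoupled system (2.15) with RK4; return the final state."""
    def f(x):
        return np.array([(rho1 + KB)*x[0] - KA*x[0]**2,
                         (rho2 + KC)*x[1] - KA*x[1]**2])
    x = np.array(x0, float)
    for _ in range(int(t_end/dt)):
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x += dt*(k1 + 2*k2 + 2*k3 + k4)/6
    return x

# rho1 + kb > 0 and rho2 + kc < 0: x1 -> (rho1 + kb)/ka = 1.5, x2 -> 0,
# i.e. the trajectory settles on the equilibrium of type (2.18).
x_mixed = settle(rho1=0.5, rho2=-2.0)
# Both sums negative: the origin (2.16) attracts.
x_origin = settle(rho1=-3.0, rho2=-2.0)
```

Flipping the sign of either ρ merely hands the trajectory to a different corner of the rectangle; the motion stays bounded in every case (for initial states on the positive side of the unstable equilibria).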

### **3. Applications**

### **3.1 Unknown stiffness in mass-damper-spring system**

Let us apply our approach in a widely used academic example such as mass-damper-spring system (Fig. 4).

Fig. 4.

The dynamics of such a system is described by the following second-order differential equation, obtained by Newton's second law:

$$
m\ddot{x} + c\dot{x} + kx = u \tag{3.1}
$$

where x is the displacement of the mass block from the equilibrium position and F = u is the force acting on the mass, with m the mass, c the damper constant and k the spring constant.

We consider the case when *k* is an unknown parameter. Positivity or negativity of this parameter defines compression or decompression of the spring. In a realistic system it can be unknown if the spring was exposed to thermal or moisture action for a long time. Let us represent the system (3.1) by the following equations, which correspond to the structural diagram shown in Fig. 5:

$$\begin{cases} \dot{\mathbf{x}}\_1 = \mathbf{x}\_{2'}\\ \dot{\mathbf{x}}\_2 = \frac{1}{m}(-k\mathbf{x}\_1 - c\mathbf{x}\_2) + \frac{1}{m}u. \end{cases} \tag{3.2}$$


Fig. 5.

Let us set the controller in the form:

$$
u = k_u x_1^2 \tag{3.3}
$$

Hence, system (3.2) is transformed to:

$$\begin{cases} \dot{\mathbf{x}}\_1 = \mathbf{x}\_{2'} \\ \dot{\mathbf{x}}\_2 = \frac{1}{m}(-k\mathbf{x}\_1 - c\mathbf{x}\_2) + \frac{1}{m}k\_u\mathbf{x}\_1^2. \end{cases} \tag{3.4}$$

The designed control system (3.4) has two equilibria:

$$\mathbf{x}\_1 = \mathbf{0} \; \; \; \mathbf{x}\_2 = \mathbf{0} \; \; \; \; \tag{3.5}$$

that is original, and

$$\mathbf{x}\_1 = \frac{k}{k\_u} \; \; \; x\_2 = 0 \; \; \; \tag{3.6}$$

that is additional. The origin is stable when the following conditions are satisfied:

$$\frac{c}{m} > 0 \; , \; \frac{k}{m} > 0 \tag{3.7}$$

This means that if parameter *k* is positive then the system tends to the stable origin and the displacement *x* is equal or very close to zero. The additional equilibrium is stable when

$$\frac{c}{m} > 0 \; , \; \frac{k}{m} < 0 \tag{3.8}$$

Thus, when *k* is negative the system is also stable but tends to (3.6). That means that the displacement *x* is equal to *k/ku*, and we can adjust this value by setting the control parameter *ku*. Results of MATLAB simulation of the behavior of the system (3.4) at negative and positive values of parameter *k* are presented in Fig. 6 and Fig. 7.

Fig. 6.


Fig. 7.

In Fig. 6 the changing of the displacement of the system at initial conditions *x=[-0.05, 0]* is shown. Here the red line corresponds to the case when *k = -5*, the green line to *k = -4*, the blue line to *k = -3*, the cyan line to *k = -2*, and the magenta line to *k = -1*. In every case the system is stable and tends to the additional equilibrium (3.6), which has different values due to the ratio *k/ku*.

In Fig. 7 the displacement of the system at initial conditions *x=[-0.05, 0]* tends to the origin. Colors of the lines correspond to the following values of *k*: red is when *k = 1*, green is when *k = 2*, blue is when *k = 3*, cyan is when *k = 4*, and magenta is when *k = 5*.
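The two regimes above can be reproduced without plots. The sketch below integrates (3.4) from *x = [-0.05, 0]*; the values *m = 1*, *c = 1* and the gain *ku = 50* are assumptions for illustration, chosen so that the additional equilibrium *k/ku* lands near the initial displacement.

```python
import numpy as np

M, C, KU = 1.0, 1.0, 50.0            # mass, damper constant, control gain (assumed)

def settle(k, x0=(-0.05, 0.0), t_end=30.0, dt=0.005):
    """Integrate (3.4) with RK4 and return the final displacement x1."""
    def f(x):
        return np.array([x[1], (-k*x[0] - C*x[1] + KU*x[0]**2)/M])
    x = np.array(x0, float)
    for _ in range(int(t_end/dt)):
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x += dt*(k1 + 2*k2 + 2*k3 + k4)/6
    return x[0]

x_pos = settle(k=5.0)    # k > 0: conditions (3.7) hold, the origin attracts
x_neg = settle(k=-5.0)   # k < 0: conditions (3.8) hold, x1 -> k/ku = -0.1
```

Either sign of the unknown stiffness leaves the closed loop stable; only the resting displacement changes, and it is set by the ratio *k/ku*.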


### **3.2 SISO systems of high order. Center of oscillations of ACC Benchmark**

Let us consider the ACC Benchmark system given in the MATLAB Robust Control Toolbox Help. The mechanism itself is presented in Fig. 8.

Fig. 8.

Structural diagram is presented in Fig. 9, where

$$G_1 = \frac{1}{m_1 s^2}, \quad G_2 = \frac{1}{m_2 s^2}.$$

Fig. 9.

The dynamical system can be described by the following equations:

$$\begin{cases} \dot{x}_1 = x_2, \\ \dot{x}_2 = -\frac{k}{m_2}x_1 + \frac{k}{m_2}x_3, \\ \dot{x}_3 = x_4, \\ \dot{x}_4 = \frac{k}{m_1}x_1 - \frac{k}{m_1}x_3 + \frac{1}{m_1}u. \end{cases} \tag{3.9}$$

Without control input the system produces periodic oscillations. The magnitude and center of the oscillations are defined by the initial conditions. For example, let us set the parameters of the system *k = 1*, *m1 = 1*, *m2 = 1*. If we assume initial conditions *x = [-0.1, 0, 0, 0]* then the center of oscillations will be displaced in the negative (left) direction, as shown in Fig. 10a. If the initial conditions are *x = [0.1, 0, 0, 0]* then the center will be displaced in the positive direction, as shown in Fig. 10b.
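The displacement of the center can be checked without plots: with *u = 0* the total momentum *m2·x2 + m1·x4* is conserved, so the mass-weighted mean position stays at its initial value and the oscillation of *x1* is centered on it. A minimal Python sketch (taking the spring coupling in the last equation of (3.9) as *-(k/m1)x3*):

```python
import numpy as np

K, M1, M2 = 1.0, 1.0, 1.0            # spring constant and the two masses

def f(x):
    # (3.9) with u = 0; the spring couples x1 (position of mass m2)
    # and x3 (position of mass m1).
    return np.array([x[1],
                     -K/M2*x[0] + K/M2*x[2],
                     x[3],
                     K/M1*x[0] - K/M1*x[2]])

def center_of_oscillation(x0, n=2000):
    """Average x1 over one period of the relative motion (RK4 integration)."""
    period = 2*np.pi/np.sqrt(K/M1 + K/M2)   # relative coordinate oscillates at sqrt(2)
    dt, x, acc = period/n, np.array(x0, float), 0.0
    for _ in range(n):
        acc += x[0]
        k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
        x = x + dt*(k1 + 2*k2 + 2*k3 + k4)/6
    return acc/n

c_neg = center_of_oscillation([-0.1, 0, 0, 0])   # center displaced left
c_pos = center_of_oscillation([0.1, 0, 0, 0])    # center displaced right
```

With equal masses and these initial conditions the center sits at *(x1(0) + x3(0))/2 = ∓0.05*, matching the left and right displacements of Fig. 10a and Fig. 10b.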

After setting the controller

$$
u = x_1^2 - k_u x_1 \tag{3.10}
$$

and obtaining new control system

$$\begin{cases} \dot{x}_1 = x_2, \\ \dot{x}_2 = -\frac{k}{m_2}x_1 + \frac{k}{m_2}x_3, \\ \dot{x}_3 = x_4, \\ \dot{x}_4 = \frac{k}{m_1}x_1 - \frac{k}{m_1}x_3 + \frac{1}{m_1}\left(x_1^2 - k_u x_1\right), \end{cases} \tag{3.11}$$

we can obtain less displacement of the center of oscillations.

Fig. 10.


In Fig. 11 and Fig. 12 the results of MATLAB simulation are presented. At the same parameters *k = 1*, *m1 = 1*, *m2 = 1* and initial conditions *x = [-0.1, 0, 0, 0]*, the center is 'almost' not displaced from the zero point (Fig. 11).

Fig. 11.


At the same parameters *k = 1*, *m1 = 1*, *m2 = 1* and initial conditions *x = [0.1, 0, 0, 0]*, the center is also displaced very little from the zero point (Fig. 12).

Fig. 12.

### **3.3 Alternative opportunities. Submarine depth control**

Let us consider the dynamics of angular motion of a controlled submarine. The important vectors of the submarine's motion are shown in Fig. 13.

Let us assume that *θ* is a small angle and the velocity *v* is constant and equal to 25 ft/s. The state variables of the submarine, considering only vertical control, are *x1 = θ*, *x2 = dθ/dt*, *x3 = α*, where *α* is the angle of attack and the output. Thus the state vector differential equation for this system, when the submarine has an Albacore-type hull, is:

$$
\dot{\mathbf{x}} = A\mathbf{x} + B\boldsymbol{\delta}\_s(\mathbf{t}) \,. \tag{3.12}
$$

where

$$A = \begin{pmatrix} 0 & a_{12} & 0 \\ a_{21} & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{pmatrix}, \quad B = \begin{pmatrix} 0 \\ b_2 \\ b_3 \end{pmatrix},$$

The parameters of the matrices are equal to:

$$a_{12} = 1, \; a_{21} = -0.0071, \; a_{22} = -0.111, \; a_{23} = 0.12, \; a_{32} = 0.07, \; a_{33} = -0.3,$$

$$b_2 = -0.095, \; b_3 = 0.072,$$

and *δs(t)* is the deflection of the stern plane.


Fig. 13. Angles of submarine's depth dynamics.

Let us study the behavior of the system (3.12). In general form it is described as:

$$\begin{cases} \frac{d\mathbf{x}\_1}{dt} = \mathbf{x}\_{2'}\\ \frac{d\mathbf{x}\_2}{dt} = a\_{21}\mathbf{x}\_1 + a\_{22}\mathbf{x}\_2 + a\_{23}\mathbf{x}\_3 + b\_2\delta\_S\left(t\right),\\ \frac{d\mathbf{x}\_3}{dt} = a\_{32}\mathbf{x}\_2 + a\_{33}\mathbf{x}\_3 + b\_3\delta\_S\left(t\right). \end{cases} \quad \text{(3.13)}$$

where the input is *δs(t) = 1*. In turn, let us simulate in MATLAB the change of the value of each parameter deviated from its nominal value.

In Fig. 14 the behavior of the output of the system (3.13) at various values of *a21* (varying from -0.0121 to 0.0009 with step 0.00125), with all other parameters held constant at their nominal values, is presented.

In Fig. 15 the behavior of the output of the system (3.13) at various values of *a22* (varying from -0.611 to 0.289 with step 0.125), with all other parameters held constant at their nominal values, is presented.

In Fig. 16 the behavior of the output of the system (3.13) at various values of *a23* (varying from -0.88 to 1.12 with step 0.2), with all other parameters held constant at their nominal values, is presented.

In Fig. 17 the behavior of the output of the system (3.13) at various values of *a32* (varying from -0.43 to 0.57 with step 0.125), with all other parameters held constant at their nominal values, is presented.

In Fig. 18 the behavior of the output of the system (3.13) at various values of *a33* (varying from -1.3 to 0.7 with step 0.25), with all other parameters held constant at their nominal values, is presented.

It is clear that the perturbation of only one parameter makes the system unstable.
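The loss of stability can be seen from the eigenvalues of *A* alone, without any simulation. The snippet below uses the nominal parameters of (3.12) (with the signs *a21 = -0.0071*, *a22 = -0.111*, *a33 = -0.3* assumed for the stable nominal model) and pushes *a33* to the top of its tested range.

```python
import numpy as np

def submarine_A(a33):
    # State matrix of (3.12); only a33 is varied here, the rest stay nominal.
    return np.array([[ 0.0,     1.0,    0.0 ],
                     [-0.0071, -0.111,  0.12],
                     [ 0.0,     0.07,   a33 ]])

# Largest real part of the spectrum: negative means asymptotically stable.
growth_nominal   = np.linalg.eigvals(submarine_A(-0.3)).real.max()
growth_perturbed = np.linalg.eigvals(submarine_A(0.7)).real.max()
# growth_nominal < 0 (Hurwitz), growth_perturbed > 0 (unstable)
```

The nominal matrix is Hurwitz, while at *a33 = 0.7* the trace of *A* is already positive, so at least one eigenvalue must have a positive real part.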

Let us set the feedback control law in the following form:

$$
u = -k_1 \left( x_3^2 + x_2^2 \right) + k_2 x_3 + k_3 x_2 \,. \tag{3.14}
$$



Fig. 14. Behavior of output dynamics of submarine's depth at various *a21*.

Fig. 15. Behavior of output dynamics of submarine's depth at various *a22*.

Fig. 16. Behavior of output dynamics of submarine's depth at various *a23*.

Fig. 17. Behavior of output dynamics of submarine's depth at various *a32*.

Fig. 18. Behavior of output dynamics of submarine's depth at various *a33*.

Hence, designed control system is:

$$\begin{cases} \frac{dx_1}{dt} = x_2, \\ \frac{dx_2}{dt} = a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + b_2\delta_S(t), \\ \frac{dx_3}{dt} = a_{32}x_2 + a_{33}x_3 + b_3\delta_S(t) - k_1(x_2^2 + x_3^2) + k_2x_3 + k_3x_2. \end{cases} \tag{3.15}$$

The results of MATLAB simulation of the control system (3.15) with each changing (disturbed) parameter are presented in the figures 19, 20, 21, 22, and 23.

In Fig. 19 the behavior of the designed control system (3.15) at various values of *a21* (varying from -0.0121 to 0.0009 with step 0.00125), with all other parameters held constant at their nominal values, is presented.

In Fig. 20 the behavior of the output of the system (3.15) at various values of *a22* (varying from -0.611 to 0.289 with step 0.125), with all other parameters held constant at their nominal values, is presented.


Fig. 19. Behavior of output of the submarine depth control system at various *a21*.

Fig. 20. Behavior of output of the submarine depth control system at various *a22*.

In Fig. 21 the behavior of the output of the system (3.15) at various values of *a23* (varying from -0.88 to 1.12 with step 0.2), with all other parameters held constant at their nominal values, is presented.

In Fig. 22 the behavior of the output of the system (3.15) at various values of *a32* (varying from -0.43 to 0.57 with step 0.125), with all other parameters held constant at their nominal values, is presented.

In Fig. 23 the behavior of the output of the system (3.15) at various values of *a33* (varying from -1.3 to 0.7 with step 0.25), with all other parameters held constant at their nominal values, is presented.

Results of simulation confirm that the chosen controller (3.14) provides stability to the system. In some cases, especially in the last one, the system does not tend to the original equilibrium (zero) but to the additional one.

Fig. 21. Behavior of output of the submarine depth control system at various *a23*.

Fig. 22. Behavior of output of the submarine depth control system at various *a32*.

Fig. 23. Behavior of output of the submarine depth control system at various *a33*.
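As a sketch of this convergence to the additional equilibrium (not the chapter's own simulation), the snippet below integrates (3.15) with *δS = 1*, the destabilizing perturbation *a33 = 0.7*, and assumed controller gains *k1 = 0.1*, *k2 = -1*, *k3 = 0*; the chapter does not list its gain values. The trajectory settles on the additional equilibrium whose *x3*-coordinate solves *k1·x3² − (a33 + k2)·x3 − b3 = 0*.

```python
import numpy as np

# Nominal submarine parameters of (3.12), with a33 perturbed to 0.7.
A21, A22, A23, A32, B2, B3 = -0.0071, -0.111, 0.12, 0.07, -0.095, 0.072
A33 = 0.7
K1G, K2G, K3G = 0.1, -1.0, 0.0       # assumed gains for the controller (3.14)

def f(x):
    x1, x2, x3 = x
    return np.array([x2,
                     A21*x1 + A22*x2 + A23*x3 + B2,
                     A32*x2 + A33*x3 + B3 - K1G*(x2**2 + x3**2) + K2G*x3 + K3G*x2])

x, dt = np.zeros(3), 0.05
for _ in range(int(600/dt)):          # RK4 integration up to t = 600
    k1 = f(x); k2 = f(x + 0.5*dt*k1); k3 = f(x + 0.5*dt*k2); k4 = f(x + dt*k3)
    x = x + dt*(k1 + 2*k2 + 2*k3 + k4)/6

# Additional equilibrium predicted analytically from (3.15) with x2 = 0:
s = A33 + K2G
x3_star = (s + np.sqrt(s**2 + 4*K1G*B3))/(2*K1G)
x1_star = -(A23*x3_star + B2)/A21
```

With these gains the perturbed plant, unstable in open loop, is caught by the additional equilibrium rather than returning to zero, which is the behavior described for Fig. 23.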



## **4. Conclusion**

Adding equilibria that attract the motion of the system and make it stable can give many advantages. The main one is that the safe ranges of parameters are widened significantly, because the designed system stays stable within unbounded ranges of parameter perturbation, even when the signs of the parameters change. The behaviors of the designed control systems obtained by MATLAB simulation, such as control of linear and nonlinear dynamic plants, confirm the efficiency of the offered method. For further research and investigation, many promising tasks arise, such as synthesis of control systems with special requirements, design of optimal control, and many others.

## **5. Acknowledgment**

I am heartily thankful to my supervisor, Beisenbi Mamirbek, whose encouragement, guidance and support from the initial to the final level enabled me to develop an understanding of the subject. I am very thankful for the advice, help, and many offered opportunities to Steven H. Strogatz, a famous expert in nonlinear dynamics and chaos, to Marc Campbell, a famous expert in control systems, and to the Andy Ruina Lab team.

Lastly, I offer my regards and blessings to all of those who supported me in any respect during the completion of the project.

## **6. References**


V.I. Arnold, A.A. Davydov, V.A. Vassiliev and V.M. Zakalyukin (2006). *Mathematical Models of Catastrophes. Control of Catastrophic Processes*. EOLSS Publishers, Oxford, UK

Beisenbi, M.; Ten, V. (2002). An approach to the increase of a potential of robust stability of control systems, *Theses of the Reports of the VII International Seminar "Stability and Fluctuations of Nonlinear Control Systems"*, pp. 122-123, Institute of Problems of Control of the Russian Academy of Sciences, Moscow, Russia

Dorf, Richard C.; Bishop, H. (2008). *Modern Control Systems, 11/E*. Prentice Hall, New Jersey, USA

Gu, D.-W.; Petkov, P. Hr.; Konstantinov, M. M. (2005). *Robust Control Design with MATLAB*. Springer-Verlag, London, UK

Khalil, Hassan K. (2002). *Nonlinear Systems*. Prentice Hall, New Jersey, USA

Poston, T.; Stewart, Ian. (1998). *Catastrophe: Theory and Its Applications*. Dover, New York, USA

Ten, V. (2009). Approach to Design of Nonlinear Robust Control in a Class of Structurally Stable Functions, Available from http://arxiv.org/abs/0901.2877

## **Robust Control of Nonlinear Time-Delay Systems via Takagi-Sugeno Fuzzy Models**

Hamdi Gassara<sup>1,2</sup>, Ahmed El Hajjaji<sup>1</sup> and Mohamed Chaabane<sup>3</sup>

<sup>1</sup>*Modeling, Information, and Systems Laboratory, University of Picardie Jules Verne, Amiens 80000, France*
<sup>2</sup>*Department of Electrical Engineering, Unit of Control of Industrial Process, National School of Engineering, University of Sfax, Sfax 3038, Tunisia*
<sup>3</sup>*Automatic Control, National School of Engineers of Sfax (ENIS), Tunisia*

### **1. Introduction**

Robust control theory is an interdisciplinary branch of engineering and applied mathematics. Since its introduction in the 1980s, it has grown into a major scientific domain. For example, it gained a foothold in economics in the late 1990s and has seen a growing number of economic applications in recent years. This theory aims to design a controller that guarantees closed-loop stability and performance in the presence of system uncertainty. In practice, the uncertainty can include modelling errors, parametric variations, and external disturbances. Many results have been presented for the robust control of linear systems. However, most real physical systems are nonlinear in nature and usually subject to uncertainties, and linear dynamic models are then not powerful enough to describe these practical systems. It is therefore important to design robust controllers for nonlinear models. In this context, different techniques have been proposed in the literature (input-output linearization, backstepping, Variable Structure Control (VSC), ...).

Over the last two decades, fuzzy model control has been extensively studied; see (Zhang & Heng, 2002)-(Chadli & ElHajjaji, 2006)-(Kim & Lee, 2000)-(Boukas & ElHajjaji, 2006) and the references therein, because the T-S fuzzy model can provide an effective representation of complex nonlinear systems. On the other hand, time delays often occur in various practical control systems, such as transportation, communication, chemical processing, environmental, and power systems. It is well known that the existence of delays may deteriorate the performance of the system and can be a source of instability. As a consequence, the T-S fuzzy model has been extended to deal with nonlinear systems with time delay. The existing stability and stabilization criteria for this class of T-S fuzzy systems can be classified into two types: delay-independent criteria, which are applicable to delays of arbitrary size (Cao & Frank, 2000)-(Park et al., 2003)-(Chen & Liu, 2005b), and delay-dependent criteria, which include information on the size of the delays (Li et al., 2004)-(Chen & Liu, 2005a). It is generally recognized that delay-dependent results are usually less conservative than delay-independent ones, especially when the delay is small.

We notice that all the delay-dependent analysis and synthesis results cited previously are based on a single LKF, which brings conservativeness into the stability and stabilization tests. Moreover, the model transformations, the conservative inequalities, and the so-called Moon's inequality (Moon et al., 2001) for bounding cross terms used in these methods also introduce conservativeness. Recently, in order to reduce conservatism, the free-weighting-matrix technique was proposed by He et al. in (He et al., 2004)-(He et al., 2007). These works studied the stability of linear systems with time-varying delay. More recently, Wu and Li (Wu & Li, 2007) treated the problem of stabilization via PDC (Parallel Distributed Compensation) control by employing a fuzzy LKF combined with free weighting matrices, which improves the existing results in (Li et al., 2004)-(Chen & Liu, 2005a) without imposing any bounding techniques on certain cross-product terms. The main disadvantage of this approach (Wu & Li, 2007) is that the delay-dependent stabilization conditions presented involve three tuning parameters. Chen et al. in (Chen et al., 2007) and in (Chen & Liu, 2005a) have proposed delay-dependent stabilization conditions for uncertain T-S fuzzy systems; the inconvenience of these works is that the time delay must be constant. The design of observer-based fuzzy control and the introduction of guaranteed-cost performance for T-S systems with input delay have been discussed in (Chen, Lin, Liu & Tong, 2008) and (Chen, Liu, Tang & Lin, 2008), respectively.

In this chapter, we study the asymptotic stabilization of uncertain T-S fuzzy systems with time-varying delay. We focus on delay-dependent stabilization synthesis based on the PDC scheme (Wang et al., 1996). Unlike the methods currently found in the literature (Wu & Li, 2007)-(Chen et al., 2007), our method does not need any transformation in the LKF and thus avoids the restrictions resulting from it. Our new approach improves on the results in (Li et al., 2004)-(Guan & Chen, 2004)-(Chen & Liu, 2005a)-(Wu & Li, 2007) in three main respects. The first is the reduction of conservatism. The second is the reduction of the number of LMI conditions, which reduces the computational effort. The third is that the delay-dependent stabilization conditions presented involve a single fixed parameter. This new approach also improves on the work of B. Chen et al. in (Chen et al., 2007) by establishing new delay-dependent stabilization conditions for uncertain T-S fuzzy systems with time-varying delay. The rest of this chapter is organized as follows. In section 2, we give the description of the uncertain T-S fuzzy model with time-varying delay and present the fuzzy control design law based on the PDC structure. New delay-dependent stabilization conditions are established in section 3. In section 4, numerical examples are given to demonstrate the effectiveness and the benefits of the proposed method. Some conclusions are drawn in section 5.

*Notation:* $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space. The notation $P > 0$ means that $P$ is symmetric and positive definite. $W + W^T$ is denoted by $W + (\ast)$ for simplicity. In symmetric block matrices, we use $\ast$ as an ellipsis for terms that are induced by symmetry.

### **2. Problem formulation**

Consider a nonlinear system with state delay that can be represented by a T-S fuzzy time-delay model described by

Plant Rule $i$ ($i = 1, 2, \cdots, r$): IF $\theta_1$ is $\mu_{i1}$ and $\cdots$ and $\theta_p$ is $\mu_{ip}$ THEN

$$\begin{cases}
\dot{\mathbf{x}}(t) = (A\_i + \Delta A\_i)\mathbf{x}(t) + (A\_{\tau i} + \Delta A\_{\tau i})\mathbf{x}(t - \tau(t)) + (B\_i + \Delta B\_i)\mathbf{u}(t) \\
\mathbf{x}(t) = \boldsymbol{\psi}(t), t \in [-\overline{\tau}, 0],
\end{cases} \tag{1}$$

where $\theta_j(x(t))$ and $\mu_{ij}$ ($i = 1, \cdots, r$, $j = 1, \cdots, p$) are respectively the premise variables and the fuzzy sets; $\psi(t)$ is the initial condition; $x(t) \in \mathbb{R}^n$ is the state; $u(t) \in \mathbb{R}^m$ is the control input; $r$ is the number of IF-THEN rules; the time delay $\tau(t)$ is a time-varying continuous function that satisfies

$$0 \le \tau(t) \le \overline{\tau}, \quad \dot{\tau}(t) \le \beta \tag{2}$$

The parametric uncertainties Δ*Ai*, Δ*Aτi*, Δ*Bi* are time-varying matrices that are defined as follows

$$
\Delta A\_{\rm i} = M\_{A\rm i} F\_{\rm i}(t) E\_{A\rm i}; \; \Delta A\_{\rm \tau i} = M\_{A\rm \tau i} F\_{\rm i}(t) E\_{A\rm \tau i}; \; \Delta B\_{\rm i} = M\_{Bi} F\_{\rm i}(t) E\_{Bi} \tag{3}
$$

where *MAi*, *MAτi*, *MBi*, *EAi*, *EAτi*, *EBi* are known constant matrices and *Fi*(*t*) is an unknown matrix function with the property

$$F\_i(t)^T F\_i(t) \le I \tag{4}$$

Let $\bar{A}_i = A_i + \Delta A_i$; $\bar{A}_{\tau i} = A_{\tau i} + \Delta A_{\tau i}$; $\bar{B}_i = B_i + \Delta B_i$.
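As a concrete illustration, an admissible uncertainty of the form (3)-(4) can be generated in simulation by any norm-bounded function; the scalar data below are hypothetical, chosen only to show the construction:

```python
import math

# Hypothetical scalar data for one rule (illustration only).
A1 = -2.0            # nominal A_i
MA1, EA1 = 0.3, 1.0  # uncertainty structure M_Ai, E_Ai for Delta A_i

def F(t):
    # Any F_i(t) with F_i(t)^T F_i(t) <= I; here a scalar with |F| <= 1.
    return math.sin(t)

def A_bar(t):
    # \bar A_i = A_i + M_Ai F_i(t) E_Ai, as in equation (3).
    return A1 + MA1 * F(t) * EA1

# The perturbed value stays within [A1 - MA1*EA1, A1 + MA1*EA1].
assert all(abs(F(t)) <= 1.0 for t in range(20))
assert all(A1 - 0.3 <= A_bar(t) <= A1 + 0.3 for t in range(20))
```

The same pattern extends componentwise to $\Delta A_{\tau i}$ and $\Delta B_i$.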


By using the commonly used center-average defuzzifier, product inference, and singleton fuzzifier, the T-S fuzzy system can be inferred as

$$\dot{\mathbf{x}}(t) = \sum\_{i=1}^{r} h\_i(\theta(\mathbf{x}(t))) \left[ \bar{A}\_i \mathbf{x}(t) + \bar{A}\_{\tau i} \mathbf{x}(t - \tau(t)) + \bar{B}\_i \boldsymbol{u}(t) \right] \tag{5}$$

where $\theta(x(t)) = [\theta_1(x(t)), \cdots, \theta_p(x(t))]$ and $\nu_i(\theta(x(t))) : \mathbb{R}^p \to [0, 1]$, $i = 1, \cdots, r$, is the membership function of the system with respect to the $i$th plant rule. Denote $h_i(\theta(x(t))) = \nu_i(\theta(x(t))) / \sum_{i=1}^{r} \nu_i(\theta(x(t)))$. It is obvious that

$$h\_i(\theta(x(t))) \ge 0 \text{ and } \sum\_{i=1}^{r} h\_i(\theta(x(t))) = 1$$
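The normalization above is straightforward to compute; a minimal sketch, with made-up rule weights $\nu_i$ standing in for the membership evaluations:

```python
def normalized_grades(weights):
    """Normalize rule weights nu_i into grades h_i with h_i >= 0, sum h_i = 1."""
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical rule weights nu_i(theta(x(t))) evaluated at some state x(t).
nu = [0.2, 0.5, 0.3]
h = normalized_grades(nu)

assert all(hi >= 0 for hi in h)            # h_i >= 0
assert abs(sum(h) - 1.0) < 1e-12           # sum_i h_i = 1
```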

The design of state-feedback stabilizing fuzzy controllers for the fuzzy system (5) is based on Parallel Distributed Compensation (PDC).

Controller Rule $i$ ($i = 1, 2, \cdots, r$): IF $\theta_1$ is $\mu_{i1}$ and $\cdots$ and $\theta_p$ is $\mu_{ip}$ THEN

$$\mathbf{u}(t) = \mathbf{K}\_i \mathbf{x}(t) \tag{6}$$

The overall state feedback control law is represented by

$$u(t) = \sum\_{i=1}^{r} h\_i(\theta(x(t))) K\_i x(t) \tag{7}$$

In the sequel, for brevity we use *hi* to denote *hi*(*θ*(*x*(*t*))). Combining (5) with (7), the closed-loop fuzzy system can be expressed as follows

$$\dot{x}(t) = \sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i h\_j \left[ A\_{ij} x(t) + \bar{A}\_{\tau i} x(t - \tau(t)) \right] \tag{8}$$

with $A_{ij} = \bar{A}_i + \bar{B}_i K_j$.
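The closed loop (8) is thus a convex combination of the vertex matrices $A_{ij}$ weighted by $h_i h_j$. A minimal scalar sketch with two rules and hypothetical numbers:

```python
# Hypothetical scalar two-rule data (r = 2); uncertainty omitted for brevity.
A = [-1.0, -2.0]   # vertex matrices \bar A_i
B = [1.0, 0.5]     # input matrices \bar B_i
K = [-3.0, -4.0]   # PDC gains K_j

def pdc_control(h, x):
    # u(t) = sum_i h_i K_i x(t), as in equation (7).
    return sum(hi * Ki for hi, Ki in zip(h, K)) * x

def closed_loop_matrix(h):
    # sum_i sum_j h_i h_j A_ij with A_ij = \bar A_i + \bar B_i K_j, as in (8).
    return sum(h[i] * h[j] * (A[i] + B[i] * K[j])
               for i in range(2) for j in range(2))

h = [0.25, 0.75]
assert abs(pdc_control(h, 2.0) - (-7.5)) < 1e-12
assert closed_loop_matrix(h) < 0   # the blended dynamics are stable here
```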

In order to obtain the main results of this chapter, the following lemmas are needed.

**Lemma 1.** *(Xie & DeSouza, 1992)-(Oudghiri et al., 2007)-(Guerra et al., 2006) Considering* $\Pi < 0$*, a matrix* $X$*, and a scalar* $\lambda$*, the following holds*

$$X^T \Pi X \le -2\lambda X - \lambda^2 \Pi^{-1} \tag{9}$$
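Inequality (9) follows from expanding $(X + \lambda \Pi^{-1})^T \Pi (X + \lambda \Pi^{-1}) \le 0$, which holds since $\Pi < 0$. A scalar sanity check with hypothetical numbers:

```python
# Scalar sanity check of Lemma 1: X*Pi*X <= -2*lam*X - lam^2 * Pi^{-1}
# for Pi < 0 and any lam, since (X + lam/Pi) * Pi * (X + lam/Pi) <= 0.
Pi = -2.0   # Pi < 0
for X in (-3.0, 0.5, 4.0):
    for lam in (0.1, 1.0, 5.0):
        lhs = X * Pi * X
        rhs = -2.0 * lam * X - lam**2 / Pi
        assert lhs <= rhs + 1e-12  # equality when X = -lam/Pi
```

In the proofs this bound is used to replace the nonlinear term $X^T \Pi X$ by an expression linear in $X$, at the price of fixing $\lambda$.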


**Lemma 2.** *(Wang et al., 1992) Given matrices* $M$, $E$, $F(t)$ *with compatible dimensions and* $F(t)$ *satisfying* $F(t)^T F(t) \le I$*.*

*Then, the following inequality holds for any* $\epsilon > 0$

$$MF(t)E + E^T F(t)^T M^T \le \epsilon M M^T + \epsilon^{-1} E^T E \tag{10}$$
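Lemma 2 is the standard completion-of-squares bound used to eliminate the unknown $F(t)$. A scalar sanity check with hypothetical numbers (in the scalar case the bound reduces to $2MFE \le \epsilon M^2 + \epsilon^{-1} E^2$ for $|F| \le 1$):

```python
# Scalar sanity check of Lemma 2 for several F with |F| <= 1 and eps > 0.
M, E = 1.5, -0.7
for F in (-1.0, -0.3, 0.0, 0.8, 1.0):
    for eps in (0.1, 1.0, 3.0):
        lhs = 2.0 * M * F * E                # M F E + E F M (scalars commute)
        rhs = eps * M * M + E * E / eps      # eps M M^T + eps^{-1} E^T E
        assert lhs <= rhs + 1e-12
```

The free scalar $\epsilon$ becomes one of the decision variables $\epsilon_{Ai}, \epsilon_{A\tau i}$ in the LMIs that follow.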

### **3. Main results**

### **3.1 Time-delay dependent stability conditions**

First, we derive the stability condition for the unforced system (5) (i.e., with $u(t) \equiv 0$), that is

$$\dot{\mathbf{x}}(t) = \sum\_{i=1}^{r} h\_i [\bar{A}\_i \mathbf{x}(t) + \bar{A}\_{\tau i} \mathbf{x}(t - \tau(t))] \tag{11}$$

**Theorem 1.** *System (11) is asymptotically stable if there exist matrices* $P > 0$, $S > 0$, $Z > 0$, $Y$, $T$ *and positive scalars* $\epsilon_{Ai}$, $\epsilon_{A\tau i}$ *satisfying the following LMIs for* $i = 1, 2, .., r$

$$
\begin{bmatrix}
\varphi\_{i} + \epsilon\_{Ai} E\_{Ai}^{T} E\_{Ai} & P A\_{\tau i} - Y + T^{T} & A\_{i}^{T} Z & -Y & P M\_{Ai} & P M\_{A\tau i} \\
\ast & -(1 - \beta) S - T - T^{T} + \epsilon\_{A\tau i} E\_{A\tau i}^{T} E\_{A\tau i} & A\_{\tau i}^{T} Z & -T & 0 & 0 \\
\ast & \ast & -\frac{1}{\overline{\tau}} Z & 0 & Z M\_{Ai} & Z M\_{A\tau i} \\
\ast & \ast & \ast & -\frac{1}{\overline{\tau}} Z & 0 & 0 \\
\ast & \ast & \ast & \ast & -\epsilon\_{Ai} I & 0 \\
\ast & \ast & \ast & \ast & \ast & -\epsilon\_{A\tau i} I
\end{bmatrix} < 0 \tag{12}
$$

*where* $\varphi_i = PA_i + A_i^T P + S + Y + Y^T$*.*
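Conditions like (12) are normally checked with a semidefinite-programming solver. As a much smaller sanity check in the same spirit (a classical delay-independent test for a scalar plant, not LMI (12) itself; all numbers are made up), negative definiteness of a symmetric $2 \times 2$ block matrix reduces to sign conditions:

```python
# Simplified scalar check in the spirit of Theorem 1 (NOT LMI (12) itself):
# for xdot = A x + Atau x(t - tau), a classical delay-independent test is
#   [[A^T P + P A + S,  P Atau],
#    [Atau^T P,        -S     ]] < 0.
A, Atau = -2.0, 0.5    # hypothetical stable scalar plant
P, S = 1.0, 1.0        # candidate Lyapunov-Krasovskii weights

a = A * P + P * A + S  # (1,1) block
b = P * Atau           # (1,2) block
c = -S                 # (2,2) block

def is_negative_definite_2x2(a, b, c):
    # Symmetric [[a, b], [b, c]] < 0  iff  a < 0 and a*c - b*b > 0.
    return a < 0 and a * c - b * b > 0

assert is_negative_definite_2x2(a, b, c)
```

For the full matrix case one would feed (12) to an LMI solver rather than test signs by hand.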

**Proof 1.** *Choose the LKF as*

$$\mathbf{V}(\mathbf{x}(t)) = \mathbf{x}(t)^T \mathbf{P} \mathbf{x}(t) + \int\_{t-\tau(t)}^t \mathbf{x}(a)^T \mathbf{S} \mathbf{x}(a) da + \int\_{-\tau}^0 \int\_{t+\sigma}^t \dot{\mathbf{x}}(a)^T \mathbf{Z} \dot{\mathbf{x}}(a) da d\sigma \tag{13}$$

*the time derivative of this LKF (13) along the trajectory of system (11) is computed as*

$$\begin{split} \dot{V}(x(t)) &= 2x(t)^T P \dot{x}(t) + x(t)^T S x(t) - (1 - \dot{\tau}(t)) x(t - \tau(t))^T S x(t - \tau(t)) \\ &\quad + \overline{\tau} \dot{x}(t)^T Z \dot{x}(t) - \int\_{t - \overline{\tau}}^{t} \dot{x}(s)^T Z \dot{x}(s) ds \end{split} \tag{14}$$

*Taking into account the Newton-Leibniz formula*

$$\mathbf{x}(t-\tau(t)) = \mathbf{x}(t) - \int\_{t-\tau(t)}^{t} \dot{\mathbf{x}}(s)ds\tag{15}$$

*We obtain equation (16)*

$$\begin{split} \dot{V}(\mathbf{x}(t)) &= \sum\_{i=1}^{r} h\_{i} [2\mathbf{x}(t)^{T} P \bar{A}\_{i} \mathbf{x}(t) + 2\mathbf{x}(t)^{T} P \bar{A}\_{\pi i} \mathbf{x}(t - \tau(t))] \\ &+ \mathbf{x}(t)^{T} S \mathbf{x}(t) - (1 - \boldsymbol{\beta}) \mathbf{x}(t - \tau(t))^{T} S \mathbf{x}(t - \tau(t)) \\ &+ \overline{\tau} \dot{\mathbf{x}}(t)^{T} Z \dot{\mathbf{x}}(t) - \int\_{t-\overline{\tau}}^{t} \dot{\mathbf{x}}(s)^{T} Z \dot{\mathbf{x}}(s) ds \\ &+ 2 [\mathbf{x}(t)^{T} Y + \mathbf{x}(t - \tau(t))^{T} T] \times [\mathbf{x}(t) - \mathbf{x}(t - \tau(t)) - \int\_{t-\tau(t)}^{t} \dot{\mathbf{x}}(s) ds] \end{split} \tag{16}$$

*As pointed out in (Chen & Liu, 2005a)*

$$\dot{\boldsymbol{x}}(t)^{T}\boldsymbol{Z}\dot{\boldsymbol{x}}(t) \leq \sum\_{i=1}^{r} h\_{i}\boldsymbol{\eta}(t)^{T} \begin{bmatrix} \bar{A}\_{i}^{T}\boldsymbol{Z}\bar{A}\_{i} \ \bar{A}\_{i}^{T}\boldsymbol{Z}\bar{A}\_{\pi i} \\ \bar{A}\_{\pi i}^{T}\boldsymbol{Z}\bar{A}\_{i} \ \bar{A}\_{\pi i}^{T}\boldsymbol{Z}\bar{A}\_{\pi i} \end{bmatrix} \boldsymbol{\eta}(t) \tag{17}$$

*where* $\eta(t)^T = [x(t)^T, x(t - \tau(t))^T]$*. Letting* $W^T = [Y^T, T^T]$*, we obtain equation (18)*

$$\begin{split} \dot{V}(x(t)) &\leq \sum\_{i=1}^{r} h\_{i} \eta(t)^{T} [\tilde{\Phi}\_{i} + \overline{\tau} W Z^{-1} W^{T}] \eta(t) \\ &\quad - \int\_{t-\tau(t)}^{t} [\eta^{T}(t) W + \dot{x}(s)^{T} Z] Z^{-1} [\eta^{T}(t) W + \dot{x}(s)^{T} Z]^{T} ds \end{split} \tag{18}$$

*where*


$$\tilde{\Phi}\_{i} = \begin{bmatrix} P\bar{A}\_{i} + \bar{A}\_{i}^T P + S + \overline{\tau}\bar{A}\_{i}^T Z \bar{A}\_{i} + Y + Y^T & P\bar{A}\_{\tau i} + \overline{\tau}\bar{A}\_{i}^T Z \bar{A}\_{\tau i} - Y + T^T \\ \ast & -(1 - \beta)S + \overline{\tau}\bar{A}\_{\tau i}^T Z \bar{A}\_{\tau i} - T - T^T \end{bmatrix} \tag{19}$$

*By applying the Schur complement,* $\tilde{\Phi}_i + \overline{\tau} W Z^{-1} W^T < 0$ *is equivalent to*

$$
\bar{\Phi}\_{i} = \begin{bmatrix}
\bar{\varphi}\_{i} & P\bar{A}\_{\tau i} - Y + T^T & \bar{A}\_{i}^T Z & -Y \\
\ast & -(1 - \beta)S - T - T^T & \bar{A}\_{\tau i}^T Z & -T \\
\ast & \ast & -\frac{1}{\overline{\tau}} Z & 0 \\
\ast & \ast & \ast & -\frac{1}{\overline{\tau}} Z
\end{bmatrix} < 0
$$
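The Schur complement step used here can be illustrated numerically in the simplest scalar-block case, with hypothetical numbers:

```python
# Scalar illustration of the Schur complement: for c < 0, the symmetric
# matrix [[a, b], [b, c]] is negative definite iff a - b * c^{-1} * b < 0.
a, b, c = -3.0, 1.0, -2.0

full_nd = (a < 0) and (a * c - b * b > 0)   # direct 2x2 definiteness test
schur_nd = (c < 0) and (a - b * b / c < 0)  # Schur complement test

assert full_nd == schur_nd
assert full_nd   # both tests agree the matrix is negative definite
```

The same equivalence, applied blockwise, is what moves the quadratic terms $\overline{\tau}\bar{A}^T Z \bar{A}$ and $\overline{\tau} W Z^{-1} W^T$ into the extra rows and columns of $\bar{\Phi}_i$.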

*The uncertain part is represented as follows*

$$\begin{aligned} \Delta\bar{\Phi}\_{i} &= \begin{bmatrix} P\Delta A\_{i} + \Delta A\_{i}^T P & P\Delta A\_{\tau i} & \Delta A\_{i}^T Z & 0 \\ \ast & 0 & \Delta A\_{\tau i}^T Z & 0 \\ \ast & \ast & 0 & 0 \\ \ast & \ast & \ast & 0 \end{bmatrix} \\ &= \begin{bmatrix} P M\_{Ai} \\ 0 \\ Z M\_{Ai} \\ 0 \end{bmatrix} F(t) \begin{bmatrix} E\_{Ai} & 0 & 0 & 0 \end{bmatrix} + (\ast) + \begin{bmatrix} P M\_{A\tau i} \\ 0 \\ Z M\_{A\tau i} \\ 0 \end{bmatrix} F(t) \begin{bmatrix} 0 & E\_{A\tau i} & 0 & 0 \end{bmatrix} + (\ast) \end{aligned} \tag{20}$$


*By applying lemma 2, we obtain*

$$\begin{aligned} \Delta\bar{\Phi}\_i \le{} & \epsilon\_{Ai}^{-1} \begin{bmatrix} PM\_{Ai} \\ 0 \\ ZM\_{Ai} \\ 0 \end{bmatrix} \begin{bmatrix} M\_{Ai}^{T} P & 0 & M\_{Ai}^{T} Z & 0 \end{bmatrix} + \epsilon\_{Ai} \begin{bmatrix} E\_{Ai}^{T} \\ 0 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} E\_{Ai} & 0 & 0 & 0 \end{bmatrix} \\ & + \epsilon\_{A\tau i}^{-1} \begin{bmatrix} PM\_{A\tau i} \\ 0 \\ ZM\_{A\tau i} \\ 0 \end{bmatrix} \begin{bmatrix} M\_{A\tau i}^{T} P & 0 & M\_{A\tau i}^{T} Z & 0 \end{bmatrix} + \epsilon\_{A\tau i} \begin{bmatrix} 0 \\ E\_{A\tau i}^{T} \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} 0 & E\_{A\tau i} & 0 & 0 \end{bmatrix} \end{aligned} \tag{21}$$

*where* $\epsilon_{Ai}$ *and* $\epsilon_{A\tau i}$ *are some positive scalars. By using Schur complement, we obtain theorem 1.*
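The bound used here (lemma 2: $MFE + E^T F^T M^T \preceq \epsilon^{-1} M M^T + \epsilon E^T E$ whenever $F^T F \preceq I$ and $\epsilon > 0$) can be sanity-checked numerically. A minimal illustrative sketch, not part of the chapter:

```python
import numpy as np

rng = np.random.default_rng(0)

def lemma2_gap(M, E, F, eps):
    """Smallest eigenvalue of eps^{-1} M M^T + eps E^T E - (M F E + (M F E)^T).

    Lemma 2 asserts this symmetric matrix is positive semidefinite whenever
    F^T F <= I and eps > 0.
    """
    S = (1.0 / eps) * M @ M.T + eps * E.T @ E - (M @ F @ E + (M @ F @ E).T)
    return np.linalg.eigvalsh(S).min()

n, p, q = 4, 2, 3
for _ in range(100):
    M = rng.standard_normal((n, p))
    E = rng.standard_normal((q, n))
    F = rng.standard_normal((p, q))
    F /= max(1.0, np.linalg.norm(F, 2))   # enforce the norm bound F^T F <= I
    eps = float(rng.uniform(0.1, 10.0))
    assert lemma2_gap(M, E, F, eps) >= -1e-9
```

The bound is the completion-of-squares $(\epsilon^{-1/2}M - \epsilon^{1/2}E^T F^T)(\cdot)^T \succeq 0$, which is why any admissible $F(t)$ and any $\epsilon > 0$ work.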

### **3.2 Time-delay dependent stabilization conditions**

**Theorem 2.** *System (8) is asymptotically stable if there exist some matrices P* > 0*, S* > 0*, Z* > 0*, Y, T satisfying the following LMIs for i*, *j* = 1, 2, …, *r and i* ≤ *j*

$$
\bar{\Phi}\_{ij} + \bar{\Phi}\_{ji} \le 0 \tag{22}
$$

*where* $\bar{\Phi}_{ij}$ *is given by*

$$
\bar{\Phi}\_{ij} = \begin{bmatrix}
P\hat{A}\_{ij} + \hat{A}\_{ij}^T P + S + Y + Y^T & P\bar{A}\_{\tau i} - Y + T^T & \hat{A}\_{ij}^T Z & -Y \\
\* & -(1 - \beta)S - T - T^T & \bar{A}\_{\tau i}^T Z & -T \\
\* & \* & -\frac{1}{\tau}Z & 0 \\
\* & \* & \* & -\frac{1}{\tau}Z
\end{bmatrix} \tag{23}
$$

**Proof 2.** *As pointed out in (Chen & Liu, 2005a), the following inequality is verified.*

$$\dot{x}(t)^{T} Z \dot{x}(t) \le \sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_{i}h\_{j}\eta(t)^{T} \begin{bmatrix} \frac{(\hat{A}\_{ij} + \hat{A}\_{ji})^T}{2} Z \frac{(\hat{A}\_{ij} + \hat{A}\_{ji})}{2} & \frac{(\hat{A}\_{ij} + \hat{A}\_{ji})^T}{2} Z \frac{(\bar{A}\_{\tau i} + \bar{A}\_{\tau j})}{2} \\ \* & \frac{(\bar{A}\_{\tau i} + \bar{A}\_{\tau j})^T}{2} Z \frac{(\bar{A}\_{\tau i} + \bar{A}\_{\tau j})}{2} \end{bmatrix} \eta(t) \tag{24}$$

*Following a similar development to that for theorem 1, we obtain*

$$\begin{split} \dot{V}(x(t)) \le{} & \sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_{i} h\_{j} \eta(t)^{T} [\tilde{\Phi}\_{ij} + \tau W Z^{-1} W^{T}] \eta(t) \\ & - \int\_{t-\tau(t)}^{t} [\eta(t)^{T} W + \dot{x}(s)^{T} Z] Z^{-1} [\eta(t)^{T} W + \dot{x}(s)^{T} Z]^{T} \, ds \end{split} \tag{25}$$

*where* $\tilde{\Phi}_{ij}$ *is given by*

$$\tilde{\Phi}\_{ij} = \begin{bmatrix} P\hat{A}\_{ij} + \hat{A}\_{ij}^T P + S + Y + Y^T + \tau \frac{(\hat{A}\_{ij} + \hat{A}\_{ji})^T}{2} Z \frac{(\hat{A}\_{ij} + \hat{A}\_{ji})}{2} & P\bar{A}\_{\tau i} - Y + T^T + \tau \frac{(\hat{A}\_{ij} + \hat{A}\_{ji})^T}{2} Z \frac{(\bar{A}\_{\tau i} + \bar{A}\_{\tau j})}{2} \\ \* & -(1 - \beta)S - T - T^T + \tau \frac{(\bar{A}\_{\tau i} + \bar{A}\_{\tau j})^T}{2} Z \frac{(\bar{A}\_{\tau i} + \bar{A}\_{\tau j})}{2} \end{bmatrix} \tag{26}$$

*By applying Schur complement,* $\sum_{i=1}^{r} \sum_{j=1}^{r} h_i h_j \tilde{\Phi}_{ij} + \tau W Z^{-1} W^T < 0$ *is equivalent to*

$$\sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i h\_j \hat{\Phi}\_{ij} = \frac{1}{2} \sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i h\_j (\hat{\Phi}\_{ij} + \hat{\Phi}\_{ji}) = \frac{1}{2} \sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i h\_j (\bar{\Phi}\_{ij} + \bar{\Phi}\_{ji}) < 0 \tag{27}$$

*where* $\hat{\Phi}_{ij}$ *is given by*


$$
\hat{\Phi}\_{ij} = \begin{bmatrix}
P\hat{A}\_{ij} + \hat{A}\_{ij}^T P + S + Y + Y^T & P\bar{A}\_{\tau i} - Y + T^T & \frac{(\hat{A}\_{ij} + \hat{A}\_{ji})^T}{2}Z & -Y \\
\* & -(1 - \beta)S - T - T^T & \frac{(\bar{A}\_{\tau i} + \bar{A}\_{\tau j})^T}{2}Z & -T \\
\* & \* & -\frac{1}{\tau}Z & 0 \\
\* & \* & \* & -\frac{1}{\tau}Z
\end{bmatrix} \tag{28}
$$

*Therefore, we get* $\dot{V}(x(t)) \le 0$.
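The regrouping in (27) is just the $i \leftrightarrow j$ index swap in the double sum, $\sum_i \sum_j h_i h_j \Phi_{ij} = \frac{1}{2}\sum_i \sum_j h_i h_j (\Phi_{ij} + \Phi_{ji})$, which is why only the symmetrized conditions for $i \le j$ need to be imposed. A quick numerical check of the identity (an illustrative sketch, not part of the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
r, n = 3, 2

h = rng.uniform(size=r)
h /= h.sum()                       # membership values: nonnegative, sum to 1
Phi = rng.standard_normal((r, r, n, n))   # arbitrary matrices Phi_ij

# Left side: sum_i sum_j h_i h_j Phi_ij
lhs = np.einsum('i,j,ijkl->kl', h, h, Phi)
# Right side: (1/2) sum_i sum_j h_i h_j (Phi_ij + Phi_ji)
rhs = 0.5 * np.einsum('i,j,ijkl->kl', h, h, Phi + Phi.transpose(1, 0, 2, 3))

assert np.allclose(lhs, rhs)
```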

Our objective is to transform the conditions in theorem 2 into LMIs that can be easily solved using existing solvers such as the LMI Toolbox in MATLAB.

**Theorem 3.** *For a given positive number λ, system (8) is asymptotically stable if there exist some matrices P* > 0*, S* > 0*, Z* > 0*, Y, T and N<sub>i</sub>, as well as positive scalars* $\epsilon_{Aij}$, $\epsilon_{A\tau ij}$, $\epsilon_{Bij}$, $\epsilon_{Ci}$, $\epsilon_{C\tau i}$, $\epsilon_{Di}$, *satisfying the following LMIs for i*, *j* = 1, 2, …, *r and i* ≤ *j*

$$
\Xi\_{ij} + \Xi\_{ji} \le 0 \tag{29}
$$

*where* $\Xi_{ij}$ *is given by*

$$
\Xi\_{ij} = \begin{bmatrix}
\xi\_{ij} + \epsilon\_{Aij} M\_{Ai} M\_{Ai}^T + \epsilon\_{Bij} M\_{Bi} M\_{Bi}^T & P\bar{A}\_{\tau i}^T - Y + T^T & A\_i P + B\_i N\_j & -Y & PE\_{Ai}^T & N\_j^T E\_{Bi}^T & PE\_{A\tau i}^T \\
\* & -(1 - \beta)S - T - T^T + \epsilon\_{A\tau ij} M\_{A\tau i} M\_{A\tau i}^T & A\_{\tau i} P & -T & 0 & 0 & 0 \\
\* & \* & \frac{1}{\tau}(-2\lambda P + \lambda^2 Z) & 0 & PE\_{Ai}^T & N\_j^T E\_{Bi}^T & PE\_{A\tau i}^T \\
\* & \* & \* & -\frac{1}{\tau} Z & 0 & 0 & 0 \\
\* & \* & \* & \* & -\epsilon\_{Aij} I & 0 & 0 \\
\* & \* & \* & \* & \* & -\epsilon\_{Bij} I & 0 \\
\* & \* & \* & \* & \* & \* & -\epsilon\_{A\tau ij} I
\end{bmatrix} \tag{30}
$$

*in which* $\xi_{ij} = PA_i^T + N_j^T B_i^T + A_i P + B_i N_j + S + Y + Y^T$. *If this is the case, the* $K_i$ *local feedback gains are given by*

$$K\_i = N\_i P^{-1}, \quad i = 1, 2, \ldots, r \tag{31}$$
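The change of variables $N_j = K_j P$ is what makes the synthesis conditions linear, and (31) simply inverts it. This can be illustrated without an LMI solver (the chapter uses MATLAB's LMI Toolbox): below we fabricate a feasible pair for a delay-free state-feedback problem by solving a Lyapunov equation, then check the linearized inequality and the gain recovery. The system data and gain are hypothetical, and the sketch covers only the delay-free part of the conditions:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical open-loop pair and a gain chosen so that A + B K is Hurwitz.
A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-4.0, -2.0]])

Acl = A + B @ K                          # closed-loop matrix, eigenvalues -1, -2
assert np.all(np.linalg.eigvals(Acl).real < 0)

# Solve Acl P + P Acl^T = -I; stability of Acl guarantees P > 0.
P = solve_continuous_lyapunov(Acl, -np.eye(2))
assert np.all(np.linalg.eigvalsh(P) > 0)

# Change of variables: N = K P turns A P + P A^T + B N + N^T B^T < 0 into an LMI
# in the decision variables (P, N).
N = K @ P
lmi = A @ P + P @ A.T + B @ N + N.T @ B.T
assert np.linalg.eigvalsh(lmi).max() < 0

# Gain recovery as in (31): K = N P^{-1}.
K_rec = N @ np.linalg.inv(P)
assert np.allclose(K_rec, K)
```

Here the LMI residual is exactly $-I$ by construction, since $AP + PA^T + BN + N^TB^T = (A+BK)P + P(A+BK)^T$.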


**Proof 3.** *Pre- and post-multiplying (22) by diag*[*I*, *I*, *Z*<sup>−1</sup>*P*, *I*] *and its transpose, we get*

$$
\Xi\_{ij}^1 + \Xi\_{ji}^1 \le 0, \quad 1 \le i \le j \le r \tag{32}
$$

*where*

$$
\Xi\_{ij}^{1} = \begin{bmatrix}
P\hat{A}\_{ij} + \hat{A}\_{ij}^{T}P + S + Y + Y^{T} & P\bar{A}\_{\tau i} - Y + T^{T} & \hat{A}\_{ij}^{T}P & -Y \\
\* & -(1 - \beta)S - T - T^{T} & \bar{A}\_{\tau i}^{T}P & -T \\
\* & \* & -\frac{1}{\tau}PZ^{-1}P & 0 \\
\* & \* & \* & -\frac{1}{\tau}Z
\end{bmatrix} \tag{33}
$$

*As pointed out by Wu et al. (Wu et al., 2004), if we just consider the stabilization condition, we can replace* $\hat{A}_{ij}$ *and* $\bar{A}_{\tau i}$ *with* $\hat{A}_{ij}^T$ *and* $\bar{A}_{\tau i}^T$*, respectively, in (33).*

*Assuming Nj* = *KjP, we get*

$$
\Xi\_{ij}^2 + \Xi\_{ji}^2 \le 0, \quad 1 \le i \le j \le r \tag{34}
$$

$$
\Xi\_{ij}^2 = \begin{bmatrix}
\bar{\xi}\_{ij} & P\bar{A}\_{\tau i}^T - Y + T^T & \bar{A}\_i P + \bar{B}\_i N\_j & -Y \\
\* & -(1 - \beta)S - T - T^T & \bar{A}\_{\tau i} P & -T \\
\* & \* & -\frac{1}{\tau} P Z^{-1} P & 0 \\
\* & \* & \* & -\frac{1}{\tau} Z
\end{bmatrix} \tag{35}
$$

*where* $\bar{\xi}_{ij} = P\bar{A}_i^T + N_j^T\bar{B}_i^T + \bar{A}_i P + \bar{B}_i N_j + S + Y + Y^T$ *denotes* $\xi_{ij}$ *with the system matrices replaced by their uncertain counterparts.*

*It follows from lemma 1 that*

$$-PZ^{-1}P \le -2\lambda P + \lambda^2 Z \tag{36}$$
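Inequality (36) follows from expanding $(P - \lambda Z)Z^{-1}(P - \lambda Z) \succeq 0$, which holds for any symmetric $P$, any $Z > 0$ and any scalar $\lambda$. A quick numerical check (an illustrative sketch, not part of the chapter):

```python
import numpy as np

rng = np.random.default_rng(2)

def lemma1_gap(P, Z, lam):
    """Smallest eigenvalue of P Z^{-1} P - 2*lam*P + lam^2 * Z.

    Expanding (P - lam*Z) Z^{-1} (P - lam*Z) >= 0 shows this matrix is
    positive semidefinite, i.e. -P Z^{-1} P <= -2*lam*P + lam^2 * Z.
    """
    Zinv = np.linalg.inv(Z)
    S = P @ Zinv @ P - 2.0 * lam * P + lam**2 * Z
    return np.linalg.eigvalsh(S).min()

n = 4
for _ in range(100):
    Q = rng.standard_normal((n, n))
    P = Q + Q.T                        # symmetric (sign-indefinite is fine)
    R = rng.standard_normal((n, n))
    Z = R @ R.T + n * np.eye(n)        # symmetric positive definite
    lam = float(rng.uniform(-5.0, 5.0))
    assert lemma1_gap(P, Z, lam) >= -1e-7
```

The bound is tight at $P = \lambda Z$, which is why the scalar λ in theorem 3 acts as a tuning parameter.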

*We obtain*

$$
\Xi\_{ij}^3 + \Xi\_{ji}^3 \le 0, \quad 1 \le i \le j \le r \tag{37}
$$

*where*

$$
\Xi\_{ij}^3 = \begin{bmatrix}
\bar{\xi}\_{ij} & P\bar{A}\_{\tau i}^T - Y + T^T & \bar{A}\_i P + \bar{B}\_i N\_j & -Y \\
\* & -(1 - \beta)S - T - T^T & \bar{A}\_{\tau i} P & -T \\
\* & \* & \frac{1}{\tau}(-2\lambda P + \lambda^2 Z) & 0 \\
\* & \* & \* & -\frac{1}{\tau} Z
\end{bmatrix} \tag{38}
$$

*The uncertain part is given by*


$$\begin{aligned} \Delta\Xi\_{ij} &= \begin{bmatrix}
P\Delta A\_i^T + N\_j^T \Delta B\_i^T + \Delta A\_i P + \Delta B\_i N\_j & P\Delta A\_{\tau i}^T & \Delta A\_i P + \Delta B\_i N\_j & 0 \\
\* & 0 & \Delta A\_{\tau i} P & 0 \\
\* & \* & 0 & 0 \\
\* & \* & \* & 0
\end{bmatrix} \\
&= \begin{bmatrix} M\_{Ai} \\ 0\_{3 \times 1} \end{bmatrix} F(t) \begin{bmatrix} E\_{Ai}P & 0 & E\_{Ai}P & 0 \end{bmatrix} + (\*) + \begin{bmatrix} M\_{Bi} \\ 0\_{3 \times 1} \end{bmatrix} F(t) \begin{bmatrix} E\_{Bi}N\_j & 0 & E\_{Bi}N\_j & 0 \end{bmatrix} + (\*) \\
&\quad + \begin{bmatrix} 0 \\ M\_{A\tau i} \\ 0\_{2 \times 1} \end{bmatrix} F(t) \begin{bmatrix} E\_{A\tau i}P & 0 & E\_{A\tau i}P & 0 \end{bmatrix} + (\*) \end{aligned} \tag{39}$$

*By using lemma 2, we obtain*

$$\begin{aligned} \Delta\Xi\_{ij} \le{} & \epsilon\_{Aij} \begin{bmatrix} M\_{Ai} \\ 0\_{3 \times 1} \end{bmatrix} \begin{bmatrix} M\_{Ai}^T & 0\_{1 \times 3} \end{bmatrix} + \epsilon\_{Aij}^{-1} \begin{bmatrix} PE\_{Ai}^T \\ 0 \\ PE\_{Ai}^T \\ 0 \end{bmatrix} \begin{bmatrix} E\_{Ai}P & 0 & E\_{Ai}P & 0 \end{bmatrix} \\
& + \epsilon\_{Bij} \begin{bmatrix} M\_{Bi} \\ 0\_{3 \times 1} \end{bmatrix} \begin{bmatrix} M\_{Bi}^T & 0\_{1 \times 3} \end{bmatrix} + \epsilon\_{Bij}^{-1} \begin{bmatrix} N\_j^T E\_{Bi}^T \\ 0 \\ N\_j^T E\_{Bi}^T \\ 0 \end{bmatrix} \begin{bmatrix} E\_{Bi}N\_j & 0 & E\_{Bi}N\_j & 0 \end{bmatrix} \\
& + \epsilon\_{A\tau ij} \begin{bmatrix} 0 \\ M\_{A\tau i} \\ 0\_{2 \times 1} \end{bmatrix} \begin{bmatrix} 0 & M\_{A\tau i}^T & 0\_{1 \times 2} \end{bmatrix} + \epsilon\_{A\tau ij}^{-1} \begin{bmatrix} PE\_{A\tau i}^T \\ 0 \\ PE\_{A\tau i}^T \\ 0 \end{bmatrix} \begin{bmatrix} E\_{A\tau i}P & 0 & E\_{A\tau i}P & 0 \end{bmatrix} \end{aligned} \tag{40}$$

*where* $\epsilon_{Aij}$, $\epsilon_{A\tau ij}$ *and* $\epsilon_{Bij}$ *are some positive scalars. By applying Schur complement and lemma 2, we obtain theorem 3.*

**Remark 1.** *It is noticed that (Wu & Li, 2007) and theorem 3 contain, respectively,* $r^3 + r^3(r-1)$ *and* $\frac{1}{2}r(r+1)$ *LMIs. This reduces the computational complexity. Moreover, it is easy to see that the requirement* β < 1 *is removed in our result due to the introduction of the variable T.*

**Remark 2.** *It is noted that Wu et al. (Wu & Li, 2007) have presented a new approach to delay-dependent stabilization for continuous-time fuzzy systems with time-varying delay. The disadvantage of this approach is that the LMIs presented involve three tuning parameters, whereas only one tuning parameter is involved in our approach.*

**Remark 3.** *Our method provides a less conservative result than other results which have been recently proposed in (Wu & Li, 2007), (Chen & Liu, 2005a) and (Guan & Chen, 2004). In the next section, a numerical example is given to demonstrate this point numerically.*
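Taking the LMI counts quoted in remark 1 at face value, the reduction in problem size is easy to tabulate (an illustrative sketch; the formulas are those stated in the remark, not derived here):

```python
def lmi_count_wu_li(r: int) -> int:
    # Count reported in remark 1 for the (Wu & Li, 2007) conditions.
    return r**3 + r**3 * (r - 1)

def lmi_count_theorem3(r: int) -> int:
    # Theorem 3 imposes one LMI per pair (i, j) with i <= j.
    return r * (r + 1) // 2

for r in (2, 3, 4):
    print(r, lmi_count_wu_li(r), lmi_count_theorem3(r))
# e.g. for r = 3 rules: 81 LMIs versus 6.
```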

### **4. Illustrative examples**

In this section, three examples are used to illustrate the effectiveness and the merits of the proposed results.

The first example is given to compare our result with the existing ones in the cases of constant delay and time-varying delay.

### **4.1 Example 1**

Consider the following T-S fuzzy model

$$\dot{x}(t) = \sum\_{i=1}^{2} h\_i(x\_1(t)) [(A\_i + \Delta A\_i)x(t) + (A\_{\tau i} + \Delta A\_{\tau i})x(t - \tau(t)) + B\_i u(t)] \tag{41}$$

where

$$A\_1 = \begin{bmatrix} 0 & 0.6 \\ 0 & 1 \end{bmatrix}, \quad A\_2 = \begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}, \quad A\_{\tau 1} = \begin{bmatrix} 0.5 & 0.9 \\ 0 & 2 \end{bmatrix}, \quad A\_{\tau 2} = \begin{bmatrix} 0.9 & 0 \\ 1 & 1.6 \end{bmatrix}, \quad B\_1 = B\_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

$$\Delta A\_i = MF(t)E\_i, \quad \Delta A\_{\tau i} = MF(t)E\_{\tau i}$$

$$M = \begin{bmatrix} -0.03 & 0 \\ 0 & 0.03 \end{bmatrix}, \quad E\_1 = E\_2 = \begin{bmatrix} -0.15 & 0.2 \\ 0 & 0.04 \end{bmatrix}, \quad E\_{\tau 1} = E\_{\tau 2} = \begin{bmatrix} -0.05 & -0.35 \\ 0.08 & -0.45 \end{bmatrix}$$

The membership functions are defined by

$$h\_1(x\_1(t)) = \frac{1}{1 + \exp(-2x\_1(t))}, \quad h\_2(x\_1(t)) = 1 - h\_1(x\_1(t)) \tag{42}$$

For the case of delay being constant and unknown and no uncertainties ($\Delta A\_i = 0$, $\Delta A\_{\tau i} = 0$), the existing delay-dependent approaches are used to design the fuzzy controllers. Based on theorem 3, for λ = 5, the largest delay is computed to be τ = 0.4909 such that system (41) is asymptotically stable. Based on the results obtained in (Wu & Li, 2007), we get the following table.

| Methods | Maximum allowed τ |
|---|---|
| Theorem of Chen and Liu (Chen & Liu, 2005a) | 0.1524 |
| Theorem of Guan and Chen (Guan & Chen, 2004) | 0.2302 |
| Theorem of Wu and Li (Wu & Li, 2007) | 0.2664 |
| Theorem 3 | 0.4909 |

Table 1. Comparison Among Various Delay-Dependent Stabilization Methods

It appears from this table that our result improves the existing ones. Letting τ = 0.4909, the state-feedback gain matrices are

$$K\_1 = \begin{bmatrix} 5.5780 & -16.4347 \end{bmatrix}, \quad K\_2 = \begin{bmatrix} 4.0442 & -15.4370 \end{bmatrix}$$

Fig. 1 shows the control results for system (41) with constant time-delay via fuzzy controller (7) with the previous gain matrices under the initial condition $x(t) = \begin{bmatrix} 2 & 0 \end{bmatrix}^T$, $t \in [-0.4909, 0]$.

Fig. 1. Control results for system (41) without uncertainties and with constant time delay τ = 0.4909.

It is clear that the designed fuzzy controller can stabilize this system.

For the case of $\Delta A\_i \neq 0$, $\Delta A\_{\tau i} \neq 0$ and constant delay, the approaches in (Guan & Chen, 2004), (Wu & Li, 2007) and (Lin et al., 2006) cannot be used to design feedback controllers as the system contains uncertainties. The method in (Chen & Liu, 2005b) and theorem 3 with λ = 5 can be used to design the fuzzy controllers. The corresponding results are listed below.

| Methods | Maximum allowed τ |
|---|---|
| Theorem of Chen and Liu (Chen & Liu, 2005a) | 0.1498 |
| Theorem 3 | 0.4770 |

Table 2. Comparison Among Various Delay-Dependent Stabilization Methods With Uncertainties

It appears from Table 2 that our result improves the existing ones in the case of an uncertain T-S fuzzy model with constant time-delay.

For the case of an uncertain T-S fuzzy model with time-varying delay, the approaches proposed in (Guan & Chen, 2004), (Chen & Liu, 2005a), (Wu & Li, 2007), (Chen et al., 2007) and (Lin et al., 2006) cannot be used to design feedback controllers as the system contains uncertainties and time-varying delay. By using theorem 3 with the choice of λ = 5 and τ(t) = 0.25 + 0.15 sin(t) (τ = 0.4, β = 0.15), we can obtain the following state-feedback gain matrices:

$$K\_1 = \begin{bmatrix} 4.7478 & -13.5217 \end{bmatrix}, \quad K\_2 = \begin{bmatrix} 3.1438 & -13.2255 \end{bmatrix}.$$
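The constant-delay design of example 1 can be replayed numerically. A minimal simulation sketch under stated assumptions: the PDC law of controller (7), which is not reproduced in this excerpt, is taken as $u(t) = \sum_j h_j(x_1(t)) K_j x(t)$, the delay is the constant τ = 0.4909 with no uncertainties, and integration is forward Euler with a history buffer:

```python
import numpy as np

# Rule data from example 1 and the constant-delay gains reported above.
A  = [np.array([[0.0, 0.6], [0.0, 1.0]]), np.array([[1.0, 0.0], [1.0, 0.0]])]
Ad = [np.array([[0.5, 0.9], [0.0, 2.0]]), np.array([[0.9, 0.0], [1.0, 1.6]])]
B  = np.array([[1.0], [1.0]])
K  = [np.array([[5.5780, -16.4347]]), np.array([[4.0442, -15.4370]])]

tau, dt, T = 0.4909, 1e-3, 10.0
steps, delay = int(T / dt), int(round(tau / dt))

x = np.zeros((steps + 1, 2))
x[0] = [2.0, 0.0]                        # initial condition [2 0]^T on [-tau, 0]

def h1(x1):
    return 1.0 / (1.0 + np.exp(-2.0 * x1))

for k in range(steps):
    h = np.array([h1(x[k, 0]), 1.0 - h1(x[k, 0])])
    xd = x[max(k - delay, 0)]            # delayed state (constant pre-history)
    u = sum(h[j] * (K[j] @ x[k])[0] for j in range(2))          # assumed PDC law
    dx = sum(h[i] * (A[i] @ x[k] + Ad[i] @ xd + B[:, 0] * u) for i in range(2))
    x[k + 1] = x[k] + dt * dx

assert np.isfinite(x).all()
assert np.abs(x).max() < 50.0            # trajectory stays bounded
assert np.linalg.norm(x[-1]) < 0.5       # and is driven toward the origin
```

The transient (a short excursion of x₁ before the feedback pulls the state back) is consistent with the behaviour described for Fig. 1.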

*x*0

0.1 0 0 �

*π*

<sup>1</sup> <sup>+</sup> *exp*(−3(*θ*(*t*) + 0.5*π*)))

0.2408 −0.0262 −0.1137 −0.0262 0.0236 0.0847 −0.1137 0.0847 0.3496

⎤ ⎦

*x*2

*x*0

*u*

*x*3(+)

Robust Control of Nonlinear Time-Delay Systems via Takagi-Sugeno Fuzzy Models 33

*x*1

*x*3(−)

*u*

� *vt lt*<sup>0</sup> 0 0 �*<sup>T</sup>*

Δ*A*<sup>1</sup> = Δ*A*<sup>2</sup> = Δ*Aτ*<sup>1</sup> = Δ*Aτ*<sup>2</sup> = *MF*(*t*)*E*

0.255 0.255 0.255 �*<sup>T</sup>* , *<sup>E</sup>* = �

Δ*B*<sup>1</sup> = Δ*B*<sup>2</sup> = *MbF*(*t*)*Eb*

*<sup>l</sup>* <sup>=</sup> 2.8, *<sup>L</sup>* <sup>=</sup> 5.5, *<sup>v</sup>* <sup>=</sup> <sup>−</sup>1, *<sup>t</sup>* <sup>=</sup> 2, *<sup>t</sup>*<sup>0</sup> <sup>=</sup> 0.5, *<sup>a</sup>* <sup>=</sup> 0.7, *<sup>d</sup>* <sup>=</sup> <sup>10</sup>*t*<sup>0</sup>

<sup>1</sup> <sup>+</sup> *exp*(−3(*θ*(*t*) <sup>−</sup> 0.5*π*))) <sup>×</sup> ( <sup>1</sup>

*h*2(*θ*(*t*)) = 1 − *h*<sup>1</sup>

*θ*(*t*) = *x*2(*t*) + *a*(*vt*/2*L*)*x*1(*t*)+(1 − *a*)(*vt*/2*L*)*x*1(*t* − *τ*(*t*)) By using theorem 3, with the choice of *λ* = 5, we can obtain the following feasible solution:

> ⎡ ⎣

⎤ ⎦ , *S* =

0.1790 0 0 �*<sup>T</sup>* , *Eb*<sup>1</sup> <sup>=</sup> 0.05, *Eb*<sup>2</sup> <sup>=</sup> 0.15

*B*<sup>1</sup> = *B*<sup>2</sup> =

*L l*

*M* = �

*Mb* = �

0.2249 0.0566 −0.0259 0.0566 0.0382 0.0775 −0.0259 0.0775 2.7440

The membership functions are defined as

*P* =

⎡ ⎣

*<sup>h</sup>*1(*θ*(*t*)) = (<sup>1</sup> <sup>−</sup> <sup>1</sup>

Fig. 3. Truck-trailer system

with

with

where

where

The simulation was tested under the initial conditions *x*(*t*) = � 2 0 �*<sup>T</sup>* , *<sup>t</sup>* <sup>∈</sup> � <sup>−</sup>0.4 0 � and uncertainty *<sup>F</sup>*(*t*) = � sin(*t*) 0 0 cos(*t*) � .

Fig. 2. Control results for system (41) with uncertainties and with time varying-delay *τ*(*t*) = 0.25 + 0.15*sin*(*t*)

From the simulation results in figure 2, it can be clearly seen that our method offers a new approach to stabilize nonlinear systems represented by uncertain T-S fuzzy model with time-varying delay.

The second example illustrates the validity of the design method in the case of slow time varying delay (*β* < 1)

### **4.2 Example 2: Application to control a truck-trailer**

In this example, we consider a continuous-time truck-trailer system, as shown in Fig. 3. We will use the delayed model given by (Chen & Liu, 2005a). It is assumed that *τ*(*t*) = 1.10 + 0.75 sin(*t*). Obviously, we have *τ* = 1.85, *β* = 0.75. The time-varying delay model with uncertainties is given by

$$\dot{\mathbf{x}}(t) = \sum\_{i=1}^{2} h\_i(\mathbf{x}\_1(t)) [(A\_i + \Delta A\_i)\mathbf{x}(t) + (A\_{7i} + \Delta A\_{7i})\mathbf{x}(t - \tau(t)) + (B\_i + \Delta B\_i)u(t)] \tag{43}$$

where

$$\begin{aligned} A\_{1} &= \begin{bmatrix} -a\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ a\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ a\frac{v\overline{v}^{2}}{2Lt\_{0}} & \frac{v\overline{t}}{l\_{0}} & 0 \end{bmatrix}, A\_{\tau 1} = \begin{bmatrix} -(1-a)\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ (1-a)\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ (1-a)\frac{v\overline{t}^{2}}{2Lt\_{0}} & 0 & 0 \end{bmatrix} \\ A\_{2} &= \begin{bmatrix} -a\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ a\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ a\frac{dv\overline{t}^{2}}{2Lt\_{0}} & \frac{dv\overline{t}}{l\_{0}} & 0 \end{bmatrix}, A\_{\tau 2} = \begin{bmatrix} -(1-a)\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ (1-a)\frac{v\overline{t}}{Lt\_{0}} & 0 & 0\\ (1-a)\frac{dv\overline{t}^{2}}{2Lt\_{0}} & 0 & 0 \end{bmatrix} \end{aligned}$$

$$B\_1 = B\_2 = \begin{bmatrix} \frac{\upsilon \overline{t}}{\Pi\_0} \ 0 \ 0 \end{bmatrix}^T$$

Fig. 2. Control results for system (41) with uncertainties and time-varying delay *τ*(*t*) = 0.25 + 0.15 sin(*t*), under the initial condition *x*(*t*) = [2 0]<sup>T</sup>, *t* ∈ [−0.4, 0], and uncertainty *F*(*t*) = diag(sin(*t*), cos(*t*))

with

$$\begin{aligned} \Delta A_1 &= \Delta A_2 = \Delta A_{\tau 1} = \Delta A_{\tau 2} = MF(t)E \\ M &= \begin{bmatrix} 0.255 & 0.255 & 0.255 \end{bmatrix}^T, \quad E = \begin{bmatrix} 0.1 & 0 & 0 \end{bmatrix} \end{aligned}$$

$$\Delta B_i = M_b F(t) E_{bi}, \quad i = 1, 2$$

with

$$M_b = \begin{bmatrix} 0.1790 & 0 & 0 \end{bmatrix}^T, \quad E_{b1} = 0.05, \quad E_{b2} = 0.15$$

where

$$l = 2.8, \; L = 5.5, \; v = -1, \; \overline{t} = 2.7, \; t_0 = 0.5, \; a = 0.7, \; d = \frac{10 t_0}{\pi}$$
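As a numerical sketch (not part of the chapter), the delay bounds stated above (1.85 and 0.75) can be checked and the rule-1 matrices assembled from these parameter values:

```python
import numpy as np

# Delay bounds for tau(t) = 1.10 + 0.75*sin(t), checked by sampling.
t = np.linspace(0.0, 20.0, 200001)
tau_bar = (1.10 + 0.75 * np.sin(t)).max()   # upper bound on the delay
beta = np.abs(0.75 * np.cos(t)).max()       # upper bound on |tau_dot|

# Truck-trailer local matrices assembled from the parameters above.
l, L, v, t_bar = 2.8, 5.5, -1.0, 2.7
t0, a = 0.5, 0.7
d = 10 * t0 / np.pi

c = v * t_bar / (L * t0)                    # common factor v*t_bar/(L*t0)
A1 = np.array([[-a * c, 0.0, 0.0],
               [ a * c, 0.0, 0.0],
               [ a * v**2 * t_bar**2 / (2 * L * t0), v * t_bar / t0, 0.0]])
Atau1 = np.array([[-(1 - a) * c, 0.0, 0.0],
                  [ (1 - a) * c, 0.0, 0.0],
                  [ (1 - a) * v**2 * t_bar**2 / (2 * L * t0), 0.0, 0.0]])
B1 = np.array([v * t_bar / (l * t0), 0.0, 0.0])

# The a / (1 - a) split distributes each physical coupling between the
# instantaneous state and the delayed state.
print(round(tau_bar, 2), round(beta, 2))  # -> 1.85 0.75
```

Note how adding each entry of *A*₁ to the matching entry of *A*_{τ1} recovers the undelayed physical coefficient, which is a quick consistency check on the split.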

The membership functions are defined as

$$h_1(\theta(t)) = \left(1 - \frac{1}{1 + \exp(-3(\theta(t) - 0.5\pi))}\right) \times \left(\frac{1}{1 + \exp(-3(\theta(t) + 0.5\pi))}\right)$$

$$h\_2(\theta(t)) = 1 - h\_1$$

where

$$\theta(t) = x_2(t) + a(v\overline{t}/2L)x_1(t) + (1-a)(v\overline{t}/2L)x_1(t-\tau(t))$$
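Since *h*₂ = 1 − *h*₁, the grades form a convex pair provided *h*₁ stays within [0, 1]; a quick numerical check of the product-of-sigmoids grade (an illustration, not part of the chapter):

```python
import numpy as np

def h1(theta):
    # Product of a decreasing and an increasing sigmoid, slope 3,
    # centred at +pi/2 and -pi/2 respectively.
    s_hi = 1.0 / (1.0 + np.exp(-3.0 * (theta - 0.5 * np.pi)))
    s_lo = 1.0 / (1.0 + np.exp(-3.0 * (theta + 0.5 * np.pi)))
    return (1.0 - s_hi) * s_lo

theta = np.linspace(-2 * np.pi, 2 * np.pi, 2001)
g1 = h1(theta)
g2 = 1.0 - g1            # h2 = 1 - h1 by definition

# Both grades stay in [0, 1] and sum to one, as membership grades must.
print(bool(np.all((g1 >= 0) & (g1 <= 1) & (g2 >= 0) & (g2 <= 1))))  # -> True
```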

By using Theorem 3 with the choice *λ* = 5, we obtain the following feasible solution:

$$P = \begin{bmatrix} 0.2249 & 0.0566 & -0.0259 \\ 0.0566 & 0.0382 & 0.0775 \\ -0.0259 & 0.0775 & 2.7440 \end{bmatrix}, \\ S = \begin{bmatrix} 0.2408 & -0.0262 & -0.1137 \\ -0.0262 & 0.0236 & 0.0847 \\ -0.1137 & 0.0847 & 0.3496 \end{bmatrix}.$$


$$Z = \begin{bmatrix} 0.0373 & 0.0133 & -0.0052 \\ 0.0133 & 0.0083 & 0.0202 \\ -0.0052 & 0.0202 & 1.0256 \end{bmatrix}, T = \begin{bmatrix} 0.0134 & 0.0053 & 0.0256 \\ 0.0075 & 0.0038 & -0.0171 \\ 0.0001 & 0.0014 & 0.0642 \end{bmatrix}$$

$$Y = \begin{bmatrix} -0.0073 & -0.0022 & 0.0192 \\ -0.0051 & -0.0031 & 0.0096 \\ 0.0012 & -0.0012 & -0.0804 \end{bmatrix}$$

$$\begin{aligned} \epsilon_{A1} &= 0.1087, \; \epsilon_{A2} = 0.0729, \; \epsilon_{A12} = 0.1184 \\ \epsilon_{A\tau 1} &= 0.0443, \; \epsilon_{A\tau 2} = 0.0369, \; \epsilon_{A\tau 12} = 0.0432 \\ \epsilon_{B1} &= 0.3179, \; \epsilon_{B2} = 0.3383, \; \epsilon_{B12} = 0.3250 \\ K_1 &= \begin{bmatrix} 3.7863 & -5.7141 & 0.1028 \end{bmatrix} \\ K_2 &= \begin{bmatrix} 3.8049 & -5.8490 & 0.0965 \end{bmatrix} \end{aligned}$$
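A quick sanity check (not in the chapter) that the returned *P* and *S* are indeed symmetric positive definite, as the feasibility of the LMIs requires:

```python
import numpy as np

P = np.array([[ 0.2249, 0.0566, -0.0259],
              [ 0.0566, 0.0382,  0.0775],
              [-0.0259, 0.0775,  2.7440]])
S = np.array([[ 0.2408, -0.0262, -0.1137],
              [-0.0262,  0.0236,  0.0847],
              [-0.1137,  0.0847,  0.3496]])

for name, mat in (("P", P), ("S", S)):
    # Symmetric matrices: eigvalsh applies; positive definite iff all
    # eigenvalues are strictly positive.
    eig = np.linalg.eigvalsh(mat)
    print(name, eig.min() > 0)
```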

The simulation was carried out for the initial condition *x*(*t*) = [−0.5*π* 0.75*π* −5]<sup>T</sup>, *t* ∈ [−1.85, 0].

Fig. 4. Control results for the truck-trailer system (43)

The third example is presented to illustrate the effectiveness of the proposed main result for a fast time-varying delay system.

#### **4.3 Example 3: Application to an inverted pendulum**

Consider the well-studied example of balancing an inverted pendulum on a cart (Cao et al., 2000).

$$
\dot{\mathbf{x}}\_1 = \mathbf{x}\_2 \tag{44}
$$

$$\dot{\mathbf{x}}\_2 = \frac{g\sin(\mathbf{x}\_1) - aml\mathbf{x}\_2^2 \sin(2\mathbf{x}\_1)/2 - a\cos(\mathbf{x}\_1)u}{4l/3 - aml\cos^2(\mathbf{x}\_1)}\tag{45}$$

Fig. 5. Inverted pendulum

where *x*<sub>1</sub> is the pendulum angle (represented by *θ* in Fig. 5) and *x*<sub>2</sub> is the angular velocity (*θ̇*). *g* = 9.8 m/s<sup>2</sup> is the gravitational constant, *m* is the mass of the pendulum, *M* is the mass of the cart, 2*l* is the length of the pendulum, and *u* is the force applied to the cart; *a* = 1/(*m* + *M*). The nonlinear system can be described by a fuzzy model with two IF-THEN rules:

Plant Rule 1: IF *x*<sup>1</sup> is about 0, Then

$$
\dot{\mathbf{x}}(t) = A\_1 \mathbf{x}(t) + B\_1 \mathbf{u}(t) \tag{46}
$$

Plant rule 2: IF *x*<sub>1</sub> is about ±*π*/2, Then

$$
\dot{x}(t) = A_2 x(t) + B_2 u(t) \tag{47}
$$

where


$$\begin{aligned} A\_1 &= \begin{bmatrix} 0 & 1 \\ 17.2941 & 0 \end{bmatrix}, A\_2 = \begin{bmatrix} 0 & 1 \\ 12.6305 & 0 \end{bmatrix}, \\ B\_1 &= \begin{bmatrix} 0 \\ -0.1765 \end{bmatrix}, B\_2 = \begin{bmatrix} 0 \\ -0.0779 \end{bmatrix} \end{aligned}$$
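These rule-1 entries are consistent with linearizing (44)-(45) at *x*₁ = 0 under the benchmark values *m* = 2 kg, *M* = 8 kg, *l* = 0.5 m commonly used with this example (an assumption here; the chapter does not restate them):

```python
import numpy as np

# Assumed benchmark parameters (not restated in the chapter).
g, m, M, l = 9.8, 2.0, 8.0, 0.5
a = 1.0 / (m + M)

# Linearizing (45) at x1 = 0: sin(x1) ~ x1, cos(x1) ~ 1, x2^2 term drops.
denom = 4.0 * l / 3.0 - a * m * l   # 4l/3 - a*m*l
a1 = g / denom                      # A1[1, 0], the state-matrix entry of rule 1
b1 = -a / denom                     # B1[1],   the input-matrix entry of rule 1

print(round(a1, 4), round(b1, 4))   # -> 17.2941 -0.1765
```

The rule-2 entries come from the analogous linearization about *x*₁ = ±*π*/2, which is not repeated here.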

The membership functions are

$$\begin{aligned} h_1 &= \left(1 - \frac{1}{1 + \exp(-7(x_1 - \pi/4))}\right) \times \frac{1}{1 + \exp(-7(x_1 + \pi/4))} \\ h_2 &= 1 - h_1 \end{aligned}$$

In order to illustrate the use of Theorem 3, we assume that the delay terms are perturbed, for values of the scalar *s* ∈ [0, 1]; the fuzzy time-delay model considered here is as follows:

$$\dot{x}(t) = \sum_{i=1}^{r} h_i\left[((1-s)A_i + \Delta A_i)x(t) + (sA_{\tau i} + \Delta A_{\tau i})x(t-\tau(t)) + B_i u(t)\right] \tag{48}$$

where

$$A_1 = \begin{bmatrix} 0 & 1 \\ 17.2941 & 0 \end{bmatrix}, \quad A_2 = \begin{bmatrix} 0 & 1 \\ 12.6305 & 0 \end{bmatrix}$$

$$B\_1 = \begin{bmatrix} 0 \\ -0.1765 \end{bmatrix}, B\_2 = \begin{bmatrix} 0 \\ -0.0779 \end{bmatrix}$$

$$\Delta A_1 = \Delta A_2 = \Delta A_{\tau 1} = \Delta A_{\tau 2} = MF(t)E$$


with

$$M = \begin{bmatrix} 0.1 & 0\\ 0 & 0.1 \end{bmatrix}^T \text{ , } E = \begin{bmatrix} 0.1 & 0\\ 0 & 0.1 \end{bmatrix}$$

Let *<sup>s</sup>* <sup>=</sup> 0.1 and uncertainty *<sup>F</sup>*(*t*) = sin(*t*) 0 0 cos(*t*) . We consider a fast time-varying delay *τ*(*t*) = 0.2 + 1.2 |sin(*t*)| (*β* = 1.2 > 1).

Using the LMI Toolbox, a set of feasible solutions to LMIs (29) is obtained:

$$K_1 = \begin{bmatrix} 159.7095 & 30.0354 \end{bmatrix}, \quad K_2 = \begin{bmatrix} 347.2744 & 78.5552 \end{bmatrix}$$

Fig. 6 shows the control results for the system (48) with time-varying delay *τ*(*t*) = 0.2 + 1.2 |sin(*t*)| under the initial condition *x*(*t*) = [2 0]<sup>T</sup>, *t* ∈ [−1.40, 0].

Fig. 6. Control results for the system (48) with time-varying delay *τ*(*t*) = 0.2 + 1.2 |sin(*t*)|.
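For intuition, the closed loop can be stepped forward numerically. The sketch below is an illustration only, not the chapter's simulation: it applies the blended state-feedback law *u* = *h*₁*K*₁*x* + *h*₂*K*₂*x* with the gains above (this sign convention is assumed; with the negative input gains it is the stabilizing one), takes *A*<sub>τi</sub> = *A*<sub>i</sub>, and drops the uncertainty terms Δ*A*<sub>i</sub> — all simplifying assumptions.

```python
import numpy as np

# Forward-Euler sketch of (48) with Delta_Ai = 0 and A_tau_i = A_i (assumptions).
A = [np.array([[0.0, 1.0], [17.2941, 0.0]]),
     np.array([[0.0, 1.0], [12.6305, 0.0]])]
B = [np.array([0.0, -0.1765]), np.array([0.0, -0.0779])]
K = [np.array([159.7095, 30.0354]), np.array([347.2744, 78.5552])]
s = 0.1

def h1(x1):
    # Product-of-sigmoids membership grade for rule 1.
    return (1 - 1 / (1 + np.exp(-7 * (x1 - np.pi / 4)))) \
           / (1 + np.exp(-7 * (x1 + np.pi / 4)))

dt, T = 1e-3, 8.0
n, hist = int(T / dt), int(1.4 / dt) + 1   # history buffer covers tau(t) <= 1.4
x = np.zeros((n + hist, 2))
x[:hist] = [2.0, 0.0]                      # constant initial function on [-1.4, 0]

for k in range(hist, n + hist - 1):
    t = (k - hist) * dt
    tau = 0.2 + 1.2 * abs(np.sin(t))
    xd = x[k - int(round(tau / dt))]       # delayed state x(t - tau(t))
    g = (h1(x[k, 0]), 1 - h1(x[k, 0]))
    u = sum(gi * (Ki @ x[k]) for gi, Ki in zip(g, K))
    f = sum(gi * ((1 - s) * Ai @ x[k] + s * Ai @ xd + Bi * u)
            for gi, Ai, Bi in zip(g, A, B))
    x[k + 1] = x[k] + dt * f
```

Under these assumptions the state decays from the initial value [2 0]<sup>T</sup> toward the origin over the simulated horizon.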

### **5. Conclusion**

In this chapter, we have investigated the delay-dependent design of state-feedback stabilizing fuzzy controllers for uncertain T-S fuzzy systems with time-varying delay. The method is an important contribution in that it establishes a new way to reduce the conservatism and the computational effort at the same time. The delay-dependent stabilization conditions obtained in this chapter are presented in terms of LMIs involving a single tuning parameter. Finally, three examples are used to illustrate numerically that our results are less conservative than the existing ones.

### **6. References**

Boukas, E. & ElHajjaji, A. (2006). On stabilizability of stochastic fuzzy systems, *American Control Conference, 2006*, Minneapolis, Minnesota, USA, pp. 4362–4366.

Cao, S. G., Rees, N. W. & Feng, G. (2000). *H*∞ control of uncertain fuzzy continuous-time systems, *Fuzzy Sets and Systems* Vol. 115(No. 2): 171–190.

Cao, Y.-Y. & Frank, P. M. (2000). Analysis and synthesis of nonlinear time-delay systems via fuzzy control approach, *IEEE Transactions on Fuzzy Systems* Vol. 8(No. 2): 200–211.

Chadli, M. & ElHajjaji, A. (2006). An observer-based robust fuzzy control of nonlinear systems with parametric uncertainties, *Fuzzy Sets and Systems* Vol. 157(No. 9): 1279–1281.

Chen, B., Lin, C., Liu, X. & Tong, S. (2008). Guaranteed cost control of T-S fuzzy systems with input delay, *International Journal of Robust and Nonlinear Control* Vol. 18: 1230–1256.

Chen, B. & Liu, X. (2005a). Delay-dependent robust *H*∞ control for T-S fuzzy systems with time delay, *IEEE Transactions on Fuzzy Systems* Vol. 13(No. 4): 544–556.

Chen, B. & Liu, X. (2005b). Fuzzy guaranteed cost control for nonlinear systems with time-varying delay, *IEEE Transactions on Fuzzy Systems* Vol. 13(No. 2): 238–249.

Chen, B., Liu, X., Tang, S. & Lin, C. (2008). Observer-based stabilization of T-S fuzzy systems with input delay, *IEEE Transactions on Fuzzy Systems* Vol. 16(No. 3): 625–633.

Chen, B., Liu, X. & Tong, S. (2007). New delay-dependent stabilization conditions of T-S fuzzy systems with constant delay, *Fuzzy Sets and Systems* Vol. 158(No. 20): 2209–2242.

Guan, X.-P. & Chen, C.-L. (2004). Delay-dependent guaranteed cost control for T-S fuzzy systems with time delays, *IEEE Transactions on Fuzzy Systems* Vol. 12(No. 2): 236–249.

Guerra, T., Kruszewski, A., Vermeiren, L. & Tirmant, H. (2006). Conditions of output stabilization for nonlinear models in the Takagi-Sugeno's form, *Fuzzy Sets and Systems* Vol. 157(No. 9): 1248–1259.

He, Y., Wang, Q., Xie, L. H. & Lin, C. (2007). Further improvement of free-weighting matrices technique for systems with time-varying delay, *IEEE Trans. Autom. Control* Vol. 52(No. 2): 293–299.

He, Y., Wu, M., She, J. H. & Liu, G. P. (2004). Parameter-dependent Lyapunov functional for stability of time-delay systems with polytopic type uncertainties, *IEEE Trans. Autom. Control* Vol. 49(No. 5): 828–832.

Kim, E. & Lee, H. (2000). New approaches to relaxed quadratic stability condition of fuzzy control systems, *IEEE Transactions on Fuzzy Systems* Vol. 8(No. 5): 523–534.

Li, C., Wang, H. & Liao, X. (2004). Delay-dependent robust stability of uncertain fuzzy systems with time-varying delays, *Control Theory and Applications, IEE Proceedings*, pp. 417–421.

Lin, C., Wang, Q. & Lee, T. (2006). Delay-dependent LMI conditions for stability and stabilization of T-S fuzzy systems with bounded time-delay, *Fuzzy Sets and Systems* Vol. 157(No. 9): 1229–1247.

Moon, Y. S., Park, P., Kwon, W. H. & Lee, Y. S. (2001). Delay-dependent robust stabilization of uncertain state-delayed systems, *International Journal of Control* Vol. 74(No. 14): 1447–1455.

Oudghiri, M., Chadli, M. & ElHajjaji, A. (2007). One-step procedure for robust output fuzzy control, *CD-ROM of the 15th Mediterranean Conference on Control and Automation, IEEE-Med'07*, Athens, Greece, pp. 1–6.

Park, P., Lee, S. S. & Choi, D. J. (2003). A state-feedback stabilization for nonlinear time-delay systems: A new fuzzy weighting-dependent Lyapunov-Krasovskii functional approach, *Proceedings of the 42nd IEEE Conference on Decision and Control*, Maui, Hawaii, pp. 5233–5238.

Wang, H. O., Tanaka, K. & Griffin, M. F. (1996). An approach to fuzzy control of nonlinear systems: Stability and design issues, *IEEE Transactions on Fuzzy Systems* Vol. 4(No. 1): 14–23.

Wang, Y., Xie, L. & Souza, C. D. (1992). Robust control of a class of uncertain nonlinear systems, *Systems & Control Letters* Vol. 19(No. 2): 139–149.

Wu, H.-N. & Li, H.-X. (2007). New approach to delay-dependent stability analysis and stabilization for continuous-time fuzzy systems with time-varying delay, *IEEE Transactions on Fuzzy Systems* Vol. 15(No. 3): 482–493.

Wu, M., He, Y. & She, J. (2004). New delay-dependent stability criteria and stabilizing method for neutral systems, *IEEE Transactions on Automatic Control* Vol. 49(No. 12): 2266–2271.

Xie, L. & DeSouza, C. (1992). Robust *H*∞ control for linear systems with norm-bounded time-varying uncertainty, *IEEE Trans. Automatic Control* Vol. 37(No. 1): 1188–1191.

Zhang, Y. & Heng, P. (2002). Stability of fuzzy control systems with bounded uncertain delays, *IEEE Transactions on Fuzzy Systems* Vol. 10(No. 1): 92–97.




## **Observer-Based Robust Control of Uncertain Fuzzy Models with Pole Placement Constraints**

Pagès Olivier and El Hajjaji Ahmed *University of Picardie Jules Verne, MIS, Amiens France* 

## **1. Introduction**


Practical systems are often modelled by nonlinear dynamics. Controlling nonlinear systems is still an open problem due to their complex nature. The problem becomes harder when the system parameters are uncertain. To control such systems, we may use the linearization technique around a given operating point and then employ the known methods of linear control theory. This approach is successful when the operating point of the system is restricted to a certain region. Unfortunately, in practice it will not work for physical systems with a time-varying operating point. The fuzzy model proposed by Takagi and Sugeno (T-S) is an alternative that can be used in this case. It has been proved that T-S fuzzy models can effectively approximate any continuous nonlinear system by a set of local linear dynamics with their linguistic description. This fuzzy dynamic model is a convex combination of several linear models. It is described by fuzzy rules of the type *If-Then* that represent local input-output models for a nonlinear system. The overall system model is obtained by "blending" these linear models through nonlinear fuzzy membership functions. For more details on this topic, we refer the reader to (Tanaka & al, 1998 and Wang & al, 1995) and the references therein.

The stability analysis and the synthesis of controllers and observers for nonlinear systems described by T-S fuzzy models have been the subject of many research works in recent years. The fuzzy controller is often designed under the well-known Parallel Distributed Compensation (PDC) procedure. In the presence of parametric uncertainties in T-S fuzzy models, it is necessary to consider robust stability in order to guarantee both stability and robustness with respect to those uncertainties, which may include modelling error, parameter perturbations, external disturbances, and fuzzy approximation errors. So far, there have been some attempts in the area of uncertain nonlinear systems based on T-S fuzzy models in the literature. Most of these existing works assume that all the system states are measured. However, in many control systems and real applications, these are not always available. Several authors have therefore recently proposed observer-based robust controller design methods, considering the fact that in real control problems the full state information is not always available. In the case without uncertainties, we apply the separation property to design the observer-based controller: the observer is designed so that its dynamics are fast, and we independently design the controller by imposing slower dynamics. Recently, much effort has been devoted to observer-based control for T-S fuzzy models. (Tanaka & al, 1998) have studied the fuzzy observer design for T-S fuzzy control systems. Nonetheless, in

the presence of uncertainties, the separation property is not applicable any more. In (El Messoussi & al, 2006), the authors have proposed sufficient global stability conditions for the stabilization of uncertain T-S fuzzy models with unavailable states using a robust fuzzy observer-based controller, but with no consideration of the control performances and, in particular, of the transient behaviour.

From a practical viewpoint, it is necessary to find a controller which will guarantee the desired performances of the controlled system. For example, a fast decay and a good damping can be imposed by placing the closed-loop poles in a suitable region of the complex plane. Chilali and Gahinet (Chilali & Gahinet, 1996) have proposed the concept of an LMI (Linear Matrix Inequality) region as a convenient LMI-based representation of general stability regions for uncertain linear systems. Regions of interest include α-stability regions, disks and conic sectors. In (Chilali & al, 1999), a robust pole placement has been studied in the case of linear systems with static uncertainties on the state matrix. A vertical-strip and α-stability robust pole placement has been studied in (Wang & al, 1995, Wang & al, 1998 and Wang & al, 2001), respectively, for uncertain linear systems in which the concerned uncertainties are polytopic and the proposed conditions are not LMI. In (Hong & Man, 2003), the control law synthesis with a pole placement in a circular LMI region is presented for certain T-S fuzzy models. Different LMI regions are considered in (Farinwata & al, 2000 and Kang & al, 198) for closed-loop pole placements in the case of T-S fuzzy models without uncertainties.

In this work, we extend the results of (El Messoussi & al, 2005), in which we have developed sufficient robust pole placement conditions for continuous T-S fuzzy models with measurable state variables and structured parametric uncertainties.

The main goal of this paper is to study the pole placement constraints for T-S fuzzy models with structured uncertainties by designing an observer-based fuzzy controller in order to guarantee the closed-loop stability. However, like (Lo & Li, 2004 and Tong & Li, 2002), we do not know the position of the system state poles or the position of the estimation error poles. The main contribution of this paper is as follows: the idea is to place the poles associated with the state dynamics in one LMI region and to place the poles associated with the estimation error dynamics in another LMI region (if possible, farther to the left). Unfortunately, however, the separation property is not applicable. Moreover, the estimation error dynamics depend on the state because of the uncertainties. If the state dynamics are slow, we will have a slow convergence of the estimation error to the equilibrium point zero in spite of its own fast dynamics. So, in this paper, we propose an algorithm to design the fuzzy controller and the fuzzy observer separately by imposing the two pole placements. Moreover, by using the *H*<sup>∞</sup> approach, we ensure that the estimation error converges faster to the equilibrium point zero.

This chapter is organized as follows: in Section *2*, we give the class of uncertain fuzzy models, the observer-based fuzzy controller structure and the control objectives. After reviewing existing LMI constraints for a pole placement in Section *3*, we propose the new conditions for the uncertain augmented T-S fuzzy system containing both the fuzzy controller as well as the observer dynamics. Finally, in Section *4*, an illustrative application example shows the effectiveness of the proposed robust pole placement approach. Some conclusions are given in Section *5*.

### **2. Problem formulation and preliminaries**

Considering a T-S fuzzy model with parametric uncertainties composed of *r* plant rules that can be represented by the following fuzzy rule:

### **Plant rule** *i* **:**

If *z*<sub>1</sub>(*t*) is *M*<sub>1*i*</sub> and … and *z*<sub>ν</sub>(*t*) is *M*<sub>ν*i*</sub>, Then

$$\begin{cases} \dot{x}(t) = (A_i + \Delta A_i)x(t) + (B_i + \Delta B_i)u(t) \\ y(t) = C_i x(t) \end{cases} \quad i = 1,\ldots,r \tag{1}$$

where *M*<sub>*ij*</sub> indicates the *j*th fuzzy set associated with the *i*th variable *z*<sub>*i*</sub>(*t*), *r* is the number of fuzzy model rules, *x*(*t*) ∈ ℝ<sup>*n*</sup> is the state vector, *u*(*t*) ∈ ℝ<sup>*m*</sup> is the input vector, *y*(*t*) ∈ ℝ<sup>*l*</sup> is the output vector, *A*<sub>*i*</sub> ∈ ℝ<sup>*n*×*n*</sup>, *B*<sub>*i*</sub> ∈ ℝ<sup>*n*×*m*</sup> and *C*<sub>*i*</sub> ∈ ℝ<sup>*l*×*n*</sup>, and *z*<sub>1</sub>(*t*), …, *z*<sub>ν</sub>(*t*) are premise variables. We suppose that the pairs (*A*<sub>*i*</sub>, *B*<sub>*i*</sub>) are controllable and the pairs (*A*<sub>*i*</sub>, *C*<sub>*i*</sub>) are observable.

The structured uncertainties considered here are norm-bounded, in the form:

$$\Delta A_i = H_{ai}\Delta_{ai}(t)E_{ai}, \quad \Delta B_i = H_{bi}\Delta_{bi}(t)E_{bi}, \quad i = 1,\ldots,r \tag{2}$$

where *H*<sub>*ai*</sub>, *H*<sub>*bi*</sub>, *E*<sub>*ai*</sub>, *E*<sub>*bi*</sub> are known real constant matrices of appropriate dimensions and Δ<sub>*ai*</sub>(*t*), Δ<sub>*bi*</sub>(*t*) are unknown matrix functions satisfying:

$$\Delta_{ai}^{T}(t)\Delta_{ai}(t) \leq I, \quad \Delta_{bi}^{T}(t)\Delta_{bi}(t) \leq I, \quad i = 1,\ldots,r \tag{3}$$

where Δ<sub>*ai*</sub><sup>*T*</sup>(*t*) is the transposed matrix of Δ<sub>*ai*</sub>(*t*) and *I* is the identity matrix of appropriate dimension.

The overall T-S fuzzy model is inferred as:

$$\dot{x}(t) = \sum_{i=1}^{r} h_i(z(t))\left[(A_i + \Delta A_i)x(t) + (B_i + \Delta B_i)u(t)\right] \tag{4}$$

where

$$h_i(z(t)) = \frac{w_i(z(t))}{\sum_{i=1}^{r} w_i(z(t))}, \qquad w_i(z(t)) = \prod_{j=1}^{\nu} \mu_{M_{ij}}(z_j(t))$$

and μ<sub>*Mij*</sub>(*z*<sub>*j*</sub>(*t*)) is the fuzzy meaning of symbol *M*<sub>*ij*</sub>. From (1), the T-S fuzzy system output is:

$$y(t) = \sum_{i=1}^{r} h_i(z(t))C_i x(t)$$

In this paper we assume that not all of the state variables are measurable. A fuzzy state observer for the T-S fuzzy model with parametric uncertainties (1) is formulated as follows:

Observer rule *i*:

If *z*<sub>1</sub>(*t*) is *M*<sub>1*i*</sub> and … and *z*<sub>ν</sub>(*t*) is *M*<sub>ν*i*</sub>, Then

$$\begin{cases} \dot{\hat{x}}(t) = A_i\hat{x}(t) + B_i u(t) + G_i(y(t) - \hat{y}(t)) \\ \hat{y}(t) = C_i\hat{x}(t) \end{cases} \quad i = 1,\ldots,r \tag{5}$$

The output of (5) is represented as:

$$\hat{y}(t) = \sum_{i=1}^{r} h_i(z(t))C_i\hat{x}(t)$$

Note that the premise variables do not depend on the state variables estimated by the fuzzy observer. The fuzzy observer design is to determine the local gains *G*<sub>*i*</sub> ∈ ℝ<sup>*n*×*l*</sup> in the consequent part.

Considering a T-S fuzzy model with parametric uncertainties composed of *r* plant rules that

particular to the transient behaviour.

conclusions are given in Section *5*.

**2. Problem formulation and preliminaries** 

can be represented by the following fuzzy rule:

$$\text{If } z\_1(t) \text{ is } M\_{1i} \text{ and } \dots \text{ and } z\_{\nu}(t) \text{ is } M\_{\nu i} \text{ Then } \begin{cases} \dot{x}(t) = (A\_i + \Delta A\_i)x(t) + (B\_i + \Delta B\_i)u(t), \\ y(t) = C\_i x(t) \qquad i = 1, \dots, r \end{cases} \tag{1}$$

The structured uncertainties considered here are norm-bounded in the form:

$$\begin{aligned} \Delta A\_i &= H\_{ai} \Delta\_{ai}(t) E\_{ai}, \\ \Delta B\_i &= H\_{bi} \Delta\_{bi}(t) E\_{bi}, \qquad i = 1, \dots, r \end{aligned} \tag{2}$$

where $H\_{ai}$, $H\_{bi}$, $E\_{ai}$, $E\_{bi}$ are known real constant matrices of appropriate dimensions and $\Delta\_{ai}(t)$, $\Delta\_{bi}(t)$ are unknown matrix functions satisfying:

$$\begin{aligned} \Delta\_{ai}^{t}(t)\Delta\_{ai}(t) &\leq I, \\ \Delta\_{bi}^{t}(t)\Delta\_{bi}(t) &\leq I, \qquad i = 1, \ldots, r \end{aligned} \tag{3}$$

$\Delta\_{ai}^{t}(t)$ is the transpose of $\Delta\_{ai}(t)$ and $I$ is the identity matrix of appropriate dimension. We suppose that the pairs $(A\_i, B\_i)$ are controllable and the pairs $(A\_i, C\_i)$ are observable. $M\_{ij}$ denotes the $j$th fuzzy set associated with the $i$th premise variable $z\_i(t)$, $r$ is the number of fuzzy model rules, $x(t) \in \mathbb{R}^{n}$ is the state vector, $u(t) \in \mathbb{R}^{m}$ is the input vector, $y(t) \in \mathbb{R}^{l}$ is the output vector, $A\_i \in \mathbb{R}^{n \times n}$, $B\_i \in \mathbb{R}^{n \times m}$ and $C\_i \in \mathbb{R}^{l \times n}$. $z\_1(t), \dots, z\_{v}(t)$ are premise variables. From (1), the T-S fuzzy system is:

$$\begin{cases} \dot{\mathbf{x}}(t) = \sum\_{i=1}^{r} h\_i(\mathbf{z}(t)) \left[ (A\_i + \Delta A\_i)\mathbf{x}(t) + (B\_i + \Delta B\_i)\mathbf{u}(t) \right] \\\\ \mathbf{y}(t) = \sum\_{i=1}^{r} h\_i(\mathbf{z}(t)) \mathbf{C}\_i \mathbf{x}(t) \end{cases} \tag{4}$$

where $h\_i(z(t)) = \dfrac{w\_i(z(t))}{\sum\_{i=1}^{r} w\_i(z(t))}$ and $w\_i(z(t)) = \prod\_{j=1}^{v} \mu\_{M\_{ij}}(z\_j(t))$,

where $\mu\_{M\_{ij}}(z\_j(t))$ is the membership grade of $z\_j(t)$ in the fuzzy set $M\_{ij}$.
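To make the defuzzification in (4) concrete, here is a minimal numerical sketch of the normalized weights $h\_i$ for a two-rule model with one premise variable; the Gaussian membership functions and their centers are chosen purely for illustration and are not part of the chapter's example:

```python
import numpy as np

def gauss(z, center, sigma):
    # Gaussian membership grade mu_Mij(z); shape chosen for illustration only
    return np.exp(-0.5 * ((z - center) / sigma) ** 2)

def h_weights(z):
    # Rule activations w_i(z): product over premise variables (here just one)
    w = np.array([gauss(z, -1.0, 1.0), gauss(z, 1.0, 1.0)])
    # Normalized weights h_i(z) = w_i / sum_i w_i, as in (4)
    return w / w.sum()

h = h_weights(0.3)
```

The weights $h\_i$ then blend the local models $(A\_i, B\_i)$ into the global dynamics of (4); by construction they are non-negative and sum to one.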

In this paper, we assume that the state variables are not fully measurable. A fuzzy state observer for the T-S fuzzy model with parametric uncertainties (1) is formulated as follows:

**Observer rule** *i* **:**

$$\text{If } z\_1(t) \text{ is } M\_{1i} \text{ and } \dots \text{ and } z\_{\nu}(t) \text{ is } M\_{\nu i} \text{ Then } \begin{cases} \dot{\hat{x}}(t) = A\_i \hat{x}(t) + B\_i u(t) - G\_i(y(t) - \hat{y}(t)), \\ \hat{y}(t) = C\_i \hat{x}(t) \qquad i = 1, \dots, r \end{cases} \tag{5}$$

The fuzzy observer design is to determine the local gains $G\_i \in \mathbb{R}^{n \times l}$ in the consequent part. Note that the premise variables do not depend on the state variables estimated by the fuzzy observer.

The output of (5) is represented as follows:


$$\begin{cases}
\dot{\hat{x}}(t) = \sum\_{i=1}^{r} h\_i(z(t)) \left\{ A\_i \hat{x}(t) + B\_i u(t) - G\_i(y(t) - \hat{y}(t)) \right\} \\
\hat{y}(t) = \sum\_{i=1}^{r} h\_i(z(t)) \mathbb{C}\_i \hat{x}(t)
\end{cases} \tag{6}$$
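The observer mechanics of (5)–(6) can be exercised numerically. The sketch below simulates a single-rule case (so $h\_1(z(t)) = 1$) with matrices and an observer gain invented for illustration; it is not the chapter's design procedure, only a forward-Euler check that the estimation error decays when $A + GC$ is Hurwitz:

```python
import numpy as np

# Illustrative single-rule data (not taken from the chapter's example)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
G = np.array([[-8.0], [-15.0]])   # chosen so A + G C is Hurwitz, hence e(t) -> 0

x = np.array([[1.0], [0.0]])      # true state of (1), uncertainties set to zero
xh = np.zeros((2, 1))             # observer state of (5), started at zero
dt = 1e-3
for _ in range(8000):             # forward-Euler integration, u(t) = 0 for brevity
    y, yh = C @ x, C @ xh
    x = x + dt * (A @ x)
    xh = xh + dt * (A @ xh - G @ (y - yh))
```

Subtracting the two update laws reproduces the error dynamics $\dot{e} = (A + GC)e$, which is exactly the block the lemmas below constrain.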

To stabilize this class of systems, we use the PDC observer-based approach (Tanaka & al, 1998). The PDC observer-based controller is defined by the following rule base system:

**Controller rule** *i* **:**

$$\text{If } z\_1(t) \text{ is } M\_{1i} \text{ and } \dots \text{ and } z\_{\nu}(t) \text{ is } M\_{\nu i} \text{ Then } u(t) = K\_i \hat{x}(t), \quad i = 1, \dots, r \tag{7}$$

The overall fuzzy controller is represented by:

$$u(t) = \frac{\sum\_{i=1}^{r} w\_i(z(t))K\_i \hat{x}(t)}{\sum\_{i=1}^{r} w\_i(z(t))} = \sum\_{i=1}^{r} h\_i(z(t))K\_i \hat{x}(t) \tag{8}$$

Let us denote the estimation error as:

$$e(t) = \mathbf{x}(t) - \hat{\mathbf{x}}(t) \tag{9}$$

The augmented system containing both the fuzzy controller and observer is represented as follows:

$$
\begin{bmatrix}
\dot{\mathbf{x}}(t) \\
\dot{e}(t)
\end{bmatrix} = \overline{A}(z(t)) \times \begin{bmatrix}
\mathbf{x}(t) \\
e(t)
\end{bmatrix} \tag{10}
$$

where

$$\begin{aligned} \overline{A}(z(t)) &= \sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i(z(t)) h\_j(z(t)) \overline{A}\_{ij} \\ \overline{A}\_{ij} &= \begin{bmatrix} (A\_i + \Delta A\_i) + (B\_i + \Delta B\_i) K\_j & -(B\_i + \Delta B\_i) K\_j \\ & \left(\Delta A\_i + \Delta B\_i K\_j\right) & \left(A\_i + G\_i C\_j - \Delta B\_i K\_j\right) \end{bmatrix} \end{aligned} \tag{11}$$

The main goal is, first, to find the sets of matrices *Ki* and *Gi* in order to guarantee the global asymptotic stability of the equilibrium point zero of (10) and, secondly, to design the fuzzy controller and the fuzzy observer of the augmented system (10) separately by assigning both "observer and controller poles" in desired regions in order to guarantee that the error between the state and its estimate converges faster to zero. The faster the estimation error converges to zero, the better the transient behaviour of the controlled system.
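The block structure of (11) can be checked numerically. Below, small matrices (invented for illustration, with $\Delta B\_i = 0$ for brevity) are assembled into $\overline{A}\_{ij}$ exactly as in (11):

```python
import numpy as np

# Illustrative data only
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
dA = 0.1 * np.eye(2)               # Delta_A_i
B = np.array([[0.0], [1.0]])
dB = np.zeros((2, 1))              # Delta_B_i, zero here for brevity
K = np.array([[-1.0, -2.0]])       # controller gain K_j
G = np.array([[-4.0], [-6.0]])     # observer gain G_i
C = np.array([[1.0, 0.0]])

# Augmented matrix of (11), acting on the stacked vector [x; e]
Abar = np.block([
    [(A + dA) + (B + dB) @ K, -(B + dB) @ K],
    [dA + dB @ K,             A + G @ C - dB @ K],
])
```

When $\Delta A\_i = \Delta B\_i = 0$ the lower-left block vanishes and $\overline{A}\_{ij}$ becomes block-triangular; that is the separation property which, as noted above, the uncertainties destroy.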

### **3. Main results**

Given (1), we give sufficient conditions to guarantee the global asymptotic stability of the closed loop for the augmented system (10).

**Lemma 1:** The equilibrium point zero of the augmented system described by (10) is globally asymptotically stable if there exist common positive definite matrices $P\_1$ and $P\_2$, matrices $W\_i$, $V\_j$ and positive scalars $\varepsilon\_{ij} > 0$ such that

$$\begin{aligned} \Pi\_{ii} &\le 0, \quad i = 1, \ldots, r \\ \Pi\_{ij} + \Pi\_{ji} &\le 0, \quad i < j \le r \end{aligned} \tag{12}$$

And


$$\begin{aligned} \Sigma\_{ii} \le 0, \ i = 1, \ldots, r \\ \Sigma\_{ij} + \Sigma\_{ji} \le 0, \ i < j \le r \end{aligned} \tag{13}$$

with

$$
\Pi\_{ij} = \begin{bmatrix} D\_{ij} & P\_1 E\_{ai}^t & V\_j^t E\_{bi}^t & B\_i & H\_{bi} \\ E\_{ai} P\_1 & -0.5 \varepsilon\_{ij} I & 0 & 0 & 0 \\ E\_{bi} V\_j & 0 & -0.5 \varepsilon\_{ij} I & 0 & 0 \\ B\_i^t & 0 & 0 & -\varepsilon\_{ij} I & 0 \\ H\_{bi}^t & 0 & 0 & 0 & -\varepsilon\_{ij} I \end{bmatrix}, \quad \Sigma\_{ij} = \begin{bmatrix} D\_{ij}^{*} & K\_j^t E\_{bi}^t & P\_2 H\_{ai} & P\_2 H\_{bi} & K\_j^t \\ E\_{bi} K\_j & -\varepsilon\_{ij}^{-1} I & 0 & 0 & 0 \\ H\_{ai}^t P\_2 & 0 & -\varepsilon\_{ij}^{-1} I & 0 & 0 \\ H\_{bi}^t P\_2 & 0 & 0 & -0.5 \varepsilon\_{ij}^{-1} I & 0 \\ K\_j & 0 & 0 & 0 & -\varepsilon\_{ij}^{-1} I \end{bmatrix}
$$

$$\begin{aligned} D\_{ij} &= A\_i P\_1 + P\_1 A\_i^t + B\_i V\_j + V\_j^t B\_i^t + \varepsilon\_{ij} H\_{ai} H\_{ai}^t + \varepsilon\_{ij} H\_{bi} H\_{bi}^t \\ D\_{ij}^{*} &= P\_2 A\_i + A\_i^t P\_2 + W\_i C\_j + C\_j^t W\_i^t + \varepsilon\_{ij}^{-1} K\_j^t E\_{bi}^t E\_{bi} K\_j \end{aligned}$$

**Proof:** Using theorem *7* in (Tanaka & al, 1998), property (3), the separation lemma (Shi & al, 1992) and the Schur complement (Boyd & al, 1994), the above conditions (12) and (13) hold with some changes of variables. Let us briefly explain the different steps. From (11), in order to ensure global asymptotic stability, the following sufficient condition must be verified:

$$\exists X = X^t > 0 : M\_D(\overline{A}\_{ij}, X) = \overline{A}\_{ij} X + X \overline{A}\_{ij}^t < 0 \tag{14}$$

Let $X = \begin{bmatrix} X\_{11} & 0 \\ 0 & X\_{22} \end{bmatrix}$ where $0$ is a zero matrix of appropriate dimension. From (14), we have:

$$M\_D(\overline{A}\_{ij}, X) = M\_D^1 + M\_D^2 \tag{15}$$

with $M\_D^1 = \begin{bmatrix} D\_1 & 0 \\ 0 & D\_2 \end{bmatrix}$, where

$$D\_1 = A\_i X\_{11} + X\_{11} A\_i^t + B\_i K\_j X\_{11} + X\_{11} K\_j^t B\_i^t \tag{16}$$

and

$$D\_2 = A\_i X\_{22} + X\_{22} A\_i^t + G\_i C\_j X\_{22} + X\_{22} C\_j^t G\_i^t \tag{17}$$


From (15),

$$M\_D^2 = \begin{bmatrix} \Delta\_1 & X\_{11}\Delta A\_i^t + X\_{11}K\_j^t \Delta B\_i^t - B\_i K\_j X\_{22} - \Delta B\_i K\_j X\_{22} \\\\ \Delta A\_i X\_{11} + \Delta B\_i K\_j X\_{11} - X\_{22} K\_j^t B\_i^t - X\_{22} K\_j^t \Delta B\_i^t & \Delta\_2 \end{bmatrix}$$

where $\Delta\_1 = \Delta A\_i X\_{11} + X\_{11} \Delta A\_i^t + \Delta B\_i K\_j X\_{11} + X\_{11} K\_j^t \Delta B\_i^t$ and $\Delta\_2 = -\Delta B\_i K\_j X\_{22} - X\_{22} K\_j^t \Delta B\_i^t$. From (15), we have:

$$\begin{aligned} \boldsymbol{M}\_{D}^{2} &= \boldsymbol{\Sigma}\_{1} + \boldsymbol{\Sigma}\_{2} + \boldsymbol{\Sigma}\_{3} \text{ with } \boldsymbol{\Sigma}\_{1} = \begin{bmatrix} 0 & -\boldsymbol{B}\_{i}\boldsymbol{K}\_{j}\boldsymbol{X}\_{22} - \Delta \boldsymbol{B}\_{i}\boldsymbol{K}\_{j}\boldsymbol{X}\_{22} \\ -\boldsymbol{X}\_{22}\boldsymbol{K}\_{j}^{t}\boldsymbol{B}\_{i}^{t} - \boldsymbol{X}\_{22}\boldsymbol{K}\_{j}^{t}\Delta \boldsymbol{B}\_{i}^{t} & \boldsymbol{0} \end{bmatrix}, \\ \boldsymbol{\Sigma}\_{2} &= \begin{bmatrix} 0 & \boldsymbol{X}\_{11}\Delta \boldsymbol{A}\_{i}^{t} + \boldsymbol{X}\_{11}\boldsymbol{K}\_{j}^{t}\Delta \boldsymbol{B}\_{i}^{t} \\ \Delta \boldsymbol{A}\_{i}\boldsymbol{X}\_{11} + \Delta \boldsymbol{B}\_{i}\boldsymbol{K}\_{j}\boldsymbol{X}\_{11} & \boldsymbol{0} \end{bmatrix} \text{and } \boldsymbol{\Sigma}\_{3} = \begin{bmatrix} \boldsymbol{\Delta}\_{1} & \boldsymbol{0} \\ \boldsymbol{0} & \boldsymbol{\Delta}\_{2} \end{bmatrix} \end{aligned}$$

Let $X\_{11} = P\_1$ and $X\_{22} = P\_2^{-1}$. From the previous equation and (2), we have:

$$\begin{aligned} \Sigma\_1 = {}& \begin{bmatrix} 0 & 0 \\ 0 & -P\_2^{-1} K\_j^t \end{bmatrix} \times \begin{bmatrix} 0 & 0 \\ B\_i^t & 0 \end{bmatrix} + \begin{bmatrix} 0 & B\_i \\ 0 & 0 \end{bmatrix} \times \begin{bmatrix} 0 & 0 \\ 0 & -K\_j P\_2^{-1} \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & -P\_2^{-1} K\_j^t E\_{bi}^t \end{bmatrix} \times \begin{bmatrix} 0 & 0 \\ \Delta\_{bi}^t H\_{bi}^t & 0 \end{bmatrix} \\ & + \begin{bmatrix} 0 & H\_{bi} \Delta\_{bi} \\ 0 & 0 \end{bmatrix} \times \begin{bmatrix} 0 & 0 \\ 0 & -E\_{bi} K\_j P\_2^{-1} \end{bmatrix} \end{aligned} \tag{18}$$

And,

$$\begin{aligned} \Sigma\_2 = {}& \begin{bmatrix} 0 & 0 \\ H\_{ai} \Delta\_{ai} & 0 \end{bmatrix} \times \begin{bmatrix} E\_{ai} P\_1 & 0 \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} P\_1 E\_{ai}^t & 0 \\ 0 & 0 \end{bmatrix} \times \begin{bmatrix} 0 & \Delta\_{ai}^t H\_{ai}^t \\ 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ H\_{bi} \Delta\_{bi} & 0 \end{bmatrix} \times \begin{bmatrix} E\_{bi} K\_j P\_1 & 0 \\ 0 & 0 \end{bmatrix} \\ & + \begin{bmatrix} P\_1 K\_j^t E\_{bi}^t & 0 \\ 0 & 0 \end{bmatrix} \times \begin{bmatrix} 0 & \Delta\_{bi}^t H\_{bi}^t \\ 0 & 0 \end{bmatrix} \end{aligned} \tag{19}$$

And finally:

$$\begin{aligned} \Sigma\_3 = {}& \begin{bmatrix} H\_{ai} \Delta\_{ai} & H\_{bi} \Delta\_{bi} \\ 0 & 0 \end{bmatrix} \times \begin{bmatrix} E\_{ai} P\_1 & 0 \\ E\_{bi} K\_j P\_1 & 0 \end{bmatrix} + \begin{bmatrix} P\_1 E\_{ai}^t & P\_1 K\_j^t E\_{bi}^t \\ 0 & 0 \end{bmatrix} \times \begin{bmatrix} \Delta\_{ai}^t H\_{ai}^t & 0 \\ \Delta\_{bi}^t H\_{bi}^t & 0 \end{bmatrix} \\ & + \begin{bmatrix} 0 & 0 \\ 0 & -H\_{bi} \Delta\_{bi} \end{bmatrix} \times \begin{bmatrix} 0 & 0 \\ 0 & E\_{bi} K\_j P\_2^{-1} \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & P\_2^{-1} K\_j^t E\_{bi}^t \end{bmatrix} \times \begin{bmatrix} 0 & 0 \\ 0 & -\Delta\_{bi}^t H\_{bi}^t \end{bmatrix} \end{aligned} \tag{20}$$

From (18), (19) and (20) and by using the separation lemma (Shi & al, 1992), we finally obtain:

$$M\_D^2 \le \begin{bmatrix} T\_1 & 0 \\ 0 & T\_2 \end{bmatrix} \tag{21}$$

Where:

$$\begin{split} T\_{1} = {}& \varepsilon\_{ij}^{-1} B\_i B\_i^t + \varepsilon\_{ij}^{-1} H\_{bi} \Delta\_{bi} \Delta\_{bi}^t H\_{bi}^t + \varepsilon\_{ij}^{-1} P\_1 E\_{ai}^t E\_{ai} P\_1 + \varepsilon\_{ij}^{-1} P\_1 K\_j^t E\_{bi}^t E\_{bi} K\_j P\_1 \\ & + \varepsilon\_{ij} H\_{ai} \Delta\_{ai} \Delta\_{ai}^t H\_{ai}^t + \varepsilon\_{ij} H\_{bi} \Delta\_{bi} \Delta\_{bi}^t H\_{bi}^t + \varepsilon\_{ij}^{-1} P\_1 E\_{ai}^t E\_{ai} P\_1 + \varepsilon\_{ij}^{-1} P\_1 K\_j^t E\_{bi}^t E\_{bi} K\_j P\_1 \end{split}$$

and


$$\begin{split} T\_{2} = {}& \varepsilon\_{ij} P\_2^{-1} K\_j^t K\_j P\_2^{-1} + \varepsilon\_{ij} P\_2^{-1} K\_j^t E\_{bi}^t E\_{bi} K\_j P\_2^{-1} + \varepsilon\_{ij} H\_{ai} \Delta\_{ai} \Delta\_{ai}^t H\_{ai}^t \\ & + \varepsilon\_{ij} H\_{bi} \Delta\_{bi} \Delta\_{bi}^t H\_{bi}^t + \varepsilon\_{ij} H\_{bi} \Delta\_{bi} \Delta\_{bi}^t H\_{bi}^t + \varepsilon\_{ij}^{-1} P\_2^{-1} K\_j^t E\_{bi}^t E\_{bi} K\_j P\_2^{-1} \end{split}$$

From (15), (16), (17) and (21), we have:

$$M\_D(\overline{A}, X) \le \begin{bmatrix} D\_1 + T\_1 & 0 \\ 0 & D\_2 + T\_2 \end{bmatrix} = \begin{bmatrix} R\_1 & 0 \\ 0 & R\_2 \end{bmatrix} \tag{22}$$

In order to verify (14), we must have:

$$
\begin{bmatrix} R\_1 & 0 \\ 0 & R\_2 \end{bmatrix} < 0 \tag{23}
$$

Which implies:

$$\begin{cases} R\_1 < 0\\ R\_2 < 0 \end{cases} \tag{24}$$

First, from (24), by using (3), the Schur complement (Boyd & al, 1994) and the introduction of the new variable $V\_j = K\_j P\_1$:

$$R\_1 < 0 \Leftrightarrow \begin{bmatrix} D\_{ij} & P\_1 E\_{ai}^t & V\_j^t E\_{bi}^t & B\_i & H\_{bi} \\ E\_{ai} P\_1 & -0.5 \varepsilon\_{ij} I & 0 & 0 & 0 \\ E\_{bi} V\_j & 0 & -0.5 \varepsilon\_{ij} I & 0 & 0 \\ B\_i^t & 0 & 0 & -\varepsilon\_{ij} I & 0 \\ H\_{bi}^t & 0 & 0 & 0 & -\varepsilon\_{ij} I \end{bmatrix} < 0 \tag{25}$$

where $I$ is always the identity matrix of appropriate dimension and $D\_{ij} = A\_i P\_1 + P\_1 A\_i^t + B\_i V\_j + V\_j^t B\_i^t + \varepsilon\_{ij} H\_{ai} H\_{ai}^t + \varepsilon\_{ij} H\_{bi} H\_{bi}^t$.

Then, from (24), by using (3), the Schur complement (Boyd & al, 1994) and the introduction of the new variable $W\_i = P\_2 G\_i$:

$$R\_2 < 0 \Leftrightarrow \begin{bmatrix} D\_{ij}^{*} & K\_j^t E\_{bi}^t & P\_2 H\_{ai} & P\_2 H\_{bi} & K\_j^t \\ E\_{bi} K\_j & -\varepsilon\_{ij}^{-1} I & 0 & 0 & 0 \\ H\_{ai}^t P\_2 & 0 & -\varepsilon\_{ij}^{-1} I & 0 & 0 \\ H\_{bi}^t P\_2 & 0 & 0 & -0.5 \varepsilon\_{ij}^{-1} I & 0 \\ K\_j & 0 & 0 & 0 & -\varepsilon\_{ij}^{-1} I \end{bmatrix} < 0 \tag{26}$$

where $D\_{ij}^{*} = P\_2 A\_i + A\_i^t P\_2 + W\_i C\_j + C\_j^t W\_i^t + \varepsilon\_{ij}^{-1} K\_j^t E\_{bi}^t E\_{bi} K\_j$.

Thus, conditions (12) and (13) follow for all *i, j* from (25) and (26) by using theorem *7* in (Tanaka & al, 1998), which is necessary for LMI relaxations.

**Remark 1:** In lemma *1*, the positive scalars $\varepsilon\_{ij}$ are optimised, unlike in (Han & al, 2000), (Lee & al, 2001), (Tong & Li, 2002) and (Chadli & El Hajjaji, 2006). We do not actually need to impose them to solve the set of LMIs. The conditions are thus less restrictive.

**Remark 2:** Note that it is a two-step procedure which allows us to design the controller and the observer separately. First, we solve (12) for the decision variables $(P\_1, V\_j, \varepsilon\_{ij})$ and, secondly, we solve (13) for the decision variables $(P\_2, W\_i)$ by using the results of the first step. Furthermore, the controller and observer gains are given by $K\_j = V\_j P\_1^{-1}$ and $G\_i = P\_2^{-1} W\_i$, respectively, for $i, j = 1, 2, \dots, r$.

**Remark 3:** From lemma *1* and (10), the location of the poles associated with the state dynamics and with the estimation error dynamics is unknown. However, since the design algorithm is a two-step procedure, we can impose two pole placements separately, the first one for the state and the second one for the estimation error. In the following, we focus on the robust pole placement.

We hereafter give sufficient conditions to ensure the desired pole placements by extending the LMI conditions of (Chilali & Gahinet, 1996) and (Chilali & al, 1999) to the case of uncertain T-S fuzzy systems with unavailable state variables. Let us recall the definition of an LMI region and pole placement LMI constraints.

**Definition 1 (**Boyd & al, 1994**):** A subset *D* of the complex plane is called an LMI region if there exist a symmetric matrix $\alpha = [\alpha\_{kl}] \in \mathbb{R}^{m \times m}$ and a matrix $\beta = [\beta\_{kl}] \in \mathbb{R}^{m \times m}$ such that:

$$D = \left\{ z \in \mathbb{C} : f\_D(z) = \alpha + \beta z + \beta^t \overline{z} < 0 \right\} \tag{27}$$

**Definition 2 (**Chilali and Gahinet, 1996**):** Let *D* be a subregion of the left-half plane. A dynamical system described by $\dot{x} = Ax$ is called *D*-stable if all its poles lie in *D*. By extension, $A$ is then called *D*-stable.

From the two previous definitions, the following theorem is given.

**Theorem 1 (**Chilali and Gahinet, 1996**):** Matrix $A$ is *D*-stable if and only if there exists a symmetric matrix $X > 0$ such that

$$M\_D(A, X) = \alpha \otimes X + \beta \otimes AX + \beta^t \otimes XA^t < 0 \tag{28}$$

where $\otimes$ denotes the Kronecker product.

From (10) and (11), let us define $T\_{ij} = (A\_i + \Delta A\_i) + (B\_i + \Delta B\_i)K\_j$ and $S\_{ij} = A\_i + G\_i C\_j - \Delta B\_i K\_j$. We hereafter give sufficient conditions to guarantee that $\sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i(z(t)) h\_j(z(t)) T\_{ij}$ and $\sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i(z(t)) h\_j(z(t)) S\_{ij}$ are $D\_T$-stable and $D\_S$-stable respectively, in order to impose the dynamics of the state and the dynamics of the estimation error.

**Lemma 2:** Matrix $\sum\_{i=1}^{r} \sum\_{j=1}^{r} h\_i(z(t)) h\_j(z(t)) T\_{ij}$ is $D\_T$-stable if and only if there exist a symmetric matrix $P\_1 > 0$ and positive scalars $\mu\_{ij} > 0$ such that

$$\begin{aligned} \Omega\_{ii} &\le 0, \quad i = 1, \dots, r, \\ \Omega\_{ij} + \Omega\_{ji} &\le 0, \quad i < j \le r. \end{aligned} \tag{29}$$

With

$$\Omega\_{ij} = \begin{bmatrix} \Xi\_{ij} + \mu\_{ij} \left(\beta \beta^t\right) \otimes \left(H\_{ai} H\_{ai}^t + H\_{bi} H\_{bi}^t\right) & I \otimes \left(P\_1 E\_{ai}^t\right) & I \otimes \left(V\_j^t E\_{bi}^t\right) \\ I \otimes \left(E\_{ai} P\_1\right) & -\mu\_{ij} I & 0 \\ I \otimes \left(E\_{bi} V\_j\right) & 0 & -\mu\_{ij} I \end{bmatrix} \tag{30}$$

where

$$\Xi\_{ij} = \alpha \otimes P\_1 + \beta \otimes \left(A\_i P\_1 + B\_i V\_j\right) + \beta^t \otimes \left(P\_1 A\_i^t + V\_j^t B\_i^t\right) \tag{31}$$

**Proof:** Using theorem *1*, matrix $T\_{ij}$ is $D\_T$-stable if and only if there exists a symmetric matrix $X > 0$ such that:

$$M\_{D\_T}(T\_{ij}, X) = \alpha \otimes X + \beta \otimes T\_{ij} X + \beta^t \otimes X T\_{ij}^t < 0 \tag{32}$$

With (2), this expands to:

$$\begin{aligned} M\_{D\_T}(T\_{ij}, X) = {}& \alpha \otimes X + \beta \otimes A\_i X + \beta^t \otimes X A\_i^t + \beta \otimes B\_i K\_j X + \beta^t \otimes X K\_j^t B\_i^t \\ & + \beta \otimes H\_{ai} \Delta\_{ai} E\_{ai} X + \beta^t \otimes X E\_{ai}^t \Delta\_{ai}^t H\_{ai}^t + \beta \otimes H\_{bi} \Delta\_{bi} E\_{bi} K\_j X + \beta^t \otimes X K\_j^t E\_{bi}^t \Delta\_{bi}^t H\_{bi}^t \end{aligned} \tag{33}$$

Let $X = P\_1$ and $V\_j = K\_j P\_1$:

$$M\_{D\_T}(T\_{ij}, P\_1) = \Xi\_{ij} + \beta \otimes H\_{ai} \Delta\_{ai} E\_{ai} P\_1 + \beta^t \otimes P\_1 E\_{ai}^t \Delta\_{ai}^t H\_{ai}^t + \beta \otimes H\_{bi} \Delta\_{bi} E\_{bi} V\_j + \beta^t \otimes V\_j^t E\_{bi}^t \Delta\_{bi}^t H\_{bi}^t \tag{34}$$

Using the separation lemma (Shi & al, 1992) and (3), we obtain:

$$M\_{D\_T}(T\_{ij}, P\_1) \le \Xi\_{ij} + \mu\_{ij} \left(\beta \beta^t\right) \otimes \left(H\_{ai} H\_{ai}^t\right) + \mu\_{ij}^{-1} I \otimes \left(P\_1 E\_{ai}^t E\_{ai} P\_1\right) + \mu\_{ij} \left(\beta \beta^t\right) \otimes \left(H\_{bi} H\_{bi}^t\right) + \mu\_{ij}^{-1} I \otimes \left(V\_j^t E\_{bi}^t E\_{bi} V\_j\right) \tag{35}$$

Thus, matrix $T\_{ij}$ is $D\_T$-stable if:

$$\Xi\_{ij} + \mu\_{ij} \left(\beta \beta^t\right) \otimes \left(H\_{ai} H\_{ai}^t + H\_{bi} H\_{bi}^t\right) + \mu\_{ij}^{-1} I \otimes \left(P\_1 E\_{ai}^t E\_{ai} P\_1 + V\_j^t E\_{bi}^t E\_{bi} V\_j\right) < 0 \tag{36}$$

Conditions (29) then follow by applying the Schur complement (Boyd & al, 1994) and theorem *7* in (Tanaka & al, 1998).

*h zt h zt T*

*i j ij*

and secondly,

<sup>−</sup> = ,

ε

ε

Where \* 1 2 2

**Remark** *1***:** In lemma *1*, the positive scalars *ij*

region and pole placement LMI constraints.

extension, A is then called *D*-stable.

symmetric matrix 0 *X* > such as

( ( )) ( ( ))

*h zt h zt S*

**Lemma 2:** Matrix

*i j ij*

1 1

*i j*

= =

*r r*

where ⊗ denotes the Kronecker product.

1 1

*i j*

matrix 1*P* > 0 and positive scalars 0

= =

*r r*

there exists a symmetric matrix [ ] *m m*

α α

From the two previous definitions, the following theorem is given.

(,) 0 *t t M A X X AX XA <sup>D</sup>* =

We hereafter give sufficient conditions to guarantee that

dynamics of the state and the dynamics of the estimation error.

( ( )) ( ( ))

*h zt h zt T*

*i j ij*

μ

respectively, for *ij r* , 1,2,..., . =

the robust pole placement.

*<sup>t</sup> t t tt D P A AP WC C W K E E K ij ii i j j i ij j bi bi <sup>j</sup>*

(Tanaka & al, 1998) which is necessary for LMI relaxations.

them to solve the set of LMIs. The conditions are thus less restrictive.

the observer separately. First, we solve (12) for decision variables 1 (, ,) *P Kj ij*

Furthermore, the controller and observer gains are given by: <sup>1</sup> *G PW i i* <sup>2</sup>

<sup>−</sup> =++ + +

$$\begin{aligned} \Omega_{ij} &= \begin{pmatrix} E_{ij} & \left(\beta^t \otimes P_1 E_{ai}^t\right) & \left(\beta^t \otimes V_j^t E_{bi}^t\right) \\ \left(\beta \otimes E_{ai} P_1\right) & -\mu_{ij} I & 0 \\ \left(\beta \otimes E_{bi} V_j\right) & 0 & -\mu_{ij} I \end{pmatrix} \\ E_{ij} &= \xi_{ij} + \mu_{ij} \left(I \otimes H_{ai} H_{ai}^t\right) + \mu_{ij} \left(I \otimes H_{bi} H_{bi}^t\right) \\ \xi_{ij} &= \alpha \otimes P_1 + \beta \otimes A_i P_1 + \beta^t \otimes P_1 A_i^t + \beta \otimes B_i V_j + \beta^t \otimes V_j^t B_i^t \\ V_j &= K_j P_1 \end{aligned} \tag{30}$$

**Proof:** Using theorem *1*, matrix *Tij* is *DT*-stable if and only if there exists a symmetric matrix *X* > 0 such that:

$$M_{D_T}(T_{ij}, X) = \alpha \otimes X + \beta \otimes T_{ij}X + \beta^t \otimes XT_{ij}^{t} < 0 \tag{31}$$

$$\begin{split} M_{D_T}(T_{ij},X) &= \alpha \otimes X + \beta \otimes A_i X + \beta^t \otimes X A_i^t + \beta \otimes B_i K_j X + \beta^t \otimes X K_j^t B_i^t + \beta \otimes H_{ai} \Delta_{ai} E_{ai} X \\ &\quad + \beta^t \otimes X E_{ai}^t \Delta_{ai}^t H_{ai}^t + \beta \otimes H_{bi} \Delta_{bi} E_{bi} K_j X + \beta^t \otimes X K_j^t E_{bi}^t \Delta_{bi}^t H_{bi}^t \end{split} \tag{32}$$

Let $X = P_1$ and $V_j = K_j P_1$:

$$\begin{split} M_{D_T}(T_{ij},X) &= \xi_{ij} + (I \otimes H_{ai} \Delta_{ai})(\beta \otimes E_{ai}P_1) + (\beta^{t} \otimes P_1 E_{ai}^{t})(I \otimes \Delta_{ai}^{t} H_{ai}^{t}) + (I \otimes H_{bi} \Delta_{bi})(\beta \otimes E_{bi} V_j) \\ &\quad + (\beta^{t} \otimes V_{j}^{t} E_{bi}^{t})(I \otimes \Delta_{bi}^{t} H_{bi}^{t}) \end{split} \tag{33}$$

where

$$\xi_{ij} = \alpha \otimes P_1 + \beta \otimes A_i P_1 + \beta^t \otimes P_1 A_i^t + \beta \otimes B_i V_j + \beta^t \otimes V_j^t B_i^t \tag{34}$$

Using the separation lemma (Shi & al, 1992) and (3), we obtain:

$$\begin{split} M_{D_T}(T_{ij}, X) \le & \ \xi_{ij} + \mu_{ij}(I \otimes H_{ai} H_{ai}^{t}) + \mu_{ij}^{-1}(\beta^{t} \otimes P_1 E_{ai}^{t})(\beta \otimes E_{ai} P_1) \\ & + \mu_{ij}(I \otimes H_{bi} H_{bi}^{t}) + \mu_{ij}^{-1}(\beta^{t} \otimes V_{j}^{t} E_{bi}^{t})(\beta \otimes E_{bi} V_{j}) \end{split} \tag{35}$$

Thus, matrix *Tij* is *DT*-stable if:

$$\begin{aligned} \xi_{ij} &+ \mu_{ij}(I \otimes H_{ai} H_{ai}^{t}) + \mu_{ij}(I \otimes H_{bi} H_{bi}^{t}) + \mu_{ij}^{-1}(\beta^{t} \otimes P_1 E_{ai}^{t})(\beta \otimes E_{ai} P_1) \\ &+ \mu_{ij}^{-1}(\beta^{t} \otimes V_{j}^{t} E_{bi}^{t})(\beta \otimes E_{bi} V_{j}) \prec 0 \end{aligned} \tag{36}$$

where, of course, $\mu_{ij} \in \mathbb{R}^{+*}$, $\forall i, j$.

Observer-Based Robust Control of Uncertain Fuzzy Models with Pole Placement Constraints 49

By using the Schur's complement (Boyd & al, 1994),

$$\begin{pmatrix} E_{ij} & \left(\beta^{t}\otimes P_1 E_{ai}^{t}\right) & \left(\beta^{t}\otimes V_{j}^{t} E_{bi}^{t}\right) \\ \left(\beta\otimes E_{ai} P_1\right) & -\mu_{ij}I & 0 \\ \left(\beta\otimes E_{bi} V_j\right) & 0 & -\mu_{ij}I \end{pmatrix} \prec 0, \tag{37}$$

$$E_{ij} = \xi_{ij} + \mu_{ij}\big(I\otimes H_{ai}H_{ai}^{t}\big) + \mu_{ij}\big(I\otimes H_{bi}H_{bi}^{t}\big).$$

Thus, conditions (29) easily yield for all *i, j*.
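As a concrete numerical illustration of Theorem *1* (a sketch, not taken from the chapter), the code below certifies *D*-stability for the simplest LMI region, the shifted half-plane $D = \{z : \mathrm{Re}(z) < -a\}$, for which $\alpha = [2a]$ and $\beta = [1]$. The matrix $A$ and the shift $a$ are made-up values.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Shifted half-plane D = {z : Re(z) < -a}: alpha = [2a], beta = [1], so
# M_D(A, X) = 2a*X + A X + X A^t (Theorem 1).  A and a are made up.
a = 1.0
A = np.array([[-3.0, 1.0],
              [0.0, -2.0]])   # eigenvalues -3 and -2, both to the left of -a

# For the Hurwitz shifted matrix A + a I, the Lyapunov equation
# (A + a I) X + X (A + a I)^t = -I has a unique solution X > 0, and then
# M_D(A, X) = (A + a I) X + X (A + a I)^t = -I < 0, certifying D-stability.
X = solve_continuous_lyapunov(A + a * np.eye(2), -np.eye(2))
M_D = 2 * a * X + A @ X + X @ A.T

print(np.linalg.eigvalsh(X))    # all positive -> X is symmetric positive definite
print(np.linalg.eigvalsh(M_D))  # all negative -> A is D-stable by Theorem 1
```

For richer regions (strips, disks, conic sectors), $\alpha$ and $\beta$ become $m \times m$ matrices and $M_D$ is assembled with `numpy.kron`, but the certificate has the same form.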

**Lemma 3:** Matrix $\sum_{i=1}^{r}\sum_{j=1}^{r} h_i(z(t))h_j(z(t))S_{ij}$ is $D_S$-stable if and only if there exist a symmetric matrix $P_2 > 0$, matrices $W_i$, $K_j$ and positive scalars $\lambda_{ij} > 0$ such that

$$\begin{aligned} \Phi\_{ii} \le 0, \ i = 1, \ldots, r \\ \Phi\_{ij} + \Phi\_{ji} \le 0, \ i < j \le r \end{aligned} \tag{38}$$

with

$$\begin{aligned} \Phi_{ij} &= \begin{pmatrix} R_{ij} + \lambda_{ij} (\beta^t \otimes K_j^t E_{bi}^t) (\beta \otimes E_{bi} K_j) & I \otimes P_2 H_{bi} \\ I \otimes H_{bi}^t P_2 & -\lambda_{ij} I \end{pmatrix} \\ R_{ij} &= \alpha \otimes P_2 + \beta \otimes P_2 A_i + \beta^t \otimes A_i^t P_2 + \beta \otimes W_i C_j + \beta^t \otimes C_j^t W_i^t \\ W_i &= P_2 G_i \end{aligned} \tag{39}$$

**Proof:** The same arguments as previously can be used to prove this lemma. Let:

$$\begin{split} M_{D_S}(S_{ij},X) &= \alpha \otimes X + \beta \otimes A_{i}X + \beta^{t} \otimes X A_{i}^{t} + \beta \otimes G_{i}C_{j}X + \beta^{t} \otimes X C_{j}^{t} G_{i}^{t} \\ &\quad - (\beta^{t} \otimes X K_{j}^{t} E_{bi}^{t})(I \otimes \Delta_{bi}^{t} H_{bi}^{t}) - (I \otimes H_{bi} \Delta_{bi})(\beta \otimes E_{bi} K_{j} X) < 0 \end{split} \tag{40}$$

Using the separation lemma (Shi & al, 1992) and pre- and post-multiplying by $I \otimes X^{-1}$, we obtain:

$$\begin{aligned} &\alpha\otimes X^{-1}+\beta\otimes(X^{-1}A_{i})+\beta^{t}\otimes(A_{i}^{t}X^{-1})+\beta\otimes(X^{-1}G_{i}C_{j})+\beta^{t}\otimes(C_{j}^{t}G_{i}^{t}X^{-1}) \\ &+\lambda_{ij}(\beta^{t}\otimes K_{j}^{t}E_{bi}^{t})(\beta\otimes E_{bi}K_{j})+\lambda_{ij}^{-1}(I\otimes X^{-1}H_{bi})(I\otimes H_{bi}^{t}X^{-1})<0 \end{aligned} \tag{41}$$

where, of course, $\lambda_{ij} \in \mathbb{R}^{+*}$, $\forall i, j$.

Thus, by using the Schur's complement (Boyd & al, 1994) as well as by defining $P_2 = X^{-1}$:

$$\Phi_{ij} = \begin{pmatrix} \alpha \otimes P_2 + \beta \otimes P_2 A_i + \beta^t \otimes A_i^t P_2 + \beta \otimes P_2 G_i C_j + \beta^t \otimes C_j^t G_i^t P_2 + \lambda_{ij} (\beta^t \otimes K_j^t E_{bi}^t) (\beta \otimes E_{bi} K_j) & I \otimes P_2 H_{bi} \\ I \otimes H_{bi}^t P_2 & -\lambda_{ij} I \end{pmatrix} < 0 \tag{42}$$

By using $W_i = X^{-1}G_i$, conditions (38) easily yield for all *i, j*. This completes the proof of the lemma.

**Remark 4:** Any kind of LMI region (disk, vertical strip, conic sector) may be easily used for $D_S$ and $D_T$.
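To make the vertical-strip case of Remark *4* concrete, here is a minimal sketch (with made-up numbers) of the region used later in the numerical example: for $D = \{z : -h < \mathrm{Re}(z) < -l\}$, Definition *1* holds with $\alpha = \mathrm{diag}(2l, -2h)$ and $\beta = \mathrm{diag}(1, -1)$, and pole membership can be checked by evaluating $f_D(z)$.

```python
import numpy as np

# Vertical strip D = {z : -h < Re(z) < -l} as an LMI region (Definition 1),
# with alpha = diag(2l, -2h), beta = diag(1, -1); l and h are assumed values.
l, h = 1.0, 6.0
alpha = np.diag([2 * l, -2 * h])
beta = np.diag([1.0, -1.0])

def in_region(z):
    """True if f_D(z) = alpha + beta*z + beta^t*conj(z) is negative definite."""
    f = alpha + beta * z + beta.T * np.conj(z)
    return bool(np.all(np.linalg.eigvalsh(f.real) < 0))

print(in_region(-2.0 + 3.0j))  # inside the strip
print(in_region(-0.5))         # to the right of -l
print(in_region(-7.0))         # to the left of -h
```

Since $\beta$ is diagonal here, $f_D(z)$ reduces to $\mathrm{diag}(2l + 2\,\mathrm{Re}(z),\ -2h - 2\,\mathrm{Re}(z))$, so negative definiteness is exactly the strip condition.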

From lemma *2* and lemma *3*, we have imposed the dynamics of the state as well as the dynamics of the estimation error. But from (10), the estimation error dynamics depend on the state. If the state dynamics are slow, we will have a slow convergence of the estimation error to the equilibrium point zero in spite of its own fast dynamics. So in this paper, we add an algorithm using the *H*∞ approach to ensure that the estimation error converges faster to the equilibrium point zero.

We know from (10) that:


$$\begin{aligned} \dot{e}(t) &= \sum_{i=1}^{r} \sum_{j=1}^{r} h_i(z(t)) h_j(z(t)) \Big( A_i + G_i C_j - \Delta B_i K_j \Big) e(t) \\ &+ \sum_{i=1}^{r} \sum_{j=1}^{r} h_i(z(t)) h_j(z(t)) \Big( \Delta A_i + \Delta B_i K_j \Big) x(t) \end{aligned} \tag{43}$$

This equation is equivalent to the following system:

$$\begin{bmatrix} \dot{e} \\ e \end{bmatrix} = \sum_{i=1}^{r} \sum_{j=1}^{r} h_i(z(t)) h_j(z(t)) \begin{bmatrix} A_i + G_i C_j - \Delta B_i K_j & \Delta A_i + \Delta B_i K_j \\ I & 0 \end{bmatrix} \begin{bmatrix} e \\ x \end{bmatrix} \tag{44}$$

The objective is to minimize the $L_2$ gain from $x(t)$ to $e(t)$ in order to guarantee that the error between the state and its estimation converges faster to zero. Thus, we define the following *H*∞ performance criterion under zero initial conditions:

$$\int\_{0}^{\infty} \{e^{t}(t)e(t) - \gamma^{2}x^{t}(t)x(t)\} dt < 0\tag{45}$$

where $\gamma \in \mathbb{R}^{+*}$ has to be minimized. Note that the signal $x(t)$ is square integrable because of lemma *1*.
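For intuition about the quantity being minimized (using made-up matrices, not the chapter's), the $L_2$ gain in (45) of a linear time-invariant error system $\dot{e} = Se + Nx$ with output $e$ is its $H_\infty$ norm, which can be estimated by a frequency sweep of $\sigma_{\max}\big((j\omega I - S)^{-1}N\big)$:

```python
import numpy as np

# Toy illustration of the L2 gain in (45): for the LTI error system
#   e'(t) = S e(t) + N x(t),  output e(t),
# the L2 gain from x to e is sup_w sigma_max((jw I - S)^{-1} N),
# estimated here on a frequency grid.  S and N are made-up matrices.
S = np.array([[-5.0, 1.0],
              [0.0, -4.0]])   # assumed Hurwitz error dynamics
N = np.array([[0.5, 0.0],
              [0.2, 0.3]])    # assumed coupling from the state x

ws = np.logspace(-2, 3, 2000)
gains = [np.linalg.svd(np.linalg.inv(1j * w * np.eye(2) - S) @ N, compute_uv=False)[0]
         for w in ws]
gamma = max(gains)
print(f"estimated L2 gain: {gamma:.4f}")
```

A smaller value of this gain means the estimation error is less excited by the state trajectory, which is exactly why the chapter minimizes $\gamma$.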

We give the following lemma to satisfy the *H*∞ performance.

**Lemma 4:** If there exist a symmetric positive definite matrix $P_2$, matrices $W_i$ and positive scalars $\gamma > 0$, $\beta_{ij} > 0$ such that

$$\begin{aligned} \Gamma_{ii} \le 0, \ i = 1, \ldots, r \\ \Gamma_{ij} + \Gamma_{ji} \le 0, \ i < j \le r \end{aligned} \tag{46}$$

With

$$\Gamma_{ij} = \begin{bmatrix} Z_{ij} & P_2 H_{bi} & P_2 H_{ai} & -\beta_{ij} K_j^t E_{bi}^t E_{bi} K_j \\ H_{bi}^t P_2 & -\beta_{ij} I & 0 & 0 \\ H_{ai}^t P_2 & 0 & -\beta_{ij} I & 0 \\ -\beta_{ij} K_j^t E_{bi}^t E_{bi} K_j & 0 & 0 & U_{ij} \end{bmatrix}$$

where

$$\begin{aligned} Z_{ij} &= P_2 A_i + A_i^t P_2 + W_i C_j + C_j^t W_i^t + I + \beta_{ij} K_j^t E_{bi}^t E_{bi} K_j \\ U_{ij} &= -\gamma^2 I + \beta_{ij} K_j^t E_{bi}^t E_{bi} K_j + \beta_{ij} E_{ai}^t E_{ai} \end{aligned}$$

Then, the dynamic system:

$$\begin{bmatrix} \dot{e} \\ e \end{bmatrix} = \sum_{i=1}^{r} \sum_{j=1}^{r} h_i(z(t)) h_j(z(t)) \begin{bmatrix} A_i + G_i C_j - \Delta B_i K_j & \Delta A_i + \Delta B_i K_j \\ I & 0 \end{bmatrix} \begin{bmatrix} e \\ x \end{bmatrix} \tag{47}$$

satisfies the *H*∞ performance (45) with an $L_2$ gain less than or equal to $\gamma$.

**Proof:** Applying the bounded real lemma (Boyd & al, 1994), the system described by the following dynamics:

$$
\dot{e}(t) = \left(A\_i + G\_i C\_j - \Delta B\_i K\_j\right) e(t) + \left(\Delta A\_i + \Delta B\_i K\_j\right) x(t) \tag{48}
$$

satisfies the *H*∞ performance corresponding to the $L_2$ gain $\gamma$ if and only if there exists $P_2 = P_2^T > 0$ such that:

$$\begin{aligned} &\left( A_i + G_i C_j - \Delta B_i K_j \right)^t P_2 + P_2 \left( A_i + G_i C_j - \Delta B_i K_j \right) \\ &+ P_2 \left( \Delta A_i + \Delta B_i K_j \right) \left( \gamma^2 I \right)^{-1} \left( \Delta A_i + \Delta B_i K_j \right)^t P_2 + I \prec 0 \end{aligned} \tag{49}$$

Using the Schur's complement (Boyd & al, 1994) yields

$$\underbrace{\begin{bmatrix} J_{ij} & P_2 \Delta A_i + P_2 \Delta B_i K_j \\ \Delta A_i^t P_2 + K_j^t \Delta B_i^t P_2 & -\gamma^2 I \end{bmatrix}}_{\Theta_{ij}} \prec 0 \tag{50}$$

where

$$J_{ij} = P_2 A_i + A_i^t P_2 + P_2 G_i C_j + C_j^t G_i^t P_2 - P_2 \Delta B_i K_j - K_j^t \Delta B_i^t P_2 + I \tag{51}$$

We get:

$$\Theta_{ij} = \begin{bmatrix} P_2 A_i + A_i^t P_2 + P_2 G_i C_j + C_j^t G_i^t P_2 + I & 0 \\ 0 & -\gamma^2 I \end{bmatrix} + \underbrace{\begin{bmatrix} -P_2 \Delta B_i K_j - K_j^t \Delta B_i^t P_2 & P_2 \Delta A_i + P_2 \Delta B_i K_j \\ \Delta A_i^t P_2 + K_j^t \Delta B_i^t P_2 & 0 \end{bmatrix}}_{\Delta_{ij}} \tag{52}$$

Using the separation lemma (Shi & al, 1992) yields

$$\Delta_{ij} \le \beta_{ij} \begin{bmatrix} K_j^t E_{bi}^t E_{bi} K_j & -K_j^t E_{bi}^t E_{bi} K_j \\ -K_j^t E_{bi}^t E_{bi} K_j & K_j^t E_{bi}^t E_{bi} K_j + E_{ai}^t E_{ai} \end{bmatrix} + \beta_{ij}^{-1} \begin{bmatrix} P_2 H_{bi} \Delta_{bi} \Delta_{bi}^t H_{bi}^t P_2 + P_2 H_{ai} \Delta_{ai} \Delta_{ai}^t H_{ai}^t P_2 & 0 \\ 0 & 0 \end{bmatrix} \tag{53}$$

With substitution into $\Theta_{ij}$ and the variable change $W_i = P_2 G_i$, this yields

$$\Theta_{ij} \le \begin{bmatrix} Q_{ij} & -\beta_{ij} K_j^t E_{bi}^t E_{bi} K_j \\ -\beta_{ij} K_j^t E_{bi}^t E_{bi} K_j & -\gamma^2 I + \beta_{ij} K_j^t E_{bi}^t E_{bi} K_j + \beta_{ij} E_{ai}^t E_{ai} \end{bmatrix} \tag{54}$$

where

$$\begin{split} \mathbf{Q}\_{\text{ij}} &= \mathbf{R}\_{\text{ij}} + \boldsymbol{\beta}\_{\text{ij}}^{-1} \mathbf{P}\_{\text{2}} \mathbf{H}\_{\text{bi}} \Delta\_{\text{bi}} \Delta\_{\text{bi}}^{\text{t}} \mathbf{H}\_{\text{bi}}^{\text{t}} \mathbf{P}\_{\text{2}} + \boldsymbol{\varepsilon}\_{\text{ij}}^{-1} \mathbf{P}\_{\text{2}} \mathbf{H}\_{\text{ai}} \Delta\_{\text{ai}} \mathbf{A}\_{\text{ai}}^{\text{t}} \mathbf{H}\_{\text{ai}}^{\text{t}} \mathbf{P}\_{2}, \\ \mathbf{R}\_{\text{ij}} &= \mathbf{P}\_{2} \mathbf{A}\_{\text{i}} + \mathbf{A}\_{\text{i}}^{\text{t}} \mathbf{P}\_{2} + \mathbf{W}\_{\text{i}} \mathbf{C}\_{\text{j}} + \mathbf{C}\_{\text{j}}^{\text{t}} \mathbf{W}\_{\text{i}}^{\text{t}} + \mathbf{I} + \boldsymbol{\beta}\_{\text{ij}} \mathbf{K}\_{\text{j}}^{\text{t}} \mathbf{E}\_{\text{bi}}^{\text{t}} \mathbf{E}\_{\text{bi}} \mathbf{K}\_{\text{j}}. \end{split} \tag{55}$$

Thus, from the following condition

$$\begin{bmatrix} Q_{ij} & -\beta_{ij} K_j^t E_{bi}^t E_{bi} K_j \\ -\beta_{ij} K_j^t E_{bi}^t E_{bi} K_j & -\gamma^2 I + \beta_{ij} K_j^t E_{bi}^t E_{bi} K_j + \beta_{ij} E_{ai}^t E_{ai} \end{bmatrix} \prec 0 \tag{56}$$

and using the Schur's complement (Boyd & al, 1994), theorem *7* in (Tanaka & al, 1998) and (3), condition (46) yields for all *i, j*.
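The Schur complement step used repeatedly in these proofs can be sanity-checked numerically. The sketch below (with made-up matrices) verifies the equivalence $\begin{bmatrix} A & B \\ B^t & -\lambda I\end{bmatrix} < 0 \Leftrightarrow A + \lambda^{-1}BB^t < 0$ for $\lambda > 0$:

```python
import numpy as np

# Schur complement check: for lam > 0, [[A, B], [B^t, -lam*I]] is negative
# definite iff A + (1/lam) B B^t is.  All matrices here are made up.
rng = np.random.default_rng(0)
lam = 2.0
B = rng.normal(size=(3, 2))
A = -5.0 * np.eye(3)

block = np.block([[A, B], [B.T, -lam * np.eye(2)]])
reduced = A + (1.0 / lam) * B @ B.T

block_nd = bool(np.all(np.linalg.eigvalsh(block) < 0))
reduced_nd = bool(np.all(np.linalg.eigvalsh(reduced) < 0))
print(block_nd, reduced_nd)  # the two verdicts always agree
```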

**Remark 5:** In order to improve the estimation error convergence, we obtain the following convex optimization problem: minimization of $\gamma$ under the LMI constraints (46).

Lemmas *1*, *2*, *3* and *4* yield the following theorem:

**Theorem 2:** The closed-loop uncertain fuzzy system (10) is robustly stabilizable via the observer-based controller (8) with control performances defined by a pole placement constraint in LMI region $D_T$ for the state dynamics, a pole placement constraint in LMI region $D_S$ for the estimation error dynamics and an $L_2$ gain $\gamma$ performance (45) as small as possible if, first, the LMI systems (12) and (29) are solvable for the decision variables $(P_1, K_j, \varepsilon_{ij}, \mu_{ij})$ and, secondly, the LMI systems (13), (38), (46) are solvable for the decision variables $(P_2, G_i, \lambda_{ij}, \beta_{ij})$. Furthermore, the controller and observer gains are $K_j = V_j P_1^{-1}$ and $G_i = P_2^{-1} W_i$, respectively, for $i, j = 1, 2, \ldots, r$.

**Remark 6:** Because of uncertainties, we could not use the separation property but we have overcome this problem by designing the fuzzy controller and observer in two steps with two pole placements and by using the *H*<sup>∞</sup> approach to ensure that the estimation error converges faster to zero although its dynamics depend on the state.

**Remark 7:** Theorem *2* also proposes a two-step procedure: the first step concerns the fuzzy controller design by imposing a pole placement constraint for the poles linked to the state dynamics and the second step concerns the fuzzy observer design by imposing the second pole placement constraint for the poles linked to the error estimation dynamics and by minimizing the *H*<sup>∞</sup> performance criterion (18). The designs of the observer and the controller are separate but not independent.

## **4. Numerical example**

In this section, to illustrate the validity of the suggested theoretical development, we apply the previous control algorithm to the following academic nonlinear system (Lauber, 2003):

Observer-Based Robust Control of Uncertain Fuzzy Models with Pole Placement Constraints 53


$$\begin{cases} \dot{x}\_1(t) = \left(\cos^2(x\_2(t)) \cdot \frac{1}{1 + x\_1^2(t)}\right) x\_2(t) + \left(1 + \frac{1}{1 + x\_1^2(t)}\right) u(t) \\\\ \dot{x}\_2(t) = b \left(1 + \frac{1}{1 + x\_1^2(t)}\right) \sin(x\_2(t)) - 1.5\, x\_1(t) - 3\, x\_2(t) + \left(a \cos^2(x\_2(t)) - 2\right) u(t) \\\\ y(t) = x\_1(t) \end{cases} \tag{57}$$

where *y* ∈ ℜ is the system output, *u* ∈ ℜ is the system input and *x* = [*x*<sub>1</sub> *x*<sub>2</sub>]<sup>*t*</sup> is the state vector, which is supposed to be unmeasurable. We want to find the control law *u* which globally stabilizes the closed loop and forces the system output to converge to zero while imposing a transient behaviour.
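As a quick sanity check, the right-hand side of (57) can be transcribed directly. This is a sketch of ours, using the example's constants *a* = 2 and *b* = −0.5 given later in the section; the function name `f` is an assumption, not from the chapter:

```python
import math

# Constants used in the chapter's numerical example.
A_PARAM, B_PARAM = 2.0, -0.5

def f(x, u, a=A_PARAM, b=B_PARAM):
    """Right-hand side of the nonlinear model (57); x = (x1, x2)."""
    x1, x2 = x
    z = 1.0 / (1.0 + x1 ** 2)            # nonlinear term decomposed below
    dx1 = math.cos(x2) ** 2 * z * x2 + (1.0 + z) * u
    dx2 = (b * (1.0 + z) * math.sin(x2) - 1.5 * x1 - 3.0 * x2
           + (a * math.cos(x2) ** 2 - 2.0) * u)
    return dx1, dx2

# The origin is an equilibrium of the unforced system: f((0, 0), 0) == (0, 0).
```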

Since the state vector is supposed to be unmeasurable, an observer will be designed.

The idea here is thus to design a fuzzy observer-based robust controller from the nonlinear system (57). The first step is to obtain a fuzzy model with uncertainties from (57) while the second step is to design the fuzzy control law from theorem *2* by imposing pole placement constraints and by minimizing the *H∞* criterion (46). Let us recall that, thanks to the pole placements, the estimation error converges faster to the equilibrium point zero and we impose the transient behaviour of the system output.

### **First step:**

The goal here is to obtain a fuzzy model from (57).

By decomposing the nonlinear term 1/(1 + *x*<sub>1</sub><sup>2</sup>(*t*)) and integrating the nonlinearities of *x*<sub>2</sub>(*t*) into uncertainties, (57) is represented by the following fuzzy model:

Fuzzy model rule *1*:

$$\text{If } x\_1(t) \text{ is } M\_1 \text{ then }
\begin{cases}
\dot{x} = (A\_1 + \Delta A\_1)x + (B\_1 + \Delta B\_1)u \\
y = Cx
\end{cases} \tag{58}$$

Fuzzy model rule *2*:

$$\text{If } x\_1(t) \text{ is } M\_2 \text{ then }
\begin{cases}
\dot{x} = (A\_2 + \Delta A\_2)x + (B\_2 + \Delta B\_2)u \\
y = Cx
\end{cases} \tag{59}$$

where

$$A\_1 = \begin{pmatrix} 0 & 0.5 \\ -1.5 & -3 + \frac{(1+m)b}{2} \end{pmatrix},\quad B\_1 = \begin{pmatrix} 1 \\ \frac{a}{2} - 2 \end{pmatrix},\quad A\_2 = \begin{pmatrix} 0 & 0.5 \\ -1.5 & -3 + (1+m)b \end{pmatrix},\quad B\_2 = \begin{pmatrix} 2 \\ \frac{a}{2} - 2 \end{pmatrix},$$

$$H\_a = \begin{pmatrix} 0.1 & 0 \\ 0 & 0.1 \end{pmatrix},\quad H\_b = \begin{pmatrix} 0 \\ 1 \end{pmatrix},\quad E\_{a1} = \begin{pmatrix} 0 & 0.5 \\ 0 & \frac{(1-m)b}{2} \end{pmatrix},\quad E\_{a2} = \begin{pmatrix} 0 & 0.5 \\ 0 & (1-m)b \end{pmatrix},\quad E\_{bi} = 0.5,$$

*C* = (1 0), *m* = -0.2172, *b* = -0.5, *a* = 2 and *i* = 1, 2.
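The constant *m* = −0.2172 is the minimum of sin(*z*)/*z*, which is how a sector bound on the sin(*x*<sub>2</sub>(*t*)) nonlinearity is typically obtained; the derivation is not spelled out in this excerpt, so the following quick numerical check is our own sketch:

```python
import math

# m = -0.2172 in the fuzzy model is the minimum of sin(z)/z, reached in the
# first negative lobe of the sinc function (near z ~ 4.49).
def sinc(z):
    return math.sin(z) / z if z != 0.0 else 1.0

m = min(sinc(0.001 * k) for k in range(1, 10001))   # scan z in (0, 10]
# m ~ -0.2172, matching the chapter's constant
```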

### **Second step:**

52 Recent Advances in Robust Control – Novel Approaches and Design Methods


The control design purpose of this example is to place both the poles linked to the state dynamics and those linked to the estimation error dynamics in the vertical strip given by (*α*<sub>1</sub>, *α*<sub>2</sub>) = (-1, -6). The choice of the same vertical strip is voluntary because we wish to compare the simulation results obtained with and without the *H*<sup>∞</sup> approach, in order to show by simulation the effectiveness of our approach.

The initial values of the states are chosen as *x*(0) = [-0.2 -0.1]<sup>*t*</sup> and *x̂*(0) = [0 0]<sup>*t*</sup>.

By solving LMIs of theorem *2*, we obtain the following controller and observer gain matrices respectively:

$$K\_1 = [-1.95 \ -0.17],\ K\_2 = [-1.36 \ -0.08],\ G\_1 = [-7.75 \ -80.80]^t,\ G\_2 = [-7.79 \ -82.27]^t \quad \text{(60)}$$
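With the gains (60), the whole loop can be simulated directly on the nonlinear plant (57). The sketch below is ours, not the chapter's: it uses the *A*<sub>i</sub>, *B*<sub>i</sub> as reconstructed above, and both the rule-2 membership function *h*<sub>2</sub> = 1/(1 + *x̂*<sub>1</sub><sup>2</sup>) and the fuzzy Luenberger observer structure are assumptions on our part:

```python
import math

# Sketch (our reconstruction, not verbatim from the chapter): closed-loop
# simulation of plant (57) under the observer-based fuzzy controller (60).
a, b, m = 2.0, -0.5, -0.2172
A = [[[0.0, 0.5], [-1.5, -3.0 + (1 + m) * b / 2]],   # A1 (reconstructed)
     [[0.0, 0.5], [-1.5, -3.0 + (1 + m) * b]]]       # A2 (reconstructed)
B = [[1.0, -1.0], [2.0, -1.0]]                        # B1, B2
K = [[-1.95, -0.17], [-1.36, -0.08]]                  # gains from (60)
G = [[-7.75, -80.80], [-7.79, -82.27]]

def plant(x, u):
    """Right-hand side of the nonlinear model (57)."""
    z = 1.0 / (1.0 + x[0] ** 2)
    return [math.cos(x[1]) ** 2 * z * x[1] + (1.0 + z) * u,
            b * (1.0 + z) * math.sin(x[1]) - 1.5 * x[0] - 3.0 * x[1]
            + (a * math.cos(x[1]) ** 2 - 2.0) * u]

def step(x, xh, dt):
    h2 = 1.0 / (1.0 + xh[0] ** 2)                 # assumed membership of rule 2
    h = [1.0 - h2, h2]
    u = sum(h[i] * (K[i][0] * xh[0] + K[i][1] * xh[1]) for i in range(2))
    innov = xh[0] - x[0]                          # C xh - y, with C = (1 0)
    dx = plant(x, u)
    # assumed observer: xh' = sum_i h_i (A_i xh + B_i u + G_i (C xh - y))
    dxh = [sum(h[i] * (A[i][r][0] * xh[0] + A[i][r][1] * xh[1]
                       + B[i][r] * u + G[i][r] * innov) for i in range(2))
           for r in range(2)]
    return ([x[r] + dt * dx[r] for r in range(2)],
            [xh[r] + dt * dxh[r] for r in range(2)])

x, xh = [-0.2, -0.1], [0.0, 0.0]                  # initial values from the text
for _ in range(2000):                             # 2 s of forward Euler, dt = 1 ms
    x, xh = step(x, xh, 1e-3)
```

Under these assumptions both the state and its estimate decay toward zero within the 2 s window, consistent with the behaviour reported in Figures 1-3.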

The obtained *H*<sup>∞</sup> criterion after minimization is:

$$
\gamma = 0.3974 \tag{61}
$$

Tables 1 and 2 give some examples of the nominal and uncertain system closed-loop pole values, respectively. All these poles are located in the desired regions. Note that the uncertainties must be taken into account since we wish to ensure a global pole placement: the poles of (10) must belong to the specified LMI region whatever the uncertainties (2), (3). From Tables *1* and *2*, we can see that the estimation error pole values obtained using the *H*<sup>∞</sup> approach are more distant (farther to the left) than the ones obtained without the *H*<sup>∞</sup> approach.
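The pole values reported in Tables 1 and 2 can be reproduced (up to the rounding of the published gains) from 2×2 eigenvalue computations. The sketch below uses the *A*<sub>1</sub>, *B*<sub>1</sub> reconstructed earlier (our reconstruction) to check the state and observer poles of rule 1 against Table 1 and against the strip −6 < Re(*s*) < −1:

```python
import cmath

# Eigenvalues of a 2x2 matrix via the quadratic formula.
def eig2(M):
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    s = cmath.sqrt(tr * tr - 4.0 * det)
    return [(tr + s) / 2.0, (tr - s) / 2.0]

m, b = -0.2172, -0.5
A1 = [[0.0, 0.5], [-1.5, -3.0 + (1 + m) * b / 2]]   # reconstructed A1
B1, K1, G1 = [1.0, -1.0], [-1.95, -0.17], [-7.75, -80.80]

ABK = [[A1[r][c] + B1[r] * K1[c] for c in range(2)] for r in range(2)]
AGC = [[A1[r][0] + G1[r], A1[r][1]] for r in range(2)]   # C = (1 0)

state_poles = eig2(ABK)   # ~ -1.83 and -3.14 (Table 1, first row)
obs_poles = eig2(AGC)     # ~ -5.47 +/- 5.99i (Table 1, third row)
```

All four poles lie in the vertical strip, matching the "with the *H*<sup>∞</sup> approach" columns of Table 1 to within the two-decimal rounding of (60).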


| Closed-loop matrix | With *H*<sup>∞</sup>: Pole 1 | With *H*<sup>∞</sup>: Pole 2 | Without *H*<sup>∞</sup>: Pole 1 | Without *H*<sup>∞</sup>: Pole 2 |
|---|---|---|---|---|
| *A*<sub>1</sub> + *B*<sub>1</sub>*K*<sub>1</sub> | -1.8348 | -3.1403 | -1.8348 | -3.1403 |
| *A*<sub>2</sub> + *B*<sub>2</sub>*K*<sub>2</sub> | -2.8264 | -3.2172 | -2.8264 | -3.2172 |
| *A*<sub>1</sub> + *G*<sub>1</sub>*C* | -5.47 + 5.99i | -5.47 - 5.99i | -3.47 + 3.75i | -3.47 - 3.75i |
| *A*<sub>2</sub> + *G*<sub>2</sub>*C* | -5.59 + 6.08i | -5.59 - 6.08i | -3.87 + 3.96i | -3.87 - 3.96i |

Table 1. Pole values (nominal case).


| Closed-loop matrix | With *H*<sup>∞</sup>: Pole 1 | With *H*<sup>∞</sup>: Pole 2 | Without *H*<sup>∞</sup>: Pole 1 | Without *H*<sup>∞</sup>: Pole 2 |
|---|---|---|---|---|
| *A*<sub>1</sub> + *H*<sub>a</sub>*E*<sub>a1</sub> + (*B*<sub>1</sub> + *H*<sub>b</sub>*E*<sub>b1</sub>)*K*<sub>1</sub> | -2.56 + 0.43i | -2.56 - 0.43i | -2.56 + 0.43i | -2.56 - 0.43i |
| *A*<sub>2</sub> + *H*<sub>a</sub>*E*<sub>a2</sub> + (*B*<sub>2</sub> + *H*<sub>b</sub>*E*<sub>b2</sub>)*K*<sub>2</sub> | -3.03 + 0.70i | -3.03 - 0.70i | -3.03 + 0.70i | -3.03 - 0.70i |
| *A*<sub>1</sub> - *H*<sub>a</sub>*E*<sub>a1</sub> + (*B*<sub>1</sub> + *H*<sub>b</sub>*E*<sub>b1</sub>)*K*<sub>1</sub> | -2.58 + 0.10i | -2.58 - 0.10i | -2.58 + 0.10i | -2.58 - 0.10i |
| *A*<sub>2</sub> - *H*<sub>a</sub>*E*<sub>a2</sub> + (*B*<sub>2</sub> + *H*<sub>b</sub>*E*<sub>b2</sub>)*K*<sub>2</sub> | -3.09 + 0.54i | -3.09 - 0.54i | -3.09 + 0.54i | -3.09 - 0.54i |
| *A*<sub>1</sub> + *G*<sub>1</sub>*C* - *H*<sub>b</sub>*E*<sub>b1</sub>*K*<sub>1</sub> | -5.38 + 5.87i | -5.38 - 5.87i | -3.38 + 3.61i | -3.38 - 3.61i |
| *A*<sub>2</sub> + *G*<sub>2</sub>*C* - *H*<sub>b</sub>*E*<sub>b2</sub>*K*<sub>2</sub> | -5.55 + 6.01i | -5.55 - 6.01i | -3.83 + 3.86i | -3.83 - 3.86i |

Table 2. Pole values (extreme uncertain models).

Observer-Based Robust Control of Uncertain Fuzzy Models with Pole Placement Constraints 55


Figures *1* and *2* show the behaviour of the errors *e*<sub>1</sub>(*t*) and *e*<sub>2</sub>(*t*), respectively, with and without the *H*<sup>∞</sup> approach, and also the behaviour obtained using only lemma *1*. We clearly see that the estimation error converges faster in the first case (with the *H*<sup>∞</sup> approach and pole placements) than in the second one (with pole placements only) as well as in the third case (without the *H*<sup>∞</sup> approach and without pole placements). Last but not least, Figures *3* and *4* respectively show the behaviour of the state variables with and without the *H*<sup>∞</sup> approach, whereas Figure 5 shows the evolution of the control signal. From Figures 3 and 4, we draw the same conclusion about the convergence of the estimation errors.

Fig. 1. Behaviour of error *e*<sub>1</sub>(*t*).

Fig. 2. Behaviour of error *e*<sub>2</sub>(*t*).

Fig. 3. Behaviour of the state vector and its estimation with the *H∞* approach.



Fig. 4. Behaviour of the state and its estimation without the *H*<sup>∞</sup> approach.

Fig. 5. Control signal evolution *u(t).*

## **5. Conclusion**


In this chapter, we have developed robust pole placement constraints for continuous T-S fuzzy systems with unavailable state variables and with parametric structured uncertainties. The proposed approach extends existing methods based on uncertain T-S fuzzy models. The proposed LMI constraints can globally asymptotically stabilize the closed-loop T-S fuzzy system subject to parametric uncertainties with the desired control performances. Because of the uncertainties, the separation property is not applicable. To overcome this problem, we have proposed, for the design of the observer and the controller, a two-step procedure with two pole placement constraints and the minimization of an *H*∞ performance criterion in order to guarantee that the estimation error converges faster to zero. Simulation results have confirmed the effectiveness of our approach in controlling nonlinear systems with parametric uncertainties.

## **6. References**

Boyd, S.; El Ghaoui, L.; Feron, E. & Balakrishnan, V. (1994). *Linear Matrix Inequalities in System and Control Theory*, Society for Industrial and Applied Mathematics, SIAM, Philadelphia, USA

Chadli, M. & El Hajjaji, A. (2006). Comment on observer-based robust fuzzy control of nonlinear systems with parametric uncertainties. *Fuzzy Sets and Systems*, Vol. 157, N°9 (2006), pp. 1276-1281

Chilali, M. & Gahinet, P. (1996). *H*<sup>∞</sup> Design with Pole Placement Constraints: An LMI Approach. *IEEE Transactions on Automatic Control*, Vol. 41, N°3 (March 1996), pp. 358-367

Chilali, M.; Gahinet, P. & Apkarian, P. (1999). Robust Pole Placement in LMI Regions. *IEEE Transactions on Automatic Control*, Vol. 44, N°12 (December 1999), pp. 2257-2270

El Messoussi, W.; Pagès, O. & El Hajjaji, A. (2005). Robust Pole Placement for Fuzzy Models with Parametric Uncertainties: An LMI Approach, *Proceedings of the 4th Eusflat and 11th LFA Congress*, pp. 810-815, Barcelona, Spain, September, 2005

El Messoussi, W.; Pagès, O. & El Hajjaji, A. (2006). Observer-Based Robust Control of Uncertain Fuzzy Dynamic Systems with Pole Placement Constraints: An LMI Approach, *Proceedings of the IEEE American Control Conference*, pp. 2203-2208, Minneapolis, USA, June, 2006

Farinwata, S.; Filev, D. & Langari, R. (2000). *Fuzzy Control Synthesis and Analysis*, John Wiley & Sons, Ltd, pp. 267-282

Han, Z.X.; Feng, G.; Walcott, B.L. & Zhang, Y.M. (2000). *H*<sup>∞</sup> Controller Design of Fuzzy Dynamic Systems with Pole Placement Constraints, *Proceedings of the IEEE American Control Conference*, pp. 1939-1943, Chicago, USA, June, 2000

Hong, S. K. & Nam, Y. (2003). Stable Fuzzy Control System Design with Pole Placement Constraint: An LMI Approach. *Computers in Industry*, Vol. 51, N°1 (May 2003), pp. 1-11

Kang, G.; Lee, W. & Sugeno, M. (1998). Design of TSK Fuzzy Controller Based on TSK Fuzzy Model Using Pole Placement, *Proceedings of the IEEE World Congress on Computational Intelligence*, pp. 246-251, Vol. 1, Anchorage, Alaska, USA, May, 1998

Lauber, J. (2003). *Moteur à allumage commandé avec EGR: modélisation et commande non linéaires*, Ph.D. Thesis, University of Valenciennes and Hainault-Cambresis, France, December 2003, pp. 87-88

Lee, H.J.; Park, J.B. & Chen, G. (2001). Robust Fuzzy Control of Nonlinear Systems with Parametric Uncertainties. *IEEE Transactions on Fuzzy Systems*, Vol. 9, N°2 (April 2001), pp. 369-379

Lo, J. C. & Lin, M. L. (2004). Observer-Based Robust *H*<sup>∞</sup> Control for Fuzzy Systems Using Two-Step Procedure. *IEEE Transactions on Fuzzy Systems*, Vol. 12, N°3 (June 2004), pp. 350-359

Ma, X. J.; Sun, Z. Q. & He, Y. Y. (1998). Analysis and Design of Fuzzy Controller and Fuzzy Observer. *IEEE Transactions on Fuzzy Systems*, Vol. 6, N°1 (February 1998), pp. 41-51

Shi, G.; Zou, Y. & Yang, C. (1992). An algebraic approach to robust *H*∞ control via state feedback. *Systems & Control Letters*, Vol. 18, N°5 (1992), pp. 365-370

Tanaka, K.; Ikeda, T. & Wang, H. O. (1998). Fuzzy Regulators and Fuzzy Observers: Relaxed Stability Conditions and LMI-Based Design. *IEEE Transactions on Fuzzy Systems*, Vol. 6, N°2 (May 1998), pp. 250-265

Tong, S. & Li, H. H. (2002). Observer-based robust fuzzy control of nonlinear systems with parametric uncertainties. *Fuzzy Sets and Systems*, Vol. 131, N°2 (October 2002), pp. 165-184

Wang, S. G.; Shieh, L. S. & Sunkel, J. W. (1995). Robust optimal pole-placement in a vertical strip and disturbance rejection in structured uncertain systems. *International Journal of Systems Science*, Vol. 26, (1995), pp. 1839-1853

Wang, S. G.; Shieh, L. S. & Sunkel, J. W. (1998). Observer-based controller for robust pole clustering in a vertical strip and disturbance rejection. *International Journal of Robust and Nonlinear Control*, Vol. 8, N°5 (1998), pp. 1073-1084

Wang, S. G.; Yeh, Y. & Roschke, P. N. (2001). Robust Control for Structural Systems with Parametric and Unstructured Uncertainties, *Proceedings of the American Control Conference*, pp. 1109-1114, Arlington, USA, June, 2001

Xiaodong, L. & Qingling, Z. (2003). New approaches to *H*∞ controller designs based on fuzzy observers for T-S fuzzy systems via LMI. *Automatica*, Vol. 39, N°9 (September 2003), pp. 1571-1582

Yoneyama, J.; Nishikawa, M.; Katayama, H. & Ichikawa, A. (2000). Output stabilization of Takagi-Sugeno fuzzy systems. *Fuzzy Sets and Systems*, Vol. 111, N°2 (April 2000), pp. 253-266




## **Robust Control Using LMI Transformation and Neural-Based Identification for Regulating Singularly-Perturbed Reduced Order Eigenvalue-Preserved Dynamic Systems**

Anas N. Al-Rabadi
*Computer Engineering Department, The University of Jordan, Amman, Jordan*

## **1. Introduction**



In control engineering, robust control is an area that explicitly deals with uncertainty in its approach to the design of the system controller [7,10,24]. The methods of robust control are designed to operate properly as long as disturbances or uncertain parameters are within a compact set, where robust methods aim to accomplish robust performance and/or stability in the presence of bounded modeling errors. A robust control policy is static in contrast to the adaptive (dynamic) control policy where, rather than adapting to measurements of variations, the system controller is designed to function assuming that certain variables will be unknown but, for example, bounded. An early example of a robust control method is the high-gain feedback control where the effect of any parameter variations will be negligible with using sufficiently high gain.

The overall goal of a control system is to cause the output variable of a dynamic process to follow a desired reference variable accurately. This complex objective can be achieved based on a number of steps. A major one is to develop a mathematical description, called dynamical model, of the process to be controlled [7,10,24]. This dynamical model is usually accomplished using a set of differential equations that describe the dynamic behavior of the system, which can be further represented in state-space using system matrices or in transform-space using transfer functions [7,10,24].

In system modeling, it is sometimes required to identify some of the system parameters. This objective may be achieved by the use of artificial neural networks (ANN), which are considered the new generation of information processing networks [5,15,17,28,29]. Artificial neural systems can be defined as physical cellular systems which have the capability of acquiring, storing and utilizing experiential knowledge [15,29], where an ANN consists of an interconnected group of basic processing elements called neurons that perform summing operations and nonlinear function computations. Neurons are usually organized in layers with forward connections, and computations are performed in a parallel mode at all nodes and connections. Each connection is expressed by a numerical value called the weight, and the learning process of a neuron corresponds to the changing of its weights.
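A minimal sketch of such a neuron, showing the weighted summing operation followed by a nonlinear activation (the function name and the `tanh` activation are illustrative choices of ours, not from the chapter):

```python
import math

# A single neuron: weighted sum of inputs plus a bias, passed through a
# nonlinear activation. Learning would adjust the weights and bias; only
# the forward computation is shown here.
def neuron(inputs, weights, bias, activation=math.tanh):
    s = sum(w * x for w, x in zip(weights, inputs))   # summing operation
    return activation(s + bias)                       # nonlinear computation

y = neuron([1.0, -2.0], [0.5, 0.25], 0.0)   # tanh(0.5 - 0.5) = 0.0
```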


When dealing with system modeling and control analysis, there exist equations and inequalities that require optimized solutions. An important expression used in robust control is the linear matrix inequality (LMI), which expresses specific convex optimization problems for which there exist powerful numerical solvers [1,2,6]. The LMI optimization technique was started by the Lyapunov theory showing that the differential equation *ẋ*(*t*) = *Ax*(*t*) is stable if and only if there exists a positive definite matrix [**P**] such that *A*<sup>T</sup>*P* + *PA* < 0 [6]. The requirement {*P* > 0, *A*<sup>T</sup>*P* + *PA* < 0} is known as the Lyapunov inequality on [**P**], which is a special case of an LMI. By picking any *Q* = *Q*<sup>T</sup> > 0 and then solving the linear equation *A*<sup>T</sup>*P* + *PA* = -*Q* for the matrix [**P**], the solution [**P**] is guaranteed to be positive definite if the given system is stable. The linear matrix inequalities that arise in system and control theory can be generally formulated as convex optimization problems that are amenable to computer solutions and can be solved using algorithms such as the ellipsoid algorithm [6].
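The Lyapunov test just described can be sketched concretely for a 2×2 system: *A*<sup>T</sup>*P* + *PA* = −*Q* is a small linear system in the three entries of the symmetric matrix *P*, and positive definiteness then certifies stability. This hand-rolled sketch is ours (a real implementation would use a dedicated Lyapunov or LMI solver):

```python
# Solve A^T P + P A = -Q for a 2x2 system and verify P is positive definite.
def solve3(M, rhs):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [r] for row, r in zip(M, rhs)]     # augmented matrix
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def lyapunov_2x2(A, Q):
    a, b = A[0]
    c, d = A[1]
    # unknowns: p11, p12, p22 of the symmetric matrix P
    M = [[2 * a, 2 * c, 0.0],
         [b, a + d, c],
         [0.0, 2 * b, 2 * d]]
    p11, p12, p22 = solve3(M, [-Q[0][0], -Q[0][1], -Q[1][1]])
    return [[p11, p12], [p12, p22]]

# Stable A (poles -1 and -2) with Q = I gives a positive definite P.
P = lyapunov_2x2([[0.0, 1.0], [-2.0, -3.0]], [[1.0, 0.0], [0.0, 1.0]])
# P = [[1.25, 0.25], [0.25, 0.25]]
```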

In practical control design problems, the first step is to obtain a proper mathematical model in order to examine the behavior of the system for the purpose of designing an appropriate controller [1,2,3,4,5,7,8,9,10,11,12,13,14,16,17,19,20,21,22,24,25,26,27]. Sometimes, this mathematical description involves a certain small parameter (i.e., perturbation). Neglecting this small parameter results in simplifying the order of the designed controller by reducing the order of the corresponding system [1,3,4,5,8,9,11,12,13,14,17,19,20,21,22,25,26]. A reduced model can be obtained by neglecting the fast dynamics (i.e., non-dominant eigenvalues) of the system and focusing on the slow dynamics (i.e., dominant eigenvalues). This simplification and reduction of system modeling leads to controller cost minimization [7,10,13]. An example is modern integrated circuits (ICs), where increasing package density forces developers to include side effects; since these ICs are often modeled by complex RLC-based circuits and systems, detailed modeling of the original system is computationally very demanding [16]. In control systems, model reduction is an important issue because feedback controllers do not usually consider all of the dynamics of the functioning system [4,5,17].
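The reduction idea, neglecting the fast (non-dominant) dynamics and keeping the slow (dominant) ones, can be illustrated with scalar blocks. This is a generic singular-perturbation sketch with example numbers of our own, not the chapter's specific system:

```python
# Singular-perturbation model order reduction with scalar blocks:
# partition x into slow/fast parts, set the fast derivative to zero,
# and eliminate the fast state:  A_r = A11 - A12 * A22^-1 * A21.
A11, A12 = -1.0, 0.5
A21, A22 = 0.5, -10.0          # A22 carries the fast (non-dominant) dynamics

A_reduced = A11 - A12 * A21 / A22          # = -0.975

# Dominant eigenvalue of the full 2x2 system, for comparison:
tr = A11 + A22
det = A11 * A22 - A12 * A21
slow_pole = (tr + (tr * tr - 4.0 * det) ** 0.5) / 2.0    # ~ -0.972
```

The reduced scalar model reproduces the dominant eigenvalue of the full system to within a few thousandths, which is the point of the method: the fast mode near −10 is discarded while the slow mode is preserved.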

The main results in this research include the introduction of a new layered method of intelligent control that can be used to robustly control the required system dynamics, where the new control hierarchy uses a recurrent supervised neural network to identify certain parameters of the transformed system matrix [ **A** ], and the corresponding LMI is used to determine the permutation matrix [**P**] so that a complete system transformation {[ **B** ], [ **C** ], [ **D** ]} is performed. The transformed model is then reduced using the method of singular perturbation, and various feedback control schemes are applied to enhance the corresponding system performance. It is shown that the new hierarchical control method simplifies the models of dynamical systems and therefore uses simpler controllers that produce the needed system response for specific performance enhancements. Figure 1 illustrates the layout of the utilized new control method. Layer 1 shows the continuous modeling of the dynamical system. Layer 2 shows the discrete system model. Layer 3 illustrates the neural network identification step. Layer 4 presents the undiscretization of the transformed system model. Layer 5 includes the steps for model order reduction with and without using LMI. Finally, Layer 6 presents the various feedback control methods that are used in this research.



Fig. 1. The newly utilized hierarchical control method.

While a similar hierarchical method of ANN-based identification and LMI-based transformation has been previously utilized in several applications, such as the reduced-order electronic Buck switching-mode power converter [1] and the reduced-order quantum computation systems [2], with relatively simple state feedback controller implementations, the method presented in this work further demonstrates the wide applicability of the introduced intelligent control technique to dynamical systems using a broad spectrum of control methods, such as (a) PID-based control, (b) state feedback control using (1) pole placement-based control and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control.

Section 2 presents background on recurrent supervised neural networks, linear matrix inequality, system model transformation using neural identification, and model order reduction. Section 3 presents a detailed illustration of the recurrent neural network identification with the LMI optimization techniques for system model order reduction. A practical implementation of the neural network identification and the associated comparative results with and without the use of LMI optimization to the dynamical system model order reduction is presented in Section 4. Section 5 presents the application of the feedback control on the reduced model using PID control, state feedback control using pole assignment, state feedback control using LQR optimal control, and output feedback control. Conclusions and future work are presented in Section 6.

## **2. Background**

The following sub-sections provide important background on artificial supervised recurrent neural networks, system transformation without using LMI, state transformation using LMI, and model order reduction, which can be used for the robust control of dynamic systems and will be used in the later Sections 3-5.

### **2.1 Artificial recurrent supervised neural networks**

The ANN is an emulation of the biological neural system [15,29]. The basic model of the neuron is established emulating the functionality of a biological neuron, which is the basic signaling unit of the nervous system. The internal process of a neuron may be mathematically modeled as shown in Figure 2 [15,29].

Fig. 2. A mathematical model of the artificial neuron: input signals *x*1 … *xp* are scaled by synaptic weights *wk*1 … *wkp*, summed at the summing junction, offset by the threshold *θk*, and passed through the activation function *φ*(.) to produce the output *yk*.

As seen in Figure 2, the internal activity of the neuron is produced as:

$$v_k = \sum_{j=1}^{p} w_{kj} x_j \tag{1}$$
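A minimal sketch of this computation, Equation (1) followed by the threshold and activation stage of Figure 2; the sigmoid choice for *φ* and the sample numbers are assumptions for illustration only:

```python
import math

def neuron_output(x, w, theta):
    """Forward pass of the artificial neuron of Figure 2:
    internal activity v_k = sum_j w_kj * x_j (Equation (1)),
    then output y_k = phi(v_k - theta_k) with a sigmoid phi."""
    v = sum(wj * xj for wj, xj in zip(w, x))      # Equation (1)
    return 1.0 / (1.0 + math.exp(-(v - theta)))   # assumed sigmoid activation

# Hypothetical example: three inputs, fixed weights, zero threshold
y = neuron_output([1.0, 0.5, -0.25], [0.2, -0.4, 0.1], 0.0)
```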

In supervised learning, it is assumed that at each instant of time when the input is applied, the desired response of the system is available [15,29]. The difference between the actual and the desired response represents an error measure which is used to correct the network parameters externally. Since the adjustable weights are initially assumed, the error measure may be used to adapt the network's weight matrix [**W**]. A set of input and output patterns, called a training set, is required for this learning mode, where the commonly used training algorithm identifies the direction of the negative error gradient and reduces the error accordingly [15,29].

The supervised recurrent neural network used for the identification in this research is based on an approximation of the method of steepest descent [15,28,29]. The network tries to match the output of certain neurons to the desired values of the system output at a specific instant of time. Consider a network consisting of a total of *N* neurons with *M* external input connections, as shown in Figure 3, for a 2nd order system with two neurons and one external input. The variable **g**(*k*) denotes the (*M* x 1) external input vector which is applied to the

network at discrete time *k*, the variable **y**(*k* + 1) denotes the corresponding (*N* x 1) vector of individual neuron outputs produced one step later at time (*k* + 1), and the input vector **g**(*k*) and one-step delayed output vector **y**(*k*) are concatenated to form the ((*M* + *N*) x 1) vector **u**(*k*), whose *i*th element is denoted by *ui*(*k*). Letting *Λ* denote the set of indices *i* for which *gi*(*k*) is an external input, and *β* denote the set of indices *i* for which *ui*(*k*) is the output of a neuron (which is *yi*(*k*)), the following equation is provided:

$$u_i(k) = \begin{cases} g_i(k), & \text{if } i \in \Lambda \\ y_i(k), & \text{if } i \in \beta \end{cases}$$

Fig. 3. The utilized 2nd order recurrent neural network architecture, with external input *g*1 and unit-delay (*Z*-1) state feedback, where the identified matrices are given by $A_d = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$ and $B_d = \begin{bmatrix} B_{11} \\ B_{21} \end{bmatrix}$, so that $[W] = \begin{bmatrix} A_d & B_d \end{bmatrix}$.

The (*N* x (*M* + *N*)) recurrent weight matrix of the network is represented by the variable [**W**]. The net internal activity of neuron *j* at time *k* is given by:

$$v_j(k) = \sum_{i \in \Lambda \cup \beta} w_{ji}(k) u_i(k)$$

where *Λ* ∪ *β* is the union of the sets *Λ* and *β*. At the next time step (*k* + 1), the output of neuron *j* is computed by passing *vj*(*k*) through the nonlinearity *φ*(.), thus obtaining:

$$y_j(k+1) = \varphi(v_j(k))$$

The derivation of the recurrent algorithm can be started by using *dj*(*k*) to denote the desired (target) response of neuron *j* at time *k*, and *ς*(*k*) to denote the set of neurons that are chosen to provide externally reachable outputs. A time-varying (*N* x 1) error vector *e*(*k*) is defined whose *j*th element is given by the following relationship:

$$e_j(k) = \begin{cases} d_j(k) - y_j(k), & \text{if } j \in \varsigma(k) \\ 0, & \text{otherwise} \end{cases}$$

The objective is to minimize the cost function *E*total which is obtained by:

$$E_{\text{total}} = \sum_{k} E(k), \text{ where } E(k) = \frac{1}{2} \sum_{j \in \varsigma} e_j^2(k)$$

To accomplish this objective, the method of steepest descent, which requires knowledge of the gradient matrix, is used:

$$
\nabla\_{\mathbf{W}} E\_{\text{total}} = \frac{\partial E\_{\text{total}}}{\partial \mathbf{W}} = \sum\_{k} \frac{\partial E(k)}{\partial \mathbf{W}} = \sum\_{k} \nabla\_{\mathbf{W}} E(k)
$$

where $\nabla_{\mathbf{W}} E(k)$ is the gradient of *E*(*k*) with respect to the weight matrix [**W**]. In order to train the recurrent network in real time, the instantaneous estimate of the gradient, $\nabla_{\mathbf{W}} E(k)$, is used. For the case of a particular weight $w_{m\ell}(k)$, the incremental change $\Delta w_{m\ell}(k)$ made at time *k* is defined as $\Delta w_{m\ell}(k) = -\eta \, \frac{\partial E(k)}{\partial w_{m\ell}(k)}$, where *η* is the learning-rate parameter.

Therefore:

$$\frac{\partial E(k)}{\partial w_{m\ell}(k)} = \sum_{j \in \varsigma} e_j(k) \frac{\partial e_j(k)}{\partial w_{m\ell}(k)} = -\sum_{j \in \varsigma} e_j(k) \frac{\partial y_j(k)}{\partial w_{m\ell}(k)}$$

To determine the partial derivative $\partial y_j(k)/\partial w_{m\ell}(k)$, the network dynamics are derived. This derivation is obtained by using the chain rule, which provides the following equation:

$$\frac{\partial y_j(k+1)}{\partial w_{m\ell}(k)} = \frac{\partial y_j(k+1)}{\partial v_j(k)} \frac{\partial v_j(k)}{\partial w_{m\ell}(k)} = \dot{\varphi}(v_j(k)) \frac{\partial v_j(k)}{\partial w_{m\ell}(k)}, \text{ where } \dot{\varphi}(v_j(k)) = \frac{\partial \varphi(v_j(k))}{\partial v_j(k)}$$

Differentiating the net internal activity of neuron *j* with respect to $w_{m\ell}(k)$ yields:

$$\frac{\partial v_j(k)}{\partial w_{m\ell}(k)} = \sum_{i \in \Lambda \cup \beta} \frac{\partial (w_{ji}(k) u_i(k))}{\partial w_{m\ell}(k)} = \sum_{i \in \Lambda \cup \beta} \left[ w_{ji}(k) \frac{\partial u_i(k)}{\partial w_{m\ell}(k)} + \frac{\partial w_{ji}(k)}{\partial w_{m\ell}(k)} u_i(k) \right]$$

where $\partial w_{ji}(k)/\partial w_{m\ell}(k)$ equals "1" only when *j* = *m* and *i* = *ℓ*, and "0" otherwise. Thus:

$$\frac{\partial v_j(k)}{\partial w_{m\ell}(k)} = \sum_{i \in \Lambda \cup \beta} w_{ji}(k) \frac{\partial u_i(k)}{\partial w_{m\ell}(k)} + \delta_{mj} u_\ell(k)$$

where $\delta_{mj}$ is a Kronecker delta equal to "1" when *j* = *m* and "0" otherwise, and:

$$\frac{\partial u_i(k)}{\partial w_{m\ell}(k)} = \begin{cases} 0, & \text{if } i \in \Lambda \\ \dfrac{\partial y_i(k)}{\partial w_{m\ell}(k)}, & \text{if } i \in \beta \end{cases}$$

Having those equations provides that:

$$\frac{\partial y_j(k+1)}{\partial w_{m\ell}(k)} = \dot{\varphi}(v_j(k)) \left[ \sum_{i \in \beta} w_{ji}(k) \frac{\partial y_i(k)}{\partial w_{m\ell}(k)} + \delta_{mj} u_\ell(k) \right]$$

The initial state of the network at time (*k* = 0) is assumed to be zero as follows:

$$\frac{\partial y_j(0)}{\partial w_{m\ell}(0)} = 0, \text{ for } \{j \in \beta,\ m \in \beta,\ \ell \in \Lambda \cup \beta\}$$

The dynamical system is described by the following triply-indexed set of variables ($\pi_{m\ell}^{j}$):

$$
\pi\_{m\ell}^j(k) = \frac{\partial y\_j(k)}{\partial w\_{m\ell}(k)}
$$

For every time step *k* and all appropriate *j*, *m* and *ℓ*, the system dynamics are controlled by:

$$\pi_{m\ell}^{j}(k+1) = \dot{\varphi}(v_j(k)) \left[ \sum_{i \in \beta} w_{ji}(k) \pi_{m\ell}^{i}(k) + \delta_{mj} u_\ell(k) \right], \text{ with } \pi_{m\ell}^{j}(0) = 0$$

The values of $\pi_{m\ell}^{j}(k)$ and the error signal *ej*(*k*) are used to compute the corresponding weight changes:

$$
\Delta w_{m\ell}(k) = \eta \sum_{j \in \varsigma} e_j(k) \pi_{m\ell}^{j}(k) \tag{2}
$$

Using the weight changes, the updated weight $w_{m\ell}(k+1)$ is calculated as follows:

$$
w_{m\ell}(k+1) = w_{m\ell}(k) + \Delta w_{m\ell}(k) \tag{3}
$$

Repeating this computation procedure provides the minimization of the cost function and thus the objective is achieved. With the many advantages that the neural network has, it is used for the important step of parameter identification in model transformation for the purpose of model order reduction as will be shown in the following section.
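The recursion above, Equations (2) - (3) together with the sensitivity update for $\pi_{m\ell}^{j}$, can be sketched as a single real-time training step. This is a minimal illustration, not the chapter's implementation: the tanh activation, treating every neuron as visible (so *ς* contains all neurons), and pairing the error at step *k* + 1 with the updated sensitivities are simplifying assumptions:

```python
import math

def rtrl_step(W, y, g, d, P, eta):
    """One real-time recurrent learning step for N neurons and M inputs.
    W : N x (M+N) weight matrix (columns run over Λ then β)
    y : neuron outputs y(k);  g : external inputs g(k)
    d : desired outputs for step k+1
    P : sensitivities P[j][m][l] = dy_j/dw_ml, zero-initialized
    Returns updated (W, y(k+1), P(k+1))."""
    N, M = len(y), len(g)
    u = list(g) + list(y)                        # u(k) over Λ ∪ β
    phi = math.tanh
    dphi = lambda v: 1.0 - math.tanh(v) ** 2     # phi'(v)

    v = [sum(W[j][i] * u[i] for i in range(M + N)) for j in range(N)]
    y_next = [phi(vj) for vj in v]
    e = [d[j] - y_next[j] for j in range(N)]     # error, all neurons visible

    # Sensitivity recursion: pi_j(k+1) = phi'(v_j)[sum_i w_ji pi_i + delta_mj u_l]
    P_next = [[[dphi(v[j]) * (sum(W[j][M + i] * P[i][m][l] for i in range(N))
                              + (u[l] if m == j else 0.0))
                for l in range(M + N)]
               for m in range(N)]
              for j in range(N)]

    # Weight update, Equations (2) - (3)
    for m in range(N):
        for l in range(M + N):
            W[m][l] += eta * sum(e[j] * P_next[j][m][l] for j in range(N))
    return W, y_next, P_next

# One step on a hypothetical 2-neuron, 1-input network from zero weights
W = [[0.0] * 3 for _ in range(2)]
P = [[[0.0] * 3 for _ in range(2)] for _ in range(2)]
W, y, P = rtrl_step(W, [0.0, 0.0], [1.0], [0.5, -0.5], P, 0.1)
```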

### **2.2 Model transformation and linear matrix inequality**

In this section, the detailed illustration of system transformation using LMI optimization will be presented. Consider the dynamical system:

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + Bu(t) \tag{4}$$

$$\mathbf{y}(t) = \mathbf{C}\mathbf{x}(t) + D\mathbf{u}(t) \tag{5}$$

The state space system representation of Equations (4) - (5) may be described by the block diagram shown in Figure 4.

Fig. 4. Block diagram for the state-space system representation of Equations (4) - (5).
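The steps that follow work on a discrete-time model of this system. The chapter does not prescribe the discretization method; one simple choice, shown here only as a hypothetical illustration, is the forward-Euler rule $A_d = I + TA$, $B_d = TB$ for a sampling period *T*:

```python
def euler_discretize(A, B, T):
    """Forward-Euler discretization of xdot = A x + B u:
    Ad = I + T*A, Bd = T*B (a simple assumed choice;
    zero-order-hold discretization is another common option)."""
    n = len(A)
    Ad = [[(1.0 if i == j else 0.0) + T * A[i][j] for j in range(n)]
          for i in range(n)]
    Bd = [[T * B[i][0]] for i in range(n)]
    return Ad, Bd

# Hypothetical 2nd order example with sampling period T = 0.01
Ad, Bd = euler_discretize([[-2.0, 0.0], [0.0, -5.0]], [[1.0], [0.0]], 0.01)
```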

In order to determine the transformed [**A**] matrix, which is $[\tilde{A}]$, the discrete zero-input response is obtained. This is achieved by providing the system with some initial state values and setting the system input to zero (*u*(*k*) = 0). Hence, the discrete system of Equations (4) - (5), with the initial condition $x(0) = x_0$, becomes:

$$\mathbf{x}(k+1) = A\_d \mathbf{x}(k) \tag{6}$$

$$y(k) = x(k)\tag{7}$$

We need *x*(*k*) as an ANN target to train the network to obtain the needed parameters in $[\tilde{A}_d]$ such that the system output will be the same for $[A_d]$ and $[\tilde{A}_d]$. Hence, simulating this system provides the state response corresponding to the initial values, with only the $[A_d]$ matrix being used. Once the input-output data is obtained, transforming the $[A_d]$ matrix is achieved using the ANN training, as will be explained in Section 3. The identified transformed $[\tilde{A}_d]$ matrix is then converted back to the continuous form, which in general (with all real eigenvalues) takes the following form:

$$
\tilde{A} = \begin{bmatrix} A\_r & A\_c \\ 0 & A\_o \end{bmatrix} \to \quad \tilde{A} = \begin{bmatrix} \lambda\_1 & \tilde{A}\_{12} & \cdots & \tilde{A}\_{1n} \\ 0 & \lambda\_2 & \cdots & \tilde{A}\_{2n} \\ \vdots & 0 & \ddots & \vdots \\ 0 & \cdots & 0 & \lambda\_n \end{bmatrix} \tag{8}
$$

where *λi* represents the system eigenvalues. This is an upper triangular matrix that preserves the eigenvalues by (1) placing the original eigenvalues on the diagonal and (2) finding the elements $\tilde{A}_{ij}$ in the upper triangle. This upper triangular form produces the same eigenvalues, which allows eliminating the fast dynamics while sustaining the slow-dynamics eigenvalues through model order reduction, as will be shown in later sections.
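The two ingredients just described, the zero-input response of Equation (6) used as the training target and the eigenvalue-preserving upper-triangular form of Equation (8), can be checked with a short pure-Python sketch; the matrices are hypothetical 2nd order examples:

```python
def zero_input_response(Ad, x0, steps):
    """State trajectory of Equation (6), x(k+1) = Ad x(k), with u(k) = 0;
    these samples serve as the ANN training targets."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = [sum(Ad[i][j] * x[j] for j in range(len(x)))
             for i in range(len(Ad))]
        xs.append(x)
    return xs

def eig2(A):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial
    (assumes real eigenvalues, as in Equation (8))."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = (tr * tr - 4.0 * det) ** 0.5
    return sorted([(tr - disc) / 2.0, (tr + disc) / 2.0])

# Hypothetical original matrix A and an upper-triangular transformed
# matrix carrying the same eigenvalues (-5 and -2) on its diagonal
A = [[-3.0, 1.0], [2.0, -4.0]]
A_tilde = [[-2.0, 0.5], [0.0, -5.0]]
assert eig2(A) == eig2(A_tilde) == [-5.0, -2.0]   # eigenvalues preserved
```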

Having the [**A**] and $[\tilde{A}]$ matrices, the permutation [**P**] matrix is determined using the LMI optimization technique, as will be illustrated in later sections. The complete system transformation can be achieved as follows where, assuming that $\tilde{x} = P^{-1}x$, the system of Equations (4) - (5) can be re-written as:

$$P\dot{\tilde{x}}(t) = AP\tilde{x}(t) + Bu(t), \quad \tilde{y}(t) = CP\tilde{x}(t) + Du(t), \quad \text{where } \tilde{y}(t) = y(t)$$

Pre-multiplying the first equation above by [**P-1**], one obtains:

$$P^{-1}P\dot{\tilde{x}}(t) = P^{-1}AP\tilde{x}(t) + P^{-1}Bu(t), \quad \tilde{y}(t) = CP\tilde{x}(t) + Du(t)$$

which yields the following transformed model:

$$
\dot{\tilde{x}}(t) = \tilde{A}\tilde{x}(t) + \tilde{B}u(t) \tag{9}
$$

$$
\tilde{y}(t) = \tilde{C}\tilde{x}(t) + \tilde{D}u(t) \tag{10}
$$

where the transformed system matrices are given by:

$$
\tilde{A} = P^{-1} A P \tag{11}
$$

$$
\tilde{B} = P^{-1}B \tag{12}
$$

$$
\tilde{\mathbf{C}} = \mathbf{C}P \tag{13}
$$

$$
\tilde{D} = D \tag{14}
$$
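Given a transformation matrix [**P**], Equations (11) - (14) amount to a few matrix products. A minimal pure-Python sketch for a 2nd order system follows; the system matrices and [**P**] are hypothetical examples:

```python
def mat_mul(A, B):
    """Naive matrix product for small list-of-lists matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(P):
    """Inverse of a 2x2 matrix (P is assumed nonsingular)."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[ P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det,  P[0][0] / det]]

def transform(A, B, C, D, P):
    """Complete system transformation of Equations (11) - (14):
    A~ = P^-1 A P,  B~ = P^-1 B,  C~ = C P,  D~ = D."""
    Pi = inv2(P)
    return (mat_mul(mat_mul(Pi, A), P), mat_mul(Pi, B), mat_mul(C, P), D)

# Hypothetical 2nd order system and transformation matrix
A = [[-2.0, 0.0], [0.0, -5.0]]
B = [[1.0], [0.0]]
C = [[1.0, 0.0]]
D = [[0.0]]
P = [[1.0, 1.0], [0.0, 1.0]]
At, Bt, Ct, Dt = transform(A, B, C, D, P)
# Similarity preserves the eigenvalues: the trace is unchanged
assert At[0][0] + At[1][1] == A[0][0] + A[1][1]
```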

Transforming the system matrix [**A**] into the form shown in Equation (8) can be achieved based on the following definition [18].

**Definition.** A matrix $A \in M_n$ is called reducible if either:

a. *n =* 1 and *A =* 0; or

b. *n* ≥ 2, there is a permutation matrix $P \in M_n$, and there is some integer *r* with $1 \leq r \leq n-1$ such that:

$$P^{-1}AP = \begin{bmatrix} X & Y \\ \mathbf{0} & Z \end{bmatrix} \tag{15}$$

where $X \in M_{r,r}$, $Z \in M_{n-r,n-r}$, $Y \in M_{r,n-r}$, and $\mathbf{0} \in M_{n-r,r}$ is a zero matrix.

The attractive features of the permutation matrix [**P**], such as being (1) orthogonal and (2) invertible, have made this transformation easy to carry out. However, the permutation matrix structure narrows the applicability of this method to a limited category of applications. A form of a similarity transformation can be used to correct this problem for $f: R^{n \times n} \rightarrow R^{n \times n}$, where *f* is a linear operator defined by $f(A) = P^{-1}AP$ [18]. Hence, based on [**A**] and $[\tilde{A}]$, the corresponding LMI is used to obtain the transformation matrix [**P**], and thus the optimization problem will be cast as follows:

$$\min_{P} \left\| P - P_o \right\| \quad \text{subject to} \quad \left\| P^{-1}AP - \tilde{A} \right\| < \varepsilon \tag{16}$$

which can be written in an LMI equivalent form as:

$$\min_{P,\,S} \ \operatorname{trace}(S) \quad \text{subject to} \quad \begin{bmatrix} S & P - P_o \\ \left(P - P_o\right)^T & I \end{bmatrix} > 0, \quad \begin{bmatrix} \varepsilon^2 I & P^{-1}AP - \tilde{A} \\ \left(P^{-1}AP - \tilde{A}\right)^T & I \end{bmatrix} > 0 \tag{17}$$

where *S* is a symmetric slack matrix [6].
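Since the method rests on the operator $f(A) = P^{-1}AP$, a quick sanity check can confirm that a similarity transformation preserves the eigenvalues. The sketch below uses illustrative 2×2 matrices (not taken from the chapter), for which it suffices to compare the trace and determinant, i.e., the characteristic-polynomial coefficients:

```python
# Sketch (illustrative values): a similarity transformation f(A) = P^-1 A P
# preserves the eigenvalues of A. For 2x2 matrices this can be checked via
# the characteristic-polynomial coefficients, i.e. the trace and determinant.

def mat2_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat2_inv(X):
    """Invert a 2x2 matrix (assumes a nonzero determinant)."""
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

A = [[2.0, 1.0], [0.0, 3.0]]      # eigenvalues 2 and 3
P = [[1.0, 2.0], [1.0, 3.0]]      # any invertible P
At = mat2_mul(mat2_inv(P), mat2_mul(A, P))   # At = P^-1 A P

trace = lambda X: X[0][0] + X[1][1]
det = lambda X: X[0][0] * X[1][1] - X[0][1] * X[1][0]

# Same characteristic polynomial => same eigenvalues {2, 3}.
assert abs(trace(At) - trace(A)) < 1e-9
assert abs(det(At) - det(A)) < 1e-9
```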

### **2.3 System transformation using neural identification**

A different transformation can be performed based on the use of the recurrent ANN while preserving the eigenvalues as a subset of the original system. To achieve this goal, the upper triangular block structure produced by the permutation matrix, as shown in Equation (15), is used. However, based on the implementation of the ANN, the permutation matrix [**P**] does not have to be found explicitly; instead, [**X**] and [**Z**] in Equation (15) will contain the system eigenvalues, and [**Y**] in Equation (15) will be estimated directly using the corresponding ANN techniques. Hence, the transformation is obtained and the reduction is then achieved. Therefore, another way to obtain a transformed model that preserves the eigenvalues of the reduced model as a subset of the original system is ANN training without the LMI optimization technique. This may be achieved under the assumption that the states are reachable and measurable. Hence, the recurrent ANN can identify the [$\hat{A}_d$] and [$\hat{B}_d$] matrices for a given input signal, as illustrated in Figure 3. The ANN identification leads to the following [$\hat{A}_d$] and [$\hat{B}_d$] transformations which (in the case of all real eigenvalues) construct the weight matrix [**W**] as follows:

$$W = \begin{bmatrix} \hat{A}_d & \hat{B}_d \end{bmatrix} \;\rightarrow\; \hat{A} = \begin{bmatrix} \hat{\lambda}_1 & \hat{A}_{12} & \cdots & \hat{A}_{1n} \\ 0 & \hat{\lambda}_2 & \cdots & \hat{A}_{2n} \\ \vdots & 0 & \ddots & \vdots \\ 0 & \cdots & 0 & \hat{\lambda}_n \end{bmatrix}, \quad \hat{B} = \begin{bmatrix} \hat{b}_1 \\ \hat{b}_2 \\ \vdots \\ \hat{b}_n \end{bmatrix}$$

where the eigenvalues are selected as a subset of the original system eigenvalues.
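The structure above works because the eigenvalues of an upper triangular matrix sit on its diagonal, which is what lets the ANN place a pre-selected eigenvalue subset there directly. A small pure-Python illustration (with placeholder values, not from the chapter) checks that $\det(\hat{A} - \lambda I)$ vanishes at each diagonal entry:

```python
# Sketch: for an upper triangular matrix the eigenvalues sit on the diagonal,
# so det(A - lambda*I) vanishes at each diagonal entry. The numerical values
# here are arbitrary placeholders.

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A_hat = [[-1.0,  0.5,  0.2],
         [ 0.0, -2.0,  0.7],
         [ 0.0,  0.0, -3.0]]   # eigenvalues: -1, -2, -3

for lam in (-1.0, -2.0, -3.0):
    shifted = [[A_hat[i][j] - (lam if i == j else 0.0) for j in range(3)]
               for i in range(3)]
    assert abs(det3(shifted)) < 1e-12
```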

### **2.4 Model order reduction**

Linear time-invariant (LTI) models of many physical systems have fast and slow dynamics, which may be referred to as singularly perturbed systems [19]. Neglecting the fast dynamics of a singularly perturbed system provides a reduced (i.e., slow) model. This gives the advantage of designing simpler lower-dimensionality reduced-order controllers that are based on the reduced-model information.

To show the formulation of a reduced order system model, consider the singularly perturbed system [9]:

$$\dot{x}(t) = A_{11}x(t) + A_{12}\xi(t) + B_1u(t)\,, \quad x(0) = x_0 \tag{18}$$

$$\varepsilon\dot{\xi}(t) = A_{21}x(t) + A_{22}\xi(t) + B_2u(t)\,, \quad \xi(0) = \xi_0 \tag{19}$$

$$y(t) = C_1x(t) + C_2\xi(t) \tag{20}$$

where $x \in \Re^{m_1}$ and $\xi \in \Re^{m_2}$ are the slow and fast state variables, respectively, $u \in \Re^{n_1}$ and $y \in \Re^{n_2}$ are the input and output vectors, respectively, {[$A_{ij}$], [$B_i$], [$C_i$]} are constant matrices of appropriate dimensions with $i, j \in \{1, 2\}$, and $\varepsilon$ is a small positive constant. The singularly perturbed system in Equations (18)-(20) is simplified by setting $\varepsilon = 0$ [3,14,27]. In doing so, we are neglecting the fast dynamics of the system and assuming that the state variables $\xi$ have reached the quasi-steady state. Hence, setting $\varepsilon = 0$ in Equation (19), under the assumption that [$A_{22}$] is nonsingular, produces:

$$\xi(t) = -A_{22}^{-1}A_{21}x_r(t) - A_{22}^{-1}B_2u(t) \tag{21}$$

where the index $r$ denotes the retained (reduced) model. Substituting Equation (21) into Equations (18)-(20) yields the following reduced order model:

$$\dot{\mathbf{x}}\_r(t) \quad = A\_r \mathbf{x}\_r(t) + B\_r u(t) \tag{22}$$

$$y(t) = C\_r x\_r(t) + D\_r u(t) \tag{23}$$

where $\{A_r = A_{11} - A_{12}A_{22}^{-1}A_{21}$, $B_r = B_1 - A_{12}A_{22}^{-1}B_2$, $C_r = C_1 - C_2A_{22}^{-1}A_{21}$, $D_r = -C_2A_{22}^{-1}B_2\}$.
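As a concrete instance of these formulas, the following sketch reduces a system with two slow states and one scalar fast state, so that $A_{22}^{-1}$ is simply $1/A_{22}$. The numbers are illustrative placeholders, not taken from the chapter:

```python
# Sketch of the singular-perturbation reduction formulas with illustrative
# numbers: two slow states, one fast state (so A22 is scalar and trivially
# inverted).

A11 = [[0.0, 1.0], [-2.0, -3.0]]
A12 = [[0.0], [1.0]]
A21 = [[1.0, 0.0]]
A22 = [[-10.0]]                    # nonsingular, fast dynamics
B1, B2 = [[0.0], [1.0]], [[0.5]]
C1, C2 = [[1.0, 0.0]], [[0.2]]

inv_A22 = 1.0 / A22[0][0]

# A_r = A11 - A12 A22^-1 A21,  B_r = B1 - A12 A22^-1 B2
A_r = [[A11[i][j] - A12[i][0] * inv_A22 * A21[0][j] for j in range(2)]
       for i in range(2)]
B_r = [[B1[i][0] - A12[i][0] * inv_A22 * B2[0][0]] for i in range(2)]
# C_r = C1 - C2 A22^-1 A21,  D_r = -C2 A22^-1 B2
C_r = [[C1[0][j] - C2[0][0] * inv_A22 * A21[0][j] for j in range(2)]]
D_r = -C2[0][0] * inv_A22 * B2[0][0]
```

The reduced model keeps only the two slow states while the fast state's steady-state influence is folded into $B_r$, $C_r$, and $D_r$.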

## **3. Neural network identification with LMI optimization for the system model order reduction**

In this work, it is our objective to search for a similarity transformation that can be used to decouple a pre-selected eigenvalue set from the system matrix [**A**]. To achieve this objective, training the neural network to identify the transformed discrete system matrix [ **Ad** ] is performed [1,2,15,29]. For the system of Equations (18)-(20), the discrete model of the dynamical system is obtained as:

$$x(k+1) = A_dx(k) + B_du(k) \tag{24}$$

$$y(k) = C_dx(k) + D_du(k) \tag{25}$$
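The chapter does not spell out how Equations (24)-(25) are obtained from the continuous model; one simple possibility, assumed here purely for illustration, is the forward-Euler approximation $A_d = I + T_sA$, $B_d = T_sB$ (a zero-order hold would be the more accurate alternative). A minimal sketch on placeholder matrices:

```python
# Hedged sketch: forward-Euler discretization A_d = I + Ts*A, B_d = Ts*B is
# ASSUMED here for illustration; the chapter does not state its method, and
# a zero-order hold would be more accurate. Matrices are placeholders.

Ts = 0.1
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [[0.0], [1.0]]

A_d = [[(1.0 if i == j else 0.0) + Ts * A[i][j] for j in range(2)]
       for i in range(2)]
B_d = [[Ts * B[i][0]] for i in range(2)]

# One discrete step from x(0) = [1, 0]^T with u = 0:
x = [1.0, 0.0]
x_next = [sum(A_d[i][j] * x[j] for j in range(2)) for i in range(2)]
```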

The identified discrete model can be written in a detailed form (as was shown in Figure 3) as follows:

$$\begin{bmatrix} \tilde{x}_1(k+1) \\ \tilde{x}_2(k+1) \end{bmatrix} = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} \tilde{x}_1(k) \\ \tilde{x}_2(k) \end{bmatrix} + \begin{bmatrix} B_{11} \\ B_{21} \end{bmatrix} u(k) \tag{26}$$

$$\tilde{y}(k) = \begin{bmatrix} \tilde{x}_1(k) \\ \tilde{x}_2(k) \end{bmatrix} \tag{27}$$

where *k* is the time index, and the detailed matrix elements of Equations (26)-(27) were shown in Figure 3 in the previous section.

The recurrent ANN presented in Section 2.1 can be summarized by defining $\Lambda$ as the set of indices $i$ for which $g_i(k)$ is an external input, defining $\beta$ as the set of indices $i$ for which $y_i(k)$ is an internal input or a neuron output, and defining $u_i(k)$ as the combination of the internal and external inputs for which $i \in \beta \cup \Lambda$. Using this setting, training the ANN depends on the internal activity of each neuron, which is given by:

$$v_j(k) = \sum_{i \in \Lambda \cup \beta} w_{ji}(k)\,u_i(k) \tag{28}$$

where $w_{ji}$ is the weight representing an element in the system matrix or input matrix for $j \in \beta$ and $i \in \beta \cup \Lambda$ such that $W = \left[\,[A_d]\ \ [B_d]\,\right]$. At the next time step $(k+1)$, the output (internal input) of the neuron $j$ is computed by passing the activity through the nonlinearity $\varphi(\cdot)$ as follows:

$$x_j(k+1) = \varphi(v_j(k)) \tag{29}$$


With these equations, based on an approximation of the method of steepest descent, the ANN identifies the system matrix [**Ad**] as illustrated in Equation (6) for the zero input response. That is, an error can be obtained by matching a true state output with a neuron output as follows:

$$e_j(k) = x_j(k) - \tilde{x}_j(k)$$

Now, the objective is to minimize the cost function given by:

$$E_{\text{total}} = \sum_{k} E(k)\,, \qquad E(k) = \frac{1}{2}\sum_{j \in \varsigma} e_j^2(k)$$

where $\varsigma$ denotes the set of indices $j$ for the output of the neuron structure. This cost function is minimized by estimating the instantaneous gradient of $E(k)$ with respect to the weight matrix [**W**] and then updating [**W**] in the negative direction of this gradient [15,29]. In steps, this proceeds through the triply-indexed set of variables $\pi_{m\ell}^{j}(k)$ as follows:


$$\pi_{m\ell}^{j}(k+1) = \dot{\varphi}(v_j(k))\left[\sum_{i \in \beta} w_{ji}(k)\,\pi_{m\ell}^{i}(k) + \delta_{mj}\,u_{\ell}(k)\right]$$

with initial conditions $\pi_{m\ell}^{j}(0) = 0$, where $\delta_{mj}$ is given by $\left(\partial w_{ji}(k)/\partial w_{m\ell}(k)\right)$, which is equal to "1" only when $\{j = m,\ i = \ell\}$ and otherwise is "0". Notice that, for the special case of a sigmoidal nonlinearity in the form of a logistic function, the derivative $\dot{\varphi}(\cdot)$ is given by $\dot{\varphi}(v_j(k)) = y_j(k+1)\,[1 - y_j(k+1)]$.


$$\Delta w_{m\ell}(k) = \eta \sum_{j \in \varsigma} e_j(k)\,\pi_{m\ell}^{j}(k) \tag{30}$$


$$w_{m\ell}(k+1) = w_{m\ell}(k) + \Delta w_{m\ell}(k) \tag{31}$$
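The update rules (28)-(31) can be exercised on the smallest possible case: one state, zero-input response, and a linear activation in place of the chapter's logistic function (an assumption made here so that $\dot{\varphi} \equiv 1$ and the sensitivity $\pi$ collapses to the fed-back state). The sketch identifies $a_d$ in $x(k+1) = a_dx(k)$:

```python
# Minimal sketch of the gradient update in Equations (30)-(31) for a single
# state with a LINEAR activation (assumed here instead of the chapter's
# logistic nonlinearity): identify a_d in x(k+1) = a_d * x(k) from its
# zero-input response. With one neuron the sensitivity pi reduces to the
# fed-back state x(k).

a_true = 0.8                     # "plant" to identify (illustrative)
xs = [1.0]
for _ in range(60):              # zero-input response data
    xs.append(a_true * xs[-1])

w, eta = 0.0, 0.5                # initial weight, learning rate
for _ in range(200):             # repeated sweeps over the record
    for k in range(60):
        e = xs[k + 1] - w * xs[k]        # error e_j(k)
        w += eta * e * xs[k]             # Delta-w step, Eqs. (30)-(31)

assert abs(w - a_true) < 1e-6    # the weight converges to a_d
```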


Recent Advances in Robust Control – Novel Approaches and Design Methods


As illustrated in Equations (6) - (7), for the purpose of estimating only the transformed system matrix [ **Ad** ], the training is based on the zero input response. Once the training is completed, the obtained weight matrix [**W**] will be the discrete identified transformed system matrix [ **Ad** ]. Transforming the identified system back to the continuous form yields the desired continuous transformed system matrix [ **A** ]. Using the LMI optimization technique, which was illustrated in Section 2.2, the permutation matrix [**P**] is then determined. Hence, a complete system transformation, as shown in Equations (9) - (10), will be achieved. For the model order reduction, the system in Equations (9) - (10) can be written as:

$$\begin{bmatrix} \dot{\tilde{x}}_r(t) \\ \dot{\tilde{x}}_o(t) \end{bmatrix} = \begin{bmatrix} A_r & A_c \\ 0 & A_o \end{bmatrix} \begin{bmatrix} \tilde{x}_r(t) \\ \tilde{x}_o(t) \end{bmatrix} + \begin{bmatrix} B_r \\ B_o \end{bmatrix} u(t) \tag{32}$$

$$\begin{bmatrix} \tilde{y}_r(t) \\ \tilde{y}_o(t) \end{bmatrix} = \begin{bmatrix} C_r & C_o \end{bmatrix} \begin{bmatrix} \tilde{x}_r(t) \\ \tilde{x}_o(t) \end{bmatrix} + \begin{bmatrix} D_r \\ D_o \end{bmatrix} u(t) \tag{33}$$

The following system transformation enables us to decouple the original system into retained ($r$) and omitted ($o$) eigenvalues. The retained eigenvalues are the dominant eigenvalues that produce the slow dynamics, and the omitted eigenvalues are the non-dominant eigenvalues that produce the fast dynamics. Equation (32) may be written as:

$$\dot{\tilde{x}}_r(t) = A_r\tilde{x}_r(t) + A_c\tilde{x}_o(t) + B_ru(t) \quad \text{and} \quad \dot{\tilde{x}}_o(t) = A_o\tilde{x}_o(t) + B_ou(t)$$

The coupling term $A_c\tilde{x}_o(t)$ may be compensated for by solving for $\tilde{x}_o(t)$ in the second equation above with $\dot{\tilde{x}}_o(t)$ set to zero by the singular perturbation method (setting $\varepsilon = 0$). By performing this, the following equation is obtained:

$$\tilde{x}_o(t) = -A_o^{-1}B_ou(t) \tag{34}$$

Using this $\tilde{x}_o(t)$, we get the reduced order model given by:

$$\dot{\tilde{x}}_r(t) = A_r\tilde{x}_r(t) + [-A_cA_o^{-1}B_o + B_r]\,u(t) \tag{35}$$

$$y(t) = C_r\tilde{x}_r(t) + [-C_oA_o^{-1}B_o + D]\,u(t) \tag{36}$$

Hence, the overall reduced order model may be represented by:

$$\dot{\tilde{x}}_r(t) = A_{or}\tilde{x}_r(t) + B_{or}u(t) \tag{37}$$

$$y(t) = C_{or}\tilde{x}_r(t) + D_{or}u(t) \tag{38}$$

where the details of the {[ **Aor** ], [ **Bor** ], [ **Cor** ], [ **Dor** ]} overall reduced matrices were shown in Equations (35) - (36), respectively.
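A useful property of this reduction, easy to verify numerically, is that the reduced model (37)-(38) reproduces the full block-triangular model's DC gain exactly, because $\tilde{x}_o$ is replaced by its quasi-steady value $-A_o^{-1}B_ou$. The scalar values below are illustrative, not from the chapter:

```python
# Sketch checking Equations (35)-(38) on illustrative numbers: for a step
# input, the reduced model's DC gain matches the full block-triangular model,
# since x_o is replaced by its quasi-steady value -Ao^-1 Bo u.

Ar, Ac, Ao = -1.0, 0.5, -20.0     # retained, coupling, omitted (all scalar)
Br, Bo = 2.0, 1.0
Cr, Co, D = 1.0, 0.3, 0.0

# Overall reduced matrices (Equations (35)-(36)):
Bor = -Ac * (1.0 / Ao) * Bo + Br
Dor = -Co * (1.0 / Ao) * Bo + D

# DC gain of the full 2-state model: solve the triangular steady state
# A x_ss = -B u (u = 1) by back-substitution, then y_ss = C x_ss + D u.
xo_ss = -Bo / Ao
xr_ss = -(Br + Ac * xo_ss) / Ar
gain_full = Cr * xr_ss + Co * xo_ss + D

# DC gain of the reduced model (Equations (37)-(38)):
gain_red = Cr * (-Bor / Ar) + Dor

assert abs(gain_full - gain_red) < 1e-12
```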

## **4. Examples for the dynamic system order reduction using neural identification**

The following subsections present the implementation of the new proposed method of system modeling using supervised ANN, with and without using LMI, and using model order reduction, which can be directly utilized for the robust control of dynamic systems. The presented simulations were tested on a PC platform with hardware specifications of Intel Pentium 4 CPU 2.40 GHz and 504 MB of RAM, and software specifications of MS Windows XP 2002 OS and the Matlab 6.5 simulator.

### **4.1 Model reduction using neural-based state transformation and LMI-based complete system transformation**

The following example illustrates the idea of dynamic system model order reduction using LMI with comparison to the model order reduction without using LMI. Let us consider the system of a high-performance tape transport which is illustrated in Figure 5. As seen in Figure 5, the system is designed with a small capstan to pull the tape past the read/write heads with the take-up reels turned by DC motors [10].

Fig. 5. The used tape drive system: (a) a front view of a typical tape drive mechanism, and (b) a schematic control model.

As can be seen, in static equilibrium, the tape tension equals the vacuum force ($T_o = F$) and the torque from the motor equals the torque on the capstan ($K_ti_o = r_1T_o$), where $T_o$ is the tape tension at the read/write head at equilibrium, $F$ is the constant force (i.e., tape tension for the vacuum column), $K_t$ is the motor torque constant, $i_o$ is the equilibrium motor current, and $r_1$ is the radius of the capstan take-up wheel.

The system variables are defined as deviations from this equilibrium, and the system equations of motion are given as follows:

$$\begin{aligned} J_1\frac{d\omega_1}{dt} &= -\beta_1\omega_1 + r_1T + K_ti\,, & \dot{x}_1 &= r_1\omega_1 \\ L\frac{di}{dt} + Ri + K_e\omega_1 &= e\,, & \dot{x}_2 &= r_2\omega_2 \\ J_2\frac{d\omega_2}{dt} + \beta_2\omega_2 + r_2T &= 0 \\ T &= K_1(x_3 - x_1) + D_1(\dot{x}_3 - \dot{x}_1) \\ T &= K_2(x_2 - x_3) + D_2(\dot{x}_2 - \dot{x}_3) \\ x_1 &= r_1\theta_1\,, \quad x_2 = r_2\theta_2\,, \quad x_3 = \frac{x_1 + x_2}{2} \end{aligned}$$

72 Recent Advances in Robust Control – Novel Approaches and Design Methods

order reduction, that can be directly utilized for the robust control of dynamic systems. The presented simulations were tested on a PC platform with hardware specifications of Intel Pentium 4 CPU 2.40 GHz, and 504 MB of RAM, and software specifications of MS Windows

The following example illustrates the idea of dynamic system model order reduction using LMI with comparison to the model order reduction without using LMI. Let us consider the system of a high-performance tape transport which is illustrated in Figure 5. As seen in Figure 5, the system is designed with a small capstan to pull the tape past the read/write

(a)

(b) Fig. 5. The used tape drive system: (a) a front view of a typical tape drive mechanism, and

**4.1 Model reduction using neural-based state transformation and lmi-based** 

XP 2002 OS and Matlab 6.5 simulator.

**complete system transformation** 

(b) a schematic control model.

heads with the take-up reels turned by DC motors [10].

where $D_{1,2}$ is the damping in the tape-stretch motion, $e$ is the applied input voltage (V), $i$ is the current into the capstan motor, $J_1$ is the combined inertia of the wheel and take-up motor, $J_2$ is the inertia of the idler, $K_{1,2}$ is the spring constant in the tape-stretch motion, $K_e$ is the electric constant of the motor, $K_t$ is the torque constant of the motor, $L$ is the armature inductance, $R$ is the armature resistance, $r_1$ is the radius of the take-up wheel, $r_2$ is the radius of the tape on the idler, $T$ is the tape tension at the read/write head, $x_3$ is the position of the tape at the head, $\dot{x}_3$ is the velocity of the tape at the head, $\beta_1$ is the viscous friction at the take-up wheel, $\beta_2$ is the viscous friction at the wheel, $\theta_1$ is the angular displacement of the capstan, $\theta_2$ is the tachometer shaft angle, $\omega_1$ is the speed of the drive wheel ($\dot{\theta}_1$), and $\omega_2$ is the output speed measured by the tachometer ($\dot{\theta}_2$).

The state space form is derived from the system equations, where there is one input, which is the applied voltage, three outputs which are (1) tape position at the head, (2) tape tension, and (3) tape position at the wheel, and five states which are (1) tape position at the air bearing, (2) drive wheel speed, (3) tape position at the wheel, (4) tachometer output speed, and (5) capstan motor speed. The following sub-sections will present the simulation results for the investigation of different system cases using transformations with and without utilizing the LMI optimization technique.

### **4.1.1 System transformation using neural identification without utilizing linear matrix inequality**

This sub-section presents simulation results for system transformation using ANN-based identification without using LMI.

**Case #1.** Let us consider the following case of the tape transport:

$$
\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 2 & 0 & 0 & 0 \\ -1.1 & -1.35 & 1.1 & 3.1 & 0.75 \\ 0 & 0 & 0 & 5 & 0 \\ 1.35 & 1.4 & -2.4 & -11.4 & 0 \\ 0 & -0.03 & 0 & 0 & -10 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u(t),
$$


$$y(t) = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0 \\ -0.2 & -0.2 & 0.2 & 0.2 & 0 \end{bmatrix} x(t)$$

The five eigenvalues are {-10.5772, -9.999, -0.9814, -0.5962 ± j0.8702}, where two eigenvalues are complex and three are real. Since (1) not all the eigenvalues are complex and (2) the existing real eigenvalues produce the fast dynamics that we need to eliminate, model order reduction can be applied. As can be seen, two real eigenvalues produce fast dynamics {-10.5772, -9.999} and one real eigenvalue produces slow dynamics {-0.9814}. In order to obtain the reduced model, the reduction based on the identification of the input matrix [ **B̂** ] and the transformed system matrix [ **Â** ] was performed. This identification is achieved utilizing the recurrent ANN.
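The identification step can be sketched as follows: a single-layer linear "network" trained by gradient descent stands in for the chapter's recurrent ANN, learning the discretized matrices from input/output data. The 2-state system below is a hypothetical example, not the tape-drive model.

```python
import numpy as np

# Hypothetical discretized system to recover: x[k+1] = Ad x[k] + Bd u[k].
Ad = np.array([[0.6, 0.1],
               [0.0, 0.3]])
Bd = np.array([[ 1.0],
               [-1.0]])

rng = np.random.default_rng(0)
W = rng.uniform(-0.05, 0.05, (2, 3))   # initial weights w = [[Ad] [Bd]]
eta = 0.1                              # learning rate

x = np.zeros((2, 1))
for k in range(20000):
    u = rng.uniform(-1.0, 1.0, (1, 1))  # persistently exciting input
    z = np.vstack([x, u])               # regressor [x[k]; u[k]]
    x_next = Ad @ x + Bd @ u            # "measured" next state
    W += eta * (x_next - W @ z) @ z.T   # LMS update on the prediction error
    x = x_next

print(np.round(W, 3))  # → approximately [[0.6, 0.1, 1.0], [0.0, 0.3, -1.0]]
```

With noiseless data and a persistently exciting input, the learned weight matrix converges to the concatenation [Ad | Bd], which is exactly the structure assumed for *w* in the cases below.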

By discretizing the above system with a sampling time *Ts* = 0.1 sec., using a step input with learning time *Tl* = 300 sec., and then training the ANN on the input/output data with a learning rate *η* = 0.005 and with initial weights *w* = [[ **Â**d ] [ **B̂**d ]] given as:

$$w = \begin{bmatrix} -0.0059 & -0.0360 & 0.0003 & -0.0204 & -0.0307 & 0.0499 \\ -0.0283 & 0.0243 & 0.0445 & -0.0302 & -0.0257 & -0.0482 \\ 0.0359 & 0.0222 & 0.0309 & 0.0294 & -0.0405 & 0.0088 \\ -0.0058 & 0.0212 & -0.0225 & -0.0273 & 0.0079 & 0.0152 \\ 0.0295 & -0.0235 & -0.0474 & -0.0373 & -0.0158 & -0.0168 \end{bmatrix}$$

produces the transformed model for the system and input matrices, [ **Â** ] and [ **B̂** ], as follows:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -0.5967 & 0.8701 & -0.1041 & -0.2710 & -0.4114 \\ -0.8701 & -0.5967 & 0.8034 & -0.4520 & -0.3375 \\ 0 & 0 & -0.9809 & 0.4962 & -0.4680 \\ 0 & 0 & 0 & -9.9985 & 0.0146 \\ 0 & 0 & 0 & 0 & -10.5764 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0.1414 \\ 0.0974 \\ 0.1307 \\ -0.0011 \\ 1.0107 \end{bmatrix} u(t)$$
 
$$\mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 & 0\\ 0.5 & 0 & 0.5 & 0 & 0\\ -0.2 & -0.2 & 0.2 & 0.2 & 0 \end{bmatrix} \mathbf{x}(t)$$

As observed, all of the system eigenvalues have been preserved in this transformed model, with only small differences due to discretization. Using the singular perturbation technique, the following reduced 3rd order model is obtained:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -0.5967 & 0.8701 & -0.1041 \\ -0.8701 & -0.5967 & 0.8034 \\ 0 & 0 & -0.9809 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0.1021 \\ 0.0652 \\ 0.0860 \end{bmatrix} u(t)$$

$$\mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 1 \\ 0.5 & 0 & 0.5 \\ -0.2 & -0.2 & 0.2 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} u(t)$$
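The reduction step above can be reproduced directly from the transformed matrices: partition the state into 3 slow and 2 fast components, set the fast derivatives to zero, and solve for the fast states. A minimal sketch using the chapter's Case #1 numbers:

```python
import numpy as np

# Transformed 5th-order model (Case #1), partitioned 3 slow / 2 fast states.
A = np.array([[-0.5967,  0.8701, -0.1041, -0.2710, -0.4114],
              [-0.8701, -0.5967,  0.8034, -0.4520, -0.3375],
              [ 0.0,     0.0,    -0.9809,  0.4962, -0.4680],
              [ 0.0,     0.0,     0.0,    -9.9985,  0.0146],
              [ 0.0,     0.0,     0.0,     0.0,   -10.5764]])
B = np.array([[0.1414], [0.0974], [0.1307], [-0.0011], [1.0107]])

ns = 3  # number of slow states retained
A11, A12 = A[:ns, :ns], A[:ns, ns:]
A21, A22 = A[ns:, :ns], A[ns:, ns:]
B1, B2 = B[:ns], B[ns:]

# Singular perturbation: 0 = A21 x1 + A22 x2 + B2 u  =>  eliminate x2.
Ar = A11 - A12 @ np.linalg.solve(A22, A21)   # = A11 here, since A21 = 0
Br = B1 - A12 @ np.linalg.solve(A22, B2)

print(np.round(Br.ravel(), 4))  # close to the chapter's [0.1021, 0.0652, 0.0860]
```

Because the transformed A is block upper-triangular, the retained block A11 (and hence the slow eigenvalues {-0.9809, -0.5967 ± j0.8701}) carries over unchanged; only the input matrix is corrected by the fast-state elimination.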

74 Recent Advances in Robust Control – Novel Approaches and Design Methods


It is also observed that the reduced order model has preserved its eigenvalues {-0.9809, -0.5967 ± j0.8701}, which are a subset of those of the original system, while the reduced order model obtained using singular perturbation without system transformation has different eigenvalues {-0.8283, -0.5980 ± j0.9304}.

Evaluations of the reduced order models (transformed and non-transformed) were obtained by simulating both systems for a step input. Simulation results are shown in Figure 6.

Fig. 6. Reduced 3rd order models (.… transformed, -.-.-.- non-transformed) output responses to a step input along with the non-reduced model ( \_\_\_\_ original) 5th order system output response.

Based on Figure 6, it is seen that the non-transformed reduced model provides a response which is better than the transformed reduced model. The cause of this is that the transformation at this point is performed only for the [**A**] and [**B**] system matrices leaving the [**C**] matrix unchanged. Therefore, the system transformation is further considered for complete system transformation using LMI (for {[**A**], [**B**], [**D**]}) as will be seen in subsection 4.1.2, where LMI-based transformation will produce better reduction-based response results than both the non-transformed and transformed without LMI.

**Case #2.** Consider now the following case:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 2 & 0 & 0 & 0 \\ -1.1 & -1.35 & 0.1 & 0.1 & 0.75 \\ 0 & 0 & 0 & 2 & 0 \\ 0.35 & 0.4 & -0.4 & -2.4 & 0 \\ 0 & -0.03 & 0 & 0 & -10 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u(t)$$

$$\mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0 \\ -0.2 & -0.2 & 0.2 & 0.2 & 0 \end{bmatrix} \mathbf{x}(t)$$
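The fast/slow split used below can be checked numerically; the threshold of 5.0 on the real part is an illustrative choice, not from the chapter:

```python
import numpy as np

# Case #2 system matrix from the text.
A = np.array([[ 0.0,   2.0,   0.0,  0.0,   0.0 ],
              [-1.1,  -1.35,  0.1,  0.1,   0.75],
              [ 0.0,   0.0,   0.0,  2.0,   0.0 ],
              [ 0.35,  0.4,  -0.4, -2.4,   0.0 ],
              [ 0.0,  -0.03,  0.0,  0.0, -10.0 ]])

eigs = np.linalg.eigvals(A)
# Classify modes by the magnitude of their real part; only the mode near -10
# (the -9.9973 eigenvalue quoted in the text) counts as fast here.
fast = [lam for lam in eigs if abs(lam.real) > 5.0]
slow = [lam for lam in eigs if abs(lam.real) <= 5.0]
print(len(fast), len(slow))  # → 1 4
```

With a single fast mode, eliminating it leaves a 4th order slow subsystem, which is exactly the reduced order obtained below.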

The five eigenvalues are {-9.9973, -2.0002, -0.3696, -0.6912 ± j1.3082}, where two eigenvalues are complex, three are real, and only one eigenvalue is considered to produce fast dynamics {-9.9973}. Using the discretized model with *Ts* = 0.071 sec. for a step input with learning time *Tl* = 70 sec., training the ANN on the input/output data with *η* = 3.5 × 10⁻⁵ and a suitable initial weight matrix, and then applying the singular perturbation reduction technique, a reduced 4th order model is obtained as follows:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -0.6912 & 1.3081 & -0.4606 & 0.0114 \\ -1.3081 & -0.6912 & 0.6916 & -0.0781 \\ 0 & 0 & -0.3696 & 0.0113 \\ 0 & 0 & 0 & -2.0002 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0.0837 \\ 0.0520 \\ 0.0240 \\ -0.0014 \end{bmatrix} u(t)$$
 
$$\mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0.5 & 0 & 0.5 & 0 \\ -0.2 & -0.2 & 0.2 & 0.2 \end{bmatrix} \mathbf{x}(t)$$

where all the eigenvalues {-2.0002, -0.3696, -0.6912 ± j1.3081} are preserved as a subset of the original system. This reduced 4th order model is simulated for a step input and then compared to both the reduced model without transformation and the original system response. Simulation results are shown in Figure 7, where again the non-transformed reduced order model provides a response that is better than the transformed reduced model. The reason for this follows closely the explanation provided for the previous case.

Fig. 7. Reduced 4th order models (…. transformed, -.-.-.- non-transformed) output responses to a step input along with the non-reduced ( \_\_\_\_ original) 5th order system output response.

**Case #3.** Let us consider the following system:


$$\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 2 & 0 & 0 & 0 \\ -0.1 & -1.35 & 0.1 & 4.1 & 0.75 \\ 0 & 0 & 0 & 5 & 0 \\ 0.35 & 0.4 & -1.4 & -5.4 & 0 \\ 0 & -0.03 & 0 & 0 & -10 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} u(t)$$

$$\mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 & 0 \\ 0.5 & 0 & 0.5 & 0 & 0 \\ -0.2 & -0.2 & 0.2 & 0.2 & 0 \end{bmatrix} \mathbf{x}(t)$$

The eigenvalues are {-9.9973, -3.9702, -1.8992, -0.6778, -0.2055}, which are all real. Utilizing the discretized model with *Ts* = 0.1 sec. for a step input with learning time *Tl* = 500 sec., and training the ANN on the input/output data with *η* = 1.25 × 10⁻⁵ and initial weight matrix given by:

$$w = \begin{bmatrix} 0.0014 & -0.0662 & 0.0298 & -0.0072 & -0.0523 & -0.0184 \\ 0.0768 & 0.0653 & -0.0770 & -0.0858 & -0.0968 & -0.0609 \\ 0.0231 & 0.0223 & -0.0053 & 0.0162 & -0.0231 & 0.0024 \\ -0.0907 & 0.0695 & 0.0366 & 0.0132 & 0.0515 & 0.0427 \\ 0.0904 & -0.0772 & -0.0733 & -0.0490 & 0.0150 & 0.0735 \end{bmatrix}$$

and then by applying the singular perturbation technique, the following reduced 3rd order model is obtained:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -0.2051 & -1.5131 & 0.6966 \\ 0 & -0.6782 & -0.0329 \\ 0 & 0 & -1.8986 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0.0341 \\ 0.0078 \\ 0.4649 \end{bmatrix} u(t)$$

$$\mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 1 \\ 0.5 & 0 & 0.5 \\ -0.2 & -0.2 & 0.2 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 0.0017 \end{bmatrix} u(t)$$

Again, the eigenvalues of the reduced-order model are preserved as a subset of those of the original system. However, as shown before, the reduced model without system transformation yields eigenvalues {-1.5165, -0.6223, -0.2060} that differ from those of the transformed reduced order model. Simulating both systems for a step input provided the results shown in Figure 8.
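The eigenvalue-subset claim is easy to verify: the transformed-and-reduced system matrix is triangular, so its eigenvalues can be read off and compared with the slow eigenvalues quoted for the original 5th-order system.

```python
import numpy as np

# Case #3 transformed-and-reduced 3rd-order system matrix.
Ar = np.array([[-0.2051, -1.5131,  0.6966],
               [ 0.0,    -0.6782, -0.0329],
               [ 0.0,     0.0,    -1.8986]])

slow_original = np.sort([-0.2055, -0.6778, -1.8992])  # from the full model
untransformed = np.sort([-1.5165, -0.6223, -0.2060])  # reduction w/o transform

eigs = np.sort(np.linalg.eigvals(Ar).real)  # Ar is triangular: its diagonal
print(np.round(eigs, 4))  # → [-1.8986 -0.6782 -0.2051]
```

The reduced poles agree with the original slow modes to about 10⁻³ (discretization effects), while the non-transformed reduction lands on visibly different poles.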

In Figure 8, it is also seen that the response of the non-transformed reduced model is better than the transformed reduced model, which is again caused by leaving the output [**C**] matrix without transformation.

## **4.1.2 LMI-based state transformation using neural identification**

As observed in the previous subsection, the system transformation without the LMI optimization method, whose objective was to preserve the system eigenvalues in the reduced model, did not provide an acceptable response compared with either the non-transformed reduced response or the original response.

As was mentioned, this was due to not transforming the complete system (i.e., the [**C**] matrix was neglected). In order to achieve a better response, we will now perform a complete system transformation utilizing the LMI optimization technique to obtain the permutation matrix [**P**] based on the transformed system matrix [ **A** ] resulting from the ANN-based identification. The following presents simulations for the previously considered tape drive system cases.
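The invariance that complete transformation relies on can be illustrated with any invertible [P] (the values below are hypothetical, not the chapter's LMI-derived matrix): transforming the whole set {A, B, C} preserves both the eigenvalues and the input/output behaviour.

```python
import numpy as np

A = np.array([[-1.0,  0.5],
              [ 0.0, -2.0]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])

P = np.array([[ 2.0, 0.3],
              [-0.4, 1.5]])        # hypothetical invertible transform
Pinv = np.linalg.inv(P)
At, Bt, Ct = P @ A @ Pinv, P @ B, C @ Pinv

# Markov parameters C A^k B characterize the I/O map and are unchanged.
markov   = [(C  @ np.linalg.matrix_power(A,  k) @ B )[0, 0] for k in range(5)]
markov_t = [(Ct @ np.linalg.matrix_power(At, k) @ Bt)[0, 0] for k in range(5)]
print(np.allclose(markov, markov_t))  # → True
```

Transforming only {A, B} while keeping C fixed, as in subsection 4.1.1, breaks this equivalence, which is why the outputs there degraded.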

Fig. 8. Reduced 3rd order models (…. transformed, -.-.-.- non-transformed) output responses to a step input along with the non-reduced ( \_\_\_\_ original) 5th order system output response.

**Case #1.** For the example of case #1 in subsection 4.1.1, the ANN identification is used now to identify only the transformed [ **Ad** ] matrix. Discretizing the system with *Ts* = 0.1 sec., using a step input with learning time *Tl* = 15 sec., and training the ANN on the input/output data with *η* = 0.001 and initial weights for the [ **Ad** ] matrix as follows:

$$w = \begin{bmatrix} 0.0286 & 0.0384 & 0.0444 & 0.0206 & 0.0191 \\ 0.0375 & 0.0440 & 0.0325 & 0.0398 & 0.0144 \\ 0.0016 & 0.0186 & 0.0307 & 0.0056 & 0.0304 \\ 0.0411 & 0.0226 & 0.0478 & 0.0287 & 0.0453 \\ 0.0327 & 0.0042 & 0.0239 & 0.0106 & 0.0002 \end{bmatrix}$$
produces the transformed system matrix:

$$
\tilde{A} = \begin{bmatrix}
0 & 0 & -0.9809 & 0.1395 & 0.4934 \\
0 & 0 & 0 & -9.9985 & 1.0449 \\
0 & 0 & 0 & 0 & -10.5764
\end{bmatrix}
$$

Based on this transformed matrix, using the LMI technique, the permutation matrix [**P**] was computed and then used for the complete system transformation. Therefore, the transformed {[ **B** ], [ **C** ], [ **D** ]} matrices were then obtained. Performing model order reduction provided the following reduced 3rd order model:

$$
\dot{\mathbf{x}}(t) = \begin{bmatrix} -0.5967 & 0.8701 & -1.4633 \\ -0.8701 & -0.5967 & 0.2276 \\ 0 & 0 & -0.9809 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 35.1670 \\ -47.3374 \\ -4.1652 \end{bmatrix} u(t)
$$

$$
\mathbf{y}(t) = \begin{bmatrix} -0.0019 & 0 & -0.0139 \\ -0.0024 & -0.0009 & -0.0088 \\ -0.0001 & 0.0004 & -0.0021 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} -0.0025 \\ -0.0025 \\ 0.0006 \end{bmatrix} u(t)
$$

where the objective of eigenvalue preservation is clearly achieved. Investigating the performance of this new LMI-based reduced order model shows that the new *completely transformed system* is better than all the previous reduced models (transformed and nontransformed). This is clearly shown in Figure 9 where the 3rd order reduced model, based on the LMI optimization transformation, provided a response that is almost the same as the 5th order original system response.

Fig. 9. Reduced 3rd order models (…. transformed without LMI, -.-.-.- non-transformed, --- transformed with LMI) output responses to a step input along with the non reduced ( \_\_\_\_ original) system output response. The LMI-transformed curve fits almost exactly on the original response.

**Case #2.** For the example of case #2 in subsection 4.1.1, with *Ts* = 0.1 sec., 200 input/output data learning points, *η* = 0.0051, and appropriately initialized weights for the [ **Ad** ] matrix, the transformed [ **A** ] was obtained and used to calculate the permutation matrix [**P**]. The complete system transformation was then performed and the reduction technique produced the following 3rd order reduced model:

$$\dot{\mathbf{x}}(t) = \begin{bmatrix} -0.6910 & 1.3088 & -3.8578 \\ -1.3088 & -0.6910 & -1.5719 \\ 0 & 0 & -0.3697 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} -0.7621 \\ -0.1118 \\ 0.4466 \end{bmatrix} u(t)$$

$$\mathbf{y}(t) = \begin{bmatrix} 0.0061 & 0.0261 & 0.0111 \\ -0.0459 & 0.0187 & -0.0946 \\ 0.0117 & 0.0155 & -0.0080 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0.0015 \\ 0.0015 \\ 0.0014 \end{bmatrix} u(t)$$

with eigenvalues preserved as desired. Simulating this reduced order model to a step input, as done previously, provided the response shown in Figure 10.

Fig. 10. Reduced 3rd order models (…. transformed without LMI, -.-.-.- non-transformed, ---- transformed with LMI) output responses to a step input along with the non reduced ( \_\_\_\_ original) system output response. The LMI-transformed curve fits almost exactly on the original response.

Here, the LMI-reduction-based technique has provided a response that is better than both the non-transformed and the non-LMI transformed reduced responses, and is almost identical to the original system response.

**Case #3.** Investigating the example of case #3 in subsection 4.1.1, with *Ts* = 0.1 sec., 200 input/output data points, *η* = 1 × 10⁻⁴, and appropriately initialized weights for [ **Ad** ], the LMI-based transformation and then order reduction were performed. Simulation results of the reduced order models and the original system are shown in Figure 11.

Fig. 11. Reduced 3rd order models (…. transformed without LMI, -.-.-.- non-transformed, ---- transformed with LMI) output responses to a step input along with the non reduced ( \_\_\_\_ original) system output response. The LMI-transformed curve fits almost exactly on the original response.

Again, the response of the reduced order model using the complete LMI-based transformation is the best among the considered reduction techniques.

## **5. The application of closed-loop feedback control on the reduced models**

Utilizing the LMI-based reduced system models that were presented in the previous section, various control techniques that can be utilized for the robust control of dynamic systems are considered in this section to achieve the desired system performance. These control methods include (a) PID control, (b) state feedback control using (1) pole placement for the desired eigenvalue locations and (2) linear quadratic regulator (LQR) optimal control, and (c) output feedback control.

## **5.1 Proportional–Integral–Derivative (PID) control**

A PID controller is a generic control loop feedback mechanism which is widely used in industrial control systems [7,10,24]. It attempts to correct the error between a measured process variable (output) and a desired set-point (input) by calculating and then providing a corrective signal that can adjust the process accordingly, as shown in Figure 12.

Fig. 12. Closed-loop feedback single-input single-output (SISO) control using a PID controller.

In the control design process, the three parameters of the PID controller {*Kp*, *Ki*, *Kd*} have to be calculated for specific process requirements such as system overshoot and settling time. Often, once they are calculated and implemented, the response of the system is still not as desired; therefore, further tuning of these parameters is needed to provide the desired control action.
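The structure being tuned can be summarized in a minimal discrete-time PID loop. The gains and the first-order plant below are hypothetical, chosen only to show the roles of the three terms:

```python
# Minimal discrete PID loop on a hypothetical first-order plant dx/dt = -x + u.
Kp, Ki, Kd = 2.0, 1.0, 0.1
dt, r = 0.01, 1.0                  # sample time and set-point
x = 0.0                            # plant state
integral = 0.0
prev_err = r - x
for _ in range(5000):              # simulate 50 seconds
    err = r - x
    integral += err * dt           # Ki acts on the accumulated error
    deriv = (err - prev_err) / dt  # Kd acts on the error rate
    u = Kp * err + Ki * integral + Kd * deriv
    prev_err = err
    x += dt * (-x + u)             # forward-Euler plant update
print(round(x, 3))                 # → 1.0 (set-point reached)
```

The integral term removes the steady-state error, while Kp and Kd shape the transient; this is what the tuning of {*Kp*, *Ki*, *Kd*} below trades off against overshoot and settling time.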

Focusing on one output of the tape-drive machine, the PID controller using the reduced order model for the desired output was investigated. Hence, the identified reduced 3rd order model is now considered for the output of the tape position at the head which is given as:

$$G(\text{s})\_{\text{original}} = \frac{0.0801\,\text{s} + 0.133}{\text{s}^3 + 2.1742\,\text{s}^2 + 2.2837\,\text{s} + 1.0919}$$

Searching for suitable values of the PID controller parameters such that the system provides a faster settling time and less overshoot, it is found that {*Kp* = 100, *Ki* = 80, *Kd* = 90} yields the controlled system given by:

$$G(\text{s})\_{\text{controlled}} = \frac{7.209 \text{s}^3 + 19.98 \text{s}^2 + 19.71 \text{s} + 10.64}{\text{s}^4 + 9.383 \text{s}^3 + 22.26 \text{s}^2 + 20.8 \text{s} + 10.64}$$
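The controlled transfer function above can be cross-checked algebraically: with the PID written as C(s) = (Kd s² + Kp s + Ki)/s and unity feedback, the closed loop is CG/(1 + CG). A minimal pure-Python sketch (the helper functions are ours, not the chapter's):

```python
# Verify the quoted closed-loop coefficients by polynomial arithmetic.
# Polynomials are coefficient lists, highest power first.

def poly_mul(a, b):
    """Product of two polynomials (discrete convolution of coefficients)."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    """Sum of two polynomials, padding the shorter one with leading zeros."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x + y for x, y in zip(a, b)]

Kp, Ki, Kd = 100.0, 80.0, 90.0
num_G = [0.0801, 0.133]                   # plant numerator
den_G = [1.0, 2.1742, 2.2837, 1.0919]     # plant denominator

num_C, den_C = [Kd, Kp, Ki], [1.0, 0.0]   # PID as (Kd s^2 + Kp s + Ki)/s

num_L = poly_mul(num_C, num_G)            # open loop L = C*G
den_L = poly_mul(den_C, den_G)
num_cl = num_L                            # closed loop L/(1+L):
den_cl = poly_add(den_L, num_L)           # denominator + numerator

print([round(c, 3) for c in num_cl])  # [7.209, 19.98, 19.708, 10.64]
print([round(c, 3) for c in den_cl])  # [1.0, 9.383, 22.264, 20.8, 10.64]
```

This matches the quoted G(s)_controlled up to the rounding used in the text (19.71 ≈ 19.708, 22.26 ≈ 22.264).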

Simulating the new PID-controlled system for a step input provided the results shown in Figure 13, where the settling time is almost 1.5 sec., while without the controller it was greater than 6 sec. As also observed, the overshoot decreased considerably after using the PID controller.

On the other hand, the other system outputs can be PID-controlled by cascading the current process PID with new tuning-based PIDs for each output. For the PID-controlled output of the tachometer shaft angle, the controlling scheme would be as shown in Figure 14. As seen in Figure 14, the output of interest (i.e., the 2nd output) is controlled as desired using the PID controller. However, this will affect the other outputs' performance, and therefore a further PID-based tuning operation must be applied.

Fig. 13. Reduced 3rd order model PID controlled and uncontrolled step responses.

Fig. 14. Closed-loop feedback single-input multiple-output (SIMO) system with a PID controller: (a) a generic SIMO diagram, and (b) a detailed SIMO diagram.

As shown in Figure 14, the tuning process is accomplished using *G*1*<sup>T</sup>* and *G*3*<sup>T</sup>*. For example, for the 1st output:

$$Y\_1 = G\_{1T}\,G\_1\,\text{PID}\,(R - Y\_2) = G\_1 R \tag{39}$$

$$\therefore \; G\_{1T} = \frac{R}{\text{PID}\,(R - Y\_2)} \tag{40}$$

where *Y*2 is the Laplace transform of the 2nd output. Similarly, *G*3*<sup>T</sup>* can be obtained.

### **5.2 State feedback control**

82 Recent Advances in Robust Control – Novel Approaches and Design Methods


In this section, we will investigate the state feedback control techniques of pole placement and the LQR optimal control for the enhancement of the system performance.

### **5.2.1 Pole placement for the state feedback control**

For the reduced order model in the system of Equations (37) - (38), a simple pole placement-based state feedback controller can be designed. For example, assuming that a controller is needed to provide the system with enhanced performance by relocating the eigenvalues, the objective can be achieved using the control input given by:

$$
u(t) = -K\tilde{x}\_r(t) + r(t) \tag{41}
$$

Robust Control Using LMI Transformation and Neural-Based Identification for

Regulating Singularly-Perturbed Reduced Order Eigenvalue-Preserved Dynamic Systems 85


where *K* is the state feedback gain designed based on the desired system eigenvalues. A state feedback control for pole placement can be illustrated by the block diagram shown in Figure 15.

Fig. 15. Block diagram of a state feedback control with {[ **Aor** ], [ **Bor** ], [ **Cor** ], [ **Dor** ]} overall reduced order system matrices.

Replacing the control input *u*(*t*) in Equations (37) - (38) by the above new control input in Equation (41) yields the following reduced system equations:

$$\dot{\tilde{\mathbf{x}}}\_{r}(t) = A\_{or}\tilde{\mathbf{x}}\_{r}(t) + B\_{or}[-K\tilde{\mathbf{x}}\_{r}(t) + r(t)] \tag{42}$$

$$y(t) = C\_{or}\tilde{\mathbf{x}}\_r(t) + D\_{or}[-K\tilde{\mathbf{x}}\_r(t) + r(t)]\tag{43}$$

which can be re-written as:

$$\dot{\tilde{\mathbf{x}}}\_{r}(t) = A\_{or}\tilde{\mathbf{x}}\_{r}(t) - B\_{or}K\tilde{\mathbf{x}}\_{r}(t) + B\_{or}r(t) \to \dot{\tilde{\mathbf{x}}}\_{r}(t) = [A\_{or} - B\_{or}K]\tilde{\mathbf{x}}\_{r}(t) + B\_{or}r(t)$$
 
$$y(t) = C\_{or}\tilde{\mathbf{x}}\_{r}(t) - D\_{or}K\tilde{\mathbf{x}}\_{r}(t) + D\_{or}r(t) \to \quad y(t) = [C\_{or} - D\_{or}K]\tilde{\mathbf{x}}\_{r}(t) + D\_{or}r(t)$$

where this is illustrated in Figure 16.

Fig. 16. Block diagram of the overall state feedback control for pole placement.
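Forming the closed-loop matrices of Figure 16 is immediate once *K* is fixed; a minimal sketch with hypothetical 2-state numbers (not the chapter's reduced model):

```python
# Closed-loop matrices for u = -K*x_r + r (hypothetical numbers),
# matching the structure A_cl = A_or - B_or*K, C_cl = C_or - D_or*K.

A_or = [[0.0, 1.0],
        [-1.0, -2.0]]
B_or = [0.0, 1.0]        # single input column
C_or = [1.0, 0.0]        # single output row
D_or = 0.0
K = [3.0, 2.0]           # assumed state feedback gain

A_cl = [[A_or[i][j] - B_or[i] * K[j] for j in range(2)] for i in range(2)]
B_cl = B_or[:]           # input matrix is unchanged by state feedback
C_cl = [C_or[j] - D_or * K[j] for j in range(2)]
D_cl = D_or

print(A_cl)   # [[0.0, 1.0], [-4.0, -4.0]]
```

With D_or = 0, the common case, the output matrix is unchanged and only [**Acl**] moves the eigenvalues.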

The overall closed-loop system model may then be written as:

$$
\dot{\tilde{\mathbf{x}}}\_r(t) = A\_{cl}\tilde{x}\_r(t) + B\_{cl}r(t) \tag{44}
$$

$$y(t) = C\_{cl}\tilde{x}\_r(t) + D\_{cl}r(t) \tag{45}$$

such that the closed loop system matrix [**Acl**] will provide the new desired system eigenvalues.

For example, for the system of case #3, the state feedback was used to re-assign the eigenvalues to {-1.89, -1.5, -1}. The state feedback control gain was then found to be *K* = [-1.2098 0.3507 0.0184], which placed the eigenvalues as desired and enhanced the system performance as shown in Figure 17.

Fig. 17. Reduced 3rd order state feedback control (for pole placement) output step response -.-.-.- compared with the original \_\_\_\_ full order system output step response.
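The gain above was computed for the chapter's reduced model of Equations (37)-(38). The mechanics of pole placement are easiest to see for a single-input system already in controllable canonical (companion) form, where feedback shifts the characteristic-polynomial coefficients directly; the sketch below uses an assumed open-loop polynomial, not the chapter's model:

```python
# Pole placement for a companion-form system (assumed numbers): with
# u = -K*x + r, the closed-loop characteristic polynomial coefficients are
# (open-loop coefficients) + (K entries), so K is just the difference
# between desired and open-loop coefficients, ordered from the lowest power.

def poly_from_roots(roots):
    """Monic polynomial coefficients, highest power first, from its roots."""
    coeffs = [1.0]
    for r in roots:
        # multiply the current polynomial by (s - r)
        coeffs = [a - r * b for a, b in zip(coeffs + [0.0], [0.0] + coeffs)]
    return coeffs

desired = poly_from_roots([-1.89, -1.5, -1.0])  # s^3 + 4.39 s^2 + 6.225 s + 2.835
open_loop = [1.0, 2.0, 1.5, 0.5]                # assumed s^3 + 2 s^2 + 1.5 s + 0.5

# K entries, lowest-power coefficient first (state x1 first): ~ [2.335, 4.725, 2.39]
K = [d - a for d, a in zip(desired[:0:-1], open_loop[:0:-1])]

# closing the loop adds K back onto the open-loop coefficients
closed = [1.0] + [a + k for a, k in zip(open_loop[:0:-1], K)][::-1]
```

For a general (non-companion) realization the same result is obtained through a similarity transformation (e.g., Ackermann's formula; see [10]).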

### **5.2.2 Linear-Quadratic Regulator (LQR) optimal control for the state feedback control**

Another method for designing a state feedback control for system performance enhancement is based on minimizing the cost function given by [10]:

$$J = \int\_{0}^{\infty} \left( x^{T} Q x + u^{T} R u \right) dt \tag{46}$$

which is defined for the system $\dot{x}(t) = Ax(t) + Bu(t)$, where *Q* and *R* are weight matrices for the states and input commands. This is known as the LQR problem, which has received special attention due to the fact that it can be solved analytically and that the resulting optimal controller is expressed in an easy-to-implement state feedback form [7,10]. The feedback control law that minimizes the value of the cost is given by:

$$
u(t) = -Kx(t) \tag{47}
$$

where $K = R^{-1}B^{T}q$ and [**q**] is found by solving the algebraic Riccati equation which is described by:

$$A^T q + qA - qBR^{-1}B^T q + Q = 0\tag{48}$$


where [**Q**] is the state weighting matrix and [**R**] is the input weighting matrix. A direct solution for the optimal control gain may be obtained using the MATLAB statement *K* = lqr(*A*, *B*, *Q*, *R*), where in our example *R* = 1, and the [**Q**] matrix was found from the output [**C**] matrix as $Q = C^{T}C$.

The LQR optimization technique is applied to the reduced 3rd order model in case #3 of subsection 4.1.2 for the system behavior enhancement. The state feedback optimal control gain was found to be *K* = [-0.0967 -0.0192 0.0027], which, when simulating the complete system for a step input, provided the normalized output response (with a normalization factor *γ* = 1.934) shown in Figure 18.

Fig. 18. Reduced 3rd order LQR state feedback control output step response -.-.-.- compared with the original \_\_\_\_ full order system output step response.

As seen in Figure 18, the optimal state feedback control has enhanced the system performance, which is essentially based on selecting proper new locations for the system eigenvalues.
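For a scalar plant, every step of the LQR chain in Equations (46)-(48) can be traced by hand, since the Riccati equation collapses to a quadratic in q. The numbers below are assumed for illustration and are unrelated to the tape-drive model:

```python
import math

# Scalar LQR sketch (assumed numbers, not the chapter's model):
# for xdot = a*x + b*u, Eq. (48) reduces to 2*a*q - (b*q)**2/R + Q = 0,
# and Eq. (47) with K = R^{-1} B^T q gives the feedback gain.

a, b = 1.0, 1.0        # deliberately unstable open loop (eigenvalue at +1)
Q, R = 4.0, 1.0

# positive root of (b*b/R)*q^2 - 2*a*q - Q = 0
q = (a + math.sqrt(a * a + b * b * Q / R)) * R / (b * b)

K = b * q / R                                  # state feedback gain, Eq. (47)
residual = 2 * a * q - (b * q) ** 2 / R + Q    # Riccati residual, Eq. (48)
closed_loop = a - b * K                        # closed-loop eigenvalue

print(round(q, 4), round(K, 4), round(closed_loop, 4))  # 3.2361 3.2361 -2.2361
```

The residual is zero by construction, and the closed-loop eigenvalue lands at -sqrt(a² + b²Q/R), i.e., the unstable plant is stabilized.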

### **5.3 Output feedback control**

The output feedback control is another way of controlling the system for certain desired system performance as shown in Figure 19 where the feedback is directly taken from the output.

Fig. 19. Block diagram of an output feedback control.


The control input is now given by $u(t) = -Ky(t) + r(t)$, where $y(t) = C\_{or}\tilde{x}\_r(t) + D\_{or}u(t)$. By applying this control to the considered system, the system equations become [7]:

$$\begin{aligned} \dot{\tilde{x}}\_r(t) &= A\_{or}\tilde{x}\_r(t) + B\_{or}[-K(C\_{or}\tilde{x}\_r(t) + D\_{or}u(t)) + r(t)] \\ &= A\_{or}\tilde{x}\_r(t) - B\_{or}KC\_{or}\tilde{x}\_r(t) - B\_{or}KD\_{or}u(t) + B\_{or}r(t) \\ &= [A\_{or} - B\_{or}KC\_{or}]\tilde{x}\_r(t) - B\_{or}KD\_{or}u(t) + B\_{or}r(t) \\ &= [A\_{or} - B\_{or}K[I + D\_{or}K]^{-1}C\_{or}]\tilde{x}\_r(t) + [B\_{or}[I + KD\_{or}]^{-1}]r(t) \end{aligned} \tag{49}$$

$$\begin{aligned} y(t) &= C\_{or}\tilde{x}\_r(t) + D\_{or}[-K\,y(t) + r(t)] \\ &= C\_{or}\tilde{x}\_r(t) - D\_{or}Ky(t) + D\_{or}r(t) \\ &= [[I + D\_{or}K]^{-1}C\_{or}]\tilde{x}\_r(t) + [[I + D\_{or}K]^{-1}D\_{or}]r(t) \end{aligned} \tag{50}$$

This leads to the overall block diagram as seen in Figure 20.

Fig. 20. An overall block diagram of an output feedback control.

Considering the reduced 3rd order model in case #3 of subsection 4.1.2 for system behavior enhancement using the output feedback control, the feedback control gain is found to be *K* = [0.5799 -2.6276 -11]. The normalized controlled system step response is shown in Figure 21, where one can observe that the system behavior is enhanced as desired.
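The closed-loop matrices in Equations (49)-(50) can be formed directly once (I + D_or K) is inverted; for a single output that inverse is a scalar. The numbers below are hypothetical, not the chapter's reduced model:

```python
# Output-feedback closed-loop sketch (hypothetical 2-state, single-output,
# single-input numbers). With u = -K*y + r and y = C*x + D*u, the scalar
# factor s = 1/(1 + D*K) plays the role of (I + D_or K)^{-1} in Eqs. (49)-(50).

A = [[0.0, 1.0],
     [-2.0, -3.0]]
B = [1.0, 1.0]       # input column
C = [1.0, 0.0]       # output row
D = 0.5
K = 2.0              # output feedback gain

s = 1.0 / (1.0 + D * K)                                    # = 0.5 here
A_cl = [[A[i][j] - B[i] * K * s * C[j] for j in range(2)]  # A - B K (I+DK)^-1 C
        for i in range(2)]
B_cl = [bi * s for bi in B]     # B (I + K D)^{-1}; K D = D K for scalars
C_cl = [s * cj for cj in C]     # (I + D K)^{-1} C
D_cl = s * D                    # (I + D K)^{-1} D

print(A_cl)   # [[-1.0, 1.0], [-3.0, -3.0]]
```

With these numbers the closed-loop matrix has trace -4 and determinant 6, so both eigenvalues are in the left half plane.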

Fig. 21. Reduced 3rd order output feedback controlled step response -.-.-.- compared with the original \_\_\_\_ full order system uncontrolled output step response.

## **6. Conclusions and future work**

In control engineering, robust control is an area that explicitly deals with uncertainty in its approach to the design of the system controller. The methods of robust control are designed to operate properly as long as disturbances or uncertain parameters are within a compact set, where robust methods aim to accomplish robust performance and/or stability in the presence of bounded modeling errors. A robust control policy is static, in contrast to the adaptive (dynamic) control policy: rather than adapting to measurements of variations, the system controller is designed to function assuming that certain variables will be unknown but, for example, bounded.

This research introduces a new method of hierarchical intelligent robust control for dynamic systems. In order to implement this control method, the order of the dynamic system was reduced. This reduction was performed by the implementation of a recurrent supervised neural network to identify certain elements [**Ac**] of the transformed system matrix [ **A** ], while the other elements [**Ar**] and [**Ao**] are set based on the system eigenvalues such that [**Ar**] contains the dominant eigenvalues (i.e., slow dynamics) and [**Ao**] contains the non-dominant eigenvalues (i.e., fast dynamics). To obtain the transformed matrix [ **A** ], the zero input response was used in order to obtain output data related to the state dynamics, based only on the system matrix [**A**]. After the transformed system matrix was obtained, the optimization algorithm of linear matrix inequality was utilized to determine the permutation matrix [**P**], which is required to complete the system transformation matrices {[ **B** ], [ **C** ], [ **D** ]}. The reduction process was then applied using the singular perturbation method, which operates on neglecting the faster-dynamics eigenvalues and leaving the dominant slow-dynamics eigenvalues to control the system. The comparison simulation results show clearly that modeling and control of the dynamic system using LMI is superior to that without using LMI. Simple feedback control methods using PID control, state feedback control utilizing (a) pole assignment and (b) LQR optimal control, and output feedback control were then applied to the reduced model to obtain the desired enhanced response of the full order system.

Future work will involve the application of new control techniques, utilizing the control hierarchy introduced in this research, such as using fuzzy logic and genetic algorithms. Future work will also involve the fundamental investigation of achieving model order reduction for dynamic systems with all eigenvalues being complex.

## **7. References**

[1] A. N. Al-Rabadi, "Artificial Neural Identification and LMI Transformation for Model Reduction-Based Control of the Buck Switch-Mode Regulator," *American Institute of Physics* (*AIP*), In: *IAENG Transactions on Engineering Technologies*, *Special Edition of the International MultiConference of Engineers and Computer Scientists 2009*, AIP Conference Proceedings 1174, Editors: Sio-Iong Ao, Alan Hoi-Shou Chan, Hideki Katagiri and Li Xu, Vol. 3, pp. 202-216, New York, U.S.A., 2009.

[2] A. N. Al-Rabadi, "Intelligent Control of Singularly-Perturbed Reduced Order Eigenvalue-Preserved Quantum Computing Systems via Artificial Neural Identification and Linear Matrix Inequality Transformation," *IAENG Int. Journal of Computer Science* (*IJCS*), Vol. 37, No. 3, 2010.

[3] P. Avitabile, J. C. O'Callahan, and J. Milani, "Comparison of System Characteristics Using Various Model Reduction Techniques," *7th International Modal Analysis Conference*, Las Vegas, Nevada, February 1989.

[4] P. Benner, "Model Reduction at ICIAM'07," *SIAM News*, Vol. 40, No. 8, 2007.

[5] A. Bilbao-Guillerna, M. De La Sen, S. Alonso-Quesada, and A. Ibeas, "Artificial Intelligence Tools for Discrete Multiestimation Adaptive Control Scheme with Model Reduction Issues," *Proc. of the International Association of Science and Technology*, *Artificial Intelligence and Application*, Innsbruck, Austria, 2004.

[6] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, *Linear Matrix Inequalities in System and Control Theory*, Society for Industrial and Applied Mathematics (SIAM), 1994.

[7] W. L. Brogan, *Modern Control Theory*, 3rd Edition, Prentice Hall, 1991.

[8] T. Bui-Thanh, and K. Willcox, "Model Reduction for Large-Scale CFD Applications Using the Balanced Proper Orthogonal Decomposition," *17th American Institute of Aeronautics and Astronautics* (*AIAA*) *Computational Fluid Dynamics Conf.*, Toronto, Canada, June 2005.

[9] J. H. Chow, and P. V. Kokotovic, "A Decomposition of Near-Optimal Regulators for Systems with Slow and Fast Modes," *IEEE Trans. Automatic Control*, AC-21, pp. 701-705, 1976.

[10] G. F. Franklin, J. D. Powell, and A. Emami-Naeini, *Feedback Control of Dynamic Systems*, 3rd Edition, Addison-Wesley, 1994.

[11] K. Gallivan, A. Vandendorpe, and P. Van Dooren, "Model Reduction of MIMO Systems via Tangential Interpolation," *SIAM Journal of Matrix Analysis and Applications*, Vol. 26, No. 2, pp. 328-349, 2004.

[12] K. Gallivan, A. Vandendorpe, and P. Van Dooren, "Sylvester Equation and Projection-Based Model Reduction," *Journal of Computational and Applied Mathematics*, 162, pp. 213-229, 2004.

[13] G. Garcia, J. Daafouz, and J. Bernussou, "H2 Guaranteed Cost Control for Singularly Perturbed Uncertain Systems," *IEEE Trans. Automatic Control*, Vol. 43, pp. 1323-1329, 1998.

[14] R. J. Guyan, "Reduction of Stiffness and Mass Matrices," *AIAA Journal*, Vol. 3, No. 2, p. 380, 1965.

[15] S. Haykin, *Neural Networks: A Comprehensive Foundation*, Macmillan Publishing Company, New York, 1994.

[16] W. H. Hayt, J. E. Kemmerly, and S. M. Durbin, *Engineering Circuit Analysis*, McGraw-Hill, 2007.

[17] G. Hinton, and R. Salakhutdinov, "Reducing the Dimensionality of Data with Neural Networks," *Science*, pp. 504-507, 2006.

[18] R. Horn, and C. Johnson, *Matrix Analysis*, Cambridge University Press, New York, 1985.

[19] S. H. Javid, "Observing the Slow States of a Singularly Perturbed System," *IEEE Trans. Automatic Control*, AC-25, pp. 277-280, 1980.

[20] H. K. Khalil, "Output Feedback Control of Linear Two-Time-Scale Systems," *IEEE Trans. Automatic Control*, AC-32, pp. 784-792, 1987.

[21] H. K. Khalil, and P. V. Kokotovic, "Control Strategies for Decision Makers Using Different Models of the Same System," *IEEE Trans. Automatic Control*, AC-23, pp. 289-297, 1978.

[22] P. Kokotovic, R. O'Malley, and P. Sannuti, "Singular Perturbation and Order Reduction in Control Theory – An Overview," *Automatica*, 12(2), pp. 123-132, 1976.

[23] C. Meyer, *Matrix Analysis and Applied Linear Algebra*, Society for Industrial and Applied Mathematics (SIAM), 2000.

[24] K. Ogata, *Discrete-Time Control Systems*, 2nd Edition, Prentice Hall, 1995.

[25] R. Skelton, M. Oliveira, and J. Han, *Systems Modeling and Model Reduction*, Invited Chapter of the Handbook of Smart Systems and Materials, Institute of Physics, 2004.

[26] M. Steinbuch, "Model Reduction for Linear Systems," *1st International MACSI-net Workshop on Model Reduction*, Netherlands, October 2001.

[27] A. N. Tikhonov, "On the Dependence of the Solution of Differential Equation on a Small Parameter," *Mat Sbornik* (*Moscow*), 22(64):2, pp. 193-204, 1948.

[28] R. J. Williams, and D. Zipser, "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks," *Neural Computation*, 1(2), pp. 270-280, 1989.

[29] J. M. Zurada, *Artificial Neural Systems*, West Publishing Company, New York, 1992.

## **Neural Control Toward a Unified Intelligent Control Design Framework for Nonlinear Systems**

Dingguo Chen1, Lu Wang2, Jiaben Yang3 and Ronald R. Mohler4 *1Siemens Energy Inc., Minnetonka, MN 55305 2Siemens Energy Inc., Houston, TX 77079 3Tsinghua University, Beijing 100084 4Oregon State University, OR 97330 1,2,4USA 3China*

## **1. Introduction**

There has been significant progress reported in nonlinear adaptive control in the last two decades or so, partially because of the introduction of neural networks (Polycarpou, 1996; Chen & Liu, 1994; Lewis, Yesidirek & Liu, 1995; Sanner & Slotine, 1992; Levin & Narendra, 1993; Chen & Yang, 2005). The reported adaptive control schemes intend to design adaptive neural controllers so that the designed controllers can help achieve the stability of the resulting systems in case of uncertainties and/or unmodeled system dynamics. It is a typical assumption that no restriction is imposed on the magnitude of the control signal. The adaptive control design is usually accompanied by a reference model, which is assumed to exist, and a parameter estimator. The parameters can be estimated within a predesignated bound with appropriate parameter projection. It is noteworthy that these design approaches are not applicable to many practical systems where there is a restriction on the control magnitude, or a reference model is not available.

On the other hand, the economic performance index is another important objective of controller design for many practical control systems. Typical performance indexes include, for instance, minimum time and minimum fuel. The optimal control theory developed a few decades ago is applicable to such systems when the system model in question, along with a performance index, is available and no uncertainties are involved. It is obvious that these optimal control design approaches are not applicable to the many practical systems that contain uncertain elements.

Motivated by the fact that many practical systems are concerned with both system stability and system economics, and encouraged by the promising picture presented by theoretical advances in neural networks (Haykin, 2001; Hopfield & Tank, 1985) and numerous application results (Nagata, Sekiguchi & Asakawa, 1990; Methaprayoon, Lee, Rasmiddatta, Liao & Ross, 2007; Pandit, Srivastava & Sharma, 2003; Zhou, Chellappa, Vaid & Jenkins, 1998; Chen & York, 2008; Irwin, Warwick & Hunt, 1995; Kawato, Uno & Suzuki, 1988; Liang, 1999; Chen & Mohler, 1997; Chen & Mohler, 2003; Chen, Mohler & Chen, 1999), this chapter aims at developing an intelligent control design framework to guide the controller design for uncertain, nonlinear systems and to address the combined challenge arising from the following:

- The designed controller is expected to stabilize the system in the presence of uncertainties in the parameters of the nonlinear systems in question.
- The designed controller is expected to stabilize the system in the presence of unmodeled system dynamics.
- The designed controller is expected to achieve the desired control target with minimum total control effort or minimum time.
- The designed controller is confined in the magnitude of the control signals.

The salient features of the proposed control design framework include: (a) achieving nearly optimal control regardless of parameter uncertainties; (b) no need for a parameter estimator, which is common in many adaptive control designs; (c) respecting the pre-designated range for the admissible control.

Several important technical aspects of the proposed intelligent control design framework will be studied:

- Hierarchical neural networks (Kawato, Uno & Suzuki, 1988; Zakrzewski, Mohler & Kolodziej, 1994; Chen, 1998; Chen & Mohler, 2000; Chen, Mohler & Chen, 2000; Chen, Yang & Mohler, 2008; Chen, Yang & Mohler, 2006) are utilized; the role of each tier of the hierarchy will be discussed, and how each tier of the hierarchical neural networks is constructed will be highlighted.
- The theoretical aspects of using hierarchical neural networks to approximately achieve optimal, adaptive control of nonlinear, time-varying systems will be studied.
- How the tessellation of the parameter space affects the resulting hierarchical neural networks will be discussed.

In summary, this chapter attempts to provide a deep understanding of what hierarchical neural networks do to optimize a desired control performance index when controlling uncertain nonlinear systems with time-varying properties; to make an insightful investigation of how hierarchical neural networks may be designed to achieve the desired level of control performance; and to create an intelligent control design framework that provides guidance for analyzing and studying the behaviors of the systems in question, and for designing hierarchical neural networks that work in a coordinated manner to optimally, adaptively control the systems.

This chapter is organized as follows: Section 2 describes several classes of uncertain nonlinear systems of interest, and mathematical formulations of the corresponding problems are presented. Some conventional assumptions are made to facilitate the analysis of the problems and the development of design procedures generic for a large class of nonlinear uncertain systems. The time-optimal control problem and the fuel-optimal control problem are analyzed, and an iterative numerical solution process is presented in Section 3. These are important elements in building a solution approach to the control problems studied in this chapter, which are in turn decomposed into a series of control problems that do not exhibit parameter uncertainties. This decomposition is vital to the proposed hierarchical neural network based control design. The details of the hierarchical neural control design methodology are given in Section 4. The synthesis of hierarchical neural controllers is to achieve (a) near-optimal control (which can be time-optimal or fuel-optimal) of the studied systems with constrained control; (b) adaptive control of the studied control systems with unknown parameters; and (c) robust control of the studied control systems with time-varying parameters. In Section 5, theoretical results are developed to justify the fuel-optimal control oriented neural control design procedures for the time-varying nonlinear systems. Finally, some concluding remarks are made.

## **2. Problem formulation**


As is known, the adaptive control design of nonlinear dynamic systems is still carried out on a case-by-case basis, even though there has been considerable progress in the adaptive control of linear dynamic systems. Even with linear systems, the conventional adaptive control schemes have common drawbacks: (a) the control usually does not consider the physical control limitations, and (b) a performance index is difficult to incorporate. This has made the adaptive control design for nonlinear systems even more challenging. With this common understanding, this Chapter is intended to address the adaptive control design for a class of nonlinear systems using neural network based techniques. The systems of interest are linear in both control and parameters, and feature time-varying, parametric uncertainties, confined control inputs, and multiple control inputs. These systems are represented by a finite dimensional differential system linear in control and linear in parameters.

The adaptive control design framework features the following:

- The adaptive, robust control is achieved by hierarchical neural networks.
- The physical control limitations, one of the difficulties that conventional adaptive control cannot handle, are reflected in the admissible control set.
- The performance measures to be incorporated in the adaptive control design, deemed a technical challenge for the conventional adaptive control schemes, that will be considered in this Chapter include:
	- Minimum time – resulting in the so-called time-optimal control
	- Minimum fuel – resulting in the so-called fuel-optimal control
	- Quadratic performance index – resulting in the quadratic performance optimal control.

Although the control performance indices are different for the above mentioned approaches, the system characterization and some key assumptions are common.

The system is mathematically represented by

$$
\dot{x} = a(x) + C(x)p + B(x)u \tag{1}
$$

where $x \in G \subseteq R^n$ is the state vector, $p \in \Omega_p \subset R^l$ is the bounded parameter vector, $u \in R^m$ is the control vector, which is confined to an admissible control set $U$, and $a(x) = [a_1(x)\ a_2(x)\ \cdots\ a_n(x)]^\tau$ is an $n$-dimensional vector function of $x$;

$$C(x) = \begin{bmatrix} C_{11}(x) & C_{12}(x) & \cdots & C_{1l}(x) \\ C_{21}(x) & C_{22}(x) & \cdots & C_{2l}(x) \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1}(x) & C_{n2}(x) & \cdots & C_{nl}(x) \end{bmatrix}$$

is an $n \times l$-dimensional matrix function of $x$, and

$$B(x) = \begin{bmatrix} B_{11}(x) & B_{12}(x) & \cdots & B_{1m}(x) \\ B_{21}(x) & B_{22}(x) & \cdots & B_{2m}(x) \\ \vdots & \vdots & \ddots & \vdots \\ B_{n1}(x) & B_{n2}(x) & \cdots & B_{nm}(x) \end{bmatrix}$$

is an $n \times m$-dimensional matrix function of $x$.
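To make Eq. (1) concrete, the following sketch simulates a hypothetical two-state instance of the dynamics with forward-Euler integration. The specific `a`, `C`, `B`, the parameter value, and the constant control below are illustrative assumptions, not the chapter's system; the control is clipped in anticipation of the admissible set $U$ of AS2.

```python
import numpy as np

# Hypothetical instance of Eq. (1): x' = a(x) + C(x)p + B(x)u with n=2, l=1, m=1.
def a(x):
    return np.array([x[1], -np.sin(x[0])])

def C(x):
    return np.array([[0.0], [x[1]]])        # n x l matrix function of x

def B(x):
    return np.array([[0.0], [1.0]])         # n x m matrix function of x

def simulate(x0, p, u_fn, t0=0.0, tf=1.0, dt=1e-3):
    """Forward-Euler rollout of the parameterized dynamics."""
    x, t = np.asarray(x0, dtype=float), t0
    while t < tf:
        u = np.clip(u_fn(t, x), -1.0, 1.0)  # confine control to |u_i| <= 1
        x = x + dt * (a(x) + C(x) @ p + B(x) @ u)
        t += dt
    return x

xf = simulate([0.1, 0.0], p=np.array([0.2]), u_fn=lambda t, x: np.array([0.5]))
```

The rollout is only meant to show how $a$, $C p$, and $B u$ combine additively; any integrator could replace the Euler step.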


The control objective is to follow a theoretically sound control design methodology to design the controller such that the system is adaptively controlled with respect to parametric uncertainties and yet minimizing a desired control performance.

To facilitate the theoretical derivations, several conventional assumptions are made in the following and applied throughout the Chapter.

AS1: It is assumed that $a(\cdot)$, $C(\cdot)$ and $B(\cdot)$ have continuous partial derivatives with respect to the state variables on the region of interest. In other words, $a_i(x)$, $C_{is}(x)$, $B_{ik}(x)$, $\frac{\partial a_i(x)}{\partial x_j}$, $\frac{\partial C_{is}(x)}{\partial x_j}$, and $\frac{\partial B_{ik}(x)}{\partial x_j}$ for $i,j = 1,2,\dots,n$; $k = 1,2,\dots,m$; $s = 1,2,\dots,l$ exist and are continuous and bounded on the region of interest.

It should be noted that the above conditions imply that $a(\cdot)$, $C(\cdot)$ and $B(\cdot)$ satisfy the Lipschitz condition, which in turn implies that there always exists a unique and continuous solution to the differential equation given an initial condition $x(t_0) = \xi_0$ and a bounded control $u(t)$.

AS2: In practical applications, control effort is usually confined due to the limitation of design or conditions corresponding to physical constraints. Without loss of generality, assume that the admissible control set *U* is characterized by:

$$U = \left\{ u : |u_i| \le 1,\ i = 1,2,\dots,m \right\} \tag{2}$$

where $u_i$ is the $i$-th component of $u$.

AS3: It is assumed that the system is controllable.

AS4: Some control performance criterion $J$ may relate to the initial time $t_0$ and the final time $t_f$. The cost functional reflects the requirement of a particular type of optimal control.

AS5: The target set $\theta_f$ is defined as $\theta_f = \{x : \psi(x(t_f)) = 0\}$, where the $\psi_i$'s ($i = 1,2,\dots,q$) are the components of the continuously differentiable function vector $\psi(\cdot)$.

Remark 1: As a step of our approach to the control design for the system (1), the same control problem is studied with the only difference that the parameters in Eq. (1) are given. An optimal solution is sought to the following control problem:

The optimal control problem ($P_0$) consists of the system equation (1) with fixed and known parameter vector $p$, the initial time $t_0$, the variable final time $t_f$, and the initial state $x_0 = x(t_0)$, together with the assumptions AS1, AS2, AS3, AS4, AS5 satisfied, such that the system state is conducted to a pre-specified terminal set $\theta_f$ at the final time $t_f$ while the control performance index is minimized.

AS6: There do not exist singular solutions to the optimal control problem ($P_0$) as described in Remark 1 (referred to as the control problem ($P_0$) later on, distinct from the original control problem ($P$)).

AS7: $\frac{\partial x}{\partial p}$ is bounded on $p \in \Omega_p$ and $x \in \Omega_x$.

Remark 2: For any continuous function $f(x)$ defined on the compact domain $\Omega_x \subset R^n$, there exists a neural network characterized by $NN_f(x)$ such that for any positive number $\varepsilon_f$,

$$|f(x) - NN_f(x)| < \varepsilon_f.$$
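Remark 2 can be made tangible with a small numerical experiment: a single hidden tanh layer with fixed random weights, whose linear output layer is fit by least squares, drives the sup-norm error on a grid below a small tolerance. The network size, weight scales, and target function below are illustrative assumptions, not the training scheme used later in the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 200)[:, None]
f = np.sin(np.pi * xs).ravel()              # continuous target on a compact domain

# One hidden tanh layer with fixed random weights; only the linear output
# layer is fit (a random-feature approximation of NN_f).
W = rng.normal(scale=4.0, size=(1, 100))
b = rng.normal(scale=2.0, size=100)
H = np.tanh(xs @ W + b)                     # hidden-layer activations
c, *_ = np.linalg.lstsq(H, f, rcond=None)   # least-squares output weights
err = float(np.max(np.abs(H @ c - f)))      # sup-norm error |f(x) - NN_f(x)|
```

Increasing the hidden width typically drives `err` below any prescribed $\varepsilon_f$, in line with the universal approximation property the remark invokes.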


AS8: Let the sufficiently trained neural network be denoted by $NN(x, \Theta_s)$, and the neural network with the ideal weights and biases by $NN(x, \Theta^*)$, where $\Theta_s$ and $\Theta^*$ designate the parameter vectors comprising the weights and biases of the corresponding neural networks. The approximation of $NN_f(x, \Theta_s)$ to $NN_f(x, \Theta^*)$ is measured by $\delta NN_f(x; \Theta_s; \Theta^*) = |NN_f(x, \Theta_s) - NN_f(x, \Theta^*)|$. Assume that $\delta NN_f(x; \Theta_s; \Theta^*)$ is bounded by a pre-designated number $\varepsilon_s > 0$, i.e., $\delta NN_f(x; \Theta_s; \Theta^*) < \varepsilon_s$.

AS9: The total number of switch times for all control components for the studied fuel-optimal control problem is greater than the number of state variables.

Remark 3: AS9 holds for practical systems, to the best knowledge of the authors. The assumption is made for rigor in the theoretical results developed in this Chapter.

### **2.1 Time-optimal control**

For the time-optimal control problem, the system characterization, the control objective, constraints remain the same as for the generic control problem with the exception that the control performance index reflected in the Assumption AS4 is replaced with the following:

AS4: The control performance criteria is 0 1 *f t t J ds* <sup>=</sup> ∫ where 0*t* and *<sup>f</sup> <sup>t</sup>* are the initial time and the

final time, respectively. The cost functional reflects the requirement of time-optimal control.

### **2.2 Fuel-optimal control**

For the fuel-optimal control problem, the system characterization, the control objective, constraints remain the same as for the time-optimal control problem with the Assumption AS4 replaced with the following:

AS4: The control performance criterion is $J = \int_{t_0}^{t_f} \left[ e_0 + \sum_{k=1}^{m} e_k |u_k| \right] ds$, where $t_0$ and $t_f$ are the initial time and the final time, respectively, and the $e_k$ ($k = 0,1,2,\dots,m$) are non-negative constants. The cost functional reflects the requirement of fuel-optimal control as related to the integration of the absolute control effort of each control variable over time.
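The fuel index above is straightforward to evaluate numerically on a sampled control trajectory. The sketch below uses the trapezoidal rule; the weights and the bang-off control history are illustrative assumptions.

```python
import numpy as np

# Discrete approximation of J = int_{t0}^{tf} [e0 + sum_k e_k |u_k|] ds.
def fuel_cost(ts, us, e0, e):
    integrand = e0 + np.abs(us) @ e          # us: (N, m) samples, e: (m,) weights
    # trapezoidal rule over the (possibly non-uniform) time grid
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(ts)))

ts = np.linspace(0.0, 2.0, 201)
# hypothetical two-channel control: one bang-bang channel, one idle channel
us = np.stack([np.sign(np.sin(3.0 * ts)), np.zeros_like(ts)], axis=1)
J = fuel_cost(ts, us, e0=0.1, e=np.array([1.0, 1.0]))
```

Because $|u_1| = 1$ almost everywhere on $[0, 2]$ here, the result sits near $(e_0 + e_1) \cdot 2 = 2.2$, which is a handy sanity check on the quadrature.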

### **2.3 Optimal control with quadratic performance index**

For the quadratic performance index based optimal control problem, the system characterization, the control objective, constraints remain the same with the Assumption AS4 replaced with the following:

AS4: The control performance criterion is

$$J = \frac{1}{2}\big(x(t_f) - r(t_f)\big)^\tau S(t_f)\big(x(t_f) - r(t_f)\big) + \frac{1}{2}\int_{t_0}^{t_f} \big[x^\tau Q x + (u - u_e)^\tau R (u - u_e)\big]\, ds$$

where $t_0$ and $t_f$ are the initial time and the final time, respectively; $S(t_f) \ge 0$, $Q \ge 0$, and $R \ge 0$ with appropriate dimensions; and the desired final state $r(t_f)$ is specified as the equilibrium $x_e$, and $u_e$ is the equilibrium control.
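As a numerical sanity check, the quadratic index can be evaluated on a sampled trajectory with the trapezoidal rule. The weighting matrices and the two-sample toy trajectory below are illustrative assumptions, not values from the chapter.

```python
import numpy as np

# Quadratic performance index of Section 2.3 on a sampled trajectory:
# J = 0.5*(x(tf)-r(tf))^T S_f (x(tf)-r(tf))
#     + 0.5*int [x^T Q x + (u-u_e)^T R (u-u_e)] ds
def quadratic_cost(ts, xs, us, r_f, S_f, Q, R, u_e):
    e_f = xs[-1] - r_f                        # terminal tracking error
    terminal = 0.5 * e_f @ S_f @ e_f
    du = us - u_e
    integrand = 0.5 * (np.einsum('ti,ij,tj->t', xs, Q, xs)
                       + np.einsum('ti,ij,tj->t', du, R, du))
    running = float(np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(ts)))
    return terminal + running

ts = np.array([0.0, 1.0])
xs = np.array([[1.0, 0.0], [0.0, 0.0]])       # state decays onto the target
us = np.array([[0.0], [0.0]])
J = quadratic_cost(ts, xs, us, r_f=np.zeros(2), S_f=np.eye(2),
                   Q=np.eye(2), R=np.eye(1), u_e=np.zeros(1))
```

With the trajectory ending exactly at $r(t_f)$ the terminal term vanishes, so only the running state cost contributes here.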

## **3. Numerical solution schemes to the optimal control problems**

To solve for the optimal control, mathematical derivations are presented below for each of the above optimal control problems to show that the resulting equations represent a Hamiltonian system, which is usually a coupled two-point boundary-value problem (TPBVP) whose analytic solution is, to the best of our knowledge, not available. It is worth noting that in the solution process the parameter is assumed to be fixed.
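To make the TPBVP structure tangible, here is a minimal sketch (assuming SciPy's `solve_bvp`) for a toy double integrator with the unconstrained cost $J = \frac{1}{2}\int u^2\, dt$, where Pontryagin's principle gives $u^* = -\lambda_2$. This toy problem is an assumption for illustration only; it is not the chapter's constrained, parameter-uncertain system.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hamiltonian system for x1' = x2, x2' = u, J = 0.5*int u^2 dt, u* = -lambda2:
# states and costates are coupled, with boundary data split between t0 and tf.
def odes(t, y):
    x1, x2, l1, l2 = y
    return np.vstack([x2, -l2, np.zeros_like(l1), -l1])

def bc(ya, yb):
    # x(0) = (0, 0) and x(1) = (1, 0); the costates carry no boundary data
    return np.array([ya[0], ya[1], yb[0] - 1.0, yb[1]])

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
# analytic solution of this toy problem: x1(t) = 3t^2 - 2t^3
```

Note how the boundary conditions are split between the two ends, which is exactly what makes the Hamiltonian system a TPBVP rather than an initial-value problem.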

### **3.1 Numerical solution scheme to the time optimal control problem**

By assumption AS4, the optimal control performance index can be expressed as

$$J(t\_0) = \int\_{t\_0}^{t\_f} 1 dt$$

where $t_0$ is the initial time, and $t_f$ is the final time.

Define the Hamiltonian function as

$$H(x, u, \lambda, t) = 1 + \lambda^\tau \big(a(x) + C(x)p + B(x)u\big)$$

where $\lambda = [\lambda_1\ \lambda_2\ \cdots\ \lambda_n]^\tau$ is the costate vector.

The final-state constraint is $\psi(x(t_f)) = 0$, as mentioned before.

The state equation can be expressed as

$$\dot{x} = \frac{\partial H}{\partial \lambda} = a(x) + C(x)p + B(x)u, \quad t \ge t_0$$

The costate equation can be written as

$$-\dot{\lambda} = \frac{\partial H}{\partial x} = \frac{\partial \big(a(x) + C(x)p + B(x)u\big)^\tau}{\partial x}\,\lambda, \quad t \le t_f$$

The Pontryagin minimum principle is applied in order to derive the optimal control (Lee & Markus, 1967). That is,

$$H(x^*, u^*, \lambda^*, t) \le H(x^*, u, \lambda^*, t)$$

for all admissible $u$, where $u^*$, $x^*$ and $\lambda^*$ correspond to the optimal solution. Consequently,

$$\lambda^{*\tau} \sum_{k=1}^{m} B_k(x^*)\, u_k^* \le \lambda^{*\tau} \sum_{k=1}^{m} B_k(x^*)\, u_k$$

where $B_k(x)$ is the $k$-th column of $B(x)$.


Since the control components *uk* 's are all independent, the minimization of 1 ( ) *<sup>m</sup> k k <sup>k</sup> B xu* τ λ ∑ <sup>=</sup> is equivalent to the minimization of ( ) *B xu k k* τ λ.

The optimal control can be expressed as \* \* sgn( ( )) *u st k k* = − , where sgn(.) is the sign function defined as sgn( ) 1 *t* = if 0 *t* > or sgn( ) 1 *t* = − if 0 *t* < ; and ( ) ( ) *k k st Bx* τ = λ is the *k* th component of the switch vector ( ) ( ) *St Bx* τ = λ.

It is observed that the resulting Hamiltonian system is a coupled two-point boundary-value problem, and its analytic solution is not available in general.

With assumption AS6 satisfied, it is observed from the derivation of the optimal time control that the control problem ( *P*<sup>0</sup> ) has bang-bang control solutions.
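A minimal numerical sketch of this switching law (the matrices below are illustrative assumptions, not one of the chapter's example systems):

```python
import numpy as np

# Time-optimal bang-bang law: u_k* = -sgn(s_k(t)), with switch vector
# S(t) = B(x)^T @ lam.  B and lam below are illustrative values only.
def bang_bang_control(B, lam):
    s = B.T @ lam        # switch vector S(t), one component per control
    return -np.sign(s)   # component-wise bang-bang control

B = np.array([[1.0, 0.0],
              [0.5, -1.0]])      # n = 2 states, m = 2 controls (assumed)
lam = np.array([0.2, -0.8])      # current costate (assumed)
u_star = bang_bang_control(B, lam)   # array([ 1., -1.])
```

Each component of the control saturates at one of its bounds, switching only where its switch function $s_k(t)$ changes sign.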

Consider the following cost functional:

$$J = \int_{t_0}^{t_f} 1\, dt + \sum_{i=1}^{q} \rho_i\, \psi_i^2\big(x(t_f)\big)$$

where the $\rho_i$'s are positive constants, the $\psi_i$'s are the components of the defining equation of the target set $\theta_f = \{x : \psi(x(t_f)) = 0\}$ to which the system state is transferred by means of proper control, and $q$ is the number of components in $\psi$.

It is observed that the system described by Eq. (1) is a nonlinear system but linear in control. With assumption AS6, the requirements for applying the Switching-Time-Variation Method (STVM) are met. The optimal switching-time vector can be obtained by using a gradient-based method. The convergence of the STVM is guaranteed if there are no singular solutions.

Note that the cost functional can be rewritten as follows:

$$J = \int_{t_0}^{t_f} \big[a_0'(x) + \langle b_0'(x),\, u\rangle\big]\, dt$$

where $a_0'(x) = 1 + 2\sum_{i=1}^{q}\rho_i \psi_i \left\langle \frac{\partial \psi_i}{\partial x},\, a(x) + C(x)p \right\rangle$, $b_0'(x) = 2\sum_{i=1}^{q}\rho_i \psi_i\, B^{\tau}(x)\frac{\partial \psi_i}{\partial x}$, and $a(x)$, $C(x)$, $p$ and $B(x)$ are as given in the control problem ($P_0$). Define a new state variable $x_0(t)$ as follows:

$$x_0(t) = \int_{t_0}^{t} \big[a_0'(x) + \langle b_0'(x),\, u\rangle\big]\, dt$$

Define the augmented state vector $\underline{x} = \big[x_0\ \ x^{\tau}\big]^{\tau}$, $\underline{a}(\underline{x}) = \big[a_0'(x)\ \ (a(x) + C(x)p)^{\tau}\big]^{\tau}$, and $\underline{B}(\underline{x}) = \big[b_0'(x)\ \ (B(x))^{\tau}\big]^{\tau}$.

The system equation can be rewritten in terms of the augmented state vector as

$$\dot{\underline{x}} = \underline{a}(\underline{x}) + \underline{B}(\underline{x})u \quad \text{where} \quad \underline{x}(t_0) = \big[0\ \ x(t_0)^{\tau}\big]^{\tau}$$
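The augmentation can be exercised on a toy system; in the sketch below the dynamics and the choice $a_0'(x) = 1$, $b_0'(x) = 0$ (a pure time cost) are assumptions for illustration, under which the augmented component $x_0$ simply accumulates elapsed time:

```python
import numpy as np

# Toy state augmentation: x_aug = [x0, x], with x0 accumulating the running
# cost a0'(x) + <b0'(x), u>.  Dynamics a(x) = -x, B(x) = I are placeholders.
def augmented_rhs(x_aug, u):
    x = x_aug[1:]
    a0p, b0p = 1.0, np.zeros_like(u)     # pure time cost: a0' = 1, b0' = 0
    dx0 = a0p + b0p @ u                  # running-cost rate
    dx = -x + np.eye(len(x)) @ u         # placeholder dynamics a(x) + B(x)u
    return np.concatenate(([dx0], dx))

x_aug = np.array([0.0, 1.0, -1.0])       # x0(t0) = 0; x(t0) assumed
dt = 0.01
for _ in range(100):                     # Euler integration over [0, 1]
    x_aug = x_aug + dt * augmented_rhs(x_aug, np.zeros(2))
# with a0' = 1, the augmented component x0 equals the elapsed time (= 1)
```

The point of the construction is that the integral cost becomes a terminal value of the augmented state, so gradient information about the cost can be propagated through the augmented dynamics.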


A Hamiltonian system can be constructed for the above state equation with the costate equation given by

$$\dot{\lambda} = -\frac{\partial}{\partial \underline{x}}\big(\underline{a}(\underline{x}) + \underline{B}(\underline{x})u\big)^{\tau}\lambda \quad \text{where} \quad \lambda(t_f) = \frac{\partial J}{\partial \underline{x}}\Big|_{\underline{x}(t_f)}$$

It has been shown (Moon, 1969; Mohler, 1973; Mohler, 1991) that the number of optimal switching times must be finite provided that no singular solutions exist. Let the zeros of $-s_k(t)$ be $\tau_{k,j}^{+}$ ($j = 1, 2, \cdots, 2N_k^{+}$, $k = 1, 2, \cdots, m$; and $\tau_{k,j_1}^{+} < \tau_{k,j_2}^{+}$ for $1 \le j_1 < j_2 \le 2N_k^{+}$).

$$\mu\_k^\*(t) = \sum\_{j=1}^{N\_k^+} [\text{sgn}(t - \tau\_{k,2j-1}^+) - \text{sgn}(t - \tau\_{k,2j}^+)].$$

Let the switch vector for the $k$th component of the control vector be $\tau^{N_k} = \tau^{N_k^+}$ where $\tau^{N_k^+} = \big[\tau_{k,1}^{+}\ \cdots\ \tau_{k,2N_k^+}^{+}\big]^{\tau}$. Let $N_k = 2N_k^{+}$. Then $\tau^{N_k}$ is the switching vector of $N_k$ dimensions.

Let the vector of switch functions for the control variable $u_k$ be defined as $\phi^{N_k} = \big[\phi_1^{N_k}\ \ \phi_2^{N_k}\ \cdots\ \phi_{N_k}^{N_k}\big]$ where $\phi_j^{N_k} = (-1)^{j-1}\, s_k(\tau_{k,j}^{+})$ ($j = 1, 2, \cdots, 2N_k^{+}$).

The gradient that can be used to update the switching vector $\tau^{N_k}$ can be given by

$$\nabla_{\tau^{N_k}} J = -\phi^{N_k}$$

The optimal switching vector can be obtained iteratively by using a gradient-based method.

$$\tau^{N_k,\,i+1} = \tau^{N_k,\,i} + K^{k,i}\,\phi^{N_k}$$

where $K^{k,i}$ is a properly chosen $N_k \times N_k$-dimensional diagonal matrix with non-negative entries for the $i$th iteration of the iterative optimization process; and $\tau^{N_k,i}$ represents the $i$th iterate of the switching vector $\tau^{N_k}$.

Remark 4: The choice of the step sizes, as characterized by the matrix $K^{k,i}$, must balance two facts: if the step size is chosen too small, the solution may converge very slowly; if it is chosen too large, the solution may not converge. Instead of using the gradient descent method, which is relatively slow, one may employ alternatives such as methods based on Newton's method with inversion of the Hessian using conjugate gradient techniques.

When the optimal switching vectors are determined upon convergence, the optimal control trajectories and the optimal state trajectories are computed. This process will be repeated for all selected nominal cases until all needed off-line optimal control and state trajectories are obtained. These trajectories will be used in training the time-optimal control oriented neural networks.
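One iteration of this update rule can be sketched numerically; the switch function $s_k$, the initial switching times, and the step-size matrix below are illustrative stand-ins, and $\phi_j = (-1)^{j-1} s_k(\tau_j)$ is my reading of the switch-function definition:

```python
import numpy as np

# One gradient-style update of the switching times: tau <- tau + K @ phi,
# with phi_j = (-1)**(j-1) * s_k(tau_j).  The switch function s_k and all
# numeric values are illustrative stand-ins, not taken from the chapter.
def s_k(t):
    return (t - 1.0) * (t - 2.0) * (t - 3.0)   # assumed switch function

def stvm_step(tau, K):
    phi = np.array([(-1) ** j * s_k(t) for j, t in enumerate(tau)])
    return tau + K @ phi                        # tau^{i+1} = tau^i + K phi

tau = np.array([0.9, 2.2, 2.8])   # current guesses for the switching times
K = 0.5 * np.eye(3)               # diagonal, non-negative step-size matrix
tau_next = stvm_step(tau, K)
```

In the actual STVM iteration this step is repeated, with $s_k$ re-evaluated from forward/backward integrations of the state and costate at each pass, until the switch-function values at the switching times vanish.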

### **3.2 Numerical solution scheme to the fuel optimal control problem**

By assumption AS4, the optimal control performance index can be expressed as

$$J(t_0) = \int_{t_0}^{t_f} \Big[ e_0 + \sum_{k=1}^{m} e_k\,|u_k| \Big]\, dt$$

where $t_0$ is the initial time, and $t_f$ is the final time.

Define the Hamiltonian function as


$$H(x, u, t) = e_0 + \sum_{k=1}^{m} e_k\,|u_k| + \lambda^{\tau}\big(a(x) + C(x)p + B(x)u\big)$$

where $\lambda = [\lambda_1\ \lambda_2\ \cdots\ \lambda_n]^{\tau}$ is the costate vector.

The final-state constraint is $\psi(x(t_f)) = 0$ as mentioned before.

The state equation can be expressed as

$$\dot{x} = \frac{\partial H}{\partial \lambda} = a(x) + C(x)p + B(x)u, \quad t \ge t_0$$

The costate equation can be written as

$$-\dot{\lambda} = \frac{\partial H}{\partial x} = \frac{\partial\big(a(x) + C(x)p + B(x)u\big)^{\tau}}{\partial x}\,\lambda + \frac{\partial\big(e_0 + \sum_{k=1}^{m} e_k |u_k|\big)}{\partial x} = \frac{\partial\big(a(x) + C(x)p + B(x)u\big)^{\tau}}{\partial x}\,\lambda, \quad t \le T$$

The Pontryagin minimum principle is applied in order to derive the optimal control (Lee & Markus, 1967). That is,

$H(x^*, u^*, \lambda^*, t) \le H(x^*, u, \lambda^*, t)$ for all admissible $u$, where $u^*$, $x^*$ and $\lambda^*$ correspond to the optimal solution.

Consequently,

$$\sum_{k=1}^{m} e_k\,|u_k^*| + \lambda^{*\tau}\sum_{k=1}^{m} B_k(x^*)\,u_k^* \;\le\; \sum_{k=1}^{m} e_k\,|u_k| + \lambda^{*\tau}\sum_{k=1}^{m} B_k(x^*)\,u_k$$

where $B_k(x)$ is the $k$th column of $B(x)$.

Since the control components $u_k$'s are all independent, the minimization of $\sum_{k=1}^{m}\big(e_k |u_k| + \lambda^{\tau} B_k(x) u_k\big)$ is equivalent to the component-wise minimization of $e_k |u_k| + \lambda^{\tau} B_k(x) u_k$. Since $e_k \ne 0$, define $s_k = \lambda^{\tau} B_k(x)/e_k$. The fuel-optimal control satisfies the following condition:

$$u_k^* = \begin{cases} -\mathrm{sgn}(s_k^*(t)), & |s_k^*(t)| > 1 \\ 0, & |s_k^*(t)| < 1 \\ \text{undefined}, & |s_k^*(t)| = 1 \end{cases}$$
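A minimal sketch of this bang-off-bang law (the switch values are illustrative, and the theoretically undefined case $|s_k^*| = 1$ is avoided):

```python
import numpy as np

# Fuel-optimal bang-off-bang law: -sgn(s) where |s| > 1, and 0 where |s| < 1.
# The sample switch values are illustrative; |s| = 1 (undefined) is avoided.
def fuel_optimal_control(s):
    s = np.asarray(s, dtype=float)
    return np.where(np.abs(s) > 1.0, -np.sign(s), 0.0)

u_star = fuel_optimal_control([2.5, -1.7, 0.3, -0.4])
# -> array([-1.,  1.,  0.,  0.]): saturated where |s| > 1, off where |s| < 1
```

The dead zone $|s_k^*| < 1$ is what distinguishes the fuel-optimal solution from the time-optimal one: control effort is spent only where the costate signal is strong enough to justify it.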


Note that the above optimal control can be written in a different form as follows:

$$u_k^* = u_k^{*+} + u_k^{*-}$$

where $u_k^{*+} = -\frac{1}{2}\big[\mathrm{sgn}(s_k^*(t) + 1) - 1\big]$, and $u_k^{*-} = -\frac{1}{2}\big[\mathrm{sgn}(s_k^*(t) - 1) + 1\big]$.
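The decomposition can be checked numerically. The split below is a reconstruction chosen to be sign-consistent with the bang-off-bang condition ($u_k^{*+}$ fires $+1$ exactly when $s_k^* < -1$, $u_k^{*-}$ fires $-1$ exactly when $s_k^* > 1$); the sample values are illustrative:

```python
import numpy as np

# Check that u* = u*+ + u*- reproduces the bang-off-bang law -sgn(s) for
# |s| > 1 and 0 for |s| < 1.  The split used here is a reconstruction:
#   u*+ = -(sgn(s + 1) - 1)/2   and   u*- = -(sgn(s - 1) + 1)/2
s = np.array([2.5, -1.7, 0.3, -0.4])          # illustrative switch values
u_plus = -0.5 * (np.sign(s + 1.0) - 1.0)      # positive-control part
u_minus = -0.5 * (np.sign(s - 1.0) + 1.0)     # negative-control part
u_total = u_plus + u_minus
bang_off_bang = np.where(np.abs(s) > 1.0, -np.sign(s), 0.0)
# the two expressions agree componentwise on these values
```

Separating the positive and negative pulses this way is what allows distinct switching-time sequences $\tau_{k,j}^{+}$ and $\tau_{k,j}^{-}$ to be optimized independently below.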

It is observed that the resulting Hamiltonian system is a coupled two-point boundary-value problem, and its analytic solution is not available in general.

With assumption AS6 satisfied, it is observed from the derivation of the optimal fuel control that the control problem ( *P*<sup>0</sup> ) only has bang-off-bang control solutions.

Consider the following cost functional:

$$J = \int_{t_0}^{t_f} \Big[ e_0 + \sum_{k=1}^{m} e_k\,|u_k| \Big]\, dt + \sum_{i=1}^{q} \rho_i\, \psi_i^2\big(x(t_f)\big)$$

where the $\rho_i$'s are positive constants, the $\psi_i$'s are the components of the defining equation of the target set $\theta_f = \{x : \psi(x(t_f)) = 0\}$ to which the system state is transferred by means of proper control, and $q$ is the number of components in $\psi$.

It is observed that the system described by Eq. (1) is a nonlinear system but linear in control. With assumption AS6, the requirements for the STVM's application are met. The optimal switching-time vector can be obtained by using a gradient-based method. The convergence of the STVM is guaranteed if there are no singular solutions.

Note that the cost functional can be rewritten as follows:

$$J = \int_{t_0}^{t_f} \Big[ a_0'(x) + \langle b_0'(x),\, u\rangle + \sum_{k=1}^{m} e_k\,|u_k| \Big]\, dt$$

where $a_0'(x) = e_0 + 2\sum_{i=1}^{q}\rho_i \psi_i \left\langle \frac{\partial \psi_i}{\partial x},\, a(x) + C(x)p \right\rangle$, $b_0'(x) = 2\sum_{i=1}^{q}\rho_i \psi_i\, B^{\tau}(x)\frac{\partial \psi_i}{\partial x}$, and $a(x)$, $C(x)$, $p$ and $B(x)$ are as given in the control problem ($P_0$). Define a new state variable $x_0(t)$ as follows:

$$x_0(t) = \int_{t_0}^{t} \Big[ a_0'(x) + \langle b_0'(x),\, u\rangle + \sum_{k=1}^{m} e_k\,|u_k| \Big]\, dt$$

Define the augmented state vector $\underline{x} = \big[x_0\ \ x^{\tau}\big]^{\tau}$, $\underline{a}(\underline{x}) = \big[a_0'(x)\ \ (a(x) + C(x)p)^{\tau}\big]^{\tau}$, and $\underline{B}(\underline{x}) = \big[b_0'(x)\ \ (B(x))^{\tau}\big]^{\tau}$.

The system equation can be rewritten in terms of the augmented state vector as

$$\dot{\underline{x}} = \underline{a}(\underline{x}) + \underline{B}(\underline{x})u \quad \text{where} \quad \underline{x}(t_0) = \big[0\ \ x(t_0)^{\tau}\big]^{\tau}.$$

The adjoint state equation can be written as


$$\dot{\lambda} = -\frac{\partial}{\partial \underline{x}}\big(\underline{a}(\underline{x}) + \underline{B}(\underline{x})u\big)^{\tau}\lambda \quad \text{where} \quad \lambda(t_f) = \frac{\partial J}{\partial \underline{x}}\Big|_{\underline{x}(t_f)}.$$

It has been shown (Moon, 1969; Mohler, 1973; Mohler, 1991) that the number of optimal switching times must be finite provided that no singular solutions exist. Let the zeros of $-s_k(t) - 1$ be $\tau_{k,j}^{+}$ ($j = 1, 2, \cdots, 2N_k^{+}$, $k = 1, 2, \cdots, m$; and $\tau_{k,j_1}^{+} < \tau_{k,j_2}^{+}$ for $1 \le j_1 < j_2 \le 2N_k^{+}$), which represent the switching times corresponding to positive control $u_k^{*+}$, and the zeros of $-s_k(t) + 1$ be $\tau_{k,j}^{-}$ ($j = 1, 2, \cdots, 2N_k^{-}$, $k = 1, 2, \cdots, m$; and $\tau_{k,j_1}^{-} < \tau_{k,j_2}^{-}$ for $1 \le j_1 < j_2 \le 2N_k^{-}$), which represent the switching times corresponding to negative control $u_k^{*-}$. Altogether the $\tau_{k,j}^{+}$'s and $\tau_{k,j}^{-}$'s represent the switching times, which uniquely determine $u_k^*$ as follows:

$$u_k^*(t) = \frac{1}{2}\Bigg\{ \sum_{j=1}^{N_k^+}\big[\mathrm{sgn}(t - \tau_{k,2j-1}^{+}) - \mathrm{sgn}(t - \tau_{k,2j}^{+})\big] - \sum_{j=1}^{N_k^-}\big[\mathrm{sgn}(t - \tau_{k,2j-1}^{-}) - \mathrm{sgn}(t - \tau_{k,2j}^{-})\big] \Bigg\}$$

where $k = 1, 2, \cdots, m$.
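The pulse-train form can be evaluated directly; the switching times below are illustrative (one positive pulse and one negative pulse):

```python
import numpy as np

# Evaluate u_k*(t) = 0.5*( sum over positive pulses of [sgn(t - t_on) -
# sgn(t - t_off)] minus the same sum over negative pulses ).
# tp / tm hold the positive / negative switching times (illustrative values).
def pulse_train(t, tp, tm):
    pos = sum(np.sign(t - tp[2 * j]) - np.sign(t - tp[2 * j + 1])
              for j in range(len(tp) // 2))
    neg = sum(np.sign(t - tm[2 * j]) - np.sign(t - tm[2 * j + 1])
              for j in range(len(tm) // 2))
    return 0.5 * (pos - neg)

tp = [1.0, 2.0]    # one positive pulse on (1, 2)
tm = [3.0, 4.0]    # one negative pulse on (3, 4)
u_vals = [pulse_train(t, tp, tm) for t in (1.5, 2.5, 3.5)]  # [1.0, 0.0, -1.0]
```

Between pulses the expression evaluates to zero, so the representation reproduces the bang-off-bang structure with the control parameterized entirely by its switching times.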

Let the switch vector for the $k$th component of the control vector be $\tau^{N_k} = \big[(\tau^{N_k^+})^{\tau}\ \ (\tau^{N_k^-})^{\tau}\big]^{\tau}$ where $\tau^{N_k^+} = \big[\tau_{k,1}^{+}\ \cdots\ \tau_{k,2N_k^+}^{+}\big]^{\tau}$ and $\tau^{N_k^-} = \big[\tau_{k,1}^{-}\ \cdots\ \tau_{k,2N_k^-}^{-}\big]^{\tau}$. Let $N_k = 2N_k^{+} + 2N_k^{-}$. Then $\tau^{N_k}$ is the switching vector of $N_k$ dimensions.

Let the vector of switch functions for the control variable $u_k$ be defined as $\phi^{N_k} = \big[\phi_1^{N_k}\ \cdots\ \phi_{2N_k^+}^{N_k}\ \ \phi_{2N_k^++1}^{N_k}\ \cdots\ \phi_{N_k}^{N_k}\big]$ where $\phi_j^{N_k} = (-1)^{j-1}\,e_k\big(s_k(\tau_{k,j}^{+}) + 1\big)$ ($j = 1, 2, \cdots, 2N_k^{+}$), and $\phi_{2N_k^+ + j}^{N_k} = (-1)^{j-1}\,e_k\big(s_k(\tau_{k,j}^{-}) - 1\big)$ ($j = 1, 2, \cdots, 2N_k^{-}$).

The gradient that can be used to update the switching vector $\tau^{N_k}$ can be given by

$$\nabla_{\tau^{N_k}} J = -\phi^{N_k}$$

The optimal switching vector can be obtained iteratively by using a gradient-based method.

$$\tau^{N_k,\,i+1} = \tau^{N_k,\,i} + K^{k,i}\,\phi^{N_k}$$

where $K^{k,i}$ is a properly chosen $N_k \times N_k$-dimensional diagonal matrix with non-negative entries for the $i$th iteration of the iterative optimization process; and $\tau^{N_k,i}$ represents the $i$th iterate of the switching vector $\tau^{N_k}$.

When the optimal switching vectors are determined upon convergence, the optimal control trajectories and the optimal state trajectories are computed. This process will be repeated for

Neural Control Toward a Unified

costate value.

**4.1 Three-layer approach** 

features the following:

& Mohler & Chen, 2000).

value.

The idea for the shooting method is as follows:

2. Integrate the Hamiltonian system forward. 3. Evaluate the mismatch on the final constraints.

1. First make a guess for the initial values for the costate.

**4. Unified hierarchical neural control design framework** 

address the optimal control of uncertain nonlinear systems.

constitute the nominal layer of neural network controllers.

regional layer of neural network controllers.

layer of neural networks controllers.

Intelligent Control Design Framework for Nonlinear Systems 103

4. Find the sensitivity Jacobian for the final state and costate with respect to the initial

5. Using the Newton-Raphson method to determine the change on the initial costate

Keeping in mind that the discussions and analyses made in Section 3 are focused on the system with a fixed parameter vector, which is the control problem ( *P*<sup>0</sup> ). To address the original control problem ( *P* ), the parameter vector space is tessellated into a number of subregions. Each sub-region is identified with a set of vertexes. For each of the vertexes, a different control problem ( *P*<sup>0</sup> ) is formed. The family of control problems ( *P*<sup>0</sup> ) are combined together to represent an approximately accurate characterization of the dynamic system behaviours exhibited by the nonlinear systems in the control problem ( *P* ). This is an important step toward the hierarchical neural control design framework that is proposed to

While the control problem ( *P* ) is approximately equivalent to the family of control problems ( *P*<sup>0</sup> ), the solutions to the respective control problems ( *P*<sup>0</sup> ) must be properly coordinated in order to provide a consistent solution to the original control problem ( *P* ). The requirement of consistent coordination of individual solutions may be mapped to the hierarchical neural network control design framework proposed in this Chapter that

• For a fixed parameter vector, the control solution characterized by a set of optimal state and control trajectories shall be approximated by a neural network, which may be called a nominal neural network for this nominal case. For each nominal case, a nominal neural network is needed. All the nominal neural network controllers

• For each sub-region, regional coordinating neural network controllers are needed to coordinate the responses from individual nominal neural network controllers for the sub-region. All the regional coordinating neural network controllers constitute the

• For an unknown parameter vector, global coordinating neural network controllers are needed to coordinate the responses from regional coordinating neural network controllers. All the global coordinating neural network controllers constitute the global

The proposed hierarchical neural network control design framework is a systematic extension and a comprehensive enhancement of the previous endeavours (Chen, 1998; Chen

6. Repeat the loop of steps 2 through 5 until the mismatch is close enough to zero.

This procedure is repeated for all selected nominal cases until all needed off-line optimal control and state trajectories are obtained. These trajectories will be used in training the fuel-optimal control oriented neural networks.

### **3.3 Numerical solution scheme to the quadratic optimal control problem**

The Hamiltonian function can be defined as

$$H(\mathbf{x}, u, t) = \frac{1}{2}\left(\mathbf{x}^{T} Q \mathbf{x} + (u - u\_e)^{T} R (u - u\_e)\right) + \lambda^{T} (a + \mathbf{C} p + B u)$$

The state equation is given by

$$\dot{\mathbf{x}} = \frac{\partial H}{\partial \lambda} = a + \mathbf{C}p + Bu$$

The costate equation can be given by

$$-\dot{\lambda} = \frac{\partial H}{\partial \mathbf{x}} = \frac{\partial (a + \mathbf{C}p + Bu)^{T}}{\partial \mathbf{x}} \lambda + Q\mathbf{x}$$

The stationarity equation gives

$$0 = \frac{\partial H}{\partial u} = \frac{\partial (a + \mathbf{C}p + Bu)^{T}}{\partial u} \lambda + R(u - u\_e)$$

Solving for *u* yields

$$u = -R^{-1}B^{T}\lambda + u\_e$$

The Hamiltonian system becomes

$$\begin{cases} \dot{\mathbf{x}} = a(\mathbf{x}) + \mathbf{C}(\mathbf{x})p + B(\mathbf{x})(-R^{-1}B^{T}\lambda + u\_e) \\ -\dot{\lambda} = \dfrac{\partial (a(\mathbf{x}) + \mathbf{C}(\mathbf{x})p + B(\mathbf{x})(-R^{-1}B^{T}\lambda + u\_e))^{T}}{\partial \mathbf{x}}\lambda + Q\mathbf{x} \end{cases}$$

Furthermore, the boundary condition can be given by

$$\lambda(t\_f) = S(t\_f)(\mathbf{x}(t\_f) - r(t\_f))$$

Notice that for the Hamiltonian system, which is composed of the state and costate equations, the initial condition is given for the state equation, while the constraints on the costate variables at the final time are given for the costate equation.

It is observed that the Hamiltonian system is a set of nonlinear ordinary differential equations in *x*(*t*) and λ(*t*), which evolve forward and backward in time, respectively. Generally, it is not possible to obtain an analytic closed-form solution to such a two-point boundary-value problem (TPBVP). Numerical methods have to be employed to solve the Hamiltonian system. One simple method, called the shooting method, may be used. There are other methods, such as the "shooting to a fixed point" method and relaxation methods.

The idea of the shooting method is to guess the unknown initial costate λ(*t*<sub>0</sub>), integrate the Hamiltonian system forward in time, evaluate the mismatch in the terminal condition on λ(*t<sub>f</sub>*), and adjust the initial guess iteratively until the mismatch is close enough to zero.
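As a concrete illustration, the following is a minimal Python sketch of the shooting method applied to a toy scalar Hamiltonian system. The dynamics, weights, tolerances, and all parameter values below are illustrative assumptions, not the system studied in this Chapter:

```python
def shoot(lam0, x0=1.0, tf=1.0, n=2000):
    """RK4 forward integration of a toy scalar Hamiltonian system:
       x_dot   =  a*x + b*u,  with u = -(b/r)*lam  (stationarity condition)
       lam_dot = -(q*x + a*lam)                    (costate equation)
    Returns (x(tf), lam(tf))."""
    a, b, q, r = -1.0, 1.0, 1.0, 1.0   # hypothetical model/weight values
    h = tf / n
    x, lam = x0, lam0

    def f(x, lam):
        u = -(b / r) * lam
        return a * x + b * u, -(q * x + a * lam)

    for _ in range(n):
        k1 = f(x, lam)
        k2 = f(x + h / 2 * k1[0], lam + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], lam + h / 2 * k2[1])
        k4 = f(x + h * k3[0], lam + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        lam += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, lam

def solve_tpbvp(S=5.0, ref=0.0, tol=1e-10):
    """Secant iteration on the unknown initial costate lam(t0); the
    mismatch is the terminal condition lam(tf) - S*(x(tf) - ref)."""
    def mismatch(lam0):
        xf, lamf = shoot(lam0)
        return lamf - S * (xf - ref)

    l0, l1 = 0.0, 1.0
    g0, g1 = mismatch(l0), mismatch(l1)
    for _ in range(50):
        l2 = l1 - g1 * (l1 - l0) / (g1 - g0)
        l0, g0 = l1, g1
        l1, g1 = l2, mismatch(l2)
        if abs(g1) < tol:      # mismatch close enough to zero
            break
    return l1
```

Because this toy system is linear, the terminal mismatch is affine in the initial costate guess and the secant iteration converges almost immediately; for the nonlinear systems considered here, more iterations and a reasonable initial guess are generally required.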

102 Recent Advances in Robust Control – Novel Approaches and Design Methods



## **4. Unified hierarchical neural control design framework**

Keep in mind that the discussions and analyses made in Section 3 are focused on the system with a fixed parameter vector, that is, the control problem ( *P*<sup>0</sup> ). To address the original control problem ( *P* ), the parameter vector space is tessellated into a number of sub-regions. Each sub-region is identified with a set of vertexes. For each of the vertexes, a different control problem ( *P*<sup>0</sup> ) is formed. The family of control problems ( *P*<sup>0</sup> ) is combined to represent an approximately accurate characterization of the dynamic system behaviours exhibited by the nonlinear systems in the control problem ( *P* ). This is an important step toward the hierarchical neural control design framework that is proposed to address the optimal control of uncertain nonlinear systems.

## **4.1 Three-layer approach**

While the control problem ( *P* ) is approximately equivalent to the family of control problems ( *P*<sup>0</sup> ), the solutions to the respective control problems ( *P*<sup>0</sup> ) must be properly coordinated in order to provide a consistent solution to the original control problem ( *P* ). The requirement of consistent coordination of individual solutions maps to the hierarchical neural network control design framework proposed in this Chapter, which features the following:

- For a fixed parameter vector, the control solution, characterized by a set of optimal state and control trajectories, shall be approximated by a neural network, which may be called a nominal neural network for this nominal case. For each nominal case, a nominal neural network is needed. All the nominal neural network controllers constitute the nominal layer.
- For each sub-region, regional coordinating neural network controllers are needed to coordinate the responses from the individual nominal neural network controllers for the sub-region. All the regional coordinating neural network controllers constitute the regional layer.
- For an unknown parameter vector, global coordinating neural network controllers are needed to coordinate the responses from the regional coordinating neural network controllers. All the global coordinating neural network controllers constitute the global layer.


The proposed hierarchical neural network control design framework is a systematic extension and a comprehensive enhancement of the previous endeavours (Chen, 1998; Chen, Mohler & Chen, 2000).


## **4.2 Nominal layer**

Even though the hierarchical neural network control design methodology is unified and generic, the design of the three layers of neural networks, especially the nominal layer, may take into account the uniqueness of the problems under study. For the time optimal control problems, the role of the nominal layer of neural networks is to identify the switching manifolds that relate to the bang-bang control. For the fuel optimal control problems, the role is to identify the switching manifolds that relate to the bang-off-bang control. For the quadratic optimal control problems, the role is to approximate the optimal control based on the state variables.

Fig. 1. Nominal neural network for time optimal control

Consequently, a nominal neural network for the time optimal control takes the form of a conventional neural network with continuous activation functions cascaded with a two-level stair-case function, which may itself be viewed as a discrete neural network, as shown in Fig. 1. For the fuel optimal control, a nominal neural network takes the form of a conventional neural network with continuous activation functions cascaded with a three-level stair-case function, as shown in Fig. 2.

Fig. 2. Nominal neural network for fuel optimal control

For the quadratic optimal control, no switching manifolds are involved. A conventional neural network with continuous activation functions is sufficient for a nominal case, as shown in Fig. 3.

Fig. 3. Nominal neural network for quadratic optimal control
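The two post-processing stages described above can be sketched as simple functions. This is a minimal illustration; the saturation level `u_max` and the dead-zone width `dead_zone` are hypothetical design parameters, not values from the chapter:

```python
import numpy as np

def two_level_staircase(y, u_max=1.0):
    """Two-level stair-case (sign) post-processing for time optimal
    (bang-bang) control: map the conventional NN output to +/- u_max."""
    return u_max * np.sign(y)

def three_level_staircase(y, u_max=1.0, dead_zone=0.5):
    """Three-level stair-case post-processing for fuel optimal
    (bang-off-bang) control: outputs inside the dead zone map to 0."""
    y = np.asarray(y, dtype=float)
    return np.where(np.abs(y) < dead_zone, 0.0, u_max * np.sign(y))
```

For quadratic optimal control, the conventional network's continuous output is used directly, so no such cascade is needed.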

## **4.3 Overall architecture**


The overall architecture of the multi-layered hierarchical neural network control framework, as shown in Fig. 4, includes three layers: the nominal layer, the regional layer, and the global layer. These three layers play different roles and yet work together to achieve the desired control performance.

At the nominal layer, the nominal neural networks are responsible for computing the near optimal control signals for a given parameter vector. The post-processing function block is necessary for both the time optimal and fuel optimal control problems, while it may not be needed for the quadratic optimal control problems. For time optimal control problems, the post-processing function is a sign function, as shown in Fig. 1. For fuel optimal control problems, the post-processing is a slightly more complicated stair-case function, as shown in Fig. 2.

At the regional layer, the regional neural networks are responsible for computing the desired weighting factors that are in turn used to modulate the control signals computed by the nominal neural networks, producing near optimal control signals for an unknown parameter vector situated in a known sub-region of the parameter vector space. The post-processing function block is necessary for all three types of control problems studied in this Chapter. It is basically a normalization process of the weighting factors produced by the regional neural networks for the sub-region that is enabled by the global neural networks.

At the global layer, the global neural networks are responsible for computing the possibilities of the unknown parameter vector being located within the sub-regions. The post-processing function block is necessary for all three types of control problems studied in this Chapter. It is a winner-take-all logic applied to all the output data of the global neural networks. Consequently, only one sub-region will be enabled, and all the other sub-regions will be disabled. The output data of the post-processing function block is used to turn on only one of the sub-regions for the regional layer.

To make use of the multi-layered hierarchical neural network control design framework, several key factors, such as the number of neural networks for each layer, the size of each neural network, and the desired training patterns, are clearly important. All of these depend on the determination of the nominal cases. A nominal case designates a group of system conditions that reflect one of the typical system behaviors. In the context of control of a dynamic system with uncertain parameters, which is the focus of this Chapter, a nominal case may be designated as corresponding to the vertexes of the sub-regions when the parameter vector space is tessellated into a number of non-overlapping sub-regions down to a level of desired granularity.
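The winner-take-all selection, weight normalization, and modulation just described can be sketched as follows. This is a schematic illustration with assumed array shapes, not code from the chapter:

```python
import numpy as np

def hierarchical_control(u_nominal, w_regional, g_global, region_of):
    """Combine the outputs of the three layers into one control signal.

    u_nominal : (n,) control signals from the nominal neural networks
    w_regional: (n,) raw weighting factors from the regional networks
    g_global  : (m,) sub-region scores from the global networks
    region_of : (n,) sub-region index of each nominal network
    """
    u_nominal = np.asarray(u_nominal, dtype=float)
    w_regional = np.asarray(w_regional, dtype=float)
    region_of = np.asarray(region_of)
    # Global layer post-processing: winner-take-all over sub-region scores.
    winner = int(np.argmax(g_global))
    # Enable only the nominal networks of the winning sub-region.
    mask = region_of == winner
    # Regional layer post-processing: normalize weights within that sub-region.
    w = np.where(mask, w_regional, 0.0)
    w = w / w.sum()
    # Modulate the nominal control signals by the normalized weights.
    return float(np.dot(w, u_nominal))
```

For example, with the second sub-region winning, only its nominal networks contribute: `hierarchical_control([1, 2, 3, 4], [0.2, 0.2, 1.0, 3.0], [0.1, 0.9], [0, 0, 1, 1])` returns 3.75.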

Once the nominal cases are identified, the numbers of neural networks for the nominal layer, the regional layer and the global layer can be determined accordingly. Each nominal neural network corresponds to a nominal case identified. Each regional neural network corresponds to a nominal neural network. Each global neural network corresponds to a sub-region.

With the numbers of neural networks for all three layers in the hierarchy determined, the size of each neural network depends on the data collected for each nominal case. As shown in the last Section, the optimal state trajectories and the optimal control trajectories for each of the control problems ( *P*<sup>0</sup> ) can be obtained through use of the STVM approach for time optimal control and fuel optimal control, or the shooting method for quadratic optimal control. For each of the nominal cases, the optimal state trajectories and optimal control trajectories may be properly utilized to form the needed training patterns.

## **4.4 Design procedure**

Fig. 4. Multi-layered hierarchical neural network architecture

Below is the design procedure for multi-layered hierarchical neural networks:

- Identify the nominal cases. The parameter vector space may be tessellated into a number of non-overlapping sub-regions. The granularity of the tessellation process is determined by how sensitive the system dynamic behaviors are to changes in the parameters. Each vertex of the sub-regions identifies a nominal case. For each nominal case, the optimal control problem may be solved numerically and the numerical solution may be obtained.
- Determine the size of the nominal layer, the regional layer and the global layer of the hierarchy.
- Determine the size of the neural networks for each layer in the hierarchy.
- Train the nominal neural networks. The numerically obtained optimal state and control trajectories are acquired for each nominal case. The training data pattern for the nominal neural networks is composed of the state vector as input and the control signal as the output. In other words, the nominal layer is to establish and approximate a state feedback control. Finish training when the training performance is satisfactory. Repeat this nominal layer training process for all the nominal neural networks.
- Train the regional neural networks. The input data to the nominal neural networks is also part of the input data to the regional neural networks. In addition, for a specific regional neural network, the ideal output data of the corresponding nominal neural network is also part of its input data. The ideal output data of the regional neural network can be determined as follows:
	- If the data presented to a given regional neural network reflects a nominal case that corresponds to the vertex that this regional neural network is to be trained for, then assign 1 or else 0.
- Train the global neural networks. The input data to the nominal neural networks is also part of the input data to the global neural networks. In addition, for a specific global neural network, the ideal output data of the corresponding nominal neural network is also part of its input data. The ideal output data of the global neural network can be determined as follows:
	- If the data presented to a given global neural network reflects a nominal case that corresponds to the sub-region that this global neural network is to be trained for, then assign 1 or else 0.
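The labeling rules above can be sketched as a training-pattern builder. This is a pure-Python sketch with assumed data structures; the trajectory samples would come from the off-line STVM or shooting-method solutions:

```python
def build_training_patterns(trajectories, region_of_vertex):
    """Assemble training patterns for the three layers.

    trajectories    : dict vertex -> list of (state, control) samples from
                      the off-line optimal solution for that nominal case
    region_of_vertex: dict vertex -> index of the sub-region it belongs to
    """
    nominal, regional, global_ = [], [], []
    vertices = sorted(trajectories)
    regions = sorted(set(region_of_vertex.values()))
    for v in vertices:
        r = region_of_vertex[v]
        for state, control in trajectories[v]:
            # Nominal layer: approximate the state-feedback law (state -> control).
            nominal.append((v, state, control))
            # Regional layer: ideal output 1 for the matching vertex, else 0.
            for vv in vertices:
                regional.append((vv, (state, control), 1.0 if vv == v else 0.0))
            # Global layer: ideal output 1 for the matching sub-region, else 0.
            for rr in regions:
                global_.append((rr, (state, control), 1.0 if rr == r else 0.0))
    return nominal, regional, global_
```

Each regional and global pattern includes both the state and the corresponding nominal control signal as input, matching the procedure's requirement that the nominal network's ideal output be part of the coordinating networks' input data.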


## **5. Theoretical justification**

This Section provides theoretical support for the adoption of the hierarchical neural networks.

As shown in (Chen, Yang & Mohler, 2006), the desired prediction or control can be achieved by a properly designed hierarchical neural network.

Proposition 1 (Chen, Yang & Mohler, 2006): Suppose that an ideal system controller can be characterized by function vectors $f\_i^u$ and $f\_i^l$ ($1 \le i \le n\_l = n\_u$), which are continuous mappings from a compact support $\Omega \subset R^{n\_x}$ to $R^{n\_y}$, such that a continuous function vector $f$, also defined on $\Omega$, can be expressed as $f\_j(x) = \sum\_{i=1}^{n\_l} f\_{i,j}^u(x) \times f\_{i,j}^l(x)$ on a point-wise basis ($x \in \Omega$; $f\_{i,j}^u(x)$ and $f\_{i,j}^l(x)$ are the $j$th components of $f\_i^u$ and $f\_i^l$). Then there exists a hierarchical neural network, used to approximate the ideal system controller or system identifier, that includes lower level neural networks $nn\_i^l$ and upper level neural networks $nn\_i^u$ ($1 \le i \le n\_l = n\_u$) such that for any $\varepsilon\_j > 0$, $\sup\_{x \in \Omega} |f\_j - \sum\_{i=1}^{n\_l} nn\_{i,j}^l \times nn\_{i,j}^u| < \varepsilon\_j$, where $nn\_{i,j}^u(x)$ and $nn\_{i,j}^l(x)$ are the $j$th components of $nn\_i^u$ and $nn\_i^l$.

The following proposition shows that parameter uncertainties can also be handled by the hierarchical neural networks.

Proposition 2: For the system (1) under the assumptions AS1-AS9, with the application of the hierarchical neural controller, the deviation of the resulting state trajectory for the unknown parameter vector from the optimal state trajectory is bounded.

Proof: Let the estimate of the parameter vector be denoted by $\hat{p}$. The counterpart of system (1) for the estimated parameter vector $\hat{p}$ can be given by

$$\dot{\mathbf{x}} = a(\mathbf{x}) + \mathbf{C}(\mathbf{x})\hat{p} + B(\mathbf{x})u$$

Integrating the above equation and system (1) from $t\_0$ to $t$ leads to the following two equations:

$$\begin{aligned} \mathbf{x}\_1(t) &= \mathbf{x}\_1(t\_0) + \int\_{t\_0}^t [a(\mathbf{x}\_1(s)) + \mathbf{C}(\mathbf{x}\_1(s))\hat{p} + B(\mathbf{x}\_1(s))u(s)]\, ds \\ \mathbf{x}\_2(t) &= \mathbf{x}\_2(t\_0) + \int\_{t\_0}^t [a(\mathbf{x}\_2(s)) + \mathbf{C}(\mathbf{x}\_2(s))p + B(\mathbf{x}\_2(s))u(s)]\, ds \end{aligned}$$

By noting that $\mathbf{x}\_1(t\_0) = \mathbf{x}\_2(t\_0) = \mathbf{x}\_0$, subtraction of the above two equations yields

$$\begin{aligned} \mathbf{x}\_1(t) - \mathbf{x}\_2(t) &= \int\_{t\_0}^t \{a(\mathbf{x}\_1(s)) - a(\mathbf{x}\_2(s)) + [B(\mathbf{x}\_1(s)) - B(\mathbf{x}\_2(s))]u(s)\} ds + \\ &\int\_{t\_0}^t \{\mathbf{C}(\mathbf{x}\_1(s))(\hat{p} - p) + [\mathbf{C}(\mathbf{x}\_1(s)) - \mathbf{C}(\mathbf{x}\_2(s))]p\} ds \end{aligned}$$

Note that, by Taylor's theorem, $a(\mathbf{x}\_1(s)) - a(\mathbf{x}\_2(s)) = a\_T(\mathbf{x}\_1(s) - \mathbf{x}\_2(s))$, $B(\mathbf{x}\_1(s)) - B(\mathbf{x}\_2(s)) = B\_T(\mathbf{x}\_1(s) - \mathbf{x}\_2(s))$, and $\mathbf{C}(\mathbf{x}\_1(s)) - \mathbf{C}(\mathbf{x}\_2(s)) = \mathbf{C}\_T(\mathbf{x}\_1(s) - \mathbf{x}\_2(s))$.

Define $\Delta \mathbf{x}(t) = \mathbf{x}\_1(t) - \mathbf{x}\_2(t)$ and $\Delta p = \hat{p} - p$. Then we have

$$\Delta \mathbf{x}(t) = \int\_{t\_0}^t [a\_T \Delta \mathbf{x}(s) + B\_T \Delta \mathbf{x}(s) u(s) + \mathbf{C}\_T \Delta \mathbf{x}(s) p]\, ds + \int\_{t\_0}^t \mathbf{C}(\mathbf{x}\_1(s)) \Delta p\, ds$$

Taking an appropriate norm on both sides of the above equation and applying the triangle inequality yields:

$$\|\Delta \mathbf{x}(t)\| \le \int\_{t\_0}^t \|a\_T \Delta \mathbf{x}(s) + B\_T \Delta \mathbf{x}(s) u(s) + \mathbf{C}\_T \Delta \mathbf{x}(s) p\|\, ds + \int\_{t\_0}^t \|\mathbf{C}(\mathbf{x}\_1(s)) \Delta p\|\, ds$$

Note that $\|\mathbf{C}(\mathbf{x}\_1(s)) \Delta p\|$ can be made uniformly bounded by $\varepsilon$ as long as the estimate of $p$ is made sufficiently close to $p$ (which can be controlled by the granularity of tessellation) and $p$ is bounded; $|u(t)| \le 1$; $\|a\_T\| = \sup\_{x \in \Omega\_T} \|a\_T(x)\| < \infty$, $\|B\_T\| = \sup\_{x \in \Omega\_T} \|B\_T(x)\| < \infty$, and $\|\mathbf{C}\_T\| = \sup\_{x \in \Omega\_T} \|\mathbf{C}\_T(x)\| < \infty$.

It follows that


$$\|\Delta \mathbf{x}(t)\| \le \varepsilon (t - t\_0) + (\|a\_T\| + \|B\_T\| + \|\mathbf{C}\_T\| \|p\|) \int\_{t\_0}^t \|\Delta \mathbf{x}(s)\|\, ds$$

Define a constant $K\_0 = \|a\_T\| + \|B\_T\| + \|\mathbf{C}\_T\| \|p\|$. Applying the Gronwall-Bellman Inequality to the above inequality yields

$$\|\Delta \mathbf{x}(t)\| \le \varepsilon (t - t\_0) + \int\_{t\_0}^{t} K\_0 \varepsilon (s - t\_0) \exp\left(\int\_s^t K\_0\, d\sigma\right) ds \le \varepsilon (t - t\_0) + \varepsilon K\_0 \frac{(t - t\_0)^2}{2} \exp(K\_0 (t - t\_0)) \le K\varepsilon$$

where $K = (t\_f - t\_0)\left(1 + K\_0 \frac{t\_f - t\_0}{2} \exp(K\_0 (t\_f - t\_0))\right)$ and $K < \infty$.
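For reference, the form of the Gronwall-Bellman inequality invoked above is the standard integral form, applied here with $v(t) = \|\Delta \mathbf{x}(t)\|$, $\alpha(t) = \varepsilon(t - t\_0)$, and constant kernel $K\_0$:

```latex
% Gronwall-Bellman inequality (integral form):
v(t) \le \alpha(t) + \int_{t_0}^{t} K_0\, v(s)\, ds
\quad \Longrightarrow \quad
v(t) \le \alpha(t) + \int_{t_0}^{t} K_0\, \alpha(s)\,
        \exp\!\left(\int_{s}^{t} K_0\, d\sigma\right) ds .
```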

This completes the proof.
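For completeness, the integral form of the Gronwall-Bellman inequality applied in the step above (a standard result, stated here for reference) is

$$u(t) \le \alpha(t) + \int_{t_0}^{t} \beta(s)\, u(s)\, ds \;\Longrightarrow\; u(t) \le \alpha(t) + \int_{t_0}^{t} \alpha(s)\, \beta(s) \exp\Big\{\int_{s}^{t} \beta(\sigma)\, d\sigma\Big\}\, ds,$$

for nonnegative $\beta$; the proof above takes $u(t) = \|\Delta x(t)\|$, $\alpha(t) = \varepsilon (t - t_0)$, and $\beta(t) \equiv K_0$.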

## **6. Simulation**

Consider the single-machine infinite-bus (SMIB) model with a thyristor-controlled series-capacitor (TCSC) installed on the transmission line (Chen, 1998), as shown in Fig. 5, which may be described mathematically as follows:

$$
\begin{bmatrix}
\dot{\delta} \\
\dot{\omega}
\end{bmatrix} = \begin{bmatrix}
\omega_b(\omega - 1) \\
\frac{1}{M}\left(P_m - P_0 - D(\omega - 1) - \frac{V_t V_\infty}{X_d + (1 - s)X_e} \sin \delta\right)
\end{bmatrix}
$$


where δ is the rotor angle (rad), ω the rotor speed (p.u.), ω*b* = 2π × 60 the synchronous speed used as base (rad/sec), *Pm* = 0.3665 the mechanical power input (p.u.), *P*0 the unknown fixed load (p.u.), *D* = 2.0 the damping factor, *M* = 3.5 the system inertia referenced to the base power, *Vt* = 1.0 the terminal bus voltage (p.u.), *V*∞ = 0.99 the infinite bus voltage (p.u.), *Xd* = 2.0 the transient reactance of the generator (p.u.), *Xe* = 0.35 the transmission reactance (p.u.), and *s* ∈ [*s*min, *s*max] = [0.2, 0.75] the series compensation degree of the TCSC; (δ*e*, 1) is the system equilibrium with the series compensation degree fixed at *se* = 0.4.
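As a rough illustration (not part of the original chapter), the swing dynamics above can be integrated numerically. The parameter values are those listed in the text; the RK4 integration scheme, step size, and helper names (`smib_rhs`, `rk4_step`) are illustrative assumptions:

```python
import numpy as np

# Sketch of the SMIB-with-TCSC swing dynamics. Parameter values are those
# listed in the text; the RK4 scheme and function names are illustrative.

OMEGA_B = 2 * np.pi * 60   # synchronous speed as base (rad/sec)
PM, D, M = 0.3665, 2.0, 3.5
VT, VINF = 1.0, 0.99
XD, XE = 2.0, 0.35

def smib_rhs(state, s, p0=0.0):
    """Time derivatives of [delta, omega] for compensation degree s."""
    delta, omega = state
    pe = VT * VINF / (XD + (1.0 - s) * XE) * np.sin(delta)  # electrical power
    return np.array([OMEGA_B * (omega - 1.0),
                     (PM - p0 - D * (omega - 1.0) - pe) / M])

def rk4_step(state, s, h, p0=0.0):
    """One classical Runge-Kutta step of size h."""
    k1 = smib_rhs(state, s, p0)
    k2 = smib_rhs(state + 0.5 * h * k1, s, p0)
    k3 = smib_rhs(state + 0.5 * h * k2, s, p0)
    k4 = smib_rhs(state + h * k3, s, p0)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# With s_e = 0.4 and no extra load, the equilibrium angle solves
# Pm = Vt*Vinf/(Xd + (1 - s_e)*Xe) * sin(delta_e).
delta_e = np.arcsin(PM * (XD + (1.0 - 0.4) * XE) / (VT * VINF))
```

At `s = 0.4`, `smib_rhs(np.array([delta_e, 1.0]), 0.4)` is numerically zero, consistent with (δ*e*, 1) being the model equilibrium.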

The goal is to stabilize the system in a near optimal time control fashion with an unknown load *P*0 ranging between 0 and 10% of *Pm*. Two nominal cases are identified. The nominal neural networks have 15 and 30 neurons in the first and second hidden layers, with log-sigmoid and tan-sigmoid activation functions for these two layers, respectively. The inputs to the regional neural networks are the rotor angle and its two previous values, and the control and its previous value; the outputs are the weighting factors. The regional neural networks likewise have 15 and 30 neurons in the first and second hidden layers, with log-sigmoid and tan-sigmoid activation functions for these two layers, respectively. Global neural networks are not necessary in this simple case of parameter uncertainty.
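The layer sizes and activations described above can be sketched as a plain feed-forward pass. This is a hypothetical illustration: the weight initialization, the number of outputs, and the class name are assumptions, since the chapter does not specify them:

```python
import numpy as np

# Illustrative forward pass matching the described architecture:
# 5 inputs (rotor angle + two previous values, control + previous value),
# 15 log-sigmoid units, 30 tan-sigmoid units, linear outputs (the
# weighting factors). Initialization and output size are assumptions.

rng = np.random.default_rng(0)

def logsig(z):
    return 1.0 / (1.0 + np.exp(-z))

class RegionalNet:
    def __init__(self, n_in=5, n_h1=15, n_h2=30, n_out=2):
        self.W1 = rng.normal(scale=0.1, size=(n_h1, n_in))
        self.b1 = np.zeros(n_h1)
        self.W2 = rng.normal(scale=0.1, size=(n_h2, n_h1))
        self.b2 = np.zeros(n_h2)
        self.W3 = rng.normal(scale=0.1, size=(n_out, n_h2))
        self.b3 = np.zeros(n_out)

    def forward(self, x):
        h1 = logsig(self.W1 @ x + self.b1)    # log-sigmoid hidden layer (15)
        h2 = np.tanh(self.W2 @ h1 + self.b2)  # tan-sigmoid hidden layer (30)
        return self.W3 @ h2 + self.b3         # weighting factors
```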

Once the nominal and regional neural networks are trained, they are used to control the system after a severe short-circuit fault and with an unknown load (5% of *Pm* ). The resulting trajectory is shown in Fig. 6. It is observed that the hierarchical neural controller stabilizes the system in a near optimal control manner.

Fig. 5. The SMIB system with TCSC (synchronous machine, transmission line with TCSC, infinite bus)

Fig. 6. Control performance of the hierarchical neural controller. Solid - neural control; dashed - optimal control.

## **7. Conclusion**


Even with remarkable progress witnessed in the adaptive control techniques for the nonlinear system control over the past decade, the general challenge with adaptive control of nonlinear systems has never become less formidable, not to mention the adaptive control of nonlinear systems while optimizing a pre-designated control performance index and respecting restrictions on control signals. Neural networks have been introduced to tackle the adaptive control of nonlinear systems, where there are system uncertainties in parameters, unmodeled nonlinear system dynamics, and in many cases the parameters may be time varying. It is the main focus of this Chapter to establish a framework in which general nonlinear systems will be targeted and near optimal, adaptive control of uncertain, time-varying, nonlinear systems is studied. The study begins with a generic presentation of the solution scheme for fixed-parameter nonlinear systems. The optimal control solution is presented for the purpose of minimum time control and minimum fuel control, respectively. The parameter space is tessellated into a set of convex sub-regions. The set of parameter vectors corresponding to the vertexes of those convex sub-regions are obtained. Accordingly, a set of optimal control problems are solved. The resulting control trajectories and state or output trajectories are employed to train a set of properly designed neural networks to establish a relationship that would otherwise be unavailable for the sake of near optimal controller design. In addition, techniques are developed and applied to deal with the time varying property of uncertain parameters of the nonlinear systems. All these pieces

come together in an organized and cooperative manner under the unified intelligent control design framework to meet the Chapter's ultimate goal of constructing intelligent controllers for uncertain, nonlinear systems.

## **8. Acknowledgment**

The authors are grateful to the Editor and the anonymous reviewers for their constructive comments.

## **9. References**

Chen, D. (1998). *Nonlinear Neural Control with Power Systems Applications*, Doctoral Dissertation, Oregon State University, ISBN 0-599-12704-X.

Chen, D. & Mohler, R. (1997). Load Modelling and Voltage Stability Analysis by Neural Network, *Proceedings of 1997 American Control Conference*, pp. 1086-1090, ISBN 0-7803-3832-4, Albuquerque, New Mexico, USA, June 4-6, 1997.

Chen, D. & Mohler, R. (2000). Theoretical Aspects on Synthesis of Hierarchical Neural Controllers for Power Systems, *Proceedings of 2000 American Control Conference*, pp. 3432-3436, ISBN 0-7803-5519-9, Chicago, Illinois, June 28-30, 2000.

Chen, D. & Mohler, R. (2003). Neural-Network-based Load Modeling and Its Use in Voltage Stability Analysis. *IEEE Transactions on Control Systems Technology*, Vol. 11, No. 4, pp. 460-470, ISSN 1063-6536.

Chen, D., Mohler, R., & Chen, L. (1999). Neural-Network-based Adaptive Control with Application to Power Systems, *Proceedings of 1999 American Control Conference*, pp. 3236-3240, ISBN 0-7803-4990-3, San Diego, California, USA, June 2-4, 1999.

Chen, D., Mohler, R., & Chen, L. (2000). Synthesis of Neural Controller Applied to Power Systems. *IEEE Transactions on Circuits and Systems I*, Vol. 47, No. 3, pp. 376-388, ISSN 1057-7122.

Chen, D. & Yang, J. (2005). Robust Adaptive Neural Control Applied to a Class of Nonlinear Systems, *Proceedings of 17th IMACS World Congress: Scientific Computation, Applied Mathematics and Simulation*, Paper T5-I-01-0911, pp. 1-8, ISBN 2-915913-02-1, Paris, July 2005.

Chen, D., Yang, J., & Mohler, R. (2006). Hierarchical Neural Networks toward a Unified Modelling Framework for Load Dynamics. *International Journal of Computational Intelligence Research*, Vol. 2, No. 1, pp. 17-25, ISSN 0974-1259.

Chen, D., Yang, J., & Mohler, R. (2008). On near Optimal Neural Control of Multiple-Input Nonlinear Systems. *Neural Computing & Applications*, Vol. 17, No. 4, pp. 327-337, ISSN 0941-0643.

Chen, D. & York, M. (2008). Neural Network based Approaches to Very Short Term Load Prediction, *Proceedings of 2008 IEEE Power and Energy Society General Meeting*, pp. 1-8, ISBN 978-1-4244-1905-0, Pittsburgh, PA, USA, July 20-24, 2008.

Chen, F. & Liu, C. (1994). Adaptively Controlling Nonlinear Continuous-Time Systems Using Multilayer Neural Networks. *IEEE Transactions on Automatic Control*, Vol. 39, pp. 1306-1310, ISSN 0018-9286.

Haykin, S. (2001). *Neural Networks: A Comprehensive Foundation*, Prentice-Hall, ISBN 0132733501, Englewood Cliffs, New Jersey.

Hebb, D. (1949). *The Organization of Behavior*, John Wiley and Sons, ISBN 9780805843002, New York.

Hopfield, J. J., & Tank, D. W. (1985). Neural Computation of Decisions in Optimization Problems. *Biological Cybernetics*, Vol. 52, No. 3, pp. 141-152.

Irwin, G. W., Warwick, K., & Hunt, K. J. (1995). *Neural Network Applications in Control*, The Institution of Electrical Engineers, ISBN 0906048567, London.

Kawato, M., Uno, Y., & Suzuki, R. (1988). Hierarchical Neural Network Model for Voluntary Movement with Application to Robotics. *IEEE Control Systems Magazine*, Vol. 8, No. 2, pp. 8-15.

Lee, E. & Markus, L. (1967). *Foundations of Optimal Control Theory*, Wiley, ISBN 0898748070, New York.

Levin, A. U., & Narendra, K. S. (1993). Control of Nonlinear Dynamical Systems Using Neural Networks: Controllability and Stabilization. *IEEE Transactions on Neural Networks*, Vol. 4, No. 2, pp. 192-206.

Lewis, F., Yesidirek, A. & Liu, K. (1995). Neural Net Robot Controller with Guaranteed Tracking Performance. *IEEE Transactions on Neural Networks*, Vol. 6, pp. 703-715, ISSN 1063-6706.

Liang, R. H. (1999). A Neural-based Redispatch Approach to Dynamic Generation Allocation. *IEEE Transactions on Power Systems*, Vol. 14, No. 4, pp. 1388-1393.

Methaprayoon, K., Lee, W., Rasmiddatta, S., Liao, J. R., & Ross, R. J. (2007). Multistage Artificial Neural Network Short-Term Load Forecasting Engine with Front-End Weather Forecast. *IEEE Transactions on Industry Applications*, Vol. 43, No. 6, pp. 1410-1416.

Mohler, R. (1973). *Bilinear Control Processes*, Academic Press, ISBN 0-12-504140-3, New York.

Mohler, R. (1991). *Nonlinear Systems Volume I, Dynamics and Control*, Prentice Hall, ISBN 0-13-623489-5, Englewood Cliffs, New Jersey.

Mohler, R. (1991). *Nonlinear Systems Volume II, Applications to Bilinear Control*, Prentice Hall, ISBN 0-13-623521-2, Englewood Cliffs, New Jersey.

Moon, S. (1969). *Optimal Control of Bilinear Systems and Systems Linear in Control*, Ph.D. dissertation, The University of New Mexico.

Nagata, S., Sekiguchi, M., & Asakawa, K. (1990). Mobile Robot Control by a Structured Hierarchical Neural Network. *IEEE Control Systems Magazine*, Vol. 10, No. 3, pp. 69-76.

Pandit, M., Srivastava, L., & Sharma, J. (2003). Fast Voltage Contingency Selection Using Fuzzy Parallel Self-Organizing Hierarchical Neural Network. *IEEE Transactions on Power Systems*, Vol. 18, No. 2, pp. 657-664.

Polycarpou, M. (1996). Stable Adaptive Neural Control Scheme for Nonlinear Systems. *IEEE Transactions on Automatic Control*, Vol. 41, pp. 447-451, ISSN 0018-9286.

Sanner, R. & Slotine, J. (1992). Gaussian Networks for Direct Adaptive Control. *IEEE Transactions on Neural Networks*, Vol. 3, pp. 837-863, ISSN 1045-9227.

Yesidirek, A. & Lewis, F. (1995). Feedback Linearization Using Neural Network. *Automatica*, Vol. 31, pp. 1659-1664.

Zakrzewski, R. R., Mohler, R. R., & Kolodziej, W. J. (1994). Hierarchical Intelligent Control with Flexible AC Transmission System Application. *IFAC Journal of Control Engineering Practice*, pp. 979-987.

Zhou, Y. T., Chellappa, R., Vaid, A., & Jenkins, B. K. (1988). Image Restoration Using a Neural Network. *IEEE Transactions on Acoustics, Speech, and Signal Processing*, Vol. 36, No. 7, pp. 1141-1151.





## **Robust Adaptive Wavelet Neural Network Control of Buck Converters**

Hamed Bouzari\*1,2, Miloš Šramek1,2, Gabriel Mistelbauer2 and Ehsan Bouzari3

*1Austrian Academy of Sciences, Austria*
*2Vienna University of Technology, Austria*
*3Zanjan University, Iran*

## **1. Introduction**


*Robustness* is of crucial importance in control system design because real engineering systems are vulnerable to external disturbance and measurement noise, and there are always differences between the mathematical models used for design and the actual system. Typically, it is required to design a controller that will stabilize a plant, if it is not stable originally, and satisfy certain performance levels in the presence of disturbance signals, noise interference, unmodelled plant dynamics and plant-parameter variations. These design objectives are best realized via the feedback control mechanism (Fig. 1), although it introduces the issues of high cost (the use of sensors), system complexity (implementation and safety) and greater concern for stability (thus internal stability and stabilizing controllers) (Gu, Petkov, & Konstantinov, 2005). In short, a control system is robust if it remains stable and achieves certain performance criteria in the presence of possible uncertainties. *Robust design* is to find a controller, for a given system, such that the closed-loop system is robust.

In this chapter, the basic concepts and representations of a robust adaptive wavelet neural network control for the case study of buck converters will be discussed.

The remainder of the chapter is organized as follows: In section 2 the advantages of neural network controllers over conventional ones will be discussed, considering the efficiency of introduction of wavelet theory in identifying unknown dependencies. Section 3 presents an overview of the buck converter models. In section 4, a detailed overview of WNN methods is presented. Robust control is introduced in section 5 to increase the robustness against noise by implementing the error minimization. Section 6 explains the stability analysis which is based on adaptive bound estimation. The implementation procedure and results of AWNN controller are explained in section 7. The results show the effectiveness of the proposed method in comparison to other previous works. The final section concludes the chapter.

### **2. Overview of wavelet neural networks**

The conventional Proportional Integral Derivative (PID) controllers have been widely used in industry due to their simple control structure, ease of design, and inexpensive cost (Ang, Chong, & Li, 2005). However, successful applications of the PID controller require the satisfactory tuning of parameters according to the dynamics of the process. In fact, most PID controllers are tuned on-site. The lengthy calculations for an initial guess of PID parameters can often be demanding if we know little about the plant, especially when the system is unknown.

Fig. 1. Feedback control system design.

There has been considerable interest in the past several years in exploring the applications of Neural Networks (NN) to deal with nonlinearities and uncertainties of real-time control systems (Sarangapani, 2006). It has been proven that artificial NNs can approximate a wide range of nonlinear functions to any desired degree of accuracy under certain conditions (Sarangapani, 2006). It is generally understood that the selection of the NN training algorithm plays an important role in most NN applications. In conventional gradient-descent-type weight adaptation, the sensitivity of the controlled system is required in the online training process. However, it is difficult to acquire sensitivity information for unknown or highly nonlinear dynamics. In addition, the local minimum of the performance index remains a challenge (Sarangapani, 2006). In practical control applications, it is desirable to have a systematic method of ensuring the stability, robustness, and performance properties of the overall system. Several NN control approaches have been proposed based on the Lyapunov stability theorem (Lim et al., 2009; Ziqian, Shih, & Qunjing, 2009). One main advantage of these control schemes is that the adaptive laws are derived based on the Lyapunov synthesis method, which therefore guarantees the stability of the controlled system. However, some constraint conditions must be assumed in the control process, e.g., that the approximation error, optimal parameter vectors, or higher-order terms in a Taylor series expansion of the nonlinear control law are bounded. Besides, prior knowledge of the controlled system may be required, e.g., that the external disturbance is bounded or that all states of the controlled system are measurable. These requirements are not easy to satisfy in practical control applications.

NNs in general can identify patterns according to their relationships, responding to related patterns with a similar output. They are trained to classify certain patterns into groups, and are then used to identify new ones that were never presented before. NNs can correctly identify incomplete or similar patterns; a NN utilizes only absolute values of input variables, but these can differ enormously while their relations remain the same. Likewise, we can reason about identification of unknown dependencies in the input data, which the NN should learn. This can be regarded as pattern abstraction, similar to brain functionality, where identification is based not on the values of the variables but only on their relations.

In the hope of capturing the complexity of a process, wavelet theory has been combined with NNs to create Wavelet Neural Networks (WNN). The training algorithms for WNNs typically converge in a smaller number of iterations than those for conventional NNs (Ho, Ping-Au, & Jinhua, 2001). Unlike the sigmoid functions used in conventional NNs, the second layer of a WNN takes a wavelet form, in which translation and dilation parameters are included. WNNs have thus proved better than other NNs in that their structure provides more potential to enrich the mapping relationship between inputs and outputs (Ho, Ping-Au, & Jinhua, 2001). Much research has been done on applications of WNNs, which combine the capability of artificial NNs for learning from processes with the capability of wavelet decomposition (Chen & Hsiao, 1999), for the identification and control of dynamic systems (Zhang, 1997). Zhang (1997) described a WNN for function learning and estimation; the structure of this network is similar to that of the radial basis function (RBF) network, except that the radial functions are replaced by orthonormal scaling functions. In that study, the family of basis functions for the RBF network is replaced by an orthogonal basis (i.e., the scaling functions in the theory of wavelets) to form a WNN. WNNs offer a good compromise between robust implementations, resulting from the redundancy characteristic of non-orthogonal wavelets and neural systems, and efficient functional representations that build on the time-frequency localization property of wavelets.
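The wavelet layer just described can be sketched as follows. This is a minimal illustration, not the chapter's actual formulation: the Mexican-hat mother wavelet, the scalar input, and the function names are assumptions:

```python
import numpy as np

# Minimal wavelet-layer sketch: each hidden unit applies a mother wavelet
# shifted by its translation t_k and scaled by its dilation d_k; the
# output is a weighted sum. Mexican-hat wavelet chosen for illustration.

def mexican_hat(z):
    """Mother wavelet: psi(z) = (1 - z**2) * exp(-z**2 / 2)."""
    return (1.0 - z ** 2) * np.exp(-(z ** 2) / 2.0)

def wnn_output(x, weights, translations, dilations):
    """y(x) = sum_k w_k * psi((x - t_k) / d_k) for a scalar input x."""
    z = (x - translations) / dilations
    return float(np.dot(weights, mexican_hat(z)))
```

Training would adjust `weights`, `translations`, and `dilations` jointly; the trainable translation and dilation parameters are exactly the extra flexibility the wavelet layer adds over fixed sigmoid activations.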

## **3. Problem formulation**


Due to the rapid development of power semiconductor devices, switching power supplies are popular in modern industrial applications such as personal computers, computer peripherals, and adapters. To obtain high-quality power systems, the most popular control technique for switching power supplies is the Pulse Width Modulation (PWM) approach (Pressman, Billings, & Morey, 2009). By varying the duty ratio of the PWM modulator, the switching power supply can convert one level of electrical voltage into the desired level. From the control viewpoint, the controller design of the switching power supply is an intriguing issue: it must cope with wide input voltage and load resistance variations to ensure stability in any operating condition while providing fast transient response. Over the past decade, many different approaches have been proposed for PWM switching control design based on PI control (Alvarez-Ramirez et al., 2001), optimal control (Hsieh, Yen, & Juang, 2005), sliding-mode control (Vidal-Idiarte et al., 2004), fuzzy control (Vidal-Idiarte et al., 2004), and adaptive control (Mayosky & Cancelo, 1999) techniques. However, most of these approaches require a time-consuming trial-and-error tuning procedure to achieve satisfactory performance for specific models; some cannot maintain satisfactory performance under changes of operating point; and some do not provide a stability analysis. The motivation of this chapter is to design an Adaptive Wavelet Neural Network (AWNN) control system for the Buck type switching power supply. The proposed AWNN control system comprises a neural controller and a compensating controller. The neural controller, using a WNN, is designed to mimic an ideal controller, and a robust controller is designed to compensate for the approximation error between the ideal controller and the neural controller.
The online adaptive laws are derived based on the Lyapunov stability theorem so that the stability of the system can be guaranteed. Finally, the proposed AWNN control scheme is applied to control a Buck type switching power supply. The simulation results demonstrate that the proposed AWNN control scheme achieves favorable control performance even when the switching power supply is subjected to input voltage and load resistance variations.
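As a toy numerical illustration of the duty-ratio idea (all values below are hypothetical, not taken from the chapter), averaging an ideal PWM waveform over one switching period recovers the scaled output level D·Vi:

```python
# Toy illustration of duty-ratio control: over one switching period the
# PWM stage applies Vi for a fraction D of the time and 0 V otherwise,
# so the cycle-averaged voltage handed to the LC filter is D*Vi.
# All values are hypothetical.
Vi = 12.0      # input voltage (V)
D = 0.4        # duty ratio commanded by the controller
steps = 1000   # sub-intervals of one switching period

waveform = [Vi if k < D * steps else 0.0 for k in range(steps)]
v_avg = sum(waveform) / steps
print(v_avg)   # -> 4.8  (= D * Vi)
```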


Among the various switching control methods, PWM, which is based on fast switching and duty-ratio control, is the most widely considered one. The switching frequency is constant, and the duty cycle U(N) varies with the load resistance fluctuations at the N-th sampling time. The output of the designed controller, U(N), is the duty cycle.

Fig. 2. Buck type switching power supply

This duty cycle signal is then sent to a PWM output stage that generates the appropriate switching pattern for the switching power supply. A forward switching power supply (Buck converter) is discussed in this study as shown in Fig. 2, where *Vi* and *Vo* are the input and output voltages of the converter, respectively, *L* is the inductor, *C* is the output capacitor, *R* is the load resistor, and *Q*1 and *Q*2 are the transistors which control the converter circuit operating in different modes. Figure 2 shows a synchronous Buck converter. It is called synchronous because transistor *Q*2 is switched on and off synchronously with the operation of the primary switch *Q*1. The idea of a synchronous buck converter is to use a MOSFET as a rectifier that has a very low forward voltage drop compared to a standard rectifier. By lowering the diode's voltage drop, the overall efficiency of the buck converter can be improved. The synchronous rectifier (MOSFET *Q*2) requires a second PWM signal that is the complement of the primary PWM signal: *Q*2 is on when *Q*1 is off and vice versa. This PWM format is called Complementary PWM. When *Q*1 is ON and *Q*2 is OFF, *Vi* generates:

$$V_x = V_i - V_{lost} \tag{1}$$

where *Vlost* denotes the voltage drop caused by the transistors and represents the unmodeled dynamics in practical applications. The transistor *Q*2 ensures that only positive voltages are applied to the output circuit, while transistor *Q*1 provides a circulating path for the inductor current. The output voltage can be expressed as:


$$\begin{cases} C\,\dfrac{dV_C(t)}{dt} = I_L(t) - \dfrac{V_C(t)}{R} \\ L\,\dfrac{dI_L(t)}{dt} = U(t)V_x(t) - V_C(t) \\ V_o(t) = V_C(t) \end{cases} \tag{2}$$
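The averaged model in (2) can be sanity-checked with a simple forward-Euler integration; the component values below are illustrative assumptions, not the chapter's:

```python
# Forward-Euler integration of the averaged Buck model (2):
#   C*dVc/dt = IL - Vc/R,   L*dIL/dt = U*Vx - Vc,   Vo = Vc.
# With a constant duty cycle U, the output settles near U*Vx.
# Component values are illustrative assumptions, not the chapter's.
L_, C_, R_ = 100e-6, 470e-6, 10.0   # inductor, capacitor, load
Vx, U = 12.0, 0.5                   # effective input voltage, duty cycle
dt, T = 1e-6, 0.08
vc, il = 0.0, 0.0
for _ in range(int(T / dt)):
    dvc = (il - vc / R_) / C_
    dil = (U * Vx - vc) / L_
    vc += dt * dvc
    il += dt * dil
vo = vc
print(round(vo, 2))  # settles near U*Vx = 6.0 V
```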

Combining the equations in (2) yields a nonlinear dynamics that must be transformed into a linear one:

$$\frac{d^2V_o(t)}{dt^2} = -\frac{1}{LC}V_o(t) - \frac{1}{RC}\frac{dV_o(t)}{dt} + \frac{1}{LC}U(t)V_x(t) \tag{3}$$

where V_x(t)/LC is the control gain, which is positive, and U(t) is the output of the controller. The control problem of Buck type switching power supplies is to control the duty cycle U(t) so that the output voltage V_o provides a fixed voltage under uncertainties such as wide input voltage and load variations. The output error voltage vector is defined as:

$$\mathbf{e}(t) = \begin{bmatrix} V_d(t) \\ \dfrac{dV_d(t)}{dt} \end{bmatrix} - \begin{bmatrix} V_o(t) \\ \dfrac{dV_o(t)}{dt} \end{bmatrix} \tag{4}$$

where *Vd* is the desired output voltage. The control law of the duty cycle is determined by the error voltage signal in order to provide fast transient response and small overshoot in the output voltage. If the system parameters are well known, the following ideal controller would transform the original nonlinear dynamics into a linear one:

$$U^*(t) = \frac{1}{V_x(t)}\left[V_o(t) + \frac{L}{R}\frac{dV_o(t)}{dt} + LC\frac{d^2V_d(t)}{dt^2} + LC\,\mathbf{K}^T\mathbf{e}(t)\right] \tag{5}$$

If **K** = [k₂, k₁]ᵀ is chosen to correspond to the coefficients of a Hurwitz polynomial, i.e., a polynomial whose roots lie strictly in the open left half of the complex plane, then satisfactory behavior of the closed-loop linear system is ensured, and the linear system would be as follows:

$$\frac{d^2e(t)}{dt^2} + k_1\frac{de(t)}{dt} + k_2\,e(t) = 0 \quad \Rightarrow \quad \lim_{t\to\infty} e(t) = 0 \tag{6}$$
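A sketch of the ideal-controller idea in (5)–(6) on the averaged dynamics: with the tracking error taken as reference minus output and Hurwitz gains k₁, k₂, the closed-loop error obeys ë + k₁ė + k₂e = 0 and decays to zero. All numeric values are illustrative assumptions:

```python
# Sketch of the ideal controller (5) on the averaged dynamics (3) for a
# constant reference Vd (its time derivatives vanish).  The tracking
# error is taken as reference minus output, e = Vd - Vo, and the gains
# place both closed-loop poles of e'' + k1*e' + k2*e = 0 at -2000 rad/s.
# All numeric values are illustrative assumptions.
L_, C_, R_, Vx, Vd = 100e-6, 470e-6, 10.0, 12.0, 5.0
k1, k2 = 4000.0, 4.0e6
dt, T = 1e-6, 0.02
vo, dvo = 0.0, 0.0
for _ in range(int(T / dt)):
    e, de = Vd - vo, -dvo                       # error vector of (4)
    u = (vo + (L_ / R_) * dvo
         + L_ * C_ * (k1 * de + k2 * e)) / Vx   # ideal duty cycle (5)
    ddvo = (-vo / (L_ * C_) - dvo / (R_ * C_)
            + u * Vx / (L_ * C_))               # plant dynamics (3)
    vo += dt * dvo
    dvo += dt * ddvo
print(round(vo, 4))  # converges to Vd = 5.0
```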

Since the system parameters may be unknown or perturbed, the ideal controller in (5) cannot be precisely implemented. The parameter variations of the system are difficult to monitor, and the exact value of the external load disturbance is also difficult to measure in advance in practical applications. Therefore, an intuitive candidate for U*(t) would be an AWNN controller (Fig. 1):

$$U_{AWNN}(t) = U_{WNN}(t) + U_A(t) \tag{7}$$


where U_WNN(t) is a WNN controller, which is rich enough to approximate the system parameters, and U_A(t) is a robust controller. The WNN controller is the main tracking controller, used to mimic the computed control law, and the robust controller is designed to compensate for the difference between the computed control law and the WNN controller.

Now the problem is divided into two tasks:

• How to update the parameters of the WNN incrementally so that it approximates the system during the whole process.
• How to apply U_A(t) to guarantee global stability while the WNN is approximating the system.

The first task is not too difficult as long as the WNN is equipped with enough parameters to approximate the system. For the second task, we need to apply a branch of nonlinear control theory called *sliding control* (Slotine & Li, 1991). This method has been developed to handle performance and robustness objectives. It can be applied to systems where the plant model and the control gain are not exactly known, but bounded.

The robust controller is derived from the Lyapunov theorem to cope with all system uncertainties in order to guarantee stable control. Substituting (7) into (3), we get:

$$\frac{d^2V_o(t)}{dt^2} = -\frac{1}{LC}V_o(t) - \frac{1}{RC}\frac{dV_o(t)}{dt} + \frac{1}{LC}U_{AWNN}(t)V_x(t) \tag{8}$$

The error equation governing the system can be obtained by combining (6) and (8), i.e.

$$\frac{d^2e(t)}{dt^2} + k_1\frac{de(t)}{dt} + k_2\,e(t) = \frac{1}{LC}V_x(t)\big(U^*(t) - U_{WNN}(t) - U_A(t)\big) \tag{9}$$

### **4. Wavelet neural network controller**

Feed forward NNs are composed of layers of neurons in which the input layer of neurons is connected to the output layer of neurons through one or more layers of intermediate neurons. The notion of a WNN was proposed as an alternative to feed forward NNs for approximating arbitrary nonlinear functions based on the wavelet transform theory, and a back propagation algorithm was adapted for WNN training. From the point of view of function representation, the traditional radial basis function (RBF) networks can represent any function that is in the space spanned by the family of basis functions. However, the basis functions in the family are generally not orthogonal and are redundant. It means that the RBF network representation for a given function is not unique and is probably not the most efficient. Representing a continuous function by a weighted sum of basis functions can be made unique if the basis functions are orthonormal.

It was proved that NNs can be designed to represent such expansions with a desired degree of accuracy. NNs are used in function approximation, pattern classification and in data mining, but they cannot characterize local features like jumps in values well. The local features may exist in time or frequency. Wavelets have many desired properties combined together, like compact support, orthogonality, localization in time and frequency, and fast algorithms. The improvement in their characterization will result in data compression and subsequent modification of classification tools.



In this study a two-layer WNN (Fig. 3), comprised of a product layer and an output layer, was adopted to implement the proposed WNN controller. The standard approach in sliding control is to define an integrated error function, similar to a PID function. The control signal U(t) is calculated in such a way that the closed-loop system reaches a predefined sliding surface S(t) and remains on it. The control signal required for the system to remain on this sliding surface is called the equivalent control U*(t). The sliding surface is defined as follows:

$$S(t) = \left(\frac{d}{dt} + \kappa\right)e(t), \quad \kappa > 0 \tag{10}$$

where κ is a strictly positive constant. The equivalent control is obtained from the requirement S(t) = 0, which defines a time-varying hyperplane in ℝ² on which the tracking error vector e(t) decays exponentially to zero, so that perfect tracking is obtained asymptotically. Moreover, if we can maintain the following condition:

$$\frac{d|S(t)|}{dt} < -\eta \tag{11}$$

where η is a strictly positive constant, then S(t) will approach the hyperplane S(t) = 0 in a finite time less than or equal to |S(0)|/η. In other words, by maintaining the condition in equation (11), S(t) reaches the sliding surface S(t) = 0 in finite time, and the error e(t) then converges to the origin exponentially with a time constant 1/κ. If k₂ = 0 and κ = k₁, it follows from (6) and (10) that:

$$\frac{dS(t)}{dt} = \frac{d^2e(t)}{dt^2} + k_1\frac{de(t)}{dt} \tag{12}$$

The inputs of the WNN are S and dS/dt, which in the discrete domain equals S(1 − z⁻¹), where z⁻¹ is a unit time delay. Note that the change of the integrated error function, S(1 − z⁻¹), is utilized as an input to the WNN to avoid the noise induced by differentiating the integrated error function dS/dt. The output of the WNN is U_WNN(t). A family of wavelets is constructed by translations and dilations performed on a single fixed function called the mother wavelet. Using wavelet functions with time-frequency localization properties is very effective: if the dilation parameter is changed, the support width of the wavelet function changes, but the number of cycles does not. Thus the first derivative of a Gaussian function, Φ(x) = −x exp(−x²/2), was adopted as the mother wavelet in this study. It may be regarded as a differentiable version of the Haar mother wavelet, just as the sigmoid is a differentiable version of a step function, and it has the universal approximation property.
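A minimal sketch of the two WNN inputs and the Gaussian-derivative mother wavelet described above (the sample values are hypothetical):

```python
from math import exp

# The mother wavelet adopted in the chapter, Phi(x) = -x*exp(-x**2/2)
# (the first derivative of a Gaussian), together with the two WNN
# inputs: the integrated error S and its discrete change S(1 - z^-1),
# used instead of the noisy derivative dS/dt.  Sample values are
# illustrative.
def phi(x):
    return -x * exp(-x * x / 2.0)

S_prev, S_now = 0.10, 0.12     # two consecutive samples of S(t)
delta_S = S_now - S_prev       # discrete substitute for dS/dt
wnn_inputs = (S_now, delta_S)  # fed to the input layer, eq. (13)
```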


Fig. 3. Two-layer product WNN structure.

### **4.1 Input layer**

$$net_i^1 = x_i^1; \quad y_i^1 = f_i^1(net_i^1) = net_i^1, \quad i = 1, 2 \tag{13}$$

where *i* = 1, 2 indexes the two inputs and the superscript denotes the layer.

### **4.2 Wavelet layer**

A family of wavelets is constructed by translations and dilations performed on the mother wavelet. In the mother wavelet layer each node performs a wavelet *Φj* that is derived from its mother wavelet. For the *j* th node:

$$net_{ij}^2 = \frac{x_i - m_{ij}}{d_{ij}}, \quad y_j^2 = f_j^2\big(net_{ij}^2\big) = \prod_{i=1}^{2}\Phi_j\big(net_{ij}^2\big), \quad j = 1, 2, \dots, n_M \tag{14}$$

There are many kinds of wavelets that can be used in a WNN. In this study, the first derivative of a Gaussian function is selected as the mother wavelet, for the reasons given above.

### **4.3 Output layer**


The single node in the output layer is labeled as ∑ , which computes the overall output as the summation of all input signals.

$$net_o^3 = \sum_{k=1}^{n_M}\alpha_k^3\, y_k^3, \qquad y_o^3 = f_o^3(net_o^3) = net_o^3 \tag{15}$$

The output of the last layer is U_WNN. The output of the WNN can thus be represented as:

$$U_{WNN}(S, M, D, \Theta) = \Theta^{T}\Gamma \tag{16}$$

where $\Gamma = [y_1^3, y_2^3, \dots, y_{n_M}^3]^T$, $\Theta = [\alpha_1, \alpha_2, \dots, \alpha_{n_M}]^T$, $M = [m_1, m_2, \dots, m_{n_M}]^T$ and $D = [d_1, d_2, \dots, d_{n_M}]^T$.
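The forward pass (13)–(16) can be sketched as follows; the helper name `wnn_forward` and all parameter values are hypothetical, not from the chapter:

```python
from math import exp

# Forward pass of the two-layer product WNN of eqs. (13)-(16):
# wavelet nodes y_j = prod_i Phi((x_i - m_ij)/d_ij), then the linear
# output U_WNN = Theta^T Gamma.  The helper name `wnn_forward` and all
# parameter values are hypothetical.
def phi(x):
    return -x * exp(-x * x / 2.0)   # Gaussian-derivative mother wavelet

def wnn_forward(x, M, D, Theta):
    """x: the 2 inputs (S, delta_S); M, D: per-node translation and
    dilation pairs; Theta: output weights.  Returns (U_WNN, Gamma)."""
    Gamma = []
    for m_j, d_j in zip(M, D):
        y = 1.0
        for xi, mij, dij in zip(x, m_j, d_j):
            y *= phi((xi - mij) / dij)            # wavelet layer, eq. (14)
        Gamma.append(y)
    U = sum(a * g for a, g in zip(Theta, Gamma))  # output, eqs. (15)-(16)
    return U, Gamma

M = [[0.0, 0.0], [0.5, -0.5]]   # translations m_ij
D = [[1.0, 1.0], [2.0, 2.0]]    # dilations d_ij
Theta = [0.3, -0.7]             # output weights alpha_j
U, Gamma = wnn_forward([0.1, 0.02], M, D, Theta)
print(round(U, 5))              # -> approximately 0.03509
```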

### **5. Robust controller**

First we begin with translating a robust control problem into an optimal control problem. Since we know how to solve a large class of optimal control problems, this optimal control approach allows us to solve some robust control problems that cannot be easily solved otherwise. By the universal approximation theorem, there exists an optimal neural controller *U (t) nc* such that (Lin, 2007):

$$
\varepsilon = U_{nc}(t) - U^{*}(t) \tag{17}
$$

To develop the robust controller, first, the minimum approximation error is defined as follows:

$$\begin{aligned} \varepsilon &= U_{WNN}^{*}(S, M^{*}, D^{*}, \Theta^{*}) - U^{*}(t) \\ &= \Theta^{*T}\Gamma^{*} - U^{*}(t) \end{aligned} \tag{18}$$

where M*, D*, Θ* are the optimal network parameter vectors that achieve the minimum approximation error. After some straightforward manipulation, the error equation governing the closed-loop system can be obtained:

$$\dot{S}(t) = \frac{1}{LC}V_x(t)\big(U^{*}(t) - U_{WNN}(t) - U_A(t)\big) \tag{19}$$

Define $\tilde{U}_{WNN}$ as:

$$\begin{aligned} \tilde{U}_{WNN} &= U^{*}(t) - U_{WNN}(t) = U_{WNN}^{*}(t) - U_{WNN}(t) - \varepsilon \\ &= \Theta^{*T}\Gamma^{*} - \Theta^{T}\Gamma - \varepsilon \end{aligned} \tag{20}$$

For simplicity of discussion, define $\tilde{\Theta} = \Theta^{*} - \Theta$ and $\tilde{\Gamma} = \Gamma^{*} - \Gamma$ to obtain a rewritten form of (20):

$$
\tilde{U}_{WNN} = \Theta^{*T}\tilde{\Gamma} + \tilde{\Theta}^{T}\Gamma - \varepsilon \tag{21}
$$

Robust Adaptive Wavelet Neural Network Control of Buck Converters 125

From the preceding section, the approximation error between the ideal and the actual WNN control efforts is

$$
\tilde{U}_{WNN}(t) = U^*_{WNN}(t) - U_{WNN}(t) = \Theta^{*T}\Gamma^* - \Theta^T\Gamma - \varepsilon \tag{20}
$$

For simplicity of discussion, define *Θ̃* = *Θ*<sup>\*</sup> − *Θ* and *Γ̃* = *Γ*<sup>\*</sup> − *Γ* to obtain a rewritten form of (20):

$$
\tilde{U}_{WNN} = \Theta^{*T}\tilde{\Gamma} + \tilde{\Theta}^T\Gamma - \varepsilon \tag{21}
$$

In this study, a method is proposed to guarantee closed-loop stability and perfect tracking performance, and to tune the translations and dilations of the wavelets online. The linearization technique was employed to transform the nonlinear wavelet functions into partially linear form, obtaining the expansion of *Γ̃* in a Taylor series:

$$
\tilde{\Gamma} = \begin{bmatrix} \tilde{y}_1 \\ \tilde{y}_2 \\ \vdots \\ \tilde{y}_{n_M} \end{bmatrix} = \begin{bmatrix} \frac{\partial y_1}{\partial M} \\ \frac{\partial y_2}{\partial M} \\ \vdots \\ \frac{\partial y_{n_M}}{\partial M} \end{bmatrix} \tilde{M} + \begin{bmatrix} \frac{\partial y_1}{\partial D} \\ \frac{\partial y_2}{\partial D} \\ \vdots \\ \frac{\partial y_{n_M}}{\partial D} \end{bmatrix} \tilde{D} + H \tag{22}
$$

$$
\tilde{\Gamma} = A\tilde{M} + B\tilde{D} + H \tag{23}
$$

where *M̃* = *M*<sup>\*</sup> − *M* , *D̃* = *D*<sup>\*</sup> − *D* ; *H* is a vector of higher-order terms, and:

$$A = \begin{bmatrix} \frac{\partial y_1}{\partial M} & \frac{\partial y_2}{\partial M} & \cdots & \frac{\partial y_{n_M}}{\partial M} \end{bmatrix}^T \tag{24}$$

$$B = \begin{bmatrix} \frac{\partial y_1}{\partial D} & \frac{\partial y_2}{\partial D} & \cdots & \frac{\partial y_{n_M}}{\partial D} \end{bmatrix}^T \tag{25}$$

Substituting (23) into (21) yields:

$$\begin{split} \tilde{U}_{WNN} &= \big(\Theta + \tilde{\Theta}\big)^T \tilde{\Gamma} + \tilde{\Theta}^T \Gamma - \varepsilon \\ &= \Theta^T \big( A\tilde{M} + B\tilde{D} + H \big) + \tilde{\Theta}^T \tilde{\Gamma} + \tilde{\Theta}^T \Gamma - \varepsilon \\ &= \tilde{\Theta}^T \Gamma + \Theta^T A\tilde{M} + \Theta^T B\tilde{D} + \psi \end{split} \tag{26}$$

where the lumped uncertainty *ψ* = *Θ̃*<sup>T</sup>*Γ̃* + *Θ*<sup>T</sup>*H* − *ε* is assumed to be bounded by |*ψ*| < *ρ* , in which |·| is the absolute value and *ρ* is a given positive constant. The estimation error of this bound is defined as:

$$
\tilde{\rho}(t) = \hat{\rho}(t) - \rho \tag{27}
$$
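The quantities above can be made concrete with a small numerical sketch. Everything below is illustrative: the mother wavelet (a first derivative of a Gaussian) and all names are assumptions, since the chapter fixes only the structure *U<sub>WNN</sub>* = *Θ*<sup>T</sup>*Γ* with translations *M* and dilations *D*; the Jacobians correspond to the rows of *A* and *B* in (24)-(25).

```python
import numpy as np

def psi(z):
    # Assumed mother wavelet: first derivative of a Gaussian.
    return -z * np.exp(-0.5 * z**2)

def dpsi(z):
    # d/dz of the mother wavelet above: (z^2 - 1) * exp(-z^2 / 2).
    return (z**2 - 1.0) * np.exp(-0.5 * z**2)

def wnn_output(x, theta, m, d):
    """U_WNN = Theta^T Gamma for a scalar input x.
    theta, m, d: (n_M,) output weights, translations, dilations."""
    gamma = psi((x - m) / d)
    return theta @ gamma, gamma

def wnn_jacobians(x, m, d):
    """A and B as in (24)-(25): row i holds d(gamma_i)/dM and d(gamma_i)/dD.
    Each node depends only on its own (m_i, d_i), so A and B come out diagonal."""
    z = (x - m) / d
    A = np.diag(dpsi(z) * (-1.0 / d))   # d(gamma_i)/dm_i
    B = np.diag(dpsi(z) * (-z / d))     # d(gamma_i)/dd_i
    return A, B
```

A finite-difference check of single entries of *A* and *B* confirms the chain rule used in the comments.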

## **6. Stability analysis**

124 Recent Advances in Robust Control – Novel Approaches and Design Methods


System performance to be achieved by control can be characterized in terms of stability or optimality, which are the most important issues in any control system. Briefly, a system is said to be stable if it returns to its equilibrium state after any external input, initial condition, and/or disturbance that has been impressed on it. An unstable system is of no practical value. The issue of stability is of even greater relevance when safety and accuracy are at stake, as in buck-type switching power supplies. The stability of WNN control systems, or the lack of a test for it, has been a subject of criticism in the control engineering literature. One of the most fundamental analysis tools is Lyapunov's method: stability is established by showing that the time derivative of a Lyapunov function along the system trajectories is negative semi-definite. One approach is to define a Lyapunov function and then derive the WNN controller architecture from the stability conditions (Lin, Hung, & Hsu, 2007).

Define a Lyapunov function as:

$$V_A\big(S(t),\tilde{\rho}(t),\tilde{\Theta},\tilde{M},\tilde{D}\big) = \frac{1}{2}S^2(t) + \frac{V_x(t)}{2\lambda LC}\tilde{\rho}^2(t) + \frac{V_x(t)}{2\eta_1 LC}\tilde{\Theta}^T\tilde{\Theta} + \frac{V_x(t)}{2\eta_2 LC}\tilde{M}^T\tilde{M} + \frac{V_x(t)}{2\eta_3 LC}\tilde{D}^T\tilde{D} \tag{28}$$

where *λ* , *η*<sub>1</sub> , *η*<sub>2</sub> and *η*<sub>3</sub> are positive learning-rate constants. Differentiating (28) with respect to time (treating *V<sub>x</sub>(t)* as slowly varying) and using (19), it is concluded that:

$$\begin{split} \dot{V}_A &= S(t)\frac{V_x(t)}{LC}\Big[U^*(t) - U_{WNN}(t) - U_A(t)\Big] \\ &\quad + \frac{V_x(t)}{\lambda LC}\tilde{\rho}(t)\dot{\hat{\rho}}(t) - \frac{V_x(t)}{LC}\Big[\frac{1}{\eta_1}\tilde{\Theta}^T\dot{\Theta} + \frac{1}{\eta_2}\tilde{M}^T\dot{M} + \frac{1}{\eta_3}\tilde{D}^T\dot{D}\Big] \end{split} \tag{29}$$

For achieving *V̇<sub>A</sub>* ≤ 0 , the adaptive laws and the compensating controller are chosen as:

$$
\dot{\Theta} = \eta_1 S(t)\Gamma \,,\quad \dot{M} = \eta_2 S(t)A^T\Theta \,,\quad \dot{D} = \eta_3 S(t)B^T\Theta \tag{30}
$$

$$U_A(t) = \hat{\rho}(t)\,\mathrm{sgn}\big(S(t)\big) \tag{31}$$

$$
\dot{\hat{\rho}}(t) = \lambda \left| S(t) \right| \tag{32}
$$

If the adaptation laws of the WNN controller are chosen as (30) and the robust controller is designed as (31), then (29) can be rewritten as follows:


$$\begin{split} \dot{V}_A &= \frac{V_x(t)}{LC}S(t)\psi - \rho\frac{V_x(t)}{LC}\big|S(t)\big| \le \frac{V_x(t)}{LC}\big|S(t)\big|\,\big|\psi\big| - \rho\frac{V_x(t)}{LC}\big|S(t)\big| \\ &= \frac{V_x(t)}{LC}\big|S(t)\big|\Big[\big|\psi\big| - \rho\Big] \le 0 \end{split} \tag{33}$$

Since *V̇<sub>A</sub>* ≤ 0 , *V̇<sub>A</sub>* is negative semi-definite, which implies:

$$V_A\big(S(t),\tilde{\rho}(t),\tilde{\Theta}(t),\tilde{M}(t),\tilde{D}(t)\big) \le V_A\big(S(0),\tilde{\rho}(0),\tilde{\Theta}(0),\tilde{M}(0),\tilde{D}(0)\big) \tag{34}$$

This implies that *S(t)* , *Θ̃* , *M̃* and *D̃* are bounded. By Barbalat's lemma (Slotine & Li, 1991), it can further be shown that *S(t)* → 0 as *t* → ∞ . As a result, the stability of the system is guaranteed. Moreover, the tracking error of the control system, *e* , converges to zero as *S(t)* → 0 .

It can be verified that the proposed system not only guarantees stable control performance but also requires no prior knowledge of the controlled plant in the design process. Since the WNN introduces the wavelet decomposition property into a general NN, and the adaptation laws of the WNN controller are derived in the sense of Lyapunov stability, the proposed control system has two main advantages over prior ones: faster network convergence and stable control performance.

The adaptive bound estimation algorithm in (32) always produces a positive value, and tracking error introduced by any uncertainty, such as sensor error or accumulated numerical error, will cause the estimated bound *ρ̂(t)* to increase unless the integrated error function *S(t)* converges quickly to zero. As a result, the actuator will eventually saturate and the system may become unstable. To avoid this phenomenon in practical applications, an estimation index *I* is introduced into the bound estimation algorithm as *ρ̂̇(t)* = *I λ* |*S(t)*| . If the magnitude of the integrated error function is smaller than a predefined value *S*<sub>0</sub> , the WNN controller dominates the control characteristic; therefore, the control gain of the robust controller is fixed at the previously adjusted value (i.e., *I* = 0 ). However, when the magnitude of the integrated error function is larger than *S*<sub>0</sub> , the deviation of the states from the reference trajectory requires a continuous update of *ρ̂(t)* , generated by the estimation algorithm (i.e., *I* = 1 ), so that the robust controller steers the system trajectory quickly back onto the reference trajectory (Bouzari, Moradi, & Bouzari, 2008).
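As a concrete sketch, one Euler-discretized controller step combining the adaptation laws (30), the robust term (31) and the gated bound estimator *ρ̂̇(t)* = *Iλ*|*S(t)*| might look as follows. The function name, the time step `dt` and the calling convention are assumptions made here, not the authors' implementation.

```python
import numpy as np

def awnn_control_step(S, gamma, A, B, theta, m, d, rho_hat,
                      eta1, eta2, eta3, lam, S0, dt):
    """One discrete update of the AWNN controller (illustrative Euler step).

    S      : integrated error function S(t)
    gamma  : wavelet basis vector Gamma
    A, B   : Jacobians from (24)-(25)
    returns the control signal U = U_WNN + U_A and the updated parameters.
    """
    # Adaptation laws (30).
    theta = theta + dt * eta1 * S * gamma
    m     = m     + dt * eta2 * S * (A.T @ theta)
    d     = d     + dt * eta3 * S * (B.T @ theta)

    # Gated bound estimation: freeze rho_hat inside the dead zone |S| <= S0.
    I = 1.0 if abs(S) > S0 else 0.0
    rho_hat = rho_hat + dt * I * lam * abs(S)

    # WNN effort plus robust compensator (31).
    u = theta @ gamma + rho_hat * np.sign(S)
    return u, theta, m, d, rho_hat
```

Note how the dead zone keeps the estimated bound from growing without limit once tracking is good, which is exactly the saturation argument made above.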

## **7. Numerical simulation results**

In the first part of this section, AWNN results are presented to demonstrate the efficiency of the proposed approach. The performance of the proposed AWNN-controlled system is compared with two other control schemes, i.e. a PID compensator and a NN Predictive Controller (NNPC). The most obvious shortcoming of these conventional controllers is that they cannot adapt to system variations beyond those they were originally designed for. In this study, some parameters may be chosen as fixed constants, since the experimental results are not sensitive to them. The principle for determining the best parameter values is the perceptual quality of the final results. We are most interested in four major characteristics of the closed-loop step response. They are: *Rise Time*: the time it takes for the plant output to rise beyond 90% of the desired level for the first time; *Overshoot*: how much the peak level exceeds the steady state, normalized against the steady state; *Settling Time*: the time it takes for the system to converge to its steady state; *Steady-state Error*: the difference between the steady-state output and the desired output. Specifically speaking, control results are preferable with the following characteristics:

*Rise Time, Overshoot, Settling Time* and *Steady-state Error*: as small as possible.
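These four metrics can be computed directly from a sampled step response. The 2% settling band below is a common convention assumed here; the chapter does not state which band it uses.

```python
import numpy as np

def step_metrics(t, y, y_ref, band=0.02):
    """Rise time, overshoot, settling time and steady-state error of a
    sampled step response y(t) against a constant command y_ref."""
    y_ss = y[-1]                                    # steady-state level
    rise_time = t[np.argmax(y >= 0.9 * y_ref)]      # first 90% crossing
    overshoot = max(0.0, (y.max() - y_ss) / abs(y_ss))
    outside = np.flatnonzero(np.abs(y - y_ss) > band * abs(y_ss))
    settling_time = t[min(outside[-1] + 1, len(t) - 1)] if outside.size else t[0]
    ss_error = abs(y_ref - y_ss)
    return rise_time, overshoot, settling_time, ss_error
```

For a monotone first-order response the overshoot comes out zero and the rise and settling times reduce to the familiar logarithmic expressions.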

### **7.1 AWNN controller**


Here, the control results are completely determined by the parameters listed in Table 1. The converter runs at a switching frequency of 20 kHz and the controller runs at a sampling frequency of 1 kHz. The experimental cases are as follows. Load-resistance variations with step changes are tested: *1)* from 20 Ω to 4 Ω with a slope of 300 ms, *2)* from 4 Ω to 20 Ω with a slope of 500 ms, and *3)* from 20 Ω to 4 Ω with a slope of 700 ms. The input voltage varies randomly between 19 V and 21 V.


| *C* | *L* | *k*<sub>1</sub> | *η*<sub>1</sub> | *η*<sub>2</sub> | *η*<sub>3</sub> | *λ* | *S*<sub>0</sub> | *n<sub>M</sub>* |
|---|---|---|---|---|---|---|---|---|
| 2.2 mF | 0.5 mH | 2 | 0.001 | 0.001 | 0.001 | 8 | 0.1 | 7 |

Table 1. Simulation Parameters.
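For reference, the buck power stage these parameters describe can be simulated with the standard averaged model *L di/dt* = *u·V<sub>in</sub>* − *v<sub>o</sub>* and *C dv<sub>o</sub>/dt* = *i<sub>L</sub>* − *v<sub>o</sub>/R*. The semi-implicit integrator and the function name are choices made here, not taken from the chapter; only *L*, *C* and the 20 kHz period come from Table 1.

```python
def buck_averaged_step(i_L, v_o, u, V_in, R,
                       L=0.5e-3, C=2.2e-3, dt=1.0 / 20e3):
    """One switching-period update of the averaged buck converter
    L*di/dt = u*V_in - v_o,  C*dv/dt = i_L - v_o/R,
    with L, C from Table 1 and dt the 20 kHz switching period.
    u in [0, 1] is the duty cycle. Semi-implicit Euler (current first,
    then voltage with the new current) keeps the lightly damped LC
    oscillation numerically stable."""
    i_L = i_L + dt * (u * V_in - v_o) / L
    v_o = v_o + dt * (i_L - v_o / R) / C
    return i_L, v_o
```

With a constant duty cycle *u* = 0.15 and *V<sub>in</sub>* = 20 V, the output settles toward *u·V<sub>in</sub>* = 3 V, the step reference used below.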

At the first stage, the reference is chosen as a Step function with amplitude of 3 V.

Fig. 4. Output Voltage, Command (Reference) Voltage.

Fig. 5. Output Current.

Fig. 6. Error Signal.
At the second stage, the command is a burst signal which changes from zero to 2 V and back with a period of 3 seconds, repetitively. The results shown in Fig. 7 to Fig. 9 demonstrate that the output voltage follows the command in an acceptable manner from the beginning. It can be seen that after each step the controller learns the system better and therefore adapts better. If the input command has no discontinuity, the controller can track it without much settling time. Big jumps in the input command have a strongly negative impact on the controller. This means that to obtain fast tracking of the input commands, the different states of the command must be continuous or have discontinuities very close to each other.

Fig. 7. Output Voltage, Command (Reference) Voltage.

Fig. 8. Output Current.

Fig. 9. Error Signal.
At the third stage, to demonstrate the good behavior of the controller, the output voltage follows the *Chirp* signal command perfectly, as shown in Fig. 10 to Fig. 12.

Fig. 10. Output Voltage, Command (Reference) Voltage.

Fig. 11. Output Current.


Fig. 12. Error Signal.

### **7.2 NNPC**

To compare the results with other adaptive control techniques, a Model Predictive Controller (MPC) with a NN as its model descriptor (i.e. NNPC) was implemented. The name NNPC stems from the idea of employing an explicit NN model of the plant to be controlled, which is used to predict the future output behavior. This technique has been widely adopted in industry as an effective means of dealing with multivariable constrained control problems. The prediction capability allows optimal control problems to be solved on-line, where the tracking error, namely the difference between the predicted output and the desired reference, is minimized over a future horizon, possibly subject to constraints on the manipulated inputs and outputs. Therefore, the first stage of NNPC is to train a NN to represent the forward dynamics of the plant. The prediction error between the plant output and the NN output is used as the NN training signal (Fig. 13). The NN plant model can be trained offline using data collected from the operation of the plant.

Fig. 13. NN Plant Model Identification.
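A minimal sketch of this identification stage follows, assuming a toy first-order plant in place of the converter data and plain stochastic gradient descent in place of the Levenberg-Marquardt training listed in Table 3; only the structure (prediction error as training signal) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant(y, u):
    # Toy stand-in for the plant to be identified.
    return 0.8 * y + 0.4 * u

# One-hidden-layer model: y_hat(k+1) = W2 @ tanh(W1 @ [y(k), u(k)] + b1) + b2
n_h = 8
W1 = rng.normal(0.0, 0.5, (n_h, 2)); b1 = np.zeros(n_h)
W2 = rng.normal(0.0, 0.5, n_h);      b2 = 0.0

lr = 0.05
for _ in range(5000):
    y, u = rng.uniform(-1, 1, 2)
    x = np.array([y, u])
    h = np.tanh(W1 @ x + b1)
    e = (W2 @ h + b2) - plant(y, u)   # prediction error = training signal
    dh = e * W2 * (1.0 - h**2)        # backpropagate through tanh
    W2 -= lr * e * h;           b2 -= lr * e
    W1 -= lr * np.outer(dh, x); b1 -= lr * dh
```

After training, the network reproduces the plant's one-step response closely enough to serve as the predictor inside the MPC loop described next.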


The MPC method is based on the receding-horizon technique. The NN model predicts the plant response over a specified time horizon, and the predictions are used by a numerical optimization program to determine the control signal that minimizes the following performance criterion over that horizon (Fig. 15):

$$J = \sum_{j=N_1}^{N_2}\big(y_r(t+j) - y_m(t+j)\big)^2 + \rho\sum_{j=1}^{N_u}\big(u'(t+j-1) - u'(t+j-2)\big)^2 \tag{35}$$

Fig. 14. NNPC Block Diagram.

where *N*<sub>1</sub> , *N*<sub>2</sub> , and *N<sub>u</sub>* define the horizons over which the tracking error and the control increments are evaluated. The *u*′ variable is the tentative control signal, *y<sub>r</sub>* is the desired response, and *y<sub>m</sub>* is the network model response. The *ρ* value determines the contribution of the sum of squared control increments to the performance index. The block diagram in Fig. 14 illustrates the MPC process. The controller consists of the NN plant model and the optimization block. The optimization block determines the values of *u*′ that minimize *J* , and the optimal *u* is then input to the plant.
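The performance index (35) can be sketched as follows. The helper names and the convention of holding the last tentative control beyond the control horizon are assumptions made here.

```python
def mpc_cost(u_seq, u_prev, y_r, model, y0, N1, N2, rho):
    """Performance index J of (35): squared tracking error over [N1, N2]
    plus a rho-weighted penalty on the Nu control increments.
    u_seq : tentative controls u'(t), ..., u'(t + Nu - 1)
    model : one-step predictor, maps (y, u) -> next y
    y_r   : desired response y_r(t+1), ..., y_r(t+N2)"""
    y, y_m = y0, []
    for j in range(1, N2 + 1):
        u = u_seq[min(j - 1, len(u_seq) - 1)]   # hold last u' beyond Nu
        y = model(y, u)
        y_m.append(y)
    track = sum((y_r[j - 1] - y_m[j - 1]) ** 2 for j in range(N1, N2 + 1))
    u_full = [u_prev] + list(u_seq)
    smooth = sum((u_full[j] - u_full[j - 1]) ** 2
                 for j in range(1, len(u_seq) + 1))
    return track + rho * smooth
```

A numerical optimizer (the chapter's optimization block) would minimize this over `u_seq` and apply only the first element, then repeat at the next sample.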


| *N*<sub>2</sub> | *N<sub>u</sub>* | *ρ* | Hidden Layers | Delayed Inputs | Delayed Outputs | Training Algorithm | Optimization Iterations |
|---|---|---|---|---|---|---|---|
| 5 | 2 | 0.05 | 30 | 10 | 20 | Levenberg-Marquardt | 5 |

Table 3. NNPC Simulation Parameters.

Fig. 15. Output Voltage, Command (Reference) Voltage of NNPC.

Fig. 16. Output Voltage, Command (Reference) Voltage of NNPC.

### **7.3 PID controller**


Based on the power stages defined in the previous experiments, a nominal second-order PID compensator (controller) can be designed for the output-voltage feedback loop, using small-signal analysis, to yield guaranteed stable performance. A generic second-order PID compensator is considered, with the following transfer function:

$$G(z) = K + \frac{R\_1}{z - 1} + \frac{R\_2}{z - P} \tag{36}$$

It is assumed that sufficient information about the nominal power stage (i.e., at system startup) is known, such that a conservative compensator design can be performed. The following parameters were used for system initialization of the compensator: *K* = 16.5924 , *R*<sub>1</sub> = 0.0214 , *R*<sub>2</sub> = −15.2527 and *P* = 0 . Figure 17 shows the Bode plot of the considered PID compensator. The output voltages for two different reference signals are shown in Fig. 18 and Fig. 19. As can be seen, the response does not improve over time, because the controller is not adaptive to system variations; on the other hand, its convergence is quite good from the beginning.
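The compensator (36) with the parameters above can be realized as two first-order difference equations, one for the integrator pole at *z* = 1 and one for the pole at *z* = *P*. The class name and the state-space realization are choices made here.

```python
class DiscretePID:
    """Second-order compensator G(z) = K + R1/(z - 1) + R2/(z - P),
    realized as u[k] = K*e[k] + x1[k] + x2[k] with
    x1[k+1] = x1[k] + R1*e[k]    (integrator, pole at z = 1)
    x2[k+1] = P*x2[k] + R2*e[k]  (pole at z = P)."""

    def __init__(self, K=16.5924, R1=0.0214, R2=-15.2527, P=0.0):
        self.K, self.R1, self.R2, self.P = K, R1, R2, P
        self.x1 = 0.0
        self.x2 = 0.0

    def update(self, e):
        u = self.K * e + self.x1 + self.x2
        self.x1 = self.x1 + self.R1 * e
        self.x2 = self.P * self.x2 + self.R2 * e
        return u
```

The impulse response of this realization is *K*, *R*<sub>1</sub> + *R*<sub>2</sub>, *R*<sub>1</sub> + *R*<sub>2</sub>*P*, ..., matching the partial-fraction terms of (36).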

Fig. 17. Bode plot of the PID controller.

Robust Adaptive Wavelet Neural Network Control of Buck Converters 137


## **8. Conclusion**

136 Recent Advances in Robust Control – Novel Approaches and Design Methods

Fig. 18. Output Voltage, Command (reference) Voltage of PID. (Axes: Vout, Vref (volt) vs. Time (sec); insets mark Overshoot, Settling Time, Rise Time, and Steady-state Error.)

Fig. 19. Output Voltage, Command (reference) Voltage of PID. (Axes: Vout, Vref (volt) vs. Time (sec).)

This study presented a new robust on-line training algorithm for AWNN via a case study of buck converters. A review of AWNN was given, and its advantages of simple design and fast convergence over conventional control techniques such as PID were described. Even though PID may lead to a better controller, finding the best parameters even for a known system is a long and complicated procedure, and in cases with little or no prior information it is practically impossible to create such a controller. Moreover, PID controllers are not robust when the system changes. AWNN can control a system without any prior information by learning it over time. For the case study of buck converters, the model and the consequent principal theorems were derived. Afterwards, the Lyapunov stability analysis of the controlled system was formulated so as to be robust against noise and system changes. Finally, numerical simulations under different operating conditions were carried out and the results presented. In comparison with prior controllers designed to stabilize the output voltage of buck converters (e.g. PID and NNPC), this method is easy to implement and cheap to build, while its convergence is very fast.
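The control idea summarized above can be sketched in a few lines: a wavelet network whose output weights are trained on-line from the tracking error, with no prior plant model. Everything below is illustrative rather than the chapter's actual algorithm — the Mexican-hat basis, the center/dilation values, the learning rate, and the first-order plant standing in for the buck converter are all assumptions:

```python
import numpy as np

def mexican_hat(x):
    """Mexican-hat mother wavelet psi(x) = (1 - x^2) * exp(-x^2 / 2)."""
    return (1.0 - x**2) * np.exp(-0.5 * x**2)

# Illustrative wavelet-network controller: u = sum_j w_j * psi((e - c_j) / d_j).
centers = np.linspace(-2.0, 2.0, 7)   # translation parameters c_j (assumed)
dilation = 1.0                        # common dilation parameter d_j (assumed)
w = np.zeros_like(centers)            # adjustable output weights, learned on-line

dt, lr = 0.01, 0.05                   # step size and learning rate (illustrative)
x, r = 0.0, 1.0                       # plant state and constant reference
e_hist = []

for _ in range(2000):
    e = r - x                         # tracking error drives both control and learning
    basis = mexican_hat((e - centers) / dilation)
    u = float(w @ basis)              # network control signal
    w += lr * e * basis               # crude gradient-style on-line weight update
    x += dt * (-x + u)                # toy first-order plant (stand-in for the converter)
    e_hist.append(e)

print(f"initial |e| = {abs(e_hist[0]):.3f}, final |e| = {abs(e_hist[-1]):.3f}")
```

With these (assumed) gains the weights accumulate until the network supplies the control effort the plant needs, so the tracking error decays without any prior plant model — the property the conclusion emphasizes.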

## **9. Acknowledgements**

The authors would like to thank the Austrian Academy of Sciences for the support of this study.

## **Quantitative Feedback Theory and Sliding Mode Control**

## Gemunu Happawana

*Department of Mechanical Engineering, California State University, Fresno, California USA* 

## **1. Introduction**


A robust control method that combines Sliding Mode Control (SMC) and Quantitative Feedback Theory (QFT) is introduced in this chapter. The utility of SMC schemes in robust tracking of nonlinear mechanical systems is established through a body of published results in the area of robotics; however, important issues related to implementation and chattering behavior remain unresolved. Implementation of QFT during the sliding phase of an SMC controller not only eliminates chatter but also achieves vibration isolation. In addition, QFT does not diminish the robustness characteristics of the SMC, because it is known to tolerate large parametric and phase information uncertainties. As an example, a driver's seat of a heavy truck will be used to show the basic theoretical approach to implementing the combined SMC and QFT controllers through modeling and numerical simulation. The SMC is used to track the trajectory of the desired motion of the driver's seat. When the system enters the sliding regime, chattering occurs due to switching delays as well as system vibrations. The chattering is eliminated with the introduction of QFT inside the boundary layer to ensure smooth tracking. Furthermore, this chapter will illustrate that using SMC alone requires higher actuator forces for tracking than using both control schemes together. It will also be illustrated that the presence of uncertainties and unmodeled high frequency dynamics can largely be ignored with the use of QFT.

## **2. Quantitative Feedback Theory Preliminaries**

QFT is different from other robust control methodologies, such as LQR/LTR, μ-synthesis, or $H_2$/$H_\infty$ control, in that large parametric uncertainty and phase uncertainty information is directly considered in the design process. This results in smaller bandwidths and a lower cost of feedback.

## **2.1 System design**

Engineering design theory claims that every engineering design process should satisfy the following conditions:

1. Maintenance of the independence of the design functional requirements.
2. Minimization of the design information content.



For control system design problems, Condition 1 translates into approximate decoupling in multivariable systems, while Condition 2 translates into minimization of the controller high frequency generalized gain-bandwidth product (Nwokah et al., 1997).

The information content of the design process is embedded in G, the forward loop controller to be designed, and often has to do with complexity, dimensionality, and cost. Using the system design approach, one can pose the following general design optimization problem. Let **G** be the set of all G for which a design problem has a solution. The optimization problem then is:

$$\underset{G \in \mathbf{G}}{\text{Minimize}}\ \{\text{Information content of } G\}$$

subject to:

i. satisfaction of the functional requirements,
ii. independence of the functional requirements,
iii. quality adequacy of the designed function.


In the context of single input, single output (SISO) linear control systems, the information content of G is given by:

$$I_c = \int_0^{\omega_G} \log \left| G(i\omega) \right| d\omega \,, \tag{1}$$

where $\omega_G$ is the gain crossover frequency or effective bandwidth. If **P** is a plant family given by

$$\mathbf{P} = P(\lambda, s) \left[ 1 + \Delta \right] \,, \quad \lambda \in \Lambda \,, \ \Delta \in H^{\infty} \,, \ \left| \Delta \right| < W_2(\omega) \,, \tag{2}$$

then the major functional requirement can be reduced to:

$$\eta\left(\omega, \lambda, G(i\omega)\right) = \left| W_1(\omega)\, S(\lambda, i\omega) \right| + \left| W_2(\omega)\, T(\lambda, i\omega) \right| \le 1$$

$\forall \omega \ge 0, \ \forall \lambda \in \Lambda$, where $W_1(\omega)$ and $W_2(\omega)$ are appropriate weighting functions, and S and T are respectively the sensitivity and complementary sensitivity functions. Write

$$\overline{\eta}\left(\omega, G(i\omega)\right) = \max_{\lambda \in \Lambda} \eta\left(\lambda, \omega, G(i\omega)\right) .$$

Then the system design approach applied to a SISO feedback problem reduces to the following problem:

$$I_c^* = \min_{G \in \mathbf{G}} \int_0^{\omega_G} \log \left| G(i\omega) \right| d\omega \,, \tag{3}$$

subject to:

i. $\overline{\eta}\left(\omega, G(i\omega)\right) \le 1, \ \forall \omega \ge 0$,
ii. quality adequacy of $T = \dfrac{PG}{1 + PG}$.
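The information content integral in (1) and (3) is straightforward to evaluate numerically once a controller is fixed. A minimal sketch, assuming an illustrative first-order controller $G(s) = 10/(s+1)$ and natural logarithms (the chapter does not fix the log base):

```python
import numpy as np

# Assumed illustrative controller G(s) = 10 / (s + 1).
def G_mag(w):
    return 10.0 / np.sqrt(1.0 + w**2)

# Gain crossover frequency (effective bandwidth): |G(i w)| = 1  =>  omega_G = sqrt(99).
omega_G = np.sqrt(99.0)

# I_c = integral_0^{omega_G} log|G(i w)| dw, evaluated by the trapezoid rule.
w = np.linspace(0.0, omega_G, 10001)
y = np.log(G_mag(w))
Ic = float(np.sum((y[1:] + y[:-1]) * 0.5 * np.diff(w)))
print(f"omega_G = {omega_G:.4f}, I_c = {Ic:.4f}")  # analytic value is about 8.479
```

The closed form here is $I_c = \omega_G \ln 10 - \tfrac{1}{2}\left[\omega_G \ln(1+\omega_G^2) - 2\omega_G + 2\arctan\omega_G\right] \approx 8.479$, which the quadrature reproduces.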

Theorem: Suppose $G^* \in \mathbf{G}$. Then:

$$I_c = \min_{G \in \mathbf{G}} \int_0^{\omega_G} \log |G| \, d\omega = \int_0^{\omega_G^*} \log \left| G^* \right| d\omega \ \text{ if and only if } \ \overline{\eta}\left(\omega, G^*(i\omega)\right) = 1 \,, \ \forall \omega \ge 0 \,.$$

The above theorem says that constraint satisfaction with equality is equivalent to optimality. Since the constraint must be satisfied with inequality $\forall \omega \ge 0$, it follows that a rational $G^*$ must have infinite order. Thus the optimal $G^*$ is unrealizable and, because of its order, would lead to spectral singularities for large parameter variations; hence it would be quality-inadequate.

Corollary: Every quality-adequate design is suboptimal.

Both $W_1, W_2$ satisfy the compatibility condition $\min\{W_1, W_2\} < 1 \,, \ \forall \omega \in [0, \infty]$. Now define

$$\overline{\eta}\left(\omega, G(i\omega)\right) = \max_{\lambda \in \Lambda} \eta\left(\omega, \lambda, G(i\omega)\right) \Leftrightarrow \overline{\eta}\left(\omega, G(i\omega)\right) \le 1 \,, \ \forall \omega \in \left[0, \infty\right]. \tag{4}$$
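Numerically, $\overline{\eta}$ in (4) is just a worst case over a gridded uncertainty set. A sketch with an assumed first-order plant family $P(\lambda, s) = k/(s+1)$, $k \in [1,3]$, a constant controller $G = 5$, and constant weights (all values illustrative, not from the chapter):

```python
import numpy as np

# Illustrative plant family P(lambda, s) = k / (s + 1), lambda = k in [1, 3],
# with an assumed fixed controller G(s) = 5 and constant weights W1, W2.
W1, W2, G = 0.5, 0.2, 5.0
ks = np.linspace(1.0, 3.0, 21)       # grid over the uncertainty set Lambda
omegas = np.logspace(-2, 2, 200)     # frequency grid

eta_bar = np.zeros_like(omegas)
for i, wfreq in enumerate(omegas):
    s = 1j * wfreq
    L = ks / (s + 1.0) * G           # loop transmissions for every k at once
    S = 1.0 / (1.0 + L)              # sensitivity function
    T = L / (1.0 + L)                # complementary sensitivity function
    eta = np.abs(W1 * S) + np.abs(W2 * T)
    eta_bar[i] = eta.max()           # eta-bar: worst case over Lambda, as in (4)

print(f"max over omega of eta_bar = {eta_bar.max():.3f}")
```

Here $\overline{\eta} \le 1$ at every gridded frequency, i.e. this (assumed) plant/controller pair satisfies the robust performance constraint.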

Here $W_1 \in L_1(\omega)$, $\omega \ge 0$, or in some cases can be unbounded as $\omega \to 0$, while $W_2 \in L_2(\omega)$ and satisfies the conditions:

$$\begin{array}{ll} \text{i.} & \displaystyle \lim_{\omega \to \infty} W_2(\omega) = \infty \,, \quad W_2 \ge 0 \\ \text{ii.} & \displaystyle \int_{-\infty}^{+\infty} \frac{\left| \log W_2(\omega) \right|}{1 + \omega^2} \, d\omega < \infty \end{array} \tag{5}$$

Our design problem now reduces to:

$$\min_{G \in \mathbf{G}} \int_0^{\omega_G} \log \left| G(i\omega) \right| d\omega \,,$$

subject to:


$$
\overline{\eta}\left(\omega, G(i\omega)\right) \le 1 \quad \forall \omega \in \left[0, \infty\right] .
$$

The above problem does not have an analytic solution. For a numerical solution we define the nominal loop transmission function

$$L_0(i\omega) = P_0\, G(i\omega) \,,$$

where $P_0 \in \mathbf{P}$ is a nominal plant. Consider the sub-level set $\Gamma : \mathbf{M} \to \mathbf{C}$ given by

$$
\Gamma\left(\omega, G(i\omega)\right) = \left\{ P_0 G \,:\, \overline{\eta}\left(\omega, G(i\omega)\right) \le 1 \right\} \subset \mathbf{C} \,, \tag{6}
$$

and the map

$$f\left(\omega, W_1, W_2, q, \phi, \eta\right) : \mathbf{M} \to \Gamma\left(\omega, G(i\omega)\right)$$

which carries **M** into $\Gamma\left(\omega, G(i\omega)\right)$.


Also consider the level curve $\partial\Gamma\left(\omega, G(i\omega)\right)$, where $\partial\Gamma : \mathbf{M} \to \mathbf{C} \backslash \{\infty\}$ is given by

$$\partial\Gamma\left(\omega, G(i\omega)\right) = \left\{ P_0 G \,:\, \overline{\eta}\left(\omega, G(i\omega)\right) = 1 \right\} \subset \mathbf{C} \backslash \{\infty\} \,.$$

The map

$$f : \mathbf{M} \to \partial\Gamma\left(\omega, G(i\omega)\right) \subset \mathbf{C} \,,$$

generates bounds on **C** for which f is satisfied. The function f is crucial for design purposes and will be defined shortly.

Write

$$P(\lambda, s) = P_m(\lambda, s)\, P_a(\lambda, s) \,,$$

where $P_m(\lambda, s)$ is minimum phase and $P_a(\lambda, s)$ is all-pass. Let $P_{m0}(s)$ be the minimum phase nominal plant model and $P_{a0}(s)$ be the all-pass nominal plant model. Let

$$P_0(s) = P_{m0}(s) \cdot P_{a0}(s) \,.$$
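The minimum-phase/all-pass split is easy to verify numerically: the RHP zero $1 - bs$ is reflected into $1 + bs$ for $P_m$, and the ratio of the two forms the all-pass $P_a$. A sketch using the non-minimum-phase plant form that appears in the design example later in this chapter (parameter values assumed):

```python
import numpy as np

b, d = 0.05, 0.3                    # sample parameters (as in the later design example)

def P(s):  return (1 - b * s) / (s * (1 + d * s))   # non-minimum-phase plant (RHP zero)
def Pm(s): return (1 + b * s) / (s * (1 + d * s))   # minimum-phase factor (zero reflected)
def Pa(s): return (1 - b * s) / (1 + b * s)         # all-pass factor

w = np.logspace(-2, 3, 500)
s = 1j * w
assert np.allclose(Pm(s) * Pa(s), P(s))             # P = Pm * Pa
assert np.allclose(np.abs(Pa(s)), 1.0)              # all-pass: |Pa(i w)| = 1
assert np.allclose(np.abs(Pm(s)), np.abs(P(s)))     # Pm carries all the magnitude
print("factorization checks passed")
```

The checks confirm that $P_a$ contributes only phase, which is exactly why the loop-shaping below is carried out on the minimum-phase part $L_{m0}$.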

Define:

$$L_0(s) = L_{m0}(s) \cdot P_{a0}(s) = P_{m0}(s) \cdot G(s) \cdot P_{a0}(s) \,.$$

$$\eta\left(\omega, \lambda, G(i\omega)\right) \le 1 \Leftrightarrow \left| \frac{P_0(i\omega)}{P(\lambda, i\omega)\, P_{a0}(i\omega)} + L_{m0}(i\omega) \right| - W_2(\omega) \left| L_{m0}(i\omega) \right| \ge W_1(\omega) \left| \frac{P_0(i\omega)}{P(\lambda, i\omega)} \right| \tag{7}$$

$\forall \lambda \in \Lambda \,, \ \forall \omega \in \left[0, \infty\right].$

By defining:

$$p(\lambda, \omega)\, e^{i\theta(\lambda, \omega)} = \frac{P_0(i\omega)}{P(\lambda, i\omega)\, P_{a0}(i\omega)} \,, \quad \text{and} \quad L_{m0}(i\omega) = q(\omega)\, e^{i\phi(\omega)} \,,$$

the above inequality (dropping the argument $\omega$) reduces to:

$$\begin{split} f(q, \phi, W_1, W_2, \eta) &= \left(1 - W_2^2\right) q^2 + 2 p(\lambda) \left\{ \cos(\theta(\lambda) - \phi) - W_1 W_2 \right\} q \\ &\quad + \left(1 - W_1^2\right) p^2(\lambda) \ge 0 \quad \forall \lambda \in \Lambda \,, \ \forall \phi \end{split} \tag{8}$$

At each $\omega$, one solves the above parabolic inequality as a quadratic equation for a grid of various $\lambda \in \Lambda$. By examining the solutions over $\phi \in \left[-2\pi, 0\right]$, one determines a boundary

$$\partial Cp(\omega, \phi) = \left\{ P_0 G \,:\, \overline{\eta}\left(\omega, G(i\omega)\right) = 1 \right\} \subset \mathbf{C} \,,$$

so that

$$\partial\Gamma\left(\omega, G(i\omega)\right) = \partial Cp(\omega, \phi) \,.$$
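Solving (8) as a quadratic in $q$ at one frequency gives the boundary directly: for each $\phi$, the admissible $q$ must exceed the positive root, and the worst case over the plant template is the bound. The weights, template magnitudes $p(\lambda)$, and phases $\theta(\lambda)$ below are all assumed for illustration; with $W_1 > 1$ the constant term of the quadratic is negative, so exactly one positive root exists:

```python
import numpy as np

# One frequency point of the bound computation from inequality (8); assumed
# weights W1 = 2 (performance), W2 = 0.5 (uncertainty) at this fixed omega.
W1, W2 = 2.0, 0.5
A = 1.0 - W2**2                            # coefficient of q^2 (positive since W2 < 1)

# Illustrative plant template: one (p, theta) pair per gridded lambda.
ps     = np.linspace(0.8, 1.2, 9)
thetas = np.radians(np.linspace(-100.0, -80.0, 9))

phis = np.linspace(-2.0 * np.pi, 0.0, 181)  # controller phase grid, phi in [-2*pi, 0]
bound = np.zeros_like(phis)
for i, phi in enumerate(phis):
    q_req = 0.0
    for p, th in zip(ps, thetas):           # worst case over the template (lambda grid)
        B = 2.0 * p * (np.cos(th - phi) - W1 * W2)
        C = (1.0 - W1**2) * p**2            # C < 0 because W1 > 1: one positive root
        q_pos = (-B + np.sqrt(B**2 - 4.0 * A * C)) / (2.0 * A)
        q_req = max(q_req, q_pos)           # (8) must hold for every lambda
    bound[i] = 20.0 * np.log10(q_req)       # design bound in dB, Nichols-plane style

print(f"bound range: {bound.min():.1f} dB to {bound.max():.1f} dB")
```

Repeating this at every frequency point produces the family of design bounds that the loop transmission must stay above during loop-shaping.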

Let the interior of this boundary be $Cp^{o}(\omega, \phi) \subset \mathbf{C}$. Then for $W_2 \le 1$, it can be shown that (Bondarev et al., 1985; Tabarrok & Tong, 1993; Esmailzadeh et al., 1990):

$$\Gamma\left(\omega, G(i\omega)\right) = \mathbf{C} \backslash Cp^{o}\left(\omega, \phi\right) = \left\{ P_0 G \,:\, \overline{\eta}\left(\omega, G(i\omega)\right) \le 1 \right\} , \tag{9}$$

while for $W_2 > 1$,


$$
\Gamma\left(\omega, G(i\omega)\right) = \partial Cp\left(\omega, \phi\right) \cup Cp^{o}\left(\omega, \phi\right) = Cp\left(\omega, \phi\right) .
$$

In this way both the level curves $\partial\Gamma\left(\omega, G(i\omega)\right)$ as well as the sub-level sets $\Gamma\left(\omega, G(i\omega)\right)$ can be computed $\forall \omega \in [0, \infty]$. Let **N** represent the Nichols' plane:

$$\mathbf{N} = \left\{ (\phi, r) \,:\, -2\pi \le \phi \le 0 \,, \ -\infty < r < \infty \right\} .$$

If $s = q e^{i\phi}$, then the map $L_m : s \to \mathbf{N}$ sends s to **N** by the formula:

$$L_m s = r + i\phi = 20 \log(q e^{i\phi}) = 20 \log q + i\phi \,. \tag{10}$$

Consequently, $L_m : \partial\Gamma\left(\omega, G(i\omega)\right) \to \partial Bp\left(\omega, \phi, 20 \log q\right)$

converts the level curves to boundaries on the Nichols' plane called design bounds. These design bounds are identical to the traditional QFT design bounds except that, unlike the QFT bounds, $\partial\Gamma\left(\omega, G(i\omega)\right)$ can be used to generate $\partial Bp$ $\forall \omega \in [0, \infty]$, whereas in traditional QFT this is possible only up to a certain $\omega = \omega_h < \infty$. This clearly shows that every admissible finite order rational approximation is necessarily sub-optimal. This is the essence of all QFT based design methods.

According to the optimization theorem, if a solution to the problem exists, then there is an optimal minimum phase loop transmission function $L_{m0}^*(i\omega) = P_{m0}(i\omega) \cdot G^*(i\omega)$ which satisfies

$$\overline{\eta}\left(\omega, G^*(i\omega)\right) = 1 \quad \forall \omega \in \left[0, \infty\right] \tag{11}$$

such that $\left| L_{m0}^* \right| = q^*(\omega)$, which gives $20 \log q^*(\omega)$ lying on $\partial Bp$, $\forall \omega \in [0, \infty]$. If $q^*(\omega)$ is found, then (Robinson, 1962), provided $W_1 \in L_1(\omega)$ and $W_2^{-1} \in L_2(\omega)$, it follows that

$$L_{m0}^*(s) = \exp\left[ \frac{1}{\pi} \int_{-\infty}^{+\infty} \frac{1 - i\alpha s}{s - i\alpha}\, \frac{\log q^*(\alpha)}{1 + \alpha^2} \, d\alpha \right] \in H_2 \,. \tag{12}$$

Clearly $L_{m0}^*(s)$ is non-rational, and every admissible finite order rational approximation of it is necessarily sub-optimal; this, again, is the essence of all QFT based design methods.
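Formula (12) says that magnitude data alone determines the minimum-phase transfer function. A discrete-time analogue of this reconstruction, using the standard real-cepstrum construction rather than the chapter's continuous-time integral (an assumption made purely for a compact, checkable sketch):

```python
import numpy as np

# Discrete-time analogue of (12): recover a minimum-phase frequency response
# from magnitude samples alone via the real-cepstrum (Hilbert-transform) trick.
N = 1024
w = 2.0 * np.pi * np.arange(N) / N
H_true = 1.0 - 0.5 * np.exp(-1j * w)     # known minimum-phase system H(z) = 1 - 0.5 z^-1

log_mag = np.log(np.abs(H_true))         # only the magnitude is used from here on
c = np.fft.ifft(log_mag).real            # real cepstrum of log|H|
fold = np.zeros(N)
fold[0], fold[N // 2] = c[0], c[N // 2]
fold[1:N // 2] = 2.0 * c[1:N // 2]       # fold the even cepstrum onto causal indices
H_rec = np.exp(np.fft.fft(fold))         # exp of the analytic part: phase recovered

err = float(np.max(np.abs(H_rec - H_true)))
print(f"max reconstruction error = {err:.2e}")
```

The phase comes back essentially exactly, mirroring the claim behind (12): for a minimum-phase system the phase is completely determined by the gain.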

However, this sub-optimality enables the designer to address structural stability issues by proper choice of the poles and zeros of any admissible approximation G(s). Without control of the locations of the poles and zeros of G(s), singularities could result in the closed loop

Quantitative Feedback Theory and Sliding Mode Control 145

83.94 ( 0.66) ( 1.74) ( 4.20) ( ) ( 0.79) ( 2.3) ( 8.57) ( 40)

stable, *Cr* is still large and could generate large spectral sensitivity due to its large modal

Because reduction of the information content improves quality adequacy, Thompson (Thompson, 1998) employed the nonlinear programming optimization routine to locally optimize the parameters of G(s) so as to further reduce its information content, and obtained

34.31 ( 0.5764) ( 2.088) ( 5.04) ( ) . ( 0.632) ( 1.84) ( 6.856) ( 40)

*s ss s* + ++ <sup>=</sup> + ++ +

Note that the change in pole locations in both cases is highly insignificant. However, because of the large coefficients associated with the un-optimized polynomial it is not yet quality-adequate, and has 39.8. *Cr* = The optimized polynomial on the other hand has the pleasantly small 0.925, *Cr* = thus resulting in a quality adequate design. For solving the

other spectral sensitivity problems, 1 *Cr* ≤ is required. We have so far failed to obtain a

Quality adequacy is demanded of most engineering designs. For linear control system designs, this translates to quality- adequate closed loop characteristic polynomials under small plant and/or controller perturbations (both parametric and non parametric). Under these conditions, all optimization based designs produce quality inadequate closed loop polynomials. By backing off from these unique non-generic optimal solutions, one can produce a family of quality-adequate solutions, which are in tune with modern engineering design methodologies. These are the solutions which practical engineers desire and can confidently implement. The major attraction of the optimization-based design methods is that they are both mathematically elegant and tractable, but no engineering designer ever claims that real world design problems are mathematically beautiful. We suggest that, like in all other design areas, quality adequacy should be added as an extra condition on all feedback design problems. Note that if we follow axiomatic design theory, every MIMO problem should be broken up into a series of SISO sub-problems. This is why we have not

In sliding mode control, a time varying surface of *S*(t) is defined with the use of a desired vector, *Xd*, and the name is given as the sliding surface. If the state vector *X* can remain on the surface *S*(t) for all time, t>0, tracking can be achieved. In other words, problem of tracking the state vector, *X* ≡ *Xd* (n- dimensional desired vector) is solved. Scalar quantity, *s*,

quality-adequate design from any of the modern optimal methods 1 2 ( , , , ). *H H*

λ

*s ss G s*

This optimized controller now produced: 0, *<sup>c</sup> I* = and 0.925. *Cr* =

singularity problem, structural stability of 0 *X s* ( ,)

considered the MIMO problem herein.

**3. Sliding mode control preliminaries** 

*s sss* +++ <sup>=</sup> + ++ + .

λ

is now structurally

is enough. However, to solve the

<sup>∞</sup> A

μ

Using the scheme just described, the first feasible controller G(s) was found as:

This controller produced: 206, *<sup>c</sup> I* = and 39.8. *Cr* = Although 0 *X s* ( ,)

κ( ). *V*

matrix condition number

the optimized controller:

α( ) λ

*sss G s*

characteristic polynomial. Sub-optimality also enables us to back off from the non-realizable unique optimal solution to a class of admissible solutions which because of the compactness and connectedness of Λ (which is a differentiable manifold), induce genericity of the resultant solutions. After this, one usually optimizes the resulting controller so as to obtain quality adequacy (Thompson, 1998).

### **2.2 Design algorithm: Systematic loop-shaping**

The design theory developed in section 2.1, now leads directly to the following systematic design algorithm:

1. Choose a sufficient number of discrete frequency points:

$$a\_1, a\_2, \dots, a\_N < \infty.$$


At the end of the algorithm, we obtain a feasible minimal order, minimal information content, and quality-adequate controller.

### **Design Example**

1

*s*

Consider:

$$P(\lambda, s) \begin{bmatrix} 1 + \Lambda \end{bmatrix} = \frac{k(1 - bs)}{s(1 + ds)} \begin{pmatrix} 1 + \Lambda \end{pmatrix} \quad , \quad \lambda = \begin{bmatrix} k, b, d \end{bmatrix}^T \in \Lambda \text{ .}$$

$$\begin{array}{rcl} \mathbf{k} \in \begin{bmatrix} 1, 3 \end{bmatrix} \text{ .} \ b \in [0.05, 0.1] \text{ .} \ d \in [0.3, 1] \end{array}$$

$$P_0(s) = \frac{3(1 - 0.05s)}{s(1 + 0.3s)}, \qquad \left|\Delta\right| < \left|W_2\right|.$$

$$W_1(s) = \frac{s + 1.8}{2.80s} \quad \text{and} \quad W_2(s) = \frac{2(0.0074s^3 + 0.333s^2 + 1.551s + 1)(0.00001s + 1)}{3(0.0049s^3 + 0.246s^2 + 1.157s + 1)}.$$

$W_1(s) \notin RH_\infty$ but $W_2(s) \in RH_\infty$. Since we are dealing with loop-shaping, the fact that $W_1 \notin RH_\infty$ does not matter (Nordgren et al., 1995).
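As a rough numerical sanity check on this example (not part of the original text), one can sample the uncertain plant at the corners of Λ and compare the multiplicative deviation from the nominal $P_0$ against $|W_2|$; the test frequency ω = 1 rad/s and the corner-only sampling are assumptions made for illustration.

```python
from itertools import product

def P(s, k, b, d):
    # Uncertain plant P(lambda, s) = k(1 - b s) / (s(1 + d s))
    return k * (1 - b * s) / (s * (1 + d * s))

def W2(s):
    # Weight W2 bounding the multiplicative uncertainty |Delta|
    num = 2 * (0.0074 * s**3 + 0.333 * s**2 + 1.551 * s + 1) * (0.00001 * s + 1)
    den = 3 * (0.0049 * s**3 + 0.246 * s**2 + 1.157 * s + 1)
    return num / den

s = 1j                      # test frequency: omega = 1 rad/s
P0 = P(s, 3, 0.05, 0.3)     # nominal plant
corners = product([1, 3], [0.05, 0.1], [0.3, 1])
max_delta = max(abs(P(s, k, b, d) / P0 - 1) for k, b, d in corners)
print(max_delta < abs(W2(s)))  # weight covers the template corners here
```

A real verification would sweep ω over the whole design band rather than spot-checking a single frequency; at ω = 1 the margin is in fact quite tight.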


144 Recent Advances in Robust Control – Novel Approaches and Design Methods


Using the scheme just described, the first feasible controller G(s) was found as:

$$G(s) = \frac{83.94\,(s + 0.66)(s + 1.74)(s + 4.20)}{(s + 0.79)(s + 2.3)(s + 8.57)(s + 40)}.$$

This controller produced $I_c = 206$ and $C_r = 39.8$. Although $X_0(s, \lambda)$ is now structurally stable, $C_r$ is still large and could generate large spectral sensitivity due to its large modal matrix condition number $\kappa(V)$.

Because reduction of the information content improves quality adequacy, Thompson (Thompson, 1998) employed the nonlinear programming optimization routine to locally optimize the parameters of G(s) so as to further reduce its information content, and obtained the optimized controller:

$$\overline{G}(s) = \frac{34.31\,(s + 0.5764)(s + 2.088)(s + 5.04)}{(s + 0.632)(s + 1.84)(s + 6.856)(s + 40)}.$$

This optimized controller now produced $I_c = 0$ and $C_r = 0.925$.

Note that the change in pole locations in both cases is highly insignificant. However, because of the large coefficients associated with the un-optimized polynomial, it is not yet quality-adequate, and has $C_r = 39.8$. The optimized polynomial, on the other hand, has the pleasantly small $C_r = 0.925$, thus resulting in a quality-adequate design. For solving the $\alpha(\lambda)$ singularity problem, structural stability of $X_0(s, \lambda)$ is enough. However, to solve the other spectral sensitivity problems, $C_r \le 1$ is required. We have so far failed to obtain a quality-adequate design from any of the modern optimal methods ($H_2$, $H_\infty$, $\ell_1$, $\mu$).

Quality adequacy is demanded of most engineering designs. For linear control system designs, this translates to quality-adequate closed-loop characteristic polynomials under small plant and/or controller perturbations (both parametric and non-parametric). Under these conditions, all optimization-based designs produce quality-inadequate closed-loop polynomials. By backing off from these unique non-generic optimal solutions, one can produce a family of quality-adequate solutions, which are in tune with modern engineering design methodologies. These are the solutions which practical engineers desire and can confidently implement. The major attraction of the optimization-based design methods is that they are both mathematically elegant and tractable, but no engineering designer ever claims that real-world design problems are mathematically beautiful. We suggest that, as in all other design areas, quality adequacy should be added as an extra condition on all feedback design problems. Note that if we follow axiomatic design theory, every MIMO problem should be broken up into a series of SISO sub-problems. This is why we have not considered the MIMO problem herein.

### **3. Sliding mode control preliminaries**

In sliding mode control, a time-varying surface *S*(t), called the sliding surface, is defined with the use of a desired vector, *Xd*. If the state vector *X* can remain on the surface *S*(t) for all time t > 0, tracking is achieved; in other words, the problem of tracking the state vector, *X* ≡ *Xd* (the n-dimensional desired vector), is solved. The scalar quantity, *s*,

Quantitative Feedback Theory and Sliding Mode Control 147


is the distance to the sliding surface, and it becomes zero once tracking is achieved. This effectively replaces the vector tracking problem in *Xd* by a first-order stabilization problem in *s*. The scalar *s* represents a realistic measure of tracking performance, since bounds on *s* translate directly into bounds on the tracking error vector. In designing the controller, a feedback control law *U* can be chosen appropriately to satisfy the sliding conditions. The control law can be made discontinuous across the sliding surface to account for the presence of modeling imprecision and of disturbances. The discontinuous control law *U* is then smoothed using QFT to achieve an optimal trade-off between control bandwidth and tracking precision.
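The link between bounds on *s* and bounds on the tracking error can be illustrated with a small Euler-integration sketch (the numbers λ and Φ are illustrative, not from the chapter): the surface definition makes the error the output of a first-order filter driven by *s*, so $|s| \le \Phi$ yields a steady error of at most $\Phi/\lambda$.

```python
# Euler sketch: the error x_bar obeys d(x_bar)/dt = -lam*x_bar + s
# (from s = x_bar_dot + lam*x_bar), so |s| <= PHI pins |x_bar| at
# PHI/lam in steady state. All values here are illustrative.
lam, PHI, dt = 2.0, 0.1, 1e-3
x_bar = 0.0
for _ in range(20000):               # 20 s: far past the 1/lam time constant
    s = PHI                          # worst case: s held at its bound
    x_bar += dt * (-lam * x_bar + s)
print(abs(x_bar - PHI / lam) < 1e-3)  # settles at PHI/lam = 0.05
```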

Consider the second order single-input dynamic system (Jean-Jacques & Weiping, 1991)

$$
\ddot{x} = f(X) + b(X)U \tag{13}
$$

where

*X –* State vector, $[x \ \ \dot{x}]^T$

*x –* Output of interest

*f –* Nonlinear time-varying or state-dependent function

*U –* Control input torque

*b –* Control gain

The control gain, *b*, can be time-varying or state-dependent but is not completely known. In other words, it is sufficient to know the bounding values of *b*,

$$0 < b\_{\min} \le b \le b\_{\max} \,\, . \tag{14}$$

The estimated value of the control gain, *b*es, can be found as (Jean-Jacques & Weiping, 1991)

$$b_{es} = \left(b_{\min} b_{\max}\right)^{1/2}$$

Bounds of the gain *b* can be written in the form:

$$
\beta^{-1} \le \frac{b_{es}}{b} \le \beta \tag{15}
$$

Where

$$
\beta = \left[\frac{b\_{\text{max}}}{b\_{\text{min}}}\right]^{1/2}
$$
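A minimal numeric sketch of these gain bounds, assuming illustrative values $b_{\min} = 1$ and $b_{\max} = 4$: the geometric-mean estimate and the margin β then satisfy eq. (15) for every admissible *b*.

```python
# Geometric-mean gain estimate and gain-margin bound (eq. 15):
# b_es = sqrt(b_min*b_max), beta = sqrt(b_max/b_min), and then
# 1/beta <= b_es/b <= beta for every admissible b.
from math import sqrt

b_min, b_max = 1.0, 4.0          # illustrative bounds on the control gain
b_es = sqrt(b_min * b_max)       # estimated gain, here 2.0
beta = sqrt(b_max / b_min)       # gain margin, here 2.0

ok = all(1 / beta <= b_es / b <= beta
         for b in [b_min, 1.7, 2.9, b_max])  # sample admissible gains
print(b_es, beta, ok)  # -> 2.0 2.0 True
```

Equality is attained exactly at the interval endpoints, which is why β is the tightest such margin.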

The nonlinear function *f* can be estimated (*f*es) and the estimation error on *f* is to be bounded by some function of the original states of *f.* 

$$\left| f\_{\rm es} - f \right| \le F \tag{16}$$

In order to have the system track a desired trajectory *x(t)* ≡ *xd(t)*, a time-varying surface *S*(t) in the state-space $R^2$ is defined by the scalar equation *s*(x; t) = *s* = 0 as

$$s = \left(\frac{d}{dt} + \lambda\right)\overline{x} = \dot{\overline{x}} + \lambda\overline{x} \tag{17}$$

where $\overline{X} = X - X_d = \left[\overline{x} \ \ \dot{\overline{x}}\right]^T$ and λ is a positive constant (the first-order filter bandwidth). When the state vector reaches the sliding surface, *S*(t), the distance to the sliding surface, *s*, becomes zero. This represents the dynamics while in sliding mode, such that

$$
\dot{s} = 0\tag{18}
$$

When Eq. (18) is satisfied, the equivalent control input, *U*es, can be obtained as follows:

$$b \to b_{es}, \quad U \to U_{es}, \quad f \to f_{es}$$

This leads to


$$
U_{es} = -f_{es} + \ddot{x}_d - \lambda\dot{\overline{x}} \tag{19}
$$

and *U* is given by

$$
U = \left(\frac{1}{b_{es}}\right)\left(U_{es} - k(x)\,\mathrm{sgn}(s)\right)
$$

where

*k(x)* is the control discontinuity.

The control discontinuity, *k(x)* is needed to satisfy sliding conditions with the introduction of an estimated equivalent control. However, this control discontinuity is highly dependent on the parametric uncertainty of the system. In order to satisfy sliding conditions and for the system trajectories to remain on the sliding surface, the following must be satisfied:

$$\frac{1}{2}\frac{d}{dt}s^2 = s\dot{s} \le -\eta|s|\tag{20}$$

where *η* is a strictly positive constant.

The control discontinuity can be found from the above inequality:

$$s\left[\left(f - bb_{es}^{-1}f_{es}\right) + \left(1 - bb_{es}^{-1}\right)\left(-\ddot{x}_d + \lambda\dot{\overline{x}}\right) - bb_{es}^{-1}k(x)\,\mathrm{sgn}(s)\right] \le -\eta\left|s\right|$$

$$s\left[\left(f - bb_{es}^{-1}f_{es}\right) + \left(1 - bb_{es}^{-1}\right)\left(-\ddot{x}_d + \lambda\dot{\overline{x}}\right)\right] + \eta\left|s\right| \le bb_{es}^{-1}k(x)\left|s\right|$$

$$k(x) \ge \frac{s}{\left|s\right|}\left[b_{es}b^{-1}f - f_{es} + \left(b_{es}b^{-1} - 1\right)\left(-\ddot{x}_d + \lambda\dot{\overline{x}}\right)\right] + b_{es}b^{-1}\eta$$

For the best tracking performance, *k(x)* must satisfy the inequality

$$k(x) \ge \left|b_{es}b^{-1}f - f_{es} + \left(b_{es}b^{-1} - 1\right)\left(-\ddot{x}_d + \lambda\dot{\overline{x}}\right)\right| + b_{es}b^{-1}\eta$$


As seen from the above inequality, the value for *k(x)* can be simplified further by rearranging *f* as below:

$$f = f\_{es} + (f - f\_{es}) \text{ and } \quad |f\_{es} - f| \le F$$

$$k(x) \ge \left|b_{es}b^{-1}(f - f_{es}) + \left(b_{es}b^{-1} - 1\right)\left(f_{es} - \ddot{x}_d + \lambda\dot{\overline{x}}\right)\right| + b_{es}b^{-1}\eta$$

$$k(x) \ge \left|b_{es}b^{-1}(f - f_{es})\right| + \left|b_{es}b^{-1} - 1\right|\left|f_{es} - \ddot{x}_d + \lambda\dot{\overline{x}}\right| + b_{es}b^{-1}\eta$$

$$k(x) \ge \beta(F + \eta) + (\beta - 1)\left|f_{es} - \ddot{x}_d + \lambda\dot{\overline{x}}\right|$$

$$k(x) \ge \beta(F + \eta) + (\beta - 1)\left|U_{es}\right| \tag{21}$$

By choosing *k*(*x*) to be large enough, sliding conditions can be guaranteed. This control discontinuity across the surface *s = 0* increases with the increase in uncertainty of the system parameters. It is important to mention that the functions for *fes* and *F* may be thought of as any measured variables external to the system and they may depend explicitly on time.
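The gain bound (21) can be spot-checked numerically. The model below is illustrative only, not the chapter's system: $f_{es} = -x$, a true $f$ within $F$ of the estimate, $x_d \equiv 0$, and gain bounds as in the earlier sketch; the script verifies the sliding condition $s\dot{s} \le -\eta|s|$ at a few states.

```python
# Check s*s_dot <= -eta*|s| with k(x) chosen by eq. (21).
# Illustrative model: f_es = -x, true f = -x + 0.3 (F = 0.5), x_d = 0.
from math import sqrt

b_min, b_max, b_true = 1.0, 4.0, 1.5
b_es, beta = sqrt(b_min * b_max), sqrt(b_max / b_min)
lam, eta, F = 2.0, 0.1, 0.5

def check(x, xdot):
    s = xdot + lam * x                    # sliding variable (x_d = 0)
    u_es = x - lam * xdot                 # U_es = -f_es + x_ddot_d - lam*x_bar_dot
    k = beta * (F + eta) + (beta - 1) * abs(u_es)   # eq. (21)
    sgn = 1.0 if s > 0 else -1.0
    u = (u_es - k * sgn) / b_es           # switching control law
    f = -x + 0.3                          # true dynamics, |f - f_es| <= F
    s_dot = f + b_true * u + lam * xdot   # s_dot = x_ddot + lam*x_dot
    return s * s_dot <= -eta * abs(s) + 1e-12

print(all(check(x, xd) for x, xd in [(0.5, -0.2), (-1.0, 0.3), (0.2, 1.0)]))
```

Because β covers the whole gain interval, the same check passes for any admissible `b_true`, not just the value sampled here.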

### **3.1 Rearrangement of the sliding surface**

The sliding condition $\dot{s} = 0$ does not necessarily provide smooth tracking performance across the sliding surface. In order to guarantee smooth tracking performance and to design an improved controller in spite of the control discontinuity, the sliding condition can be redefined as $\dot{s} = -\alpha s$ (Taha et al., 2003), so that the tracking $x \to x_d$ achieves exponential convergence. Here the parameter α is a positive constant, whose value is determined by considering the tracking smoothness of the unstable system. This condition modifies *Ues* as follows:

$$U_{es} = -f_{es} + \ddot{x}_d - \lambda\dot{\overline{x}} - \alpha s$$

and *k(x)* must satisfy the condition

$$k(x) \ge \left|b_{es}b^{-1}f - f_{es} + \left(b_{es}b^{-1} - 1\right)\left(-\ddot{x}_d + \lambda\dot{\overline{x}}\right)\right| + b_{es}b^{-1}\eta - \alpha\left|s\right|.$$

Further *k(x)* can be simplified as

$$k(x) \ge \beta(F + \eta) + (\beta - 1)\left|U_{es}\right| + (\beta - 2)\,\alpha\left|s\right| \tag{22}$$

Even though the tracking condition is improved, chattering of the system on the sliding surface remains as an inherent problem in SMC. This can be removed by using QFT to follow.
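Likewise, the simplified gain (22) can be checked against the direct requirement above at a sample operating point, sweeping the unknown gain *b* over its admissible range (all numbers are illustrative, as in the earlier sketches):

```python
# eq-22 gain vs the direct sliding requirement, sampled over admissible b.
from math import sqrt

b_min, b_max = 1.0, 4.0
b_es, beta = sqrt(b_min * b_max), sqrt(b_max / b_min)
lam, eta, alpha, F = 2.0, 0.1, 0.5, 0.5

# Illustrative operating point with x_d = 0, f_es = -x, |f - f_es| <= F:
x, xdot, f_err = 0.4, -0.3, 0.25
s = xdot + lam * x                       # sliding variable
f_es, f = -x, -x + f_err
u_es = -f_es - lam * xdot - alpha * s    # modified U_es (x_d terms vanish)

k22 = beta * (F + eta) + (beta - 1) * abs(u_es) + (beta - 2) * alpha * abs(s)

def k_required(b):
    # direct requirement for a particular admissible gain b
    r = b_es / b
    return abs(r * f - f_es + (r - 1) * (lam * xdot)) + r * eta - alpha * abs(s)

print(all(k22 >= k_required(b) for b in [1.0, 1.6, 2.5, 4.0]))
```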

### **3.2 QFT controller design**

In the previous sections of sliding mode preliminaries, designed control laws, which satisfy sliding conditions, lead to perfect tracking even with some model uncertainties. However,


after reaching the boundary layer, chattering of the controller is observed because of the discontinuity across the sliding surface. In practice, this can greatly complicate the design of the controller hardware and degrade the desired performance because of the time lag of the hardware. Chattering also excites undesirable high-frequency dynamics of the system. By using a QFT controller, the switching control laws can be modified to eliminate chattering in the system, since the QFT controller works as a robust low-pass filter. In QFT, attraction by the boundary layer can be maintained for all *t* > 0 by varying the boundary layer thickness, φ, as follows:

$$|s| \ge \phi \to \frac{1}{2}\frac{d}{dt}s^2 \le (\dot{\phi} - \eta)|s|\tag{23}$$

It is evident from Eq. (23) that the boundary layer attraction condition is guaranteed more strongly in the case of boundary layer contraction ($\dot{\phi} < 0$) than for boundary layer expansion ($\dot{\phi} > 0$) (Jean-Jacques, 1991). Equation (23) can be used to modify the control discontinuity gain, *k(x)*, to smooth the performance by using $\overline{k}(x)\,\mathrm{sat}(s/\phi)$ instead of $k(x)\,\mathrm{sgn}(s)$. The relationship between $k(x)$ and $\overline{k}(x)$ for the boundary layer attraction condition can be presented for both cases as follows:

$$
\dot{\phi} > 0 \to \overline{k}(x) = k(x) - \dot{\phi}/\beta^2 \tag{24}
$$

$$
\dot{\phi} < 0 \to \overline{k}(x) = k(x) - \dot{\phi}\,\beta^2 \tag{25}
$$
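Before continuing the derivation, the effect of replacing $\mathrm{sgn}(s)$ by $\mathrm{sat}(s/\phi)$ can be seen in a toy Euler simulation (illustrative numbers; this is not the chapter's system): the switching law makes *s* reverse sign on every step near the surface, while the saturated law settles smoothly inside the layer.

```python
# Chattering demo: switching -k*sgn(s) vs smoothed -k*sat(s/phi).
def sat(z):
    # saturation: linear inside the boundary layer, clipped outside
    return max(-1.0, min(1.0, z))

def run(use_sat, s0=0.2034, k=1.0, phi=0.05, dt=0.01, steps=1000):
    s, flips = s0, 0
    for _ in range(steps):
        drive = sat(s / phi) if use_sat else (1.0 if s > 0 else -1.0)
        s_new = s - dt * k * drive
        if s_new * s < 0:          # sign reversal = one "chatter" event
            flips += 1
        s = s_new
    return s, flips

s_sgn, flips_sgn = run(use_sat=False)
s_sat, flips_sat = run(use_sat=True)
print(flips_sgn > 100, flips_sat == 0, abs(s_sat) < 0.05)
```

With the sign law, *s* ping-pongs across zero at the discretization rate; with saturation, *s* enters the layer and decays exponentially without a single reversal.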

Then the control law, *U*, and *s* become

$$U = \left(\frac{1}{b_{es}}\right)\left(U_{es} - \overline{k}(x)\,\mathrm{sat}(s/\phi)\right)$$
$$\dot{s} = -bb_{es}^{-1}\left(\overline{k}(x)\,\mathrm{sat}(s/\phi) + \alpha s\right) + \Delta g(x, x_d)$$
where $\Delta g(x, x_d) = \left(f - bb_{es}^{-1}f_{es}\right) + \left(1 - bb_{es}^{-1}\right)\left(-\ddot{x}_d + \lambda\dot{\overline{x}}\right)$.

Since $\overline{k}(x)$ and Δ*g* are continuous in *x*, the system trajectories inside the boundary layer can be expressed in terms of the variable *s* and the desired trajectory *xd* by the following relation. Inside the boundary layer, i.e.,

$$\left|\mathbf{s}\right| \le \phi \to \text{sat}(\mathbf{s} \;/\; \phi) = \mathbf{s} \;/\; \phi \quad \text{and } \mathbf{x} \to \mathbf{x}\_d\;.$$

Hence

$$
\dot{s} = -\beta_d^2\left(\overline{k}(x_d)(s/\phi) + \alpha s\right) + \Delta g(x_d). \tag{26}
$$

$$
\text{where} \quad \beta_d = \left[\frac{b_{es}(x_d)_{\max}}{b_{es}(x_d)_{\min}}\right]^{1/2}.
$$

The dynamics inside the boundary layer can be written by combining Eq. (24) and Eq. (25) as follows:

$$
\dot{\phi} > 0 \to \overline{k}(x_d) = k(x_d) - \dot{\phi}/\beta_d^2 \tag{27}
$$

Over the past decades, the application of sliding mode control has been focused in many disciplines such as underwater vehicles, automotive applications and robot manipulators (Taha et al., 2003; Roberge, 1960; Dorf, 1967; Ogata, 1970; Higdon, 1963; Truxal, 1965; Lundberg, 2003; Phillips, 1994; Siebert, 1986). The combination of sliding controllers with state observers was also developed and discussed for both the linear and nonlinear cases (Hedrick & Gopalswamy, 1989; Bondarev et al., 1985). Nonlinear systems are difficult to model as linear systems since there are certain parametric uncertainties and modeling inaccuracies that can eventually resonate the system (Jean-Jacques, 1991). Sliding mode control can be used for nonlinear stabilization problems in designing controllers, and can provide high-performance systems that are robust to parameter uncertainties.

Relationship between driver fatigue and seat vibration has been discussed in many publications based on anecdotal evidence (Wilson & Horner, 1979; Randall, 1992). It is widely believed and proved in field tests that lower vertical acceleration levels will increase the comfort level of the driver (U. & R. Landstorm, 1985; Altunel, 1996; Altunel & deHoop, 1998). Heavy vehicle truck drivers, who usually experience vibration levels around 3 Hz while driving, may undergo fatigue and drowsiness (Mabbott et al., 2001). Fatigue and drowsiness while driving may result in loss of concentration leading to road accidents. Human body metabolism and chemistry can be affected by intermittent and random vibration exposure, resulting in fatigue (Kamenskii, 2001). Typically, vibration exposure levels of heavy vehicle drivers are in the range 0.4 m/s² - 2.0 m/s², with a mean value of 0.7 m/s² in the vertical axis (U. & R. Landstorm, 1985; Altunel, 1996; Altunel & deHoop, 1998; Mabbott et al., 2001).

A suspension system determines the ride comfort of the vehicle, and therefore its characteristics must be properly evaluated to design a proper driver seat under various operating conditions. It also improves vehicle control, safety and stability without changing the ride quality, road holding, load carrying, and passenger comfort, while providing directional control during handling maneuvers. A properly designed driver seat can reduce driver fatigue, while maintaining the same vibration levels, against different external disturbances to provide improved performance in riding.

$$
\dot{\phi} < 0 \to \overline{k}(x_d) = k(x_d) - \dot{\phi}\,\beta_d^2 \tag{28}
$$

By taking the Laplace transform of Eq. (26), it can be shown that the variable *s* is given by the output of a first-order filter whose dynamics depend entirely on the desired state *xd* (Fig. 1).

Fig. 1. Structure of closed-loop error dynamics

where P is the Laplace variable. $\Delta g(x_d)$ is the input to the first-order filter, but it is highly uncertain.

This shows that chattering in the boundary layer due to perturbations or uncertainty of ( ) *<sup>d</sup>* Δ*g x* can be removed satisfactorily by first order filtering as shown in Fig.1 as long as high-frequency unmodeled dynamics are not excited. The boundary layer thickness, φ , can be selected as the bandwidth of the first order filter having input perturbations which leads to tuning φ with λ:

$$\overline{k}(x_d) = \left(\lambda/\beta_d^2 - \alpha\right)\phi \tag{29}$$

Combining Eq. (27) and Eq. (29) yields

$$k(x_d) > \phi\left(\lambda/\beta_d^2 - \alpha\right) \quad \text{and} \quad \dot{\phi} + \left(\lambda - \alpha\beta_d^2\right)\phi = \beta_d^2 k(x_d) \tag{30}$$

Also, combining Eq. (28) and Eq. (29) results in

$$k(x_d) < \phi\left(\lambda/\beta_d^2 - \alpha\right) \quad \text{and} \quad \dot{\phi} + \left(\phi/\beta_d^2\right)\left[\left(\lambda/\beta_d^2\right) - \alpha\right] = k(x_d)/\beta_d^2 \tag{31}$$

Equations (24) and (30) yield

$$\dot{\phi} > 0 \to \overline{k}(x) = k(x) - \left(\beta_d/\beta\right)^2\left[k(x_d) - \phi\left(\lambda/\beta_d^2 - \alpha\right)\right] \tag{32}$$

and combining Eq. (25) with Eq. (31) gives

$$\dot{\phi} < 0 \to \overline{k}(x) = k(x) - \left(\beta/\beta_d\right)^2\left[k(x_d) - \phi\left(\lambda/\beta_d^2 - \alpha\right)\right] \tag{33}$$

In addition, the initial value of the boundary layer thickness, φ(0), is given by substituting *xd* at t = 0 in Eq. (29):

$$\phi(0) = \frac{k\left(x_d(0)\right)}{\left(\lambda/\beta_d^2\right) - \alpha}$$
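Equations (27), (29) and (30) can be checked for mutual consistency with a few lines of arithmetic (illustrative numbers): taking $\dot{\phi}$ from the balance condition (30), the gain given by (27) must coincide with the tuning rule (29).

```python
# Consistency check of eqs. (27), (29), (30): with phi_dot taken from the
# balance condition (30), the gain from (27) equals the tuning rule (29).
# Numbers are illustrative.
lam, beta_d, alpha = 2.0, 1.5, 0.1
k_xd, phi = 1.0, 0.5

phi_dot = beta_d**2 * k_xd - (lam - alpha * beta_d**2) * phi   # eq. (30)
k_bar_27 = k_xd - phi_dot / beta_d**2                          # eq. (27)
k_bar_29 = (lam / beta_d**2 - alpha) * phi                     # eq. (29)

print(phi_dot > 0, abs(k_bar_27 - k_bar_29) < 1e-12)
```

The agreement is exact up to rounding, since substituting (30) into (27) cancels $k(x_d)$ algebraically and leaves precisely (29); the analogous substitution of (31) into (28) reproduces (29) as well.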

The results discussed above can be used for applications to track and stabilize highly nonlinear systems. Sliding mode control along with QFT provides better system controllers and leads to selection of hardware easier than using SMC alone. The application of this theory to a driver seat of a heavy vehicle and its simulation are given in the following sections.

## **4. Numerical example**

150 Recent Advances in Robust Control – Novel Approaches and Design Methods


In this section, the sliding mode control theory is applied to track the motion of a driver's seat of a heavy vehicle along a trajectory that can reduce driver fatigue and drowsiness. The trajectory can be varied according to the driver's requirements. This control methodology can overcome most road disturbances and provide a predetermined seat motion pattern to avoid driver fatigue. However, due to parametric uncertainties and modeling inaccuracies, chattering can be observed, which is a major problem when applying SMC alone. In general, chattering increases driver fatigue and also leads to premature failure of controllers. The SMC with QFT developed in this chapter not only eliminates the chattering satisfactorily but also reduces the control effort necessary to maintain the desired motion of the seat.

The relationship between driver fatigue and seat vibration has been discussed in many publications based on anecdotal evidence (Wilson & Horner, 1979; Randall, 1992). It is widely believed, and proved in field tests, that lower vertical acceleration levels will increase the driver's comfort level (U. & R. Landstorm, 1985; Altunel, 1996; Altunel & deHoop, 1998). Heavy vehicle truck drivers, who usually experience vibration levels around 3 Hz while driving, may undergo fatigue and drowsiness (Mabbott et al., 2001). Fatigue and drowsiness while driving may result in loss of concentration, leading to road accidents. Human body metabolism and chemistry can be affected by intermittent and random vibration exposure, resulting in fatigue (Kamenskii, 2001). Typically, vibration exposure levels of heavy vehicle drivers are in the range 0.4–2.0 m/s², with a mean value of 0.7 m/s², in the vertical axis (U. & R. Landstorm, 1985; Altunel, 1996; Altunel & deHoop, 1998; Mabbott et al., 2001).

A suspension system determines the ride comfort of the vehicle, and therefore its characteristics must be properly evaluated to design a proper driver seat under various operating conditions. It also improves vehicle control, safety and stability without degrading ride quality, road holding, load carrying, or passenger comfort, while providing directional control during handling maneuvers. A properly designed driver seat can reduce driver fatigue under different external disturbances while maintaining the same vibration levels, providing improved riding performance.

Over the past decades, the application of sliding mode control has been explored in many disciplines such as underwater vehicles, automotive applications and robot manipulators (Taha et al., 2003; Roberge, 1960; Dorf, 1967; Ogata, 1970; Higdon, 1963; Truxal, 1965; Lundberg, 2003; Phillips, 1994; Siebert, 1986). The combination of sliding controllers with state observers has also been developed and discussed for both the linear and nonlinear cases (Hedrick & Gopalswamy, 1989; Bondarev et al., 1985). Nonlinear systems are difficult to model as linear systems, since there are parametric uncertainties and modeling inaccuracies that can eventually drive the system into resonance (Jean-Jacques, 1991). Sliding mode control can be used for nonlinear stabilization problems in designing controllers, and can provide high-performance systems that are robust to parameter uncertainties and disturbances. Design of such systems includes two steps: (i) choosing a set of switching surfaces that represent some sort of desired motion, and (ii) designing a discontinuous control law that ensures convergence to the switching surfaces (Dorf, 1967; Ogata, 1970). The discontinuous control law guarantees the attraction features of the switching surfaces in the phase space. Sliding mode occurs when the system trajectories are confined to the switching surfaces and cannot leave them for the remainder of the motion. Although this control approach is relatively well understood and extensively studied, important issues related to implementation and chattering behavior remain unresolved. Implementing QFT during the sliding phase of an SMC controller not only eliminates chatter but also achieves vibration isolation. In addition, QFT does not diminish the robustness characteristics of the SMC because it is known to tolerate large parametric and phase information uncertainties.

Figure 2 shows a schematic of a driver seat of a heavy truck. The model consists of an actuator, spring, damper and a motor sitting on the sprung mass. The actuator provides actuation force by means of a hydraulic actuator to keep the seat motion within a comfort level for any road disturbance, while the motor maintains desired inclination angle of the driver seat with respect to the roll angle of the sprung mass. The driver seat mechanism is connected to the sprung mass by using a pivoted joint; it provides the flexibility to change the roll angle. The system is equipped with sensors to measure the sprung mass vertical acceleration and roll angle. Hydraulic pressure drop and spool valve displacement are also used as feedback signals.

Fig. 2. The hydraulic power feed of the driver seat on the sprung mass

### **Nomenclature**

*mh* - Mass of the driver and the seat
*ms* - Sprung mass
*xh* - Vertical position coordinate of the driver seat
*xs* - Vertical position coordinate of the sprung mass
*θs* - Angular displacement of the driver seat (same as sprung mass)
*Fh* - Combined nonlinear spring and damper force of the driver seat
*kh* - Stiffness of the spring between the seat and the sprung mass
*Faf* - Actuator force
*A* - Cross sectional area of the hydraulic actuator piston
### **4.1 Equations of motion**

Based on the mathematical model developed above, the equation of motion in the vertical direction for the driver and the seat can be written as follows:

$$\ddot{\mathbf{x}}_h = -(1 / m_h)F_h + (1 / m_h)F_{af} \tag{34}$$

where


$$F_h = k_{h1}d_h + k_{h2}d_h^3 + C_{h1}\dot{d}_h + C_{h2}\dot{d}_h^2\,\text{sgn}(\dot{d}_h)$$


*kh1* - linear stiffness
*kh2* - cubic stiffness
*Ch1* - linear viscous damping
*Ch2* - fluidic (amplitude dependent) damping
sgn - signum function

$$F_{af} = AP_L$$

$$d_h = (x_h - x_s) - a_{1i}\sin\theta_s$$

Complete derivation of Eq. (34) is shown below for a five-degree-of-freedom roll and bounce motion configuration of the heavy truck driver-seat system subject to a sudden impact. In a four-way valve-piston hydraulic actuator system, the rate of change of the pressure drop across the hydraulic actuator piston, *PL*, is given by (Fialho, 2002)

$$\frac{V_t \dot{P}_L}{4\beta_e} = Q - C_{tp} P_L - A(\dot{x}_h - \dot{x}_s) \tag{35}$$

*Vt* - Total actuator volume

*βe* - Effective bulk modulus of the fluid

*Q* - Load flow

*Ctp* - Total piston leakage coefficient

*A* - Piston area

The load flow of the actuator is given by (Fialho, 2002):

$$Q = \text{sgn}\left[P_s - \text{sgn}(x_v)P_L\right] C_d\, \omega\, x_v \sqrt{(1 / \rho)\left|P_s - \text{sgn}(x_v)P_L\right|} \tag{36}$$

*Ps* – Hydraulic supply pressure

ω - Spool valve area gradient

*X*ν− Displacement of the spool valve


ρ - Hydraulic fluid density

*Cd* – Discharge coefficient
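Because of the nested signum terms, Eq. (36) is easy to transcribe incorrectly. A direct transcription follows; the supply pressure, valve and fluid numbers are hypothetical, chosen only to exercise both spool directions:

```python
import math

def load_flow(x_v, P_L, P_s, C_d, w, rho):
    """Load flow of Eq. (36):
    Q = sgn[Ps - sgn(xv)*PL] * Cd*w*xv * sqrt((1/rho)*|Ps - sgn(xv)*PL|)."""
    def sgn(v):
        return (v > 0) - (v < 0)
    dP = P_s - sgn(x_v) * P_L        # effective pressure across the orifice
    return sgn(dP) * C_d * w * x_v * math.sqrt(abs(dP) / rho)

# Hypothetical values: 10 MPa supply, 2 MPa load pressure drop,
# Cd = 0.7, area gradient 0.02 m, oil density 850 kg/m^3, 1 mm spool travel.
Q_ext = load_flow(0.001, 2e6, 1e7, 0.7, 0.02, 850.0)    # spool displaced one way
Q_ret = load_flow(-0.001, 2e6, 1e7, 0.7, 0.02, 850.0)   # spool displaced the other
```

Note the asymmetry the model predicts: flow reverses sign with the spool displacement, and its magnitude differs between the two directions because sgn(x_v) flips the sign of P_L inside the square root.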

Voltage or current can be fed to the servo-valve to control the spool valve displacement of the actuator for generating the force. Moreover, a stiction model for hydraulic spool can be included to reduce the chattering further, but it is not discussed here.

Fig. 3. Five-degree-of-freedom roll and bounce motion configuration of the heavy duty truck driver-seat system.

### **Nonlinear force equations**

Nonlinear tire forces, suspension forces, and driver seat forces can be obtained by substituting appropriate coefficients into the following nonlinear equation, which covers a wide range of operating conditions for representing the dynamical behavior of the system.

$$F = k_1 d + k_2 d^3 + C_1\dot{d} + C_2\dot{d}^2\,\text{sgn}(\dot{d})$$

where

*F* - Force

*k1* - linear stiffness coefficient

*k2* - cubic stiffness coefficient

*C1* - linear viscous damping coefficient

*C2* - amplitude dependent damping coefficient

*d -* deflection

For the suspension:

$$F_{si} = k_{si1}d_{si} + k_{si2}d_{si}^3 + C_{si1}\dot{d}_{si} + C_{si2}\dot{d}_{si}^2\,\text{sgn}(\dot{d}_{si})$$

For the tires:

$$F\_{ti} = k\_{ti1}d\_{ti} + k\_{ti2}d\_{ti}^3 + \mathbf{C}\_{ti1}\dot{d}\_{ti} + \mathbf{C}\_{ti2}\dot{d}\_{ti}^2 \operatorname{sgn}(\dot{d}\_{ti})$$

For the seat:


$$F\_h = k\_{h1}d\_h + k\_{h2}d\_h^3 + \mathbf{C}\_{h1}\dot{d}\_h + \mathbf{C}\_{h2}\dot{d}\_h^2 \text{sgn}(\dot{d}\_h)$$
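All three force laws above share one template, so a single helper suffices. The sketch below is illustrative only; the usage line plugs in the seat coefficients quoted later in the chapter (kh1 = 1 kN/m, kh2 = 0.03 kN/m³, Ch1 = 0.4 kNs/m, Ch2 = 0.04 kNs/m), converted to SI units:

```python
def nonlinear_force(k1, k2, c1, c2, d, d_dot):
    """Chapter's force template: F = k1*d + k2*d^3 + c1*d_dot + c2*d_dot^2*sgn(d_dot)."""
    sgn = (d_dot > 0) - (d_dot < 0)      # signum function
    return k1 * d + k2 * d**3 + c1 * d_dot + c2 * d_dot**2 * sgn

# Seat force F_h for a 0.1 m deflection closing at 0.5 m/s (SI units)
F_h = nonlinear_force(1000.0, 30.0, 400.0, 40.0, 0.1, -0.5)
```

The signum factor on the quadratic damping term is what makes that term dissipative in both directions of travel; without it, the d_dot² term would always push in the same direction.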

### **Deflection of the suspension springs and dampers**

Based on the mathematical model developed, deflection of the suspension system on the axle is found for both sides as follows:

$$\begin{aligned} \text{Deflection of side 1, } d\_{s1} &= (\mathbf{x}\_s - \mathbf{x}\_u) + S\_i(\sin \theta\_s - \sin \theta\_u) \\ \text{Deflection of side 2, } d\_{s2} &= (\mathbf{x}\_s - \mathbf{x}\_u) - S\_i(\sin \theta\_s - \sin \theta\_u) \end{aligned}$$

### **Deflection of the seat springs and dampers**

By considering the free body diagram in Fig. 3, deflection of the seat is obtained as follows (Rajapakse & Happawana, 2004):

$$d_h = (x_h - x_s) - a_{1i}\sin\theta_s$$

### **Tire deflections**

The tires are modeled by using springs and dampers. Deflections of the tires to a road disturbance are given by the following equations.

$$\begin{aligned} \text{Deflection of tire 1, } d_{t1} &= x_u + (T_i + A_i)\sin\theta_u \\ \text{Deflection of tire 2, } d_{t2} &= x_u + T_i\sin\theta_u \\ \text{Deflection of tire 3, } d_{t3} &= x_u - T_i\sin\theta_u \\ \text{Deflection of tire 4, } d_{t4} &= x_u - (T_i + A_i)\sin\theta_u \end{aligned}$$

### **Equations of motion for the combined sprung mass, unsprung mass and driver seat**

Based on the mathematical model developed above, the equations of motion for each of the sprung mass, unsprung mass, and the seat are written by utilizing the free-body diagram of the system in Fig. 3 as follows:

Vertical and roll motion for the *ith* axle (unsprung mass)

$$m\_u \ddot{\mathbf{x}}\_u = \left(F\_{s1} + F\_{s2}\right) - \left(F\_{t1} + F\_{t2} + F\_{t3} + F\_{t4}\right) \tag{37}$$

$$J\_u \ddot{\theta}\_u = S\_i (F\_{s1} - F\_{s2}) \cos \theta\_u + T\_i (F\_{t3} - F\_{t2}) \cos \theta\_u + (T\_i + A\_i)(F\_{t4} - F\_{t1}) \cos \theta\_u \tag{38}$$

Vertical and roll motion for the sprung mass

$$m_s \ddot{\mathbf{x}}_s = -(F_{s1} + F_{s2}) + F_h \tag{39}$$

$$J_s \ddot{\theta}_s = S_i(F_{s2} - F_{s1})\cos\theta_s + a_{1i}F_h\cos\theta_s \tag{40}$$

Vertical motion for the seat

$$m\_h \ddot{\mathbf{x}}\_h = -F\_h \tag{41}$$

Equations (37)-(41) have to be solved simultaneously, since there are many parameters and nonlinearities. Nonlinear effects can better be understood by varying the parameters and examining relevant dynamical behavior, since changes in parameters change the dynamics of the system. Furthermore, Eqs. (37)-(41) can be represented in the phase plane while varying the parameters of the truck, since each and every trajectory in the phase portrait characterizes the state of the truck. Equations above can be converted to the state space form and the solutions can be obtained using MATLAB. Phase portraits are used to observe the nonlinear effects with the change of the parameters. Change of initial conditions clearly changes the phase portraits and the important effects on the dynamical behavior of the truck can be understood.
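To make the state-space conversion concrete, here is a sketch in Python rather than MATLAB. It keeps only the seat equation, Eq. (41), with the nonlinear seat force and a prescribed sprung-mass motion x_s = A sin(ωt); the reduction to a single degree of freedom and the parameter values are our illustrative assumptions, not the chapter's full five-degree-of-freedom simulation:

```python
import math

# Assumed example values: seat + driver mass and the seat coefficients quoted
# later in the chapter (kh1 = 1 kN/m, kh2 = 0.03 kN/m^3, Ch1 = 0.4 kNs/m,
# Ch2 = 0.04 kNs/m), all in SI units.
m_h, kh1, kh2, ch1, ch2 = 70.0, 1000.0, 30.0, 400.0, 40.0
A, w = 0.3, 2.0 * math.pi * 0.5          # sprung-mass motion x_s = A sin(w t)

def f(t, x):
    """State form of Eq. (41): x = [x_h, x_h_dot], m_h * x_h_ddot = -F_h."""
    xs, xs_dot = A * math.sin(w * t), A * w * math.cos(w * t)
    d, d_dot = x[0] - xs, x[1] - xs_dot          # seat deflection (theta_s = 0)
    sgn = (d_dot > 0) - (d_dot < 0)
    F_h = kh1 * d + kh2 * d**3 + ch1 * d_dot + ch2 * d_dot**2 * sgn
    return [x[1], -F_h / m_h]

def rk4_step(t, x, h):
    """One classical Runge-Kutta step for the two-state system."""
    k1 = f(t, x)
    k2 = f(t + h / 2, [x[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f(t + h / 2, [x[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f(t + h, [x[i] + h / 6 * 0 + h * k3[i] for i in range(2)])
    return [x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

t, x, h = 0.0, [0.1, 0.0], 1e-3              # initial seat displacement 0.1 m
for _ in range(10000):                        # simulate 10 s
    x = rk4_step(t, x, h)
    t += h
```

The full model replaces this single state pair with the ten states of Eqs. (37)–(41); the integration loop is unchanged, which is why the chapter can move freely between the phase-plane view and the state-space solution.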

### **4.2 Applications and simulations (MATLAB)**

Equation (34) can be represented as

$$\ddot{x}_h = f + bU \tag{42}$$

where


$$\begin{aligned} f &= -(1 \, / \, m\_h \, ) F\_h \\\\ b &= 1 \, / \, m\_h \\\\ U &= F\_{af} \end{aligned}$$

The expression *f* is a time-varying function of *xs* and the state vector *xh*. The time-varying function *xs* can be estimated from the information of the sensor attached to the sprung mass, and its limits of variation must be known. The expression *f* and the control gain *b* are not required to be known exactly, but their bounds should be known in applying SMC and QFT. In order to perform the simulation, *xs* is assumed to vary between −0.3 m and 0.3 m, and it can be approximated by the time-varying function *A* sin(ω*t*), where ω is the disturbance angular frequency of the road by which the unsprung mass is oscillated. The bounds of the parameters are given as follows:

$$m_{h\min} \le m_h \le m_{h\max}, \qquad x_{s\min} \le x_s \le x_{s\max}, \qquad b_{\min} \le b \le b_{\max}$$

Estimated values of *mh* and *xs*:

$$m_{hes} = (m_{h\min}\, m_{h\max})^{1/2}, \qquad x_{ses} = \left|x_{s\min}\, x_{s\max}\right|^{1/2}$$

Above bounds and the estimated values were obtained for some heavy trucks by utilizing field test information (Tabarrok & Tong, 1993, 1992; Esmailzadeh et al., 1990; Aksionov, 2001; Gillespie, 1992; Wong, 1978; Rajapakse & Happawana, 2004; Fialho, 2002). They are as follows:


$$m_{h\min} = 50\ \text{kg}, \quad m_{h\max} = 100\ \text{kg}, \quad x_{s\min} = -0.3\ \text{m}, \quad x_{s\max} = 0.3\ \text{m}, \quad \omega = 2\pi(0.1\text{--}10)\ \text{rad/s}, \quad A = 0.3\ \text{m}$$

The estimated nonlinear function, *f*, and bounded estimation error, *F*, are given by:

$$\begin{aligned} f_{es} &= -(k_h / m_{hes})(x_h - x_{ses}) \\ F &= \max\left|f_{es} - f\right| \\ b_{es} &= 0.014 \\ \beta &= 1.414 \\ x_{ses} &= \left|x_{s\min}\, x_{s\max}\right|^{1/2} \end{aligned}$$
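The quoted numbers b_es = 0.014 and β = 1.414 are consistent with reading the estimates as geometric means of the bounds and β as the gain margin (b_max/b_min)^{1/2}; that interpretation is ours, but the arithmetic can be checked directly against the chapter's bounds:

```python
import math

m_h_min, m_h_max = 50.0, 100.0                  # kg, bounds on driver + seat mass
b_min, b_max = 1.0 / m_h_max, 1.0 / m_h_min    # control gain b = 1/m_h

m_hes = math.sqrt(m_h_min * m_h_max)   # geometric-mean mass estimate, ~70.7 kg
b_es = math.sqrt(b_min * b_max)        # ~0.014, matching the value quoted above
beta = math.sqrt(b_max / b_min)        # gain margin, sqrt(2) ~ 1.414
```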

The sprung mass is oscillated by road disturbances, and its changing pattern is given by the vertical angular frequency ω = 2π(0.1 + 9.9 sin²(π*t*)). This function for ω is used in the simulation in order to vary the sprung mass frequency from 0.1 to 10 Hz. Thus ω can be measured by the sensors in real time and fed to the controller to estimate the control force necessary to maintain the desired frequency limits of the driver seat. The expected trajectory for *xh* is given by the function *xhd* = *B* sin(ω*d* *t*), where ω*d* is the desired angular frequency for the driver to have comfortable driving conditions and avoid driver fatigue in the long run. *B* and ω*d* are assumed to be 0.05 m and 2π(0.5) rad/s during the simulation, which yields a continuous 0.5 Hz vibration of the driver seat over time. The mass of the driver and seat is taken as 70 kg throughout the simulation. This value changes from driver to driver and can be obtained by a load cell attached to the driver seat to calculate the control force. It is important to mention that this control scheme provides sufficient room to change the vehicle parameters of the system according to the driver requirements to achieve ride comfort.

### **4.3 Using sliding mode only**

In this section tracking is achieved by using SMC alone and the simulation results are obtained as follows.

Consider *x*(1) = *xh* and *x*(2) = *ẋh*. Eq. (25) is represented in state-space form as follows:

$$
\dot{x}(1) = x(2)
$$

$$\dot{x}(2) = -(k_h / m_h)(x(1) - x_{ses}) + bU$$

Combining Eq. (17), Eq. (19) and Eq. (42), the estimated control law becomes,

$$U_{es} = -f_{es} + \ddot{x}_{hd} - \lambda\left(x(2) - \dot{x}_{hd}\right)$$

Figures 4 to 7 show the system trajectories, tracking error and control force for the initial condition [*xh*, *ẋh*] = [0.1 m, 1 m/s] using the control law. Figure 4 provides the tracked vertical displacement of the driver seat vs. time, and near-perfect tracking behavior can be observed. Figure 5 exhibits the tracking error, which is enlarged in Fig. 6 to show its chattering behavior after tracking is achieved. Chattering is undesirable for the controller because it makes hardware selection impossible and leads to premature hardware failure.

Fig. 5. Tracking error vs. time using SMC only

Fig. 6. Zoomed-in tracking error vs. time using SMC only

Fig. 7. Control force vs. time using SMC only

The values of λ and η in Eq. (17) and Eq. (20) are chosen as 20 and 0.1 (Jean-Jacques, 1991) to obtain the plots and achieve satisfactory tracking performance. A sampling rate of 1 kHz is selected in the simulation. The *s* = 0 condition and the signum function are used. The plot of control force vs. time is given in Fig. 7. It is very important to mention that tracking is guaranteed only with excessive control forces. The mass of the driver and driver seat, its operating limits, the control bandwidth, initial conditions, sprung mass vibrations, chattering and system uncertainties are all factors that generate large control forces. It should be mentioned that this selected example is governed only by linear equations with a sinusoidal disturbance, which causes the controller to generate periodic sinusoidal signals. In general, road disturbances are sporadic and smooth control action can never be expected. This leads to chattering, and QFT is needed to filter it out. Moreover, applying SMC with QFT can reduce excessive control forces and will ease the selection of hardware.
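The chattering mechanism, and the smoothing that a boundary layer of thickness φ (Section 3) provides, can be previewed on a toy first-order plant. Everything here — the plant, the gains and the disturbance — is invented purely for illustration and is not the seat model:

```python
import math

def simulate(smooth, dt=1e-3, steps=5000):
    """Regulate x -> 0 for x_dot = u + d(t), with switching term -k*sgn(s)
    (smooth=False) or its boundary-layer version -k*sat(s/phi) (smooth=True)."""
    k, phi = 2.0, 0.05
    x, u_prev, total_var = 1.0, 0.0, 0.0
    for i in range(steps):
        t = i * dt
        s = x                                        # sliding variable
        if smooth:
            u = -k * max(-1.0, min(1.0, s / phi))    # sat(s/phi): smooth inside layer
        else:
            u = -k * ((s > 0) - (s < 0))             # sgn(s): hard switching
        d = 0.5 * math.sin(2 * math.pi * t)          # bounded disturbance
        x += dt * (u + d)                            # Euler step of the plant
        total_var += abs(u - u_prev)                 # accumulated control activity
        u_prev = u
    return abs(x), total_var

err_sgn, tv_sgn = simulate(smooth=False)
err_sat, tv_sat = simulate(smooth=True)
```

Both variants regulate the state to a small neighborhood of zero, but the total variation of the signum control signal is orders of magnitude larger: that variation is exactly the chattering that wears out the actuator hardware discussed above.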

In subsequent results, the spring constants of the tires were 1200 kN/m & 98 kN/m³ and the damping coefficients were 300 kNs/m & 75 kNs/m². Some of the trucks' numerical parameters (Taha et al., 2003; Ogata, 1970; Tabarrok & Tong, 1992, 1993; Esmailzadeh et al., 1990; Aksionov, 2001; Gillespie, 1992; Wong, 1978) are used in obtaining the plots, and they are as follows: *mh* = 100 kg, *ms* = 3300 kg, *mu* = 1000 kg, *ks11* = *ks21* = 200 kN/m & *ks12* = *ks22* = 18 kN/m³, *kh1* = 1 kN/m & *kh2* = 0.03 kN/m³, *Cs11* = *Cs21* = 50 kNs/m & *Cs12* = *Cs22* = 5 kNs/m², *Ch1* = 0.4 kNs/m & *Ch2* = 0.04 kNs/m, *Js* = 3000 kgm², *Ju* = 900 kgm², *Ai* = 0.3 m, *Si* = 0.9 m, and *a1i* = 0.8 m.

Fig. 4. Vertical displacement of driver seat vs. time using SMC only

158 Recent Advances in Robust Control – Novel Approaches and Design Methods


Fig. 5. Tracking error vs. time using SMC only

Fig. 6. Zoomed in tracking error vs. time using SMC only

Fig. 7. Control force vs. time using SMC only

Quantitative Feedback Theory and Sliding Mode Control 161


### **4.4 Use of QFT on the sliding surface**

Figure 7 shows the required control force using SMC only. In order to lower the excessive control force and to further smooth the control behavior with a view to reducing chattering, QFT is introduced inside the boundary layer. The following graphs are plotted for an initial boundary layer thickness of 0.1 m.
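The smoothing inside the boundary layer can be sketched as follows. The saturation form and the gain `k` are illustrative assumptions of this sketch; φ = 0.1 m is the initial boundary layer thickness used for the plots.

```python
import numpy as np

PHI = 0.1  # initial boundary layer thickness (m)

def sat(s, phi=PHI):
    """Saturation: linear interpolation s/phi inside the layer, +/-1 outside."""
    return np.clip(s / phi, -1.0, 1.0)

def smooth_force(s, k=5.0, phi=PHI):
    """Smoothed control -k*sat(s/phi) replacing the chattering -k*sgn(s)."""
    return -k * sat(s, phi)
```

Outside the layer the control still switches at full authority; inside it the action is continuous, which is what removes the chattering visible in the SMC-only plots.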

Fig. 8. Vertical displacement of driver seat vs. time using SMC & QFT

Fig. 9. Tracking error vs. time using SMC & QFT



Fig. 10. Zoomed in tracking error vs. time using SMC & QFT

Fig. 11. Control force vs. time using SMC & QFT

Fig. 12. Zoomed in control force vs. time using SMC & QFT


Fig. 13. s-trajectory with time-varying boundary layer vs. time using SMC & QFT

Figure 8 again shows that the system tracks the trajectory of interest and follows the desired seat motion over time. Figure 9 provides the zoomed-in tracking error of Fig. 8, which is very small, so a near-perfect tracking condition is achieved. The control force needed to track the system is given in Fig. 11. Figure 12 provides the control forces for both cases, i.e., SMC with QFT and SMC alone. SMC with QFT yields a lower control force, which can be precisely generated by a hydraulic actuator. Increasing the parameter λ will decrease the tracking error at the cost of increased initial control effort.

Varying the thickness of the boundary layer allows better use of the available bandwidth, which reduces the control effort needed to track the system. Parameter uncertainties can be addressed effectively, and the control force can be smoothed, through the combined use of SMC and QFT. A successful application of the QFT methodology requires selecting a suitable function for *F*, since the change in boundary layer thickness depends on the bounds of *F*. Increasing the bounds of *F* will increase the boundary layer thickness, which overestimates the change in boundary layer thickness and the control effort. The evolution of dynamic model uncertainty with time is given by the change of boundary layer thickness. The right selection of the parameters and their bounds always results in lower tracking errors and control forces, which will ease choosing hardware for most applications.

### **5. Conclusion**

This chapter provided information on designing a road-adaptive driver's seat for a heavy truck via a combination of SMC and QFT. Based on the assumptions, the simulation results show that the adaptive driver seat controller has high potential to provide superior driver comfort over a wide range of road disturbances. However, parameter uncertainties, the presence of unmodeled dynamics such as structural resonant modes, neglected time-delays, and a finite sampling rate can largely change the dynamics of such systems. SMC provides an effective methodology for designing and testing controllers within the performance trade-offs; thus tracking is guaranteed within the operating limits of the system. The combined use of SMC and QFT lets the controller behave smoothly, with minimal chattering, which is an inherent obstacle to using SMC alone. Chattering reduction through QFT supports the selection of hardware and also reduces excessive control action. In this chapter, a simulation study was done for a linear system with sinusoidal disturbance inputs. Very high control effort is needed due to the fast switching behavior when SMC is used alone; because QFT smooths the switching, the control effort can be reduced. Most controllers fail when excessive chattering is present, and SMC with QFT can be used effectively to smooth the control action. In this example, since the control gain is fixed, it is independent of the states, which eases control manipulation. The developed theory can be used effectively in most control problems to reduce chattering and to lower the control effort. It should be mentioned that acceleration feedback is not always needed for position control, since this depends mainly on the control methodology and the system employed.

In order to implement the control law, the road disturbance frequency, ω, should be measured at a rate higher than or equal to 1000 Hz (to comply with the simulation requirements) to update the system; higher frequencies are better. The bandwidth of the actuator depends upon several factors, e.g., how quickly the actuator can generate the force needed, the road profile, the response time, and signal delay.

### **6. References**

Aksionov, P. V. (2001). Law and criterion for evaluation of optimum power distribution to vehicle wheels, *Int. J. Vehicle Design*, Vol. 25, No. 3, pp. 198-202.

Altunel, A. O. (1996). The effect of low-tire pressure on the performance of forest products transportation vehicles, Master's thesis, Louisiana State University, School of Forestry, Wildlife and Fisheries.

Altunel, A. O. and De Hoop, C. F. (1998). The effect of lowered tire pressure on a log truck driver seat, *Louisiana State University Agriculture Center*, Vol. 9, No. 2, Baton Rouge, USA.

Bondarev, A. G., Bondarev, S. A., Kostylyova, N. Y. and Utkin, V. I. (1985). Sliding modes in systems with asymptotic state observers, *Automatic Remote Control*, Vol. 6.

Dorf, R. C. (1967). *Modern Control Systems*, Addison-Wesley, Reading, Massachusetts, pp. 276-279.

Esmailzadeh, E., Tong, L. and Tabarrok, B. (1990). Road vehicle dynamics of log hauling combination trucks, *SAE Technical Paper Series 912670*, pp. 453-466.

Fialho, I. and Balas, G. J. (2002). Road adaptive active suspension design using linear parameter-varying gain-scheduling, *IEEE Transactions on Control Systems Technology*, Vol. 10, No. 1, pp. 43-54.

Gillespie, T. D. (1992). *Fundamentals of Vehicle Dynamics*, SAE, Inc., Warrendale, PA.

Hedrick, J. K. and Gopalswamy, S. (1989). Nonlinear flight control design via sliding method, Dept. of Mechanical Engineering, University of California, Berkeley.

Higdon, D. T. and Cannon, R. H. (1963). On the control of unstable multiple-output mechanical systems, *ASME Publication* 63-WA-148, New York.

Jean-Jacques, E. S. and Weiping, L. (1991). *Applied Nonlinear Control*, Prentice-Hall, Inc., Englewood Cliffs, New Jersey.

Kamenskii, Y. and Nosova, I. M. (1989). Effect of whole body vibration on certain indicators of neuro-endocrine processes, *Noise and Vibration Bulletin*, pp. 205-206.

Landstrom, U. and Landstrom, R. (1985). Changes in wakefulness during exposure to whole body vibration, *Electroencephalography and Clinical Neurophysiology*, Vol. 61, pp. 411-415.

Lundberg, K. H. and Roberge, J. K. (2003). Classical dual-inverted-pendulum control, *Proceedings of the IEEE CDC 2003*, Maui, Hawaii, pp. 4399-4404.

Mabbott, N., Foster, G. and McPhee, B. (2001). Heavy vehicle seat vibration and driver fatigue, *Australian Transport Safety Bureau*, Report No. CR 203, pp. 35.

Nordgren, R. E., Franchek, M. A. and Nwokah, O. D. I. (1995). A design procedure for the exact H∞ SISO robust performance problem, *Int. J. Robust and Nonlinear Control*, Vol. 5, pp. 107-118.

Nwokah, O. D. I., Ukpai, U. I., Gasteneau, Z. and Happawana, G. S. (1997). Catastrophes in modern optimal controllers, *Proceedings, American Control Conference*, Albuquerque, NM, June.

Ogata, K. (1970). *Modern Control Engineering*, Prentice-Hall, Englewood Cliffs, New Jersey, pp. 277-279.

Phillips, L. C. (1994). Control of a dual inverted pendulum system using linear-quadratic and H-infinity methods, Master's thesis, Massachusetts Institute of Technology.

Rajapakse, N. and Happawana, G. S. (2004). A nonlinear six degree-of-freedom axle and body combination roll model for heavy trucks' directional stability, *Proceedings of IMECE2004-61851, ASME International Mechanical Engineering Congress and RD&D Expo*, November 13-19, Anaheim, California, USA.

Randall, J. M. (1992). Human subjective response to lorry vibration: implications for farm animal transport, *J. Agricultural Engineering Research*, Vol. 52, pp. 295-307.

Roberge, J. K. (1960). The mechanical seal, Bachelor's thesis, Massachusetts Institute of Technology.

Siebert, W. McC. (1986). *Circuits, Signals, and Systems*, MIT Press, Cambridge, Massachusetts.

Tabarrok, B. and Tong, L. (1992). The directional stability analysis of log hauling truck – double doglogger, *Technical Reports, University of Victoria, Mechanical Engineering Department*, DSC, Vol. 44, pp. 383-396.

Tabarrok, B. and Tong, X. (1993). Directional stability analysis of logging trucks by a yaw roll model, *Technical Reports, University of Victoria, Mechanical Engineering Department*, pp. 57-62.

Taha, E. Z., Happawana, G. S. and Hurmuzlu, Y. (2003). Quantitative feedback theory (QFT) for chattering reduction and improved tracking in sliding mode control (SMC), *ASME J. of Dynamic Systems, Measurement, and Control*, Vol. 125, pp. 665-669.

Thompson, D. F. (1998). Gain-bandwidth optimal design for the new formulation quantitative feedback theory, *ASME J. Dyn. Syst., Meas., Control*, Vol. 120, pp. 401-404.

Truxal, J. G. (1965). State models, transfer functions, and simulation, Monograph 8, *Discrete Systems Concept Project*.

Wilson, L. J. and Horner, T. W. (1979). Data analysis of tractor-trailer drivers to assess drivers' perception of heavy duty truck ride quality, *Report DOT-HS-805-139*, National Technical Information Service, Springfield, VA, USA.

Wong, J. Y. (1978). *Theory of Ground Vehicles*, John Wiley and Sons.




## **Integral Sliding-Based Robust Control**

## Chieh-Chuan Feng

*I-Shou University, Taiwan, Republic of China*

### **1. Introduction**


In this chapter we study robust performance control based on integral sliding mode for systems with nonlinearities and perturbations consisting of external disturbances and model uncertainties, both possibly time-varying. Sliding-mode control is one of the robust control methodologies that deal with both linear and nonlinear systems; it has been known for over four decades (El-Ghezawi et al., 1983; Utkin & Shi, 1996) and is used extensively from switching power electronics (Tan et al., 2005) to the automobile industry (Hebden et al., 2003) and even satellite control (Goeree & Fasse, 2000; Liu et al., 2005). The basic idea of sliding-mode control is to drive the sliding surface *s* from *s* ≠ 0 to *s* = 0 and stay there for all future time, provided a proper sliding-mode control is established. Depending on the design of the sliding surface, however, *s* = 0 does not necessarily place the system state at the equilibrium that the control problem targets. For example, sliding-mode control drives a sliding surface *s* = *Mx* − *Mx*0 to *s* = 0; this only implies that the system state reaches the initial state, i.e., *x* = *x*0, for some constant matrix *M* and a nonzero initial state. Considering the linear sliding surface *s* = *Mx*, one of the superior advantages of sliding mode is that *s* = 0 implies the equilibrium of the system state, i.e., *x* = 0. Another sliding surface design, the integral sliding surface used in this chapter, has one important advantage: it removes the reaching phase, the initial period of time during which the system has not yet reached the sliding surface and is therefore sensitive to any uncertainties or disturbances that jeopardize the system. The integral sliding surface design solves this problem in that the system trajectories start on the sliding surface from the first time instant (Fridman et al., 2005; Poznyak et al., 2004). The function of the integral sliding-mode control is then to maintain the system's motion on the integral sliding surface despite model uncertainties and external disturbances, even though the system state equilibrium has not yet been reached.
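As a sketch of how the reaching phase is removed (following the standard integral sliding-surface construction in the sliding-mode literature, not a formula stated in this chapter), one may choose

$$s(t) = M\big[x(t) - x(0)\big] - \int_0^t M\big(A x(\tau) + B u_0(x(\tau), \tau)\big)\, d\tau,$$

where *u*0 is the nominal control. By construction *s*(0) = 0, so the trajectory lies on the sliding surface from the first time instant and no reaching phase occurs.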

In general, an inherent and invariant property, and indeed an advantage, of all sliding-mode control is the ability to completely nullify the so-called matched-type uncertainties and nonlinearities, defined in the range space of the input matrix (El-Ghezawi et al., 1983). In the presence of unmatched-type nonlinearities and uncertainties, however, the conventional sliding-mode control (Utkin et al., 1999) cannot be formulated and thus is unable to control the system. The existence of unmatched-type uncertainties therefore has great potential to endanger the *sliding dynamics*, which describe the system motion on the sliding surface after the matched-type uncertainties are nullified. Hence, another control action that simultaneously stabilizes the sliding dynamics must be developed.


Next, a new issue concerning the performance of integral sliding-mode control is addressed: we develop a performance measure in terms of the L2-gain of the *zero dynamics*. The concept of zero dynamics introduced by Lu & Spurgeon (1997) treats the sliding surface *s* as the controlled output of the system. The role of the integral sliding-mode control is to reach and maintain *s* = 0 while keeping the performance measure within a bound. In short, the integral sliding-mode control removes the influence of the matched-type nonlinearities and uncertainties while maintaining the system on the integral sliding surface, without a reaching phase, and bounding a performance measure. Simultaneously, not subsequently, another control action, i.e., robust linear control, must be taken to compensate for the unmatched-type nonlinearities, model uncertainties, and external disturbances, and to drive the system state to equilibrium.

Robust linear control (Zhou et al., 1995) applied to systems with uncertainties has been studied extensively for over three decades (Boyd et al., 1994, and references therein). Since part of the uncertainties has now been eliminated by the sliding-mode control, the remaining unmatched-type uncertainties and external disturbances are best suited to the framework of robust linear control, in which stability and performance are the issues to be pursued. In this chapter, control in terms of the L2-gain (van der Schaft, 1992) and H2 (Paganini, 1999) measures is discussed. It should be noted that the integral sliding-mode control signal and the robust linear control signal are combined to form a composite control signal that maintains the system on the sliding surface while simultaneously driving the system to its final equilibrium, i.e., the system state being zero.

This chapter is organized as follows. In Section 2, a system with nonlinearities, model uncertainties, and external disturbances is presented in state-space form, and the norm-bound assumptions and the control problem, with its stability and performance issues, are introduced. In Section 3, we construct the integral sliding-mode control such that the stability of the zero dynamics is attained while, with the same sliding-mode control signal, the performance measure is confined within a bound. After an integral sliding-mode control without a reaching phase has been designed, in Section 4 we derive robust control schemes under the L2-gain and H2 measures; a composite control comprising integral sliding-mode control and robust linear control that drives the system to its final equilibrium is then complete. The effectiveness of the whole design is verified by numerical examples in Section 5. Lastly, the chapter is concluded in Section 6.

### **2. Problem formulation**

In this section, the uncertain systems with nonlinearities, model uncertainties, and disturbances, together with the control problem to be solved, are introduced.

#### **2.1 Controlled system**

Consider continuous-time uncertain systems of the form

$$\dot{\mathbf{x}}(t) = A(t)\mathbf{x}(t) + B(t)(u(\mathbf{x}, t) + h(\mathbf{x})) + \sum\_{i=1}^{N} g\_i(\mathbf{x}, t) + \mathcal{B}\_d w(t) \tag{1}$$

where *x*(*t*) ∈ **R**<sup>n</sup> is the state vector, *u*(*x*, *t*) ∈ **R**<sup>m</sup> is the control action, and for some prescribed compact set S ∈ **R**<sup>p</sup>, *w*(*t*) ∈ S is the vector of (time-varying) variables that represent exogenous inputs, which include disturbances (to be rejected) and possible references (to be tracked). *A*(*t*) ∈ **R**<sup>n×n</sup> and *B*(*t*) ∈ **R**<sup>n×m</sup> are time-varying uncertain matrices. *Bd* ∈ **R**<sup>n×p</sup>
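To make the structure of Eq. (1) concrete, the sketch below integrates a toy instance of it with forward Euler steps. The matrices, nonlinearities, and the placeholder input are invented for illustration only; they are not taken from any example in this chapter.

```python
import numpy as np

# Toy 2-state instance of x' = A(t)x + B(t)(u + h(x)) + sum_i g_i(x,t) + Bd*w(t)
A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # nominal state matrix (Delta A omitted)
B = np.array([[0.0], [1.0]])               # full-column-rank input matrix, m = 1
Bd = np.array([[0.0], [0.5]])              # disturbance direction

def f(x, u, t):
    h = np.array([0.1 * np.sin(x[0])])     # matched nonlinearity h(x), bounded
    g = 0.05 * x                           # unmatched nonlinearity, ||g|| <= theta*||x||
    w = np.array([np.sin(t)])              # bounded exogenous input, ||w|| <= 1
    return A @ x + B @ (u + h) + g + Bd @ w

def simulate(x0, dt=1e-3, steps=1000):
    """Euler-integrate the toy system under a placeholder stabilizing input."""
    x = np.array(x0, dtype=float)
    for k in range(steps):
        u = np.array([-3.0 * x[1]])        # illustrative input, not the chapter's law
        x = x + dt * f(x, u, k * dt)
    return x
```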


is a constant matrix that shows how *w*(*t*) influences the system in a particular direction. The matched-type nonlinearities *<sup>h</sup>*(*x*) <sup>∈</sup> **<sup>R</sup>***<sup>m</sup>* is continuous in *<sup>x</sup>*. *gi*(*x*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***n*, an unmatched-type nonlinearity, possibly time-varying, is piecewise continuous in *t* and continuous in *x*. We assume the following:

1. *A*(*t*) = *A* + Δ*A*(*t*) = *A* + *E*0*F*0(*t*)*H*0, where *A* is a constant matrix and Δ*A*(*t*) = *E*0*F*0(*t*)*H*0 is the unmatched uncertainty in the state matrix, satisfying

$$\|F_0(t)\| \le 1,\tag{2}$$

where *F*0(*t*) is an unknown but bounded matrix function, and *E*0 and *H*0 are known constant real matrices.

2. *B*(*t*) = *B*(*I* + Δ*B*(*t*)) and Δ*B*(*t*) = *F*1(*t*)*H*1, where Δ*B*(*t*) represents the input matrix uncertainty. *F*1(*t*) is an unknown but bounded matrix function with

$$\|F_1(t)\| \le 1,\tag{3}$$

and *H*1 is a known constant real matrix with

$$\|H_1\| = \beta_1 < 1,\tag{4}$$

and the constant matrix *<sup>B</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>m</sup>* is of full column rank, i.e.

$$\text{rank}(B) = m.\tag{5}$$

3. The exogenous signals, *w*(*t*), are bounded by an upper bound *w*¯,

$$\|w(t)\| \le \bar{w}.\tag{6}$$

4. The *gi*(*x*, *t*) representing the unmatched nonlinearity satisfies the condition,

$$\|g_i(x,t)\| \le \theta_i\|x\|, \quad \forall\, t \ge 0,\ i = 1, \cdots, N,\tag{7}$$

where *θ<sup>i</sup>* > 0.

5. The matched nonlinearity *h*(*x*) satisfies the inequality

$$\|h(x)\| \le \eta(x),\tag{8}$$

where *η*(*x*) is a known non-negative scalar-valued function.

**Remark 1.** *For simplicity of computation in the sequel, a projection matrix M satisfying MB* = *I (which exists since rank*(*B*) = *m) is constructed by the singular value decomposition:*

$$B = \begin{pmatrix} U_1 & U_2 \end{pmatrix} \begin{pmatrix} \Sigma \\ 0 \end{pmatrix} V,$$

*where* (*U*1 *U*2) *and V are unitary matrices and* Σ = *diag*(*σ*1, ··· , *σm*)*. Let*

$$M = V^T \begin{pmatrix} \Sigma^{-1} & 0 \end{pmatrix} \begin{pmatrix} U_1^T \\ U_2^T \end{pmatrix}. \tag{9}$$

*It is easily seen that*

$$MB = I.\tag{10}$$
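Remark 1's construction can be checked numerically. The sketch below uses a randomly generated full-column-rank *B* (an assumption for illustration, not the chapter's data), builds *M* from the SVD, and verifies (10); numpy's `Vt` plays the role of *V* in (9).

```python
import numpy as np

# Sketch of Remark 1: build the projection M with M B = I from the SVD.
# B is a random full-column-rank example (an assumption for illustration);
# numpy returns B = U @ diag(S) @ Vt, with Vt playing the role of V in (9).
rng = np.random.default_rng(0)
n, m = 5, 2
B = rng.standard_normal((n, m))        # rank(B) = m almost surely

U, S, Vt = np.linalg.svd(B, full_matrices=True)
U1 = U[:, :m]                          # columns spanning range(B)
M = Vt.T @ np.diag(1.0 / S) @ U1.T     # cf. (9)

assert np.allclose(M @ B, np.eye(m))   # property (10)
assert np.allclose(M, np.linalg.pinv(B))
```

Any left inverse of *B* would serve; the SVD construction coincides with the Moore-Penrose pseudoinverse when rank(*B*) = *m*, as the final check confirms.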

### **2.2 Control problem**

The control objective for (1) is to provide a feedback controller which processes the full information received from the plant in order to generate a composite control signal

$$u(x,t) = u_s(t) + u_r(x,t), \tag{11}$$

Integral Sliding-Based Robust Control 169


where *us*(*t*) stands for the sliding-mode control and *ur*(*x*, *t*) is the linear control that robustly stabilizes the system with a performance measure for all admissible nonlinearities, model uncertainties, and external disturbances. The ability of sliding-mode control to completely nullify the matched nonlinearities is one of the reasons for choosing it as part of the composite control (11). For any control problem to have satisfactory action, two objectives must be achieved: *stability* and *performance*. In this chapter the sliding-mode controller, *us*(*t*), is designed so as to achieve asymptotic stability in the Lyapunov sense and a performance measure in the L<sup>2</sup> sense satisfying

$$\int_0^T \|s\|^2\,dt \le \rho^2 \int_0^T \|w\|^2\,dt,\tag{12}$$

where the variable *s* defines the sliding surface. The mission of *us*(*t*) is to drive the system to reach *s* = 0 and maintain it there for all future time, subject to zero initial condition, for some prescribed *ρ* > 0. Asymptotic stability in the Lyapunov sense here means that, with the sliding surface *s* defined, the sliding-mode control keeps the system at the condition *s* = 0; when the system leaves the sliding surface because of external disturbances, so that *s* ≠ 0, the sliding-mode control drives the system back to the surface in an asymptotic manner. In particular, our design of integral sliding-mode control places the system on the sliding surface without a reaching phase. It should be noted that, although the system has been driven to the sliding surface, the unmatched nonlinearities and uncertainties still affect the behavior of the system. During this stage the other part of the control, the robust linear controller *ur*(*x*, *t*), is applied to compensate the unmatched nonlinearities and uncertainties, so that robust stability holds and the performance measure in the L2-gain sense satisfies

$$\int_0^T \|z\|^2\,dt \le \gamma^2 \int_0^T \|w\|^2\,dt,\tag{13}$$

where the controlled variable *z* is defined to be a linear combination of the system state *x* and the control signal *ur*, such that the state of the sliding dynamics is driven to the equilibrium state *x* = 0, subject to zero initial condition, for some *γ* > 0. In addition to the performance defined in (13), the H<sup>2</sup> performance measure can also be applied to the sliding dynamics, so that the performance criterion is finite when evaluating the energy response to an impulse input of random direction at *w*. The H<sup>2</sup> performance measure is defined to be

$$J(x_0) = \sup_{x(0)=x_0} \|z\|_2^2. \tag{14}$$

In this chapter we study both performance measures of the controlled variable *z*. For the composite control defined in (11), one must be aware that the working purposes of the control signals *us*(*t*) and *ur*(*x*, *t*) are different. When the composite control is applied, the control signal must not only maintain the sliding surface but also drive the system toward its equilibrium. Both are accomplished by establishing asymptotic stability in the sense of Lyapunov.
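As a concrete, self-contained reading of the measures (13) and (14), consider the scalar system ẋ = −*ax* + *w* with *z* = *x*, a toy example assumed here (not one of the chapter's systems): its L2-gain is sup<sub>ω</sub> |1/(jω + *a*)| = 1/*a*, and its squared H<sup>2</sup> norm is 1/(2*a*). A numerical sketch:

```python
import numpy as np

# Toy reading of (13)-(14): for xdot = -a*x + w, z = x (an assumed scalar
# example, not from the chapter), the L2-gain is sup_w |1/(jw + a)| = 1/a
# and the squared H2 norm is 1/(2a).
a = 2.0
omega = np.linspace(0.0, 200.0, 200001)
gain = np.abs(1.0 / (1j * omega + a))   # magnitude of the frequency response

l2_gain = gain.max()                    # attained at omega = 0
h2_sq = (gain**2).sum() * (omega[1] - omega[0]) / np.pi  # one-sided integral / pi

assert abs(l2_gain - 1.0 / a) < 1e-9
assert abs(h2_sq - 1.0 / (2.0 * a)) < 0.01
```

The L2-gain is a worst-case energy amplification over all inputs, while the H<sup>2</sup> measure is the energy of the impulse response; the two bounds (13) and (14) capture exactly these two readings.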


### **3. Sliding-mode control design**

The integral sliding-mode control that completely eliminates the matched nonlinearities and uncertainties of (1), while keeping *s* = 0 and satisfying an L2-gain bound, is designed in the following manner.

### **3.1 Integral sliding-mode control**

Let the switching control law be

$$u_s(t) = -\alpha(t)\,\frac{s(x,t)}{\|s(x,t)\|}. \tag{15}$$

The integral sliding surface inspired by (Cao & Xu, 2004) is defined to be

$$\mathbf{s}(\mathbf{x},t) = M\mathbf{x}(t) + \mathbf{s}\_0(\mathbf{x},t),\tag{16}$$

where *s*0(*x*, *t*) is defined to be

$$s_0(x,t) = -M\left(x_0 + \int_0^t \big(Ax(\tau) + Bu_r(\tau)\big)\,d\tau\right); \quad x_0 = x(0). \tag{17}$$

The switching control gain *α*(*t*), a positive scalar, satisfies

$$\alpha(t) \ge \frac{1}{1 - \beta\_1} \left( \lambda + \beta\_0 + (1 + \beta\_1)\eta(\mathbf{x}) + \beta\_1 \|\mathbf{u}\_r\| \right) \tag{18}$$

where

$$\beta_0 = \kappa\|ME_0\|\|H_0\| + \kappa\|M\|\sum_{i=1}^{N}\theta_i + \|MB_d\|\bar{w}. \tag{19}$$
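For concreteness, the bound (18) with (19) can be evaluated once numeric bounds are in hand. Every matrix and constant below (*M*, *E*0, *H*0, *Bd*, *κ*, *θi*, *w̄*, *β*1, *η*(*x*), ‖*ur*‖, *λ*) is an illustrative assumption, not data from the chapter:

```python
import numpy as np

# Evaluating the switching-gain bound (18) with beta_0 from (19).
# All matrices and constants here are illustrative assumptions.
M  = np.array([[0.0, 1.0]])          # projection with M @ B = I (Remark 1)
E0 = np.array([[0.0], [0.2]])
H0 = np.array([[1.0, 0.0]])
Bd = np.array([[0.0], [1.0]])
kappa, w_bar = 2.0, 0.5              # state bound and disturbance bound (6)
thetas = [0.1, 0.05]                 # gains of the unmatched nonlinearities (7)
beta1 = 0.3                          # beta_1 = ||H_1|| < 1, assumption (4)
eta_x, norm_ur, lam = 0.4, 1.0, 0.2  # eta(x), ||u_r||, margin lambda

beta0 = (kappa * np.linalg.norm(M @ E0, 2) * np.linalg.norm(H0, 2)
         + kappa * np.linalg.norm(M, 2) * sum(thetas)
         + np.linalg.norm(M @ Bd, 2) * w_bar)          # (19)
alpha_min = (lam + beta0 + (1 + beta1) * eta_x
             + beta1 * norm_ur) / (1 - beta1)          # lower bound in (18)

assert abs(beta0 - 1.2) < 1e-9
assert alpha_min > 0
```

Note how the input-matrix uncertainty inflates the gain through the factor 1/(1 − *β*1), which is why assumption (4) requires *β*1 < 1.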

*λ* is chosen to be some positive constant satisfying the performance measure (cf. (36)). It is not difficult to see from (16) and (17) that

$$s(\mathbf{x}\_0, \mathbf{0}) = \mathbf{0},\tag{20}$$

in other words, from the very beginning of system operation the controlled system is on the sliding surface, so the design is without a reaching phase. Next, to ensure the sliding motion on the sliding surface, a Lyapunov candidate for the system is chosen to be

$$V\_s = \frac{1}{2} s^T s.\tag{21}$$

It is noted that in the sequel, when the arguments of a function are clear from context, we will omit them. To guarantee the sliding motion on the sliding surface, the following condition on the time derivative must hold:

$$
\dot{V}\_s = \mathbf{s}^T \dot{\mathbf{s}} \le \mathbf{0}.\tag{22}
$$

It follows from (16) and (17) that

$$
\dot{\mathbf{s}} = M\dot{\mathbf{x}} + M(A\mathbf{x} + Bu\_{\mathrm{r}}) \tag{23}
$$

Substituting (1) into (23) and in view of (10), we have

$$\dot{s} = M\Delta A(t)x + (I + \Delta B(t))(u + h(x)) + M\sum_{i=1}^{N} g_i(x,t) + MB_d w - u_r. \tag{24}$$

Thus the following inequality holds,

$$\begin{split} \dot{V}_s &= s^T\Big(M\Delta A(t)x + (I+\Delta B(t))(u+h(x)) + M\sum_{i=1}^{N} g_i(x,t) + MB_d w - u_r\Big) \\ &\le \|s\|\big(\beta_0 + (1+\beta_1)\eta(x) + \beta_1\|u_r\| + (\beta_1-1)\alpha(t)\big). \end{split} \tag{25}$$

By selecting *α*(*t*) as (18), we obtain

$$
\dot{V}\_s \le -||s||\lambda \le 0,\tag{26}
$$


which not only guarantees the sliding motion of (1) on the sliding surface, i.e. maintains *s* = 0, but also drives the system back to the sliding surface if a deviation caused by disturbances occurs. To establish the inequality (25), the following norm bounds must be quantified:

$$\begin{split} s^T(M\Delta A(t)x) &\le \|s\|\|M\Delta A(t)x\| = \|s\|\|ME_0F_0(t)H_0x\| \\ &\le \|s\|\|ME_0\|\|H_0\|\|x\| \le \|s\|\|ME_0\|\|H_0\|\kappa, \end{split} \tag{27}$$

by the assumption (2) and by asymptotic stability in the sense of Lyapunov, so that there exists a ball B = {*x*(*t*) : max*t*≥0 ‖*x*(*t*)‖ ≤ *κ*, for ‖*x*0‖ < *δ*}. In view of (3), (4), (68), and the second term in the parentheses of (25), the following inequality holds,

$$\begin{split} s^T(I+\Delta B(t))h(x) &\le \|s\|\|(I+\Delta B)h\| = \|s\|\|(I+F_1(t)H_1)h\| \\ &\le \|s\|(1+\|H_1\|)\eta(x) = \|s\|(1+\beta_1)\eta(x). \end{split} \tag{28}$$

In a similar manner, we obtain

$$\begin{split} s^T\Delta B(t)u &\le \|s\|\|\Delta B u\| = \|s\|\|F_1(t)H_1(u_s+u_r)\| \\ &\le \|s\|\|H_1\|(\|u_s\|+\|u_r\|) = \|s\|\beta_1(\alpha(t)+\|u_r\|), \end{split} \tag{29}$$

where $\|u_s\| = \left\| -\alpha(t)\frac{s}{\|s\|} \right\| = \alpha(t)$. As for the disturbance *w*, we have

$$s^T M B_d w \le \|s\|\|MB_d w\| \le \|s\|\|MB_d\|\bar{w}, \tag{30}$$

by using the assumption of (6). Lastly,

$$\begin{split} s^T M \sum_{i=1}^{N} g_i(x,t) &\le \|s\|\|M\|\Big\|\sum_{i=1}^{N} g_i(x,t)\Big\| \le \|s\|\|M\|\sum_{i=1}^{N}\|g_i(x,t)\| \\ &\le \|s\|\|M\|\Big(\sum_{i=1}^{N}\theta_i\|x\|\Big) \le \|s\|\|M\|\Big(\kappa\sum_{i=1}^{N}\theta_i\Big), \end{split} \tag{31}$$

since the unmatched nonlinearity *gi*(*x*, *t*) satisfies (7). Applying (27)-(31) to (22), we obtain the inequality (25). That the sliding motion holds on the sliding surface right from the very beginning of system operation, i.e. *t* = 0, and that *s* = 0 is maintained for *t* ≥ 0, follow from the inequality (26):

$$\dot{V}_s = \frac{dV_s}{dt} \le -\lambda\|s\| = -\lambda\sqrt{2V_s} \le -\lambda\sqrt{V_s} \le 0.$$

This implies that


$$\frac{dV_s}{\sqrt{V_s}} \le -\lambda\, dt.$$

Integrating both sides of the inequality, we have

$$\int_{V_s(0)}^{V_s(t)} \frac{dV_s}{\sqrt{V_s}} = 2\sqrt{V_s(t)} - 2\sqrt{V_s(0)} \le -\lambda t.$$

Since (20) gives *Vs*(0) = 0, this implies

$$0 \le 2\sqrt{V_s(t)} = \sqrt{2\,s^T(x,t)s(x,t)} \le 0. \tag{32}$$

This establishes that *s* = 0, and hence *s*˙ = 0, for *t* ≥ 0; from this and (24) we find

$$u = -(I+\Delta B(t))^{-1}\left(M\Delta A(t)x + (I+\Delta B(t))h(x) + M\sum_{i=1}^{N} g_i(x,t) + MB_d w - u_r\right), \tag{33}$$

where (4) guarantees that the inverse in (33) exists. Substituting (33) into (1), and in view of (10), we obtain the sliding dynamics

$$\dot{x} = Ax + G\left(\Delta A(t)x + \sum_{i=1}^{N} g_i(x,t)\right) + GB_d w + Bu_r, \tag{34}$$

where *G* = *I* − *BM*. It is seen that the matched uncertainties, Δ*B*(*t*)*u* and (*I* + Δ*B*(*t*))*h*(*x*), are completely removed.
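A minimal simulation sketch of the composite control *u* = *us* + *ur* on a double integrator with a matched nonlinearity *h*(*x*) = 0.5 sin(*x*1). The model, gains, and the tanh-smoothed switching law are illustrative assumptions, not the chapter's worked example; the sketch also checks that *G* = *I* − *BM* annihilates the matched channel, *GB* = 0.

```python
import numpy as np

# Composite control u = u_s + u_r from (11) on a double integrator with a
# matched nonlinearity h(x) = 0.5*sin(x1). All numbers (A, B, K, alpha, the
# tanh boundary layer) are illustrative assumptions.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
M = np.linalg.pinv(B)                  # projection with M @ B = I (Remark 1)
G = np.eye(2) - B @ M
assert np.allclose(G @ B, 0.0)         # G annihilates the matched channel

K = np.array([[-2.0, -3.0]])           # u_r = K x with A + B K Hurwitz
alpha = 2.0                            # switching gain > sup|h| plus margin
dt, steps = 1e-3, 10000

x = np.array([1.0, -0.5])
z = x.copy()                           # z(t) = x0 + int_0^t (A x + B u_r) dtau
for _ in range(steps):
    ur = (K @ x).item()
    s = (M @ (x - z)).item()           # integral sliding variable (16)-(17)
    us = -alpha * np.tanh(s / 0.01)    # smoothed version of the law (15)
    h = 0.5 * np.sin(x[0])             # matched nonlinearity, |h| <= 0.5
    xdot = A @ x + B[:, 0] * (us + ur + h)
    zdot = A @ x + B[:, 0] * ur
    x, z = x + dt * xdot, z + dt * zdot

assert abs((M @ (x - z)).item()) < 0.05  # trajectory stays near s = 0
assert np.linalg.norm(x) < 0.05          # state driven toward the equilibrium
```

On the manifold *s* = 0 the closed loop reduces to the sliding dynamics (34) with the matched terms removed, which is why only *ur* shapes the convergence to the equilibrium.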

#### **3.2 Performance measure of sliding-mode control**

The concept of *zero dynamics* introduced by (Lu & Spurgeon, 1997) in sliding-mode control treats the sliding variable *s* as the controlled output in the presence of disturbances, nonlinearities, and uncertainties. With regard to (1), a performance measure similar to (van der Schaft, 1992) is formally defined:

Let *ρ* ≥ 0. The system (1) with the zero dynamics defined by (16) is said to have L2-gain less than or equal to *ρ* if

$$\int\_{0}^{T} ||s||^{2} dt \le \rho^{2} \int\_{0}^{T} ||w||^{2} dt,\tag{35}$$

for all *T* ≥ 0 and all *w* ∈ L2(0, *T*). The inequality (35) can be achieved by appropriately choosing the constant *λ* to satisfy

$$
\lambda \ge 2\zeta + 2\rho\overline{w} \,\tag{36}
$$

where the parameter *ζ* is defined in (40). To prove this, start from the inequality

$$- \left( \rho w - s \right)^{T} (\rho w - s) \le 0. \tag{37}$$

With the inequality (37) we obtain

$$\|s\|^2 - \rho^2\|w\|^2 \le 2\|s\|^2 - 2\rho s^T w. \tag{38}$$
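The step from (37) to (38) amounts to the observation that the difference of the two sides of (38) is ‖*s* − *ρw*‖² ≥ 0. A numerical spot-check over random vectors (purely illustrative, not part of the proof):

```python
import numpy as np

# Spot-check of the step from (37) to (38): the difference of the two sides
# of (38) equals ||s - rho*w||^2 >= 0, so the inequality holds for any s, w
# and rho >= 0 (the random data below is purely illustrative).
rng = np.random.default_rng(1)
for _ in range(100):
    s, w = rng.standard_normal(4), rng.standard_normal(4)
    rho = rng.uniform(0.0, 5.0)
    lhs = s @ s - rho**2 * (w @ w)
    rhs = 2 * (s @ s) - 2 * rho * (s @ w)
    assert lhs <= rhs + 1e-12
```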


It is noted that

$$\begin{split} \int_0^T (\|s\|^2 - \rho^2\|w\|^2)\,dt &\le \int_0^T 2(\|s\|^2 - \rho s^T w)\,dt \\ &\le \int_0^T \Big(2(\|s\|^2 - \rho s^T w) + \dot{V}_s\Big)\,dt - (V_s(T) - V_s(0)) \\ &\le \int_0^T \Big(2(\|s\|^2 - \rho s^T w) - \lambda\|s\|\Big)\,dt \\ &\le \int_0^T \|s\|(2\|s\| + 2\rho\bar{w} - \lambda)\,dt. \end{split} \tag{39}$$

The above inequalities use the facts (20), (26), and (32). Thus, to guarantee the inequality, we require that *λ* be chosen as in (36). In what follows we quantify ‖*s*‖ so that a finite *λ* is obtained. To show this, it is not difficult to see, in the next section, that *ur* = *Kx* is designed so that *A* + *BK* is Hurwitz, i.e. all eigenvalues of *A* + *BK* are in the open left half-plane. Therefore, for *x*(0) = *x*0,

$$\begin{split} \|s\| &= \left\|Mx - M\left(x_0 + \int_0^\infty (Ax + Bu_r)\,d\tau\right)\right\| \\ &\le \|M\|\|x - x_0\| + \|M\|\left\|\int_0^\infty (A+BK)x\,d\tau\right\| \\ &\le \|M\|(\|x\| + \|x_0\|) + \|M\|\|A+BK\|\left\|\int_0^\infty x\,d\tau\right\| \\ &\le \|M\|(\kappa + \|x_0\|) + \|M\|\|A+BK\|\left\|\int_0^T x\,d\tau + \int_T^\infty x\,d\tau\right\| \\ &\le \|M\|(\kappa + \|x_0\|) + \|M\|\|A+BK\|\left\|\int_0^T x\,d\tau\right\| \\ &\le \|M\|\big(\kappa + \|x_0\| + \|A+BK\|\kappa T\big) \triangleq \zeta, \end{split} \tag{40}$$

where the elimination of $\int_T^\infty x\,d\tau$ is due to asymptotic stability in the sense of Lyapunov; that is, when *t* ≥ *T* the state has reached the equilibrium, i.e. *x*(*t*) → 0.
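The Hurwitz requirement on *A* + *BK* used above can be checked directly. The matrices below are illustrative choices (*K* places the closed-loop poles at −1 and −2):

```python
import numpy as np

# The bound (40) needs u_r = K x with A + B K Hurwitz so that the tail
# integral of x vanishes. A, B, K are illustrative choices (K places the
# closed-loop poles at -1 and -2).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-2.0, -3.0]])

eigs = np.linalg.eigvals(A + B @ K)
assert np.all(eigs.real < 0)                 # Hurwitz: open left half-plane
assert np.allclose(sorted(eigs.real), [-2.0, -1.0])
```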

### **4. Robust linear control design**

The foregoing section presented the sliding-mode control that assures asymptotic stability of the sliding surface, where *s* = 0 is guaranteed from the beginning of system operation. In this section we reformulate the sliding dynamics (34) using a linear fractional representation, so that the nonlinearities and perturbations are lumped together and treated as uncertainties from a linear control perspective.

#### **4.1 Linear Fractional Representation (LFR)**

Applying the LFR technique to the sliding dynamics (34), we obtain an LFR of the following form:

$$\begin{aligned} \dot{x} &= Ax + Bu_r + B_p p + B_w w, \\ z &= C_z x + D_z u_r, \\ p_i &= g_i(x,t), \quad i = 0, 1, \cdots, N, \end{aligned} \tag{41}$$


where $z \in \mathbb{R}^{n_z}$ is an additional artificial controlled variable introduced to express the robust performance measure with respect to the disturbance signal *w*. In order to merge the uncertainty Δ*A*(*t*)*x* with the nonlinearities $\sum_{i=1}^{N} g_i(x,t)$, the variable *p*0 is defined to be

$$p_0 = g_0(x,t) = F_0(t)H_0x = F_0(t)q_0,$$

where *q*<sup>0</sup> = *H*0*x*. Thus, by considering (2), *p*<sup>0</sup> has a norm-bounded constraint

$$\|p_0\| = \|F_0(t)q_0\| \le \theta_0 \|q_0\|, \tag{42}$$

where *θ*<sup>0</sup> = 1. Let *pi* = *gi*(*x*, *t*), *i* = 1, ··· , *N* and *qi* = *x*, then in view of (7)

$$\|p_i\| = \|g_i(x, t)\| \le \theta_i \|x\| = \theta_i \|q_i\|, \quad \forall \ i = 1, \cdots, N. \tag{43}$$

Let the vectors *p* ∈ **R**<sup>(*N*+1)*n*</sup> and *q* ∈ **R**<sup>(*N*+1)*n*</sup>, lumping all *pi*'s and *qi*'s, be defined as

$$p^T = \begin{pmatrix} p_0^T & p_1^T & \cdots & p_N^T \end{pmatrix}, \qquad q^T = \begin{pmatrix} q_0^T & q_1^T & \cdots & q_N^T \end{pmatrix},$$

through which all the uncertainties and the unmatched nonlinearities are fed into the sliding dynamics. The matrices *Bp*, *Bw*, and *Cq* are constant matrices as follows,

$$B_p = G \underbrace{\begin{pmatrix} E_0 & I & \cdots & I \end{pmatrix}}_{(N+1)\ \text{blocks}}, \quad B_w = G B_d \quad \text{and} \quad C_q = \begin{pmatrix} H_0 \\ I \\ \vdots \\ I \end{pmatrix}.$$

Since full-state feedback is applied, thus

$$
u_r = Kx. \tag{44}
$$

The overall closed-loop system is as follows,

$$\begin{aligned} \dot{x} &= \mathcal{A}x + B_p p + B_w w \\ q &= C_q x \\ z &= \mathcal{C}x \\ p_i &= g_i(q_i, t), \quad i = 0, 1, \cdots, N, \end{aligned} \tag{45}$$

where A = *A* + *BK* and C = *Cz* + *DzK*. This completes the LFR process of the sliding dynamics. In what follows, the robust linear control with a performance measure that asymptotically drives the overall system to the equilibrium point is illustrated.
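The block structure of *Bp*, *Bw*, and *Cq* above is mechanical to assemble. A minimal numpy sketch with hypothetical data (*G* taken as the identity, *n* = 2 states, *N* = 2 unmatched nonlinearities):

```python
import numpy as np

# Hypothetical LFR data: n = 2 states, N = 2 unmatched nonlinearities.
n, N = 2, 2
G = np.eye(n)                      # projector from the sliding dynamics, assumed identity here
E0 = np.array([[1.0], [0.5]])      # uncertainty input matrix, Delta A(t) = E0 F0(t) H0
H0 = np.array([[0.2, -0.1]])       # uncertainty output matrix, q0 = H0 x
Bd = np.array([[0.0], [1.0]])      # disturbance input matrix

# B_p = G [E0  I ... I]  ((N+1) blocks), B_w = G Bd, C_q stacks H0 over N identities.
Bp = G @ np.hstack([E0] + [np.eye(n)] * N)
Bw = G @ Bd
Cq = np.vstack([H0] + [np.eye(n)] * N)

print(Bp.shape, Bw.shape, Cq.shape)
```

All numerical values here are illustrative assumptions; only the block layout follows the definitions above.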

**4.2 Robust performance measure**

**4.2.1 Robust** L2**-gain measure**

In this section the performance measure in L2-gain sense is suggested for the robust control design of sliding dynamics where the system state will be driven to the equilibrium. We will be concerned with the stability and performance notion for the system (45) as follows:

Let the constant *γ* > 0 be given. The closed-loop system (45) is said to have a *robust* L2*-gain measure γ* if for any admissible norm-bounded uncertainties the following conditions hold.

(1) The closed-loop system is uniformly asymptotically stable.

(2) Subject to the assumption of zero initial condition, the controlled output *z* satisfies

$$\int\_0^\infty \|z\|^2 dt \le \gamma^2 \int\_0^\infty \|w\|^2 dt. \tag{46}$$
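Condition (46) can be illustrated numerically on a toy system; the sketch below (an assumed first-order plant ẋ = −x + w with z = x, whose L2 gain is 1) accumulates both signal energies with a forward-Euler step and checks that their ratio stays below *γ*² = 1:

```python
import numpy as np

# Time-domain illustration of the L2-gain bound (46) on an assumed toy system:
# x' = -x + w, z = x has L2 gain 1, so the energy ratio stays below gamma^2 = 1.
dt, T = 1e-3, 200.0
t = np.arange(0.0, T, dt)
w = np.sin(0.5 * t)
x = 0.0
z_energy = w_energy = 0.0
for wk in w:
    z_energy += x * x * dt          # accumulate ||z||^2 (z = x)
    w_energy += wk * wk * dt        # accumulate ||w||^2
    x += dt * (-x + wk)             # forward-Euler step of x' = -x + w
print(z_energy / w_energy < 1.0)
```

For this plant the steady-state ratio approaches |*T*(*j*0.5)|² = 0.8, comfortably under the bound.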


Integral Sliding-Based Robust Control 175


Here, we use the notion of quadratic Lyapunov function with an L2-gain measure introduced by (Boyd et al., 1994) and (van der Schaft, 1992) for robust linear control and nonlinear control, respectively. With this aim, the characterizations of robust performance based on quadratic stability will be given in terms of matrix inequalities, where if LMIs can be found then the computations by finite dimensional convex programming are efficient. Now let quadratic Lyapunov function be

$$V = x^T X x, \tag{47}$$

with *X* ≻ 0. To prove (46), we have the following process

$$\begin{split} &\int\_{0}^{\infty} \|z\|^{2} dt \leq \gamma^{2} \int\_{0}^{\infty} \|w\|^{2} dt \\ &\Leftrightarrow \int\_{0}^{\infty} \left(z^{T}z - \gamma^{2}w^{T}w\right) dt \leq 0 \\ &\Leftrightarrow \int\_{0}^{\infty} \left(z^{T}z - \gamma^{2}w^{T}w + \frac{d}{dt}V\right) dt - V(\mathbf{x}(\infty)) \leq 0. \end{split} \tag{48}$$

Thus, to ensure (48), *z<sup>T</sup>z* − *γ*<sup>2</sup>*w<sup>T</sup>w* + *V̇* ≤ 0 must hold. Therefore, we need first to secure

$$\frac{d}{dt}V(\mathbf{x}) + z^T z - \gamma^2 w^T w \le 0,\tag{49}$$

subject to the condition

$$\|p_i\| \le \theta_i \|q_i\|, \ i = 0, 1, \cdots, N, \tag{50}$$

for all vector variables satisfying (45). It suffices to secure (49) and (50) by *S*-procedure (Boyd et al., 1994), where the quadratic constraints are incorporated into the cost function via Lagrange multipliers *σi*, i.e. if there exists *σ<sup>i</sup>* > 0, *i* = 0, 1, ··· , *N* such that

$$z^T z - \gamma^2 w^T w + \dot{V} - \sum\_{i=0}^{N} \sigma\_i (\|p\_i\|^2 - \theta\_i^2 \|q\_i\|^2) \le 0. \tag{51}$$

To show that the closed-loop system (45) has a robust L2-gain measure *γ*, we integrate (51) from 0 to ∞, with the initial condition *x*(0) = 0, and get

$$\int\_{0}^{\infty} \left( z^T z - \gamma^2 w^T w + \dot{V} + \sum\_{i=0}^{N} \sigma\_i \left( \theta\_i^2 \|q\_i\|^2 - \|p\_i\|^2 \right) \right) dt - V(\mathbf{x}(\infty)) \le 0. \tag{52}$$

If (51) holds, this implies (49) and (46). Therefore, we have a robust L2-gain measure *γ* for the system (45). Now, to secure (51), we define

$$
\Theta = \begin{pmatrix}
\theta_0 I & 0 & \cdots & 0 \\
0 & \theta_1 I & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \theta_N I
\end{pmatrix}, \quad \Sigma = \begin{pmatrix}
\sigma_0 I & 0 & \cdots & 0 \\
0 & \sigma_1 I & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & \sigma_N I
\end{pmatrix}, \tag{53}
$$

where the identity matrix *I* ∈ **R**<sup>*nqi*×*nqi*</sup>. It is noted that we require *θi* > 0 and *σi* > 0 for all *i*. Hence the inequality (51) can be translated to the following matrix inequality

$$
\Pi(X, \Sigma, \gamma) \prec 0, \tag{54}
$$

where


$$
\Pi(X, \Sigma, \gamma) = \begin{pmatrix}
\Xi & XB_p & XB_w \\
\star & -\Sigma & 0 \\
\star & 0 & -\gamma^2 I
\end{pmatrix}, \tag{55}
$$

with Ξ = A*<sup>T</sup>X* + *X*A + C*<sup>T</sup>*C + *C<sup>T</sup><sub>q</sub>*Θ*<sup>T</sup>*ΣΘ*Cq*. Then the closed-loop system is said to have robust L2-gain measure *γ* from input *w* to output *z* if there exist *X* ≻ 0 and Σ ≻ 0 such that (54) is satisfied. Without loss of generality, we will adopt only the strict inequality. To prove uniform asymptotic stability of (45), we expand the inequality (54) by the Schur complement,

$$\mathcal{A}^T X + X \mathcal{A} + \mathcal{C}^T \mathcal{C} + \mathcal{C}\_q^T \Theta^T \Sigma \Theta \mathcal{C}\_q + X(B\_p \Sigma^{-1} B\_p^T + \gamma^{-2} B\_w B\_w^T) X \prec 0. \tag{56}$$
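The passage from the LMI (54)–(55) to the Riccati-type inequality (56) is a Schur-complement step, which can be verified numerically. A numpy sketch on a hypothetical feasible instance (toy matrices, a single uncertainty block, Θ = Σ = *I*, *γ* = 1 — all assumed for illustration):

```python
import numpy as np

# Toy instance (assumed data) illustrating that the LMI (55) and the
# Riccati-type inequality (56) agree via the Schur complement.
n = 2
Acl = -2.0 * np.eye(n)                 # calligraphic A = A + BK, Hurwitz by construction
Cc  = 0.1 * np.eye(n)                  # calligraphic C = Cz + Dz K
Cq, Theta, Sigma = np.eye(n), np.eye(n), np.eye(n)
Bp  = 0.3 * np.eye(n)
Bw  = 0.2 * np.ones((n, 1))
X   = np.eye(n)
gam = 1.0

Xi = Acl.T @ X + X @ Acl + Cc.T @ Cc + Cq.T @ Theta.T @ Sigma @ Theta @ Cq

# LMI (55): the block matrix Pi must be negative definite.
Pi = np.block([
    [Xi,              X @ Bp,            X @ Bw],
    [(X @ Bp).T,      -Sigma,            np.zeros((n, 1))],
    [(X @ Bw).T,      np.zeros((1, n)),  -gam**2 * np.eye(1)],
])

# Inequality (56): Schur complement of Pi after eliminating the lower-right blocks.
Ric = Xi + X @ (Bp @ np.linalg.inv(Sigma) @ Bp.T + gam**-2 * Bw @ Bw.T) @ X

print(np.linalg.eigvalsh(Pi).max() < 0, np.linalg.eigvalsh(Ric).max() < 0)
```

Both checks succeed together, as the Schur-complement argument predicts.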

Define the matrix variables

$$\mathcal{H} = \begin{pmatrix} \mathcal{C} \\ \Sigma^{1/2} \Theta \mathbb{C}\_q \end{pmatrix}, \quad \mathcal{G} = \begin{pmatrix} \mathcal{B}\_p \Sigma^{-1/2} \ \gamma \mathcal{B}\_w \end{pmatrix}. \tag{57}$$

Thus, the inequality (56) can be rewritten as

$$\mathcal{A}^T X + X\mathcal{A} + \mathcal{H}^T \mathcal{H} + X\mathcal{G}\mathcal{G}^T X \prec 0. \tag{58}$$

Manipulating (58) by adding and subtracting *jωX*, we obtain

$$-(-j\omega I - \mathcal{A}^T)X - X(j\omega I - \mathcal{A}) + \mathcal{H}^T\mathcal{H} + X\mathcal{G}\mathcal{G}^TX \prec 0. \tag{59}$$

Pre-multiplying inequality (59) by G*<sup>T</sup>*(−*jωI* − A*<sup>T</sup>*)<sup>−1</sup> and post-multiplying it by (*jωI* − A)<sup>−1</sup>G, we have

$$\begin{split} & - \mathcal{G}^{T} \mathbf{X} (j\omega \mathbf{I} - \mathcal{A})^{-1} \mathcal{G} - \mathcal{G}^{T} (-j\omega \mathbf{I} - \mathcal{A}^{T})^{-1} \mathbf{X} \mathcal{G} \\ & \quad + \mathcal{G}^{T} (-j\omega \mathbf{I} - \mathcal{A}^{T})^{-1} \mathbf{X} \mathcal{G} \mathcal{G}^{T} \mathbf{X} (j\omega \mathbf{I} - \mathcal{A})^{-1} \mathcal{G} \\ & \quad + \mathcal{G}^{T} (-j\omega \mathbf{I} - \mathcal{A}^{T})^{-1} \mathcal{H}^{T} \mathcal{H} (j\omega \mathbf{I} - \mathcal{A})^{-1} \mathcal{G} \prec \mathbf{0}. \end{split} \tag{60}$$

Defining a system

$$\begin{aligned} \dot{x} &= \mathcal{A}x + \mathcal{G}w \\ z &= \mathcal{H}x \end{aligned} \tag{61}$$

with transfer function *T*(*s*) = H(*sI* − A)<sup>−1</sup>G, thus *T*(*jω*) = H(*jωI* − A)<sup>−1</sup>G, and a matrix variable *M̄*(*jω*) = G*<sup>T</sup>X*(*jωI* − A)<sup>−1</sup>G. The matrix inequality (60) can be rewritten as

$$T^\*(j\omega)T(j\omega) - \bar{M}(j\omega) - \bar{M}^\*(j\omega) + \bar{M}^\*(j\omega)\bar{M}(j\omega) \prec 0,$$

or

$$\begin{split} T^\*(j\omega)T(j\omega) &\prec \bar{M}(j\omega) + \bar{M}^\*(j\omega) - \bar{M}^\*(j\omega)\bar{M}(j\omega) \\ &= -(I - \bar{M}^\*(j\omega))(I - \bar{M}(j\omega)) + I \\ &\preceq I, \quad \forall \ \omega \in \mathbb{R}. \end{split} \tag{62}$$


Hence, the maximum singular value of (62) satisfies

$$
\sigma_{\max}(T(j\omega)) < 1, \quad \forall \ \omega \in \mathbb{R}.
$$

By small gain theorem, we prove that the matrix A is Hurwitz, or equivalently, the eigenvalues of A are all in the left-half plane, and therefore the closed-loop system (45) is uniformly asymptotically stable.
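The bound ‖*T*‖<sub>∞</sub> < 1 behind this small-gain argument can be spot-checked by sampling *σ*<sub>max</sub>(*T*(*jω*)) on a frequency grid. A numpy sketch with assumed toy data (A, G, H are illustrative, not from the chapter):

```python
import numpy as np

# Sampling sigma_max(T(jw)) for an assumed stable toy system, illustrating
# the small-gain condition ||T||_inf < 1 used in the stability argument.
Acl = np.array([[-3.0, 1.0], [0.0, -2.0]])   # Hurwitz by inspection
G   = 0.5 * np.eye(2)
H   = 0.4 * np.eye(2)

def sigma_max_T(w):
    """Largest singular value of T(jw) = H (jwI - A)^{-1} G."""
    T = H @ np.linalg.inv(1j * w * np.eye(2) - Acl) @ G
    return np.linalg.svd(T, compute_uv=False)[0]

peak = max(sigma_max_T(w) for w in np.logspace(-2, 3, 400))
print(peak < 1.0)
```

A frequency sweep of this kind is only a numerical sanity check; the LMI certificate (54) is what guarantees the bound for all *ω*.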

The final step of the robust L2-gain design is to synthesize the control law *K*. Since (54) and (56) are equivalent, we multiply both sides of inequality (56) by *Y* = *X*<sup>−1</sup>. We have

$$Y\mathcal{A}^T + \mathcal{A}Y + Y\mathcal{C}^T\mathcal{C}Y + YC_q^T\Theta^T\Sigma\Theta C_q Y + B_p\Sigma^{-1}B_p^T + \gamma^{-2}B_w B_w^T \prec 0.$$

Rearranging the inequality with Schur complement and defining a matrix variable *W* = *KY*, we have

$$
\begin{pmatrix}
\Omega_L & YC_z^T + W^T D_z^T & YC_q^T \Theta^T & B_w \\
\star & -I & 0 & 0 \\
\star & 0 & -V & 0 \\
\star & 0 & 0 & -\gamma^2 I
\end{pmatrix} \prec 0, \tag{63}
$$

where Ω*L* = *YA<sup>T</sup>* + *AY* + *W<sup>T</sup>B<sup>T</sup>* + *BW* + *BpVB<sup>T</sup><sub>p</sub>* and *V* = Σ<sup>−1</sup>. The matrix inequality is linear in the matrix variables *Y*, *W*, *V* and a scalar *γ*, which can be solved efficiently.

**Remark 2.** *The matrix inequalities (63) are linear and can be transformed to optimization problem, for instance, if* L2*-gain measure γ is to be minimized:*

$$\begin{array}{ll}\text{minimize} & \gamma^2\\\text{subject to (63), } Y \succ 0, \text{ } V \succ 0 \text{ and } W. \end{array} \tag{64}$$

**Remark 3.** *Once from (64) we obtain the matrices W and Y, the control law K* = *WY*−<sup>1</sup> *can be calculated easily.*
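In practice, (63)–(64) are handed to an SDP solver and *K* = *WY*<sup>−1</sup> is read off. As a rough numpy stand-in for the nominal design step (hand pole placement instead of the LMI; the matrices and pole locations are assumptions, not the chapter's data), one can confirm that a gain *K* renders *A* + *BK* Hurwitz:

```python
import numpy as np

# Hand pole placement for an assumed pair (A, B): choose K so that A + BK has
# poles {-1, -2}. This is the role the SDP (63)-(64) plays robustly.
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -5.0]])          # places the char. polynomial at (s+1)(s+2)

eigs = np.linalg.eigvals(A + B @ K)
print(sorted(eigs.real))              # both real parts negative: A + BK is Hurwitz
```

The LMI route does the same job while simultaneously certifying the L2-gain level *γ* against the lumped uncertainties.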

**Remark 4.** *It is seen from (61) that with Riccati inequality (56) a linear time-invariant system is obtained to fulfill* �*T*�<sup>∞</sup> < 1*, where* A *is Hurwitz.*

**Remark 5.** *In this remark, we synthesize the overall control law consisting of us*(*t*) *and ur*(*t*) *that performs the control tasks. The overall control law, as shown in (22) and in view of (15) and (44), is*

$$u(t) = u_s(t) + u_r(x, t) = -\alpha(t)\frac{s(x, t)}{\|s(x, t)\|} + Kx(t) \tag{65}$$

*where α*(*t*) > 0 *satisfies (18), the integral sliding surface s*(*x*, *t*) *is defined in (16), and the gain K is found using the optimization technique shown in (64).*
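A literal implementation of (65) is short. The sketch below assumes toy values for *K*, *α*, and the sliding variable *s*, and adds a small ε in the denominator (an implementation detail not in (65)) to avoid division by zero at *s* = 0:

```python
import numpy as np

# Sketch of the composite law (65): u = u_s + u_r = -alpha * s/||s|| + K x.
# K, alpha, x, and s below are assumed toy values for illustration.
def composite_control(x, s, K, alpha, eps=1e-9):
    """Discontinuous sliding part plus continuous linear part."""
    u_s = -alpha * s / (np.linalg.norm(s) + eps)   # eps smooths the division at s = 0
    u_r = K @ x
    return u_s + u_r

x = np.array([0.5, -1.0])
s = np.array([0.2])
K = np.array([[-1.0, -5.0]])
u = composite_control(x, s, K, alpha=2.0)
print(u)
```

In practice the discontinuous term is often further smoothed (e.g. a boundary layer) to limit chattering; that refinement is outside (65) itself.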

**4.2.2 Robust** H<sup>2</sup> **measure**

In this section we study the H<sup>2</sup> measure for the system performance of (45). The robust stability of (45) in the presence of norm-bounded uncertainty has been extensively studied in Boyd et al. (1994) and the references therein. To keep the presentation self-contained, we demonstrate robust stability by using the quadratic Lyapunov function (47) subject to (45) with the norm-bounded constraints satisfying (7) and (42). To guarantee asymptotic stability with respect to (47) (also called a *storage function* from the dissipation perspective), we consider the quadratic supply function

$$\int\_{0}^{\infty} (w^T w - z^T z) dt,\tag{66}$$


and incorporate the quadratic norm-bounded constraints via the Lagrange multipliers *σi* through the *S*-procedure; the system is then said to be dissipative if, and only if,

$$\dot{V} + \sum\_{i=0}^{N} \sigma\_i(\theta\_i^2 ||q\_i||^2 - ||p\_i||^2) \le w^T w - z^T z. \tag{67}$$

It is worth noting that the use of dissipation theory for (47), (69), and (67) is for the quantification of the H<sup>2</sup> performance measure in the sequel. It is also shown easily by plugging (45) into (67) that if there exist *X* ≻ 0, Σ ≻ 0, then (67) implies

$$
\begin{pmatrix}
\Omega\_H & \mathbf{X}\mathcal{B}\_p \ X\mathcal{B}\_w \\
(\mathbf{X}\mathcal{B}\_p)^T & -\Sigma & \mathbf{0} \\
(\mathbf{X}\mathcal{B}\_w)^T & \mathbf{0} & -I
\end{pmatrix} \prec \mathbf{0},\tag{68}
$$

where Ω*H* = A*<sup>T</sup>X* + *X*A + C*<sup>T</sup>*C + *C<sup>T</sup><sub>q</sub>*Θ*<sup>T</sup>*ΣΘ*Cq*, and Θ and Σ are defined exactly the same as in (53). Then the system is robustly asymptotically stabilized under the norm-bounded uncertainty if (68) is satisfied. This is shown by the fact, via the Schur complement, that (68) is equivalent to

$$
\Omega\_H \prec 0 \tag{69}
$$

$$\Omega_H + \begin{pmatrix} XB_p & XB_w \end{pmatrix} \begin{pmatrix} \Sigma^{-1} & 0 \\ 0 & I \end{pmatrix} \begin{pmatrix} B_p^T X \\ B_w^T X \end{pmatrix} \prec 0. \tag{70}$$

If (69) and (70) are both true, then A*<sup>T</sup>X* + *X*A ≺ 0. This implies that A is Hurwitz. In addition to robust stability, the robust performance of the closed-loop uncertain system (45) on the sliding surface that fulfils the H<sup>2</sup> performance requirement is suggested for the overall robust design in this section. We will show that the H<sup>2</sup> performance measure is also guaranteed using the inequality (68).

Given that A is stable, the closed-loop map *Tzw*(*gi*(*qi*, *t*)) from *w* to *z* is bounded for all nonlinearities and uncertainties *gi*(*qi*, *t*); we wish to impose an H<sup>2</sup> performance specification on this map. Consider first the nominal map *Tzw*0 = *Tzw*(0); its norm is given by

$$\|\|T\_{zw0}\|\|\_{2}^{2} = \frac{1}{2\pi} \int\_{-\infty}^{\infty} \text{trace}(T\_{zw0}(j\omega)^{\*}T\_{zw0}(j\omega))d\omega \tag{71}$$
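For a stable nominal map, (71) need not be computed by frequency integration: it equals Tr(*B<sub>w</sub><sup>T</sup>Q<sub>o</sub>B<sub>w</sub>*) with *Q<sub>o</sub>* the observability Gramian. A scipy sketch on an assumed first-order example *T*(*s*) = 1/(*s*+1), whose squared H2 norm is 1/2:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Evaluating the nominal H2 norm (71) via the observability Gramian:
# ||Tzw0||_2^2 = Tr(Bw^T Qo Bw), where A^T Qo + Qo A + C^T C = 0.
# Assumed toy system: T(s) = 1/(s+1), squared H2 norm = 1/2.
A  = np.array([[-1.0]])
Bw = np.array([[1.0]])
C  = np.array([[1.0]])

Qo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
h2_sq = np.trace(Bw.T @ Qo @ Bw)
print(h2_sq)   # → 0.5
```

This Gramian identity is also what turns the trace bound into the LMI conditions used below.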

This criterion is classically interpreted as a measure of the transient response to an impulse applied to *w*(*t*), and it gives a bound on the output energy of *z*. The H<sup>2</sup> performance criterion, viewed as the evaluation of the energy response to an impulse input of random direction at *w*(*t*), is

$$\|T_{zw}(\Delta)\|_{2,imp}^2 \triangleq \mathbf{E}_{w_0}(\|z\|_2^2), \tag{72}$$

where *z*(*t*) = *Tzw*(*gi*(*qi*, *t*))*w*0*δ*(*t*), and *w*0 is a random vector with covariance **E**(*w*0*w*<sup>*T*</sup><sub>0</sub>) = *I*. The above definition of H<sup>2</sup> performance can also be equivalently interpreted by letting the initial condition *x*(0) = *Bww*0 and *w*(*t*) = 0 in the system, which subsequently responds autonomously. Although this definition applies to the case where *gi*(*x*, *t*) is LTI, in the standard notion of (71), we can also apply it to a more general perturbation structure with nonlinear or time-varying uncertainties. Now, to evaluate the energy bound of (72), consider first the index *J*(*x*0) defined to be

$$J(\mathbf{x}\_0) = \sup\_{\mathbf{x}(0) = \mathbf{x}\_0} \left\| \mathbf{z} \right\|^2 \tag{73}$$

Next to the end of the robust H<sup>2</sup> measure is to synthesize the control law, *K*. Since (68) and (70) are equivalent, we multiply both sides of inequality of (70) by *Y* = *X*−1. We have

Integral Sliding-Based Robust Control 179

Rearranging the inequality with Schur complement and defining a matrix variable *W* = *KY*,

*<sup>z</sup> YC<sup>T</sup>*

*<sup>z</sup>* + *WTDT*

� −*I* 0 0 � 0 −*V* 0 � 0 0 −*I*

**Remark 6.** *The trace of (81) is to put in a convenient form by introducing the auxiliary matrix U as*

**Remark 7.** *The matrix inequalities (83) are linear and can be transformed to optimization problem,*

**Remark 8.** *Once from (85) we obtain the matrices W and Y, the control law K* = *WY*−<sup>1</sup> *can be*

**Remark 9.** *To perform the robust* H<sup>2</sup> *measure control, the overall composite control of form (65) should be established, where the continuous control gain K is found by using optimization technique shown in*

A numerical example to verify the integral sliding-mode-based control with L2-gain measure and H<sup>2</sup> performance establishes the solid effectiveness of the whole chapter. Consider the

−0.1 0.3�

<sup>1</sup> <sup>+</sup> *<sup>x</sup>*<sup>2</sup>

, *g*1(*x*, *t*) = *x*1, *g*2(*x*, *t*) = *x*2, and *g*1(*x*, *t*) + *g*2(*x*, *t*) ≤ 1.01(�*x*1� + �*x*2�) (87)

, *<sup>B</sup>*(*t*) = �

0 1 �

<sup>2</sup>), and *w*(*t*) = *ε*(*t* − 1) + *ε*(*t* − 3), (88)

(1 + 0.7 sin(*ω*1*t*)) (86)

�

*wXBw*

� *U B<sup>T</sup> w Bw Y*

�

*subject to (83), (84)*, **Tr**(*U*) <sup>≤</sup> *<sup>ϑ</sup>*2, *<sup>Y</sup>* � 0, *<sup>V</sup>* � <sup>0</sup> *and W*. (85)

*<sup>U</sup>* � *<sup>B</sup><sup>T</sup>*

� =

*<sup>q</sup>* <sup>Θ</sup>*T*ΣΘ*CqY* + *Bp*Σ−1*B<sup>T</sup>*

*<sup>q</sup>* Θ*<sup>T</sup> Bw*

⎞

⎟⎟⎠

*<sup>p</sup>* + *BwB<sup>T</sup>*

*<sup>p</sup>* and *V* = Σ−1. The matrix inequality is linear

*<sup>w</sup>* ≺ 0.

< 0, (83)

� 0. (84)

*<sup>Y</sup>*A*<sup>T</sup>* <sup>+</sup> <sup>A</sup>*<sup>Y</sup>* <sup>+</sup> *<sup>Y</sup>*C*T*C*<sup>Y</sup>* <sup>+</sup> *YC<sup>T</sup>*

⎛

Ω *YC<sup>T</sup>*

in matrix variables *Y*, *W*, and *V*, which can be solved efficiently.

� *U B<sup>T</sup> w Bw X*−<sup>1</sup>

system of states, *x*<sup>1</sup> and *x*2, with nonlinear functions and matrices:

0.8 sin(*ω*0*t*)

<sup>2</sup>) <sup>≤</sup> *<sup>η</sup>*(*x*) = 2.11(*x*<sup>2</sup>

⎜⎜⎝

where Ω = *YA<sup>T</sup>* + *AY* + *WTBT* + *BW* + *BpVB<sup>T</sup>*

*for instance, if robust* H<sup>2</sup> *measure is to be minimized: minimize ϑ*<sup>2</sup>

we have

*or, equivalently,*

*calculated easily.*

**5. Numerical example**

*<sup>A</sup>*(*t*) = � 0 1

−1 2� + � 1.4 −2.3�

*h*(*x*) = 2.1(*x*<sup>2</sup>

<sup>1</sup> <sup>+</sup> *<sup>x</sup>*<sup>2</sup>

*(85).*

*Bd* = � 0.04 0.5 �

The next step is to bound *J*(*x*0) by an application of so-called S-procedure where quadratic constraints are incorporated into the cost function (73) via Lagrange Multipliers *σi*. This leads to

$$J(\mathbf{x}\_0) \le \inf\_{\sigma\_i > 0} \sup\_{\mathbf{x}\_0} \left( \left\| \mathbf{z} \right\|^2 + \sum\_{i=0}^{i=1} \sigma\_i(\theta\_i^2 \left\| q\_i \right\|^2 - \left\| p\_i \right\|^2 \right) \tag{74}$$

To compute the right hand side of (74), we find that for fixed *σ<sup>i</sup>* we have an optimization problem,

$$\sup\_{\mathbf{x}(0)=\mathbf{x}\_0(45)} \int\_0^\infty \left( z^T z + q^T \Theta^T \Sigma \Theta q - p^T \Sigma p \right) dt. \tag{75}$$

To compute the optimal bound of (75) for some Σ � 0 satisfying (68), the problem (75) can be rewritten as

$$J(\mathbf{x}\_0) \le \int\_0^\infty \left( z^T z + q^T \Theta^T \Sigma \Theta q - p^T \Sigma p + \frac{d}{dt} V(\mathbf{x}) \right) dt + V(\mathbf{x}\_0) \tag{76}$$

for *x*(∞) = 0. When (68) is satisfied, then it is equivalent to

$$
\begin{pmatrix} x^T & p^T & w^T \end{pmatrix} \begin{pmatrix} \Omega & XB\_p & XB\_w \\ \left(XB\_p\right)^T & -\Sigma & 0 \\ \left(XB\_w\right)^T & 0 & -I \end{pmatrix} \begin{pmatrix} x \\ p \\ w \end{pmatrix} < 0, \tag{77}
$$

or,

$$
\begin{pmatrix} x^T & p^T & w^T \end{pmatrix} \begin{pmatrix} \Omega & XB\_p & XB\_w \\ \left(XB\_p\right)^T & -\Sigma & 0 \\ \left(XB\_w\right)^T & 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ p \\ w \end{pmatrix} < w^T w. \tag{78}
$$

With (78), we find that the performance bound *J*(*x*0) of (76) becomes

$$J(\mathbf{x}\_0) \le \int\_0^\infty w^T w dt + V(\mathbf{x}\_0). \tag{79}$$

It is noted that the matrix inequality (68) is jointly affine in Σ and *X*. Thus, we have the index

$$J(\mathbf{x}\_0) \le \inf\_{X \succ 0,\ \Sigma \succ 0,\ (77)} \mathbf{x}\_0^T X \mathbf{x}\_0, \tag{80}$$

for the alternative definition of the robust H<sup>2</sup> performance measure of (71), where *w*(*t*) = 0 and *x*<sup>0</sup> = *Bww*0. Now the final step, to evaluate the infimum of (80), is to average over each impulsive direction; we have

$$\sup\_{g\_i(q\_i, t)} \mathbb{E}\_{w\_0} \|z\|\_2^2 \le \mathbb{E}\_{w\_0} J(\mathbf{x}\_0) \le \inf\_X \mathbb{E}\_{w\_0} (\mathbf{x}\_0^T X \mathbf{x}\_0) = \inf\_X \text{Tr}(B\_w^T X B\_w).$$

Thus the robust performance design specification is that

$$\text{Tr}(B\_w^T X B\_w) \le \theta^2 \tag{81}$$

for some *ϑ* > 0 subject to (77). In summary, the overall robust H<sup>2</sup> performance control problem is the following convex optimization problem:

$$\begin{aligned} \text{minimize} \quad & \theta^2\\ \text{subject to (81), (68), } \ X \succ 0, \ \Sigma \succ 0. \end{aligned} \tag{82}$$
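The step from (80) to (81) rests on the identity that summing *x*0<sup>T</sup>*Xx*0 over the impulsive directions *x*0 = *Bwei* gives exactly Tr(*Bw*<sup>T</sup>*XBw*). A short check with illustrative matrices (a random *X* ≻ 0 and *Bw*, not taken from this chapter):

```python
import numpy as np

# Illustrative data (not from the chapter): a random X > 0 and B_w.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
X = A @ A.T + 3 * np.eye(3)          # symmetric positive definite
B_w = rng.standard_normal((3, 2))

# Sum x0^T X x0 over the impulsive directions x0 = B_w e_i
# (w0 ranging over the standard basis, as in the step from (80) to (81)).
total = sum(B_w[:, i] @ X @ B_w[:, i] for i in range(B_w.shape[1]))

# The sum is exactly Tr(B_w^T X B_w), the quantity bounded in (81).
trace_val = np.trace(B_w.T @ X @ B_w)
print(abs(total - trace_val) < 1e-9)  # True
```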



The final step of the robust H<sup>2</sup> measure design is to synthesize the control law, *K*. Since (68) and (70) are equivalent, we multiply both sides of the inequality (70) by *Y* = *X*−1. We have

$$Y\mathcal{A}^T + \mathcal{A}Y + Y\mathcal{C}^T\mathcal{C}Y + Y\mathcal{C}\_q^T\Theta^T\Sigma\Theta\mathcal{C}\_qY + B\_p\Sigma^{-1}B\_p^T + B\_wB\_w^T \prec 0.$$

Rearranging the inequality with the Schur complement and defining the matrix variable *W* = *KY*, we have

$$
\begin{pmatrix}
\Omega & Y\mathcal{C}\_z^T + W^T D\_z^T & Y\mathcal{C}\_q^T \Theta^T & B\_w \\
\star & -I & 0 & 0 \\
\star & 0 & -V & 0 \\
\star & 0 & 0 & -I
\end{pmatrix} \prec 0, \tag{83}
$$

where Ω = *YA<sup>T</sup>* + *AY* + *W<sup>T</sup>B<sup>T</sup>* + *BW* + *BpVBp<sup>T</sup>* and *V* = Σ<sup>−1</sup>. The matrix inequality is linear in the matrix variables *Y*, *W*, and *V*, so the resulting problem can be solved efficiently.

**Remark 6.** *The trace in (81) can be put into a convenient form by introducing an auxiliary matrix U such that*

$$U \succ B\_w^T X B\_w$$

*or, equivalently,*

$$
\begin{pmatrix} U & B\_w^T \\ B\_w & X^{-1} \end{pmatrix} = \begin{pmatrix} U & B\_w^T \\ B\_w & Y \end{pmatrix} \succ 0. \tag{84}
$$
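The equivalence in Remark 6 is the standard Schur-complement argument: since *Y* = *X*−1 ≻ 0, the block matrix in (84) is positive definite exactly when *U* − *Bw*<sup>T</sup>*Y*−1*Bw* = *U* − *Bw*<sup>T</sup>*XBw* ≻ 0. A numerical check with illustrative matrices (not from the chapter):

```python
import numpy as np

# Illustrative data (not from the chapter): X > 0, Y = X^{-1}, and B_w.
rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
X = M @ M.T + 2 * np.eye(3)
Y = np.linalg.inv(X)
B_w = rng.standard_normal((3, 2))

# Choose U strictly above B_w^T X B_w, as Remark 6 requires.
U = B_w.T @ X @ B_w + 0.5 * np.eye(2)

# Schur complement of the Y-block: the block matrix in (84) is positive
# definite exactly when U - B_w^T Y^{-1} B_w = U - B_w^T X B_w > 0.
block = np.block([[U, B_w.T], [B_w, Y]])
complement_ok = np.linalg.eigvalsh(U - B_w.T @ X @ B_w).min() > 0
block_ok = np.linalg.eigvalsh(block).min() > 0
print(complement_ok, block_ok)  # True True
```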

**Remark 7.** *The matrix inequalities (83) and (84) are linear and can be embedded in an optimization problem; for instance, if the robust* H<sup>2</sup> *measure is to be minimized:*

$$\begin{aligned} \text{minimize } & \theta^2\\ \text{subject to} & \text{ (83), (84), (7x(\mathcal{U}) \le \theta^2), \ Y \succ 0, \ V \succ 0 \text{ and } \mathcal{W}. \end{aligned} \tag{85}$$

**Remark 8.** *Once the matrices W and Y are obtained from (85), the control law K* = *WY*−<sup>1</sup> *can be calculated easily.*

**Remark 9.** *To perform the robust* H<sup>2</sup> *measure control, the overall composite control of the form (65) should be established, where the continuous control gain K is found by using the optimization technique shown in (85).*

### **5. Numerical example**

A numerical example verifying the integral sliding-mode-based control with L2-gain measure and H<sup>2</sup> performance demonstrates the effectiveness of the methods developed in this chapter. Consider the system with states *x*<sup>1</sup> and *x*2 and the following nonlinear functions and matrices:

$$A(t) = \begin{pmatrix} 0 & 1 \\ -1 & 2 \end{pmatrix} + \begin{pmatrix} 1.4 \\ -2.3 \end{pmatrix} 0.8 \sin(\omega\_0 t) \begin{pmatrix} -0.1 & 0.03 \end{pmatrix}, \quad B(t) = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \left( 1 + 0.7 \sin(\omega\_1 t) \right) \tag{86}$$

$$B\_d = \begin{pmatrix} 0.04 \\ 0.5 \end{pmatrix}, \; \operatorname{g}\_1(\mathbf{x}, t) = \mathbf{x}\_1, \; \operatorname{g}\_2(\mathbf{x}, t) = \mathbf{x}\_2, \text{ and } \operatorname{g}\_1(\mathbf{x}, t) + \operatorname{g}\_2(\mathbf{x}, t) \le 1.01(\|\mathbf{x}\_1\| + \|\mathbf{x}\_2\|) \tag{87}$$

$$h(\mathbf{x}) = 2.1(\mathbf{x}\_1^2 + \mathbf{x}\_2^2) \le \eta(\mathbf{x}) = 2.11(\mathbf{x}\_1^2 + \mathbf{x}\_2^2), \text{ and } w(t) = \varepsilon(t - 1) + \varepsilon(t - 3), \tag{88}$$

where the necessary parameter matrices and functions can easily be obtained by comparing (86), (87), and (88) with Assumptions 1 through 5; thus we have

$$A = \begin{pmatrix} 0 & 1 \\ -1 & 2 \end{pmatrix}, \; E\_0 = \begin{pmatrix} 1.4 \\ -2.3 \end{pmatrix}, \; H\_0 = \begin{pmatrix} -0.1 & 0.03 \end{pmatrix}, \; B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \; H\_1 = 0.7, \; \theta\_1 = \theta\_2 = 1.01.$$

It should be noted that *ε*(*t* − *t*1) denotes the pulse centered at time *t*<sup>1</sup> with pulse width 1 sec and strength 1. So, it is easy to conclude that *w*¯ = 1. We now develop the integral sliding mode such that the system will be driven to the designated sliding surface *s*(*x*, *t*) shown in (16). Consider the initial states *x*1(0) = −0.3 and *x*2(0) = 1.21; thus the ball, B, is confined within *κ* = 1.2466. The matrix *M* such that *MB* = *I* is *M* = (0 1); hence ‖*M*‖ = 1, ‖*ME*0‖ = 2.3, and ‖*MBd*‖ = 0.5. To compute the switching control gain *α*(*t*) of the sliding-mode control in (18), we need (19), which gives *β*<sup>0</sup> = 5.8853. We then have

$$\alpha(t) = \frac{1}{0.3} \left(5.8853 + \lambda + 3.587(\mathbf{x}\_1^2 + \mathbf{x}\_2^2) + 0.7\|u\_r\|\right),\tag{89}$$


Integral Sliding-Based Robust Control 181


where *λ* is chosen to be any positive number and *ur* = *Kx* is the linear control law that achieves the performance measure. Note that the factor 1/0.3 in (89) will now be replaced by a control factor, *α*1, with which the approaching speed toward the sliding surface can be adjusted. Therefore, (89) becomes

$$\alpha(t) = \alpha\_1\left(5.8853 + \lambda + 3.587(\mathbf{x}\_1^2 + \mathbf{x}\_2^2) + 0.7\|u\_r\|\right). \tag{90}$$

It will be seen later that the value of *α*<sup>1</sup> is related to how fast the system approaches the sliding surface *s* = 0, for a fixed *λ* = 0.
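The gain formulas (89) and (90) are straightforward to evaluate; the sketch below (`switching_gain` is a hypothetical helper, with *λ*, *ur*, and *α*1 as inputs) reproduces the gain at the initial state *x*(0) = (−0.3, 1.21):

```python
# A small sketch of the gain formulas (89)/(90); `switching_gain` is a
# hypothetical helper, with lam (the design constant), u_r, and the
# adjusting factor alpha1 as inputs. alpha1 defaults to 1/0.3 as in (89).
def switching_gain(x1, x2, u_r, lam=0.0, alpha1=1 / 0.3):
    return alpha1 * (5.8853 + lam + 3.587 * (x1**2 + x2**2) + 0.7 * abs(u_r))

# Gain at the initial state x(0) = (-0.3, 1.21), with u_r = 0 and lambda = 0.
a0 = switching_gain(-0.3, 1.21, 0.0)
print(round(a0, 4))  # 38.1995
```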

To find the linear control gain, *K*, for the performance L2-gain measure, we follow the computational algorithm outlined in (64); the parametric matrices of (41) are as follows:

$$G = I - BM = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \ B\_w = GB\_d = \begin{pmatrix} 0.04 \\ 0 \end{pmatrix}, \ B\_p = G(E\_0 \ \ I \ \ I) = \begin{pmatrix} 1.4 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix}.$$

$$C\_q = \begin{pmatrix} -0.1 & 0.03 \\ 1 & 0 \\ 0 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}, \ C\_z = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \ D\_z = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
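The parametric matrices above follow mechanically from *B*, *M*, *Bd*, and *E*0; a small sketch reconstructing them:

```python
import numpy as np

# Reconstructing the parametric matrices from B, M, B_d, and E_0 as stated.
B = np.array([[0.0], [1.0]])
M = np.array([[0.0, 1.0]])          # satisfies M B = I
B_d = np.array([[0.04], [0.5]])
E_0 = np.array([[1.4], [-2.3]])

G = np.eye(2) - B @ M               # annihilates the matched input channel
B_w = G @ B_d
B_p = G @ np.hstack([E_0, np.eye(2), np.eye(2)])

print(np.allclose(G, [[1, 0], [0, 0]]),
      np.allclose(B_w, [[0.04], [0]]),
      np.allclose(B_p, [[1.4, 1, 0, 1, 0], [0, 0, 0, 0, 0]]))  # True True True
```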

The simulated results of the closed-loop system for the integral sliding mode with L2-gain measure are shown in Fig.1, Fig.2, and Fig.3 under the adjusting factor *α*<sup>1</sup> = 0.022 in (90). The linear control gain is *K* = [−18.1714 − 10.7033], which places the eigenvalues of (*A* + *BK*) at −4.3517 ± 0.4841*j*. It is seen in Fig.1(b) that the sliding surface starts from *s* = 0 at *t* = 0, which matches the sliding surface design. Once the system starts, the values of *s* deviate rapidly from the sliding surface due to the integral part within it. Nevertheless, the feedback control signals soon drive the trajectories of *s* toward *s* = 0, and at about *t* = 2.63 the values of *s* hit the sliding surface, *s* = 0. After that, to maintain the sliding surface, the sliding control *us* starts chattering, in view of Fig.2(b). Looking at Fig.2(a) and (b), we see that the sliding-mode control, *us*, dominates the feedback control action while the system is being pulled to the sliding surface. We also note that although the system is being pulled to the sliding surface, the state *x*<sup>2</sup> has not yet reached its equilibrium, which can be seen from Fig.1(a). Not until the sliding surface is reached do the states asymptotically converge to their equilibrium. Fig.3 is the phase plot of the states *x*<sup>1</sup> and *x*<sup>2</sup> and depicts the same phenomenon.


Fig. 1. Integral sliding-mode-based robust control with L2-gain measure (a) the closed-loop states - *x*<sup>1</sup> and *x*2, (b) the chattering phenomenon of sliding surface *s*(*x*, *t*). *α*<sup>1</sup> = 0.022.

Fig. 2. The control signals of (a) linear robust control, *ur*, (b) integral sliding-mode control, *us* of integral sliding-mode-based robust control with L2-gain measure. *α*<sup>1</sup> = 0.022.

To show a different approaching speed, due to the control factor *α*<sup>1</sup> = 0.5 we see the chattering phenomenon in Fig.4, Fig.5, and Fig.6. This is because of the inherent property of sliding-mode control. We draw the same conclusions as for the case *α*<sup>1</sup> = 0.022, with one extra comment on the state trajectories.


Fig. 3. The phase plot of state *x*<sup>1</sup> and *x*<sup>2</sup> of integral sliding-mode-based robust control with L2-gain measure. *α*<sup>1</sup> = 0.022.

The trajectory of the state *x*<sup>1</sup> is always smoother than that of *x*2. The reason is that the state *x*<sup>1</sup> is the integral of the state *x*2, which makes the smoother trajectory possible.

Next, we show the integral sliding-mode-based control with H<sup>2</sup> performance. The integral sliding-mode control *us* is exactly the same as in the previous case. The linear control part satisfying (85) is now used to find the linear control gain *K*. The computed gain is *K* = [−4.4586 − 5.7791], which places the eigenvalues of (*A* + *BK*) at −1.8895 ± 1.3741*j*. From Fig.7, Fig.8, and Fig.9, we may draw the same conclusions as from Fig.1 to Fig.6. We should be aware that the H<sup>2</sup> design places the closed-loop poles closer to the imaginary axis than the L2-gain case does, which slows the overall motion to the states' equilibrium.
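Both reported feedback gains can be sanity-checked against the stated closed-loop eigenvalues; they match to roughly three decimal places, given the rounded gains:

```python
import numpy as np

# System matrices A and B from the numerical example.
A = np.array([[0.0, 1.0], [-1.0, 2.0]])
B = np.array([[0.0], [1.0]])

def closed_loop_eigs(K):
    # Eigenvalues of A + B K for a 1x2 state-feedback gain K.
    return np.sort_complex(np.linalg.eigvals(A + B @ np.atleast_2d(K)))

e_l2 = closed_loop_eigs([-18.1714, -10.7033])  # L2-gain design
e_h2 = closed_loop_eigs([-4.4586, -5.7791])    # H2 design
print(np.round(e_l2, 3))  # approx -4.352 -/+ 0.484j
print(np.round(e_h2, 3))  # approx -1.890 -/+ 1.374j
```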

Fig. 4. Integral sliding-mode-based robust control with L2-gain measure (a) the closed-loop states - *x*<sup>1</sup> and *x*2, (b) the chattering phenomenon of sliding surface *s*(*x*, *t*). *α*<sup>1</sup> = 0.5.



Fig. 5. The control signals of (a) linear robust control, *ur*, (b) integral sliding-mode control, *us* of integral sliding-mode-based robust control with L2-gain measure. *α*<sup>1</sup> = 0.5.

Fig. 6. The phase plot of state *x*<sup>1</sup> and *x*<sup>2</sup> of integral sliding-mode-based robust control with L2-gain measure. *α*<sup>1</sup> = 0.5.


Fig. 7. Integral sliding-mode-based robust control with H<sup>2</sup> performance (a) the closed-loop states - *x*<sup>1</sup> and *x*2, (b) the chattering phenomenon of sliding surface *s*(*x*, *t*). *α*<sup>1</sup> = 0.06.

Fig. 8. The control signals of (a) linear robust control, *ur*, (b) integral sliding-mode control, *us* of integral sliding-mode-based robust control with H<sup>2</sup> performance. *α*<sup>1</sup> = 0.06.


Fig. 9. The phase plot of state *x*<sup>1</sup> and *x*<sup>2</sup> of integral sliding-mode-based robust control with H<sup>2</sup> performance. *α*<sup>1</sup> = 0.06.

### **6. Conclusion**

In this chapter we have developed robust control for a class of uncertain systems based on integral sliding-mode control in the presence of nonlinearities, external disturbances, and model uncertainties. Based on the integral sliding-mode control, in which the reaching phase of conventional sliding-mode control is eliminated, the matched-type nonlinearities and uncertainties are nullified and the system is driven to the sliding surface, where the sliding dynamics with unmatched-type nonlinearities and uncertainties are further compensated toward the resulting equilibrium. The integral sliding-mode control drives the system to maintain the sliding surface with an L2-gain bound while treating the sliding surface as zero dynamics. Once the sliding surface *s* = 0 is reached, the robust performance control for the controlled variable *z*, in terms of the L2-gain and H<sup>2</sup> measures with respect to the disturbance *w*, acts to further compensate the system and leads it to equilibrium. The overall design is implemented on a second-order system, which demonstrates the effectiveness of the methods. Of course, there are issues that can still be pursued. We are aware that the control algorithms, namely the integral sliding mode and the L2-gain measure, apply separate stability criteria: the integral sliding mode has its own stability perspective from the Lyapunov function of the integral sliding surface, while the L2-gain measure has its own as well. The question is: is it possible for them to produce two different control vectors that jeopardize the overall stability? This is the next issue to be developed.

### **7. References**

Boyd, S., El Ghaoui, L., Feron, E. & Balakrishnan, V. (1994). *Linear Matrix Inequalities in System and Control Theory*, Vol. 15 of *Studies in Applied Mathematics*, SIAM, Philadelphia, PA.

Cao, W.-J. & Xu, J.-X. (2004). Nonlinear integral-type sliding surface for both matched and unmatched uncertain systems, *IEEE Transactions on Automatic Control* 49(8): 1355 – 1360.

186 Recent Advances in Robust Control – Novel Approaches and Design Methods

El-Ghezawi, O., Zinober & Billings, S. A. (1983). Analysis and design of variable structure systems using a geometric approach, *International Journal of Control* 38(3): 675–671.

Fridman, L., Poznyak, A. & Bejarano, F. (2005). Decomposition of the minmax multi-model problem via integral sliding mode, *International Journal of Robust and Nonlinear Control* 15(13): 559–574.

Goeree, B. & Fasse, E. (2000). Sliding mode attitude control of a small satellite for ground tracking maneuvers, *Proc. American Control Conf.*, Vol. 2, pp. 1134–1138.

Hebden, R., Edwards, C. & Spurgeon, S. (2003). An application of sliding mode control to vehicle steering in a split-mu maneuver, *Proc. American Control Conf.*, Vol. 5, pp. 4359–4364.

Liu, X., Guan, P. & Liu, J. (2005). Fuzzy sliding mode attitude control of satellite, *Proc. 44th IEEE Decision and Control and 2005 European Control Conf.*, pp. 1970–1975.

Lu, X. Y. & Spurgeon, S. K. (1997). Robustness of static sliding mode control for nonlinear systems, *International Journal of Control* 72(15): 1343–1353.

Paganini, F. (1999). Convex methods for robust H2 analysis of continuous-time systems, *IEEE Transactions on Automatic Control* 44(2): 239–252.

Poznyak, A., Fridman, L. & Bejarano, F. (2004). Mini-max integral sliding-mode control for multimodel linear uncertain systems, *IEEE Transactions on Automatic Control* 49(1): 97–102.

Tan, S., Lai, Y., Tse, C. & Cheung, M. (2005). A fixed-frequency pulsewidth modulation based quasi-sliding-mode controller for buck converters, *IEEE Transactions on Power Electronics* 20(6): 1379–1392.

Utkin, V., Guldner, J. & Shi, J. X. (1999). *Sliding modes in electromechanical systems*, Taylor and Francis, London, U.K.

Utkin, V. & Shi, J. (1996). Integral sliding mode in systems operating under uncertainty conditions, *Proc. 35th IEEE Decision and Control Conf.*, Vol. 4, pp. 4591–4596.

van der Schaft, A. (1992). L2-gain analysis of nonlinear systems and nonlinear state-feedback H∞ control, *IEEE Transactions on Automatic Control* 37(6): 770–784.

Zhou, K., Doyle, J. & Glover, K. (1995). *Robust and Optimal Control*, Prentice Hall, Upper Saddle River, New Jersey.

**9**

## **Self-Organized Intelligent Robust Control Based on Quantum Fuzzy Inference**

Ulyanov Sergey

*PRONETLABS Co., Ltd/ International University of Nature, Society, and Man "Dubna" Russia*

## **1. Introduction**

This chapter describes a generalized design strategy for intelligent robust control systems based on quantum/soft computing technologies that enhance the robustness of hybrid intelligent fuzzy controllers by supplying a self-organizing capability. The main ideas of self-organization processes, which are the background for robust knowledge base (KB) design, are discussed. Principles and physical model examples of self-organization are described. The main quantum operators and the general structure of the quantum control algorithm of self-organization are introduced. It is demonstrated that fuzzy controllers (FC) prepared to maintain a control object (CO) in prescribed conditions often fail to control when such conditions change dramatically. We propose a solution to such problems by introducing a quantum generalization of strategies in fuzzy inference, formed online from a set of predefined FCs by new *Quantum Fuzzy Inference* (QFI) based systems. The latter is a new quantum algorithm (QA) in quantum computing without entanglement. A new structure of an intelligent control system (ICS) with quantum KB self-organization based on QFI is suggested. Robustness of control is the background for supporting the reliability of advanced control accuracy in uncertain environments. We focus our attention on the robustness features of ICSs with the effective simulation of benchmarks.

## **1.1 Method of solution**

The proposed QFI system consists of a few KBs of FCs (KB-FCs), each prepared for appropriate CO conditions and excitations by a Soft Computing Optimizer (SCO). The QFI system is a new quantum control algorithm of the self-organization block: it post-processes the results of the fuzzy inference of each independent FC and produces the generalized control signal online. In this case the output of QFI is an optimal robust control signal that combines the best features of each independent FC output. Therefore the operating area of such a control system can be expanded greatly, along with its robustness.
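The aggregation step can be pictured with a minimal classical stand-in, assuming a set of pre-tuned controllers and fixed confidence weights (all names and gains below are hypothetical, and the actual QFI combines outputs with quantum operators rather than a fixed weighted sum):

```python
# Classical stand-in for the QFI aggregation step: several fuzzy
# controllers, each tuned for one operating regime, produce candidate
# control signals; a supervisor blends them into one output.
# (Illustrative sketch only -- names and gains are hypothetical.)

def fc_nominal(error: float) -> float:
    """Controller tuned for the nominal regime (aggressive gain)."""
    return -2.0 * error

def fc_noisy(error: float) -> float:
    """Controller tuned for a noisy regime (conservative gain)."""
    return -0.5 * error

def blend(controllers, weights, error: float) -> float:
    """Combine candidate outputs; weights reflect confidence in each KB."""
    total = sum(weights)
    return sum(w * fc(error) for fc, w in zip(controllers, weights)) / total

u = blend([fc_nominal, fc_noisy], [0.7, 0.3], error=1.0)
print(u)  # -> -1.55
```

The point of the sketch is only the structure: independent KB-FC outputs exist side by side, and a supervisory rule merges them online into one control signal.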

## **1.2 Main goal**

In this chapter we give a brief introduction to soft computing tools for designing independent FCs and then present the QFI methodology of quantum KB self-organization in unforeseen situations. A simulation example of robust intelligent control based on QFI is introduced. The role of self-organized KB design based on QFI in the solution of System of Systems Engineering problems is also discussed.

## **2. Problem formulation**

The main problem in modern FC design is how to design and introduce robust KBs into a control system so as to increase the *self-learning, self-adaptation and self-organizing capabilities* that enhance the robustness of the developed FC. The *learning* and *adaptation* aspects of FCs have always been topics of interest in advanced control theory and system-of-systems engineering. Many learning schemes are based on the *back-propagation* (BP) algorithm and its modifications, while adaptation processes are based on iterative stochastic algorithms. These ideas work well when the control task is performed without ill-defined stochastic noise in the environment, unknown noise in the sensor systems and control loop, and so on. For more complicated control situations, learning and adaptation methods based on BP algorithms or iterative stochastic algorithms do not guarantee the required robustness and accuracy of control.

A solution of this problem based on the SCO of KBs was developed in (Litvintseva et al., 2006). To achieve the *self-organization* level in an intelligent control system it is necessary to use QFI (Litvintseva et al., 2007).

The described *self-organizing* FC design method is based on a special form of QFI that uses a few partial KBs designed by SCO. In particular, QFI uses the laws of quantum computing and exploits three main unitary operations: (i) superposition; (ii) entanglement (quantum correlations); and (iii) interference. Following quantum gate computation, the logical union of a few KBs in one generalized space is realized with the *superposition* operator; with the *entanglement* operator (which can be equivalently described by different models of a *quantum oracle*) a search for the "successful" marked solution is formalized; and with the *interference* operator "good" solutions are extracted, together with classical *measurement* operations. Let us discuss briefly the main principles of self-organization that are used in the knowledge base self-organization of a robust ICS.
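The superposition / oracle / interference triple is the same pattern used in quantum search. A short sketch of the textbook Grover iteration, run on a plain amplitude list with no quantum library, shows how an oracle-marked "successful" solution is amplified and then extracted by measurement; this is a generic illustration of the three operators, not the chapter's specific QFI gate:

```python
# (i) superposition, (ii) oracle, (iii) interference, illustrated with
# the textbook Grover iteration on a classical amplitude vector.
import math

n = 8                           # search space of 8 basis states
marked = 5                      # index of the "successful" solution
amps = [1 / math.sqrt(n)] * n   # (i) uniform superposition

iterations = int(round(math.pi / 4 * math.sqrt(n)))  # 2 for n = 8
for _ in range(iterations):
    amps[marked] = -amps[marked]          # (ii) oracle: phase-flip the mark
    mean = sum(amps) / n
    amps = [2 * mean - a for a in amps]   # (iii) interference: invert about mean

probs = [a * a for a in amps]
print(max(range(n), key=probs.__getitem__))  # measurement picks index 5
```

After two iterations the marked state carries about 95% of the probability, so a final measurement almost certainly extracts the "good" solution.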

## **3. Principles and physical model examples of self-organization**

The theory of self-organization, learning and adaptation has grown out of a variety of disciplines, including quantum mechanics, thermodynamics, cybernetics, control theory and computer modeling. This section reviews the most important definitions, principles, model descriptions and engineering concepts of self-organization processes that can be used in the design of robust ICSs.

## **3.1 Definitions and main properties of self-organization processes**

Self-organization is defined in general form as follows: *the spontaneous emergence of large-scale spatial, temporal, or spatiotemporal order in a system of locally interacting, relatively simple components.* Self-organization is a bottom-up process where complex organization emerges at multiple levels from the interaction of lower-level entities. The final product is the result of nonlinear interactions rather than planning and design, and is not known a priori. Contrast this with the standard, top-down engineering design paradigm, where planning precedes implementation and the desired final system is known by design. Self-organization can be defined as the spontaneous creation of a globally coherent pattern out of

local interactions. Because of its distributed character, this organization tends to be robust, resisting perturbations. The dynamics of a self-organizing system is typically nonlinear, because of circular or feedback relations between the components. Positive feedback leads to an explosive growth, which ends when all components have been absorbed into the new configuration, leaving the system in a stable, negative feedback state. Nonlinear systems have in general several stable states, and this number tends to increase (bifurcate) as an increasing input of energy pushes the system farther from its thermodynamic equilibrium. To adapt to a changing environment, the system needs a variety of stable states that is large enough to react to all perturbations but not so large as to make its evolution uncontrollably chaotic. The most adequate states are selected according to their fitness, either directly by the environment, or by subsystems that have adapted to the environment at an earlier stage. Formally, the basic mechanism underlying self-organization is the (often noise-driven) variation which explores different regions in the system's state space until it enters an *attractor*. This precludes further variation outside the attractor, and thus restricts the freedom of the system's components to behave independently. This is equivalent to the increase of coherence, or *decrease* of statistical *entropy*, that defines *self-organization*. The most obvious change that has taken place in systems is the *emergence* of *global* organization. Initially the elements of the system (spins or molecules) were only interacting *locally*. This locality of interactions follows from the basic continuity of all physical processes: for any influence to pass from one region to another it must first pass through all intermediate regions.
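A toy numerical experiment illustrates this mechanism: an ensemble of states explores a state space under noise, falls into the attractors, and the statistical entropy of the ensemble drops. The double-well potential and all parameters below are invented for illustration, not taken from the chapter:

```python
# Noise-driven exploration until trajectories fall into an attractor,
# after which statistical entropy drops -- a minimal numerical
# illustration of the self-organization mechanism described above.
import math
import random

random.seed(0)

def entropy(xs, bins=20, lo=-2.0, hi=2.0):
    """Shannon entropy (nats) of a histogram of the ensemble positions."""
    counts = [0] * bins
    for x in xs:
        i = min(bins - 1, max(0, int((x - lo) / (hi - lo) * bins)))
        counts[i] += 1
    n = len(xs)
    return -sum(c / n * math.log(c / n) for c in counts if c)

# ensemble starts spread uniformly over the state space
xs = [random.uniform(-2, 2) for _ in range(500)]
h0 = entropy(xs)

# overdamped gradient flow in the double-well U(x) = (x^2 - 1)^2,
# perturbed by small noise; the two minima x = +/-1 are the attractors
for _ in range(400):
    xs = [x - 0.05 * 4 * x * (x * x - 1) + random.gauss(0, 0.02) for x in xs]

h1 = entropy(xs)
print(h1 < h0)  # the ordered state has lower statistical entropy -> True
```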

In the self-organized state, on the other hand, all segments of the system are *strongly correlated*. This is most clear in the example of the magnet: in the magnetized state, all spins, however far apart, point in the same direction. *Correlation* is a useful measure to study the transition from the disordered to the ordered state. Locality implies that neighboring configurations are strongly correlated, but that this correlation diminishes as the distance between configurations increases. The *correlation length* can be defined as the maximum distance over which there is a significant correlation. When we consider a highly organized system, we usually imagine some external or internal *agent* (controller) that is responsible for guiding, directing or controlling that organization. The controller is a physically distinct subsystem that exerts its influence over the rest of the system. In this case, we may say that control is *centralized*. In self-organizing systems, on the other hand, "control" of the organization is typically *distributed* over the whole of the system. All parts contribute evenly to the resulting arrangement.
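The magnet example can be made concrete with a toy 1-D spin chain: in the ordered ("magnetized") state the spin-spin correlation stays at 1 for every distance, while in the disordered state it is essentially zero, giving a vanishing correlation length (an illustrative toy only, not a model from the chapter):

```python
# Correlation as a probe of the ordered vs. disordered state.
import random

random.seed(1)

def correlation(spins, d):
    """Average product <s_i * s_{i+d}> over the chain."""
    pairs = [spins[i] * spins[i + d] for i in range(len(spins) - d)]
    return sum(pairs) / len(pairs)

n = 10_000
ordered = [1] * n                                    # magnetized state
disordered = [random.choice((-1, 1)) for _ in range(n)]

for d in (1, 10, 100):
    print(d, correlation(ordered, d), round(correlation(disordered, d), 3))
# ordered correlation is exactly 1.0 at every distance; the disordered
# one stays near 0, so its correlation length is essentially zero
```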

A general characteristic of self-organizing systems is that they are *robust* or *resilient*: they are relatively insensitive to perturbations or errors, and have a strong capacity to restore themselves, unlike most human-designed systems. *One reason* for this fault tolerance is the *redundant*, *distributed* organization: the non-damaged regions can usually make up for the damaged ones. *Another reason* for this intrinsic robustness is that self-organization thrives on *randomness*, fluctuations or "noise": a certain amount of random perturbation will facilitate rather than hinder self-organization. A *third reason* for resilience is the stabilizing effect of *feedback* loops. Many self-organizational processes begin with the amplification (through positive feedback) of initial random fluctuations. This breaks the symmetry of the initial state, often in unpredictable but operationally equivalent ways. That is, the job gets done, but hostile forces will have difficulty predicting precisely how it gets done.

## **3.2 Principles of self-organization**

A system can cope with an unpredictable environment autonomously using different but closely related approaches:

- *Adaptation* (learning, evolution). The system changes its behavior to cope with the change.
- *Anticipation* (cognition). The system predicts a change to cope with, and adjusts its behavior accordingly. This is a special case of adaptation, where the system does not require experiencing a situation before responding to it.
- *Robustness.* A system is robust if it continues to function in the face of perturbations. This can be achieved with modularity, degeneracy, distributed robustness, or redundancy.

Successful self-organizing systems use combinations of these approaches to maintain their integrity in a changing and unexpected environment:

- *Adaptation* will enable the system to modify itself to "fit" better within the environment.
- *Robustness* will allow the system to withstand changes without losing its function or purpose, thus allowing it to adapt.
- *Anticipation* will prepare the system for changes before they occur, adapting the system without it being perturbed.

Let us consider the common features of self-organization models: (i) models of self-organization on the macro-level use information from the micro-level that supports the thermodynamic relations of dynamic evolution (second law of thermodynamics: entropy increasing on the micro-level and decreasing on the macro-level, correspondingly); (ii) self-organization processes use transport of information to and from the macro- and micro-levels in different hidden forms; (iii) final states of the self-organized structure have minimum entropy production; (iv) natural self-organization processes do not plan the type of correlation before the evolution (Nature gives the type of corresponding correlation through genetic coding of templates in self-assembly); (v) coordination control is used to design the self-organized structure; (vi) a random search process is applied to design the self-organized structure; (vii) natural models are biologically inspired evolutionary dynamic models and use current classical information for decision making (but have no toolkit for extracting and exchanging hidden quantum information from the dynamic behavior of the control object).
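The entropy bookkeeping across levels in item (i) can be illustrated with a generic information-theoretic fact: coarse-graining micro-states into macro-states never increases the Shannon entropy of the description. This is a sketch of that fact only, not the chapter's thermodynamic model:

```python
# Micro- vs. macro-level entropy bookkeeping, illustrated by
# coarse-graining: merging micro-states into macro-states can only
# lower (or preserve) the Shannon entropy of the description.
import math

def shannon(ps):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

micro = [0.125] * 8                                    # 8 equally likely micro-states
macro = [sum(micro[i:i + 2]) for i in range(0, 8, 2)]  # group into 4 macro-states

print(shannon(micro))  # -> 3.0 bits
print(shannon(macro))  # -> 2.0 bits: the macro description carries less
```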

## **3.3 Quantum control algorithm of self-organization processes**

In man-made self-organization, the *types of correlations* and the *control of self-organization* are fixed before the design of the searched structure. Thus a future design algorithm of self-organization must include these common peculiarities of bio-inspired and man-made processes: *quantum hidden correlations* and *information transport*.

Figure 1 shows the structure of a new *quantum control algorithm of self-organization* that includes the above mentioned properties.

*Remark*. The developed quantum control algorithm draws on three sources: (i) the composition of the simplest living organisms in response to external stimuli, as in bacterial and neuronal self-organization; (ii) correlation information stored in DNA; and (iii) quantum hidden correlations and information transport, as used in quantum dots.

The quantum control algorithm of self-organization design in intelligent control systems based on the QFI model is described in (Litvintseva et al., 2009). Below we describe the Level 1

(see Fig. 1), based on the QFI model, as the background of the robust KB design information technology. The QFI model is described in detail in (Litvintseva et al., 2007) and is used here as a toolkit.

Fig. 1. General structure of quantum control algorithm of self-organization

An analysis of self-organization models gives the following results. Models of self-organization include natural *quantum* effects and are based on the following *information-thermodynamic* concepts: (i) macro- and micro-level interactions with information exchange (in agent-based modeling (ABM) the micro-level is the communication space where inter-agent messages are exchanged, which is reflected by increased entropy on the micro-level); (ii) communication and information transport on the micro-level (the "quantum mirage" in quantum corrals); (iii) different types of quantum spin correlation that design different structures in self-organization (quantum dots); (iv) coordination control (swarm-bot and snake-bot).

Natural evolution processes are based on the following steps: (i) templating; (ii) self-assembling; and (iii) self-organization.

According to quantum computing theory, in general form every QA includes the following unitary quantum operators: (i) superposition; (ii) entanglement (quantum oracle); and (iii) interference. Measurement is the fourth, classical operator. [It is an irreversible operator and is used to read out the computation results.]

The quantum control algorithm of self-organization developed below is based on QFI models. QFI includes these concepts of self-organization and is realized by the corresponding quantum operators.

The structure of a QFI that realizes the self-organization process is developed below. QFI is one possible realization of a quantum control algorithm of self-organization that includes all of the following features: (i) superposition; (ii) selection of quantum correlation types; (iii) information

possible to introduce the entropy characteristics in Eqs. (1) and (2) because of the scalar

*Remark.* It is worth noting that the presence of entropy production in (2) as a parameter (for example, entropy production term in dissipative process in Eq. (1)) reflects the dynamics of the behavior of the control object and results in a new class of substantially nonlinear dynamic automatic control systems. The choice of the minimum entropy production both in the control object and in the fuzzy PID controller as a fitness function in the genetic algorithm allows one to obtain feasible robust control laws for the gains in the fuzzy PID controller. The entropy production of a dynamic system is characterized uniquely by the parameters of the nonlinear dynamic automatic control system, which results in determination of an optimal selective trajectory from the set of possible trajectories in

Thus, the first condition is fulfilled automatically. Assume that the second condition 0 *dV*

N ( ) ( ) ( )( ) Robustness Stability Controllability

 

Relation (3) relates the stability, controllability, and robustness properties.

*dV q q tu dt* <sup>=</sup> <sup>∑</sup> ϕ Ψ−ϒ + Ψ−ϒ Ψ−ϒ ≤

*Remark*. It was introduced the new physical measure of control quality (3) to complex nonlinear controlled objects described as non-linear dissipative models. This physical measure of control quality is based on the physical law of minimum entropy production rate in ICS and in dynamic behavior of complex control object. The problem of the minimum entropy production rate is *equivalent* with the associated problem of the maximum released mechanical work as the optimal solutions of corresponding Hamilton-Jacobi-Bellman equations. It has shown that the variational fixed-end problem of the *maximum work W* is equivalent to the variational fixed-end problem of the *minimum entropy production*. In this case both optimal solutions are equivalent for the dynamic control of complex systems and the principle of minimum of entropy production guarantee the maximal released mechanical work with intelligent operations. This new physical measure of control quality we using as fitness function of GA in optimal control system design. Such state corresponds

The introduction of physical criteria (the minimum entropy production rate) can guarantee the stability and robustness of control. This method differs from aforesaid design method in that a new *intelligent global feedback* in control system is introduced. The interrelation between the stability of control object (the Lyapunov function) and controllability (the entropy production rate) is used. The basic peculiarity of the given method is the necessity of model investigation for CO and the calculation of entropy production rate through the parameters of the developed model. The integration of joint systems of equations (the

*i i i i* ( ) , ,, ( ) *cob c cob c* ( )

, ,, 0 *i i*

 

*q q SS q q Stu S S S S dt* = += ϕ + − − ∑ ∑

holds. In this case, the complete derivative of the Lyapunov function (2) has the form

*i i*

*i*

Taking into account (1) and the notation introduced above, we have

*dt* ≤

(3)

property of entropy as a function of time, *S t*( ) .

*dV*

to the minimum of system entropy.

optimization problems.

transport and quantum oracle; and (iv) interference. With *superposition* is realized *templating* operation, and based on macro- and micro-level interactions with information exchange of active agents. *Selection* of quantum correlation type organize *self-assembling* using power source of communication and information transport on micro-level. In this case the type of correlation defines the level of *robustness* in designed KB of FC. *Quantum oracle* calculates intelligent quantum state that includes the most important (value) information transport for *coordination* control. *Interference* is used for extraction the results of coordination control and design in on-line robust KB.

The developed QA of self-organization is applied to design of robust KB of FC in unpredicted control situations.

Main operations of developed QA and concrete examples of QFI applications are described.

The goal of quantum control algorithm of self-organization in Fig. 1 is the support of optimal *thermodynamic trade-off* between *stability*, *controllability* and *robustness* of control object behavior using robust self-organized KB of ICS.

Q. *Why with thermodynamics approach we can organize trade-off between stability, controllability and robustness?* 

Let us consider the answer on this question.

### **3.4 Thermodynamics trade-off between stability, controllability, and robustness**

Consider a dynamic control object given by the equation

$$\frac{dq}{dt} = \mathfrak{q}\left(q\_{\prime}S(t), t, \mu\_{\prime}\mathfrak{k}\left(t\right)\right), \quad u = f\left(q\_{\prime}q\_{\prime\prime}t\right), \tag{1}$$

where *q* is the vector of generalized coordinates describing the dynamics of the control object; *S* is the generalized entropy of dynamic system (1); *u* is the control force (the output of the actuator of the automatic control system); *qd* (*t*) is reference signal, ξ(*t*) is random disturbance and *t* is the time. The necessary and sufficient conditions of asymptotic stability of dynamic system (1) with ξ(*t*) ≡ 0 are determined by the physical constraints on the form of the Lyapunov function, which possesses two important properties represented by the following conditions:


$$\frac{dV}{dt} \le 0 \; . $$

In general case the Lagrangian dynamic system (1) is not lossless with corresponding outputs.

By conditions (i) and (ii), as the generalized Lyapunov function, we take the function

$$V = \frac{1}{2} \sum\_{i=1}^{n} \eta\_i^2 + \frac{1}{2} S^2 \tag{2}$$

where $S = S_{cob} - S_{c}$ is the production of entropy in the open system "*control object* + *controller*"; $S_{cob} = \Psi\left(q, \dot{q}, t\right)$ is the production of entropy in the control object; and $S_{c} = \Upsilon\left(e, t\right)$ is the production of entropy in the controller (actuator of the automatic control system). It is possible to introduce the entropy characteristics in Eqs. (1) and (2) because of the scalar property of entropy as a function of time, $S(t)$.
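The two stability conditions can be illustrated numerically. The following is a minimal sketch under stated assumptions: the damped toy dynamics `phi`, the fixed entropy value, and all numeric constants are our illustration, not the chapter's model; only the form of the Lyapunov function (2) is taken from the text.

```python
import numpy as np

# Toy stand-in for control object (1): a damped oscillator with a scalar
# entropy-like variable S (held fixed here for simplicity).
def phi(q, S, t, u):
    # hypothetical right-hand side dq/dt = phi(q, S(t), t, u)
    return np.array([q[1], -q[0] - 0.5 * q[1] + u])

def V(q, S):
    # generalized Lyapunov function (2): V = 1/2 * sum(q_i^2) + 1/2 * S^2
    return 0.5 * np.dot(q, q) + 0.5 * S ** 2

q, S, u, t, dt = np.array([1.0, 1.0]), 0.2, 0.0, 0.0, 1e-3
V0 = V(q, S)                    # condition (i): V > 0
q = q + dt * phi(q, S, t, u)    # one explicit Euler step of (1)
assert V(q, S) < V0             # condition (ii): V decreases along this step
```

Starting from a point where the dissipation term dominates, one Euler step decreases the Lyapunov function, which is exactly what condition (ii) demands along trajectories.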

*Remark.* It is worth noting that the presence of entropy production in (2) as a parameter (for example, entropy production term in dissipative process in Eq. (1)) reflects the dynamics of the behavior of the control object and results in a new class of substantially nonlinear dynamic automatic control systems. The choice of the minimum entropy production both in the control object and in the fuzzy PID controller as a fitness function in the genetic algorithm allows one to obtain feasible robust control laws for the gains in the fuzzy PID controller. The entropy production of a dynamic system is characterized uniquely by the parameters of the nonlinear dynamic automatic control system, which results in determination of an optimal selective trajectory from the set of possible trajectories in optimization problems.

Thus, the first condition is fulfilled automatically. Assume that the second condition $\frac{dV}{dt} \le 0$

holds. In this case, the complete derivative of the Lyapunov function (2) has the form

$$\frac{dV}{dt} = \sum_{i} q_i \dot{q}_i + S\dot{S} = \sum_{i} q_i \varphi_i\left(q, S, t, u\right) + \left(S_{cob} - S_{c}\right)\left(\dot{S}_{cob} - \dot{S}_{c}\right).$$

Taking into account (1) and the notation introduced above, we have

$$\underbrace{\frac{dV}{dt}}_{\text{Stability}} = \underbrace{\sum_{i} q_i \varphi_i\left(q, \left(\Psi - \Upsilon\right), t, u\right)}_{\text{Controllability}} + \underbrace{\left(\Psi - \Upsilon\right)\left(\dot{\Psi} - \dot{\Upsilon}\right)}_{\text{Robustness}} \le 0 \tag{3}$$

Relation (3) relates the stability, controllability, and robustness properties.
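Relation (3) can be checked term by term with plain numbers. A small sketch with made-up values (every number below is an assumption for illustration, not taken from the text):

```python
# Illustration of the three terms of relation (3):
# dV/dt (stability) = sum_i q_i * qdot_i (controllability) + S*Sdot (robustness),
# with S = Psi - Upsilon and Sdot = dPsi - dUpsilon.
q    = [0.8, -0.3]
qdot = [-0.5, 0.1]            # stands in for phi_i(q, (Psi - Upsilon), t, u)
Psi, Upsilon   = 0.4, 0.1     # entropy production of object and controller
dPsi, dUpsilon = -0.2, 0.1    # their time derivatives

controllability = sum(qi * qdi for qi, qdi in zip(q, qdot))  # -0.43
robustness      = (Psi - Upsilon) * (dPsi - dUpsilon)        # -0.09
dV_dt = controllability + robustness
assert dV_dt <= 0   # the thermodynamic trade-off (3) holds for these values
```

For these values both the controllability and the robustness terms are negative, so the stability condition dV/dt ≤ 0 holds with margin.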

*Remark*. A new physical measure of control quality (3) was introduced for complex nonlinear controlled objects described by nonlinear dissipative models. This physical measure of control quality is based on the physical law of the minimum entropy production rate in the ICS and in the dynamic behavior of the complex control object. The problem of the minimum entropy production rate is *equivalent* to the associated problem of the maximum released mechanical work, as the optimal solutions of the corresponding Hamilton-Jacobi-Bellman equations. It has been shown that the variational fixed-end problem of the *maximum work W* is equivalent to the variational fixed-end problem of the *minimum entropy production*. In this case both optimal solutions are equivalent for the dynamic control of complex systems, and the principle of minimum entropy production guarantees the maximal released mechanical work with intelligent operations. We use this new physical measure of control quality as the fitness function of the GA in optimal control system design. Such a state corresponds to the minimum of the system entropy.
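The idea of scoring candidate controllers by entropy production can be sketched with a toy random search standing in for the GA of the text; the first-order plant, the dissipation proxy, the gain range, and the residual-error term are all our assumptions.

```python
import random

def fitness(k):
    # Hypothetical candidate controller u = -k*x for a first-order plant.
    # Score = accumulated entropy-production proxy (dissipation) plus the
    # squared residual tracking error, trading minimum entropy production
    # against controllability.
    x, s, dt = 1.0, 0.0, 1e-2
    for _ in range(500):
        dx = -k * x
        s += dt * dx * dx      # entropy-production proxy
        x += dt * dx
    return s + x * x           # plus squared residual error

random.seed(0)
candidates = [random.uniform(0.1, 5.0) for _ in range(200)]
best = min(candidates, key=fitness)   # plays the role of the GA's best gene
```

Very small gains leave a large residual error, very large gains dissipate heavily, so the selected gain lands in between — the selective-trajectory effect the remark describes.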

The introduction of physical criteria (the minimum entropy production rate) can guarantee the stability and robustness of control. This method differs from the aforesaid design method in that a new *intelligent global feedback* in the control system is introduced. The interrelation between the stability of the control object (the Lyapunov function) and controllability (the entropy production rate) is used. The basic peculiarity of the given method is the necessity of model investigation for the CO and the calculation of the entropy production rate through the parameters of the developed model. The integration of joint systems of equations (the equations of mechanical model motion and the equations of entropy production rate) enables one to use the result as the fitness function in the GA.

*Remark*. The concept of an energy-based hybrid controller can also be viewed, from (3), as a feedback control technique that exploits the coupling between a physical dynamical system and an energy-based controller to efficiently remove energy from the physical system. According to (3) we have

$$\sum_{i} q_i \varphi_i\left(q, (\Psi - \Upsilon), t, u\right) + \left(\Psi - \Upsilon\right)\left(\dot{\Psi} - \dot{\Upsilon}\right) \le 0 \,, \text{ or}$$

$$\sum_{i} q_i \varphi_i\left(q, (\Psi - \Upsilon), t, u\right) \le \left(\Psi - \Upsilon\right)\left(\dot{\Upsilon} - \dot{\Psi}\right). \tag{4}$$

Therefore, we have the following different possibilities for supporting the inequalities in (4):

$$\begin{aligned} \text{(i)} & \sum\_{i} q\_{i} \dot{q}\_{i} < 0, \left(\Psi > \Upsilon\right) / \left(\dot{\Upsilon} > \dot{\Psi}\right), S\dot{S} > 0 \; ; \\\\ \text{(ii)} & \sum\_{i} q\_{i} \dot{q}\_{i} < 0, \left(\Psi < \Upsilon\right) / \left(\dot{\Upsilon} < \dot{\Psi}\right), S\dot{S} > 0 \; ; \\\\ \text{(iii)} & \sum\_{i} q\_{i} \dot{q}\_{i} < 0, \left(\Psi < \Upsilon\right) ; \left(\dot{\Upsilon} > \dot{\Psi}\right), S\dot{S} < 0 \; ; \sum\_{i} q\_{i} \dot{q}\_{i} < S\dot{S} \; ; \text{ etc.} \end{aligned}$$

and their combinations, which means that a thermodynamically stabilizing compensator can be constructed. These inequalities mean, specifically, that if a dissipative or lossless plant is at a high energy level, and a lossless feedback controller at a low energy level is attached to it, then energy will generally tend to flow from the plant into the controller, decreasing the plant energy and increasing the controller energy. Emulated energy, and not physical energy, is accumulated by the controller. Conversely, if the attached controller is at a high energy level and the plant is at a low energy level, then energy can flow from the controller to the plant, since a controller can generate real, physical energy to effect the required energy flow. Hence, if and when the controller states coincide with a high emulated energy level, it is possible to reset these states to remove the emulated energy so that the emulated energy is not returned to the CO.

In this case, the overall closed-loop system consisting of the plant and the controller possesses discontinuous flows, since it combines logical switching with continuous dynamics, leading to impulsive differential equations. Every time the emulated energy of the controller reaches its maximum, the states of the controller are reset in such a way that the controller's emulated energy becomes zero.

Alternatively, the controller states can be reset every time the emulated energy is equal to the actual energy of the plant, enforcing the second law of thermodynamics, which ensures that energy flows from the more energetic system (the plant) to the less energetic system (the controller). The proof of asymptotic stability of the closed-loop system in this case requires a non-trivial extension of the hybrid invariance principle, which in turn is a very recent extension of the classical *Barbashin-Krasovskii* invariant set theorem. The subtlety here is that the resetting set is not a closed set, and as such a new transversality condition involving higher-order Lie derivatives is needed.
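This resetting rule can be sketched as a tiny impulsive simulation. The plant, the controller dynamics, and all constants below are our assumptions for illustration, not the authors' system; only the reset condition (emulated energy reaching plant energy) follows the text.

```python
# Sketch of an energy-based hybrid (resetting) controller: whenever the
# controller's emulated energy reaches the plant's actual energy, the
# controller state is reset so that its emulated energy becomes zero.
def simulate(steps=2000, dt=1e-3):
    x, xc, resets = 1.0, 0.0, 0    # plant state, controller state, reset count
    for _ in range(steps):
        u = -xc                    # controller output fed back to the plant
        x += dt * (-0.2 * x + u)   # assumed plant: dx/dt = -0.2*x + u
        xc += dt * x               # assumed controller: dxc/dt = x
        if 0.5 * xc ** 2 >= 0.5 * x ** 2:   # emulated energy >= plant energy
            xc = 0.0                         # resetting event
            resets += 1
    return x, resets

x_final, n_resets = simulate()
assert n_resets >= 1               # at least one resetting event occurred
```

Between resets the controller accumulates emulated energy from the plant; each reset discards it, so energy flows one way, from plant to controller.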

The main goal of robust intelligent control is the support of an optimal *trade-off* between stability, controllability and robustness with a thermodynamic relation such as (3) or (4) as a thermodynamically stabilizing compensator. The resetting set is thus defined to be the set of all points in the closed-loop state space that correspond to decreasing controller emulated energy. By resetting the controller states, the plant energy can never increase after the first resetting event. Furthermore, if the closed-loop system total energy is conserved between resetting events, then a decrease in plant energy is accompanied by a corresponding increase in emulated energy. Hence, this approach allows the plant energy to flow to the controller, where it increases the emulated energy but does not allow the emulated energy to flow back to the plant after the first resetting event.

This energy dissipating hybrid controller effectively enforces a one-way energy transfer between the control object and the controller after the first resetting event. For practical implementation, knowledge of the controller and object outputs is sufficient to determine whether or not the closed-loop state vector is in the resetting set. Since the energy-based hybrid controller architecture involves the exchange of energy, with conservation laws describing the transfer, accumulation, and dissipation of energy between the controller and the plant, we can construct a modified hybrid controller that guarantees that the closed-loop system is consistent with basic thermodynamic principles after the first resetting event.

The entropy of the closed-loop system strictly increases between resetting events after the first resetting event, which is consistent with thermodynamic principles. This is not surprising, since in this case the closed-loop system is *adiabatically isolated* (i.e., the system does not exchange energy (heat) with the environment) and the total energy of the closed-loop system is conserved between resetting events. Alternatively, the entropy of the closed-loop system strictly decreases across resetting events, since the total energy strictly decreases at each resetting instant, and hence energy is not conserved across resetting events.

The entropy production rate is a continuously differentiable function that defines the resetting set as its zero level set. Thus the resetting set is motivated by thermodynamic principles and guarantees that the energy of the closed-loop system is always flowing from regions of higher to lower energies after the first resetting event, which is in accordance with the second law of thermodynamics. This guarantees the existence of an entropy function for the closed-loop system that satisfies the Clausius-type inequality between resetting events. Hence, the compensator states are reset in order to ensure that the second law of thermodynamics is not violated. Furthermore, in this case, the hybrid controller with the resetting set is a thermodynamically stabilizing compensator. Analogous thermodynamically stabilizing compensators can be constructed for lossless dynamical systems.

Equation (3) joins in analytic form different measures of control quality, such as *stability, controllability*, and *robustness*, supporting the required level of reliability and accuracy. As a particular case, Eq. (3) includes the entropic principle of robustness. Consequently, the interrelation between Lyapunov stability and robustness described by Eq. (3) is the main physical law for designing automatic control systems. This law provides the background for an applied technique of designing KBs of robust intelligent control systems (with different levels of intelligence) with the use of soft computing.

In concluding this section, we formulate the following conclusions:



1. The introduced physical law of intelligent control (3) provides a background for the design of robust KB's of ICS's (with different levels of intelligence) based on soft computing.
2. The technique of soft computing gives the opportunity to develop a universal approximator in the form of a fuzzy automatic control system, which elicits information from the data of simulation of the dynamic behavior of the control object and the actuator of the automatic control system.

3. The application of soft computing guarantees the purposeful design of the corresponding robustness level by an optimal design of the total number of production rules and types of membership functions in the KB.

The main components and their interrelations in the information design technology are based on new types of (soft and quantum) computing. The key point of this information design technology is the use of the method of eliciting objective knowledge about the control process irrespective of the subjective experience of experts, and the design of objective KB's of a FC, which is the principal component of a robust ICS.

The output result of applying this information design technology is a robust KB of the FC that allows the ICS to operate under various types of information uncertainty. A self-organized ICS based on soft computing technology can support a thermodynamic trade-off in the interrelations between stability, controllability and robustness (Litvintseva et al., 2006).

*Remark*. Unfortunately, the soft computing approach also has bounded possibilities for global optimization, since a multi-objective GA can work only on a fixed space of searched solutions. This means that robustness of control can be guaranteed only in unpredicted control situations similar to the learned ones. Moreover, the search space of the GA is chosen by an expert, so there is a possibility that the desired solution is not included in the search space. (It is very difficult to find a black cat in a dark room if you know that the cat is absent from this room.) The support of an optimal *thermodynamic trade-off* between *stability*, *controllability* and *robustness* in self-organization processes (see Fig. 1) with (3) or (4) can be realized using a new quantum control algorithm of self-organization in the KB of a robust FC based on quantum computing operations (which are absent in the soft computing toolkit).

Let us consider the main self-organization idea and the corresponding structure of the quantum control algorithm as QFI that can realize the self-organization process.

## **4. QFI-structure and knowledge base self-organization based on quantum computing**

A general physical approach to the description of different bio-inspired and man-made models of self-organization principles from the quantum computing viewpoint, and the design of a quantum control algorithm of self-organization, are described. A particular case of this approach (based on the earlier developed quantum swarm model) was introduced before (see details in (Litvintseva et al., 2009)). Types of quantum operators, such as superposition, entanglement and interference, in the evolution of different models of self-organization processes are applied from the quantum computing viewpoint. The physical interpretation of the self-organization control process on the quantum level is discussed based on the information-thermodynamic models of the exchange and extraction of quantum (hidden) value information from/between classical particle trajectories in a particle swarm. New types of quantum correlations (as a behavior control coordinator with quantum computation by communication) and information transport (value information) between particle swarm trajectories (communication through a quantum link) are introduced.

We will show below that the structure of the developed QFI model includes the necessary self-organization properties and realizes a self-organization process as a new QA. In the particular case of the intelligent control system (ICS) structure, the QFI system is a QA block which performs on-line post-processing of the results of fuzzy inference of each independent FC and produces the generalized control signal output. In this case the on-line output of QFI is an optimal robust control signal, which combines the best features of each independent FC output (the self-organization principle).

Thus QFI is one of the possible realizations of a general quantum control algorithm of the self-organization processes.

### **4.1 Quantum Fuzzy Inference process based on quantum computing**

From the computer science viewpoint, the QA structure of the QFI model (as a particular case of the general quantum control algorithm of self-organization) must include the following necessary QA features: *superposition* preparation; *selection of quantum correlation* types; *quantum oracle* (black box model) application and *transportation* of extracted information (dynamic evolution of the "*intelligent control state*" with minimum entropy); a *quantum correlation* over a classical correlation as a power source of computing; application of an *interference* operator for the answer extraction; *quantum parallel massive* computation; *amplitude amplification* of the searched solution; and effective quantum solution of classically *algorithmically unsolved* problems. In this section we will show that we can use ideas of the mathematical formalism of quantum mechanics for the discovery of new control algorithms that can be calculated on classical computers.

Let us consider main ideas of our QFI algorithm.

First of all, we must be able to construct normalized states $|0\rangle$ (which can be called "*True*") and $|1\rangle$ (which can be called "*False*") for the inputs to our QFI algorithm. In Hilbert space the superposition of classical states $\alpha_0 |0\rangle + \alpha_1 |1\rangle$, called a quantum bit (qubit), means that "*True*" and "*False*" are joined in one quantum state with different probability amplitudes $\alpha_k$, $k = 0, 1$. If $P_k$ is the probability of the state $|k\rangle$, then $P_k = |\alpha_k|^2$.

The probabilities governed by the amplitudes $\alpha_k$ must sum to unity. This necessary constraint is expressed as the normalization condition $\sum_k |\alpha_k|^2 = 1$. To create a superposition from a single state, the Hadamard transform $H$ is used. $H$ denotes the fundamental unitary matrix:

$$H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.$$

If the Hadamard operator $H$ is applied to the classical state $|0\rangle$, we receive the following result:

$$H |0\rangle = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1\\ 1 & -1 \end{pmatrix} \begin{pmatrix} 1\\ 0 \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1\\ 1 \end{pmatrix} = \frac{1}{\sqrt{2}} \left( \begin{pmatrix} 1\\ 0 \end{pmatrix} + \begin{pmatrix} 0\\ 1 \end{pmatrix} \right) = \frac{1}{\sqrt{2}} (|0\rangle + |1\rangle).$$

*Remark.* The state $|0\rangle$ in vector form is represented as $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and the state $|1\rangle$ as $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$. So, a superposition of two classical states gives a quantum state represented as follows:

$$|\psi\rangle = \sqrt{P\left(|0\rangle\right)} \, |0\rangle + \sqrt{1 - P\left(|0\rangle\right)} \, |1\rangle = \text{quantum bit}.$$
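As a numerical sanity check of the qubit algebra above, the Hadamard action and the normalization constraint can be reproduced in a few lines of NumPy. This sketch is ours, added for illustration only; it is not part of the chapter's toolkit.

```python
import numpy as np

# Computational basis states |0> ("True") and |1> ("False") as column vectors.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Hadamard transform: the fundamental unitary matrix that creates superpositions.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# Applying H to |0> yields the equal superposition (|0> + |1>)/sqrt(2).
psi = H @ ket0

# The squared amplitudes P_k = |alpha_k|^2 are probabilities summing to unity.
probs = np.abs(psi) ** 2
assert np.isclose(probs.sum(), 1.0)
print(psi)  # [0.70710678 0.70710678]
```

The same check applies to any qubit $\sqrt{P}|0\rangle + \sqrt{1-P}|1\rangle$: its squared amplitudes always sum to one.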

Self-Organized Intelligent Robust Control Based on Quantum Fuzzy Inference 199


If the Hadamard operator *H* is independently applied to different classical states, then a tensor product of superposition states is the result:

$$|\psi\rangle = H^{\otimes n} \left| True \right\rangle^{\otimes n} = \frac{1}{\sqrt{2^n}} \bigotimes_{i=1}^{n} \left( \left| True \right\rangle + \left| False \right\rangle \right).$$
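The n-qubit superposition above can be checked with Kronecker products. In this illustrative sketch of our own, `hadamard_n` is our helper name and $|True\rangle$ is encoded as $|0\rangle$; both are assumptions, not notation from the chapter.

```python
import numpy as np
from functools import reduce

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

def hadamard_n(n: int) -> np.ndarray:
    """H^(x)n: the n-fold tensor (Kronecker) product of Hadamard gates."""
    return reduce(np.kron, [H] * n)

n = 3
ket_true_n = np.zeros(2 ** n)
ket_true_n[0] = 1.0  # |True>^(x)n encoded as |00...0>

psi_n = hadamard_n(n) @ ket_true_n

# Equal superposition of all 2^n basis states, amplitude 1/sqrt(2^n) each.
assert np.allclose(psi_n, np.full(2 ** n, 1 / np.sqrt(2 ** n)))
```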

The fundamental result of quantum computation says that all of the computation can be embedded in a circuit whose nodes are the universal gates. These gates offer an expansion of the unitary operator *U* that evolves the system in order to perform some computation.

Thus, naturally two problems are discussed: (1) given a set of functional points $S = \{(x, y)\}$, find the operator $U$ such that $y = U \cdot x$; (2) given a problem, find the quantum circuit that solves it. Algorithms for solving these problems may be implemented in a hardware quantum gate or in software as computer programs running on a classical computer. It is shown that in quantum computing the construction of a universal quantum simulator based on classical effective simulation is possible. Hence, a quantum gate approach can be used in a global optimization of the KB structures of ICS's that are based on quantum computing, on quantum genetic search and on quantum learning algorithms.

A general structure of the QFI block is shown in Figure 2.

In particular, Figure 2 shows the structure of the QFI algorithm for coding, searching, and extracting the value information from the outputs of a few independent fuzzy controllers with different knowledge bases (FC-KBs).

Inputs to QFI are control signals

$$K^i = \{k_P^i(t), k_D^i(t), k_I^i(t)\},$$

where the index *i* denotes the number of the KB (or FC) and *t* is the current time instant.

*Remark.* In advanced control theory, the control signal $K^i = \{k_P^i(t), k_D^i(t), k_I^i(t)\}$ is called *a PID gain coefficient schedule*. We will call it a *control laws vector*.

These inputs are the outputs from the fuzzy controllers (FC1, FC2, …, FCn) designed by the SC Optimizer (SCO) tools for the given control task in different control situations (for example, in the presence of different stochastic noises). The output of the QFI block is a new, redesigned (self-organized) control signal. The robust laws designed by the QFI model are determined in a learning mode based on the output responses of the individual KB's (with a fixed set of production rules) of the corresponding FC's (see Fig. 2) to the current unpredicted control situation, in the form of signals controlling the coefficient gain schedule of the PID controller, and implement the adaptation process on-line.
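To make the control laws vector concrete, the following hypothetical sketch wires a gain schedule into a classical PID law. The function name, the numbers, and the plain averaging combiner are our own placeholders; the averaging only stands in for the QFI post-processing described in the text.

```python
import numpy as np

def pid_control(error, d_error, i_error, gains):
    """Classical PID law u = k_P*e + k_D*de/dt + k_I*integral(e)."""
    k_p, k_d, k_i = gains
    return k_p * error + k_d * d_error + k_i * i_error

# PID gain schedules K^1, K^2 output by two independent FCs (illustrative).
K1 = np.array([2.0, 0.5, 0.1])   # {k_P, k_D, k_I} from FC1
K2 = np.array([1.5, 0.8, 0.2])   # {k_P, k_D, k_I} from FC2

# Placeholder combiner standing in for the QFI block's self-organized output.
K_qfi = (K1 + K2) / 2

u = pid_control(error=0.4, d_error=-0.1, i_error=0.05, gains=K_qfi)
print(u)
```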

This effect is achieved only by the use of the laws of quantum information theory in the developed structure of QFI (see the description of the four facts from quantum information theory).

From the point of view of quantum information theory, the structure of the quantum algorithm in QFI (Level 3, Fig. 1) plays the role of a quantum filter simultaneously. The KB's consist of logical production rules, which, based on a given control error, form the laws of the coefficient gains schedule in the employed fuzzy PID controllers.

The QA in this case allows one to extract the necessary valuable information from the responses of two (or more) KB's to an unpredicted control situation by eliminating additional redundant information in the laws of the coefficient gains schedule of the controllers employed.

Fig. 2. Quantum Fuzzy Inference (QFI) block

## **4.2 Requirements to QFI-model design and its features in quantum algorithm control of self-organization**

## **4.2.1 Main proposals and features of QFI model**

Main *proposals* and *features* of the developed swarm QFI-model in the solution of intelligent control problems are as follows:

A. *Main proposals*

1. The digital value set of control signals produced by the responses of the FC outputs is considered as swarm particles along classical control trajectories with individually marked intelligent agents;
2. Communication between particle swarm trajectories through a quantum link is introduced;
3. Intelligent agents use different types of quantum correlations (as a behavior control coordinator with quantum computation by communication) and information transport (value information);
4. The (hidden) quantum value information is extracted from the *classical* states of classical control-signal trajectories (with minimum entropy in the "intelligent states" of the designed robust control signals).

B. *Features*

1. The developed QFI model is based on *thermodynamic* and *information-theoretic* measures of intelligent agent interactions in the communication space between macro- and micro-levels (the entanglement-assisted correlations in an active system represented by a collection of intelligent agents);
2. From the computer science viewpoint, the QA of the QFI model plays the role of the information-algorithmic and SW-platform support for the design of the self-organization process;


3. Physically, QFI optimally supports a newly developed *thermodynamic trade-off* of control performance (between stability, controllability and robustness) in the self-organization KB process.

From the quantum information theory viewpoint, QFI reduces the redundant information in classical control signals using four facts from quantum information theory (Litvintseva et al., 2007) for data compression in quantum information processing: 1) efficient quantum data compression; 2) coupling (separation) of information in the quantum state in the form of classical and quantum components; 3) the amounts of total, classical, and quantum correlation; and 4) hidden (observable) classical correlation in the quantum state.

We developed the gate structure of the QFI model with self-organization KB properties that includes all of these QA features (see Fig. 3 below), based on the abovementioned proposals and the general structure in Fig. 2.

Let us discuss the following question.

Q. *What is the difference between our approach and natural (or man-made) models of self-organization?*

A. *Main differences* and *features* are as follows:

• In our approach a self-organization process is described as a *logical algorithmic* process of value information *extraction* from hidden layers (*possibilities*) in classical control laws using the quantum decision-making logic of QFI-models based on the main facts of quantum information, quantum computing and QA theories (Level 3, Fig. 1);
• The structure of QFI includes all of the natural elements of self-organization (templating, self-assembly, and self-organization structure) with corresponding quantum operators (superposition of initial states, selection of quantum correlation types and classes, quantum oracles, interference, and measurements) (Level 2, Fig. 1);
• QFI is a new quantum search algorithm (belonging to the so-called *QPB*-class) that can solve classical algorithmically unsolved problems (Level 1, Fig. 1);
• In QFI the self-organization principle is realized using the on-line responses of the dynamic behavior of classical FC's to new control errors in unpredicted control situations for the design of robust intelligent control (see Fig. 2);
• The model of QFI supports the thermodynamic interrelations between *stability*, *controllability* and *robustness* for the design of self-organization processes (Goal description level in Fig. 1).
*Specific features of QFI applications in the design of robust KB.* Let us stress the fundamentally important specific feature of the operation of the QA (in the QFI model) in the design process of robust laws for the coefficient gain schedules of fuzzy PID controllers based on the individual KBs designed by SCO with soft computing (Level 1, Fig. 1).

## **4.2.2 Quantum information resources in QFI algorithm**

In this section we briefly introduce the particularities of quantum computing and quantum information theory that are used in the quantum block QFI (see Fig. 1) supporting a self-organizing capability of the FC in a robust ICS. According to the algorithm described above, the input to the QFI gate is considered (according to Fig. 2) as a superposed quantum state $K^1(t) \otimes K^2(t)$, where $K^{1,2}(t)$ are the outputs from the fuzzy controllers FC1 and FC2 designed by SCO (see Fig. 3 below) for the given control task in different control situations (for example, in the presence of different stochastic noises).

## **4.2.3 Quantum hidden information extraction in QFI**


Using the four facts from quantum information theory, QFI extracts the hidden quantum value information from the classical KB1 and KB2 (see Figure 3).

In this case, between KB1 and KB2 (from the viewpoint of quantum information theory) we organize a communication channel using quantum correlations, which is impossible in classical communication theory. The algorithm of the superposition calculation is presented below and described in detail in (Litvintseva et al., 2007).

We discuss for simplicity the situation in which an arbitrary amount of correlation is unlocked with a one-way message.

Let us consider the communication process between two KBs as communication between two players *A* and *B* (see Figs 2 and 3), and let $d = 2^n$. According to the laws of quantum mechanics, initially we must prepare a quantum state described by a density matrix ρ formed from two classical states (KB1 and KB2). The initial state ρ is shared between subsystems held by *A* (KB1) and *B* (KB2), with respective dimensions *d*,

$$\rho = \frac{1}{2d} \sum_{k=0}^{d-1} \sum_{t=0}^{1} \left( |k\rangle\langle k| \otimes |t\rangle\langle t| \right)_{A} \otimes \left( U_{t} |k\rangle\langle k| U_{t}^{\dagger} \right)_{B}.\tag{5}$$

Here $U_0 = I$ and $U_1$ changes the computational basis to a conjugate basis, $U_1 |k\rangle = \frac{1}{\sqrt{d}} \sum_i (-1)^{i \cdot k} |i\rangle, \ \forall k$.

In this case, *B* chooses *k* randomly from *d* states in two possible random bases, while *A* has complete knowledge of his state. The state (5) can arise from the following scenario. *A* picks a random *n*-bit string *k* and sends *B* either $|k\rangle$ or $H^{\otimes n}|k\rangle$, depending on whether the random bit $t = 0$ or $1$. Player *A* can send *t* to player *B* to unlock the correlation later. Experimentally, the Hadamard transform *H* and measurement on single qubits are sufficient to prepare the state (5), and later to extract the unlocked correlation in ρ′. The initial correlation is small, i.e., $I_{Cl}(\rho) = \frac{1}{2}\log d$. The final amount of information after the complete measurement $M_A$

in one-way communication is $I_{Cl}(\rho') = \log d + 1$, i.e., the amount of *accessible information increases*.
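Equation (5) can be constructed explicitly for the smallest case $n = 1$ ($d = 2$, $U_1 = H$) to verify that it defines a valid density matrix. This is our own illustrative check, not the chapter's software.

```python
import numpy as np

d = 2  # d = 2^n with n = 1
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
U = [np.eye(d), H]  # U_0 = I; U_1 maps to the conjugate (Hadamard) basis

def proj(v):
    """Rank-one projector |v><v|."""
    return np.outer(v, v.conj())

basis = np.eye(d)
rho = np.zeros((2 * d * d, 2 * d * d))
for k in range(d):
    for t in range(2):
        part_A = np.kron(proj(basis[k]), proj(np.eye(2)[t]))  # (|k><k| (x) |t><t|)_A
        part_B = U[t] @ proj(basis[k]) @ U[t].conj().T        # (U_t |k><k| U_t^dag)_B
        rho += np.kron(part_A, part_B)
rho /= 2 * d

# A valid density matrix: unit trace, Hermitian, positive semidefinite.
assert np.isclose(np.trace(rho), 1.0)
assert np.allclose(rho, rho.conj().T)
assert np.linalg.eigvalsh(rho).min() >= -1e-12
```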

This phenomenon is *impossible* classically. However, states exhibiting this behaviour *need not be entangled*, and the corresponding communication can be organized using the Hadamard transform. Therefore, using the Hadamard transformation and a new type of quantum correlation as the communication between a few KB's, it is possible to increase the initial information by unconventional quantum correlation (as the quantum cognitive process of on-line extraction of hidden value information; see, e.g., Fig. 3b). In the present section we consider a simplified case of QFI in which an unlocked correlation in the superposition of two KB's is organized with the Hadamard transform; instead of the difficult-to-define entanglement operation, an equivalent quantum oracle is modelled that estimates an "*intelligent state*" with the maximum amplitude probability in the corresponding superposition of classical states (the minimum entropy principle relative to the extracted quantum knowledge (Litvintseva et al., 2009)). The interference operator extracts this maximum of amplitude probability with a classical measurement.


Figure 4 shows the algorithm for coding, searching and extracting the value information from KB's of fuzzy PID controllers designed by SCO and QCO (quantum computing optimizer).

Fig. 4. The structure of QFI gate


Fig. 3. (a, b). Example of information extraction in QFI

The optimal process of drawing value information from a few KBs designed by soft computing is based on the following four facts from quantum information theory (Litvintseva et al., 2007): (*i*) effective quantum data compression; (*ii*) the splitting of the classical and quantum parts of information in a quantum state; (*iii*) the total correlations in a quantum state are a "mixture" of classical and quantum correlations; and (*iv*) the existence of hidden (locked) classical correlation in a quantum state.

This quantum control algorithm uses these four facts from quantum information theory in the following way: (i) compression of classical information by coding in the computational basis $\{|0\rangle, |1\rangle\}$ and forming the quantum correlation between different computational bases (Fact 1); (ii) separating and splitting the total information and correlations into "classical" and "quantum" parts using the Hadamard transform (Facts 2 and 3); (iii) extracting the unlocked information and the residual redundant information by measuring the classical correlation in the quantum state (Fact 4) using the criterion of maximal corresponding amplitude probability. These facts are the informational resources of the QFI background. Using these facts it is possible to extract an additional amount of quantum value information from the smart KBs produced by SCO in order to design a *wise* control using compression and rejection procedures for the redundant information in a classical control signal.
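Under strong simplifications of our own (scalar encoding of each KB response, a plain Hadamard layer, and a classical argmax standing in for the measurement of maximal amplitude probability), the three steps can be sketched as:

```python
import numpy as np

H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

def encode(p0):
    """Encode a scalar p0 in [0, 1] as the qubit sqrt(p0)|0> + sqrt(1-p0)|1>."""
    return np.array([np.sqrt(p0), np.sqrt(1.0 - p0)])

kb1 = encode(0.9)  # illustrative KB1 response
kb2 = encode(0.6)  # illustrative KB2 response

state = np.kron(kb1, kb2)      # (i) correlated superposition of the two KBs
state = np.kron(H, H) @ state  # (ii) Hadamard ("splitting") layer
probs = np.abs(state) ** 2     # (iii) amplitude probabilities
best = int(np.argmax(probs))   # basis state with maximal probability
print(best, probs[best])
```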

Below we discuss the application of this quantum control algorithm in QFI structure.


## **5. Structures of robust ICS and information design technology of quantum KB self-organization**

The kernel of the abovementioned FC design toolkit is the so-called SCO implementing advanced soft computing ideas. SCO is considered as a new flexible tool for the design of an optimal structure and robust KBs of the FC, based on a chain of genetic algorithms (GAs) with information-thermodynamic criteria for KB optimization and an advanced error BP-algorithm for KB refinement. The input to SCO can be some measured or simulated data (called a "teaching signal" (TS)) about the modelled system. For TS design (or for GA fitness evaluation) we use a stochastic simulation system based on the control object model. A more detailed description of SCO is given in (Litvintseva et al., 2006).
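As a toy illustration of a GA chain of this kind (the fitness surrogate, the encoding, and the operators below are our own placeholders, not SCO's information-thermodynamic criteria or BP refinement):

```python
import random

random.seed(0)

def fitness(params):
    """Toy surrogate: reward gain vectors close to a known-good schedule."""
    target = [2.0, 0.5, 0.1]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def mutate(ind, rate=0.3):
    return [p + random.gauss(0.0, 0.2) if random.random() < rate else p
            for p in ind]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

# Population of candidate KB parameter vectors (here: 3 PID-like gains).
pop = [[random.uniform(0.0, 3.0) for _ in range(3)] for _ in range(30)]
for _ in range(60):  # generations
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]  # selection
    pop = elite + [mutate(crossover(random.choice(elite),
                                    random.choice(elite)))
                   for _ in range(20)]

best_kb = max(pop, key=fitness)
print(best_kb, fitness(best_kb))
```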

Figure 5 illustrates, as an example, the structure and main ideas of a self-organized control system consisting of two FC's coupled in one QFI chain that supplies a self-organizing capability. The CO may be represented in physical form or in the form of a mathematical model. We will use a mathematical model of the CO described in Matlab-Simulink 7.1 (some results are obtained using Matlab-Simulink 6.5).

Fig. 5. Structure of robust ICS based on QFI

Figure 6 shows the structural diagram of the information technology and design stages of the objective KB for robust ICS's based on new types of computational intelligence.

*Remark. Unconventional computational intelligence: Soft and quantum computing technologies.* Soft computing and quantum computing are new types of unconventional computational intelligence (for details see http://www.qcoptimizer.com/). The technology of soft computing is


based on GA, fuzzy neural network, and fuzzy logic inference. Quantum computational intelligence is used quantum search algorithm, quantum neural network, and QFI. These algorithms are includes three main operators. In GA selection, crossover, and mutation operators are used. In quantum search algorithm superposition, entanglement, and interference are used.

Fig. 6. Structure of robust KB information technology design for integrated fuzzy ICS (IFICS) (R.S. – reference signal) Information design technology includes two steps: 1) step 1 based on SCO with soft computing; and 2) step 2 based on SCO with quantum computing.

The main problem in this technology is the design of a robust KB of FC that can include the self-organization of knowledge in unpredicted control situations. The background of this design process is the KB optimizer based on quantum/soft computing. Concrete industrial benchmarks (the 'cart-pole' system, a robotic unicycle, a robotic motorcycle, a mobile robot for service use, a semi-active car suspension system, etc.) have been tested successfully with the developed design technology. In a particular case, the role of Kansei engineering in System of Systems Engineering is demonstrated. An application of the developed toolkit to the design of "*Hu-Machine* technology" based on Kansei engineering is demonstrated for an emotion-generating enterprise (the purpose of the enterprise).

We illustrate the efficiency of QFI application with a particular example. Positive applied results of classical computational technologies (such as soft computing), together with quantum computing technology, have created a new alternative approach: the application of quantum computational intelligence to the optimization of control processes in a classical CO (a physical analogy of the inverse-method investigation "*quantum control system* – *classical* CO").

We also discuss the main goal and properties of the quantum control algorithm for the design of a self-organizing robust KB in an ICS. Benchmarks of robust intelligent control in unpredicted situations are introduced.

Self-Organized Intelligent Robust Control Based on Quantum Fuzzy Inference 207


Therefore the operation area of such a control system can be expanded greatly, as can its robustness. Robustness of the control signal is the background for supporting reliable control accuracy in uncertain environments. The effectiveness of the developed QFI model is illustrated for an important case: the application to the design of a robust control system in unpredicted control situations.

The main technical purpose of QFI is to supply a self-organization capability for many (sometimes unpredicted) control situations based on a few KBs. QFI produces a robust optimal control signal for the current control situation using a reduction procedure and compression of redundant information in the KBs of individual FCs. The process of rejection and compression of redundant information in the KBs uses the laws of quantum information theory. Decreasing the redundant information in a KB-FC increases the robustness of control without loss of important control quality such as reliable control accuracy. As a result, a few KB-FCs with QFI can be adapted to unexpected changes of the external environment and to uncertainty in the initial information.

Let us discuss in detail the design process of a robust KB in unpredicted situations.

## **6. KB self-organization quantum algorithm of FC's based on QFI**

We use the real value of the current input control signal to design the normalized state $\left|0\right\rangle$. To define the probability amplitude $\alpha_0$ we use simulation results of the controlled object behaviour under teaching conditions. In this case, using the control signal values, we can construct histograms of control signals; by taking the integral we obtain the probability distribution function and calculate $\alpha_0 = \sqrt{P_0}$. Then we find $\alpha_1 = \sqrt{1 - P_0}$. After that it is possible to define the state $\left|1\right\rangle$ as shown in Fig. 7 below.
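The coding of a control-signal value into the amplitudes $\alpha_0, \alpha_1$ can be sketched as follows. This is a minimal Python illustration; the helper name and the toy signal history are invented for the example, and an empirical distribution function stands in for the integrated histogram:

```python
import math

def amplitudes_from_history(history, current_value):
    """Estimate probability amplitudes of states |0> and |1> for the
    current control-signal value from a history of recorded values.

    P0 is taken as the empirical probability that a recorded value does
    not exceed the current one (a discrete probability distribution
    function built from the histogram of past values)."""
    below = sum(1 for v in history if v <= current_value)
    p0 = below / len(history)          # empirical P0
    alpha0 = math.sqrt(p0)             # amplitude of |0>
    alpha1 = math.sqrt(1.0 - p0)       # amplitude of |1>
    return alpha0, alpha1

# Toy teaching data: simulated proportional-gain values k_P(t)
history = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8]
a0, a1 = amplitudes_from_history(history, current_value=0.4)
# Normalization |alpha0|^2 + |alpha1|^2 = 1 holds by construction
```
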

Fig. 7. Example of control signal and corresponding probability distribution function

For the QA design of QFI it is necessary to apply additional operations to the partial KB outputs that draw and aggregate the value information from different KBs. Soft computing tools do not contain the corresponding operations. The necessary unitary reversible operations are called *superposition*, *entanglement* (quantum correlation), and *interference*; physically, they are operators of quantum computing.

Consider the main steps of the developed QFI process, which is treated as a QA.

**Step 1.** *Coding*

• Preparation of all normalized states $\left|0\right\rangle$ and $\left|1\right\rangle$ for the current values of the control signal $\{k_P^i(t), k_D^i(t), k_I^i(t)\}$ (the index $i$ denotes the number of the KB) with respect to the chosen knowledge bases and the corresponding probability distributions, including:
(a) calculation of the probability amplitudes $\alpha_0, \alpha_1$ of the states $\left|0\right\rangle$ and $\left|1\right\rangle$ from histograms;
(b) using $\alpha_1$, calculation of the normalized value of the state $\left|1\right\rangle$.

206 Recent Advances in Robust Control – Novel Approaches and Design Methods



**Step 2.** *Choose quantum correlation type for preparation of an entangled state.* Table 1 shows the investigated types of quantum correlations. Take, for example, the following quantum correlation type:

$$\{k_{P}^{1}(t),\; k_{D}^{1}(t),\; k_{P}^{2}(t),\; k_{D}^{2}(t)\} \to k_{P}^{new}(t),$$

where 1 and 2 are indexes of KB.

**Step 3.** *Superposition and entanglement.* According to the chosen quantum correlation type, construct the superposition of entangled states as shown in Fig. 8 a,b, where $\mathbf{H}$ is the Hadamard transform operator. Then the quantum state

$$\left|a_{1}a_{2}a_{3}a_{4}\right\rangle = \left|k_{P}^{1}(t)\,k_{D}^{1}(t)\,k_{P}^{2}(t)\,k_{D}^{2}(t)\right\rangle$$

is considered as a correlated (entangled) state.
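Under the assumption that the joint state of the four coded gains is formed as a tensor product of single-"qubit" amplitude pairs, Step 3 can be sketched numerically. All numbers below are illustrative, not outputs of a real KB:

```python
import math

def kron(a, b):
    """Kronecker (tensor) product of two state vectors."""
    return [x * y for x in a for y in b]

def apply2(m, v):
    """Apply a 2x2 operator to a single-qubit state."""
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

# Hadamard transform operator H used to create superpositions
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

plus = apply2(H, [1.0, 0.0])   # H|0> = equal superposition of |0> and |1>

# Amplitude pairs (alpha0, alpha1) coded in Step 1 for the four gains
# {k_P^1, k_D^1, k_P^2, k_D^2} -- invented numbers for the example
qubits = [(0.6, 0.8), (0.8, 0.6), (0.5, math.sqrt(0.75)), tuple(plus)]

state = [1.0]
for a0, a1 in qubits:                  # joint state |a1 a2 a3 a4>
    state = kron(state, [a0, a1])      # 2**4 = 16 amplitudes

norm_sq = sum(a * a for a in state)    # normalization is preserved
```
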


Table 1 distinguishes three types of quantum correlations used in QFI:

1. QFI based on spatial correlations (states of different KBs taken at the same time instant);
2. QFI based on temporal correlations (states taken at the current time $t$ and at the shifted time $t - \Delta t$);
3. QFI based on spatio-temporal correlations (a combination of both).

Table 1. Types of quantum correlations
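The three correlation types can be illustrated by which values they group together; the gain histories below are invented toy data, indexed by sample step:

```python
# Time-indexed proportional-gain outputs of two KBs (toy values)
kP = {"KB1": [0.2, 0.4, 0.5, 0.7],
      "KB2": [0.3, 0.1, 0.6, 0.2]}

dt_steps = 2   # correlation parameter (delta t expressed in samples)
t = 3          # current time index

# spatial: different KBs, same time instant
spatial = (kP["KB1"][t], kP["KB2"][t])
# temporal: one KB, times t and t - delta t
temporal = (kP["KB1"][t], kP["KB1"][t - dt_steps])
# spatio-temporal: different KBs at shifted times
spatio_temporal = (kP["KB1"][t], kP["KB2"][t - dt_steps])
```
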



Fig. 8. The algorithm of superposition calculation: panels (a) and (b)

**Step 4.** *Interference and measurement*

• Choose the quantum state $\left|a_{1}a_{2}a_{3}a_{4}\right\rangle$ with the maximum amplitude of probability $\alpha_k$.

**Step 5.** *Decoding*

• Calculate the normalized output as the norm of the chosen quantum state vector as follows:

$$k_{P(D,I)}^{new}(t) = \frac{1}{\sqrt{2^{n}}}\sqrt{\langle a_{1}\ldots a_{n} \mid a_{1}\ldots a_{n}\rangle} = \frac{1}{\sqrt{2^{n}}}\sqrt{\sum_{i=1}^{n}\left(a_{i}\right)^{2}}$$

**Step 6.** Denormalization

• Calculate final (denormalized) output result as follows:

$$k_{P}^{output} = k_{P}^{new}(t)\cdot gain_{P},\qquad k_{D}^{output} = k_{D}^{new}(t)\cdot gain_{D},\qquad k_{I}^{output} = k_{I}^{new}(t)\cdot gain_{I}.$$

**Step 6a.** Find robust QFI scaling gains $\{gain_{P}, gain_{D}, gain_{I}\}$ based on a GA and a chosen fitness function.
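Steps 4-6 above can be sketched as follows; this is a toy Python illustration in which the state vector and the gain value are invented and a tie-free maximum is assumed:

```python
import math

def interfere_and_measure(state):
    """Step 4: index of the basis state with maximum probability amplitude."""
    return max(range(len(state)), key=lambda i: abs(state[i]))

def decode(state, n):
    """Step 5: normalized output as the norm of the state vector,
    k_new = (1 / sqrt(2**n)) * sqrt(sum_i a_i**2)."""
    return math.sqrt(sum(a * a for a in state)) / math.sqrt(2 ** n)

def denormalize(k_new, gain):
    """Step 6: k_output = k_new * gain (the gain found by a GA in Step 6a)."""
    return k_new * gain

# Illustrative normalized 2-qubit state (n = 2), not data from a real KB
state = [0.1, 0.8, 0.1, math.sqrt(0.34)]
idx = interfere_and_measure(state)   # dominant basis state
k_new = decode(state, n=2)           # normalized output
k_out = denormalize(k_new, gain=4.0) # denormalized control gain
```
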

In the proposed QFI we investigated the types of quantum correlations shown in Table 1. There the correlations are given for two KBs, but in the general case several KBs may be used; $t_i$ is the current temporal point and $\Delta t$ is a correlation parameter. Let us discuss the particularities of quantum computing that are used in the quantum block QFI (Fig. 4) supporting the self-organizing capability of a fuzzy controller. The optimal process of drawing value information from several KBs, as mentioned above, is based on the following four facts from quantum information theory:

• the effective quantum data compression (*Fact* 1);
• the splitting of the classical and quantum parts of information in a quantum state (*Fact* 2);
• the total correlations in a quantum state are a "mixture" of classical and quantum correlations (*Fact* 3); and
• the existence of hidden (locked) classical correlations in a quantum state, revealed using the criterion of the maximal corresponding probability amplitude (*Fact* 4).


These facts are the informational resources of the QFI background. Using them, it is possible to extract the value information from KB1 and KB2. In this case, between KB1 and KB2 (from the quantum information theory point of view) we organize a communication channel using quantum correlations, which is impossible in classical communication. In the QFI algorithm, an unlocked correlation in the superposition of states is organized with the Hadamard transform. The entanglement operation is modelled as a quantum oracle that can estimate the maximum probability amplitude in the corresponding superposition of entangled states. The interference operator extracts this maximum probability amplitude with a classical measurement.

Thus, from two FC-KBs (produced by SCO for the design of smart control) we can produce wise control by using compression and rejection procedures on the redundant information in the classical control signal. This completes the particularities of quantum computing and quantum information theory that are used in the quantum block supporting the self-organizing capability of the FC.

## **7. Robust FC design toolkit: SC and QC Optimizers for quantum controller's design**

To realize the QFI process we developed a new tool called "QC Optimizer", which is the next generation of the SCO tools.


## **7.1 QC Optimizer Toolkit**

The QC Optimizer Toolkit is based on Quantum & Soft Computing and includes the following:

• soft computing and stochastic fuzzy simulation with information-thermodynamic criteria for robust KB design in the case of a few teaching control situations;
• the QFI-model and its application to a self-organization process based on two or more KBs for robust control in the case of unpredicted control situations.

The internal structure of QC Optimizer is shown in Figs 9 and 10.

Fig. 9. First internal layer of QC Optimizer

Fig. 10. Second internal layer of QC Optimizer

*Remark*. Fig. 9 shows the first internal layer of QC Optimizer (inputs/output). Fig. 10 shows the quantum block realizing the QFI process based on three KBs. In Fig. 10, "delay time = 20 (sec)" corresponds to the parameter $\Delta t$ given in the description of temporal quantum correlations (see Table 1); the knob named "correlation parameters" calls another block (see Fig. 11), where the chosen type of quantum correlations (Table 1) is described. Fig. 11 shows the description of the temporal quantum correlations. Here "kp1_r" denotes the state $\left|0\right\rangle$ for $k_P(t)$ of FC1 (or KB1); "kp1_r_t" denotes the state $\left|0\right\rangle$ for $k_P(t+\Delta t)$ of FC1 (or KB1); "kp1_v" denotes the state $\left|1\right\rangle$ for $k_P(t)$ of FC1 (or KB1); "kp1_v_t" denotes the state $\left|1\right\rangle$ for $k_P(t+\Delta t)$ of FC1 (or KB1); and similarly for FC2 (KB2) and FC3 (KB3).

Fig. 11. Internal structure of "correlation parameters" block

## **7.2 Design of intelligent robust control systems for complex dynamic systems capable to work in unpredicted control situations**

Let us now describe the key points of the Quantum & Soft Computing application in control engineering according to Fig. 6:

• PID gain coefficient schedules (control laws) are described in the form of a Knowledge Base (KB) of a Fuzzy Inference System (realized in a Fuzzy Controller (FC));
• a Genetic Algorithm (GA) with a complicated fitness function is used for KB-FC forming;
• KB-FC tuning is based on Fuzzy Neural Networks using the error BP-algorithm;
• optimization of the KB-FC is based on the *SC Optimizer tools* (Step 1 technology);
• a quantum control algorithm of self-organization is developed based on the QFI-model;
• the QFI-model realized for KB self-organization to a new unpredicted control situation is based on the *QC Optimizer tools* (Step 2 technology).
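The GA-based formation of gain schedules mentioned in these key points can be sketched with a toy genetic algorithm (elitist selection, one-point crossover, Gaussian mutation) that tunes three PID gains against a stand-in quadratic fitness function. The optimum location and all GA settings are invented for the example:

```python
import random

random.seed(0)  # deterministic toy run

def fitness(gains):
    """Stand-in for the 'complicated fitness function': squared distance
    of (kp, kd, ki) from a hypothetical optimum at (2.0, 0.5, 1.0)."""
    kp, kd, ki = gains
    return (kp - 2.0) ** 2 + (kd - 0.5) ** 2 + (ki - 1.0) ** 2

def evolve(pop_size=24, generations=120, lo=0.0, hi=10.0, sigma=0.3):
    """Minimal GA over gain triples bounded to [lo, hi]."""
    pop = [[random.uniform(lo, hi) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                  # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 3)
            child = a[:cut] + b[cut:]                   # one-point crossover
            j = random.randrange(3)                     # Gaussian mutation
            child[j] = min(hi, max(lo, child[j] + random.gauss(0.0, sigma)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()  # near-optimal gain triple for the toy fitness
```
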



In this chapter we briefly introduce the particularities of quantum computing and quantum information theory that are used in the quantum block QFI (see Fig. 12), supporting the self-organizing capability of the FC in a robust ICS.

Fig. 12. QFI-process by using QC Optimizer (QFI kernel)

Using the unconventional computational intelligence toolkit, we propose a solution to this kind of generalization problem by introducing a *self-organization* design process for a robust KB-FC, supported by *Quantum Fuzzy Inference* (QFI) based on Quantum Soft Computing ideas.

As noted above, the main technical purpose of QFI is to supply a self-organization capability for many (sometimes unpredicted) control situations based on a few KBs, using compression and rejection of the redundant information in the KBs of the individual FCs.

At the second stage of design, with the application of the QFI model, we do not yet need to form new production rules. It is sufficient to receive online the response of the production rules in the employed FC to the current unpredicted control situation, in the form of the output control signals of the coefficient-gain schedule in the fuzzy PID controller. In this case, to operate the QFI model, knowledge of the particular production rules fired in the KB is not required, which gives a big advantage: the opportunity of designing control processes with the required robustness level online.

Note that the achievement of the required robustness level in an unpredicted control situation essentially depends, in a number of cases, on the quality and quantity of the employed individual KBs.

Thus, the QA in the QFI model is a physical prototype of production rules, implements a virtual robust KB for a fuzzy PID controller programmatically (for the current unpredicted control situation), and is a problem-independent toolkit. These facts make it possible to use experimental teaching-signal data without designing a mathematical model of the CO. This approach makes QFI applicable to control problems with COs of weakly formalized (ill-defined) structure and a large dimension of the phase space of controlled parameters.

In the present chapter we have described these features. The dominant role of self-organization in the design of robust KBs of intelligent FCs for unpredicted control situations is discussed, and the robustness of the new types of self-organizing intelligent control systems is demonstrated.

## **8. Benchmark simulation**


## **8.1 Control object's model simulation**

Consider the following model of a control object, a nonlinear oscillator:

$$\ddot{x} + \left[2\beta + \alpha\dot{x}^{2} + k_{1}x^{2} - 1\right]\dot{x} + kx = \xi(t) + u(t); \qquad \frac{dS_{x}}{dt} = \left[2\beta + \alpha\dot{x}^{2} + k_{1}x^{2} - 1\right]\dot{x}\cdot\dot{x}, \tag{6}$$

where $\xi(t)$ is a stochastic excitation with an appropriate probability density function, $u(t)$ is a control force, and $S_x$ is the entropy production of the control object $x$. The system described by Eq. (6) has essentially nonlinear dissipative components and exhibits different types of behaviour: if $\beta = 0.5$ (with, for example, $\alpha = 0.3$, $k_1 = 0.2$, $k = 5$), the motion of the dynamic system is asymptotically stable; if $\beta = -1$ (with the other parameters the same as above), the motion is locally unstable.
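The stability claim can be checked numerically. The sketch below integrates Eq. (6) with the excitation and control switched off ($\xi(t) = u(t) = 0$) using semi-implicit Euler; the initial state and step size are arbitrary choices for the illustration:

```python
def simulate(beta, alpha=0.3, k1=0.2, k=5.0, x0=0.5, v0=0.1,
             dt=1e-3, steps=20000):
    """Semi-implicit Euler integration of Eq. (6) with xi(t) = u(t) = 0:
        x'' + [2*beta + alpha*x'**2 + k1*x**2 - 1]*x' + k*x = 0."""
    x, v = x0, v0
    for _ in range(steps):
        a = -(2.0 * beta + alpha * v * v + k1 * x * x - 1.0) * v - k * x
        v += dt * a
        x += dt * v
    return x, v

def energy(x, v, k=5.0):
    """Mechanical energy 0.5*v**2 + 0.5*k*x**2 of the oscillator."""
    return 0.5 * v * v + 0.5 * k * x * x

xs, vs = simulate(beta=0.5)    # asymptotically stable case: energy decays
xu, vu = simulate(beta=-1.0)   # locally unstable case: motion grows
```

For $\beta = 0.5$ the bracketed damping coefficient is non-negative, so the energy decays; for $\beta = -1$ it is negative near the origin, so small motions are amplified.
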

Consider an excited motion of the given dynamic system under hybrid fuzzy PID control. Let the system be disturbed by Rayleigh (non-Gaussian) noise. The stochastic simulation of random excitations with appropriate probability density functions is based on the nonlinear forming-filter methodology developed in (Litvintseva et al., 2006). In the modelling, with the developed toolkit (see Fig. 12), we considered different unforeseen control situations and compared the control performance of FC1, FC2, and the self-organized control system based on QFI with two FCs.
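The chapter shapes its noise with the nonlinear forming-filter methodology of (Litvintseva et al., 2006); as a simpler stand-in, Rayleigh-distributed samples can be drawn directly by inverse-transform sampling:

```python
import math
import random

random.seed(1)  # deterministic toy run

def rayleigh(sigma):
    """One Rayleigh-distributed draw via inverse-transform sampling:
    R = sigma * sqrt(-2 * ln(1 - U)), U ~ Uniform[0, 1)."""
    u = random.random()
    return sigma * math.sqrt(-2.0 * math.log(1.0 - u))

# The empirical mean should approach sigma * sqrt(pi / 2)
sigma = 2.0
samples = [rayleigh(sigma) for _ in range(50000)]
mean = sum(samples) / len(samples)
```
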

FC1 *design*: The model parameters $\beta = 0.5$, $\alpha = 0.3$, $k_1 = 0.2$, $k = 5$ and the initial conditions [2.5], [0.1] are considered. The reference signal is $x_{ref} = 0$. The K-gains ranging area is [0, 10]. Using SC Optimizer and a teaching signal (TS) obtained from the stochastic simulation system with a GA or from experimental data, we design the KB of FC1 that optimally approximates the given TS (from the point of view of the chosen fitness function).


Fig. 13. Motion under different types of control

Fig. 14. Control error in different types of control

FC2 *design*: The following *new* model parameters are used: $\beta = -1$, $\alpha = 0.3$, $k_1 = 0.2$, $k = 5$. The initial conditions are the same: [2.5], [0.1]. The *new* reference signal is $x_{ref} = -1$; the K-gains ranging area is [0, 10].

In the modelling, with the developed toolkit, we considered different unforeseen control situations and compared the control performance of FC1, FC2, and the self-organized control system based on QFI with two FCs.

In Table 2 four different control situations are described.


Table 2. Learning and unpredicted control situation types


## **8.2 Result analysis of simulation**

For *Environments* 2 and 4 (see, Table 1), Figs 13 -15 show the response comparison of FC1, FC2 and QFI-self-organized control system. *Environment* 2 for FC1 is an unpredicted control situation. Figure 9 shows responses of FC's on unpredicted control situation: a *dramatically new* parameter β=− 0.1 (R1 *situation*) in the model of the CO as (3) and with the similar as above Rayleigh external noise. *Environment* 4 and R1 situation are presented also unpredicted control situations for both designed FC1 & FC2.

214 Recent Advances in Robust Control – Novel Approaches and Design Methods

FC2 *design*: The following *new* model parameters: 1 β =− α= = = 1; 0.3; 0.2; 5 *k k* are used. Initial conditions are the same: [2.5] [0.1]. *New* reference signal is as following: 1 *ref x* = − ; K-

In modelling we are considered with developed toolkit different unforeseen control situations and compared control performances of FC1, FC2, and self-organized control

1

1

CO may be represented in physical form or in the form of mathematical model. We will use a mathematical model of CO described in Matlab-Simulink 7.1 (some results are obtained by using Matlab-Simulink 6.5). The kernel of the abovementioned FC design tools is a so-called SC Optimizer (SCO) implementing advanced soft computing ideas. SCO is considered as a new flexible tool for design of optimal structure and robust KBs of FC based on a chain of genetic algorithms (GAs) with information-thermodynamic criteria for KB optimization and advanced error BP-algorithm for KB refinement. Input to SCO can be some measured or simulated data (called as 'teaching signal" (TS)) about the modelling system. For TS design we use stochastic simulation system based on the CO model and GA. More detail description of SCO is given below. The output signal of QFI is provided by new laws of the coefficient gains schedule of the PID controllers (see, in

For *Environments* 2 and 4 (see, Table 1), Figs 13 -15 show the response comparison of FC1, FC2 and QFI-self-organized control system. *Environment* 2 for FC1 is an unpredicted control situation. Figure 9 shows responses of FC's on unpredicted control situation: a *dramatically new* parameter β=− 0.1 (R1 *situation*) in the model of the CO as (3) and with the similar as above Rayleigh external noise. *Environment* 4 and R1 situation are presented also

*Environment 2:*  Rayleigh noise; Ref signal = -1; Model parameters :

*Environment 4:*  Gaussian noise; Ref signal = +0.5; Model parameters:

1; 0.3; *k k* 0.2; 5 β =− α= = =

1; 0.3; *k k* 0.2; 5 β =− α= = =

gains ranging area is [0, 10].

system based on QFI with two FC's.

1

1

details Fig. 2 in what follows).

**8.2 Result analysis of simulation** 

In Table 2 four different control situations are described.

0.5; 0.3; *k k* 0.2; 5 β = α= = =

*Environment 3:*  Gaussian noise; Ref signal = -0.5; Model parameters:

1; 0.3; *k k* 0.2; 5 β =− α= = =

Table 2. Learning and unpredicted control situation types

unpredicted control situations for both designed FC1 & FC2.

*Environment 1:*  Rayleigh noise; Ref signal = 0; Model parameters:

Fig. 13. Motion under different types of control

Fig. 14. Control error in different types of control

Fig. 15. Control laws in different types of environments

Figure 16 shows the responses of the FCs in an unpredicted control situation: a dramatically new parameter β = −0.1 (unpredicted situation R1) in the model of the CO (6), with the Rayleigh external noise as above.

Fig. 16. Control error in unpredicted control situation

Figure 17 shows an example of the operation of the quantum fuzzy controller in forming the robust control signal using the proportional gain in contingency control situation **S3**. In this case, the output signals of knowledge bases 1 and 2, in the form of the response to the new control error in situation **S3**, are received in the quantum FC block. The output of the quantum FC block is the new signal for real-time control of the factor $k_P$. Thus, the blocks of KBs 1 and 2 and the quantum FC in Fig. 17 form the block of self-organization of the knowledge base, with a new synergetic effect in the contingency control situation.

Fig. 17. Example of operation of the block of self-organization of the knowledge base based on quantum fuzzy inference

Figure 18 presents the values of the generalized entropies of the system "CO + FC" calculated in accordance with (6).

The necessary relations between the qualitative and quantitative definitions of the Lyapunov stability, controllability, and robustness of the control processes of a given controlled object are correctly established. Before the achievement of the control goal (the reference control signal equal to −1 in this case), the process of self-learning of the FC and the extraction of

valuable information from the results of the reactions of the two FCs to an unpredicted control situation is implemented on-line with the help of quantum correlation. Since the quantum correlation contains information about the current values of the corresponding gains, the self-organizing FC uses, for the achievement of the control goal, the advantage in performance of FC2 and the aperiodic character of the dynamic behavior of FC1.

Fig. 18. The dynamic behavior of the generalized entropies of the system (CO + FC): (a) temporal generalized entropy; (b) the accumulated value of the generalized entropy

As a consequence, improved control quality is ensured (Karatkevich et al., 2011).

Figure 19 demonstrates the final results of the simulation of the control laws of the coefficient gains for the intelligent PID controller.

Fig. 19. Simulation results of coefficient gains for intelligent PID-controller

The simulation results show that, with QFI, it is possible to design from the outputs of two non-robust KBs an optimal robust control signal with simple wise control laws of the PID coefficient-gain schedule in unpredicted control situations. This holds despite the fact that FC1 loses robustness in Environments 2 and 4 (see Table 2), and both FC1 and FC2 lose robustness in situation R1.

Physically, this is a demonstration of the employment of the minimum-entropy principle relative to the extracted quantum knowledge. From the viewpoint of quantum game theory, we have *Parrondo's* paradox: from two classical KBs that are not winners in different unforeseen environments, with the QFI toolkit we can design one winner as a wise control signal, using a quantum strategy of decision making (without entanglement) (Ulyanov & Mishin, 2011).

This synergetic quantum effect of knowledge self-organization in robust control has also been described for other examples of unstable control systems (for details of the technology description, see the Web site http://www.qcoptimizer.com/). Other examples are described later in (Oppenheim, 2008; Smith & Yard, 2008).

## **9. Conclusions**

1. The QFI block enhances the robustness of FCs using a self-organizing capability and hidden quantum knowledge.
2. The SCO allows us to model different versions of KBs of an FC that guarantee robustness in fixed control environments.
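The benefit of rescheduling the PID coefficient gains on-line in an unpredicted situation can be illustrated with a deliberately simple toy that is *not* the QFI algorithm: a first-order plant whose parameter jumps mid-run, a fixed-gain controller that then loses robustness, and a version whose gain is rescheduled from a crude on-line parameter estimate. All plant values and gain laws below are illustrative assumptions.

```python
def ise(kp_fixed=None, steps=800, dt=0.01, x_ref=-1.0):
    """Integral-squared error for the toy plant xdot = a(t)*x + u, where a(t)
    jumps from 1 to 3 halfway through (an 'unpredicted situation').
    kp_fixed=None activates on-line gain rescheduling."""
    x, total = 2.5, 0.0
    kp = kp_fixed if kp_fixed is not None else 3.0
    for n in range(steps):
        a = 1.0 if n < steps // 2 else 3.0      # parameter jump at mid-run
        e = x_ref - x
        u = kp * e
        x_new = x + (a * x + u) * dt
        if kp_fixed is None and abs(x) > 1e-6:
            a_est = ((x_new - x) / dt - u) / x  # recovers a(t) exactly here
            kp = a_est + 2.0                    # re-place closed-loop pole at -2
        x = x_new
        total += e * e * dt
    return total

fixed = ise(kp_fixed=2.0)   # stabilizes a = 1 but diverges once a = 3
scheduled = ise()           # rescheduled gain tracks the parameter jump
```

The fixed gain is a "winner" only in the learning environment, while the rescheduled gain remains robust across the jump; the chapter's QFI block plays the role of the (far cruder) estimator-and-reschedule line in this sketch.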


3. The designed FC based on QFI achieves the prescribed control objectives in many unpredicted control situations.
4. Using the SCO and QFI, we can design *wise control* of essentially nonlinear stable and, especially, unstable dynamic systems in the presence of information uncertainty about external excitations and of dramatically changing control goals, model parameters, and emergencies.
5. A QFI-based FC requires a minimum of initial information about the external environment and the internal structure of the control object, and adopts the computing speed-up and the power of the quantum control algorithm in KB self-organization.

## **10. References**

Karatkevich, S.G.; Litvintseva, L.V. & Ulyanov, S.V. (2011). Intelligent control system: II. Design of self-organized robust knowledge bases in contingency control situations. *Journal of Computer and Systems Sciences International*, Vol. 50, No 2, pp. 250–292, ISSN 1064-2307

Litvintseva, L.V. & Ulyanov, S.V. (2009). Intelligent control system: I. Quantum computing and self-organization algorithm. *Journal of Computer and Systems Sciences International*, Vol. 48, No 6, pp. 946–984, ISSN 1064-2307

Litvintseva, L.V.; Ulyanov, S.V. & Ulyanov, S.S. (2006). Design of robust knowledge bases of fuzzy controllers for intelligent control of substantially nonlinear dynamic systems: II. A soft computing optimizer and robustness of intelligent control systems. *Journal of Computer and Systems Sciences International*, Vol. 45, No 5, pp. 744–771, ISSN 1064-2307

Litvintseva, L.V.; Ulyanov, S.V. & Ulyanov, S.S. (2007). Quantum fuzzy inference for knowledge base design in robust intelligent controllers. *Journal of Computer and Systems Sciences International*, Vol. 46, No 6, pp. 908–961, ISSN 1064-2307

Oppenheim, J. (2008). For quantum information, two wrongs can make a right. *Science*, Vol. 321, No 5897, pp. 1783–1784, ISSN 0036-8075

Smith, G. & Yard, J. (2008). Quantum communication with zero-capacity channels. *Science*, Vol. 321, No 5897, pp. 1812–1815, ISSN 0036-8075

Ulyanov, S.V. & Mishin, A.A. (2011). Self-organization robust knowledge base design for fuzzy controllers in unpredicted control situations based on quantum fuzzy inference. *Applied and Computational Mathematics: An International Journal*, Vol. 10, No 1, pp. 164–174, ISSN 1683-3511

**10** 

## **New Practical Integral Variable Structure Controllers for Uncertain Nonlinear Systems**

Jung-Hoon Lee *Gyeongsang National University South Korea* 

## **1. Introduction**

Stability analysis and controller design for uncertain nonlinear systems is still an open problem (Vidyasagar, 1986). Numerous design methodologies exist for the controller design of nonlinear systems (Kokotovic & Arcak, 2001). These include any of a large number of linear design techniques (Anderson & Moore, 1990; Horowitz, 1991) used in conjunction with gain scheduling (Rugh & Shamma, 2000); nonlinear design methodologies such as the Lyapunov function approach (Vidyasagar, 1986; Kokotovic & Arcak, 2001; Cai et al., 2008; Gutman, 1979; Slotine & Li, 1991; Khalil, 1996), the feedback linearization method (Hunt et al., 1987; Isidori, 1989; Slotine & Li, 1991), dynamics inversion (Slotine & Li, 1991), backstepping (Lijun & Chengkand, 2008), adaptive techniques, which encompass both linear adaptive (Narendra, 1994) and nonlinear adaptive control (Zheng & Wu, 2009), sliding mode control (SMC) (Utkin, 1978; Decarlo et al., 1988; Young et al., 1996; Drazenovic, 1969; Toledo & Linares, 1995; Bartolini & Ferrara, 1995; Lu & Spurgeon, 1997), etc. (Hu & Martin, 1999; Sun, 2009; Chen, 2003).

Sliding mode control can provide an effective means for the problem of controlling uncertain nonlinear systems under parameter variations and external disturbances (Utkin, 1978; Decarlo et al., 1988; Young et al., 1996). One of its essential advantages is the robustness of the controlled system to variations of parameters and external disturbances in the sliding mode on the predetermined sliding surface, $s = 0$ (Drazenovic, 1969). In the VSS there are two main problems, i.e., the reaching phase at the initial stage (Lee & Youn, 1994) and chattering of the input (Chern & Wu, 1992). To remove the reaching phase, two requirements are needed: the sliding surface must be determined from any given initial state to the origin ($x(0) = x_0$ and $s(x)|_{t=0} = 0$), and the control input must satisfy the existence condition of the sliding mode on the pre-selected sliding surface for all time from the initial to the final time ($s^T \dot{s} < 0$ for $t \ge 0$).

In (Toledo & Linares, 1995), the sliding mode approach is applied to nonlinear output regulator schemes. The underlying concept is that of designing a sliding submanifold which contains the zero-tracking-error sub-manifold. The convergence to a sliding manifold can be attained relying on a control strategy based on a simplex of control vectors for multi-input uncertain nonlinear systems (Bartolini & Ferrara, 1995). A nonlinear optimal integral variable

New Practical Integral Variable Structure Controllers for Uncertain Nonlinear Systems 223


structure controller with an arbitrary sliding surface without the reaching phase was proposed for uncertain linear plants (Lee, 1995). Lu and Spurgeon (1997) considered the robustness of dynamic sliding mode control of nonlinear systems in differential input-output form with additive uncertainties in the model. The discrete-time implementation of a second-order sliding mode control scheme is analyzed for uncertain nonlinear systems in (Bartolini et al., 2001). Adamy and Flemming (2004) surveyed so-called soft variable structure controls and compared them to others. The tracker control problem, i.e., the regulation control problem from an arbitrary initial state to an arbitrary final state without the reaching phase, is handled and solved for uncertain SISO linear plants in (Lee, 2004). For second-order uncertain nonlinear systems with mismatched uncertainties, a switching control law between a first-order sliding mode control and a second-order sliding mode control is proposed to obtain global or local asymptotic stability (Wang et al., 2007). An optimal SMC for nonlinear systems with time delay is suggested in (Tang et al., 2008). A nonlinear time-varying sliding sector is designed for continuous control of a single-input nonlinear time-varying input-affine system which can be represented in the form of state-dependent linear time-variant systems with matched uncertainties (Pan et al., 2009). For uncertain affine nonlinear systems with mismatched uncertainties and matched disturbances, a systematic design of the SMC is reported in (Lee, 2010a). Two clear proofs of the existence condition of the sliding mode with respect to the two transformations, i.e., the two diagonalization methods, are given for multi-input uncertain linear plants (Lee, 2010b), whereas the corresponding proofs in (Utkin, 1978) and (Decarlo et al., 1988) for uncertain nonlinear plants are less clear.

Until now, the integral action has not been introduced into the variable structure system for uncertain nonlinear systems with mismatched uncertainties and matched disturbances to improve the output performance by removing the reaching-phase problems. Moreover, a nonlinear output feedback controller design for uncertain nonlinear systems with mismatched uncertainties and matched disturbances has not been presented.

In this chapter, a systematic general design of new integral nonlinear full-state (output) feedback variable structure controllers based on a state-dependent nonlinear form is presented for the control of uncertain affine nonlinear systems with mismatched uncertainties and matched disturbances. After an affine uncertain nonlinear system is represented in the form of a state-dependent nonlinear system, a systematic design of a new nonlinear full-state (output) feedback variable structure controller is presented. To obtain linear closed-loop resultant dynamics, full-state (output) feedback (transformed) integral linear sliding surfaces are applied in order to remove the reaching phase; these stem from the studies by (Lee & Youn, 1994; Lee, 2010b), which were the first works to remove the reaching phase by introducing an initial condition for the integral state. The corresponding discontinuous (transformed) control inputs are proposed to satisfy the closed-loop exponential stability and the existence condition of the sliding mode on the full-state (output) feedback integral sliding surfaces, which will be investigated in Theorem 1 and Theorem 2. For practical application to real plants, removing the chattering problem requires a continuous approximation instead of the discontinuous input that is an inherent property of the VSS. Using the saturation function, a form different from that of (Chern & Wu, 1992) is suggested for the continuous approximation. The two main problems of the VSS are thus removed and solved. Through design examples and simulation studies, the usefulness of the proposed practical integral nonlinear VSS controller is verified.
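The saturation-based continuous approximation mentioned above can be sketched as follows. The gain, boundary-layer width ε, and the reduced first-order surface dynamics are illustrative assumptions, not the chapter's design:

```python
def sgn(s):
    """Discontinuous switching function used in the ideal VSS input."""
    return 1.0 if s > 0 else (-1.0 if s < 0 else 0.0)

def sat(s, eps=0.1):
    """Continuous boundary-layer approximation of sgn: linear for
    |s| <= eps, saturated outside, which suppresses chattering."""
    return s / eps if abs(s) <= eps else sgn(s)

def settle(switch, k=5.0, dt=0.01, steps=300, s0=0.97):
    """Euler run of the reduced surface dynamics sdot = -k * switch(s)."""
    s, traj = s0, []
    for _ in range(steps):
        s += -k * switch(s) * dt
        traj.append(s)
    return traj

chat = settle(sgn)    # oscillates around s = 0 with step-size amplitude
smooth = settle(sat)  # decays geometrically once inside the boundary layer
```

The discontinuous input leaves a persistent oscillation of amplitude on the order of the integration step, while the saturated input converges smoothly inside the layer; this is the trade-off (exact invariance vs. chattering-free input) behind the continuous approximation used later in the chapter.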

### **2. Practical integral nonlinear variable structure systems**

### **2.1 Descriptions of plants**



Consider an affine uncertain nonlinear system

$$
\dot{x} = f'(x,t) + g(x,t)\,u + \overline{d}(x,t), \qquad x(0) \tag{1}
$$

$$y = \mathbf{C} \cdot \mathbf{x}, \qquad y(0) = \mathbf{C} \cdot \mathbf{x}(0) \tag{2}$$

where $x \in R^n$ is the state and $x(0)$ its initial condition, $y \in R^q$, $q \le n$, is the output and $y(0)$ its initial condition, $u \in R^1$ is the control to be determined, the mismatched uncertainty $f'(x,t) \in C^k$ and the matched uncertainty $g(x,t) \in C^k$, $k \ge 1$, with $g(x,t) \ne 0$ for all $x \in R^n$ and all $t \ge 0$, are of suitable dimensions, and $\overline{d}(x,t)$ denotes bounded matched external disturbances.

**Assumption** (Pan et al., 2009)

**A1**: $f'(x,t) \in C^k$ is continuously differentiable and $f'(0,t) = 0$ for all $t \ge 0$. Then, the uncertain nonlinear system (1) can be represented in the affine nonlinear state-dependent coefficient form (Pan et al., 2009; Hu & Martin, 1999; Sun, 2009)

$$
\begin{aligned}
\dot{x} &= f(x,t)\,x + g(x,t)\,u + \overline{d}(x,t), \qquad x(0) \\
&= [f_0(x,t) + \Delta f(x,t)]\,x + [g_0(x,t) + \Delta g(x,t)]\,u + \overline{d}(x,t) \\
&= f_0(x,t)\,x + g_0(x,t)\,u + d(x,t)
\end{aligned} \tag{3}
$$

$$y = C \cdot x \tag{4}$$

$$d(x,t) = \Delta f(x,t)\,x + \Delta g(x,t)\,u + \overline{d}(x,t) \tag{5}$$

where $f_0(x,t)$ and $g_0(x,t)$ are the nominal values such that $f'(x,t) = [f_0(x,t) + \Delta f(x,t)]\,x$ and $g(x,t) = [g_0(x,t) + \Delta g(x,t)]$, respectively, $\Delta f(x,t)$ and $\Delta g(x,t)$ are the mismatched or matched uncertainties, and $d(x,t)$ is the mismatched lumped uncertainty.

**Assumption:** 

**A2**: The pair $(f_0(x,t), g_0(x,t))$ is controllable and $(f_0(x,t), C)$ is observable for all $x \in R^n$ and all $t \ge 0$ (Sun, 2009).

**A3**: The lumped uncertainty $d(x,t)$ is bounded.

**A4**: $x$ is bounded if $u$ and $d(x,t)$ are bounded.

### **2.2 Full-state feedback practical integral variable structure controller**

### **2.2.1 Full-state feedback integral sliding surface**

For use later, the integral term of the full-state is augmented as

$$\mathbf{x}_0 = \int_0^t \mathbf{x}(\tau)d\tau + \int_{-\infty}^0 \mathbf{x}(\tau)d\tau = \int_0^t \mathbf{x}(\tau)d\tau + \mathbf{x}_0(0) \tag{6}$$

To control uncertain nonlinear system (1) or (3) with a linear closed loop dynamics and without reaching phase, the full-state feedback integral sliding surface used in this design is as follows:

$$S_f = L_1\mathbf{x} + L_0\mathbf{x}_0 = \begin{bmatrix} L_1 & L_0 \end{bmatrix} \cdot \begin{bmatrix} \mathbf{x} \\ \mathbf{x}_0 \end{bmatrix} \tag{7}$$

where

$$\mathbf{x}_0(0) = -L_0^- L_1\mathbf{x}(0) \tag{7a}$$
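As a numerical aside, the role of the initialization (7a) can be checked in a few lines: with $x_0(0) = -L_0^- L_1 x(0)$, the surface (7) starts at exactly zero, so there is no reaching phase. This is only an illustrative sketch; the vectors $L_1$, $L_0$, and $x(0)$ below are made-up values, and $W = I$ is assumed in the pseudo-inverse $L_0^- = WL_0^T(L_0WL_0^T)^{-1}$.

```python
# Illustrative check that the initialization (7a) zeroes the sliding
# surface (7) at t = 0 (all numerical values are made up).

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

L1 = [1.0, 2.0]        # design row vector on the state x
L0 = [3.0, 1.0]        # design row vector on the integral state x0
x_init = [0.7, -1.3]   # arbitrary initial state x(0)

# Right pseudo-inverse with W = I:  L0^- = L0^T / (L0 L0^T), so L0 L0^- = 1
L0_pinv = [v / dot(L0, L0) for v in L0]

# (7a): x0(0) = -L0^- L1 x(0)
x0_init = [-v * dot(L1, x_init) for v in L0_pinv]

# (7): S_f = L1 x + L0 x0 evaluated at t = 0
S_f0 = dot(L1, x_init) + dot(L0, x0_init)
print(abs(S_f0) < 1e-12)   # True: the state starts on S_f = 0
```

Any other choice of $x_0(0)$ would leave $S_f(0) \neq 0$ and reintroduce a reaching phase.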

New Practical Integral Variable Structure Controllers for Uncertain Nonlinear Systems 225


and $L_0^- = W L_0^T (L_0 W L_0^T)^{-1}$, which stems from the work by (Lee & Youn, 1994). At $t = 0$, the full-state feedback integral sliding surface is zero; hence, one of the two requirements is satisfied. Without the initial condition of the integral state, the reaching phase is not removed unless the initial state lies exactly on the sliding surface. With the initial condition (7a) for the integral state, the removal of the reaching phase was first reported by (Lee & Youn, 1994), where it was applied to the VSS for uncertain linear plants. In (7), $L_1$ is a non-zero element chosen as a design parameter such that the following assumption is satisfied.

### **Assumption**

**A5:** $L_1 g(x,t)$ and $L_1 g_0(x,t)$ have full rank, i.e. they are invertible.

**A6**: $\Delta I = [L_1 g_0(x,t)]^{-1} L_1 \Delta g(x,t)$ and $\|\Delta I\| \le \xi < 1$.

In (7), the design parameters *L*1 and *L*0 satisfy the following relationship

$$L\_1 \left[ f\_\circ(\mathbf{x}, t) - g\_\circ(\mathbf{x}, t) K(\mathbf{x}) \right] + L\_\circ = 0 \tag{8a}$$

$$L\_0 = -L\_1 \left[ f\_o(\mathbf{x}, t) - g\_o(\mathbf{x}, t) \mathbf{K}(\mathbf{x}) \right] = -L\_1 f\_c(\mathbf{x}, t) \tag{8b}$$

$$f_c(\mathbf{x},t) = \left[f_0(\mathbf{x},t) - g_0(\mathbf{x},t)K(\mathbf{x})\right] \tag{8c}$$

The equivalent control input is obtained from $\dot{S}_f = 0$ as (Decarlo et al., 1998)

$$u_{eq} = -\left[L_1 g(\mathbf{x},t)\right]^{-1}\left[L_1 f_0(\mathbf{x},t) + L_0\right]\mathbf{x} - \left[L_1 g(\mathbf{x},t)\right]^{-1} L_1\Delta f(\mathbf{x},t)\mathbf{x} - \left[L_1 g(\mathbf{x},t)\right]^{-1} L_1\overline{d}(\mathbf{x},t) \tag{9}$$

This control input cannot be implemented because of the uncertainties, but it is used to obtain the ideal sliding dynamics. The ideal sliding mode dynamics of the sliding surface (7) can be derived by the equivalent control approach (Lee, 2010a) as

$$\dot{\mathbf{x}}_s = \left[f_0(\mathbf{x}_s,t) - g_0(\mathbf{x}_s,t)\left[L_1 g(\mathbf{x}_s,t)\right]^{-1}\left\{L_1 f_0(\mathbf{x}_s,t) + L_0\right\}\right]\mathbf{x}_s, \qquad \mathbf{x}_s(0) = \mathbf{x}(0) \tag{10}$$

$$\dot{\mathbf{x}}_s = \left[f_0(\mathbf{x}_s,t) - g_0(\mathbf{x}_s,t)K(\mathbf{x}_s)\right]\mathbf{x}_s = f_c(\mathbf{x}_s,t)\mathbf{x}_s, \qquad \mathbf{x}_s(0) = \mathbf{x}(0) \tag{11}$$

$$K(\mathbf{x}_s) = \left[L_1 g(\mathbf{x}_s,t)\right]^{-1}\left\{L_1 f_0(\mathbf{x}_s,t) + L_0\right\} \tag{12}$$

The solution of (10) or (11) identically defines the integral sliding surface. Hence, designing the sliding surface to be stable amounts to designing this ideal sliding dynamics to be stable, and the reverse argument also holds. To choose the stable gain based on the Lyapunov stability theory, the ideal sliding dynamics (10) or (11) is represented by the nominal plant of (3) as

$$\begin{aligned} \dot{\mathbf{x}} &= f_0(\mathbf{x},t)\mathbf{x} + g_0(\mathbf{x},t)u, \qquad u = -K(\mathbf{x})\mathbf{x} \\ &= f_c(\mathbf{x},t)\mathbf{x}, \qquad f_c(\mathbf{x},t) = f_0(\mathbf{x},t) - g_0(\mathbf{x},t)K(\mathbf{x}) \end{aligned} \tag{13}$$

To select the stable gain, take a Lyapunov function candidate as

$$V(\mathbf{x}) = \mathbf{x}^{\top} P \mathbf{x}, \qquad \quad P > 0 \tag{14}$$

The derivative of (14) becomes

$$\dot{V}(\mathbf{x}) = \mathbf{x}^{\top} \left[ f\_{\mathbf{o}}^{\top}(\mathbf{x}, t)P + Pf\_{\mathbf{o}}(\mathbf{x}, t) \right] \mathbf{x} + \mathbf{u}^{\top} g\_{\mathbf{o}}^{\top}(\mathbf{x}, t)P\mathbf{x} + \mathbf{x}^{\top} P g\_{\mathbf{o}}(\mathbf{x}, t)\mathbf{u} \tag{15}$$

By the Lyapunov control theory (Slotine & Li, 1991), take the control input as

$$u = -g_0^\top(\mathbf{x},t)P\mathbf{x} \tag{16}$$

and $Q(x,t) > 0$ and $Q_c(x,t) > 0$ for all $x \in R^n$ and all $t \ge 0$ are defined by

$$f_0^\top(\mathbf{x},t)P + Pf_0(\mathbf{x},t) = -Q(\mathbf{x},t) \tag{17}$$

$$f_c^\top(\mathbf{x},t)P + Pf_c(\mathbf{x},t) = -Q_c(\mathbf{x},t) \tag{18}$$

then

224 Recent Advances in Robust Control – Novel Approaches and Design Methods


$$\begin{aligned} \dot{V}(\mathbf{x}) &= -\mathbf{x}^\top Q(\mathbf{x},t)\mathbf{x} - 2\mathbf{x}^\top Pg_0(\mathbf{x},t)g_0^\top(\mathbf{x},t)P\mathbf{x} \\ &= -\mathbf{x}^\top[Q(\mathbf{x},t) + 2Pg_0(\mathbf{x},t)g_0^\top(\mathbf{x},t)P]\mathbf{x} \\ &= -\mathbf{x}^\top[f_c^\top(\mathbf{x},t)P + Pf_c(\mathbf{x},t)]\mathbf{x} \\ &= -\mathbf{x}^\top Q_c(\mathbf{x},t)\mathbf{x} \\ &\le -\lambda_{\min}\{Q_c(\mathbf{x},t)\}\,\|\mathbf{x}\|^2 \end{aligned} \tag{19}$$

where $\lambda_{\min}\{Q_c(x,t)\}$ denotes the minimum eigenvalue of $Q_c(x,t)$. Therefore, the stable static nonlinear feedback gain is chosen as

$$K(\mathbf{x}) = g_0^\top(\mathbf{x},t)P \quad \text{or} \quad K(\mathbf{x}) = \left[L_1 g_0(\mathbf{x},t)\right]^{-1}\left\{L_1 f_0(\mathbf{x},t) + L_0\right\} \tag{20}$$
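The chain of equalities in (19) rests on the algebraic identity $f_c^\top P + Pf_c = -(Q + 2Pg_0g_0^\top P)$ when $K = g_0^\top P$. The following minimal sketch confirms this term by term with small constant example matrices; all numerical values are illustrative and not taken from the chapter.

```python
# Numerical check of the closed-loop Lyapunov identity behind (17)-(19):
#   f_c = f0 - g0 K,  K = g0^T P  (gain (20))
#   f_c^T P + P f_c = -(Q + 2 P g0 g0^T P)
# f0, g0, P below are arbitrary example values.

f0 = [[0.0, 1.0], [-2.0, -3.0]]
g0 = [[0.0], [1.0]]
P  = [[2.0, 0.5], [0.5, 1.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_scale(c, A):
    return [[c * a for a in row] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

K   = mat_mul(transpose(g0), P)                                              # gain (20)
f_c = mat_add(f0, mat_scale(-1.0, mat_mul(g0, K)))
Q   = mat_scale(-1.0, mat_add(mat_mul(transpose(f0), P), mat_mul(P, f0)))    # (17)
Q_c = mat_scale(-1.0, mat_add(mat_mul(transpose(f_c), P), mat_mul(P, f_c)))  # (18)

# Per (19): Q_c must equal Q + 2 P g0 g0^T P
rhs = mat_add(Q, mat_scale(2.0, mat_mul(mat_mul(P, g0), mat_mul(transpose(g0), P))))
err = max(abs(Q_c[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err < 1e-12)
```

In a real design one would additionally verify that the chosen $P$ makes $Q$ (and hence $Q_c$) positive definite.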

### **2.2.2 Full-state feedback transformed discontinuous control input**

The corresponding control input with the transformed gains is proposed as follows:

$$u_f = -K(\mathbf{x})\mathbf{x} - \Delta K\mathbf{x} - K_1 S_f - K_2\,\mathrm{sign}(S_f) \tag{21}$$

where *K x*( ) is a static nonlinear feedback gain, Δ*K* is a discontinuous switching gain, *K*1 is a static feedback gain of the sliding surface itself, and *K*2 is a discontinuous switching gain, respectively as

$$\Delta K = \left[L_1 g_0(\mathbf{x},t)\right]^{-1}\left[\Delta k_i\right], \qquad i = 1,\dots,n \tag{22}$$

$$\Delta k_i \begin{cases} \ge \dfrac{\max\left\{L_1\Delta f(\mathbf{x},t) - L_1\Delta g(\mathbf{x},t)K(\mathbf{x})\right\}_i}{\min\{I + \Delta I\}}, & \mathrm{sign}(S_f x_i) > 0 \\[2mm] \le \dfrac{\min\left\{L_1\Delta f(\mathbf{x},t) - L_1\Delta g(\mathbf{x},t)K(\mathbf{x})\right\}_i}{\min\{I + \Delta I\}}, & \mathrm{sign}(S_f x_i) < 0 \end{cases} \tag{23}$$

$$K_1 = \left[L_1 g(\mathbf{x},t)\right]^{-1}K_1', \qquad K_1' > 0 \tag{24}$$



$$K_2 = \left[L_1 g(\mathbf{x},t)\right]^{-1}K_2', \qquad K_2' = \frac{\max\left\{\left|L_1\overline{d}(\mathbf{x},t)\right|\right\}}{\min\{I + \Delta I\}} \tag{25}$$

which is transformed for easy proof of the existence condition of the sliding mode on the chosen sliding surface as the works of (Utkin, 1978; Decarlo et al., 1988; Lee, 2010b). The real sliding dynamics by the proposed control with the linear integral sliding surface is obtained as follows:

$$\begin{aligned} \dot{S}_f &= L_1\dot{\mathbf{x}} + L_0\mathbf{x} \\ &= L_1[f_0(\mathbf{x},t)\mathbf{x} + \Delta f(\mathbf{x},t)\mathbf{x} + g(\mathbf{x},t)u + \overline{d}(\mathbf{x},t)] + L_0\mathbf{x} \\ &= L_1\big[f_0(\mathbf{x},t)\mathbf{x} + \Delta f(\mathbf{x},t)\mathbf{x} + g(\mathbf{x},t)\{-K(\mathbf{x})\mathbf{x} - \Delta K\mathbf{x} - K_1 S_f - K_2\,\mathrm{sign}(S_f)\} + \overline{d}(\mathbf{x},t)\big] + L_0\mathbf{x} \\ &= L_1\Delta f(\mathbf{x},t)\mathbf{x} - L_1\Delta g(\mathbf{x},t)K(\mathbf{x})\mathbf{x} - [I + \Delta I]L_1 g_0(\mathbf{x},t)\Delta K\mathbf{x} \\ &\quad - [I + \Delta I]L_1 g_0(\mathbf{x},t)K_1 S_f + L_1\overline{d}(\mathbf{x},t) - [I + \Delta I]L_1 g_0(\mathbf{x},t)K_2\,\mathrm{sign}(S_f) \end{aligned} \tag{26}$$

The closed-loop stability by the proposed control input with the sliding surface, together with the existence condition of the sliding mode, will be investigated in Theorem 1.

**Theorem 1**: *If the sliding surface (7) is designed to be stable, i.e. with a stable design of $K(x)$, then the proposed input (21) with Assumptions A1-A6 satisfies the existence condition of the sliding mode on the integral sliding surface and guarantees exponential stability*.

Proof (Lee, 2010b): Take a Lyapunov function candidate as

$$V(\mathbf{x}) = \frac{1}{2}S_f^\top S_f \tag{27}$$

Differentiating (27) with respect to time and substituting (26) leads to

$$\begin{aligned} \dot{V}(\mathbf{x}) &= S_f^\top\dot{S}_f \\ &= S_f^\top L_1\Delta f(\mathbf{x},t)\mathbf{x} - S_f^\top L_1\Delta g(\mathbf{x},t)K(\mathbf{x})\mathbf{x} - S_f^\top[I + \Delta I]L_1 g_0(\mathbf{x},t)\Delta K\mathbf{x} \\ &\quad - S_f^\top[I + \Delta I]L_1 g_0(\mathbf{x},t)K_1 S_f + S_f^\top L_1\overline{d}(\mathbf{x},t) - S_f^\top[I + \Delta I]L_1 g_0(\mathbf{x},t)K_2\,\mathrm{sign}(S_f) \\ &\le -\varepsilon K_1'\,\|S_f\|^2, \qquad \varepsilon = \min\{\|I + \Delta I\|\} \\ &= -\varepsilon K_1'\,S_f^\top S_f \\ &= -2\varepsilon K_1'\,V(\mathbf{x}) \end{aligned} \tag{28}$$

The second requirement to remove the reaching phase is satisfied. Therefore, the reaching phase is completely removed. There are no reaching phase problems. As a result, the real output dynamics can be exactly predetermined by the ideal sliding output with the matched uncertainty. From (28), the following equations are obtained as

$$\dot{V}(\mathbf{x}) + 2\varepsilon K_1' V(\mathbf{x}) \le 0 \tag{29}$$

$$V(\mathbf{x}(t)) \le V(\mathbf{x}(0))e^{-2\varepsilon K_1' t} \tag{30}$$
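The step from the differential inequality (29) to the exponential envelope (30) can be sanity-checked numerically. The sketch below (with illustrative values for $\varepsilon$ and $K_1'$) integrates the worst case $\dot V = -2\varepsilon K_1' V$ by forward Euler steps and verifies the trajectory never exceeds the envelope:

```python
# Illustrative check of (29)-(30): the Euler-integrated worst case
# V' = -2 eps K1' V stays below V(0) exp(-2 eps K1' t).
import math

eps, K1p = 0.8, 1.5        # example values for epsilon and K1'
rate = 2.0 * eps * K1p
V, t, dt = 4.0, 0.0, 1e-3  # V(0) = 4, step 1 ms
V0 = V
ok = True
for _ in range(2000):      # simulate 2 seconds
    V += dt * (-rate * V)
    t += dt
    # (1 - x) <= exp(-x), so each Euler step stays under the envelope
    ok = ok and V <= V0 * math.exp(-rate * t) + 1e-12
print(ok)
```

Since $(1 - h\,\mathrm{rate})^n \le e^{-n h\,\mathrm{rate}}$, the discrete trajectory is guaranteed to sit below the continuous bound, which is exactly what the loop confirms.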

And the second order derivative of *V x*( ) becomes


11 1 1 *K Lgxt K K* ( , ) ', ' 0 <sup>−</sup>

[ ] <sup>1</sup> { <sup>1</sup> } 21 2 2 max | ( , )| ( , ) ', ' min{ }

which is transformed for easy proof of the existence condition of the sliding mode on the chosen sliding surface as the works of (Utkin, 1978; Decarlo et al., 1988; Lee, 2010b). The real sliding dynamics by the proposed control with the linear integral sliding surface is obtained

1 0 1 2 0

*f f*

*L f xtx f xtx g x t K x x Kx K S K signS d xt Lx*

= +Δ + − −Δ − − + + ⎡ ⎤ ⎣ ⎦

( ,) ( ,) ( ,) ( ) ( ) ( ,)

1 1 1 0 10 1

*L f x t x L g x t K x x I I L g x t Kx I I L g x t K S*

*f*

The closed loop stability by the proposed control input with sliding surface together with

**Theorem 1**: *If the sliding surface (7) is designed in the stable, i.e. stable design of K x*( ) *, the proposed input (21) with Assumption A1-A6 satisfies the existence condition of the sliding mode on the* 

<sup>1</sup> ( ) <sup>2</sup>

1 1 1 0

= Δ − Δ − +Δ Δ

[ ] ( ,) ( ,) [ ] ( ,) ( )

The second requirement to remove the reaching phase is satisfied. Therefore, the reaching phase is completely removed. There are no reaching phase problems. As a result, the real output dynamics can be exactly predetermined by the ideal sliding output with the matched

> <sup>1</sup> *Vx K Vx* () 2 ' () 0 + ε

*S L f x t x S L g x t K x x S I I L g x t Kx*

10 1 1 10 2

*f f f f f*

*S I I L g x t K S S L d x t S I I L g x t K sign S*

Differentiating (27) with respect to time leads to and substituting (26) into (28)

( ,) ( ,) ( ) [ ] ( ,)

− +Δ + − +Δ

 ε*K S I I*

*TT T ff f T T T*

*f*

*L f xtx Lg xtKxx Lx L f xtx L gxtKxx*

 ( ,) ( ,) ( ,) ( ,) ( ) ( ,) ( ,) ( ) [ ] ( ,) [ ] ( ,)

*L g x t Kx L g x t K S L d x t L g x t K sign S*

the existence condition of the sliding mode will be investigated in next Theorem 1.

*K Lgxt K K I I*

= > (24)

*Ldxt*

*f f*

*<sup>T</sup> Vx SS* = *f f* (27)

≤ (29)

*f*

(26)

(28)

<sup>=</sup> <sup>=</sup> + Δ (25)

{ }

[ ] <sup>1</sup>

−

1 0 0

*L f xtx f xtx gxtu d xt Lx*

[ ( , ) ( , ) ( , ) ( , )]

1 10 2

*L d x t I I L g x t K sign S*

Proof(Lee, 2010b); Take a Lyapunov function candidate as

2

uncertainty. From (28), the following equations are obtained as

*f*

'|| || , min{|| ||}

≤ − = +Δ

− Δ− + −

( ,) [ ] ( ,) ( )

+ − +Δ

*integral sliding surface and exponential stability*.

1

*<sup>T</sup> K SSf f K Vx*

1 1

ε

ε

ε

 ' 2 ' ()

= − = −

( )

*Vx SS*

=

*T f f*

= +Δ + + +

1 0 1 0 0 1 1

= − + +Δ −Δ

( ,) ( ,) ( ) ( ,) ( ,) ( )

1 1 11 1 2

= Δ − Δ − +Δ Δ − +Δ

as follows:

*f*

1 0

[ ]

*S Lx Lx*

= +

$$\ddot{V}(\mathbf{x}) = \dot{S}_f^\top\dot{S}_f + S_f^\top\ddot{S}_f = \|\dot{S}_f\|^2 + S_f^\top\{L_1\ddot{\mathbf{x}} + L_0\dot{\mathbf{x}}\} < \infty \tag{31}$$

and by Assumption A5, $\ddot{V}(x)$ is bounded, which completes the proof of Theorem 1.

### **2.2.3 Continuous approximation of the full-state feedback discontinuous control input**

The discontinuous control input (21) with (7) chatters from the beginning, without a reaching phase. The chattering of the discontinuous control input (21) may be harmful to the real dynamic plant. Hence, using the saturation function with a suitable $\delta_f$, the input is made continuous for practical application as

$$u_f = -K(\mathbf{x})\mathbf{x} - K_1 S_f - \{\Delta K\mathbf{x} + K_2\,\mathrm{sign}(S_f)\}\frac{S_f}{|S_f| + \delta_f} \tag{32}$$

which is different from the continuous approximation of (Chern & Wu, 1992). For a first-order system, this approximation is the same as that of (Chern & Wu, 1992), but for systems of order higher than one, the continuous approximation can be made more effectively. The discontinuity of the control input is dramatically reduced without severe output performance deterioration.
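The effect of the boundary-layer factor $S_f/(|S_f| + \delta_f)$ in (32) can be seen in a few lines: far from the surface it behaves like $\mathrm{sign}(S_f)$, while at $S_f = 0$ it is continuous, which is what suppresses the chattering. The value of $\delta_f$ below is illustrative:

```python
# Behavior of the boundary-layer factor S/(|S| + delta) used in (32).
# delta_f is an illustrative design value, not from the chapter.
delta_f = 0.05

def boundary_layer(S):
    return S / (abs(S) + delta_f)

print(round(boundary_layer(1.0), 3))    # near +1, like sign(S) far from the surface
print(round(boundary_layer(-1.0), 3))   # near -1, odd-symmetric
print(boundary_layer(0.0))              # 0.0: continuous across S = 0
```

Shrinking $\delta_f$ recovers the discontinuous law (21) in the limit, at the cost of reintroducing chattering.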

### **2.3 Practical output feedback integral variable structure controller**

For the implementation of the output feedback when the full state is not available, some additional assumptions are made:

**A7:** The nominal input matrix $g_0(x,t)$ is constant, i.e., $g_0(x,t) = B$.

**A8**: The unmatched $\Delta f(x,t)$, the matched $\Delta g(x,t)$, and the matched $\bar{d}(x,t)$ are unknown and bounded and satisfy the following conditions:

$$
\Delta f(\mathbf{x},t) = \Delta f'(\mathbf{x},t)\mathbf{C}^\top \mathbf{C} = \Delta f''(\mathbf{x},t)\mathbf{C} \tag{33a}
$$

$$
\Delta \mathbf{g}(\mathbf{x}, t) = B B^{\top} \Delta \mathbf{g}'(\mathbf{x}, t) = B \Delta I, \quad 0 \le |\Delta I| \le p < 1 \tag{33b}
$$

$$\overline{d}(\mathbf{x},t) = BB^\top d'(\mathbf{x},t) = Bd''(\mathbf{x},t) \tag{33c}$$

### **2.3.1 Transformed output feedback integral sliding surface**

Now, the integral of the output is augmented as follows:

$$\dot{y}_0(t) = A_0\cdot y(t), \qquad y_0(0) \tag{34a}$$

$$y_0(t) = A_0\cdot\int_0^t y(\tau)d\tau + y_0(0) \tag{34b}$$

where $y_0(t) \in R^r$, $r \le q$, is the integral of the output, $y_0(0)$ is the initial condition of the integral state, determined later, and $A_0$ is appropriately dimensioned; without loss of generality, $A_0 = I$.

### **Assumption**

**A9**: $(H_1 CB)$ is invertible for some non-zero row vector $H_1$. Now, a transformed output feedback integral sliding surface is suggested as

$$S_0 = \left(H_1 CB\right)^{-1}\cdot\left(H_1\cdot y + H_0\cdot y_0\right)(=0) \tag{35}$$

$$y_0(0) = -H_0^- H_1\cdot y(0) \tag{36}$$
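Analogously to the full-state case (7a), a small sketch checks that the initialization (36) zeroes the surface (35) at $t = 0$. All numerical values are illustrative; $W = I$ is assumed in $H_0^- = WH_0^T(H_0WH_0^T)^{-1}$, and $(H_1CB)^{-1}$ is taken as a given scalar per Assumption A9.

```python
# Illustrative check that the initialization (36) zeroes the output
# feedback integral sliding surface (35) at t = 0 (made-up values).

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

H1 = [2.0, -1.0]       # design row vector on the output y
H0 = [1.0, 4.0]        # design row vector on the integral output y0
y_init = [0.3, 1.1]    # example initial output y(0)
H1CB_inv = 0.5         # example value of (H1 C B)^{-1}, invertible by A9

# Right pseudo-inverse with W = I:  H0^- = H0^T / (H0 H0^T)
H0_pinv = [v / dot(H0, H0) for v in H0]

# (36): y0(0) = -H0^- H1 y(0)
y0_init = [-v * dot(H1, y_init) for v in H0_pinv]

# (35): S0 = (H1 C B)^{-1} (H1 y + H0 y0) at t = 0
S0 = H1CB_inv * (dot(H1, y_init) + dot(H0, y0_init))
print(abs(S0) < 1e-12)   # True: no reaching phase in the output dynamics
```

The scalar prefactor $(H_1CB)^{-1}$ rescales but cannot un-zero the surface, so only the bracketed term matters for removing the reaching phase.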



where $H_0^- = W H_0^T (H_0 W H_0^T)^{-1}$, which is transformed for easy proof of the existence condition of the sliding mode on the sliding surface, as in the works of (Decarlo et al., 1988) and (Lee, 2010b). In (35), the non-zero row vectors $H_0$ and $H_1$ are the design parameters satisfying the following relationship

$$H_1 C[f_0(\mathbf{x},t) - BG(y)C] + H_0 C = H_1 C f_{0c}(\mathbf{x},t) + H_0 C = 0 \tag{37}$$

where $f_{0c}(x,t) = f_0(x,t) - BG(y)C$ is the closed-loop system matrix and $G(y)$ is an output feedback gain. At $t = 0$, this output feedback integral sliding surface is zero, so that there is no reaching phase (Lee & Youn, 1994). In (35), $H_0$ and $H_1$ are the non-zero row vectors chosen as design parameters such that the following assumption is satisfied.

### **Assumption**

**A10:** $H_1 Cg(x,t)$ has full rank and is invertible.

The equivalent control input is obtained from $\dot{S}_0 = 0$ as

$$u_{eq} = -[H_1 Cg(\mathbf{x},t)]^{-1}[H_1 Cf_0(\mathbf{x},t)\mathbf{x} + H_0 y_0(t)] - [H_1 Cg(\mathbf{x},t)]^{-1}H_1 C[\Delta f(\mathbf{x},t)\mathbf{x} + \overline{d}(\mathbf{x},t)] \tag{38}$$

This control input cannot be implemented because of the uncertainties and disturbances. The ideal sliding mode dynamics of the output feedback integral sliding surface (35) can be derived by the equivalent control approach as (Decarlo et al., 1998)

$$\dot{\mathbf{x}}_s = \left[f_0(\mathbf{x}_s,t) - B(H_1 CB)^{-1}H_1 Cf_0(\mathbf{x}_s,t) - B(H_1 CB)^{-1}H_0 C\right]\mathbf{x}_s, \qquad \mathbf{x}_s(0) = \mathbf{x}(0) \tag{39}$$

$$y_s = C\cdot\mathbf{x}_s \tag{40}$$

and from $S_0 = 0$, another ideal sliding mode dynamics is obtained as (Lee, 2010a)

$$\dot{y}_s = -H_1^- H_0\, y_s, \qquad y_s(0) \tag{41}$$

where $H_1^- = W H_1^T (H_1 W H_1^T)^{-1}$. The solution of (39) or (41) identically defines the output feedback integral sliding surface. Hence, to design the output feedback integral sliding surface to be stable, this ideal sliding dynamics (39) is designed to be stable. To choose the stable gain based on the Lyapunov stability theory, the ideal sliding dynamics (39) is represented by the nominal plant of (3) as

$$\begin{aligned} \dot{\mathbf{x}} &= f_0(\mathbf{x},t)\mathbf{x} + g_0(\mathbf{x},t)u, \qquad u = -G(y)y \\ &= f_{0c}(\mathbf{x},t)\mathbf{x} \end{aligned} \tag{42}$$

To select the stable gain, take a Lyapunov function candidate as

$$V(\mathbf{x}) = \mathbf{x}^{\top} \mathbf{P} \mathbf{x}, \qquad \quad \mathbf{P} > \mathbf{0} \tag{43}$$

The derivative of (43) becomes

$$\dot{V}(\mathbf{x}) = \mathbf{x}^\top[f_0^\top(\mathbf{x},t)P + Pf_0(\mathbf{x},t)]\mathbf{x} + u^\top g_0^\top(\mathbf{x},t)P\mathbf{x} + \mathbf{x}^\top Pg_0(\mathbf{x},t)u \tag{44}$$

By means of the Lyapunov control theory(Khalil, 1996), take the control input as

$$u = -g_0^\top(\mathbf{x},t)Py = -B^\top Py \tag{45}$$

and $Q(x,t) > 0$ and $Q_c(x,t) > 0$ for all $x \in R^n$ and all $t \ge 0$ are defined by

$$f_0^\top(\mathbf{x},t)P + Pf_0(\mathbf{x},t) = -Q(\mathbf{x},t) \tag{46}$$

$$f_{0c}^\top(\mathbf{x},t)P + Pf_{0c}(\mathbf{x},t) = -Q_c(\mathbf{x},t) \tag{47}$$

then



$$\begin{split} \dot{V}(\mathbf{x}) &= -\mathbf{x}^{\top}Q(\mathbf{x},t)\mathbf{x} - \mathbf{x}^{\top}C^{\top}PBB^{\top}P\mathbf{x} - \mathbf{x}^{\top}PBB^{\top}PC\mathbf{x} \\ &= -\mathbf{x}^{\top}[Q(\mathbf{x},t) + C^{\top}PBB^{\top}P + PBB^{\top}PC]\mathbf{x} \\ &= -\mathbf{x}^{\top}[f_{0c}^{\top}(\mathbf{x},t)P + Pf_{0c}(\mathbf{x},t)]\mathbf{x} \\ &= -\mathbf{x}^{\top}Q_{c}(\mathbf{x},t)\mathbf{x} \\ &\leq -\lambda_{\min}\left\{Q_{c}(\mathbf{x},t)\right\}\|\mathbf{x}\|^{2} \end{split} \tag{48}$$

Therefore the stable gain is chosen as

$$G(y) = B^\top P \quad \text{or} \quad G(y) = (H_1 CB)^{-1} H_1 C f_0(\mathbf{x}, t) \tag{49}$$
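As a quick numerical illustration of the Lyapunov-based gain choice in (49), the following sketch checks (46)-(47) directly. The stable nominal pair $f_0$, $B$ is arbitrary and the full-state case $C = I$ is assumed for simplicity; none of these values come from the chapter.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable nominal pair (not from the chapter)
f0 = np.array([[-1.0, 1.0],
               [0.0, -2.0]])
B = np.array([[0.0], [1.0]])

Q = np.eye(2)
# Solve f0^T P + P f0 = -Q, the continuous Lyapunov equation (46)
P = solve_continuous_lyapunov(f0.T, -Q)

G = B.T @ P                    # gain as in (49), full-state case C = I
f0c = f0 - B @ G               # closed-loop matrix
Qc = -(f0c.T @ P + P @ f0c)    # should be positive definite, cf. (47)

assert np.all(np.linalg.eigvalsh(P) > 0)
assert np.all(np.linalg.eigvalsh(Qc) > 0)
```

Because $Q_c = Q + 2PBB^\top P$ here, positive definiteness of $Q_c$ follows automatically once $P > 0$, which is what the assertions confirm.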

### **2.3.2 Output feedback discontinuous control input**

A corresponding output feedback discontinuous control input is proposed as follows:

$$
u_0 = -G(y)y - \Delta G y - G_1 S_0 - G_2 \operatorname{sign}(S_0) \tag{50}
$$

where $G(y)$ is a nonlinear output feedback gain satisfying the relationships (37) and (49), $\Delta G$ is a switching gain of the state, $G_1$ is a feedback gain of the output feedback integral sliding surface, and $G_2$ is a switching gain, respectively:

$$
\Delta G = [\Delta g_i], \qquad i = 1, \dots, q \tag{51}
$$

$$\Delta g_i \begin{cases} \geq \max\left\{\left[(H_1CB)^{-1}H_1C\Delta f''(\mathbf{x},t) + \Delta(H_1CB)^{-1}H_1 f_0(\mathbf{x},t)\right]_i\right\} & \operatorname{sign}(S_0 y_i) > 0 \\ \leq \min\left\{\left[(H_1CB)^{-1}H_1C\Delta f''(\mathbf{x},t) + \Delta(H_1CB)^{-1}H_1 f_0(\mathbf{x},t)\right]_i\right\} & \operatorname{sign}(S_0 y_i) < 0 \end{cases} \tag{52}$$

$$G_1 > 0 \tag{53}$$


$$G_2 = \frac{\max\{\|d''(\mathbf{x}, t)\|\}}{\min\{I + \Delta I\}} \tag{55}$$

New Practical Integral Variable Structure Controllers for Uncertain Nonlinear Systems 231


The real sliding dynamics by the proposed control (50) with the output feedback integral sliding surface (35) is obtained as follows:

$$\begin{aligned}
\dot{S}_0 &= (H_1CB)^{-1}[H_1\dot{y} + H_0 y] \\
&= (H_1CB)^{-1}[H_1Cf_0(x,t)x + H_1C\Delta f(x,t)x + H_1C(B+\Delta g(x,t))u + H_1C\overline{d}(x,t) + H_0 y] \\
&= (H_1CB)^{-1}[H_1Cf_0(x,t)x - H_1CBG(y)y + H_0 y] \\
&\quad + (H_1CB)^{-1}[H_1C\Delta f(x,t)x - H_1C\Delta g(x,t)G(y)y] \\
&\quad + (H_1CB)^{-1}[H_1C(B+\Delta g(x,t))(-\Delta G y - G_1 S_0 - G_2\operatorname{sign}(S_0)) + H_1C\overline{d}(x,t)] \\
&= [(H_1CB)^{-1}H_1C\Delta f''(x,t) + \Delta(H_1CB)^{-1}H_1 f_0(x,t)]y - (I+\Delta I)\Delta G y \\
&\quad + (I+\Delta I)(-G_1 S_0 - G_2\operatorname{sign}(S_0)) + d''(x,t)
\end{aligned} \tag{56}$$

The closed-loop stability under the proposed control input with the output feedback integral sliding surface, together with the existence condition of the sliding mode, is investigated in Theorem 2 below.

**Theorem 2**: *If the output feedback integral sliding surface (35) is designed to be stable, i.e., a stable design of $G(y)$, then the proposed control input (50) with Assumptions A1-A10 satisfies the existence condition of the sliding mode on the output feedback integral sliding surface and guarantees closed-loop exponential stability.*

**Proof**: Take a Lyapunov function candidate as

$$V(\boldsymbol{y}) = \frac{1}{2} S\_o^\top S\_o \tag{57}$$

Differentiating (57) with respect to time and substituting (56) leads to

$$\begin{aligned}
\dot{V}(y) &= S_0^\top \dot{S}_0 \\
&= S_0^\top[(H_1CB)^{-1}H_1C\Delta f''(x,t) + \Delta(H_1CB)^{-1}H_1 f_0(x,t)]y - S_0^\top(I+\Delta I)\Delta G y \\
&\quad + S_0^\top(I+\Delta I)(-G_1 S_0 - G_2\operatorname{sign}(S_0)) + S_0^\top d''(x,t) \\
&\leq -\varepsilon G_1 \|S_0\|^2, \qquad \varepsilon = \min\{I + \Delta I\} \\
&= -\varepsilon G_1 S_0^\top S_0 \\
&= -2\varepsilon G_1 V(y)
\end{aligned} \tag{58}$$

From (58), the second requirement for removing the reaching phase is satisfied, so the reaching phase is completely removed. As a result, the real output dynamics can be exactly predetermined by the ideal sliding output even with the matched uncertainty. Moreover, from (58) the following relations are obtained:

$$
\dot{V}(y) + 2\varepsilon \mathbf{G}\_\mathrm{i} V(y) \le 0 \tag{59}
$$

$$V(y(t)) \le V(y(0))e^{-2\varepsilon G\_1 t} \tag{60}$$

The second derivative of $V(y)$ becomes


$$\ddot{V}(y) = \dot{S}_0^\top \dot{S}_0 + S_0^\top \ddot{S}_0 = \|\dot{S}_0\|^2 + S_0^\top (H_1CB)^{-1}(H_1C\ddot{x} + H_0C\dot{x}) < \infty \tag{61}$$

and by Assumption A5, $\ddot{V}(y)$ is bounded, which completes the proof of Theorem 2.

### **2.3.3 Continuous approximation of output feedback discontinuous control input**

Also, the control input (50) with (35) chatters from the beginning, since there is no reaching phase. The chattering of the discontinuous control input may be harmful to the real dynamic plant, so it must be removed. Hence, using the saturation function with a suitable $\delta_0$, the discontinuous part of the input is made effectively continuous for practical application as

$$
u_{0c} = -G(y)y - G_1 S_0 - \{\Delta G y + G_2 \operatorname{sign}(S_0)\} \frac{S_0}{|S_0| + \delta_0} \tag{62}
$$

The discontinuity of the control input can thereby be dramatically reduced without severe output performance deterioration.
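The effect of the boundary-layer smoothing in (62) can be illustrated on a scalar model. The first-order sliding-variable dynamics, gains, and disturbance below are hypothetical, chosen only to contrast $\operatorname{sign}(S_0)$ with the smoothed $S_0/(|S_0|+\delta_0)$:

```python
import numpy as np

# Scalar sketch: discontinuous switching vs. the continuous approximation (62).
# The model, gains, and disturbance are illustrative, not the chapter's plant.
def simulate(delta, dt=1e-3, T=2.0, G1=5.0, G2=2.0):
    S, traj = 1.0, []
    for k in range(int(T / dt)):
        d = 0.5 * np.sin(5.0 * k * dt)          # bounded matched disturbance
        # delta = 0 gives sign(S); delta > 0 gives the smoothed S/(|S|+delta)
        sw = np.sign(S) if delta == 0.0 else S / (abs(S) + delta)
        S += dt * (-G1 * S - G2 * sw + d)       # Euler step of the S-dynamics
        traj.append(S)
    return np.array(traj)

discont = simulate(delta=0.0)                   # chatters about S = 0
smooth = simulate(delta=0.05)                   # smooth, small residual |S|

flips_discont = int(np.sum(np.diff(np.sign(discont[-500:])) != 0))
flips_smooth = int(np.sum(np.diff(np.sign(smooth[-500:])) != 0))
assert abs(smooth[-1]) < 0.05 and flips_smooth < flips_discont
```

The discontinuous input flips the sign of $S$ nearly every step inside the boundary layer, while the saturated input tracks the slow disturbance smoothly at the cost of a small steady residual in $|S|$.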

### **3. Design examples and simulation studies**

### **3.1 Example 1: Full-state feedback practical integral variable structure controller**

Consider a second order affine uncertain nonlinear system with mismatched uncertainties and matched disturbance

$$
\dot{x}_1 = -x_1 + 0.1 x_1 \sin^2(x_1) + x_2 + 0.02\sin(2.0 x_1) u
$$

$$
\dot{x}_2 = x_2 + x_2 \sin^2(x_2) + (2.0 + 0.5\sin(2.0t)) u + \overline{d}(\mathbf{x}, t) \tag{63}
$$

$$\overline{d}(\mathbf{x},t) = 0.7\sin(x_1) - 0.8\sin(x_2) + 0.2\left(x_1^2 + x_2^2\right) + 2.0\sin(5.0t) + 3.0 \tag{64}$$

Since (63) satisfies Assumption A1, (63) is represented in state-dependent coefficient form as

$$
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} -1 + 0.1\sin^2(x_1) & 1 \\ 0 & 1 + \sin^2(x_2) \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0.02\sin(2.0 x_1) \\ 2.0 + 0.5\sin(2.0t) \end{bmatrix} u + \begin{bmatrix} 0 \\ \overline{d}(\mathbf{x}, t) \end{bmatrix} \tag{65}
$$

where the nominal parameters $f_0(x,t)$ and $g_0(x,t)$ and the mismatched uncertainties $\Delta f(x,t)$ and $\Delta g(x,t)$ are

$$f_0(\mathbf{x},t) = \begin{bmatrix} -1 & 1 \\ 0 & 1 \end{bmatrix}, \quad g_0(\mathbf{x},t) = \begin{bmatrix} 0 \\ 2.0 \end{bmatrix}, \quad \Delta f(\mathbf{x},t) = \begin{bmatrix} 0.1\sin^2(x_1) & 0 \\ 0 & \sin^2(x_2) \end{bmatrix}, \quad \Delta g(\mathbf{x},t) = \begin{bmatrix} 0.02\sin(2.0 x_1) \\ 0.5\sin(2.0t) \end{bmatrix} \tag{66}$$

To design the full-state feedback integral sliding surface, $f_c(x,t)$ is selected as

$$f_c(\mathbf{x},t) = f_0(\mathbf{x},t) - g_0(\mathbf{x},t)K(\mathbf{x}) = \begin{bmatrix} -1 & 1 \\ -70 & -21 \end{bmatrix} \tag{67}$$

in order to assign the two poles at $-16.4772$ and $-5.5228$. Hence, the feedback gain $K(x)$ becomes

$$K(\mathbf{x}) = \begin{bmatrix} 35 & 11 \end{bmatrix} \tag{68}$$


The *P* in (14) is chosen as

$$P = \begin{bmatrix} 100 & 17.5 \\ 17.5 & 5.5 \end{bmatrix} > 0\tag{69}$$

so that

$$f_c^\top(\mathbf{x},t)P + Pf_c(\mathbf{x},t) = \begin{bmatrix} -2650 & -670 \\ -670 & -196 \end{bmatrix} < 0 \tag{70}$$

Hence, the continuous static feedback gain is chosen as

$$K(\mathbf{x}) = g_0^\top(\mathbf{x}, t)P = \begin{bmatrix} 35 & 11 \end{bmatrix} \tag{71}$$

Therefore, the coefficient of the sliding surface is determined as

$$L\_1 = \begin{bmatrix} L\_{11} & L\_{12} \end{bmatrix} = \begin{bmatrix} 10 & 1 \end{bmatrix} \tag{72}$$

Then, to satisfy the relationship (8a) and from (8b), *L*0 is selected as

$$L_0 = -L_1\left[f_0(\mathbf{x},t) - g_0(\mathbf{x},t)K(\mathbf{x})\right] = -L_1 f_c(\mathbf{x},t) = \begin{bmatrix} L_{11} + 70L_{12} & -L_{11} + 21L_{12} \end{bmatrix} = \begin{bmatrix} 80 & 11 \end{bmatrix} \tag{73}$$
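The matrix arithmetic in (67)-(73) can be checked numerically. The following sketch merely reproduces those computations (it is not the chapter's code):

```python
import numpy as np

# Numerical check of the Example-1 design quantities (67)-(73).
f0 = np.array([[-1.0, 1.0], [0.0, 1.0]])
g0 = np.array([[0.0], [2.0]])
P = np.array([[100.0, 17.5], [17.5, 5.5]])        # (69)

K = g0.T @ P                                      # (71): [[35, 11]]
fc = f0 - g0 @ K                                  # (67)
lyap = fc.T @ P + P @ fc                          # (70): negative definite
L1 = np.array([[10.0, 1.0]])                      # (72)
L0 = -L1 @ fc                                     # (73): [[80, 11]]

assert np.allclose(K, [[35.0, 11.0]])
assert np.allclose(fc, [[-1.0, 1.0], [-70.0, -21.0]])
assert np.allclose(lyap, [[-2650.0, -670.0], [-670.0, -196.0]])
assert np.all(np.linalg.eigvalsh(lyap) < 0)
assert np.allclose(L0, [[80.0, 11.0]])
assert np.allclose(sorted(np.linalg.eigvals(fc).real), [-16.4772, -5.5228], atol=1e-3)
```

The last assertion confirms the assigned closed-loop poles of (67) quoted in the text.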

The selected gains in the control input (21), (23)-(25) are as follows:

$$
\Delta k_1 = \begin{cases} +4.0 & \text{if } S_f x_1 > 0 \\ -4.0 & \text{if } S_f x_1 < 0 \end{cases} \tag{74a}
$$

$$
\Delta k_2 = \begin{cases} +5.0 & \text{if } S_f x_2 > 0 \\ -5.0 & \text{if } S_f x_2 < 0 \end{cases} \tag{74b}
$$

$$K_1 = 400.0 \tag{74c}$$

$$K\_2 = 2.8 + 0.2(\mathbf{x}\_1^2 + \mathbf{x}\_2^2) \tag{74d}$$

The simulation is carried out with a 1 [msec] sampling time and the initial state $x(0) = [10\;\;5]^T$. Fig. 1 shows the four case $x_1$ and $x_2$ time trajectories: (i) ideal sliding output, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched


uncertainty and matched disturbance. The three case output responses except case (iv) are almost identical to each other. The four phase trajectories, (i) ideal sliding trajectory, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance, are depicted in Fig. 2. As can be seen, the sliding surface is exactly defined from a given initial condition to the origin, so there is no reaching phase; only the sliding mode exists from the initial condition. One of the two main problems of the VSS is thus removed. The unmatched uncertainties influence the ideal sliding dynamics, as in case (iv). The sliding surface $S_f(t)$ for (i) unmatched uncertainty and matched disturbance is shown in Fig. 3, and the corresponding control input is depicted in Fig. 4.

For practical application, the discontinuous input is made continuous by the saturation function with a new form as in (32) for a positive $\delta_f = 0.8$. The output responses of the continuous input by (32) are shown in Fig. 5 for the four cases (i) ideal sliding output, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance. There is no chattering in the output states. The four case trajectories, (i) ideal sliding time trajectory, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance, are depicted in Fig. 6. As can be seen, the trajectories are continuous. The four case sliding surfaces, shown in Fig. 7, are also continuous. The three continuously implemented control inputs, replacing the discontinuous input of Fig. 4, are shown in Fig. 8 without severe performance degradation, which means that the continuous VSS algorithm is practically applicable. The other of the two main problems of the VSS is thereby effectively removed.

From the simulation studies, the usefulness of the proposed SMC is proven.
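The Example-1 design can be exercised with a simple Euler simulation. This is a hedged sketch: the control law below paraphrases (21) with the gains (71)-(73) and (74a)-(74d), and the integral surface is taken as $S_f = L_1x + \int_0^t L_0x\,d\tau$ initialized so that $S_f(0) = 0$; it is not the chapter's simulation code.

```python
import numpy as np

# Euler-integration sketch of the Example-1 full-state IVSC design.
dt, T = 1e-3, 5.0
L1, L0 = np.array([10.0, 1.0]), np.array([80.0, 11.0])   # (72), (73)
K, K1 = np.array([35.0, 11.0]), 400.0                    # (71), (74c)
x = np.array([10.0, 5.0])                                # x(0) = [10 5]^T
integ = -L1 @ x                                          # makes S_f(0) = 0

for k in range(int(T / dt)):
    t = k * dt
    S = L1 @ x + integ                                   # integral surface
    K2 = 2.8 + 0.2 * (x[0]**2 + x[1]**2)                 # (74d)
    dk = np.array([4.0 * np.sign(S * x[0]),              # (74a)
                   5.0 * np.sign(S * x[1])])             # (74b)
    u = -(K + dk) @ x - K1 * S - K2 * np.sign(S)
    d = 0.7*np.sin(x[0]) - 0.8*np.sin(x[1]) \
        + 0.2*(x[0]**2 + x[1]**2) + 2.0*np.sin(5.0*t) + 3.0   # (64)
    dx1 = -x[0] + 0.1*x[0]*np.sin(x[0])**2 + x[1] + 0.02*np.sin(2.0*x[0])*u
    dx2 = x[1] + x[1]*np.sin(x[1])**2 + (2.0 + 0.5*np.sin(2.0*t))*u + d
    integ += dt * (L0 @ x)
    x = x + dt * np.array([dx1, dx2])

assert np.all(np.abs(x) < 1.0)      # state driven near the origin
```

With the surface zero from the start, the trajectory exhibits no reaching phase and the state is regulated close to the origin despite the matched disturbance, which mirrors the behavior reported in Figs. 1-4.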

Fig. 1. Four case $x_1$ and $x_2$ time trajectories (i) ideal sliding output, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance

Fig. 2. Four phase trajectories (i) ideal sliding trajectory, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance

Fig. 3. Sliding surface $S_f(t)$ (i) unmatched uncertainty and matched disturbance

Fig. 4. Discontinuous control input (i) unmatched uncertainty and matched disturbance

Fig. 5. Four case $x_1$ and $x_2$ time trajectories (i) ideal sliding output, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance by the continuously approximated input for a positive $\delta_f = 0.8$

Fig. 6. Four phase trajectories (i) ideal sliding trajectory, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance by the continuously approximated input

Fig. 7. Four sliding surfaces (i) ideal sliding surface, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance by the continuously approximated input

Fig. 8. Three case continuous control inputs $u_{fc}$ (i) no uncertainty and no disturbance, (ii) matched uncertainty/disturbance, and (iii) unmatched uncertainty and matched disturbance

### **3.2 Example 2: Output feedback practical integral variable structure controller**

Consider a third order uncertain affine nonlinear system with unmatched system matrix uncertainties and matched input matrix uncertainties and disturbance

$$
\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{bmatrix} = \begin{bmatrix} -3 - 3\sin^2(x_1) & 1 & 0 \\ 0 & -1 & 1 \\ 1 + 0.5\sin^2(x_2) & 0 & 2 + 0.4\sin^2(x_3) \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 2 + 0.3\sin(2t) \end{bmatrix} u + \begin{bmatrix} 0 \\ 0 \\ \overline{d}_1(\mathbf{x},t) \end{bmatrix} \tag{75}
$$

$$
\mathbf{y} = \begin{bmatrix}
\mathbf{1} & \mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{1}
\end{bmatrix} \begin{bmatrix}
\mathbf{x}\_{1} \\
\mathbf{x}\_{2} \\
\mathbf{x}\_{3}
\end{bmatrix} \tag{76}
$$

$$\overline{d}_1(\mathbf{x},t) = 0.7\sin(x_1) - 0.8\sin(x_2) + 0.2(x_1^2 + x_3^2) + 1.5\sin(2t) + 1.5 \tag{77}$$

where the nominal matrices $f_0(x,t)$, $g_0(x,t) = B$, and $C$, the unmatched system matrix uncertainties, the matched input matrix uncertainties, and the matched disturbance are

$$f_0(\mathbf{x}, t) = \begin{bmatrix} -3 & 1 & 0 \\ 0 & -1 & 1 \\ 1 & 0 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \quad \Delta f = \begin{bmatrix} -3\sin^2(x_1) & 0 & 0 \\ 0 & 0 & 0 \\ 0.5\sin^2(x_2) & 0 & 0.4\sin^2(x_3) \end{bmatrix},$$


$$\Delta g(\mathbf{x},t) = \begin{bmatrix} 0\\0\\0.3\sin(2t) \end{bmatrix}, \quad \overline{d}(\mathbf{x},t) = \begin{bmatrix} 0\\0\\\overline{d}_1(\mathbf{x},t) \end{bmatrix}.\tag{78}$$

The eigenvalues of the open-loop system matrix $f_0(x,t)$ are $-2.6920$, $-2.3569$, and $2.0489$; hence $f_0(x,t)$ is unstable. The unmatched system matrix uncertainties, the matched input matrix uncertainties, and the matched disturbance satisfy assumptions A3 and A8 as

$$\Delta f'' = \begin{bmatrix} -3\sin^2(x_1) & 0\\ 0 & 0\\ 0.5\sin^2(x_2) & 0.4\sin^2(x_3) \end{bmatrix}, \quad \Delta l = 0.15\sin(2t) \le 0.15 < 1, \quad d''(\mathbf{x},t) = \frac{1}{2}\overline{d}_1(\mathbf{x},t) \tag{79}$$

To design the output feedback integral sliding surface, $f_{0c}(x,t)$ is designed as

$$f\_{0c}(\mathbf{x},t) = f\_0(\mathbf{x},t) - BG(\mathbf{y})\mathbf{C} = \begin{bmatrix} -3 & 1 & 0 \\ 0 & -1 & 1 \\ -19 & 0 & -30 \end{bmatrix} \tag{80}$$
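As a quick numerical check, the matrix in (80) can be verified to be Hurwitz. A minimal sketch, assuming numpy:

```python
import numpy as np

# Closed-loop (ideal sliding dynamics) system matrix f_0c from (80)
F0c = np.array([[-3.0,  1.0,   0.0],
                [ 0.0, -1.0,   1.0],
                [-19.0, 0.0, -30.0]])

eigs = np.linalg.eigvals(F0c)
print(eigs)

# All eigenvalues must lie strictly in the open left half plane
stable = bool(np.all(eigs.real < 0.0))
```

Since the characteristic polynomial $s^3 + 34s^2 + 123s + 109$ has positive coefficients and satisfies the Routh condition $34 \cdot 123 > 109$, the check confirms stability of the designed sliding dynamics.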

in order to assign the three stable poles of $f_{0c}(x,t)$ at $-30.0251$ and $-2.4875 \pm 0.6636i$. The constant feedback gain is designed as

$$G(y)C = 2^{-1}\left\{\begin{bmatrix} 1 & 0 & 2 \end{bmatrix} - \begin{bmatrix} -19 & 0 & -30 \end{bmatrix}\right\} = \begin{bmatrix} 10 & 0 & 16 \end{bmatrix} \tag{81}$$

$$\therefore \; G(y) = \begin{bmatrix} 10 & 16 \end{bmatrix} \tag{82}$$

Then, one finds $H_1 = [h_{11} \ \ h_{12}]$ and $H_0 = [h_{01} \ \ h_{02}]$ which satisfy the relationship (37) as

$$h_{11} = 0, \qquad h_{01} = 19h_{12}, \qquad h_{02} = 30h_{12} \tag{83}$$

One selects $h_{12} = 1$, $h_{01} = 19$, and $h_{02} = 30$. Hence $H_1CB = 2h_{12} = 2$ is non-zero, satisfying A4. The resultant output feedback integral sliding surface becomes

$$S_0 = \frac{1}{2} \left\{ \begin{bmatrix} 0 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} + \begin{bmatrix} 19 & 30 \end{bmatrix} \begin{bmatrix} y_{01} \\ y_{02} \end{bmatrix} \right\} \tag{84}$$

where

$$y_{01} = \int_{0}^{t} y_1(\tau)\,d\tau \tag{85}$$

$$y_{02} = \int_{0}^{t} y_2(\tau)\,d\tau - y_2(0)/30 \tag{86}$$
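The surface (84)-(86) can be evaluated online from the measured outputs alone. A minimal numerical sketch, assuming numpy and a fixed sampling step; the output histories below are placeholders, not the simulated responses of the chapter:

```python
import numpy as np

def integral_sliding_surface(y1, y2, dt):
    """Evaluate the surface S0 of (84) from sampled outputs y1(t), y2(t).

    y01, y02 are the running integrals (85)-(86); the offset -y2(0)/30
    in (86) makes S0(0) = 0, so there is no reaching phase.
    """
    y01 = dt * (np.cumsum(y1) - y1)                  # (85), left-rectangle rule
    y02 = dt * (np.cumsum(y2) - y2) - y2[0] / 30.0   # (86)
    return 0.5 * (y2 + 19.0 * y01 + 30.0 * y02)      # (84), H1 = [0 1], H0 = [19 30]

dt = 1e-3                            # 1 [msec] sampling time
t = np.arange(0.0, 1.0, dt)
y1 = 10.0 * np.exp(-3.0 * t)         # placeholder output histories
y2 = 5.0 * np.exp(-2.0 * t)
S0 = integral_sliding_surface(y1, y2, dt)
```

By construction $S_0(0) = 0$ for any $y_2(0)$, which is the mechanism that removes the reaching phase.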

The output feedback control gains in (50), (51)-(55) are selected as follows:

238 Recent Advances in Robust Control – Novel Approaches and Design Methods


$$\Delta g_1 = \begin{cases} +1.6 & \text{if} \quad S_0 y_1 > 0 \\ -1.6 & \text{if} \quad S_0 y_1 < 0 \end{cases} \tag{87a}$$

$$\Delta g_2 = \begin{cases} +1.7 & \text{if} \quad S_0 y_2 > 0 \\ -1.7 & \text{if} \quad S_0 y_2 < 0 \end{cases} \tag{87b}$$

$$G\_1 = 500.0\tag{87c}$$

$$G_2 = 3.2 + 0.2(y_1^2 + y_2^2) \tag{87d}$$
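The switching gains (87a)-(87b) and the state-dependent gain (87d) are straightforward to code. A minimal sketch, following the sign conventions of (87); the behavior exactly at $S_0 y = 0$ is left at the negative branch here, an implementation choice not fixed by (87):

```python
def delta_g(level, s0, y):
    """Switching gain of (87a)-(87b): +level when S0*y > 0, -level otherwise."""
    return level if s0 * y > 0.0 else -level

def gain_G2(y1, y2):
    """State-dependent linear gain of (87d)."""
    return 3.2 + 0.2 * (y1**2 + y2**2)

G1 = 500.0  # constant gain of (87c)
```

For example, with $S_0 = 0.5$ and $y_1 = 2$ the first switching gain takes the value $+1.6$, while with $y_2 = -2$ the second takes $-1.7$.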

The simulation is carried out with a 1 [msec] sampling time and the initial state $x(0) = [10 \ \ 0.0 \ \ 5]^T$. Fig. 9 shows the two output responses of $y_1$ and $y_2$ for the four cases: (i) ideal sliding output, (ii) no uncertainty and no disturbance, (iii) matched uncertainty and matched disturbance, and (iv) unmatched uncertainty and matched disturbance. Each output is insensitive to the matched uncertainty and matched disturbance, hence the responses are almost equal, so that the output can be predicted. The four phase trajectories for cases (i)-(iv) are shown in Fig. 10. There is no reaching phase, and every phase trajectory except that of case (iv) is almost identical as well. The sliding surface is exactly defined from a given initial condition to the origin. The output feedback integral sliding surface for case (iv), unmatched uncertainty and matched disturbance, is depicted in Fig. 11, and Fig. 12 shows the corresponding control input. For practical implementation, the discontinuous input can be made continuous by the saturation function with a new form as in (32) for a positive $\delta_0 = 0.02$. The output responses obtained with the continuous input of (62) are shown in Fig. 13 for the four cases; there is no chattering in the output responses. The four time trajectories are depicted in Fig. 14 and, as can be seen, they are continuous. The four sliding surfaces, shown in Fig. 15, are continuous as well. The three continuously implemented control inputs, replacing the discontinuous input of Fig. 12, are shown in Fig. 16 without severe performance loss, which means that the chattering of the control input is removed and the continuous VSS algorithm is practically applicable to real dynamic plants. From the above simulation studies, the proposed algorithm has superior performance in view of the absence of a reaching phase, complete robustness, predetermined output dynamics, prediction of the output, and practical applicability. The effectiveness of the proposed output feedback integral nonlinear SMC is proven.
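The continuous implementation replaces the sign-type switching by a boundary-layer function. A sketch of this idea; the exact form of (32)/(62) is as given in the text, and the standard smoothing $s/(|s| + \delta_0)$ is used here only for illustration, with $\delta_0 = 0.02$:

```python
def smoothed_switch(s0, delta0=0.02):
    """Continuous substitute for sign(S0): close to +/-1 far from S0 = 0,
    approximately linear (slope 1/delta0) inside the boundary layer."""
    return s0 / (abs(s0) + delta0)
```

Far from the surface, `smoothed_switch(1.0)` is about 0.98, so the switching action is essentially preserved, while near $S_0 = 0$ the input varies continuously and the chattering disappears.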

Through design examples and simulation studies, the usefulness of the proposed practical integral nonlinear variable structure controllers is verified. The continuous-approximation VSS controllers without the reaching phase presented in this chapter are practically applicable to real dynamic plants.

New Practical Integral Variable Structure Controllers for Uncertain Nonlinear Systems 241

Fig. 9. Four case two output responses of $y_1$ and $y_2$: (i) ideal sliding output, (ii) no uncertainty and no disturbance, (iii) matched uncertainty and matched disturbance, and (iv) unmatched uncertainty and matched disturbance

Fig. 10. Four phase trajectories: (i) ideal sliding trajectory, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance

Fig. 11. Sliding surface $S_0(t)$: (i) unmatched uncertainty and matched disturbance

Fig. 12. Discontinuous control input: (i) unmatched uncertainty and matched disturbance


Fig. 13. Four case $y_1$ and $y_2$ time trajectories: (i) ideal sliding output, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance, by the continuously approximated input for a positive $\delta_0 = 0.02$

Fig. 14. Four phase trajectories: (i) ideal sliding trajectory, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance, by the continuously approximated input


Fig. 15. Four sliding surfaces: (i) ideal sliding surface, (ii) no uncertainty and no disturbance, (iii) matched uncertainty/disturbance, and (iv) unmatched uncertainty and matched disturbance, by the continuously approximated input

Fig. 16. Three case continuous control inputs $u_{0c}$: (i) no uncertainty and no disturbance, (ii) matched uncertainty/disturbance, and (iii) unmatched uncertainty and matched disturbance, by the continuously approximated input for a positive $\delta_0 = 0.02$


## **4. Conclusion**

In this chapter, new practical robust full-state (output) feedback nonlinear integral variable structure controllers with full-state (output) feedback integral sliding surfaces are presented, based on the state-dependent nonlinear form, for the control of uncertain affine nonlinear systems with mismatched uncertainties and matched disturbance. After an uncertain affine nonlinear system is represented in the form of a state-dependent nonlinear system, a systematic design of the new robust integral nonlinear variable structure controllers with the full-state (output) feedback (transformed) integral sliding surfaces is suggested for removing the reaching phase, and the corresponding (transformed) control inputs are proposed. The closed-loop stability under the proposed control inputs with the full-state (output) feedback integral sliding surface, together with the existence condition of the sliding mode on the selected sliding surface, is investigated in Theorem 1 and Theorem 2 for all mismatched uncertainties and matched disturbances. For practical application of the discontinuous VSS, a continuous approximation different from that of (Chern & Wu, 1992) is suggested without severe performance degradation. Two practical algorithms are proposed: a practical full-state feedback integral nonlinear variable structure controller with the full-state feedback transformed input and the full-state feedback sliding surface, and a practical output feedback integral nonlinear variable structure controller with the output feedback input and the output feedback transformed sliding surface. The outputs obtained by the proposed inputs with the suggested sliding surfaces are insensitive only to the matched uncertainty and disturbance. The unmatched uncertainties can influence the ideal sliding dynamics, but exponential stability is still satisfied. The two main problems of the VSS, i.e., the reaching phase at the beginning and the chattering of the input, are thus removed and solved.

## **5. References**

Adamy, J. & Flemming, A. (2004). Soft Variable Structure Control: a Survey. *Automatica*, vol.40, pp.1821-1844.

Anderson, B. D. O. & Moore, J. B. (1990). *Optimal Control*, Prentice-Hall.

Bartolini, G. & Ferrara, A. (1995). On Multi-Input Sliding Mode Control of Uncertain Nonlinear Systems. *Proceeding of IEEE 34th CDC*, pp.2121-2124.

Bartolini, G., Pisano, A. & Usai, E. (2001). Digital Second-Order Sliding Mode Control for Uncertain Nonlinear Systems. *Automatica*, vol.37, pp.1371-1377.

Cai, X., Lin, R. & Su, S. (2008). Robust stabilization for a class of Nonlinear Systems. *Proceeding of IEEE CDC*, pp.4840-4844.

Chen, W. H., Ballance, D. J. & Gawthrop, P. J. (2003). Optimal Control of Nonlinear Systems: A Predictive Control Approach. *Automatica*, vol.39, pp.633-641.

Chern, T. L. & Wu, Y. C. (1992). An Optimal Variable Structure Control with Integral Compensation for Electrohydraulic Position Servo Control Systems. *IEEE Trans. Industrial Electronics*, vol.39, no.5, pp.460-463.

Decarlo, R. A., Zak, S. H. & Mattews, G. P. (1988). Variable Structure Control of Nonlinear Multivariable Systems: A Tutorial. *Proceeding of IEEE*, vol.76, pp.212-232.

Drazenovic, B. (1969). The invariance conditions in variable structure systems. *Automatica*, vol.5, pp.287-295.

Gutman, S. (1979). Uncertain dynamical Systems: A Lyapunov Min-Max Approach. *IEEE Trans. Autom. Contr*, AC-24, no.1, pp.437-443.

Horowitz, I. (1991). Survey of Quantitative Feedback Theory (QFT). *Int. J. Control*, vol.53, no.2, pp.255-291.

Hu, X. & Martin, C. (1999). Linear Reachability Versus Global Stabilization. *IEEE Trans. Autom. Contr*, AC-44, no.6, pp.1303-1305.

Hunt, L. R., Su, R. & Meyer, G. (1983). Global Transformations of Nonlinear Systems. *IEEE Trans. Autom. Contr*, AC-28, no.1, pp.24-31.

Isidori, A. (1989). *Nonlinear Control Systems* (2e). Springer-Verlag.

Khalil, H. K. (1996). *Nonlinear Systems* (2e). Prentice-Hall.

Kokotovic, P. & Arcak, M. (2001). Constructive Nonlinear Control: a Historical Perspective. *Automatica*, vol.37, pp.637-662.

Lee, J. H. & Youn, M. J. (1994). An Integral-Augmented Optimal Variable Structure Control for Uncertain Dynamical SISO Systems. *KIEE (The Korean Institute of Electrical Engineers)*, vol.43, no.8, pp.1333-1351.

Lee, J. H. (1995). Design of Integral-Augmented Optimal Variable Structure Controllers. Ph.D. dissertation, KAIST.

Lee, J. H. (2004). A New Improved Integral Variable Structure Controller for Uncertain Linear Systems. *KIEE*, vol.43, no.8, pp.1333-1351.

Lee, J. H. (2010a). A New Robust Variable Structure Controller for Uncertain Affine Nonlinear Systems with Mismatched Uncertainties. *KIEE*, vol.59, no.5, pp.945-949.

Lee, J. H. (2010b). A Proof of Utkin's Theorem for a MI Uncertain Linear Case. *KIEE*, vol.59, no.9, pp.1680-1685.

Lee, J. H. (2010c). A MIMO VSS with an Integral-Augmented Sliding Surface for Uncertain Multivariable Systems. *KIEE*, vol.59, no.5, pp.950-960.

Lijun, L. & Chengkand, X. (2008). Robust Backstepping Design of a Nonlinear Output Feedback System. *Proceeding of IEEE CDC 2008*, pp.5095-5099.

Lu, X. Y. & Spurgeon, S. K. (1997). Robust Sliding Mode Control of Uncertain Nonlinear Systems. *System & Control Letters*, vol.32, pp.75-90.

Narendra, K. S. (1994). Parameter Adaptive Control-the End...or the Beginning? *Proceeding of the 33rd IEEE CDC*.

Pan, Y. D., Kumar, K. D., Liu, G. J. & Furuta, K. (2009). Design of Variable Structure Control System with Nonlinear Time-Varying Sliding Sector. *IEEE Trans. Autom. Contr*, AC-54, no.8, pp.1981-1986.

Rugh, W. J. & Shamma, J. (2000). Research on Gain Scheduling. *Automatica*, vol.36, pp.1401-1425.

Slotine, J. J. E. & Li, W. (1991). *Applied Nonlinear Control*, Prentice-Hall.

Sun, Y. M. (2009). Linear Controllability Versus Global Controllability. *IEEE Trans. Autom. Contr*, AC-54, no.7, pp.1693-1697.

Tang, G. Y., Dong, R. & Gao, H. W. (2008). Optimal Sliding Mode Control for Nonlinear Systems with Time Delay. *Nonlinear Analysis: Hybrid Systems*, vol.2, pp.891-899.

Toledo, B. C. & Linares, R. C. (1995). On Robust Regulation via Sliding Mode for Nonlinear Systems. *System & Control Letters*, vol.24, pp.361-371.

Utkin, V. I. (1978). *Sliding Modes and Their Application in Variable Structure Systems*. Moscow, 1978.


Vidyasagar, M. (1986). New Directions of Research in Nonlinear System Theory. *Proc. of the IEEE*, vol.74, no.8, pp.1060-1091.

Wang, Y., Jiang, C., Zhou, D. & Gao, F. (2007). Variable Structure Control for a Class of Nonlinear Systems with Mismatched Uncertainties. *Applied Mathematics and Computation*, pp.1-14.

Young, K. D., Utkin, V. I. & Ozguner, U. (1996). A Control Engineer's Guide to Sliding Mode Control. *Proceeding of 1996 IEEE Workshop on Variable Structure Systems*, pp.1-14.

Zheng, Q. & Wu, F. (2009). Lyapunov Redesign of Adaptive Controllers for Polynomial Nonlinear Systems. *Proceeding of IEEE ACC 2009*, pp.5144-5149.

## **New Robust Tracking and Stabilization Methods for Significant Classes of Uncertain Linear and Nonlinear Systems**

Laura Celentano

*Dipartimento di Informatica e Sistemistica, Università degli Studi di Napoli Federico II, Napoli, Italy*

## **1. Introduction**

There exist many mechanical, electrical, electro-mechanical, thermal, chemical, biological and medical linear and nonlinear systems, subject to parametric uncertainties and non-standard disturbances, which need to be efficiently controlled. Consider, e.g., the numerous manufacturing systems (in particular robotic and transport systems) and the ever more pressing requirements and control specifications of an ever more dynamic society. Despite the numerous scientific papers available in the literature (Porter and Power, 1970)-(Sastry, 1999), some of which are also very recent (Paarmann, 2001)-(Siciliano and Khatib, 2009), the following practical limitations remain:

1. the considered classes of systems are often of little relevant interest to engineers;
2. the considered signals (references, disturbances, …) are almost always standard (polynomial and/or sinusoidal ones);
3. the controllers are not very robust and do not allow satisfying more than a single specification;
4. the control signals are often excessive and/or unfeasible because of the chattering.

Taking into account that a very important problem is to force a process or a plant to track generic references, provided they are sufficiently regular (e.g. the generally continuous piecewise-linear signals easily produced by digital technologies), new theoretical results are needed by the scientific and engineering community in order to design control systems with non-standard references and/or disturbances and/or with ever harder specifications.

In the first part of this chapter, new results are stated and presented; they allow to design a controller of a SISO process, without zeros, with measurable state and with parametric uncertainties, such that the controlled system is of type one and has, for all the possible uncertain parameters, an assigned minimum constant gain and maximum time constant, or such that the controlled system tracks with a prefixed maximum error a generic reference with limited derivative, also when there is a generic disturbance with limited derivative, has an assigned maximum time constant and guarantees a good quality of the transient.

The proposed design techniques use a feedback control scheme with an integral action (Seraj and Tarokh, 1977), (Freeman and Kokotovic, 1995) and they are based on the choice of a

New Robust Tracking and Stabilization Methods

*v*

<sup>0</sup> <sup>1</sup> <sup>2</sup> <sup>3</sup> <sup>4</sup> <sup>5</sup> <sup>6</sup> <sup>7</sup> <sup>8</sup> <sup>9</sup> <sup>10</sup> -2

t

 . .

0 1 2 3 4 5 6 7 8 9 10

t

*<sup>a</sup> t* .

must be substituted with *<sup>d</sup> y* .

where the *maximum variation velocity* ˆ

control (Porter et al., 1970)-(Sastry, 1999).

1

*<sup>n</sup>*+<sup>1</sup> *k*

the ones (even if more conservative) of ,*<sup>i</sup> a b* .

Fig. 2. State feedback control scheme with an I control action.

preassigned settling time ˆ

output, in (2) *d*

By posing

*r*

δ

*K*


u, d

. . u, d

for Significant Classes of Uncertain Linear and Nonlinear Systems 249

<sup>1</sup> ˆ ˆ ( ) , 0, ( ), ( ) : max ( ) ( ) <sup>ˆ</sup> *r d r d r d <sup>t</sup>*



u, d

. .

**Remark 1.** Clearly if the initial state of the control system is not null and/or *r d* (0) (0) 0 − ≠ (and/or, more in general, *rt dt* () () − has discontinuities), the error *e t*( ) in (2) must be considered unless of a "free evolution", whose practical duration can be made minus that a

**Remark 2.** If disturbance *d* does not directly act on the output *y* , said *<sup>d</sup> y* its effect on the

This is one of the main and most realistic problem not suitable solved in the literature of

There exist several controllers able to satisfy the *1.* and/or *2* specifications. In the following, for brevity, is considered the well-known state feedback control law with an integral (I)

1 1 11

<sup>−</sup> =− = ≤ ≤ ≤ ≤ ≤≤

**Remark 3.** It is useful to note that often the state-space model of the process (1) is already in the corresponding companion form of the input-output model of the system (3) (think to the case in which this model is obtained experimentally by using e.g. Matlab command invfreqs); on the contrary, it is easy to transform the interval uncertainties of *ABC* , , into

− − + − +− +

<sup>1</sup> ... *n n*

1 2 1 2 ... *n n <sup>n</sup> ks ks k* − − + + +

*b s as a* <sup>−</sup> + + +

*n*

() ( ) , , ..., , ... *n n nnn n <sup>b</sup> G s C sI A B a a a a a ab bb*

*s* <sup>1</sup>

u, d δ

 − − − ∈ ≤ ∀≥ ∀ = −≤

*e t t rt dt r d*

Fig. 1. Possible reference or disturbance signals with limited derivative.

control action (Seraj and Tarokh, 1977), (Freeman and Kokotovic, 1995).

1

*s as a*

+ ++

in the Laplace domain the considered control scheme is the one of Fig. 2.

*r d* δ

[ ] 0 ,

<sup>−</sup> of *rt dt* () () − is a design specification.

 σ  σδ

0 1 2 3 4 5 6 7 8 9 10

t

. .

0 1 2 3 4 5 6 7 8 9 10

t

, (2)

, (3)

σ

suitable set of reference poles, on a proportionality parameter of these poles and on the theory of externally positive systems (Bru and Romero-Vivò, 2009).

The utility and efficiency of the proposed methods are illustrated with an attractive and significant example of position control.

In the second part of the chapter it is considered the uncertain pseudo-quadratic systems of

$$\text{the type } \ i = F\_i(y, \dot{y}, p)\mu + \left\lfloor \sum\_{i=1}^n F\_{zi}(y, \dot{y}, p)\dot{y}\_i \right\rfloor \dot{y} + f(t, y, \dot{y}, p)\prime \text{, where } t \in \mathbb{R} \text{ is the time, } y \in \mathbb{R}^n \text{ is the } $$

the output, *<sup>r</sup> u R* ∈ is the control input, *p R*μ ∈℘⊂ is the vector of uncertain parameters, with ℘ compact set, 1 *m r F R* <sup>×</sup> ∈ is limited and of rank *m* , 2 *mxm F R <sup>i</sup>* ∈ is limited and *<sup>m</sup> f* ∈ *R* is limited and models possible disturbances and/or particular nonlinearities of the system.

For this class of systems, including articulated mechanical systems, several theorems are stated which easily allow to determine robust control laws of the PD type, with a possible partial compensation, in order to force *y* and *y* to go to rectangular neighbourhoods (of the origin) with prefixed areas and with prefixed time constants characterizing the convergence of the error. Clearly these results allow also designing control laws to take and hold a generic articulated system in a generic posture less than prefixed errors also in the presence of parametric uncertainties and limited disturbances.

Moreover the stated theorems can be used to determine simple and robust control laws in order to force the considered class of systems to track a generic preassigned limited in "acceleration" trajectory, with preassigned majorant values of the maximum "position and/or velocity" errors and preassigned increases of the time constants characterizing the convergence of the error.

## **Part I**

### **2. Problem formulation and preliminary results**

Consider the SISO *n*-order system, linear, time-invariant and with uncertain parameters, described by

$$
\dot{\mathbf{x}} = A\mathbf{x} + Bu, \quad \mathbf{y} = \mathbf{C}\mathbf{x} + d\mathbf{l}, \tag{1}
$$

where: *<sup>n</sup> x R* ∈ is the state, *u R* ∈ is the control signal, *d R* ∈ is the disturbance or, more in general, the effect *<sup>d</sup> y* of the disturbance *d* on the output, *y* ∈ *R* is the output, *A AA* , − + ≤ ≤ *B BB* <sup>−</sup> <sup>+</sup> ≤ ≤ and *C CC* <sup>−</sup> <sup>+</sup> ≤ ≤ .

Suppose that this process is without zeros, is completely controllable and that the state is measurable.

Moreover, suppose that the disturbance *d* and the reference *r* are continuous signals with limited first derivative (see Fig. 1).

A main goal is to design a linear and time invariant controller such that:


t

248 Recent Advances in Robust Control – Novel Approaches and Design Methods

1. $\forall A \in \left[A^-, A^+\right]$, $\forall B \in \left[B^-, B^+\right]$, $\forall C \in \left[C^-, C^+\right]$, the control system is of type one, with constant gain $K_v \ge \hat{K}_v$ and maximum time constant $\tau_{\max} \le \hat{\tau}_{\max}$, where $\hat{K}_v$ and $\hat{\tau}_{\max}$ are design specifications, or

2. condition *1.* is satisfied and, in addition, in the hypothesis that the initial state of the control system is null and that $r(0) - d(0) = 0$, the tracking error $e(t)$ satisfies the relation

$$\left|e(t)\right| \le \frac{1}{\hat{K}_v}\,\hat{\delta}_{\dot{r}-\dot{d}}, \quad \forall t \ge 0, \quad \forall r(t),\, d(t):\ \delta_{\dot{r}-\dot{d}} = \max_{\sigma\in[0,t]}\left|\dot{r}(\sigma) - \dot{d}(\sigma)\right| \le \hat{\delta}_{\dot{r}-\dot{d}}, \tag{2}$$


Fig. 1. Possible reference or disturbance signals with limited derivative.

where the *maximum variation velocity* $\hat{\delta}_{\dot{r}-\dot{d}}$ of $r(t) - d(t)$ is a design specification.

**Remark 1.** Clearly, if the initial state of the control system is not null and/or $r(0) - d(0) \ne 0$ (and/or, more in general, $r(t) - d(t)$ has discontinuities), the error $e(t)$ in (2) must be considered up to a "free evolution" term, whose practical duration can be made smaller than a preassigned settling time $\hat{t}_a$.

**Remark 2.** If the disturbance $d$ does not act directly on the output $y$, and $y_d$ denotes its effect on the output, then in (2) $d$ must be replaced with $y_d$.

This is one of the main and most realistic problems not suitably solved in the control literature (Porter et al., 1970)-(Sastry, 1999).

There exist several controllers able to satisfy specifications *1.* and/or *2.* In the following, for brevity, the well-known state feedback control law with an integral (I) control action is considered (Seraj and Tarokh, 1977), (Freeman and Kokotovic, 1995). By posing

$$G(s) = C(sI - A)^{-1}B = \frac{b}{s^{n} + a_1 s^{n-1} + \dots + a_n}, \quad a_i^- \le a_i \le a_i^+, \ i = 1, \dots, n, \quad b^- \le b \le b^+, \tag{3}$$

in the Laplace domain the considered control scheme is the one of Fig. 2.

Fig. 2. State feedback control scheme with an I control action.

**Remark 3.** It is useful to note that often the state-space model of the process (1) is already in the companion form corresponding to the input-output model (3) (think of the case in which this model is obtained experimentally, e.g. by using the Matlab command invfreqs); on the contrary, it is easy to transform the interval uncertainties of $A, B, C$ into the (even if more conservative) ones of $a_i, b$.
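The correspondence recalled in Remark 3 can be checked numerically. The following Python sketch (all numerical values are illustrative, not taken from the chapter) builds a third-order process in controllable companion form and recovers the coefficients $a_i$ and $b$ of (3) with SciPy:

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative 3rd-order process, already in the companion form of (3):
# G(s) = b / (s^3 + a1 s^2 + a2 s + a3)
n = 3
a = np.array([6.0, 11.0, 6.0])    # a_1, a_2, a_3 (hypothetical values)
b = 2.0

A = np.zeros((n, n))
A[:-1, 1:] = np.eye(n - 1)        # integrator chain
A[-1, :] = -a[::-1]               # last row: -a_n, ..., -a_1
B = np.zeros((n, 1)); B[-1, 0] = b
C = np.zeros((1, n)); C[0, 0] = 1.0

# ss2tf recovers the input-output model (3) from the state-space model (1)
num, den = ss2tf(A, B, C, np.zeros((1, 1)))
print(np.round(den, 6))           # denominator coefficients: 1, a_1, a_2, a_3
print(np.round(num[0], 6))        # numerator coefficients: 0, 0, 0, b
```

Interval bounds on $a_i$ and $b$ can then be obtained by evaluating this map over the parameter box, which in general yields the (more conservative) interval model mentioned in the remark.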

New Robust Tracking and Stabilization Methods


for Significant Classes of Uncertain Linear and Nonlinear Systems 251


Moreover, note that almost always the controller is supplied with an actuator having gain $g_a$. In this case it can be posed $b \leftarrow b\,g_a$, also considering the possible uncertainty of $g_a$. Finally, it is clear that, for the controllability of the process, the parameter $b$ must always be non-null. In the following, without loss of generality, it is supposed that $b^- > 0$.

**Remark 4.** In the following it will be proved that, by using the control scheme of Fig. 2, if (2) is satisfied then the overshoot of the controlled system is always null.

From the control scheme of Fig. 2 it can be easily derived that

$$E(s) = s\,\frac{s^{n} + (a_1 + bk_1)s^{n-1} + \dots + (a_n + bk_n)}{s^{n+1} + (a_1 + bk_1)s^{n} + \dots + (a_n + bk_n)s + bk_{n+1}}\,(R(s) - D(s)) = S(s)(R(s) - D(s))\,. \tag{4}$$

If it is posed that

$$d(s) = s^{n+1} + (a_1 + bk_1)s^{n} + \dots + (a_n + bk_n)s + bk_{n+1} = s^{n+1} + d_1 s^{n} + \dots + d_n s + d_{n+1} \tag{5}$$

from (4) and by noting that the open loop transfer function is

$$F(s) = \frac{k_{n+1}}{s}\,\frac{b}{s^{n} + (a_1 + bk_1)s^{n-1} + \dots + (a_n + bk_n)} = \frac{d_{n+1}}{s\left(s^{n} + d_1 s^{n-1} + \dots + d_n\right)} \tag{6}$$

the sensitivity function *S s*( ) of the error and the constant gain *Kv* turn out to be:

$$S(s) = s\,\frac{s^{n} + d_1 s^{n-1} + \dots + d_n}{s^{n+1} + d_1 s^{n} + \dots + d_n s + d_{n+1}}, \quad K_v = \frac{d_{n+1}}{d_n} = \frac{bk_{n+1}}{a_n + bk_n}. \tag{7}$$

Moreover the sensitivity function *W s*( ) of the output is

$$W(s) = \frac{d_{n+1}}{s^{n+1} + d_1 s^{n} + \dots + d_n s + d_{n+1}}\,. \tag{8}$$
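A quick numerical sketch of (5), (7) and (8), with illustrative numbers ($n = 2$, values not taken from the chapter): building $d(s)$ from the gains shows $W(0) = 1$, i.e. the type-one behaviour, and gives $K_v$ directly:

```python
import numpy as np

# Illustrative process s^2 + a1 s + a2 with gain b, state gains k1, k2
# and integral gain k3; closed-loop polynomial d(s) as in (5).
a1, a2, b = 3.0, 2.0, 1.5
k1, k2, k3 = 4.0, 6.0, 8.0

d = np.array([1.0, a1 + b*k1, a2 + b*k2, b*k3])   # coefficients of d(s), (5)

# Constant gain K_v = d_{n+1}/d_n, as in (7): here b k3/(a2 + b k2) = 12/11
Kv = d[3] / d[2]

# Output sensitivity W(s) = d_{n+1}/d(s), as in (8): W(0) = 1, i.e. the
# closed loop is of type one (zero steady-state position error).
W0 = d[3] / np.polyval(d, 0.0)
print(Kv, W0)
```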

**Definition 1.** A symmetric set of $n + 1$ complex numbers with negative real part, $\overline{P} = \left\{\overline{p}_1, \overline{p}_2, \dots, \overline{p}_{n+1}\right\}$, normalized such that $\prod_{i=1}^{n+1}(-\overline{p}_i) = 1$, is said to be a set of *reference poles*. Let

$$\overline{d}(s) = s^{n+1} + \overline{d}_1 s^{n} + \dots + \overline{d}_n s + \overline{d}_{n+1} \tag{9}$$

be the polynomial whose roots are a preassigned set of reference poles $\overline{P}$. By choosing the poles $P$ of the control system equal to $\rho\overline{P}$, with $\rho$ positive, it is

$$d(s) = s^{n+1} + \rho\overline{d}_1 s^{n} + \dots + \rho^{n}\overline{d}_n s + \rho^{n+1}\overline{d}_{n+1}\,. \tag{10}$$
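Definition 1 and property (10) are easy to check numerically; a short sketch with poles all in $-1$ (so that the normalization $\prod(-\overline{p}_i) = 1$ holds automatically), values illustrative:

```python
import numpy as np

# Reference poles per Definition 1: poles all in -1 for n+1 = 3;
# (s+1)^3 has constant term 1, so prod(-p_i) = 1 is satisfied.
P_bar = np.array([-1.0, -1.0, -1.0])
d_bar = np.poly(P_bar)                    # [1, 3, 3, 1]

rho = 2.5
d_rho = np.poly(rho * P_bar)              # polynomial with poles rho*P_bar

# Property (10): the i-th coefficient scales as rho^i * d_bar_i
expected = d_bar * rho ** np.arange(len(d_bar))
print(np.allclose(d_rho, expected))       # True
```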

Moreover, denoting by $s_p(t)$ the impulse response of the system having transfer function

$$S_p(s) = \frac{1}{s}S(s) = \frac{s^{n} + d_1 s^{n-1} + \dots + d_n}{s^{n+1} + d_1 s^{n} + \dots + d_n s + d_{n+1}} \tag{11}$$

from (4) and from the first of (7) it is

$$\left|e(t)\right| \le \int_0^t \left|s_p(\tau)\right|\left|\dot{r}(t-\tau) - \dot{d}(t-\tau)\right| d\tau, \quad \text{where } s_p(t) = \mathcal{L}^{-1}\left(S_p(s)\right), \tag{12}$$

from which, if all the poles of $S_p(s)$ have negative real part, it is

$$\left|e(t)\right| \le \frac{1}{H_v}\,\delta_{\dot{r}-\dot{d}}\,, \tag{13}$$

where


$$H_v = \frac{1}{\int_0^{\infty}\left|s_p(\tau)\right| d\tau}, \quad \delta_{\dot{r}-\dot{d}} = \max_{\sigma\in[0,t]}\left|\dot{r}(\sigma) - \dot{d}(\sigma)\right|. \tag{14}$$
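The absolute constant gain (14) is easy to evaluate numerically. A sketch with illustrative closed-loop poles all in $-1$ ($n = 2$): here $s_p(t) \ge 0$, so the integral of $|s_p|$ equals $S_p(0) = d_n/d_{n+1}$ and $H_v$ coincides with $K_v = 1/3$:

```python
import numpy as np
from scipy.signal import impulse

# Illustrative case: closed-loop poles all in -1 (n = 2), so that
# d(s) = (s+1)^3 and, per (11), S_p(s) = (s^2 + 3s + 3) / (s+1)^3.
den = [1.0, 3.0, 3.0, 1.0]
num = [1.0, 3.0, 3.0]

t = np.linspace(0.0, 40.0, 20001)
t, sp = impulse((num, den), T=t)

f = np.abs(sp)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))  # trapezoid rule
Hv = 1.0 / integral          # (14)
Kv = den[3] / den[2]         # (7): d_{n+1} / d_n
print(Hv, Kv)                # here s_p(t) >= 0, so Hv matches Kv = 1/3
```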

**Remark 5.** Note that, while the constant gain $K_v$ allows computing the steady-state tracking error for a ramp reference signal, $H_v$, denoted *absolute constant gain*, allows obtaining, $\forall t$, an upper estimate of the tracking error for a generic reference with limited derivative. On this basis, it is very interesting, from both a theoretical and a practical point of view, to establish the conditions for which $H_v = K_v$.

In order to establish the condition necessary for the equality of the absolute constant gain $H_v$ with the constant gain $K_v$, and to provide some methods to choose the poles $\overline{P}$ and $\rho$, the following preliminary results are needed. They concern the main parameters of the sensitivity function $W(s)$ of the output and the externally positive systems, i.e. the systems with non-negative impulse response.

**Theorem 1.** Let $\overline{s}$, $\overline{t}_s$, $\overline{t}_a$, $\overline{\omega}_s$ be the overshoot, the rise time, the settling time and the upper cutoff angular frequency of

$$\overline{W}(s) = \frac{\overline{d}_{n+1}}{\overline{d}(s)} = \frac{\overline{d}_{n+1}}{s^{n+1} + \overline{d}_1 s^{n} + \dots + \overline{d}_n s + \overline{d}_{n+1}} \tag{15}$$

and

$$\overline{K}_v = \frac{\overline{d}_{n+1}}{\overline{d}_n}, \quad \overline{H}_v = \frac{1}{\int_0^{\infty}\left|\overline{s}_p(\tau)\right| d\tau}, \quad \text{where}\quad \overline{s}_p(t) = \mathcal{L}^{-1}\left(\frac{s^{n} + \overline{d}_1 s^{n-1} + \dots + \overline{d}_n}{s^{n+1} + \overline{d}_1 s^{n} + \dots + \overline{d}_n s + \overline{d}_{n+1}}\right), \tag{16}$$

then the corresponding values of $s$, $t_s$, $t_a$, $\omega_s$, $K_v$, $H_v$ when $\rho \ne 1$ turn out to be:

$$s = \overline{s}, \quad t_s = \frac{\overline{t}_s}{\rho}, \quad t_a = \frac{\overline{t}_a}{\rho}, \quad \omega_s = \rho\,\overline{\omega}_s, \quad K_v = \rho\overline{K}_v, \quad H_v = \rho\overline{H}_v. \tag{17}$$

**Proof.** By using the change of scale property of the Laplace transform, (8) and (10) it is

$$\begin{split} w_{-1}\left(\frac{t}{\rho}\right) &= \mathcal{L}^{-1}\left(\rho\,\frac{\rho^{n+1}\overline{d}_{n+1}}{(\rho s)^{n+1} + \rho\overline{d}_{1}(\rho s)^{n} + \dots + \rho^{n}\overline{d}_{n}(\rho s) + \rho^{n+1}\overline{d}_{n+1}}\,\frac{1}{\rho s}\right) = \\ &= \mathcal{L}^{-1}\left(\frac{\overline{d}_{n+1}}{s^{n+1} + \overline{d}_{1}s^{n} + \dots + \overline{d}_{n}s + \overline{d}_{n+1}}\,\frac{1}{s}\right) = \overline{w}_{-1}(t). \end{split} \tag{18}$$


By using again the change of scale property of the Laplace transform, by taking into account (10) and (11) it is

$$\begin{split} s_p\left(\frac{t}{\rho}\right) &= \mathcal{L}^{-1}\left(\rho\,\frac{(\rho s)^{n} + \rho\overline{d}_1(\rho s)^{n-1} + \dots + \rho^{n}\overline{d}_n}{(\rho s)^{n+1} + \rho\overline{d}_1(\rho s)^{n} + \dots + \rho^{n}\overline{d}_n(\rho s) + \rho^{n+1}\overline{d}_{n+1}}\right) = \\ &= \mathcal{L}^{-1}\left(\frac{s^{n} + \overline{d}_1 s^{n-1} + \dots + \overline{d}_n}{s^{n+1} + \overline{d}_1 s^{n} + \dots + \overline{d}_n s + \overline{d}_{n+1}}\right) = \overline{s}_p(t), \end{split} \tag{19}$$

from which

$$\int_0^{\infty}\left|s_p(\tau)\right| d\tau = \frac{1}{\rho}\int_0^{\infty}\left|s_p\left(\frac{t}{\rho}\right)\right| dt = \frac{1}{\rho}\int_0^{\infty}\left|\overline{s}_p(t)\right| dt\,. \tag{20}$$

From the second of (7) and from (10), (14), (18), (20) the proof easily follows.
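Theorem 1 can also be verified numerically; a sketch (illustrative poles in $-1$, $\rho = 2$) checking the time-scaling identity (18) between step responses:

```python
import numpy as np
from scipy.signal import step

# Check of (18): with poles rho*P_bar the step response is the reference
# one with time scaled by 1/rho. Illustrative reference poles in -1.
P_bar = np.array([-1.0, -1.0, -1.0])
rho = 2.0

def closed_loop_step(poles, t):
    den = np.poly(poles)
    num = [den[-1]]          # W(s) = d_{n+1}/d(s), as in (8)
    return step((num, den), T=t)[1]

t = np.linspace(0.0, 12.0, 2001)
w_ref = closed_loop_step(P_bar, t)               # w_bar_{-1}(t)
w_rho = closed_loop_step(rho * P_bar, t / rho)   # w_{-1}(t/rho)

print(np.max(np.abs(w_ref - w_rho)))             # ~0: the curves coincide
```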

**Theorem 2.** Let $\overline{a}_i \in \left[a_i^-, a_i^+\right]$, $i = 1, 2, \dots, n$, and $\overline{b} \in \left[b^-, b^+\right]$ be the nominal values of the parameters of the process and $\hat{P} = \hat{\rho}\overline{P}$ the desired nominal poles. Then the parameters of the controller, designed by using the nominal parameters of the process and the nominal poles, turn out to be:

$$
\hat{k}_i = \frac{\hat{\rho}^i \overline{d}_i - \overline{a}_i}{\overline{b}}, \ i = 1, 2, \dots, n, \quad \hat{k}_{n+1} = \frac{\hat{\rho}^{n+1}\overline{d}_{n+1}}{\overline{b}}. \tag{21}
$$

Moreover the polynomial of the effective poles and the constant gain are:

$$d(s) = \hat{d}(s) + h\hat{m}(s) + \delta(s) \tag{22}$$

$$K_v = \frac{\hat{\rho}^{n+1}\overline{d}_{n+1}}{\frac{a_n}{1+h} - \overline{a}_n + \hat{\rho}^{n}\overline{d}_n} = \frac{\hat{d}_{n+1}}{\frac{a_n}{1+h} - \overline{a}_n + \hat{d}_n}\,, \tag{23}$$

where:

$$\hat{d}(s) = s^{n+1} + \overline{d}_1\hat{\rho}s^{n} + \dots + \overline{d}_n\hat{\rho}^{n}s + \overline{d}_{n+1}\hat{\rho}^{n+1} = s^{n+1} + \hat{d}_1 s^{n} + \dots + \hat{d}_n s + \hat{d}_{n+1} \tag{24}$$

$$
\hat{m}(s) = \overline{d}_1\hat{\rho}s^{n} + \dots + \overline{d}_n\hat{\rho}^{n}s + \overline{d}_{n+1}\hat{\rho}^{n+1} = \hat{d}_1 s^{n} + \dots + \hat{d}_n s + \hat{d}_{n+1} \tag{25}
$$

$$\begin{split} h &= \frac{\Delta b}{\overline{b}}, \quad \delta(s) = \overline{a}_1\left(\frac{\Delta a_1}{\overline{a}_1} - \frac{\Delta b}{\overline{b}}\right)s^{n} + \dots + \overline{a}_n\left(\frac{\Delta a_n}{\overline{a}_n} - \frac{\Delta b}{\overline{b}}\right)s \\ \Delta b &= b - \overline{b}, \quad \Delta a_1 = a_1 - \overline{a}_1, \dots, \Delta a_n = a_n - \overline{a}_n. \end{split} \tag{26}$$

**Proof.** The proof is obtained by making standard manipulations starting from (5), from the second of (7) and from (10). For brevity it has been omitted.

**Theorem 3.** The coefficients $d$ of the polynomial

$$d(s - \hat{\alpha}) = s^{n+1} + [s^n \ s^{n-1} \dots s \ 1]d, \quad \hat{\alpha} = 1/\hat{\tau}\_{\text{max}} \ . \tag{27}$$

where $d(s)$ is the polynomial (5) or (22), are given by using the affine transformation

$$d = \begin{bmatrix} 1 & 0 & \dots & 0 & 0 \\ \binom{n}{1}(-\hat{\alpha}) & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \binom{n}{n-1}(-\hat{\alpha})^{n-1} & \binom{n-1}{n-2}(-\hat{\alpha})^{n-2} & \dots & 1 & 0 \\ \binom{n}{n}(-\hat{\alpha})^{n} & \binom{n-1}{n-1}(-\hat{\alpha})^{n-1} & \dots & \binom{1}{1}(-\hat{\alpha}) & 1 \end{bmatrix}\begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_n \\ d_{n+1} \end{bmatrix} + \begin{bmatrix} \hat{\chi}_1 \\ \hat{\chi}_2 \\ \vdots \\ \hat{\chi}_n \\ \hat{\chi}_{n+1} \end{bmatrix} \tag{28}$$

where


$$
\begin{bmatrix} \hat{\chi}_1 \\ \hat{\chi}_2 \\ \vdots \\ \hat{\chi}_{n+1} \end{bmatrix} = \begin{bmatrix} \binom{n+1}{1}(-\hat{\alpha}) \\ \binom{n+1}{2}(-\hat{\alpha})^{2} \\ \vdots \\ \binom{n+1}{n+1}(-\hat{\alpha})^{n+1} \end{bmatrix}. \tag{29}
$$

**Proof.** The proof is obtained by making standard manipulations and for brevity it has been omitted.
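A small numerical sketch tying Theorems 2 and 3 together (all values illustrative, not from the chapter): the gains are computed via (21) for reference poles in $-1$, the nominal closed-loop polynomial is formed as in (5), and the Hurwitz property of $d(s - \hat{\alpha})$ from (27) is checked through the shifted roots (equivalently obtainable through the affine transformation (28)):

```python
import numpy as np

# Illustrative nominal process (n = 2) and reference poles in -1.
a_bar = np.array([3.0, 2.0])         # nominal a_1, a_2
b_bar = 1.5
d_bar = np.poly([-1.0, -1.0, -1.0])  # [1, 3, 3, 1]
rho = 4.0

powers = rho ** np.arange(1, 3)              # rho^1, rho^2
k = np.concatenate(((powers * d_bar[1:3] - a_bar) / b_bar,
                    [rho**3 * d_bar[3] / b_bar]))   # gains from (21)

# Nominal closed-loop polynomial d(s), as in (5)/(24): poles at -rho
d = np.array([1.0, a_bar[0] + b_bar*k[0],
              a_bar[1] + b_bar*k[1], b_bar*k[2]])

alpha = 1.0 / 2.0                        # alpha_hat = 1/tau_hat_max, (27)
shifted = np.poly(np.roots(d) + alpha)   # coefficients of d(s - alpha_hat)

hurwitz = bool(np.all(np.roots(shifted).real < 0))
print(hurwitz)   # all time constants below tau_hat_max
```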

Now, as pre-announced, some preliminary results about the externally positive systems are stated.

**Theorem 4.** Connecting in series two or more SISO linear, time-invariant and externally positive systems, another externally positive system is obtained.

**Proof.** If $W_1(s)$ and $W_2(s)$ are the transfer functions of two SISO externally positive systems, then $w_1(t) = \mathcal{L}^{-1}\left(W_1(s)\right) \ge 0$ and $w_2(t) = \mathcal{L}^{-1}\left(W_2(s)\right) \ge 0$. From this and considering that

$$w(t) = \mathcal{L}^{-1}\left(W_1(s)W_2(s)\right) = \int_0^t w_1(t - \tau)w_2(\tau)\,d\tau \tag{30}$$

the proof follows.

**Theorem 5.** A third-order SISO linear and time-invariant system with transfer function

$$W(s) = \frac{1}{(s - p)\left[(s - \alpha)^2 + \omega^2\right]}\,, \tag{31}$$

i.e. without zeros, with a real pole $p$ and a couple of complex poles $\alpha \pm j\omega$, is externally positive iff $\alpha \le p$, i.e. iff the real pole is not on the left of the couple of complex poles.

**Proof.** By using the translation property of the Laplace transform it is


$$w(t) = \mathcal{L}^{-1}\left(\frac{1}{(s - p)\left[(s - \alpha)^2 + \omega^2\right]}\right) = e^{pt}\,\mathcal{L}^{-1}\left(\frac{1}{s\left[(s - \alpha + p)^2 + \omega^2\right]}\right) = \frac{e^{pt}}{\omega}\int_0^t e^{(\alpha - p)\tau}\sin\omega\tau\, d\tau. \tag{32}$$

Note that the signal $v(t) = e^{(\alpha - p)t}\sin\omega t$ is composed of a succession of alternately positive and negative semi-waves. Therefore the integral $v_i(t)$ of this signal is non-negative iff the succession of the absolute values of the areas of the considered semi-waves is non-increasing. Clearly this fact occurs iff the factor $e^{(\alpha - p)t}$ is non-increasing, i.e. iff $\alpha - p \le 0$, from which the proof derives.
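The iff condition of Theorem 5 can be probed numerically (illustrative pole values): the minimum of the impulse response stays at zero when $\alpha \le p$ and becomes clearly negative when $\alpha > p$:

```python
import numpy as np
from scipy.signal import impulse

# Third-order system 1/((s-p)((s-alpha)^2+omega^2)) as in (31):
# per Theorem 5, its impulse response is nonnegative iff alpha <= p.
def min_impulse(p, alpha, omega):
    den = np.poly([p, alpha + 1j*omega, alpha - 1j*omega]).real
    t = np.linspace(0.0, 30.0, 6001)
    return impulse(([1.0], den), T=t)[1].min()

print(min_impulse(-1.0, -2.0, 3.0))  # alpha <= p: min stays ~0
print(min_impulse(-2.0, -1.0, 3.0))  # alpha > p: min is negative
```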

From Theorems 4 and 5 easily follows that:


By using the above proposed results the following main results can be stated.

## **3. First main result**

The following main result, useful to design a robust controller satisfying the required specifications *1*, holds.

**Theorem 6.** Give the process (3) with limited uncertainties, a set of reference poles *P* and some design values ˆ *Kv* and max τˆ . If it is chosen *b b*<sup>−</sup> = and *n n a a*<sup>+</sup> = then ˆ ˆ ∀ ≥ ρ ρ*<sup>K</sup>* , where

$$
\hat{\rho}\_{\chi} = \hat{K}\_v \frac{\overline{d}\_n}{\overline{d}\_{n+1}},
\tag{33}
$$

the constant gain *Kv* of the control system of Fig. 2, with a controller designed by using (21), is not minus than ˆ *Kv* , , , 1, 2 ,..., , *i ii a aa i n* − + ∀∈ = ⎡ ⎤ ⎣ ⎦ and *b bb* , − + ∀ ∈ ⎡ ⎤ ⎣ ⎦ . Moreover, by choosing the poles *P* all in −1 or of Bessel or of Butterworth, for ˆ ˆ ρ ρτ, where

$$
\hat{\rho}\_{\tau} = -\frac{1}{\hat{\tau}\_{\text{max}} \, \text{max} \text{Real}(\overline{P})} \; \text{\,\, \text{'} \,\tag{34}
$$

the polynomial *d s*( ) −αˆ given by (27) is *hurwitzian* , , *i ii a aa* − + ∀ ∈ ⎡ ⎤ ⎣ ⎦ *i n* = 1, 2 ,..., , and *b bb* , − + ∀ ∈ ⎡ ⎤ ⎣ ⎦ .

**Proof.** The proof of the first part of the theorem easily follows from (23) and from the fact that *b b*<sup>−</sup> = and *n n a a*<sup>+</sup> = .

In order to prove the second part of the theorem note that, from (22), (24), (25) and (26), for ˆ ˆ ρ ρτ it is ˆ *d s d s d s hn s h* ( ) ( ) ( ) ( ), 0. ≅ =+ ≥ ˆ Since for Δ*b h* = ⇔= 0 ( 0) the roots of ( ) *d s* are equal to the ones of ˆ *d s*( ) and the zeros of *n s* ˆ( ) are always on the right of the roots of ˆ *d s*( ) and on the left of the imaginary axis (see Figs. 3, 4; from Fig. 4 it is possible to note that if the poles *P* are all in −1 then the zeros of *n s* ˆ( ) have real part equal to −ρˆ / 2 ), it is that the root locus of ( ) *d s* has a negative real asymptote and *n* branches which go to the roots of *n s* ˆ( ). From this consideration the second part of the proof follows.

From Theorems 3 and 6 several algorithms to design a controller such that , , 1, 2 ,..., , *i ii a aa i n* − + ∀∈ = ⎡ ⎤ ⎣ ⎦ and *b bb* , − + ∀ ∈ ⎡ ⎤ ⎣ ⎦ the controlled system of Fig. 2 is of type one, with constant gain <sup>ˆ</sup> *K K v v* <sup>≥</sup> and maximum time constant max max τ ≤τˆ , where ˆ *Kv* and max τˆ are design specifications (*robustness of the constant gain and of the maximum time constant with respect to the parametric uncertainties of the process*).

Fig. 3. Root locus of *d s*( ) , Bessel poles, 1 5 *n n <sup>c</sup>* = + = and ρ= 1 .

Fig. 4. Root locus of ( ) *d s* , coincident poles, 1 5 *n n <sup>c</sup>* = + = and 1 ρ= .

A very simple algorithm is the following.

### **Algorithm 1**
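The Kharitonov step of Algorithm 1 can be sketched numerically as follows. This is an illustrative helper, not the chapter's code: it assumes the closed-loop coefficient intervals have already been computed and are ordered from $s^0$ upwards.

```python
import numpy as np

def kharitonov_polys(lo, hi):
    """The four Kharitonov polynomials of an interval polynomial.
    lo, hi: coefficient bounds ordered from s^0 up to the leading term."""
    pats = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
    return [np.array([hi[i] if pat[i % 4] else lo[i] for i in range(len(lo))])
            for pat in pats]

def interval_hurwitz(lo, hi):
    """An interval polynomial is hurwitzian iff its four Kharitonov
    polynomials are (Kharitonov's theorem)."""
    for c in kharitonov_polys(lo, hi):
        if np.any(np.roots(c[::-1]).real >= 0):  # np.roots wants highest power first
            return False
    return True

# s^2 + [2, 3] s + [1, 2]: every vertex is stable -> hurwitzian
print(interval_hurwitz([1, 2, 1], [2, 3, 1]))   # True
# s^2 + [-1, 1] s + [1, 2]: one Kharitonov polynomial is unstable
print(interval_hurwitz([1, -1, 1], [2, 1, 1]))  # False
```

Kharitonov's theorem assumes independently varying coefficients; since the closed-loop coefficients here depend jointly on $b$, the test is conservative, which is consistent with increasing $\hat{\rho}$ iteratively until it passes.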

254 Recent Advances in Robust Control – Novel Approaches and Design Methods

**Remark 6.** Note that, if the uncertainties of the process are small enough and $\hat{\rho}$ is chosen big enough, it is $d(s) \cong \hat{d}(s)$. Therefore, by using Theorem 1, it turns out that $s \cong \overline{s}$, $t_s \cong \overline{t}_s/\hat{\rho}$, $t_a \cong \overline{t}_a/\hat{\rho}$, $\omega_s \cong \hat{\rho}\,\overline{\omega}_s$, $K_v \cong \hat{\rho}\,\overline{K}_v$. Moreover, if the poles $\overline{P}$ are equal to $-1$ or are of Bessel or of Butterworth type, the values of $\overline{s}$, $\overline{t}_s$, $\overline{t}_a$, $\overline{\omega}_s$, $\overline{K}_v$ (intensively studied in optimization theory) are well known and/or easily computed (Butterworth, 1930), (Paarmann, 2001).
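As a numeric sanity check of the scalings in Remark 6, assume (as (33) and (39) jointly suggest) that the design polynomial is $\hat{d}(s) = \hat{\rho}^{\,n+1}\,\overline{d}(s/\hat{\rho})$; then the poles speed up by $\hat{\rho}$ and $K_v$ scales by $\hat{\rho}$. A sketch, not the chapter's code:

```python
import numpy as np

rho = 4.0
dbar = np.poly([-1.0, -2.0, -3.0])        # d̄(s), coefficients highest power first
# d̂(s) = ρ^{n+1} d̄(s/ρ): the coefficient of s^{n+1-i} is multiplied by ρ^i
dhat = dbar * rho ** np.arange(dbar.size)

print(np.sort(np.roots(dhat).real))       # poles scaled by ρ: -12, -8, -4
print((dhat[-1] / dhat[-2]) / (dbar[-1] / dbar[-2]))  # ≈ 4.0: K_v = d_{n+1}/d_n scales by ρ
```

The second line is exactly the mechanism behind (33): asking $K_v = \hat{\rho}\,\overline{d}_{n+1}/\overline{d}_n \ge \hat{K}_v$ gives $\hat{\rho}_K = \hat{K}_v\,\overline{d}_n/\overline{d}_{n+1}$.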

## **4. Second main result**

The following fundamental result, which is the key to designing a robust controller satisfying the required specifications *2.*, is stated.


**Theorem 7.** Consider the process (3) with limited uncertainties and assigned design values of $\hat{K}_v$ and $\hat{\delta}_{r-d}$. If there exist a set of reference poles $\overline{P}$ and a $\hat{\rho}$ such that, with $\hat{k}_j$, $j = 1, 2, \dots, n+1$, provided by (21), $\forall a_i \in \left[a_i^-, a_i^+\right]$, $i = 1, 2, \dots, n$, and $\forall b \in \left[b^-, b^+\right]$ the transfer function

$$W(s) = \frac{d_{n+1}}{s^{n+1} + d_1 s^n + \dots + d_n s + d_{n+1}} = \frac{b\hat{k}_{n+1}}{s^{n+1} + (a_1 + b\hat{k}_1)s^n + \dots + (a_n + b\hat{k}_n)s + b\hat{k}_{n+1}} \tag{35}$$

is strictly *hurwitzian and externally positive* and $K_v = d_{n+1}/d_n \ge \hat{K}_v$, then, in the hypothesis that the initial state of the control system of Fig. 2 with $k_i = \hat{k}_i$ is null and that $r(0) - d(0) = 0$, the corresponding tracking error $e(t)$, $\forall a_i \in \left[a_i^-, a_i^+\right]$, $i = 1, 2, \dots, n$, and $\forall b \in \left[b^-, b^+\right]$, always satisfies relation

$$\left|e(t)\right| \le \frac{1}{\hat{K}_v}\,\hat{\delta}_{r-d}, \quad \forall t \ge 0, \quad \forall r(t),\, d(t):\ \delta_{r-d} = \max_{\sigma \in [0, t]} \left|\dot{r}(\sigma) - \dot{d}(\sigma)\right| \le \hat{\delta}_{r-d}. \tag{36}$$

Moreover the overshoot *s* is always null.

**Proof.** Note that the function $S_p(s)$ given by (11) is

$$S_p(s) = \frac{1}{s}\left(1 - W(s)\right). \tag{37}$$

Hence

$$s_p(t) = 1 - w_{-1}(t). \tag{38}$$

Since, by hypothesis, $w(t)$ is non-negative, $w_{-1}(t) = \int_0^t w(\tau)\,d\tau$ is non-decreasing with final value $W(s)\big|_{s=0} = 1$. Therefore $s_p(t)$ is surely non-negative. From this, by taking into account (7), (13) and (14), it follows that

$$H_v = \frac{1}{\int_0^\infty \left|s_p(\tau)\right| d\tau} = \frac{1}{\int_0^\infty s_p(\tau)\, d\tau} = \frac{1}{S_p(s)\big|_{s=0}} = \frac{1}{d_n/d_{n+1}} = \frac{d_{n+1}}{d_n} = K_v \ge \hat{K}_v \tag{39}$$

and hence the proof.

**Remark 7.** The choice of $\overline{P}$ and the determination of a $\hat{\rho}$ such that (36) is valid are very simple if the uncertainties of the process are null. Indeed, by using Theorems 4 and 5, it is sufficient to choose $\overline{P}$ with all the poles real, or with at least a real pole not on the left of each couple of complex poles (e.g. $\overline{P} = \{-1, -1\}$, $\overline{P} = \{-1, -1+i, -1-i\}$, $\overline{P} = \{-1, -1, -1+i, -1-i\}$, ...), and then to compute $\hat{\rho}$ by using the relation $\hat{\rho} = \hat{K}_v \overline{d}_n / \overline{d}_{n+1}$.

If the process has parametric uncertainties, it is intuitive that the choice of $\overline{P}$ can be made with at least a real pole dominant with respect to each couple of complex poles; one can then proceed by using the theorems of Sturm and/or Kharitonov, or with new results, or directly with the command *roots* and the *Monte Carlo* method.
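The *roots* + *Monte Carlo* check just mentioned can be sketched as follows, assuming the closed-loop denominator $d(s) = s^{n+1} + (a_1 + b\hat{k}_1)s^n + \dots + (a_n + b\hat{k}_n)s + b\hat{k}_{n+1}$ (the function name and sampling scheme are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_check(k, a_lo, a_hi, b_lo, b_hi, tau_hat, trials=2000):
    """Sample the uncertain parameters and verify, from the roots of d(s),
    that the slowest closed-loop mode is real and faster than 1/tau_hat."""
    n = len(a_lo)
    for _ in range(trials):
        a = rng.uniform(a_lo, a_hi)           # one sample of [a_1, ..., a_n]
        b = rng.uniform(b_lo, b_hi)
        den = np.concatenate(([1.0], a + b * np.asarray(k[:n]), [b * k[n]]))
        r = np.roots(den)
        j = np.argmax(r.real)
        if abs(r[j].imag) > 1e-9 or r[j].real > -1.0 / tau_hat:
            return False                      # complex dominant mode, or too slow
    return True
```

With the numbers of Example 1 below ($a_1 \in [1.8, 2.7]$, $a_2 = 0$, $b \in [310, 512]$ and the $\hat{K}_v = 2$ gains) this check passes for $\hat{\tau}_{\max} = 0.4\,s$ and fails for $0.2\,s$.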

Regarding this the following main theorem holds.



**Theorem 8.** Given the process (3) with limited uncertainties and with assigned nominal values of its parameters, suppose that there exists a set of reference poles $\overline{P} = \left\{\overline{p}_1, \overline{p}_2, \dots, \overline{p}_{n+1}\right\}$ such that the system

$$\overline{W}_h(s) = \frac{1}{\overline{d}(s) + h\,\overline{n}(s)}, \quad \overline{d}(s) = \prod_{i=1}^{n+1}(s - \overline{p}_i) = s^{n+1} + \overline{d}_1 s^n + \dots + \overline{d}_n s + \overline{d}_{n+1}, \quad \overline{n}(s) = \overline{d}(s) - s^{n+1}, \tag{40}$$

is externally positive $\forall h \ge 0$. Then, for $\hat{\rho}$ big enough, the control system of Fig. 2, with $k_j = \hat{k}_j$, $j = 1, 2, \dots, n+1$, given by (21), is externally positive $\forall a_i \in \left[a_i^-, a_i^+\right]$, $i = 1, 2, \dots, n$, and $\forall b \in \left[b^-, b^+\right]$.

**Proof.** Note that, taking into account (22), (24), (25) and (26), for $\hat{\rho}$ big enough it is $d(s) \cong \hat{d}(s) + h\,\hat{n}(s)$, $h \ge 0$. From this the proof easily follows.

In the following, for brevity, the second-, third- and fourth-order control systems will be considered.

**Theorem 9.** Some sets of reference poles $\overline{P}$ which satisfy Theorem 8 are: $\overline{P} = \{-1, -\alpha\}$ with $\alpha > 1$ (e.g. $\alpha = 1.5, 2, \dots$); $\overline{P} = \left\{-1, -\alpha + i\omega, -\alpha - i\omega\right\}$ with $\alpha > 1$ and $\omega$ such that the roots of $\overline{n}(s)$ are real (e.g. $\alpha = 1.5$ and $\omega \ge 2.598$, $\alpha = 2$ and $\omega \ge 2.544, \dots$); $\overline{P} = \{-1, -\alpha, -\alpha, -\alpha\}$ with $\alpha > 1$ (e.g. $\alpha = 1.5, 2, \dots$).

**Proof.** The proof easily follows from the root loci of $d(s) = \overline{d}(s) + h\,\overline{n}(s)$ (see Figs. 5, 6).

Fig. 5. Root locus of $d(s)$, $n_c = n + 1 = 3$.

Fig. 6. Root locus of $d(s)$, $n_c = n + 1 = 4$.
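The external positivity required by Theorem 8 can also be checked numerically, by integrating the impulse response of $1/(\overline{d}(s) + h\,\overline{n}(s))$ on a grid of $h \ge 0$ and checking its sign. A rough sketch with our own companion-matrix discretization and illustrative tolerances:

```python
import numpy as np

def impulse_nonnegative(den, t_end=20.0, n_steps=20000):
    """True if the impulse response of 1/den(s) never goes (noticeably) negative.
    den: monic coefficients, highest power first."""
    m = len(den) - 1
    A = np.zeros((m, m))
    A[:-1, 1:] = np.eye(m - 1)
    A[-1, :] = -np.asarray(den)[:0:-1]     # companion matrix of den(s)
    x = np.zeros(m); x[-1] = 1.0           # impulse <=> y^(m-1)(0+) = 1
    dt = t_end / n_steps
    y_min = 0.0
    for _ in range(n_steps):
        # RK4 step of x' = A x; the output is y = x[0]
        k1 = A @ x
        k2 = A @ (x + 0.5 * dt * k1)
        k3 = A @ (x + 0.5 * dt * k2)
        k4 = A @ (x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        y_min = min(y_min, x[0])
    return y_min > -1e-6

# P̄ = {-1, -α} with α = 1.5 (first set in Theorem 9): externally positive
print(impulse_nonnegative([1.0, 2.5, 1.5]))   # True
# an underdamped complex pair alone is not:
print(impulse_nonnegative([1.0, 0.4, 1.0]))   # False
```

For a given $\overline{P}$ one would build $\overline{d}$ with `np.poly(poles)`, form $\overline{n}$ by dropping the leading coefficient, and run the test over a grid of $h$.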

To verify the external positivity of a third-order system, the following theorems are useful.

**Theorem 10**. Let be

$$W(s) = \frac{d_3}{s^3 + d_1 s^2 + d_2 s + d_3} = \frac{d_3}{d(s)} \tag{41}$$


an asymptotically stable system. If

$$\delta = 27\,d\!\left(-d_1/3\right) = 2d_1^3 - 9d_1 d_2 + 27d_3 < 0 \tag{42}$$

then the poles of $W(s)$ are all real, or the real pole is on the right of the remaining couple of complex poles, i.e. the system is externally positive.

**Proof.** Let $p_1, p_2, p_3$ be the poles of $W(s)$ and note that the "barycentre" $x_c = -d_1/3$ is in the interval $\left[\min \operatorname{Real}(p_i), \max \operatorname{Real}(p_i)\right]$. Hence, if relation (42) is satisfied, since $d(0) = d_3 > 0$, the interval $[x_c, 0]$ contains a real pole (see Figs. 7, 8). From this the proof easily follows.

Fig. 7. $\delta$ in the case of a real pole on the right of the couple of complex poles.

Fig. 8. $\delta$ in the case of all real poles.
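Theorem 10 reduces the check to one inequality; a quick numerical cross-check (the helper names are ours):

```python
import numpy as np

def delta(d1, d2, d3):
    # δ = 27·d(-d1/3) = 2·d1³ - 9·d1·d2 + 27·d3  (relation (42))
    return 2.0 * d1**3 - 9.0 * d1 * d2 + 27.0 * d3

def real_pole_dominates(d1, d2, d3):
    """True iff the rightmost root of s³ + d1·s² + d2·s + d3 is real."""
    r = np.roots([1.0, d1, d2, d3])
    return abs(r[np.argmax(r.real)].imag) < 1e-9

# d(s) = (s+1)(s² + 3s + 9): δ < 0 and the real pole -1 is the rightmost
print(delta(4.0, 12.0, 9.0))                 # -61.0
print(real_pole_dominates(4.0, 12.0, 9.0))   # True
# d(s) = (s+3)(s² + 0.2s + 1): δ > 0, the complex pair is on the right
print(delta(3.2, 1.6, 3.0) > 0, real_pole_dominates(3.2, 1.6, 3.0))  # True False
```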

**Theorem 11.** Given the control system

$$\mathcal{W}(\mathbf{s}) = \frac{b\hat{\mathbf{k}}\_3}{\mathbf{s}^3 + (a\_1 + b\hat{\mathbf{k}}\_1)\mathbf{s}^2 + (a\_2 + b\hat{\mathbf{k}}\_2)\mathbf{s} + b\hat{\mathbf{k}}\_3}, \quad a\_1 \in \left[a\_1^-, a\_1^+\right], a\_2 \in \left[a\_2^-, a\_2^+\right], b \in \left[b^-, b^+\right], \tag{43}$$

if $\hat{k}_1$, $\hat{k}_2$, $\hat{k}_3$ satisfy the relations:

$$\begin{aligned} & b\hat{k}_3 > 0, \ \forall b \in \left[b^-, b^+\right] \\ & \delta(a_1, a_2^-, b) = 2(a_1 + b\hat{k}_1)^3 - 9(a_1 + b\hat{k}_1)(a_2^- + b\hat{k}_2) + 27b\hat{k}_3 < 0, \ \forall b \in \left[b^-, b^+\right] \ \text{and} \ a_1 = \left\{a_1^-, a_1^+\right\} \end{aligned} \tag{44}$$

then the control system is externally positive $\forall a_1 \in \left[a_1^-, a_1^+\right]$, $\forall a_2 \in \left[a_2^-, a_2^+\right]$ and $\forall b \in \left[b^-, b^+\right]$.



**Proof.** Note that if $\delta(a_1, a_2^-, b) < 0$, $\forall a_1 \in \left[a_1^-, a_1^+\right]$ and $\forall b \in \left[b^-, b^+\right]$, then $\delta(a_1, a_2, b) < 0$, $\forall a_1 \in \left[a_1^-, a_1^+\right]$, $\forall a_2 \in \left[a_2^-, a_2^+\right]$ and $\forall b \in \left[b^-, b^+\right]$. Moreover, if $\delta(a_1^-, a_2^-, b) < 0$ and $\delta(a_1^+, a_2^-, b) < 0$, $\forall b \in \left[b^-, b^+\right]$, then by using Theorem 10 the polynomials

$$\begin{aligned} d^-(\mathbf{s}) &= \mathbf{s}^3 + (a\_1^- + b\hat{\mathbf{k}}\_1)\mathbf{s}^2 + (a\_2^- + b\hat{\mathbf{k}}\_2)\mathbf{s} + b\hat{\mathbf{k}}\_3 \\ d^+(\mathbf{s}) &= \mathbf{s}^3 + (a\_1^+ + b\hat{\mathbf{k}}\_1)\mathbf{s}^2 + (a\_2^- + b\hat{\mathbf{k}}\_2)\mathbf{s} + b\hat{\mathbf{k}}\_3 \end{aligned} \tag{45}$$

$\forall b \in \left[b^-, b^+\right]$, have a dominant real root. By taking into account the root loci with respect to $h$ of the polynomial

$$d(\mathbf{s}) = \mathbf{s}^3 + (a\_1^- + b\hat{\mathbf{k}}\_1)\mathbf{s}^2 + (a\_2^- + b\hat{\mathbf{k}}\_2)\mathbf{s} + b\hat{\mathbf{k}}\_3 + h\mathbf{s}^2 \tag{46}$$

in the two cases of polynomial $d^-(s)$ with all the roots real negative and of polynomial $d^-(s)$ with a real negative root on the right of the remaining complex roots (see Figs. 9, 10), the proof easily follows.

Fig. 9. Root locus of the polynomial (46) in the hypothesis that all the roots of $d^-(s)$ are real.

Fig. 10. Root locus of the polynomial (46) under the hypothesis that $d^-(s)$ has a real negative root on the right of the remaining complex roots.

Finally, from Theorems 7, 9, 11 and by using the Routh criterion the next theorem easily follows.

**Theorem 12.** Given the process (3) with limited uncertainties for $n_c = n + 1 = 3$ and assigned design values of $\hat{K}_v$ and $\hat{\delta}_{r-d}$, choose $\overline{P} = \left\{\overline{p}_1, \overline{p}_2, \overline{p}_3\right\} = \left\{-1, -\alpha + i\omega, -\alpha - i\omega\right\}$, with $\alpha > 1$ and $\omega$ such that the roots of

$$
\overline{n}(s) = \overline{d}(s) - s^3 = \overline{d}_1 s^2 + \overline{d}_2 s + \overline{d}_3, \quad \overline{d}(s) = (s - \overline{p}_1)(s - \overline{p}_2)(s - \overline{p}_3) \tag{47}
$$

are real (e.g. $\alpha = 1.5$ and $\omega \ge 2.598$, $\alpha = 2$ and $\omega \ge 2.544, \dots$). Then, said $\hat{\rho}$ a number not less than

$$
\hat{\rho}_K = \hat{K}_v \frac{\overline{d}_n}{\overline{d}_{n+1}}, \tag{48}
$$

New Robust Tracking and Stabilization Methods

If

it is

δ

δ

1 2 3



for Significant Classes of Uncertain Linear and Nonlinear Systems 261

*b RKK K G s <sup>a</sup> abg s as a RI RI* <sup>+</sup> <sup>=</sup> <sup>=</sup> <sup>=</sup> <sup>=</sup> + +

2 1 2

 ω α

 αω

*aab* + − = − < , max

*<sup>K</sup>* = 1 23

*<sup>K</sup>* = <sup>1</sup>

ρ

Hence the controlled process is externally positive ∀ ∈ *a*<sup>1</sup> [1.8, 2.7] and ∀ ∈*b* [310, 512] . Therefore the overshoot is always null; moreover, said , *<sup>x</sup> <sup>y</sup> r r* the components of the reference trajectory of the controlled robot, the corresponding tracking errors satisfy

Suppose that a tracking goal is to engrave on a big board of size 2 2.5 0.70 × *m* the word INTECH (see Fig. 12 ). In Fig. 11 the time histories of *<sup>x</sup> r* , *<sup>x</sup> r* and, under the hypothesis that <sup>ˆ</sup> <sup>2</sup> *Kv* <sup>=</sup> , the corresponding error *<sup>x</sup> <sup>e</sup>* , in accordance with the proposed results are reported. Clearly the "tracking precision" is unchanged *<sup>x</sup>* ∀*r* with the same maximum value of *<sup>x</sup> r* .

<sup>0</sup> <sup>10</sup> <sup>20</sup> <sup>30</sup> <sup>40</sup> <sup>50</sup> <sup>60</sup> <sup>0</sup>

<sup>0</sup> <sup>10</sup> <sup>20</sup> <sup>30</sup> <sup>40</sup> <sup>50</sup> <sup>60</sup> -0.5

0 10 20 30 40 50 60

time[s]

( ) , , 0, *<sup>a</sup>*

1 2

 *i i* ω

By choosing { } <sup>3</sup> 2 2 *P* =− − + − − + 1, , α

<sup>1</sup> 2.25, 310, *n n a b* = = for <sup>ˆ</sup> <sup>2</sup> *Kv* <sup>=</sup> it is: ˆ 5.547,

 *aab* − − =− < max ( , , ) 1.181e3 0 1 2 *<sup>b</sup>* δ

ρ

 δ*aab aab* − − + − =− < =− < , max

Figs. 13 and 14 show the engraved words for <sup>ˆ</sup> <sup>2</sup> *Kv* <sup>=</sup> and <sup>ˆ</sup> <sup>10</sup> *Kv* <sup>=</sup> , respectively.

max ( , , ) 1.436e5 0, max ( , , ) 1 2 1 2 1.454e5 0 *b b*

max ( , , ) 1.108e3 0, 1 2 *<sup>b</sup>*

relations 2 *x x e r* ≤ and 2 *y y e r* ≤ . For <sup>ˆ</sup> <sup>10</sup> *Kv* <sup>=</sup> it is obtained that: ˆ 27.734,

Hence 10 *x x e r* ≤ and 10 *y y e r* ≤ .

Fig. 11. Time histories of , *x x r r* and *<sup>x</sup> <sup>e</sup>* for <sup>ˆ</sup> <sup>2</sup> *Kv* <sup>=</sup> .

2

*RKK I g* =± =± = ± = ± = ± 2.5 5%, 0.5 5%, 0.01 5%, 0.05 5%, 100 10% *a a* , (54)

1 2 1.8 2.7, 0, 310 512 ≤ ≤ = ≤≤ *aa b* . (55)

τ

ˆ ˆˆ *k kk* = 0.165, =6.877, =68.771 ,

τ

*k* = 0.0271, <sup>2</sup>

≤ 383*ms* .

≤ 75.3*ms* .

ˆ

*a*

, 1.5 *a* = − and 2.598

ˆ

*k* =0.275, <sup>3</sup>

. (53)

ω= ,

*k* = 0.550 ,

r x

r x

.

ex

ˆ

such that:

$$\begin{aligned} \mathcal{S}(a\_1, a\_2^-, b) &= \mathcal{Z}(a\_1 + b\hat{k}\_1)^3 - \mathcal{Y}(a\_1 + b\hat{k}\_1)(\{a\_2^- + b\hat{k}\_2\}) + 27b\hat{k}\_3 < 0\\ \forall b \in \left[b^-, b^+\right] &\text{ and } \ a\_1 = \{a\_1^-, a\_1^+\} \end{aligned} \tag{49}$$

$$a\_1^- + b^- \hat{k}\_1 > 0, \quad \hat{k}\_1 \hat{k}\_2 b^2 + b(\hat{k}\_1 + \hat{k}\_2 - \hat{k}\_3) + a\_1^- a\_2^- > 0,\ \forall b \in \left[b^-, b^+\right].\tag{50}$$

where

$$
\hat{k}\_1 = \frac{\rho \overline{d}\_\circ - a\_\circ^-}{b^-}, \quad \hat{k}\_2 = \frac{\rho^2 \overline{d}\_\circ - a\_\circ^+}{b^-}, \quad \hat{k}\_3 = \frac{\rho^3 \overline{d}\_\circ}{b^-} \tag{51}
$$

under the hypothesis that the initial state of the control system of Fig. 2, with 1 3 *n n <sup>c</sup>* =+= and <sup>ˆ</sup> *i i k k* <sup>=</sup> , is null and that *r d* (0) (0) 0 <sup>−</sup> <sup>=</sup> , the error *e t*( ) of the control system of Fig. 2, considering all the possible values of the process, satisfies relation

$$\left| \left| \boldsymbol{\varepsilon}(t) \right| \leq \frac{1}{\hat{K}\_{\boldsymbol{\varepsilon}}} \hat{\boldsymbol{\delta}}\_{\boldsymbol{\varepsilon} \rightarrow l'} \quad \forall t \geq 0, \quad \forall r(t), \ d(t) \text{ : } \boldsymbol{\delta}\_{\boldsymbol{\varepsilon} \rightarrow l} = \max\_{\boldsymbol{\sigma} \in [0, 1]} \left| \dot{r}(\boldsymbol{\sigma}) - \dot{\boldsymbol{d}}(\boldsymbol{\sigma}) \right| \leq \hat{\boldsymbol{\delta}}\_{\boldsymbol{\varepsilon} \rightarrow l} \,. \tag{52}$$

Note that, by applying the Routh conditions (50) to the polynomial *d s*( ) −αˆ , max αˆ = 1 τˆ , instead of to *d s*( ) , it is possible to satisfy also the specification about max τ ; so the specifications *2.* are all satisfied.

**Remark 8.** Give the process (3) with limited uncertainties and assigne the design values of ˆ *Kv* , max τˆ and of ˆ *r d* δ <sup>−</sup> ; if 1 2, 3,4 *n n <sup>c</sup>* = + = , by choosing *P* in accordance with Theorem 9, a controller such that, for all the possible values of the parameters of the process, max max τ ≤τˆ and the error *e t*( ) satisfies relation (2), can be obtained by increasing, if necessary, iteratively ρˆ starting from the value of 1 ˆ ˆρ *K vn n Kd d* = <sup>+</sup> with the help of the command *roots* and with the *Monte Carlo* method.

According to this, note that for 4 *nc* ≤ the control system of Fig. 2 (for an assigned set of parameters) is externally positive and max max τ ≤τˆ if, denoting with *<sup>j</sup> p* the root of *d s*( ) having the maximum real part, imag()0 *<sup>j</sup> p* = and max real( ) 1 ˆ *<sup>j</sup> p* ≤ − τ.

*Note that the proposed design method, by taking into account Theorem 8, can be easily extended in the case of* 4 *nc* ≥ *.* 

**Example 1.** Consider a planar robot (e.g. a plotter) whose end-effector must plot dashed linear and continuous lines with constant velocities during each line.

Under the hypothesis that each activation system is an electric DC motor (with inertial load, possible resistance in series and negligble inductance of armature) powered by using a power amplifier, the model of the process turns out to be

$$\mathbf{G(s)} = \frac{b}{s^2 + a\_i s + a\_2}, \quad a\_i = \frac{RK\_s + K^2}{RI}, \; a\_z = 0, \; b = \frac{K}{RI}\mathbf{g}\_s. \tag{53}$$

260 Recent Advances in Robust Control – Novel Approaches and Design Methods

**Remark 8.** Given the process (3) with limited uncertainties and assigned design values of $\hat{K}_v$ and $\hat{\tau}$, a controller such that, for all the possible values of the parameters of the process, the error $e(t)$ satisfies relation (2) can be obtained by iteratively increasing, if necessary, $\hat{\rho}$ starting from the value of $\hat{\rho}_1$; denoting with $\hat{p}_j$ the root of $\hat{d}(s)$ having the maximum real part, it must be verified, e.g. with the help of the command *roots* and with the *Monte Carlo* method, that $\mathrm{imag}(\hat{p}_j) = 0$ and $\max \mathrm{real}(\hat{p}_j) \le -1/\hat{\tau}$. Moreover, by applying the Routh conditions (50), it turns out that for $n_c \le 4$ the control system of Fig. 2 (for an assigned set of parameters) is externally positive; if $n = n_c + 1 = 3, 4$, by choosing $P$ in accordance with Theorem 9, the specifications *2.* are all satisfied.

If

$$R = 2.5 \pm 5\%, \; K = 0.5 \pm 5\%, \; K_s = 0.01 \pm 5\%, \; I = 0.05 \pm 5\%, \; g_s = 100 \pm 10\%, \tag{54}$$

it is

$$1.8 \le a_1 \le 2.7, \quad a_2 = 0, \quad 310 \le b \le 512. \tag{55}$$
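As a quick check, the intervals (55) can be re-derived numerically from the model (53) and the data (54); the sketch below (illustrative, standard library only, with a hypothetical helper `coeffs`) samples the motor parameters within their tolerances and verifies that $a_1$ and $b$ stay inside the stated bounds.

```python
import random

def coeffs(R, K, Ks, I, gs):
    # model (53): a1 = (R*Ks + K^2)/(R*I), a2 = 0, b = K*gs/(R*I)
    a1 = (R * Ks + K**2) / (R * I)
    b = K * gs / (R * I)
    return a1, b

a1_nom, b_nom = coeffs(2.5, 0.5, 0.01, 0.05, 100.0)  # nominal data (54)

random.seed(0)
lo_a1, hi_a1, lo_b, hi_b = a1_nom, a1_nom, b_nom, b_nom
for _ in range(20000):
    a1, b = coeffs(2.5 * random.uniform(0.95, 1.05),
                   0.5 * random.uniform(0.95, 1.05),
                   0.01 * random.uniform(0.95, 1.05),
                   0.05 * random.uniform(0.95, 1.05),
                   100.0 * random.uniform(0.90, 1.10))
    lo_a1, hi_a1 = min(lo_a1, a1), max(hi_a1, a1)
    lo_b, hi_b = min(lo_b, b), max(hi_b, b)

# every sampled model stays inside the intervals (55)
assert 1.8 <= lo_a1 and hi_a1 <= 2.7
assert 310 <= lo_b and hi_b <= 512
```

The nominal values fall near the middle of the intervals ($a_1 = 2.2$, $b = 400$), while the extreme combinations of the tolerances approach the bounds in (55).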

By choosing $P = \{-1,\; -\alpha + i\omega,\; -\alpha - i\omega\}$, with $\alpha = 1.5$ and $\omega = 2.598$, $a_{1n} = 2.25$, $b_n = 310$, for $\hat{K}_v = 2$ it is: $\hat{\rho}_K = 5.547$, $\hat{k}_1 = 0.0271$, $\hat{k}_2 = 0.275$, $\hat{k}_3 = 0.550$, $\max \delta_b^-(a_1, a_2, b) = -1.108\mathrm{e}3 < 0$, $\max \delta_b^+(a_1, a_2, b) = -1.181\mathrm{e}3 < 0$, $\max \tau \le 383\; ms$.

Hence the controlled process is externally positive $\forall a_1 \in [1.8,\, 2.7]$ and $\forall b \in [310,\, 512]$. Therefore the overshoot is always null; moreover, denoting with $r_x,\, r_y$ the components of the reference trajectory of the controlled robot, the corresponding tracking errors satisfy the relations $e_x \le \dot{r}_x/2$ and $e_y \le \dot{r}_y/2$.

For $\hat{K}_v = 10$ it is obtained that: $\hat{\rho}_K = 27.734$, $\hat{k}_1 = 0.165$, $\hat{k}_2 = 6.877$, $\hat{k}_3 = 68.771$, $\max \delta_b^-(a_1, a_2, b) = -1.436\mathrm{e}5 < 0$, $\max \delta_b^+(a_1, a_2, b) = -1.454\mathrm{e}5 < 0$, $\max \tau \le 75.3\; ms$.

Hence $e_x \le \dot{r}_x/10$ and $e_y \le \dot{r}_y/10$.

Suppose that a tracking goal is to engrave, on a big board of size $2.5 \times 0.70\; m^2$, the word INTECH (see Fig. 12). In Fig. 11 the time histories of $r_x$, $\dot{r}_x$ and, under the hypothesis that $\hat{K}_v = 2$, of the corresponding error $e_x$, in accordance with the proposed results, are reported. Clearly the "tracking precision" is unchanged $\forall r_x$ with the same maximum value of $\dot{r}_x$. Figs. 13 and 14 show the engraved words for $\hat{K}_v = 2$ and $\hat{K}_v = 10$, respectively.

Fig. 11. Time histories of $r_x$, $\dot{r}_x$ and $e_x$ for $\hat{K}_v = 2$.

New Robust Tracking and Stabilization Methods for Significant Classes of Uncertain Linear and Nonlinear Systems

Fig. 12. The desired "word".

Fig. 13. The engraved word with $\hat{K}_v = 2$.

Fig. 14. The engraved word with $\hat{K}_v = 10$.

### **Part II**

### **5. Problem formulation and preliminary results**

Now consider the following class of nonlinear dynamic systems

$$\ddot{y} = F_1(y, \dot{y}, p)\, u + F_2(y, \dot{y}, p)\, \dot{y} + f(t, y, \dot{y}, p), \quad F_2(y, \dot{y}, p) = \sum_{i=1}^{m} F_{2i}(y, \dot{y}, p)\, \dot{y}_i, \tag{56}$$

where $t \in R$ is the time, $y \in R^m$ is the output, $u \in R^r$ is the control input, $p \in \wp \subset R^{\mu}$ is the uncertain parametric vector of the system, with $\wp$ a compact set, $F_1 \in R^{m \times r}$ is a limited matrix with rank $m$, $F_{2i} \in R^{m \times m}$ are limited matrices and $f \in R^m$ is a limited vector which models possible disturbances and/or particular nonlinearities of the system.

In the following it is supposed that there exists at least a matrix $K(y, \dot{y}) \in R^{r \times m}$ such that the matrix $H = F_1 K$ is positive definite (*p.d.*) $\forall p \in \wp$.

**Remark 9.** It is important to note that the class of systems (56) includes the one, very important from a practical point of view, of the articulated mechanical systems (mechanical structures, also flexible ones, robots, …). Indeed it is well-known that mechanical systems can be described as follows

$$B\ddot{q} = c + g + Tu, \tag{57}$$

where: $B$ is the inertia matrix, $q$ is the vector of the Lagrangian coordinates, $c$ is the vector of the centrifugal forces, the Coriolis and friction ones, $g$ is the vector of the gravitational forces and of the external disturbances, $T$ is the input matrix, $u$ is the vector of the control actions, and $p \in \wp \subset R^{\mu}$, with $\wp$ a compact set, is the vector of the uncertain parameters of the mechanical system.



If system (56) is controlled by using the following state feedback control law with a partial compensation

$$u = -K\left( K_p y + K_d \dot{y} \right) - u_c, \tag{58}$$

where $K_p, K_d \in R^{m \times m}$ are constant matrices, $K \in R^{r \times m}$ is a matrix depending in general on $t, y, \dot{y}$, and $u_c$ is the partial compensation signal, the closed-loop system is

$$\dot{x} = \begin{bmatrix} 0 & I \\ -HK_p & -HK_d \end{bmatrix} x + \left( \sum_{i=1}^{m} \begin{bmatrix} 0 & 0 \\ 0 & F_{2i} \end{bmatrix} x_{m+i-1} \right) x + \begin{bmatrix} 0 \\ I \end{bmatrix} w = A_1 x + \left( \sum_{i=1}^{m} A_{2i}\, x_{m+i-1} \right) x + B w,$$

where $H = F_1 K$, $w = f - F_1 u_c$,

$$y = \begin{bmatrix} I & 0 \end{bmatrix} x = C x, \quad \dot{y} = \begin{bmatrix} 0 & I \end{bmatrix} x = C_{\dot{y}}\, x. \tag{59}$$
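For concreteness, the matrices of (59) can be assembled numerically; the sketch below (illustrative values for $F_1$, $F_{21}$, $K$, $K_p$, $K_d$, not taken from the chapter, with $m = 1$) builds $A_1$, $A_{21}$, $B$ and $C$ and checks that $H = F_1 K$ is p.d. and that the linear part $A_1$ is stable.

```python
import numpy as np

m = 1
F1 = np.array([[2.0]])   # F1(y, ydot, p), here taken constant
F21 = np.array([[0.3]])  # F21(y, ydot, p)
K = np.array([[1.0]])    # chosen so that H = F1 K is p.d.
Kp = 4.0 * np.eye(m)
Kd = 2.0 * np.eye(m)

H = F1 @ K
A1 = np.block([[np.zeros((m, m)), np.eye(m)],
               [-H @ Kp, -H @ Kd]])
A21 = np.block([[np.zeros((m, m)), np.zeros((m, m))],
                [np.zeros((m, m)), F21]])
B = np.vstack([np.zeros((m, m)), np.eye(m)])
C = np.hstack([np.eye(m), np.zeros((m, m))])

assert np.all(np.linalg.eigvalsh(H + H.T) > 0)  # H positive definite
assert np.all(np.linalg.eigvals(A1).real < 0)   # linear part A1 stable
```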

In order to develop a practical robust stabilization method for system (59) the following definition and results are necessary.

**Definition 2.** Give system (59) and a symmetric *p.d.* matrix $P \in R^{n \times n}$. A positive first-order system $\dot{\rho} = f(\rho, d)$, $v = \eta(\rho)$, $v_p = \eta_p(\rho)$, where $\rho = \|x\|_P = \sqrt{x^T P x}$ and $d = \max \|w\|$, such that $\|y\| \le v$, $\|\dot{y}\| \le v_p$, is said to be a *majorant system* of system (59).

**Theorem 13**. Consider the quadratic system

$$\dot{\rho} = \alpha_1 \rho + \alpha_2 \rho^2 + \beta d = \alpha_2 \rho^2 + \alpha_1 \rho + \alpha_0, \quad \alpha_1 < 0, \; \alpha_2, \beta \ge 0, \quad \rho(0) = \rho_0 \ge 0, \quad d \ge 0. \tag{60}$$

If $\alpha_1^2 - 4 \alpha_2 \beta d > 0$ it is:

$$\rho(t) = \frac{\rho_1 - \rho_2 \varphi(t)}{1 - \varphi(t)}, \text{ where } \varphi(t) = \frac{\rho_0 - \rho_1}{\rho_0 - \rho_2}\, e^{-t/\tau}, \; \tau = \frac{1}{\alpha_2 (\rho_2 - \rho_1)}, \; \lim_{t \to \infty} \rho(t) \le \rho_1, \; \forall \rho_0 < \rho_2, \tag{61}$$



where $\rho_1, \rho_2$, $\rho_1 < \rho_2$, are the roots of the algebraic equation $\alpha_2 \rho^2 + \alpha_1 \rho + \alpha_0 = 0$ (see Fig. 15). Moreover, for $d = 0$ the practical convergence time $t_{5\%}$, $\rho(t_{5\%}) = 5\%\, \rho_0$, is given by (see Fig. 16):

$$t_{5\%} = \gamma \tau_l, \quad \tau_l = -1/\alpha_1, \quad \gamma = \ln \frac{20 - \rho_0/\rho_{20}}{1 - \rho_0/\rho_{20}}, \quad \rho_{20} = -\alpha_1/\alpha_2, \tag{62}$$

in which $\tau_l$ is the time constant of the linearization of system (60) and $\rho_{20}$ is the upper bound of the convergence interval of $\rho(t)$ for $d = 0$, i.e. of system (60) in free evolution.

Fig. 15. Graphical representation of system (60).

**Proof.** The proof of (61) easily follows by solving, with the method of separation of variables, the equation $d\rho/dt = -\alpha_2 (\rho_1 - \rho)(\rho - \rho_2)$, and from Fig. 15. Instead, (62) easily derives by noting that the solution of (60) for $d = 0$ is

$$\frac{\rho(t)}{\rho_0} = \frac{\rho_{20}}{\rho_0} \frac{1}{1 + \left( \dfrac{\rho_{20}}{\rho_0} - 1 \right) e^{t/\tau_l}}. \tag{63}$$

Fig. 16. Time history of $\rho$, and $\gamma$ as a function of $\rho_0/\rho_{20}$.
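Theorem 13 can be verified numerically; the sketch below (illustrative coefficients $\alpha_1 = -1$, $\alpha_2 = 0.5$, $d = 0$, not from the chapter) integrates (60) by forward Euler and compares the result with the closed form (63) and with the $5\%$ convergence time (62).

```python
import math

alpha1, alpha2, rho0 = -1.0, 0.5, 1.0
rho20 = -alpha1 / alpha2   # upper bound of the convergence interval
tau_l = -1.0 / alpha1      # time constant of the linearized system

def rho_exact(t):
    # closed form (63)
    return rho20 / (1.0 + (rho20 / rho0 - 1.0) * math.exp(t / tau_l))

# forward-Euler integration of rho' = alpha1*rho + alpha2*rho**2
dt, t, rho = 1e-4, 0.0, rho0
while t < 3.0:
    rho += dt * (alpha1 * rho + alpha2 * rho**2)
    t += dt
assert abs(rho - rho_exact(t)) < 1e-3

# convergence time (62): rho(t5) = 5% of rho0
gamma = math.log((20.0 - rho0 / rho20) / (1.0 - rho0 / rho20))
t5 = gamma * tau_l
assert abs(rho_exact(t5) - 0.05 * rho0) < 1e-9
```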

**Theorem 14.** Give a matrix $P \in R^{n \times n}$ *p.d.* and a matrix $C \in R^{m \times n}$ with rank $m$. If $\|x\|_P \le \rho$, then the smallest $\alpha$ such that $\|v\| \le \alpha \|x\|_P \le \alpha \rho$, where $v = Cx$, turns out to be $\alpha = \sqrt{\lambda_{\max}\left( C P^{-1} C^T \right)}$.

**Proof.** The proof is standard.

**Theorem 15**. Let

$$A = \sum\_{i\_1, i\_2, \dots, i\_\mu \in \{0, 1\}} A\_{i\_1 i\_2 \dots i\_\mu} g\_1(\pi\_1)^{i\_1} g\_2(\pi\_2)^{i\_2} \dots g\_\mu(\pi\_\mu)^{i\_\mu} \in \mathbb{R}^{m \times m} \tag{64}$$

be a symmetric matrix, where


$$\begin{bmatrix} \pi_1 & \pi_2 & \dots & \pi_\mu \end{bmatrix}^T = \pi \in \Pi = \left\{ \pi \in R^\mu : \pi^- \le \pi \le \pi^+ \right\}, \tag{65}$$

and each function $g_i$, $i = 1, \dots, \mu$, is continuous with respect to its argument, and $P \in R^{n \times n}$ a symmetric *p.d.* matrix. Then the minimum (maximum) of $\lambda_{\min}(Q P^{-1})$ $\left( \lambda_{\max}(Q P^{-1}) \right)$ with respect to $\pi \in \Pi$, where $Q := -(A^T P + P A)$, is assumed in one of the $2^\mu$ vertices of $\Gamma$, in which

$$\Gamma = \left\{ \gamma \in R^\mu : \min \begin{bmatrix} g_1 & \dots & g_\mu \end{bmatrix}^T \le \gamma \le \max \begin{bmatrix} g_1 & \dots & g_\mu \end{bmatrix}^T \right\}. \tag{66}$$

**Proof.** The proof can be found in (Celentano, 2010).
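The vertex property of Theorem 15 can be illustrated numerically; the sketch below (hypothetical $2 \times 2$ data with an affine dependence on a single parameter $\pi \in [-1, 1]$ and $P = I$, so $\lambda_{\min}(Q)$ is concave in $\pi$) checks that the minimum over the interval is attained at an endpoint.

```python
import numpy as np

A0 = np.array([[-2.0, 0.5], [0.0, -1.0]])
Ap = np.array([[0.3, -0.2], [0.4, -0.5]])

def lam_min(pi):
    A = A0 + pi * Ap           # affine dependence on the parameter
    Q = -(A.T + A)             # Q = -(A^T P + P A) with P = I
    return np.linalg.eigvalsh(Q).min()

vertex_min = min(lam_min(-1.0), lam_min(1.0))
interior = [lam_min(p) for p in np.linspace(-1.0, 1.0, 201)]
# no interior value falls below the best vertex value
assert min(interior) >= vertex_min - 1e-12
```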

### **6. Main results**

Now the following main result, which provides a majorant system of the considered control system, is stated.

**Theorem 16**. Give a symmetric *p.d.* matrix $P \in R^{n \times n}$. Then a majorant system of the system (59) is

$$\dot{\rho} = \alpha_1 \rho + \alpha_2 \rho^2 + \beta d, \quad v = c\, \rho, \quad v_p = c_p\, \rho, \tag{67}$$

in which:

$$\alpha_1 = -\min_{x \in C_{P,\rho},\; p \in \wp} \frac{\lambda_{\min}\left( Q_1 P^{-1} \right)}{2}, \quad Q_1 = -\left( A_1^T P + P A_1 \right), \tag{68}$$

$$\alpha_2 = -\min_{x \in C_{P,1},\; p \in \wp} \frac{\lambda_{\min}\left( \sum_{i=1}^{m} Q_{2i} P^{-1} x_{m+i-1} \right)}{2}, \quad Q_{2i} = -\left( A_{2i}^T P + P A_{2i} \right), \tag{69}$$

$$\beta = \sqrt{\lambda_{\max}\left( B^T P B \right)}, \quad c = \sqrt{\lambda_{\max}\left( C P^{-1} C^T \right)}, \quad c_p = \sqrt{\lambda_{\max}\left( C_{\dot{y}} P^{-1} C_{\dot{y}}^T \right)}, \quad d = \max_{t \in R,\; x \in C_{P,\rho},\; p \in \wp} \|w\|, \tag{70}$$

where $C_{P,\rho} = \left\{ x : x^T P x = \rho^2 \right\}$.

**Proof.** By choosing as "Lyapunov function" the quadratic form $V = x^T P x = \|x\|_P^2 = \rho^2$, for $x$ belonging to a generic hyper-ellipse $C_{P,\rho}$, it is


$$\dot{\rho} \le -\min_{x \in C_{P,\rho},\; p \in \wp} \frac{x^T Q_1 x}{2 x^T P x}\, \rho - \min_{x \in C_{P,1},\; p \in \wp} \frac{x^T \left( \sum_{i=1}^{m} Q_{2i} x_{m+i-1} \right) x}{2 x^T P x}\, \rho^2 + \max_{t,\; x \in C_{P,\rho},\; p \in \wp} \frac{x^T P B w}{\|x\|_P}. \tag{71}$$

The proof easily follows from (71).

It is valid the following important "*non-interaction*" theorem.

**Theorem 17.** If in Theorem 16 it is

$$K_p = I a^2, \quad K_d = \sqrt{2}\, I a, \quad P = \begin{bmatrix} \sqrt{2}\, a I & I \\ I & \frac{\sqrt{2}}{a} I \end{bmatrix}, \quad a > 0, \tag{72}$$

then:

$$\alpha_1 = -\min_{x \in C_{P,\rho},\; p \in \wp} \frac{\lambda_{\min}\left( Q_1 P^{-1} \right)}{2} = \begin{cases} -\dfrac{a}{\sqrt{2}} \left[ \lambda_{\min}\left( H^T + H \right) - 1 \right], & \text{if } \lambda_{\min}\left( H^T + H \right) < 2 \\[2mm] -\dfrac{a}{\sqrt{2}}, & \text{if } \lambda_{\min}\left( H^T + H \right) \ge 2, \end{cases} \tag{73}$$

$$\alpha_2 = -\min_{x \in C_{P,1},\; p \in \wp} \frac{\lambda_{\min}\left( \sum_{i=1}^{m} \begin{bmatrix} F_{2i} & -\sqrt{2}\, F_{2i} \\ \sqrt{2}\, F_{2i} & -2 F_{2i} - F_{2i}^T \end{bmatrix} x_{m+i-1} \right)}{2}, \tag{74}$$

$$\beta = \frac{\sqrt[4]{2}}{\sqrt{a}}, \quad c = \frac{\sqrt[4]{2}}{\sqrt{a}}, \quad c_p = \sqrt[4]{2}\, \sqrt{a}. \tag{75}$$
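A small numerical cross-check of Theorem 17 (scalar case $m = 1$, illustrative values $H = 1$, $a = 2$, not from the chapter): the coefficients computed directly from the definitions (68) and (70) must agree with the closed forms (73) and (75).

```python
import numpy as np

a = 2.0
H = np.array([[1.0]])
m = 1
A1 = np.block([[np.zeros((m, m)), np.eye(m)],
               [-a**2 * H, -np.sqrt(2) * a * H]])
P = np.block([[np.sqrt(2) * a * np.eye(m), np.eye(m)],
              [np.eye(m), np.sqrt(2) / a * np.eye(m)]])
Pinv = np.linalg.inv(P)

# alpha1 from the definition (68) ...
Q1 = -(A1.T @ P + P @ A1)
alpha1 = -np.linalg.eigvals(Q1 @ Pinv).real.min() / 2.0
# ... must match (73): here lambda_min(H^T + H) = 2 >= 2
assert abs(alpha1 - (-a / np.sqrt(2))) < 1e-9

# beta, c, c_p from (70) versus the closed forms (75)
B = np.vstack([np.zeros((m, m)), np.eye(m)])
C = np.hstack([np.eye(m), np.zeros((m, m))])
Cy = np.hstack([np.zeros((m, m)), np.eye(m)])
beta = np.sqrt(np.linalg.eigvals(B.T @ P @ B).real.max())
c = np.sqrt(np.linalg.eigvals(C @ Pinv @ C.T).real.max())
cp = np.sqrt(np.linalg.eigvals(Cy @ Pinv @ Cy.T).real.max())
assert abs(beta - 2**0.25 / np.sqrt(a)) < 1e-9
assert abs(c - 2**0.25 / np.sqrt(a)) < 1e-9
assert abs(cp - 2**0.25 * np.sqrt(a)) < 1e-9
```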

**Proof.** First note that, by making the change of variable $z = T^{-1} x$, with $T$ such that $z = \begin{bmatrix} y_1 & \dot{y}_1 & y_2 & \dot{y}_2 & \dots & y_m & \dot{y}_m \end{bmatrix}^T$, the matrix $\hat{P} = T^T P T$ is block-diagonal with blocks on the principal diagonal equal to

$$
\hat{P}\_{\vec{\imath}} = \begin{bmatrix}
\sqrt{2}a & 1 \\
1 & \frac{\sqrt{2}}{a}
\end{bmatrix}.
\tag{76}
$$

Since $\hat{P}_{ii}$ is *p.d.* $\forall a > 0$, it follows that $\hat{P}$ is *p.d.* and, therefore, also $P$ is *p.d.* Now note that

$$
\begin{bmatrix}
\sqrt{2}aI & I \\
 I & \frac{\sqrt{2}}{a}I
\end{bmatrix}
\begin{bmatrix}
\frac{\sqrt{2}}{a}I & -I \\
 -I & \sqrt{2}aI
\end{bmatrix} = \begin{bmatrix}
I & 0 \\
0 & I
\end{bmatrix};
\tag{77}
$$

hence

$$P^{-1} = \begin{bmatrix} \frac{\sqrt{2}}{a} I & -I \\ -I & \sqrt{2}\, a I \end{bmatrix}. \tag{78}$$

Then it is


$$\begin{aligned} Q_1 P^{-1} &= -\left( A_1^T P + P A_1 \right) P^{-1} = -A_1^T - P A_1 P^{-1} = \\ &= \begin{bmatrix} 0 & a^2 H^T \\ -I & \sqrt{2}\, a H^T \end{bmatrix} + \begin{bmatrix} \sqrt{2}\, a I & I \\ I & \frac{\sqrt{2}}{a} I \end{bmatrix} \begin{bmatrix} 0 & -I \\ a^2 H & \sqrt{2}\, a H \end{bmatrix} \begin{bmatrix} \frac{\sqrt{2}}{a} I & -I \\ -I & \sqrt{2}\, a I \end{bmatrix} = \\ &= \begin{bmatrix} 0 & a^2 H^T \\ -I & \sqrt{2}\, a H^T \end{bmatrix} + \begin{bmatrix} \sqrt{2}\, a I & I \\ I & \frac{\sqrt{2}}{a} I \end{bmatrix} \begin{bmatrix} I & -\sqrt{2}\, a I \\ 0 & a^2 H \end{bmatrix} = \\ &= \begin{bmatrix} 0 & a^2 H^T \\ -I & \sqrt{2}\, a H^T \end{bmatrix} + \begin{bmatrix} \sqrt{2}\, a I & a^2 (H - 2I) \\ I & \sqrt{2}\, a (H - I) \end{bmatrix} = \begin{bmatrix} \sqrt{2}\, a I & a^2 \left( H^T + H - 2I \right) \\ 0 & \sqrt{2}\, a \left( H^T + H - I \right) \end{bmatrix}. \end{aligned} \tag{79}$$

Therefore

$$\lambda\left( Q_1 P^{-1} \right) = \lambda\left( \sqrt{2}\, a I \right) \cup \lambda\left( \sqrt{2}\, a \left( H^T + H - I \right) \right), \tag{80}$$

from which (73) easily follows.

In order to prove (74) note that, if *T* is a symmetric nonsingular matrix, it is

$$\lambda\left( Q_{2i} P^{-1} \right) = \lambda\left( T Q_{2i} P^{-1} T^{-1} \right) = \lambda\left( -\left( \hat{A}_{2i}^T \hat{P} + \hat{P} \hat{A}_{2i} \right) \hat{P}^{-1} \right), \quad \hat{A}_{2i} = T A_{2i} T^{-1}, \; \hat{P} = T^{-1} P T^{-1}. \tag{81}$$

By choosing the matrix $T = \begin{bmatrix} a I & 0 \\ 0 & I \end{bmatrix}$ it is:

$$\hat{A}_{2i} = \begin{bmatrix} a I & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} 0 & 0 \\ 0 & F_{2i} \end{bmatrix} \begin{bmatrix} \frac{1}{a} I & 0 \\ 0 & I \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & F_{2i} \end{bmatrix}, \quad \hat{P} = \begin{bmatrix} \frac{1}{a} I & 0 \\ 0 & I \end{bmatrix} \begin{bmatrix} \sqrt{2}\, a I & I \\ I & \frac{\sqrt{2}}{a} I \end{bmatrix} \begin{bmatrix} \frac{1}{a} I & 0 \\ 0 & I \end{bmatrix} = \frac{1}{a} \begin{bmatrix} \sqrt{2}\, I & I \\ I & \sqrt{2}\, I \end{bmatrix}, \tag{82}$$

$$\hat{P}^{-1} = a \begin{bmatrix} \sqrt{2}\, I & -I \\ -I & \sqrt{2}\, I \end{bmatrix}, \quad -\left( \hat{A}_{2i}^T \hat{P} + \hat{P} \hat{A}_{2i} \right) \hat{P}^{-1} = \begin{bmatrix} F_{2i} & -\sqrt{2}\, F_{2i} \\ \sqrt{2}\, F_{2i} & -2 F_{2i} - F_{2i}^T \end{bmatrix}.$$

From (69), (81) and (82) the relation (74) easily follows.

Relations (75) easily follow from the third of (59), from (70), from the third of (72) and by considering (78).
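The similarity computation in (81)-(82) can be checked numerically for a scalar block; the sketch below (illustrative values $a = 2$, $F_{21} = 0.7$, $m = 1$, not from the chapter) verifies that $-(\hat{A}_{2i}^T \hat{P} + \hat{P} \hat{A}_{2i}) \hat{P}^{-1}$ has the stated block structure.

```python
import numpy as np

a, f = 2.0, 0.7  # f plays the role of the (scalar) block F21
A2i_hat = np.array([[0.0, 0.0],
                    [0.0, f]])
P_hat = (1.0 / a) * np.array([[np.sqrt(2), 1.0],
                              [1.0, np.sqrt(2)]])
M = -(A2i_hat.T @ P_hat + P_hat @ A2i_hat) @ np.linalg.inv(P_hat)
expected = np.array([[f, -np.sqrt(2) * f],
                     [np.sqrt(2) * f, -2.0 * f - f]])  # -2*F21 - F21^T
assert np.allclose(M, expected)
```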

**Remark 10**. It is easy to note that the values of $c$ and $c_p$ provided by (75) are the same if, instead of $y$ and $\dot{y}$, their components $y_i$ and $\dot{y}_i$ are considered.

Now the main result can be stated. It allows determining the control law which guarantees prefixed majorant values of the time constant $\tau = 1/\left( \alpha_2 (\rho_2 - \rho_1) \right)$ related to $\varphi(t)$ and of the time constant $\tau_l = -1/\alpha_1$ of the linearized majorant system, and prefixed majorant values of the "steady-state" ones of $y_i$ and $\dot{y}_i$.

**Theorem 18.** If system (56) is controlled by using the control law

$$u = -K\left( a^2 y + \sqrt{2}\, a \dot{y} \right) - u_c, \tag{83}$$


with $K$, $a$, $u_{c}$ such that

$$\lambda_{\min}\left(K^{T}F_{1}^{T} + F_{1}K\right) \ge 2, \qquad a \ge 2.463\sqrt[5]{\alpha_{2}^{2}d^{2}}, \tag{84}$$

where

$$d = \max_{t \in R,\; x \in C_{2\rho_{0}},\; p \in \wp} \left\| f - F_{1}u_{c} \right\|, \qquad \alpha_{2} = -\min_{x \in C_{2\rho_{0}},\; p \in \wp} \frac{1}{2}\,\lambda_{\min}\!\left( \sum_{i=1}^{m} \begin{bmatrix} A_{2,i} & -\sqrt{2}\,A_{2,i} \\ \sqrt{2}\,A_{2,i} & A_{2,i} \end{bmatrix} x_{i} \right) \tag{85}$$
 
$$P = \begin{bmatrix} \sqrt{2}\,a\,I & I \\ I & \dfrac{\sqrt{2}}{a}\,I \end{bmatrix},$$

and, denoting by $\rho_{1}$, $\rho_{2}$ the roots of the equation

$$\alpha_{2}\rho^{2} - \frac{a}{\sqrt{2}}\,\rho + \frac{\sqrt[4]{2}}{\sqrt{a}}\,d = 0\,, \tag{86}$$

then, for every $x_{0}$ such that $\rho_{0} = \sqrt{x_{0}^{T}Px_{0}} < \rho_{2}$, it is:

$$\left|y_{i}(t)\right| \le \frac{\sqrt[4]{2}}{\sqrt{a}}\,\rho(t), \quad \left|\dot{y}_{i}(t)\right| \le \sqrt[4]{2}\,\sqrt{a}\,\rho(t), \quad \rho(t) = \frac{\rho_{1} - \rho_{2}\,\varphi(t)}{1 - \varphi(t)}, \quad \text{where}\ \ \varphi(t) = \frac{\rho_{0} - \rho_{1}}{\rho_{0} - \rho_{2}}\,e^{-t/\tau}, \quad \tau = \frac{1}{\alpha_{2}\left(\rho_{2} - \rho_{1}\right)}, \tag{87}$$

with time constant $\tau_{l} = \sqrt{2}/a$; moreover, for $a$ big enough such that $\rho_{1} \ll \rho_{2}$, it is:

$$\lim_{t \to \infty}\left|y_{i}(t)\right| \le \frac{\sqrt[4]{2}}{\sqrt{a}}\,\rho_{1} \cong \frac{2}{a^{2}}\,d, \qquad \lim_{t \to \infty}\left|\dot{y}_{i}(t)\right| \le \sqrt[4]{2}\,\sqrt{a}\,\rho_{1} \cong \frac{2}{a}\,d, \qquad \tau \cong \frac{\sqrt{2}}{a} = \tau_{l}, \tag{88}$$

**Proof.** The proof of (87) follows from Theorems 13, 16 and 17. The proof of (88) derives from the fact that, if $\rho_{1} \ll \rho_{2}$, it is $\rho_{1} \cong \dfrac{\sqrt{2}\,\sqrt[4]{2}\,d}{a\sqrt{a}}$, $\rho_{2} \cong \dfrac{a}{\sqrt{2}\,\alpha_{2}}$.

**Remark 11**. As regards the determination of $K$ in order to satisfy the first of (84), the computation of $u_{c}$ to decrease $d$, and the computation of $\alpha_{2}$ and $d$, for limitation of pages it is only noted that, for mechanical systems, being $F_{1} = B^{-1}$ and taking into account that the inertia matrix $B$ is symmetric and p.d. $\forall p \in \wp$ and $\forall y,\, q \in R^{m}$, under the hypothesis that $T = I$ it can be chosen $K = kI$, with $k \ge \lambda_{\max}(B)$. Moreover it can be posed $u_{c} = \hat g(t, y, \hat p)$, with $\hat p$ the nominal value of the parameters. Finally, the calculation of $\lambda_{\max}(B)$, $\alpha_{2}$ and $d$ can be facilitated by suitably using Theorem 15.
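The choice $K = kI$ with $k \ge \lambda_{\max}(B)$ can be verified numerically against the first condition of (84). The sketch below is a minimal check, assuming a purely illustrative inertia matrix `B` (its values are not taken from the chapter):

```python
import numpy as np

# Hypothetical symmetric, positive-definite inertia matrix of a
# two-link mechanism (illustrative values only).
B = np.array([[2.0, 0.3],
              [0.3, 1.0]])

F1 = np.linalg.inv(B)                # F1 = B^{-1}, as in Remark 11
k = np.max(np.linalg.eigvalsh(B))    # k >= lambda_max(B)
K = k * np.eye(2)                    # K = k I

# First condition of (84): lambda_min(K^T F1^T + F1 K) >= 2.
# With B symmetric, K^T F1^T + F1 K = 2k B^{-1}, whose smallest
# eigenvalue is 2k / lambda_max(B) >= 2.
lam_min = np.min(np.linalg.eigvalsh(K.T @ F1.T + F1 @ K))
assert lam_min >= 2 - 1e-9
```

With $k$ chosen exactly equal to $\lambda_{\max}(B)$, the bound is attained with equality, so any larger $k$ satisfies the condition with margin.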

**Remark 12**. The stated theorems can be used for determining simple and robust control laws of the PD type, with a possible compensation action, in order to force system (56) to track a generic preassigned trajectory, limited in "acceleration", with preassigned majorants of the maximum "position and/or velocity" errors and preassigned majorants of the time constants characterizing the convergence of the error.
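The bounds (86)–(88) are straightforward to evaluate numerically. The following sketch, in which all numerical values (`d`, `alpha2`, `a`) are illustrative assumptions and not data from the chapter, computes the roots $\rho_1, \rho_2$ of (86) and compares the resulting bounds with the asymptotic approximations in (88):

```python
import numpy as np

# Illustrative design data (assumed values, not from the chapter).
d, alpha2, a = 0.5, 1.0, 10.0

# Second condition of (84): a >= 2.463 * (alpha2^2 * d^2)^(1/5).
assert a >= 2.463 * (alpha2**2 * d**2) ** 0.2

# Roots rho1 < rho2 of (86):
# alpha2*rho^2 - (a/sqrt(2))*rho + (2^(1/4)/sqrt(a))*d = 0.
b, c = a / np.sqrt(2), 2**0.25 / np.sqrt(a) * d
disc = np.sqrt(b**2 - 4 * alpha2 * c)
rho1, rho2 = (b - disc) / (2 * alpha2), (b + disc) / (2 * alpha2)

# Steady-state bounds and time constant, cf. (87)-(88).
y_inf = 2**0.25 / np.sqrt(a) * rho1      # approaches 2*d/a**2
dy_inf = 2**0.25 * np.sqrt(a) * rho1     # approaches 2*d/a
tau = 1.0 / (alpha2 * (rho2 - rho1))     # approaches sqrt(2)/a
```

For this choice of $a$ the condition $\rho_1 \ll \rho_2$ holds, and the exact bounds agree with the closed-form approximations of (88) to within about one percent.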

## **7. Conclusion**


In this chapter one of the main and most realistic control problems, not yet suitably solved in the literature, has been considered: to design robust control laws that force an uncertain parametric system subject to disturbances to track generic references, regular enough, with a maximum prefixed error starting from a prefixed time instant.

This problem is satisfactorily solved for SISO processes without zeros, with measurable state and with parametric uncertainties, by using theorems and algorithms deriving from some properties of the most common filters, from Kharitonov's theorem and from the theory of externally positive systems.

The considered problem has also been solved for a class of uncertain pseudo-quadratic systems, including articulated mechanical ones, but for limitation of pages only the two fundamental results have been reported. They allow one to calculate, by using efficient algorithms, the parameters characterizing the performance of the control system as a function of the design parameters of the control law.

## **8. References**


Butterworth, S. (1930). On the Theory of Filter Amplifiers. *Experimental Wireless and the Wireless Engineer*, no. 7, pp. 536-541

Porter, B. and Power, H.M. (1970). Controllability of Multivariable Systems Incorporating Integral Feedback. *Electron. Lett.*, no. 6, pp. 689-690

Seraj, H. and Tarokh, M. (1977). Design of Proportional-Plus-Derivative Output Feedback for Pole Assignment. *Proc. IEE*, vol. 124, no. 8, pp. 729-732

Ambrosino, G., Celentano, G. and Garofalo, F. (1985). Robust Model Tracking Control for a Class of Nonlinear Plants. *IEEE Trans. Autom. Control*, vol. 30, no. 3, pp. 275-279

Dorato, P. (Editor) (1987). *Robust Control*, IEEE Press

Jayasuriya, S. and Hwang, C.N. (1988). Tracking Controllers for Robot Manipulators: a High Gain Perspective. *ASME J. of Dynamic Systems, Measurement, and Control*, vol. 110, pp. 39-45

Nijmeijer, H. and Van der Schaft, A.J. (1990). *Nonlinear Dynamical Control Systems*, Springer-Verlag

Slotine, J.J.E. and Li, W. (1991). *Applied Nonlinear Control*, Prentice-Hall

Tao, G. (1992). On Robust Adaptive Control of Robot Manipulators. *Automatica*, vol. 28, no. 4, pp. 803-807

Colbaugh, R., Glass, K. and Seraji, H. (1993). A New Approach to Adaptive Manipulator Control. *Proc. IEEE Intern. Conf. on Robotics and Automation*, vol. 1, pp. 604-611, Atlanta

Abdallah, C.T., Dorato, P. and Cerone, V. (1995). *Linear Quadratic Control*, Englewood Cliffs, New Jersey, Prentice-Hall

Freeman, R.A. and Kokotovic, P.V. (1995). Robust Integral Control for a Class of Uncertain Nonlinear Systems. *34th IEEE Intern. Conf. on Decision & Control*, pp. 2245-2250, New Orleans

Arimoto, S. (1996). *Control Theory of Nonlinear Mechanical Systems*, Oxford Engineering Science Series

Sastry, S. (1999). *Nonlinear Systems, Analysis, Stability and Control*, Springer-Verlag

Paarmann, L.D. (2001). *Design and Analysis of Analog Filters: A Signal Processing Perspective*, Kluwer Academic Publishers, Springer

Celentano, L. (2005). A General and Efficient Robust Control Method for Uncertain Nonlinear Mechanical Systems. *Proc. IEEE Conf. Decision and Control*, Seville, Spain, pp. 659-665

Amato, F. (2006). *Robust Control of Linear Systems Subject to Uncertain Time-Varying Parameters*, Springer-Verlag

Bru, R. and Romero-Vivó, S. (2009). Positive Systems. *Proc. 3rd Multidisciplinary Intern. Symposium on Positive Systems: Theory and Applications*, Valencia, Spain

Siciliano, B. and Khatib, O. (Editors) (2009). *Springer Handbook of Robotics*, Springer

## **Part 2**

## **Special Topics in Robust and Adaptive Control**


## **Robust Feedback Linearization Control for Reference Tracking and Disturbance Rejection in Nonlinear Systems**

Cristina Ioana Pop and Eva Henrietta Dulf *Technical University of Cluj, Department of Automation, Cluj-Napoca Romania* 

## **1. Introduction**

Most industrial processes are nonlinear systems, the control method applied consisting of a linear controller designed for the linear approximation of the nonlinear system around an operating point. However, even though the design of a linear controller is rather straightforward, the result may prove to be unsatisfactory when applied to the nonlinear system. The natural consequence is to use a nonlinear controller.

Several authors have proposed the method of feedback linearization (Chou & Wu, 1995) to design a nonlinear controller. The main idea of feedback linearization is based on the fact that the system is not entirely nonlinear, which makes it possible to transform a nonlinear system into an equivalent linear system by effectively canceling out the nonlinear terms in the closed loop (Seo *et al*., 2007). It provides a way of addressing the nonlinearities in the system while allowing one to use the power of linear control design techniques to address nonlinear closed-loop performance specifications.

Nevertheless, the classical feedback linearization technique has certain disadvantages regarding robustness. A robust linear controller designed for the linearized system may not guarantee robustness when applied to the initial nonlinear system, mainly because the linearized system obtained by feedback linearization is in the Brunovsky form, a non-robust form whose dynamics is completely different from that of the original system and which is highly vulnerable to uncertainties (Franco *et al*., 2006). To eliminate the drawbacks of classical feedback linearization, a robust feedback linearization method has been developed for uncertain nonlinear systems (Franco *et al*., 2006; Guillard & Bourles, 2000; Franco *et al*., 2005) and its efficiency proved theoretically by W-stability (Guillard & Bourles, 2000). The proposed method ensures that a robust linear controller, designed for the linearized system obtained using robust feedback linearization, will maintain its robustness properties when applied to the initial nonlinear system.

In this paper, a comparison between the classical approach and the robust feedback linearization method is addressed. The mathematical steps required to feedback linearize a nonlinear system are given in both approaches. It is shown how the classical approach can be altered in order to obtain a linearized system that coincides with the tangent linearized system around the chosen operating point, rather than the classical chain of integrators. Further, a robust linear controller is designed for the feedback linearized system using loop-shaping techniques and then applied to the original nonlinear system. To test the robustness of the method, a chemical plant example is given, concerning the control of a continuous stirred tank reactor.

The paper is organized as follows. In Section 2, the mathematical concepts of feedback linearization are presented, both in the classical and the robust approach. The authors propose a technique for disturbance rejection in the case of robust feedback linearization, based on a feed-forward controller. Section 3 presents the *H*∞ robust stabilization problem. To exemplify the robustness of the method described, the nonlinear robust control of a continuous stirred tank reactor (CSTR) is given in Section 4. Simulation results for reference tracking, as well as disturbance rejection, are given, considering uncertainties in the process parameters. Some concluding remarks are formulated in the final section of the paper.

## **2. Feedback linearization: Classical versus robust approach**

Feedback linearization implies the exact cancelling of nonlinearities in a nonlinear system, being a widely used technique in various domains such as robot control (Robenack, 2005), power system control (Dabo et al., 2009), and also in chemical process control (Barkhordari Yazdi & Jahed-Motlagh, 2009; Pop & Dulf, 2010; Pop et al, 2010), etc. The majority of nonlinear control techniques using feedback linearization also use a strategy to enhance robustness. This section describes the mathematical steps required to obtain the final closed loop control structure, to be later used with robust linear control.

### **2.1 Classical feedback linearization**

### **2.1.1 Feedback linearization for SISO systems**

In the classical approach of feedback linearization as introduced by Isidori (Isidori, 1995), the Lie derivative and the relative degree of the nonlinear system play an important role. For a single input single output system, given by:

$$\begin{aligned} \dot{x} &= f\left(x\right) + g\left(x\right)u \\ y &= h(x) \end{aligned} \tag{1}$$

where $x \in \mathbb{R}^{n}$ is the state, $u$ is the control input, $y$ is the output, $f$ and $g$ are smooth vector fields on $\mathbb{R}^{n}$, and $h$ is a smooth nonlinear function. Differentiating $y$ with respect to time, we obtain:

$$\begin{aligned} \dot{y} &= \frac{\partial h}{\partial \mathbf{x}} f(\mathbf{x}) + \frac{\partial h}{\partial \mathbf{x}} g(\mathbf{x}) u \\ \dot{y} &= L\_f h(\mathbf{x}) + L\_g h(\mathbf{x}) u \end{aligned} \tag{2}$$

with $L_{f}h : \mathbb{R}^{n} \to \mathbb{R}$ and $L_{g}h : \mathbb{R}^{n} \to \mathbb{R}$ defined as the Lie derivatives of $h$ with respect to $f$ and $g$, respectively. Let $U$ be an open set containing the equilibrium point $x_{0}$, that is, a point where $f(x)$ becomes null: $f(x_{0}) = 0$. Thus, if in equation (2) the Lie derivative of $h$ with respect to $g$, $L_{g}h(x)$, is bounded away from zero for all $x \in U$ (Sastry, 1999), then the state feedback law:

$$u = \frac{1}{L_{g}h(x)}\left(-L_{f}h(x) + v\right) \tag{3}$$

yields a linear first order system from the supplementary input $v$ to the initial output of the system, $y$. Thus, there exists a state feedback law, similar to (3), that makes the nonlinear system in (2) linear. The relative degree of system (2) is defined as the number of times the output has to be differentiated before the input appears in its expression. This is equivalent to the denominator in (3) being bounded away from zero for all $x \in U$. In general, the relative degree of a nonlinear system at $x_{0} \in U$ is defined as an integer $\gamma$ satisfying:

$$\begin{aligned} L_{g}L_{f}^{i}h(x) &= 0, \quad \forall x \in U,\; i = 0, \dots, \gamma - 2 \\ L_{g}L_{f}^{\gamma-1}h(x_{0}) &\neq 0 \end{aligned} \tag{4}$$

Thus, if the nonlinear system in (1) has relative degree equal to *γ*, then the differentiation of *y* in (2) is continued until:

$$y^{(\gamma)} = L\_f^{\gamma} h(\mathbf{x}) + L\_g L\_f^{\gamma - 1} h(\mathbf{x}) u \tag{5}$$

with the control input equal to:


$$u = \frac{1}{L_{g}L_{f}^{\gamma-1}h(x)}\left(-L_{f}^{\gamma}h(x) + v\right) \tag{6}$$

The final (new) input – output relation becomes:

$$y^{(\mathcal{V})} = \mathcal{v} \tag{7}$$

which is linear and can be written as a chain of integrators (Brunovsky form). The control law in (6) yields (*n*-*γ*) states of the nonlinear system in (1) unobservable through state feedback.
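The construction in (1)–(7) can be reproduced symbolically. The sketch below is an illustration only: the double-integrator-like system, and all variable names in it, are assumptions rather than an example from the chapter. It finds the relative degree by iterating condition (4) and then forms the linearizing law (6):

```python
import sympy as sp

# Illustrative SISO system of the form (1): x1' = x2, x2' = -x1 + u, y = x1.
x1, x2, v = sp.symbols("x1 x2 v")
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1])   # drift vector field f(x)
g = sp.Matrix([0, 1])      # input vector field g(x)
h = x1                     # output map h(x)

def lie(vec, scalar):
    # Lie derivative of a scalar field along a vector field.
    return sp.expand((sp.Matrix([scalar]).jacobian(x) @ vec)[0])

# Iterate (4): increase gamma while L_g L_f^{gamma-1} h vanishes identically.
gamma, Lfh = 1, h
while lie(g, Lfh) == 0:
    Lfh = lie(f, Lfh)      # Lfh holds L_f^{gamma-1} h
    gamma += 1

# Linearizing law (6); the closed loop then satisfies y^(gamma) = v, as in (7).
u = sp.simplify((-lie(f, Lfh) + v) / lie(g, Lfh))
```

For this system the loop stops at $\gamma = 2$ and the law reduces to $u = x_1 + v$, which indeed cancels the drift term and leaves $\ddot y = v$.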

The problem of measurable disturbances has been tackled also in the framework of feedback linearization. In general, for a nonlinear system affected by a measurable disturbance *d*:

$$\begin{aligned} \dot{x} &= f(x) + g(x)u + p(x)d \\ y &= h(x) \end{aligned} \tag{8}$$

with *p*(*x*) a smooth vector field.

Similar to the relative degree of the nonlinear system, a disturbance relative degree is defined as a value *k* for which the following relation holds:

$$\begin{aligned} L\_p L\_f^i h(\mathbf{x}) &= 0, i < k - 1 \\ L\_p L\_f^{k-1} h(\mathbf{x}) &\neq 0 \end{aligned} \tag{9}$$

Thus, a comparison between the input relative degree and the disturbance relative degree gives a measure of the effect that each external signal has on the output (Daoutidis and Kravaris, 1989). If $k < \gamma$, the disturbance will have a more direct effect upon the output, as compared to the input signal, and therefore a simple control law as given in (6) cannot ensure the disturbance rejection (Henson and Seborg, 1997). In this case complex feed-forward structures are required and effective control must involve anticipatory action for the disturbance. The control law in (6) is modified to include a dynamic feed-forward/state feedback component which differentiates a state- and disturbance-dependent signal up to $\gamma - k$ times, in addition to the pure static state feedback component. In the particular case that $k = \gamma$, both the disturbance and the manipulated input affect the output in the same way. Therefore, a feed-forward/state feedback element which is static in the disturbance is necessary in the control law, in addition to the pure state feedback element (Daoutidis and Kravaris, 1989):

$$u = \frac{1}{L_{g}L_{f}^{\gamma-1}h(x)}\left(-L_{f}^{\gamma}h(x) + v - L_{p}L_{f}^{\gamma-1}h(x)\,d\right) \tag{10}$$
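For the case $k = \gamma$ handled by (10), the cancellation can be checked symbolically. In the sketch below, the system and all symbols are illustrative assumptions (the disturbance enters through the same channel as the input, so $k = \gamma = 2$), and the static law (10) recovers $\ddot y = v$:

```python
import sympy as sp

# Illustrative system (8): x1' = x2, x2' = -x1 + u + d, y = x1.
x1, x2, v, d = sp.symbols("x1 x2 v d")
x = sp.Matrix([x1, x2])
f = sp.Matrix([x2, -x1])
g = sp.Matrix([0, 1])      # input field g(x)
p = sp.Matrix([0, 1])      # disturbance field p(x)
h = x1

def lie(vec, scalar):
    return sp.expand((sp.Matrix([scalar]).jacobian(x) @ vec)[0])

Lfh = lie(f, h)                               # L_f h
assert lie(p, h) == 0 and lie(p, Lfh) != 0    # relation (9) with k = 2

# Static feedback/feed-forward law (10) for k = gamma = 2.
u = (-lie(f, Lfh) + v - lie(p, Lfh) * d) / lie(g, Lfh)

# Closed loop: y'' = L_f^2 h + L_g L_f h * u + L_p L_f h * d.
ydd = sp.simplify(lie(f, Lfh) + lie(g, Lfh) * u + lie(p, Lfh) * d)
```

The feed-forward term $-L_{p}L_{f}h(x)\,d$ exactly cancels the disturbance contribution, so `ydd` simplifies to the new input $v$ alone.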

### **2.1.2 Feedback linearization for MIMO systems**

The feedback linearization method can be extended to multiple input multiple output nonlinear square systems (Sastry, 1999). For a MIMO nonlinear system having *n* states and *m* inputs/outputs the following representation is used:

$$\begin{aligned} \dot{x} &= f\left(x\right) + g\left(x\right)u \\ y &= h\left(x\right) \end{aligned} \tag{11}$$

where $x \in \mathbb{R}^{n}$ is the state, $u \in \mathbb{R}^{m}$ is the control input vector and $y \in \mathbb{R}^{m}$ is the output vector. Similar to the SISO case, a vector relative degree is defined for the MIMO system in (11). The problem of finding the vector relative degree implies differentiating each output signal until one of the input signals appears explicitly in the differentiation. For each output signal, we define $\gamma_{j}$ as the smallest integer such that at least one of the inputs appears in the $\gamma_{j}$-th derivative of $y_{j}$:

$$y_{j}^{(\gamma_{j})} = L_{f}^{\gamma_{j}}h_{j} + \sum_{i=1}^{m} L_{g_{i}}\!\left(L_{f}^{\gamma_{j}-1}h_{j}\right)u_{i} \tag{12}$$

and at least one term $L_{g_{i}}\left(L_{f}^{\gamma_{j}-1}h_{j}\right)u_{i} \neq 0$ for some $x$ (Sastry, 1999). In what follows we assume that the sum of the relative degrees of the outputs is equal to the number of states of the nonlinear system. Such an assumption implies that the feedback linearization method is exact. Thus, none of the state variables of the original nonlinear system is rendered unobservable through feedback linearization.

The matrix *M*(*x*), defined as the decoupling matrix of the system, is given as:

$$M(x) = \begin{bmatrix} L_{g_{1}}\left(L_{f}^{r_{1}-1}h_{1}\right) & \dots & L_{g_{m}}\left(L_{f}^{r_{1}-1}h_{1}\right) \\ \dots & \dots & \dots \\ L_{g_{1}}\left(L_{f}^{r_{m}-1}h_{m}\right) & \dots & L_{g_{m}}\left(L_{f}^{r_{m}-1}h_{m}\right) \end{bmatrix} \tag{13}$$

The nonlinear system in (11) has a well-defined vector relative degree $r_{1}, r_{2}, \dots, r_{m}$ at the point $x_{0}$ if $L_{g_{i}}L_{f}^{k}h_{j}(x) = 0$ for $0 \le k \le r_{j} - 2$, $i, j = 1, \dots, m$, and the matrix $M(x_{0})$ is nonsingular. If the vector relative degree $r_{1}, r_{2}, \dots, r_{m}$ is well defined, then (12) can be written as:

$$\begin{bmatrix} y_{1}^{(r_{1})} \\ y_{2}^{(r_{2})} \\ \vdots \\ y_{m}^{(r_{m})} \end{bmatrix} = \begin{bmatrix} L_{f}^{r_{1}}h_{1} \\ L_{f}^{r_{2}}h_{2} \\ \vdots \\ L_{f}^{r_{m}}h_{m} \end{bmatrix} + M(x)\begin{bmatrix} u_{1} \\ u_{2} \\ \vdots \\ u_{m} \end{bmatrix} \tag{14}$$

Since *M*( <sup>0</sup>*<sup>x</sup>* ) is nonsingular, then *M*(*x*) *mm* is nonsingular for each *Ux* . As a consequence, the control signal vector can be written as:

$$\mu = -M^{-1}(\mathbf{x}) \begin{bmatrix} L\_f^r h\_1 \\ L\_f^r h\_2 \\ \vdots \\ L\_f^r h\_m \end{bmatrix} + M^{-1}(\mathbf{x}) \upsilon = \alpha\_c(\mathbf{x}) + \beta\_c(\mathbf{x}) \upsilon \tag{15}$$

yielding the linearized system as:

276 Recent Advances in Robust Control – Novel Approaches and Design Methods

for the disturbance. The control law in (6) is modified to include a dynamic feedforward/state feedback component which differentiates a state- and disturbance-dependent signal up to *γ*–*k* times, in addition to the pure static state feedback component. In the particular case that *k*= *γ*, both the disturbance and the manipulated input affect the output in the same way. Therefore, a feed-forward/state feedback element which is static in the disturbance is necessary in the control law in addition to the pure state feedback element

1

*LL hx*

*g f*

**2.1.2 Feedback linearization for MIMO systems** 

and at least one term <sup>1</sup> 0 ( )( )) *<sup>j</sup>*

unobservable through feedback linearization.

<sup>0</sup>*x* if *xhLL* 0 *<sup>i</sup> k*

*m* inputs/outputs the following representation is used:

<sup>1</sup>

<sup>1</sup>

  

 (10)

(11)

(12)

for some *x* (Sastry, 1999). In what follows we

*p m*

*j y* :

(13)

<sup>1</sup> () () ( ) *f f <sup>p</sup>*

*u L h x v L L p x d*

The feedback linearization method can be extended to multiple input multiple output nonlinear square systems (Sastry, 1999). For a MIMO nonlinear system having *n* states and

 *x f x g x u*

where *<sup>n</sup> x* is the state, *<sup>m</sup> u* is the control input vector and *<sup>m</sup> y* is the output vector. Similar to the SISO case, a vector relative degree is defined for the MIMO system in (11). The problem of finding the vector relative degree implies differentiation of each output signal until one of the input signals appear explicitly in the differentiation. For each output signal,

> 1 *j j j*

assume that the sum of the relative degrees of each output is equal to the number of states of the nonlinear system. Such an assumption implies that the feedback linearization method is exact. Thus, neither of the state variables of the original nonlinear system is rendered

*m j f f jg j i i y Lh L L h u*

*i*

1 1 1

*r r g f f g m*

*L Lh L Lh*

.... .... .... ....

<sup>1</sup> .....

*m*

1 1

 

*fgi* , 0 2 *irk* for i=1,…,*m* and the matrix M( <sup>0</sup>*x* ) is nonsingular. If the

*p p*

*r r gm g m f f*

*L Lh L Lh*

The nonlinear system in (11) has a defined vector relative degree <sup>21</sup> ,......, *rrr <sup>m</sup>* at the point

*y hx* 

we define *γ<sup>j</sup>* as the smallest integer such that at least one of the inputs appears in *<sup>j</sup>*

 

The matrix *M*(*x*), defined as the decoupling matrix of the system, is given as:

1

1

vector relative degree <sup>21</sup> ,......, *rrr <sup>m</sup>* is well defined, then (12) can be written as:

*<sup>i</sup> g ji <sup>f</sup> L L hu* 

*M*

(Daoutidis and Kravaris, 1989):

$$\begin{bmatrix} \boldsymbol{y}\_1^{r\_1} \\ \boldsymbol{y}\_2^{r\_2} \\ \vdots \\ \boldsymbol{y}\_m^{r\_m} \end{bmatrix} = \begin{bmatrix} \boldsymbol{v}\_1 \\ \boldsymbol{v}\_2 \\ \vdots \\ \boldsymbol{v}\_m \end{bmatrix} \tag{16}$$

The states *x* undergo a change of coordinates given by:

$$\mathbf{x}\_c = \begin{bmatrix} y\_1 & \cdots & L\_{f}^{r\_1-1}y\_1 & y\_2 & \cdots & L\_{f}^{r\_2-1}y\_2 & \cdots & y\_m & \cdots & L\_{f}^{r\_m-1}y\_m \end{bmatrix}^{T} \tag{17}$$

The nonlinear MIMO system in (11) is linearized to give:

$$
\dot{\mathbf{x}}\_c = \mathbf{A}\_c \mathbf{x}\_c + \mathbf{B}\_c \mathbf{v} \tag{18}
$$

with *A<sub>c</sub>* and *B<sub>c</sub>* block-diagonal:

$$A\_c = \begin{bmatrix} A\_{c\_1} & 0\_{r\_1 \times r\_2} & \dots & 0\_{r\_1 \times r\_m}\\ 0\_{r\_2 \times r\_1} & A\_{c\_2} & \dots & 0\_{r\_2 \times r\_m}\\ \vdots & \vdots & \ddots & \vdots\\ 0\_{r\_m \times r\_1} & 0\_{r\_m \times r\_2} & \dots & A\_{c\_m} \end{bmatrix}, \quad B\_c = \begin{bmatrix} B\_{c\_1} & 0\_{r\_1 \times 1} & \dots & 0\_{r\_1 \times 1}\\ 0\_{r\_2 \times 1} & B\_{c\_2} & \dots & 0\_{r\_2 \times 1}\\ \vdots & \vdots & \ddots & \vdots\\ 0\_{r\_m \times 1} & 0\_{r\_m \times 1} & \dots & B\_{c\_m} \end{bmatrix}$$

where each block is a chain of integrators:

$$A\_{c\_i} = \begin{bmatrix} 0 & 1 & 0 & \dots & 0\\ 0 & 0 & 1 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \dots & 1\\ 0 & 0 & 0 & \dots & 0 \end{bmatrix} \in \mathbb{R}^{r\_i \times r\_i}, \quad B\_{c\_i} = \begin{bmatrix} 0 & 0 & \dots & 0 & 1 \end{bmatrix}^{T}$$
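The block-diagonal pair above is straightforward to construct programmatically. A minimal sketch (the helper name and the list-of-lists matrix representation are illustrative choices, not from the chapter):

```python
def brunovsky_blocks(r):
    """Build the block-diagonal Brunovsky pair (A_c, B_c) for relative
    degrees r = (r_1, ..., r_m); A_c is n x n and B_c is n x m,
    with n = r_1 + ... + r_m."""
    n, m = sum(r), len(r)
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    offset = 0
    for j, rj in enumerate(r):
        for i in range(rj - 1):
            A[offset + i][offset + i + 1] = 1.0  # chain of integrators
        B[offset + rj - 1][j] = 1.0              # input enters the last state
        offset += rj
    return A, B
```

For r = (2, 1) this returns a 3×3 *A<sub>c</sub>* with a single 1 on the first block's superdiagonal, and a *B<sub>c</sub>* whose inputs enter the second and third states.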

In a classical approach, the feedback linearization is achieved through a feedback control law and a state transformation, leading to a linearized system in the form of a chain of integrators (Isidori, 1995). This makes the design of the linear controller difficult, since the linearized system obtained bears no physical meaning similar to the initial nonlinear system (Pop *et al*., 2009). In fact, two nonlinear systems having the same degree will lead to the same feedback linearized system.

Robust Feedback Linearization Control for Reference Tracking and Disturbance Rejection in Nonlinear Systems

### **2.2 Robust feedback linearization**

To overcome the disadvantages of classical feedback linearization, robust feedback linearization is performed in a neighborhood of an operating point *x*<sub>0</sub>. The linearized system is equal to the tangent linearized system around the chosen operating point. Such a system bears a physical interpretation similar to that of the initial nonlinear system, which makes the design of a controller simpler and more efficient (Pop *et al*., 2009; Pop *et al*., 2010; Franco *et al*., 2006).

The multivariable nonlinear system with disturbance vector *d* is given in the following equation:

$$\begin{aligned} \dot{x} &= f(x) + g(x)u + p(x)d\\ y &= h(x) \end{aligned} \tag{19}$$

where *x* ∈ ℝ<sup>n</sup> is the state, *u* ∈ ℝ<sup>m</sup> is the control input vector and *y* ∈ ℝ<sup>m</sup> is the output vector. In robust feedback linearization, the purpose is to find a state feedback control law that transforms the nonlinear system (19) into a tangent linearized one around an equilibrium point *x*<sub>0</sub>:

$$
\dot{z} = Az + Bw \tag{20}
$$

In what follows, we assume the feedback linearization conditions (Isidori, 1995) are satisfied and that the output of the nonlinear system given in (19) can be chosen as *y* = *λ*(*x*), where *λ*(*x*) = [*λ*<sub>1</sub>(*x*), …, *λ<sub>m</sub>*(*x*)]<sup>T</sup> is a vector formed by functions *λ<sub>i</sub>*(*x*) such that the sum of the relative degrees of each function *λ<sub>i</sub>*(*x*) with respect to the input vector is equal to the number of states of (19).

With the (*A*, *B*) pair in (20) controllable, we define the matrices *L* ∈ ℝ<sup>m×n</sup>, *T* ∈ ℝ<sup>n×n</sup> and *R* ∈ ℝ<sup>m×m</sup> such that (Levine, 1996):

$$\begin{aligned} T\left(A - BRL\right)T^{-1} &= A\_c\\ TBR &= B\_c \end{aligned} \tag{21}$$

with *T* and *R* nonsingular. By taking:

$$v = LT^{-1}x\_c + R^{-1}w \tag{22}$$

and using the state transformation:

$$\mathbf{z} = T^{-1} \mathbf{x}\_c \tag{23}$$

the system in (18) is rewritten as:

$$\dot{\mathbf{x}}\_c = \mathbf{A}\_c \mathbf{x}\_c + \mathbf{B}\_c L \mathbf{T}^{-1} \mathbf{x}\_c + \mathbf{B}\_c \mathbf{R}^{-1} \mathbf{w} = \left(\mathbf{A}\_c + \mathbf{B}\_c L \mathbf{T}^{-1}\right) \mathbf{x}\_c + \mathbf{B}\_c \mathbf{R}^{-1} \mathbf{w} \tag{24}$$

Equation (23) yields:


$$\mathbf{z} = \mathbf{T}^{-1}\mathbf{x}\_c \Rightarrow \mathbf{x}\_c = \mathbf{T}\mathbf{z} \tag{25}$$

Replacing (25) into (24) and using (21), gives:

$$\begin{aligned} T\dot{z} &= \left(A\_c + B\_c L T^{-1}\right)Tz + B\_c R^{-1}w \Rightarrow \dot{z} = T^{-1}\left(A\_c + B\_c L T^{-1}\right)Tz + T^{-1}B\_c R^{-1}w =\\ &= T^{-1}A\_c Tz + T^{-1}B\_c L T^{-1}Tz + T^{-1}B\_c R^{-1}w\\ \dot{z} &= T^{-1}T\left(A - BRL\right)T^{-1}Tz + T^{-1}TBRLT^{-1}Tz + T^{-1}TBRR^{-1}w =\\ &= \left(A - BRL\right)z + BRLz + Bw = Az + Bw \end{aligned} \tag{26}$$

resulting in the linearized system (20), with *A* = ∂<sub>x</sub>*f*(*x*<sub>0</sub>) and *B* = *g*(*x*<sub>0</sub>). The control signal vector is given by:

$$u = \alpha\_c(x) + \beta\_c(x)v = \alpha\_c(x) + \beta\_c(x)LT^{-1}x\_c + \beta\_c(x)R^{-1}w = \alpha(x) + \beta(x)w \tag{27}$$

The *L*, *T* and *R* matrices are taken as *L* = −*M*(*x*<sub>0</sub>)∂<sub>x</sub>*α<sub>c</sub>*(*x*<sub>0</sub>), *T* = ∂<sub>x</sub>*x<sub>c</sub>*(*x*<sub>0</sub>) and *R* = *M*<sup>−1</sup>(*x*<sub>0</sub>) (Franco *et al*., 2006; Guillard and Bourlès, 2000).
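The chain of equalities in (26) can be checked numerically: for arbitrary *A*, *B*, *L* and invertible *T*, *R*, building *A<sub>c</sub>* and *B<sub>c</sub>* through (21) and transforming back must return exactly *A* and *B*. A small self-contained sketch with hand-rolled 2×2 matrix helpers and illustrative values:

```python
def mat_mul(X, Y):
    """Product of two matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def mat_sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def inv2(X):
    """Inverse of a 2x2 matrix."""
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

# Illustrative values: A, B, L arbitrary; T, R nonsingular.
A = [[0.5, -1.0], [2.0, 0.3]]
B = [[1.0, 0.0], [0.2, 1.5]]
L = [[0.1, 0.4], [-0.3, 0.2]]
T = [[2.0, 1.0], [0.0, 1.0]]
R = [[1.0, 0.5], [0.0, 2.0]]

# Equation (21): A_c = T (A - B R L) T^-1 and B_c = T B R
Ac = mat_mul(mat_mul(T, mat_sub(A, mat_mul(B, mat_mul(R, L)))), inv2(T))
Bc = mat_mul(T, mat_mul(B, R))

# Back-transformation as in (26): recovers A and B.
A_back = mat_mul(inv2(T), mat_mul(mat_add(Ac, mat_mul(Bc, mat_mul(L, inv2(T)))), T))
B_back = mat_mul(inv2(T), mat_mul(Bc, inv2(R)))
```

Here `A_back` and `B_back` coincide with `A` and `B` up to floating-point error, mirroring the algebra of (26).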

Disturbance rejection in nonlinear systems, based on classical feedback linearization theory, was first tackled by Daoutidis and Kravaris (1989). Disturbance rejection in the framework of robust feedback linearization has not been discussed so far.

In what follows, we assume that the relative degrees of the disturbances to the outputs are equal to those of the inputs. Thus, for measurable disturbances, a simple static feed-forward structure can be used (Daoutidis and Kravaris, 1989; Daoutidis et al., 1990). The final closed loop control scheme used in robust feedback linearization and feed-forward compensation is given in Figure 1 (Pop et al., 2010).

Fig. 1. Feedback linearization closed loop control scheme

For the nonlinear system given in (19), the state feedback/feed-forward control law is given by:

$$u = \alpha(x) + \beta(x)w - \gamma(x)d \tag{28}$$


with *α*(*x*) and *β*(*x*) as described in (27), and *γ*(*x*) = *M*<sup>−1</sup>(*x*)*p*(*x*).
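A quick numerical sanity check of this choice of *γ*(*x*): with output dynamics ẏ = *F*(*x*) + *M*(*x*)*u* + *P*(*x*)*d*, applying *u* = *α*(*x*) + *β*(*x*)*w* − *M*<sup>−1</sup>(*x*)*P*(*x*)*d* leaves ẏ independent of *d*. The 2×2 values below are illustrative, not taken from the process model:

```python
def inv2(X):
    """Inverse of a 2x2 matrix."""
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

# Illustrative values at a fixed state: decoupling matrix M,
# disturbance gain column P, drift term F and exogenous input w.
M = [[1.0, 1.0], [0.18, -0.07]]
P = [0.3, 0.5]
F = [-2.0, -0.11]
w = [0.4, -0.2]

def y_dot(d):
    """Output derivative under u = alpha + beta*w - gamma*d,
    with alpha = -M^-1 F, beta = M^-1 and gamma = M^-1 P."""
    Mi = inv2(M)
    # u_i = sum_j Mi[i][j] * (w_j - F_j - P_j * d)
    u = [sum(Mi[i][j] * (w[j] - F[j] - P[j] * d) for j in range(2))
         for i in range(2)]
    return [F[i] + sum(M[i][j] * u[j] for j in range(2)) + P[i] * d
            for i in range(2)]
```

`y_dot(0.0)` and `y_dot(5.0)` coincide (both equal `w`), confirming that the feed-forward term cancels the disturbance contribution exactly.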

## **3. Robust H∞ controller design**

To ensure stability and performance against modelling errors, the authors choose the method of McFarlane-Glover to design a robust linear controller for the feedback linearized system. The method of loop-shaping is chosen due to its ability to address robust performance and robust stability in two different stages of controller design (McFarlane and Glover, 1990).

The method of loopshaping consists of three steps:

### **Step 1. Open loop shaping**

Using a pre-weighting matrix *W<sub>I</sub>* and/or a post-weighting matrix *W<sub>o</sub>*, the minimum and maximum singular values are modified to shape the response. This step results in an augmented matrix of the process transfer function: *P<sub>s</sub>* = *W<sub>o</sub>PW<sub>I</sub>*.

Fig. 2. Augmented matrix of the process transfer function

### **Step 2. Robust stability**

The stability margin is computed as:

$$\frac{1}{\varepsilon\_{\max}} = \inf\_{K \text{ stabilizing}} \left\| \begin{bmatrix} I\\ K \end{bmatrix} \left(I - P\_{s}K\right)^{-1} \widetilde{M}\_{s}^{-1} \right\|\_{\infty}$$

where $P\_{s} = \widetilde{M}\_{s}^{-1}\widetilde{N}\_{s}$ is the normalized left coprime factorization of the process transfer function matrix. If *ε*<sub>max</sub> is too small, the pre and post weighting matrices have to be modified by relaxing the constraints imposed on the open loop shaping. If the value of *ε*<sub>max</sub> is acceptable, for a value *ε* ≤ *ε*<sub>max</sub> the resulting controller *K<sub>a</sub>* is computed in order to satisfy the following relation:

$$\left\| \begin{bmatrix} I\\ K\_{a} \end{bmatrix} \left(I - P\_{s}K\_{a}\right)^{-1} \widetilde{M}\_{s}^{-1} \right\|\_{\infty} \leq \varepsilon^{-1} \tag{29}$$
 

Fig. 3. Robust closed loop control scheme

## **Step 3. Final robust controller**


The final controller is given by the sub-optimal controller *K<sub>a</sub>* weighted with the matrices *W<sub>I</sub>* and/or *W<sub>o</sub>*: *K* = *W<sub>I</sub>K<sub>a</sub>W<sub>o</sub>*.

Using the McFarlane-Glover method, the loop shaping is done without considering the problem of robust stability, which is explicitly taken into account at the second design step, by imposing a stability margin for the closed loop system. This stability margin *ε*<sub>max</sub> is an indicator of the efficiency of the loop-shaping technique.

Fig. 4. Optimal controller obtained with the pre and post weighting matrices

The stability of the closed loop nonlinear system using robust stability and loopshaping is proven theoretically using W-stability (Guillard & Bourles, 2000; Franco *et al*., 2006).

### **4. Case study: Reference tracking and disturbance rejection in an isothermal CSTR**

The authors propose, as an example, the control of an isothermal CSTR. A complete description of the steps required to obtain the final feedback linearization control scheme, in both approaches, is given. The robustness of the final nonlinear H∞ controller is demonstrated through simulations concerning reference tracking and disturbance rejection, for the robust feedback linearization case.

## **4.1 The isothermal continuous stirred tank reactor**

The application studied is an isothermal continuous stirred tank reactor process with first order reaction:

$$A + B \to P \tag{30}$$

Different strategies have been proposed for this type of multivariable process (De Oliveira, 1994; Martinsen et al., 2004; Chen et al., 2010). The choice of the CSTR resides in its strong nonlinear character, which makes the application of a nonlinear control strategy based directly on the nonlinear model of the process preferable to classical linearization methods (De Oliveira, 1994).

The schematic representation of the process is given in Figure 5.

The tank reactor is assumed to be well mixed. The control system designed for such a process is intended to keep the liquid level in the tank, *x*<sub>1</sub>, constant, as well as the concentration of product *B*, *x*<sub>2</sub>, extracted at the bottom of the tank. It is also assumed that the output flow rate *F*<sub>o</sub> is determined by the liquid level in the reactor. The final concentration *x*<sub>2</sub> is obtained by mixing two input streams: a concentrated one *u*<sub>1</sub>, of concentration *C*<sub>B1</sub>, and a diluted one *u*<sub>2</sub>, of concentration *C*<sub>B2</sub>. The process is therefore modelled as a multivariable system, having two manipulated variables, *u* = [*u*<sub>1</sub> *u*<sub>2</sub>]<sup>T</sup>, and two controlled outputs, *x* = [*x*<sub>1</sub> *x*<sub>2</sub>]<sup>T</sup>.

The process model is then given as:

$$\begin{aligned} \frac{dx\_1}{dt} &= u\_1 + u\_2 - k\_1\sqrt{x\_1}\\ \frac{dx\_2}{dt} &= \left(C\_{B1} - x\_2\right)\frac{u\_1}{x\_1} + \left(C\_{B2} - x\_2\right)\frac{u\_2}{x\_1} - \frac{k\_2 x\_2}{\left(1 + x\_2\right)^2} \end{aligned} \tag{31}$$

with the parameters' nominal values given in Table 1. The steady state operating conditions are taken as *x*<sub>1*ss*</sub> = 100 and *x*<sub>2*ss*</sub> = 7.07, corresponding to the input flow rates *u*<sub>1*s*</sub> = 1 and *u*<sub>2*s*</sub> = 1. The concentrations of *B* in the input streams, *C*<sub>B1</sub> and *C*<sub>B2</sub>, are regarded as input disturbances.

Fig. 5. Continuous stirred tank reactor (De Oliveira, 1994)


| Parameter | Meaning | Nominal Value |
| --- | --- | --- |
| *C*<sub>B1</sub> | Concentration of *B* in the inlet flow *u*<sub>1</sub> | 24.9 |
| *C*<sub>B2</sub> | Concentration of *B* in the inlet flow *u*<sub>2</sub> | 0.1 |
| *k*<sub>1</sub> | Valve constant | 0.2 |
| *k*<sub>2</sub> | Kinetic constant | 1 |

Table 1. CSTR parameters and nominal values
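The operating point can be verified directly against the model (31): at *x* = (100, 7.07) with *u*<sub>1</sub> = *u*<sub>2</sub> = 1 and the Table 1 values, both derivatives are numerically zero. A minimal sketch:

```python
import math

# Nominal parameters from Table 1
k1, k2 = 0.2, 1.0
CB1, CB2 = 24.9, 0.1

def cstr_rhs(x1, x2, u1, u2):
    """Right-hand side of the CSTR model (31)."""
    dx1 = u1 + u2 - k1 * math.sqrt(x1)
    dx2 = ((CB1 - x2) * u1 / x1 + (CB2 - x2) * u2 / x1
           - k2 * x2 / (1.0 + x2) ** 2)
    return dx1, dx2

dx1, dx2 = cstr_rhs(100.0, 7.07, 1.0, 1.0)
```

`dx1` is exactly 0, while `dx2` is of the order 10⁻⁵, the residue of rounding the steady state concentration to *x*<sub>2*ss*</sub> = 7.07.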

From a feedback linearization point of view the process model given in (31) is rewritten as:

$$\begin{aligned} \begin{pmatrix} \dot{x}\_1\\ \dot{x}\_2 \end{pmatrix} &= \begin{pmatrix} -k\_1\sqrt{x\_1}\\ -\frac{k\_2 x\_2}{\left(1 + x\_2\right)^2} \end{pmatrix} + \begin{pmatrix} 1\\ \frac{C\_{B1} - x\_2}{x\_1} \end{pmatrix} u\_1 + \begin{pmatrix} 1\\ \frac{C\_{B2} - x\_2}{x\_1} \end{pmatrix} u\_2\\ y &= \begin{bmatrix} x\_1 & x\_2 \end{bmatrix}^{T} \end{aligned} \tag{32}$$

yielding:


$$\begin{aligned} \begin{pmatrix} \dot{\mathbf{x}}\_1\\ \dot{\mathbf{x}}\_2 \end{pmatrix} &= f(\mathbf{x}) + g\_1(\mathbf{x})u\_1 + g\_2(\mathbf{x})u\_2\\ y\_1 = h\_1(\mathbf{x}) &= \mathbf{x}\_1\\ y\_2 = h\_2(\mathbf{x}) &= \mathbf{x}\_2 \end{aligned} \tag{33}$$

The relative degrees of each output are obtained based on differentiation:

$$\begin{aligned} \dot{y}\_1 &= -k\_1 \sqrt{\mathbf{x}\_1} + \mathbf{u}\_1 + \mathbf{u}\_2\\ \dot{y}\_2 &= -\frac{k\_2 \mathbf{x}\_2}{\left(1 + \mathbf{x}\_2\right)^2} + \frac{\left(\mathbf{C}\_{B1} - \mathbf{x}\_2\right)}{\mathbf{x}\_1} \mathbf{u}\_1 + \frac{\left(\mathbf{C}\_{B2} - \mathbf{x}\_2\right)}{\mathbf{x}\_1} \mathbf{u}\_2 \end{aligned} \tag{34}$$

thus yielding *r*1=1 and *r*2=1, respectively, with *r*1 + *r*2 = 2, the number of state variables of the nonlinear system (32). Since this is the case, the linearization will be exact, without any state variables rendered unobservable through feedback linearization. The decoupling matrix *M*(*x*) in (13), will be equal to:

$$M(x) = \begin{bmatrix} L\_{g\_1}\left(L\_f^{0}h\_1\right) & L\_{g\_2}\left(L\_f^{0}h\_1\right)\\ L\_{g\_1}\left(L\_f^{0}h\_2\right) & L\_{g\_2}\left(L\_f^{0}h\_2\right) \end{bmatrix} = \begin{bmatrix} 1 & 1\\ \frac{C\_{B1} - x\_2}{x\_1} & \frac{C\_{B2} - x\_2}{x\_1} \end{bmatrix} \tag{35}$$

and is non-singular at the equilibrium point *x*<sub>0</sub> = [100; 7.07]<sup>T</sup>. The state transformation is given by:

$$\mathbf{x}\_c = \begin{bmatrix} y\_1 & y\_2 \end{bmatrix}^T = \begin{bmatrix} \mathbf{x}\_1 & \mathbf{x}\_2 \end{bmatrix}^T \tag{36}$$

while the control signal vector is:

$$\mu = -M^{-1}(\mathbf{x}) \begin{bmatrix} L\_f^1 h\_1 \\ L\_f^1 h\_2 \end{bmatrix} + M^{-1}(\mathbf{x}) \upsilon = \alpha\_c(\mathbf{x}) + \beta\_c(\mathbf{x}) \upsilon \tag{37}$$

with

$$\alpha\_c(x) = -\begin{bmatrix} 1 & 1\\ \frac{C\_{B1} - x\_2}{x\_1} & \frac{C\_{B2} - x\_2}{x\_1} \end{bmatrix}^{-1} \begin{bmatrix} -k\_1\sqrt{x\_1}\\ -\frac{k\_2 x\_2}{\left(1 + x\_2\right)^2} \end{bmatrix} \quad \text{and} \quad \beta\_c(x) = \begin{bmatrix} 1 & 1\\ \frac{C\_{B1} - x\_2}{x\_1} & \frac{C\_{B2} - x\_2}{x\_1} \end{bmatrix}^{-1}$$

In the next step, the L, T and R matrices needed for the robust feedback linearization method are computed:

$$L = -M(x\_0)\,\partial\_x \alpha\_c(x\_0) = \begin{pmatrix} -0.1 \cdot 10^{-1} & 0\\ -0.11 \cdot 10^{-2} & -0.84 \cdot 10^{-2} \end{pmatrix} \tag{38}$$

$$T = \partial\_x \mathbf{x}\_c(\mathbf{x}\_0) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \tag{39}$$

$$R = M^{-1}(x\_0) = \begin{pmatrix} 0.28 & 4.03\\ 0.72 & -4.03 \end{pmatrix} \tag{40}$$
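The numbers in (40) can be reproduced from the decoupling matrix at *x*<sub>0</sub>, and the same pieces confirm that *α<sub>c</sub>*(*x*<sub>0</sub>) returns the steady-state inputs *u*<sub>1*s*</sub> = *u*<sub>2*s*</sub> = 1. A sketch with a hand-rolled 2×2 inverse and the nominal Table 1 parameters:

```python
import math

k1, k2 = 0.2, 1.0
CB1, CB2 = 24.9, 0.1
x0 = (100.0, 7.07)

def M(x1, x2):
    """Decoupling matrix (35)."""
    return [[1.0, 1.0], [(CB1 - x2) / x1, (CB2 - x2) / x1]]

def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def alpha_c(x1, x2):
    """alpha_c(x) = -M(x)^-1 [L_f h_1, L_f h_2]^T, cf. (37)."""
    f = [-k1 * math.sqrt(x1), -k2 * x2 / (1.0 + x2) ** 2]
    Mi = inv2(M(x1, x2))
    return [-(Mi[i][0] * f[0] + Mi[i][1] * f[1]) for i in range(2)]

R = inv2(M(*x0))    # equation (40), approximately [[0.28, 4.03], [0.72, -4.03]]
u0 = alpha_c(*x0)   # approximately [1.0, 1.0], the steady-state flow rates
```

That *α<sub>c</sub>*(*x*<sub>0</sub>) recovers the steady-state inputs is a useful cross-check: at the equilibrium the linearizing law with *v* = 0 must hold the plant at rest.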

Robust Feedback Linearization Control

<sup>0</sup> <sup>1</sup> <sup>2</sup> <sup>3</sup> <sup>4</sup> <sup>5</sup> <sup>6</sup> <sup>95</sup>

Time

<sup>0</sup> <sup>1</sup> <sup>2</sup> <sup>3</sup> <sup>4</sup> <sup>5</sup> <sup>6</sup> <sup>0</sup>

Time

1

*x*

 

*x*

with *p*(*x*) taken to be dependent on the output vector:

2

11 1 22 2

*y hx x y hx x*

 

( ) ( )

The relative degrees of the disturbance to the outputs of interest are: <sup>1</sup>

uncertain case nominal case

uncertain case nominal case

2

*γ x*)( being equal to:

4

6

u1

8

10

12

x1

for Reference Tracking and Disturbance Rejection in Nonlinear Systems 285

a) b)

6.6 6.7 6.8 6.9 7 7.1 7.2 7.3

x2

c) d)

1 12 2

() () () ()

*f x g x u g x u p x d*

1 2

*x p x <sup>x</sup>* 

the relative degrees of the disturbances to the outputs are equal to those of the inputs, a simple static feed-forward structure can be used for output disturbance rejection purposes, with the control law given in (28), with *α x*)( and *β x*)( determined according to (27) and

Fig. 6. Closed loop simulations using robust nonlinear controller a) *x*1 b) *x*2 c) *u*1 d) *u*<sup>2</sup>

( )

u2

<sup>0</sup> <sup>1</sup> <sup>2</sup> <sup>3</sup> <sup>4</sup> <sup>5</sup> <sup>6</sup> <sup>0</sup>

<sup>0</sup> <sup>1</sup> <sup>2</sup> <sup>3</sup> <sup>4</sup> <sup>5</sup> <sup>6</sup> 6.5

Time

uncertain case nominal case

uncertain case nominal case

Time

(44)

(45)

1 and <sup>2</sup>

1. Since

The control law can be easily obtained based on (27) as:

$$\begin{aligned} a(\mathbf{x}) &= a\_c(\mathbf{x}) + \beta\_c(\mathbf{x}) LT^{-1} \mathbf{x}\_c \\ \beta(\mathbf{x}) &= \beta\_c(\mathbf{x}) \mathbf{R}^{-1} \end{aligned} \tag{41}$$

while the linearized system is given as:

$$\dot{z} = \begin{pmatrix} -\frac{k\_1}{2} x\_{10}^{-1/2} & 0 \\ 0 & \frac{k\_2 \left(x\_{20} - 1\right)}{\left(x\_{20} + 1\right)^3} \end{pmatrix} z + \begin{pmatrix} 1 & 1 \\ \frac{C\_{B1} - 7.07}{100} & \frac{C\_{B2} - 7.07}{100} \end{pmatrix} w \tag{42}$$

The linear *H*∞ controller is designed using the McFarlane-Glover method (McFarlane, et al., 1989; Skogestad, et al., 2007) with loop-shaping, which solves the robust stabilization problem for uncertain linear plants described by a normalized left coprime factorization. The loop-shaping, *P<sub>s</sub>*(*s*) = *W*(*s*)*P*(*s*), with *P*(*s*) the matrix transfer function of the linear system given in (42), is done with the weighting matrix, *W*:

$$\mathcal{W} = \text{diag}\left(\frac{14}{s} \quad \frac{10}{s}\right) \tag{43}$$

The choice of the weighting matrix corresponds to the performance criteria that need to be met. In addition to robust stability, achieved by using a robust *H*∞ controller, all process outputs need to be maintained at their set-point values. To keep the outputs at the prescribed set-points, the steady state errors have to be reduced. The choice of the integrators in the weighting matrix *W* above ensures the minimization of the steady state errors of the output signals. To keep the controller as simple as possible, only a pre-weighting matrix is used (Skogestad, et al., 2007). The resulting robust controller provides a robustness margin of 38%, corresponding to a value of 2.62.
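As a numeric sanity check, the 38% figure is consistent with the McFarlane-Glover stability margin being the reciprocal of the attained 2.62, and the integral weights in (43) visibly boost the low-frequency gain of the shaped plant. The sketch below uses a hypothetical diagonal stand-in for *P*(*s*), not the actual transfer matrix of (42):

```python
import numpy as np

# McFarlane-Glover loop-shaping: the normalized-coprime stability margin
# is eps = 1/gamma; gamma = 2.62 reproduces the 38% robustness figure
# (assuming that is indeed how the percentage was obtained).
gamma = 2.62
eps = 1.0 / gamma
print(round(eps, 2))  # 0.38

# Shaped plant P_s(s) = W(s) P(s) with W = diag(14/s, 10/s) from (43).
# P(s) here is a hypothetical stable diagonal stand-in, not the plant in (42).
def P(s):
    return np.diag([1.0 / (s + 0.05), 1.0 / (s + 0.1)])

def W(s):
    return np.diag([14.0 / s, 10.0 / s])

for freq in (0.01, 0.1, 1.0):
    gains = np.abs(np.diag(W(1j * freq) @ P(1j * freq)))
    print(freq, gains)  # the integral action yields large gain at low frequency
```

The integrators in *W* are what drive the steady-state errors of both channels toward zero in the closed loop.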

The simulation results considering both nominal values and modelling uncertainties are given in Figure 6. The results obtained using the designed nonlinear controller show that the closed loop control scheme is robust for the considered uncertainty range of ±20% in *k*1 and ±30% in *k*2.

A different scenario is considered in Figure 7, in which the input disturbances CB1 and CB2 have a +20% deviation from their nominal values. The simulation results show that the nonlinear robust controller, apart from its robustness properties, is also able to reject input disturbances.

To test the output disturbance rejection case, the authors consider an empirical model of a measurable disturbance that has a direct effect on the output vector. To consider a general situation from a feedback linearization perspective, the nonlinear model in (33) is altered to model the disturbance, *d*(*t*), as:

284 Recent Advances in Robust Control – Novel Approaches and Design Methods


Fig. 6. Closed loop simulations using robust nonlinear controller a) *x*1 b) *x*2 c) *u*1 d) *u*2

$$\begin{aligned} \begin{pmatrix} \dot{\mathbf{x}}\_1\\ \dot{\mathbf{x}}\_2 \end{pmatrix} &= f(\mathbf{x}) + g\_1(\mathbf{x})u\_1 + g\_2(\mathbf{x})u\_2 + p(\mathbf{x})d\\ y\_1 &= h\_1(\mathbf{x}) = \mathbf{x}\_1\\ y\_2 &= h\_2(\mathbf{x}) = \mathbf{x}\_2 \end{aligned} \tag{44}$$

with *p*(*x*) taken to be dependent on the output vector:

$$p(\mathbf{x}) = \begin{pmatrix} \mathbf{x}\_1 \\ \mathbf{x}\_2 \end{pmatrix} \tag{45}$$

The relative degrees of the disturbance to the outputs of interest are both equal to 1. Since the relative degrees of the disturbance to the outputs are equal to those of the inputs, a simple static feed-forward structure can be used for output disturbance rejection purposes, with the control law given in (28), with *α*(*x*) and *β*(*x*) determined according to (27) and *γ*(*x*) being equal to:

$$\gamma(\mathbf{x}) = M^{-1}(\mathbf{x})\, p(\mathbf{x}) \tag{46}$$
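In a minimal numeric sketch of this static feed-forward term, *γ*(*x*) = *M*⁻¹(*x*)*p*(*x*), the compensation *u*ff = −*γ*(*x*)*d* exactly cancels the disturbance contribution in the output derivatives. The decoupling matrix *M* and the operating point below are assumed illustrative values, not the chapter's model; only *p*(*x*) = (*x*1, *x*2)ᵀ is taken from (45):

```python
import numpy as np

# Hypothetical operating point and decoupling matrix M(x); the numbers are
# illustrative only -- p(x) = (x1, x2)^T is taken from (45).
x = np.array([100.0, 7.0])
M = np.array([[1.0, 1.0],
              [0.93, -0.07]])          # assumed invertible decoupling matrix
p = x.copy()                           # p(x) = (x1, x2)^T as in (45)
d = 0.5                                # measured output disturbance

gamma = np.linalg.solve(M, p)          # gamma(x) = M^{-1}(x) p(x)
u_ff = -gamma * d                      # static feed-forward control term

residual = M @ u_ff + p * d            # disturbance contribution after compensation
print(residual)                        # ~ [0, 0]
```

Because the relative degrees match, this purely static term suffices; no disturbance dynamics need to be inverted.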


Fig. 7. Input disturbance rejection using robust nonlinear controller a) *x*1 b) *x*2 c) *u*1 d) *u*2

The simulation results for a unit disturbance *d* are given in Figure 8, considering a time delay of 1 minute in the sensor measurements. The results show that the state feedback/feed-forward scheme proposed in the robust feedback linearization framework is able to reject measurable output disturbances. A comparative simulation is given for the case of no feed-forward scheme. The results show that the use of the feed-forward scheme in the feedback linearization loop reduces the oscillations in the output, at the expense of an increased control effort.

In the ideal, though practically unlikely, situation where the disturbance *d* is measured without time delay, the results obtained using the feed-forward compensator are highly notable, as compared to the situation without the compensator. The simulation results are given in Figure 9. Both Figure 8 and Figure 9 show the efficiency of such a feed-forward control scheme in output disturbance rejection problems.
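The qualitative effect of measurement delay on feed-forward compensation can be reproduced on a toy first-order discrete plant (all numbers below are assumed and unrelated to the chapter's model): instant feed-forward cancels the disturbance exactly, while delayed feed-forward only attenuates it.

```python
import numpy as np

# Toy discrete plant y[k+1] = a*y[k] + b*u[k] + d[k] with a sinusoidal
# disturbance; the feed-forward term uses a (possibly delayed) measurement of d.
a, b = 0.9, 1.0
N, delay = 200, 2                      # assumed 2-sample measurement delay
d = np.sin(0.2 * np.arange(N))         # unit-amplitude disturbance

def run(use_ff, lag):
    y, out = 0.0, []
    for k in range(N):
        u = -d[k - lag] / b if (use_ff and k >= lag) else 0.0
        y = a * y + b * u + d[k]
        out.append(y)
    return np.array(out)

rms = lambda v: float(np.sqrt(np.mean(v[50:] ** 2)))   # steady-state RMS error
print(rms(run(False, 0)))        # no compensator: largest residual
print(rms(run(True, delay)))     # delayed feed-forward: reduced residual
print(rms(run(True, 0)))         # instant feed-forward: exact cancellation
```

This mirrors the comparison between Figures 8 and 9: the delayed compensator still helps, but only the undelayed measurement yields (near-)perfect rejection.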

Fig. 8. Output disturbance rejection using robust nonlinear controller and feed-forward compensator considering time-delayed measurements of the disturbance *d* a) *x*1 b) *x*2 c) *u*1 d) *u*2

## **5. Conclusions**



As has been previously demonstrated theoretically through mathematical computations (Guillard, *et al*., 2000), the results in this paper prove that by combining the robust method of feedback linearization with a robust linear controller, the robustness properties are kept when simulating the closed loop nonlinear uncertain system. Additionally, the design of the loop-shaping controller is significantly simplified as compared to the classical linearization technique, since the final linearized model bears significant information regarding the initial nonlinear model. Finally, the authors show that the robust nonlinear controller, designed by combining this new method for feedback linearization (Guillard & Bourles, 2000) with a linear *H*∞ controller, offers a simple and efficient solution, both in terms of reference tracking and input disturbance rejection. Moreover, the implementation of the feed-forward control scheme in the state-feedback control structure leads to improved output disturbance rejection.

Fig. 9. Output disturbance rejection using robust nonlinear controller and feed-forward compensator considering instant measurements of the disturbance *d* a) *x*1 b) *x*2 c) *u*1 d) *u*2

## **6. References**

Barkhordari Yazdi, M., & Jahed-Motlagh, M.R. (2009), Stabilization of a CSTR with two arbitrarily switching modes using modal state feedback linearization, In: *Chemical Engineering Journal*, vol. 155, pp. 838-843, ISSN: 1385-8947

Chen, P., Lu, I.-Z., & Chen, Y.-W. (2010), Extremal Optimization Combined with LM Gradient Search for MLP Network Learning, In: *International Journal of Computational Intelligence Systems*, Vol. 3, No. 5, pp. 622-631, ISSN: 1875-6891

Chou, YI-S., & Wu, W. (1995), Robust controller design for uncertain nonlinear systems via feedback linearization, In: *Chemical Engineering Science*, vol. 50, No. 9, pp. 1429-1439, ISSN: 0009-2509

Dabo, M., Langlois, N., & Chafouk, H. (2009), Dynamic feedback linearization applied to asymptotic tracking: Generalization about the turbocharged diesel engine outputs choice, In: *Proceedings of the American Control Conference ACC'09*, ISBN: 978-1-4244-4524-0, pp. 3458-3463, St. Louis, Missouri, USA, 10-12 June 2009

Daoutidis, P., & Kravaris, C. (1989), Synthesis of feedforward/state feedback controllers for nonlinear processes, In: *AIChE Journal*, vol. 35, pp. 1602-1616, ISSN: 1547-5905

Daoutidis, P., Soroush, M., & Kravaris, C. (1990), Feedforward-Feedback Control of Multivariable Nonlinear Processes, In: *AIChE Journal*, vol. 36, No. 10, pp. 1471-1484, ISSN: 1547-5905

De Oliveira, N. M. C. (1994), *Newton type algorithms for nonlinear constrained chemical process control*, PhD thesis, Carnegie Mellon University, Pennsylvania

Franco, A.L.D., Bourles, H., & De Pieri, E.R. (2005), A robust nonlinear controller with application to a magnetic bearing, In: *Proceedings of the 44th IEEE Conference on Decision and Control and the European Control Conference*, ISBN: 0-7803-9567-0, Seville, Spain, 12-15 December 2005

Franco, A.L.D., Bourles, H., De Pieri, E.R., & Guillard, H. (2006), Robust nonlinear control associating robust feedback linearization and *H*∞ control, In: *IEEE Transactions on Automatic Control*, vol. 51, No. 7, pp. 1200-1207, ISSN: 0018-9286

Guillard, H., & Bourles, H. (2000), Robust feedback linearization, In: *Proc. 14th International Symposium on Mathematical Theory of Networks and Systems*, Perpignan, France, 19-23 June 2000

Henson, M., & Seborg, D. (Eds.) (1997), *Nonlinear process control*, Prentice Hall, ISBN: 978-0136251798, New York, USA

Isidori, A. (1995), *Nonlinear control systems*, Springer-Verlag, ISBN: 3540199160, New York, USA

Martinsen, F., Biegler, L. T., & Foss, B. A. (2004), A new optimization algorithm with application to nonlinear MPC, In: *Journal of Process Control*, vol. 14, No. 8, pp. 853-865, ISSN: 0959-1524

McFarlane, D.C., & Glover, K. (1990), Robust controller design using normalized coprime factor plant descriptions, In: *Lecture Notes in Control and Information Sciences*, vol. 138, Springer Verlag, New York, USA, ISSN: 0170-8643

Pop, C.I., Dulf, E., & Festila, Cl. (2009), Nonlinear Robust Control of the 13C Cryogenic Isotope Separation Column, In: *Proceedings of the 17th International Conference on Control Systems and Computer Science*, Vol. 2, pp. 59-65, ISSN: 2066-4451, Bucharest, Romania, 26-29 May 2009

Pop, C.-I., & Dulf, E.-H. (2010), Control Strategies of the 13C Cryogenic Separation Column, In: *Control Engineering and Applied Informatics*, Vol. 12, No. 2, pp. 36-43, June 2010, ISSN 1454-8658

Pop, C.I., Dulf, E., Festila, Cl., & Muresan, B. (2010), Feedback Linearization Control Design for the 13C Cryogenic Separation Column, In: *International IEEE-TTTC International Conference on Automation, Quality and Testing, Robotics AQTR 2010*, vol. I, pp. 157-163, ISBN: 978-1-4244-6724-2, Cluj-Napoca, Romania, 28-30 May 2010

Robenack, K. (2005), Automatic differentiation and nonlinear controller design by exact linearization, In: *Future Generation Computer Systems*, vol. 21, pp. 1372-1379, ISSN: 0167-739X

Sastry, S. S. (1999), *Nonlinear systems: analysis, stability and control*, Springer Verlag, ISBN: 0-387-98513-1, New York, USA

Seo, J., Venugopal, R., & Kenne, J.-P. (2007), Feedback linearization based control of a rotational hydraulic drive, In: *Control Engineering Practice*, vol. 15, pp. 1495-1507, ISSN: 0967-0661



## **Robust Attenuation of Frequency Varying Disturbances**

Kai Zenger and Juha Orivuori *Aalto University School of Electrical Engineering Finland*

### **1. Introduction**


Systems described by differential equations with time-periodic coefficients have a long history in mathematical physics. Applications cover a wide area of systems ranging from helicopter blades, rotor-bearing systems, mechanics of structures, stability of structures influenced by periodic loads, applications in robotics and micro-electromechanical systems etc. (Rao, 2000; Sinha, 2005). Processes characterized by linear time-invariant or time-varying dynamics corrupted by sinusoidal output disturbance belong to this class of systems. Robust and adaptive analysis and synthesis techniques can be used to design suitable controllers, which fulfill the desired disturbance attenuation and other performance characteristics of the closed-loop system.
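A one-line illustration of such time-periodic dynamics: for a scalar LTP system ẋ = (*a* + *b* cos(*ωt*))*x*, stability over one period is decided by the Floquet (monodromy) multiplier. The sketch below, with arbitrary illustrative coefficients, approximates that multiplier numerically:

```python
import numpy as np

# Scalar LTP system xdot = (a + b*cos(w*t)) * x, integrated over one period T.
# The coefficients are illustrative only.
a, b, w = -0.1, 0.5, 2.0 * np.pi
T = 2.0 * np.pi / w                   # period of the time-varying coefficient
n = 10000
dt = T / n

x, t = 1.0, 0.0
for _ in range(n):                    # explicit Euler over one period
    x += dt * (a + b * np.cos(w * t)) * x
    t += dt

# In this scalar case the Floquet multiplier is exp(a*T), because the cosine
# term integrates to zero over a full period; |multiplier| < 1 means stability.
print(x)  # close to exp(-0.1) ~ 0.905
```

For vector-valued LTP systems the same role is played by the eigenvalues of the monodromy matrix, which generally cannot be read off the averaged coefficients.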

Despite the fact that LTP (Linear Time Periodic) system theory has been under research for years (Deshmukh & Sinha, 2004; Montagnier et al., 2004), the analysis of LTP systems with experimental data has been seriously considered only recently (Allen, 2007). New innovative ideas and products are of utmost importance in modern industrial society. In order to design more accurate and more economical products, the importance of model-based control, involving increasingly accurate identification schemes and more effective control methods, has become fully recognized in industrial applications.

An example of the processes related to the topic is vibration control in electrical machines, on which several research groups are currently working. Active vibration control has many applications in various industrial areas, and the need for effective but relatively cheap solutions is enormous. The example of electrical machines considered here concerns the damping of rotor vibrations at the so-called critical speed, defined by the first flexural bending resonance of the rotor. In addition, the electromagnetic fields in the air-gap between rotor and stator may couple with the mechanical vibration modes, leading to rotordynamic instability. The vibration caused by this resonance is so considerable that large motors often have to be driven below the critical speed. Smaller motors can also be driven at super-critical speeds, but they have to be accelerated fast over the critical speed. Active vibration control would make it possible to use the motor freely in its whole operation range, according to the specific needs of the load process. Introducing characteristics of this kind to the electric drives of the future would be a major technological breakthrough and a good example of innovative technological development.



In practice, the basic electromechanical models of electrical machines can be approximated by linear time-invariant models with a sinusoidal disturbance signal entering at the so-called critical frequency. That frequency can also vary, which makes the system model time-variable. The outline of the article is as follows. Two test processes are introduced in Section 2. A systematic and generic model structure valid for these types of systems is presented in Section 3. Three types of controllers for active vibration control are presented in Section 4 and their performance is verified by simulations and practical tests. Specifically, the extension to the nonlinear control algorithm presented in Section 4.4 is important, because it extends the optimal controller to a nonlinear one with good robustness properties with respect to variations in rotation frequency. Conclusions are given in Section 5.
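Such a sinusoidal disturbance can be represented as the output of a marginally stable LTI exosystem, which is the internal-model view exploited by disturbance-attenuating controllers. A small sketch, where the disturbance frequency and sample time are assumed values:

```python
import numpy as np

# Exosystem xdot = [[0, w], [-w, 0]] x generates a pure sinusoid at w rad/s.
w_d = 2.0 * np.pi * 39.0      # assumed disturbance frequency (39 Hz, cf. Section 2.2)
dt = 1.0e-4                   # assumed sample time

c, s = np.cos(w_d * dt), np.sin(w_d * dt)
Ad = np.array([[c, s], [-s, c]])   # exact discretization: a rotation matrix

x = np.array([0.0, 1.0])
d = np.empty(2000)
for k in range(2000):
    d[k] = x[0]               # d[k] = sin(w_d * k * dt)
    x = Ad @ x

t = np.arange(2000) * dt
print(np.max(np.abs(d - np.sin(w_d * t))))  # numerically ~ 0
```

A frequency-varying disturbance corresponds to making *w_d*, and hence the rotation matrix, time-dependent, which is exactly what forces the robust or adaptive designs discussed later.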

## **2. Problem statement**

The control algorithms described in the paper were tested by two test processes to be discussed next.

## **2.1 An electric machine**


Fig. 1. Test machine: A 30 kW three-phase squirrel cage induction motor with an extended rotor shaft (a) and stator windings (b)

In electrical motors both radial and axial vibration modes are of major concern, because they limit the speed at which the motor can be run and also shorten the lifetime of certain parts of the motor. The fundamental vibration forces are typically excited at discrete frequencies (critical frequencies), which depend on the electrodynamics of the rotor and stator (Inman, 2006). In some machines the critical frequency can be passed by accelerating the rotor speed fast beyond it, but specifically in larger machines that is not possible. Hence these machines must be run at subcritical frequencies. A good solution would be to construct an actuator that creates a separate magnetic field in the air-gap between the stator and rotor. That field would cause a counterforce attenuating the vibration mode of the rotor. Running the rotor at critical speeds and beyond requires a stable and robust vibration control system, because different vibration modes are also excited at different speeds.


In Fig.1 a 30 kW induction machine is presented, provided with such a new actuator, which is a coil mounted in the stator slots of the machine (b). The electromechanical actuator is an extra winding, which, due to the controlled current, produces the required counter force to damp the rotor vibrations. The actuator is designed such that the interaction with the normal operation of the machine is minimal. More on the design and modelling of the actuator can be found in (Laiho et al., 2008).

Some of the machine parameters are listed in Table 1. The vibration of the rotor is continuously measured in two dimensions and the control algorithm is used to calculate the control current fed into the coil. The schema of the control arrangement is shown in Fig.2. The idea is to generate a control force to the rotor through a new actuator consisting of extra windings mounted in the stator slots. An adaptive model-based algorithm controls the currents to the actuator, thus generating a magnetic field that induces a force negating the disturbance force excited by the mass imbalance of the rotor. The configuration in the figure includes an excitation force (disturbance) consisting of rotation harmonics and harmonics stemming from the induction machine dynamics. The control force and the disturbance exert a force on the rotor, which results in a rotor center displacement. If the dynamic compensation signal is chosen cleverly, the rotor vibrations can be effectively reduced.

Fig. 2. Rotor vibration control by a built-in new actuator

In practical testing the setup shown in Fig. 3 has been used. The displacement of the rotor in two dimensions (xy) is measured at one point with displacement transducers, which give a voltage signal proportional to the distance from the sensor to the shaft. A digital tachometer at the end of the rotor measures the rotational frequency. The control algorithms were programmed as Matlab/Simulink models, and the dSpace interface system and the Real-Time Workshop were used to control the current fed to the actuator winding.

### **2.2 An industrial rolling process**

The second set of tests was made on a rolling process consisting of a reel, a hydraulic actuator and a force sensor. The natural frequency of the process was 39 Hz, and the hydraulic actuator acts both as the source of control forces and as a support for the reel. The actuator is connected to the support structures through a force sensor, thus providing information on the forces acting on the reel. The test setup is shown in Fig. 4 and the control schema is presented in Fig. 5.

Robust Attenuation of Frequency Varying Disturbances 295

Fig. 4. The test setup (industrial rolling process). [Figure: the reel is supported by the hydraulic actuator through a force sensor; forces: *Fdisturbance*, *Fcontrol*, *Fmeasured*.]

Fig. 5. The controller schema. [Figure: the control algorithm runs on dSpace at 1 kHz, connected through DAC/ADC to the hydraulic valve and the process; measurements: *m*1 = sensed force, *c*1 = hydraulic pressure, *Fc* = control force, *Fd* = disturbance force.]

### **3. Modeling and identification**

Starting from the first principles of electromagnetics (Chiasson, 2005; Fuller et al., 1995) and structural mechanics, the vibration model for a two-pole cage induction machine can be written in the form (Laiho et al., 2008)

$$\begin{cases} \dot{q} = Aq + Bv + Gf\_{ex} \\ u\_{rc} = Cq \end{cases} \tag{1}$$

where *q* denotes the states (real and complex) of the system, *v* is the control signal of the actuator, *fex* is the sinusoidal disturbance causing the vibration at the critical frequency, and *urc* is the radial rotor movement in two dimensions. The matrices *A*, *B*, *G* and *C* are constant. The constant parameter values can be identified by well-known methods (Holopainen et al., 2004; Laiho et al., 2008; Repo & Arkkio, 2006).

| Parameter | Value | Unit |
|---|---|---|
| supply frequency | 50 | Hz |
| rated voltage | 400 | V |
| connection | delta | – |
| rated current | 50 | A |
| rated power | 30 | kW |
| number of phases | 3 | – |
| number of poles | 2 | – |
| rated slip | 1 | % |
| rotor mass | 55.8 | kg |
| rotor shaft length | 1560 | mm |
| critical speed | 37.5 | Hz |
| width of the air-gap | 1 | mm |

Table 1. Main parameters of the test motor

Fig. 3. Schema of the test setup (motor). [Figure: dSpace runs the control algorithm with DAC/ADC and a 2 → 3 phase conversion; a current amplifier feeds the control voltage to the stator and actuator windings; measured signals: *m*1 = rotor displacement, *m*2 = rotational frequency (tacho); *Fc* = control force, *Fd* = disturbance force.]

The results obtained by using a finite-element (FE) model as the "real" process were good and accurate (Laiho et al., 2007), when both the prediction error method (PEM) and subspace identification (SUB) were used. Since the running speed of the motor was considered to be below 60 Hz, the sampling rate was chosen to be 1 kHz. A 12th-order state-space model was used as the model structure (four inputs and two outputs corresponding to the control voltages, rotor displacements and produced control forces in two dimensions). The model order was chosen based on the frequency response calculated from the measurement data, from which the approximate number of poles and zeros was estimated.

In identification a pseudo-random (PSR) control signal was used in the control inputs. It excites the rotor dynamics over a wide frequency range, which is limited only by the sampling rate. However, because the second control input corresponds to the rotor position and has a big influence on the produced force, a pure white noise signal cannot be used here. Therefore


the model output of the rotor position, with a small PSR signal added to prevent correlation, was used as the second control input. After identification the model was validated using independent validation data. The fit was larger than 80 per cent, which was considered adequate for control purposes. The results have later been confirmed by tests carried out using real test machine data and were found to be equally good.
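The broadband property of such an excitation can be illustrated with a short sketch (a generic random binary sequence, not the exact PSR signal used in the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8192
# random binary excitation, +/-1, held for one sample each
u = rng.choice([-1.0, 1.0], size=n)

# periodogram: for a white sequence the energy is spread evenly up to Nyquist
psd = np.abs(np.fft.rfft(u))**2 / n
low_band = float(psd[1:len(psd) // 2].mean())    # lower half of the band
high_band = float(psd[len(psd) // 2:].mean())    # upper half of the band
ratio = low_band / high_band                     # close to 1 for a flat spectrum
```

A band-limited or filtered PSR sequence would show the same flatness only up to its design bandwidth.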

The model structure is then as shown in Fig. 6, where the actuator model and the electromechanical model of the rotor have been separated, and the sinusoidal disturbance term is used to model the force that causes the radial vibration of the rotor. In Fig. 6a the models of the actuator and rotor have been separated and the disturbance is modelled to enter at the input of the rotor model. The internal feedback shown is caused by the unbalanced magnetic pull (UMP): when the rotor moves from the center position in the air gap, it causes an extra distortion in the magnetic field. That causes an extra force, which can be taken into consideration in the actuator model. However, in practical tests it is impossible to separate the models of the actuator and rotor dynamics, and therefore the model in Fig. 6b has been used in identification. Because the models are approximated by linear dynamics, the sinusoidal disturbance signal can be moved to the process output, and the actuator and rotor models can be combined.

In Fig. 6a the 4-state dynamical (Jeffcott) model for the radial rotor dynamics is

$$\begin{cases} \dot{\mathbf{x}}\_{r}(t) = A\_{r}\mathbf{x}\_{r}(t) + B\_{r}u\_{r}(t) \\ y\_{r}(t) = \mathbf{C}\_{r}\mathbf{x}\_{r}(t) \end{cases} \tag{2}$$

where *yr* is the 2-dimensional rotor displacement from the center axis in xy-coordinates, and *ur* is the sum of the actuator and disturbance forces. The actuator model is

$$\begin{aligned} \dot{\mathbf{x}}\_a(t) &= A\_a \mathbf{x}\_a(t) + \begin{bmatrix} B\_{a1} \ B\_{a2} \end{bmatrix} \begin{bmatrix} y\_r(t) \\ u(t) \end{bmatrix} \\ y\_a(t) &= \mathbf{C}\_a \mathbf{x}\_a(t) \end{aligned} \tag{3}$$

where *ya* are the forces generated by the actuator, and *u* are the control voltages fed into the windings. The self-excited sinusoidal disturbance signal is generated by (given here in two dimensions)

$$\begin{aligned} \dot{\mathbf{x}}\_d(t) &= A\_d \mathbf{x}\_d(t) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -\omega\_d^2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -\omega\_d^2 & 0 \end{bmatrix} \mathbf{x}\_d(t) \\ d(t) &= \mathbf{C}\_d \mathbf{x}\_d(t) = \begin{bmatrix} 1 \ 0 \ 0 \ 0 \\ 0 \ 0 \ 1 \ 0 \end{bmatrix} \mathbf{x}\_d(t) \end{aligned} \tag{4}$$

where *ω<sup>d</sup>* is the angular frequency of the disturbance and *d*(*t*) denotes the disturbance forces in xy-directions. The initial values of the state are chosen such that the disturbance consists of two sinusoidal signals with 90 degree phase shift (sine and cosine waves). The initial values are then

$$\mathbf{x}\_d(0) = \begin{bmatrix} \mathbf{x}\_{\sin}(0) \\ \mathbf{x}\_{\cos}(0) \end{bmatrix} = \begin{bmatrix} 0 \\ A\omega\_d \\ A \\ 0 \end{bmatrix}$$

where *A* is the amplitude of the disturbance. The models of the actuator, rotor and disturbance can be combined into one state-space representation

$$\begin{aligned} \dot{\mathbf{x}}\_p(t) &= A\_p \mathbf{x}\_p(t) + B\_p \boldsymbol{u}(t) = \begin{bmatrix} A\_r & B\_r \mathbf{C}\_a & B\_r \mathbf{C}\_d \\ B\_{a1} \mathbf{C}\_r & A\_a & 0 \\ 0 & 0 & A\_d \end{bmatrix} \mathbf{x}\_p(t) + \begin{bmatrix} 0 \\ B\_{a2} \\ 0 \end{bmatrix} \boldsymbol{u}(t) \\ \mathbf{y}\_r(t) &= \mathbf{C}\_p \mathbf{x}\_p(t) = \begin{bmatrix} \mathbf{C}\_r & \mathbf{0} & \mathbf{0} \end{bmatrix} \mathbf{x}\_p(t) \end{aligned} \tag{5}$$

with


$$\mathbf{x}\_p = \begin{bmatrix} \mathbf{x}\_r \\ \mathbf{x}\_a \\ \mathbf{x}\_d \end{bmatrix}$$
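As a sanity check, the disturbance generator (4) with the initial state above can be simulated directly; a minimal sketch, with illustrative amplitude and the 37.5 Hz critical speed as the frequency:

```python
import numpy as np
from scipy.linalg import expm

A_amp = 1.0                       # disturbance amplitude A (illustrative)
wd = 2 * np.pi * 37.5             # angular frequency of the disturbance

# disturbance dynamics A_d and output C_d from eq. (4)
Ad = np.array([[0.0,      1.0, 0.0,      0.0],
               [-wd**2,   0.0, 0.0,      0.0],
               [0.0,      0.0, 0.0,      1.0],
               [0.0,      0.0, -wd**2,   0.0]])
Cd = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])
x0 = np.array([0.0, A_amp * wd, A_amp, 0.0])   # initial state x_d(0)

# the autonomous system reproduces a sine/cosine pair of amplitude A
t = 0.0123
d = Cd @ expm(Ad * t) @ x0
expected = np.array([A_amp * np.sin(wd * t), A_amp * np.cos(wd * t)])
```

The two output channels are in quadrature, which is exactly the 90 degree phase shift required of the xy disturbance forces.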

As mentioned, the actuator and rotor model can be combined and the disturbance can be moved to enter at the output of the process (according to Fig. 6b). The state-space representation of the actuator-rotor model is then

$$\begin{aligned} \dot{x}\_{ar}(t) &= A\_{ar}\mathbf{x}\_{ar}(t) + B\_{ar}u(t) \\ y\_{ar}(t) &= \mathbf{C}\_{ar}\mathbf{x}\_{ar}(t) \end{aligned} \tag{6}$$

where *u* is a vector of applied control voltages and *yar* is a vector of rotor displacements. The whole system can be modeled as

$$\begin{aligned} \dot{\mathbf{x}}\_p(t) &= A\_p \mathbf{x}\_p(t) + B\_p \boldsymbol{u}\_p(t) = \begin{bmatrix} A\_{ar} & 0 \\ 0 & A\_d \end{bmatrix} \mathbf{x}\_p(t) + \begin{bmatrix} B\_{ar} \\ 0 \end{bmatrix} \boldsymbol{u}(t) \\ \mathbf{y}\_r(t) &= \mathbf{C}\_p \mathbf{x}\_p(t) = \begin{bmatrix} \mathbf{C}\_{ar} \ \mathbf{C}\_d \end{bmatrix} \mathbf{x}\_p(t) \end{aligned} \tag{7}$$

with

$$\mathbf{x}\_p(t) = \begin{bmatrix} \mathbf{x}\_{ar}(t) \\ \mathbf{x}\_d(t) \end{bmatrix}$$

The process was identified with a sampling frequency of 1 kHz, which was considered adequate since the running speed of the motor was about 60 Hz and therefore well below 100 Hz. Pseudorandom signals were used as control forces in both channels separately, and the prediction error method (PEM) was used (Ljung, 1999) to identify a 12th order state-space representation of the system.

The identified process model is compared to real process data, and the results are shown in Figs. 7 and 8, respectively. The fits in the x and y directions were calculated as 72.5 % and 80.08 %, which is considered appropriate. From the frequency domain result it is seen that at lower frequencies the model agrees well with the response obtained from measured data, but at higher frequencies there is a clear difference. That is because the physical model behind the identification is only valid up to a certain frequency; above that there exist unmodelled dynamics.

Fig. 6. Process models for the actuator, rotor and sinusoidal disturbance. [Figure: (a) separate actuator (*Ga*), rotor (*Gr*) and disturbance (*Gd*) models, with the disturbance *d*(*t*) entering at the rotor input; (b) combined process model (*Gpro*) with the disturbance *d*(*t*) entering at the process output.]

Fig. 7. Validation of the actuator-rotor model in time domain

Fig. 8. Validation of the actuator-rotor model in frequency domain

### **4. Control design**

In the following sections different control methods are presented for vibration control of single or multiple disturbances with constant or varying disturbance frequencies. Two of the methods are based on *linear quadratic gaussian* (LQ) control, and one belongs to the class of *higher harmonic control* (HHC) algorithms, also known as *convergent control*. If the sinusoidal disturbance signal varies in frequency, the algorithms must be modified by combining them and using direct frequency measurement or frequency tracking.

### **4.1 Direct optimal feedback design**


In this method the suppression of a tonal disturbance is posed as a dynamic optimization problem, which can be solved by the well-known LQ theory. The idea is again that the model generating the disturbance is embedded in the process model, and that information is then automatically used when minimizing the design criterion. This leads to a control algorithm which inputs a signal of the same amplitude but opposite phase to the system, thus canceling the disturbance. The problem can be defined in several scenarios: e.g. the disturbance can be modelled to enter at the process input or output, the signal to be minimized can vary, etc. Starting from the generic model

$$\begin{aligned} \dot{\mathbf{x}}(t) &= A\mathbf{x}(t) + Bu(t) = \begin{bmatrix} A\_p & 0\\ 0 & A\_d \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} B\_p\\ 0 \end{bmatrix} u(t) \\\ y(t) &= \begin{bmatrix} \mathbb{C}\_p \ \mathbb{C}\_d \end{bmatrix} \mathbf{x}(t) \end{aligned} \tag{8}$$

the control criterion is set

$$J = \int\_0^\infty \left( z^T(\tau) Q z(\tau) + u^T(\tau) R u(\tau) \right) d\tau \tag{9}$$

where *z* is a freely chosen performance variable and *Q* ≥ 0, *R* > 0 are the weighting matrices for the performance variable and control effort. By inserting *z*(*t*) = *Czx*(*t*) the criterion changes into the standard LQ form

$$J = \int\_0^\infty \left( \mathbf{x}^T(\tau) \mathbf{C}\_z^T \mathbf{Q} \mathbf{C}\_z \mathbf{x}(\tau) + \mathbf{u}^T(\tau) \mathbf{R} \mathbf{u}(\tau) \right) d\tau \tag{10}$$

The disturbance dynamics can be modelled as

$$\begin{aligned} \dot{\mathbf{x}}\_d(t) &= A\_d \mathbf{x}\_d(t) = \begin{bmatrix} A\_{d1} & \cdots & 0 & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & A\_{dn} & 0 \\ 0 & \cdots & 0 & -\varepsilon \end{bmatrix} \mathbf{x}\_d(t) \\ d(t) &= \mathbf{C}\_d \mathbf{x}\_d(t) = \begin{bmatrix} \mathbf{C}\_{d1} & \cdots & \mathbf{C}\_{dn} & 0 \end{bmatrix} \mathbf{x}\_d(t) \end{aligned} \tag{11}$$

where

$$A\_{di} = \begin{bmatrix} 0 & 1 \\ -\omega\_{di}^2 & -\varepsilon \end{bmatrix}, \quad i = 1, 2, \dots, n$$

and the initial values

$$\mathbf{x}(0) = \begin{bmatrix} \mathbf{x}\_{d1}^T(0) \ \cdots \ \mathbf{x}\_{dn}^T(0) \ b \ \end{bmatrix}^T$$

According to this formalism a sum of *n* sinusoidal disturbance components (with angular frequencies *ωdi*) enters the system. The very small number *ε* is added in order for the augmented system to be stabilizable, which is needed for the solution to exist. The damping of the resulting sinusoid is so low that it does not affect the practical use of the optimal controller.
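The block-diagonal structure of (11) can be sketched as follows; the two frequencies and the value of *ε* are illustrative:

```python
import numpy as np
from scipy.linalg import block_diag

eps = 1e-6                              # small damping term for stabilizability
freqs_hz = [25.0, 50.0]                 # illustrative disturbance frequencies

# one lightly damped oscillator block per frequency, eq. (11)
blocks = [np.array([[0.0, 1.0], [-(2 * np.pi * f)**2, -eps]]) for f in freqs_hz]
Ad = block_diag(*blocks, [[-eps]])      # last state carries the constant bias b

eigs = np.linalg.eigvals(Ad)
max_real = float(eigs.real.max())       # all modes are (very lightly) damped
```

The eigenvalues sit at roughly −*ε*/2 ± *jωdi* and −*ε*, i.e. just inside the left half plane, which is what makes the augmented Riccati equation solvable without changing the disturbance shape in practice.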


The constant *b* can be used for a constant bias term in the disturbance. Compare the disturbance modelling also to that presented in equations (4) and (5).

To minimize the sinusoidal disturbances, the following performance variable can be chosen

$$\mathbf{z}(t) = \begin{bmatrix} \mathbf{C}\_p \\ \begin{bmatrix} \mathbf{C}\_{d1} & \cdots & \mathbf{C}\_{dn} & 0 \end{bmatrix} \end{bmatrix} \mathbf{x}(t) = \begin{bmatrix} \mathbf{C}\_p \\ \mathbf{C}\_d \end{bmatrix} \mathbf{x}(t) = \mathbf{C}\_z \mathbf{x}(t) \tag{12}$$

which leads to the cost function (10)

The solution of the LQ problem can now be obtained by standard techniques (Anderson & Moore, 1989) as

$$u(t) = -L\mathbf{x}(t) = -\mathbf{R}^{-1}\mathbf{B}^T\mathbf{S}\mathbf{x}(t) \tag{13}$$

where *S* is the solution of the algebraic Riccati equation

$$A^T \mathbf{S} + \mathbf{S}A - \mathbf{S}BR^{-1}B^T \mathbf{S} + Q = \mathbf{0} \tag{14}$$
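Equations (13)-(14) map directly onto standard numerical tools; a minimal sketch on a toy two-state system (the matrices are illustrative, not the machine model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative 2-state plant
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                  # state weight (here z = x)
R = np.array([[1.0]])          # control weight

S = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation (14)
L = np.linalg.solve(R, B.T @ S)        # state feedback gain, u = -L x, eq. (13)

cl_eigs = np.linalg.eigvals(A - B @ L)             # closed-loop poles
riccati_residual = float(np.abs(
    A.T @ S + S @ A - S @ B @ np.linalg.solve(R, B.T @ S) + Q).max())
```

With the disturbance model embedded in *A* as in (8), the same call returns the gain that cancels the tonal component.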

It is also possible to simply choose *z*(*t*) = *x*(*t*) in (9). To force the states to approach zero it is in this case necessary to introduce augmented states

$$\mathbf{x}\_{\text{allg}}(t) = \int\_{0}^{t} \left( y\_{ar}(\tau) + d(\tau) \right) d\tau = \left[ \mathbf{C}\_{ar} \, \mathbf{C}\_{d} \right] \int\_{0}^{t} \left( \left[ \mathbf{x}\_{ar}(\tau)^{T} \, \mathbf{x}\_{d}(\tau)^{T} \right]^{T} \right) d\tau \tag{15}$$

The system to which the LQ design is used is then

$$\begin{aligned} \dot{\mathbf{x}}(t) &= \begin{bmatrix} \dot{\mathbf{x}}\_p(t) \\ \dot{\mathbf{x}}\_{aug}(t) \end{bmatrix} = \underbrace{\begin{bmatrix} A\_p & 0 \\ \begin{bmatrix} \mathbf{C}\_{ar} & \mathbf{C}\_d \end{bmatrix} & 0 \end{bmatrix}}\_{A\_{aug}} \mathbf{x}(t) + \underbrace{\begin{bmatrix} B\_p \\ 0 \end{bmatrix}}\_{B\_{aug}} \boldsymbol{u}(t) \\ y\_r(t) &= \underbrace{\begin{bmatrix} \mathbf{C}\_p & 0 \end{bmatrix}}\_{C\_{aug}} \mathbf{x}(t) \end{aligned} \tag{16}$$

In this design the weights in *Q* corresponding to the augmented states should be set to considerably high values; e.g. values like 10<sup>5</sup> have been used.
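A sketch of augmenting a toy plant with an output integrator as in (15)-(16) and weighting the augmented state heavily (all numbers are illustrative, not the identified machine model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative plant with one output
Ap = np.array([[0.0, 1.0], [-4.0, -0.4]])
Bp = np.array([[0.0], [1.0]])
Cp = np.array([[1.0, 0.0]])

# augmented system (16): integrate the plant output
A_aug = np.block([[Ap, np.zeros((2, 1))],
                  [Cp, np.zeros((1, 1))]])
B_aug = np.vstack([Bp, np.zeros((1, 1))])

# heavy weight (~1e5) on the integrator state, as suggested above
Q = np.diag([1.0, 1.0, 1.0e5])
R = np.array([[1.0]])
S = solve_continuous_are(A_aug, B_aug, Q, R)
L = np.linalg.solve(R, B_aug.T @ S)
cl_eigs = np.linalg.eigvals(A_aug - B_aug @ L)
```

The large integrator weight pushes the steady-state output toward zero, which is the mechanism that removes the modelled disturbance component.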

Usually a state observer must be used to implement the control law. For example, in the configuration shown in Fig.6a (see also equation (5)) that has the form

$$\begin{cases} \dot{\hat{\mathbf{x}}}(t) = A\_p \hat{\mathbf{x}}(t) + B\_p u(t) + K \left( y\_r(t) - \hat{y}\_r(t) \right) \\ y\_{obs} = \hat{\mathbf{x}}(t) \end{cases} \tag{17}$$

The gain in the estimator can be chosen based on the duality between the LQ optimal controller and the estimator. The state error dynamics *x*˜(*t*) = *x*(*t*) − *x*ˆ(*t*) follows the dynamics

$$
\dot{\tilde{x}}(t) = \left(A\_p - K\mathbb{C}\_p\right)\tilde{x}(t) \tag{18}
$$

which is similar to


$$
\dot{\mathfrak{x}}\_N(t) = A\_N \mathfrak{x}\_N(t) + B\_N \mathfrak{u}\_N(t) \tag{19}
$$

with $A\_N = A\_p^T$, $B\_N = C\_p^T$, $K\_N = K^T$ and $u\_N(t) = -K\_N \mathbf{x}\_N(t)$. The weighting matrix $K\_N$ can be determined by minimizing

$$J\_{obs} = \int\_0^\infty \left( \mathbf{x}\_N(t)^T \mathbf{Q}\_{obs} \mathbf{x}\_N(t) + \boldsymbol{\mu}\_N(t)^T \mathbf{R}\_{obs} \boldsymbol{\mu}\_N(t) \right) dt \tag{20}$$

where the matrices *Qobs* and *Robs* contain the weights for the relative state estimation error and its convergence rate.
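The duality-based observer design of (18)-(20) can be sketched by solving the dual LQ problem; the plant matrices and weights below are again toy values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# illustrative observable plant
Ap = np.array([[0.0, 1.0], [-9.0, -0.2]])
Cp = np.array([[1.0, 0.0]])

# dual problem (19): A_N = Ap^T, B_N = Cp^T
Q_obs = np.eye(2)               # weight on the state-estimation error
R_obs = np.array([[0.1]])       # weight controlling the convergence rate
S = solve_continuous_are(Ap.T, Cp.T, Q_obs, R_obs)

# K_N = R_obs^{-1} B_N^T S, observer gain K = K_N^T
K = np.linalg.solve(R_obs, Cp @ S).T

# error dynamics (18) must be stable
err_eigs = np.linalg.eigvals(Ap - K @ Cp)
```

Shrinking `R_obs` makes the estimation error decay faster at the price of a larger, more noise-sensitive gain *K*.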

The optimal control law (13) can now be combined with the observer model (17). Including the augmented states (15) the control law can be stated as

$$\begin{aligned} \dot{\mathbf{x}}\_{\text{LQ}}(t) &= \begin{bmatrix} \dot{\hat{\mathbf{x}}}(t) \\ \dot{\mathbf{x}}\_{\text{aug}}(t) \end{bmatrix} = \underbrace{\left( \begin{bmatrix} A\_p - K\mathbf{C}\_p & 0 \\ \mathbf{C}\_p & 0 \end{bmatrix} - \begin{bmatrix} B\_p \\ 0 \end{bmatrix} L \right)}\_{A\_{\text{LQ}}} \mathbf{x}\_{\text{LQ}}(t) + \underbrace{\begin{bmatrix} K \\ 0 \end{bmatrix}}\_{B\_{\text{LQ}}} y\_r(t) \\ u\_{\text{LQ}}(t) &= \underbrace{-L}\_{C\_{\text{LQ}}} \mathbf{x}\_{\text{LQ}}(t) \end{aligned} \tag{21}$$

where *yr* is the rotor displacement, *u*LQ is the optimal control signal, and *A*LQ, *B*LQ and *C*LQ are the parameters of the controller.

### **4.2 Convergent controller**

The convergent control (CC) algorithm, also known as instantaneous harmonic control (IHC), is a feedforward control method to compensate a disturbance at a certain frequency (Daley et al., 2008). It is somewhat similar to the well-known least mean squares (LMS) algorithm (Fuller et al., 1995; Knospe et al., 1994), which has traditionally been used in many frequency compensating methods in signal processing. A basic scheme is presented in Fig. 9.

Fig. 9. Feedforward compensation of a disturbance signal

The term *r* is a periodic signal of the same frequency as *d*, but possibly with a different amplitude and phase. The idea is to change the filter parameters *hi* such that the signal *u* compensates the disturbance *d*. The standard LMS algorithm that minimizes the squared error can be derived as

$$h\_i(k+1) = h\_i(k) - \alpha\, r(k-i)\, e(k) \tag{22}$$

where *α* is a tuning parameter (Fuller et al., 1995; Tammi, 2007). In the CC algorithm the process dynamics is presented by means of the Fourier coefficients as

$$E\_F(k) = G\_F \mathcal{U}\_F(k) + D\_F(k) \tag{23}$$


Robust Attenuation of Frequency Varying Disturbances 303


where *GF* is the complex frequency response of the system and the symbols *EF*, *UF* and *DF* are the Fourier coefficients of the error, control and disturbance signals. For example

$$E\_F^{\omega\_n} = \frac{1}{N} \sum\_{k=0}^{N-1} e(k)e^{-2i\pi kn/N} \approx e(k)e^{-i\omega\_n t}$$

where *N* is the number of samples in one signal period, and *n* is the number of the spectral line of the corresponding frequency. If the sampling time is *Ts*, then *t* = *kTs*. The criterion to be minimized is $J = E\_F^* E\_F$, which gives

$$U\_F = -\left(G\_F^\* G\_F\right)^{-1} G\_F^\* D\_F = -A\_F D\_F \tag{24}$$

where ∗ denotes the complex transpose. The pseudoinverse is used if necessary when calculating the inverse matrix. In terms of Fourier coefficients the Convergent Control Algorithm can be written as

$$U\_F(k+1) = \beta U\_F(k) - \alpha A\_F E\_F(k) \tag{25}$$

where *α* and *β* are tuning parameters. It can be shown (Daley et al., 2008; Tammi, 2007) that the control algorithm can be presented in the form of a linear time-invariant pulse transfer function

$$G\_{\rm cc}(z) = \frac{U(z)}{Y(z)} = \beta\,\frac{\mathrm{Re}\left(G\_F\left(e^{i\omega\_k}\right)^{-1}\right)z^{2} - \alpha\,\mathrm{Re}\left(G\_F\left(e^{i\omega\_k}\right)^{-1}e^{-i\omega\_k T\_s}\right)z}{z^{2} - 2\alpha\cos\left(\omega\_k T\_s\right)z + \alpha^{2}}\tag{26}$$

where *Y*(*z*) is the sampled plant output and *U*(*z*) is the sampled control signal.
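The convergence mechanism of (23)–(25) can be seen in a small scalar simulation. The sketch below assumes a hypothetical complex frequency response *GF* and disturbance coefficient *DF* (made-up values, not identified from any machine); with *β* = 1 the iteration drives the error coefficient to zero and the control converges to the optimum (24).

```python
# Scalar convergent-control iteration in the Fourier-coefficient domain.
# G_F, D_F and the tuning parameters are illustrative, not identified values.
G_F = 0.8 - 0.6j           # plant response at the disturbance frequency
D_F = 1.0 + 0.5j           # disturbance Fourier coefficient
A_F = 1.0 / G_F            # (G_F* G_F)^-1 G_F* reduces to 1/G_F in the scalar case
alpha, beta = 0.1, 1.0

U_F = 0.0 + 0.0j
for _ in range(200):
    E_F = G_F * U_F + D_F                 # eq. (23)
    U_F = beta * U_F - alpha * A_F * E_F  # eq. (25)

print(abs(G_F * U_F + D_F))      # residual error coefficient, essentially zero
print(abs(U_F - (-D_F / G_F)))   # control approaches -G_F^{-1} D_F, as in eq. (24)
```

With *β* = 1 the error obeys *EF*(*k*+1) = (1 − *α*)*EF*(*k*), so the residual decays geometrically at rate 1 − *α*; choosing *β* < 1 adds forgetting at the cost of a small steady-state error.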

The convergent controller can operate like an LMS controller in series with the plant, by using a reference signal *r* proportional to the disturbance signal to be compensated. Here the 'plant' can also mean a process already controlled by a wide-frequency-band controller, such as the LQ controller.

Fig. 10. Convergent controller in series with a controlled plant



Alternatively, the CC controller can be connected in parallel with the LQ-controller, then having the plant output as the input signal. Several CC controllers (tuned for different frequencies) can also be connected in parallel in this configuration, see Fig.11.

Fig. 11. Convergent controller connected in parallel with the LQ controller

### **4.3 Simulations and test runs**

The controller performance was tested in two phases. Firstly, extensive simulations by using a finite element (FE) model of the electrical machine and actuator were carried out. Secondly, the control algorithms were implemented in the test machine discussed in Section 2.1 by using a dSpace system as the program-machine interface. The disturbance frequency was 49.5 Hz, and the controller was discretized with the sampling frequency 1 kHz. Time domain simulations are shown in Figs. 12 and 13. The damping is observed to be about 97 per cent, which is a good result.

Fig. 12. Simulation result in time domain (rotor vibration in x-direction)

The critical frequency of the 30 kW test motor was 37.7 Hz. However, due to vibrations the rotor could not be driven at this speed in open loop, and both the identification and initial control tests were performed at 32 Hz rotation frequency. In the control tests the LQ controller was used alone first, after which the CC controller was connected, in order to verify the performance of these two control configurations. Both controllers were discretized at 5 kHz sampling rate.


Fig. 15. Test machine runs at 32 Hz speed: xy-plot

Fig. 16. Test machine runs at 37.5 Hz critical speed: Control voltage and rotor displacement in x-direction

Fig. 17. Test machine runs at 37.5 Hz critical speed: xy-plot

Fig. 13. Simulated rotor vibration in xy-plot

The test results are shown in Figs. 14-17. In Fig.14 the control signal and rotor vibration amplitude are shown, when the machine was driven at 32.5 Hz. The LQ controller was used first alone, and then the CC controller was connected. It is seen that the CC controller improves the performance somewhat, and generally the vibration damping is good and well comparable to the results obtained by simulations. The same can be noticed from the xy-plot shown in Fig.15.

Fig. 14. Test machine runs at 32 Hz speed: Control voltage and rotor displacement in x-direction

Next, the operation speed was increased to the critical frequency 37.5 Hz. Controller(s) tuned for this frequency could be driven without any problems at this speed. Similar results as above are shown in Figs. 16 and 17. It is remarkable that now switching the CC controller on improved the results more than before. So far there is no clear explanation for this behaviour.

#### **4.4 Nonlinear controller**

If the frequency of the disturbance signal is varying, the performance of a controller with constant coefficients deteriorates considerably. An immediate solution to the problem involves the use of continuous gain scheduling, in which the controller coefficients are modified according to the current disturbance frequency. To this end the disturbance



frequency (usually the rotating frequency) has to be measured or tracked (Orivuori & Zenger, 2010; Orivuori et al., 2010). The state estimator can be written in the form

$$\dot{\hat{x}}(t,\omega\_{hz}) = \left(A\left(\omega\_{hz}\right) - K\left(\omega\_{hz}\right)C\right)\hat{x}(t,\omega\_{hz}) + Bu(t) + K(\omega\_{hz})y(t) \tag{27}$$

where it has been assumed that the model topology is as in Fig.6b and the disturbance model is included in the system matrix *A*. The matrix *K* changes as a function of frequency as

$$K(\omega\_{hz}) = \begin{bmatrix} f\_1(\omega\_{hz}) & f\_2(\omega\_{hz}) & \cdots & f\_n(\omega\_{hz}) \end{bmatrix}^T \tag{28}$$

where *fi* are suitable functions of frequency. Solving the linear optimal control problem in a frequency grid gives the values of *K*, which can be presented as in Fig. 18.

Fig. 18. Projections of the hypersurface to the elements of *K*

The functions *fi* can be chosen to be polynomials, so that the feedback gain has the form

$$K(\omega\_{hz}) = \begin{bmatrix} a\_{11} & a\_{12} & \cdots & a\_{1m} \\ a\_{21} & a\_{22} & \cdots & a\_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a\_{n1} & a\_{n2} & \cdots & a\_{nm} \end{bmatrix} \mu\_{\omega}(\omega\_{hz}) \tag{29}$$

where *aij* are the polynomial coefficients and


$$
\mu\_{\omega}(\omega\_{hz}) = \left[\,\omega\_{hz}^{m-1} \;\cdots\; \omega\_{hz}^{2} \;\; \omega\_{hz} \;\; 1\,\right]^{T} \tag{30}
$$

The optimal control gain *L*(*ωhz*) can be computed similarly.
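The gain-scheduling idea of (28)–(30) amounts to fitting one polynomial per gain element over a frequency grid and evaluating the fit at the measured frequency. The sketch below uses hypothetical gain data (a smooth made-up dependence standing in for the grid of LQ solutions).

```python
import numpy as np

# Hypothetical gains K(w) computed on a frequency grid (stand-ins for the
# gains obtained by solving the LQ problem at each grid frequency).
w_grid = np.linspace(5.0, 50.0, 10)                            # Hz
K_grid = np.stack([0.02 * w_grid**2 - 0.5 * w_grid + 3.0,
                   -0.01 * w_grid**2 + 1.2 * w_grid], axis=1)  # n = 2 gain elements

m = 3  # polynomial length, as in eq. (30)
# Rows hold the coefficients a_ij of eq. (29), highest power first,
# matching mu_w = [w^{m-1} ... w 1]^T.
A = np.stack([np.polyfit(w_grid, K_grid[:, i], m - 1) for i in range(2)])

def K_of(w_hz):
    """Evaluate eq. (29): K(w) = A @ mu_w(w)."""
    mu = np.array([w_hz ** p for p in range(m - 1, -1, -1)])
    return A @ mu

# At a grid point the scheduled gain reproduces the stored gain
print(np.allclose(K_of(w_grid[0]), K_grid[0], atol=1e-8))
```

Because the fabricated data is exactly quadratic, the degree-2 fit is exact; with real LQ gains the polynomial order *m* − 1 is a design trade-off between fit accuracy over the frequency range and smoothness of the scheduled gain.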

The controller was tested with the industrial rolling process presented in Section 2.2. A sinusoidal sweep disturbance signal was used, which corresponds to a varying rotational speed of the reel with constant width. The rotation frequency ranged over the range 5 Hz to 50 Hz. Before the practical tests the theoretical performance of the controller was analyzed. The result is shown in Fig. 19, which shows a neat damping of the vibration near the critical frequency 39 Hz.

Fig. 19. Theoretical damping achieved with the nonlinear controller

Simulation and practical test results are shown in Figs. 20 and 21, respectively. The controller turns out to be effective over the whole frequency range, the damping ratio being 99 per cent in simulation and about 90 per cent in practical tests. The good performance of the nonlinear controller is further verified by the output spectrum of the process obtained with and without control, shown in Fig. 22.

Fig. 20. Simulated performance of the nonlinear controller

Fig. 21. Real performance of the nonlinear controller

Fig. 22. Frequency spectra of the process output with and without control

### **5. Conclusion**

Vibration control in rotating machinery is an important topic of research from both theoretical and practical viewpoints. Generic methods which are suitable for a large class of such processes are needed in order to make the analysis and controller design transparent and straightforward. LQ control theory offers a good and easy-to-learn model-based control technique, which is effective and easily implemented for industrial processes. The control algorithm can be extended to the nonlinear case covering systems with varying disturbance frequencies. The performance of such an algorithm has been studied in the paper, and the performance has been verified by analysis, simulation and practical tests of two different processes. The vibration control results have been excellent. Future research will investigate how the developed methods can be modified for use in semi-active vibration control. That is important, because active control has its risks, and not all industrial users are willing to use active control methods.

### **6. Acknowledgement**

The research has been supported by TEKES (The Finnish Funding Agency for Technology and Innovation) and the Academy of Finland.

### **7. References**

Allen, M. S. (2007). Floquet Experimental Modal Analysis for System Identification of Linear Time-Periodic Systems, In: *Proceedings of the ASME 2007 IDETC/CIE*, September 2007, Las Vegas, Nevada, USA.

Anderson, B. D. O. & Moore, J. B. (1989). *Optimal Control: Linear Quadratic Methods*, Prentice-Hall, Englewood Cliffs, NJ.

Chiasson, J. N. (2005). *Modeling and High Performance Control of Electric Machines*, John Wiley/IEEE Press, Hoboken, NJ.

Daley, S.; Zazas, I.; Hätönen, J. (2008). Harmonic Control of a 'Smart Spring' Machinery Vibration Isolation System, *Proceedings of the Institution of Mechanical Engineers, Part M: Journal of Engineering for the Maritime Environment*, Vol. 22, No. 2, pp. 109–119.

Deskmuhk, V. S. & Sinha, S. C. (2004). Control of Dynamical Systems with Time-Periodic Coefficients via the Lyapunov-Floquet Transformation and Backstepping Technique, *Journal of Vibration and Control*, Vol. 10, 2004, pp. 1517–1533.

Fuller, C. R.; Elliott, S. J.; Nelson, P. A. (1995). *Active Control of Vibration*, Academic Press, London.

Gupta, N. K. (1980). Frequency-Shaped Cost Functionals: Extension of Linear-Quadratic-Gaussian Design Methods, *Journal of Guidance and Control*, Vol. 3, No. 6, 1980, pp. 529–535.

Holopainen, T. P.; Tenhunen, A.; Lantto, E.; Arkkio, A. (2004). Numerical Identification of Electromechanic Force Parameters for Linearized Rotordynamic Model of Cage Induction Motors, *Journal of Vibration and Acoustics*, Vol. 126, No. 3, 2004, pp. 384–390.

Inman, D. J. (2006). *Vibration With Control*, Wiley, Hoboken, NJ.

Knospe, C. R.; Hope, R. W.; Fedigan, S. J.; Williams, R. D. (1994). New Results in the Control of Rotor Synchronous Vibration, In: *Proceedings of the Fourth International Symposium on Magnetic Bearings*, Hochschulverlag AG, Zurich, Switzerland.

Laiho, A.; Holopainen, T. P.; Klinge, P.; Arkkio, A. (2007). Distributed Model for Electromechanical Interaction in Rotordynamics of Cage Rotor Electrical Machines, *Journal of Sound and Vibration*, Vol. 302, Issues 4-5, 2007, pp. 683–698.

Laiho, A.; Tammi, K.; Zenger, K.; Arkkio, A. (2008). A Model-Based Flexural Rotor Vibration Control in Cage Induction Electrical Machines by a Built-In Force Actuator, *Electrical Engineering (Archiv für Elektrotechnik)*, Vol. 90, No. 6, 2008, pp. 407–423.

Ljung, L. (1999). *System Identification: Theory for the User*, 2nd Ed., Prentice Hall, Upper Saddle River, NJ.

Montagnier, P.; Spiteri, R. J.; Angeles, J. (2004). The Control of Linear Time-Periodic Systems Using Floquet-Lyapunov Theory, *International Journal of Control*, Vol. 77, No. 20, March 2004, pp. 472–490.

Orivuori, J. & Zenger, K. (2010). Active Control of Vibrations in a Rolling Process by Nonlinear Optimal Controller, In: *Proceedings of the 10th International Conference on Motion and Vibration Control (Movic 2010)*, August 2010, Tokyo, Japan.

Orivuori, J.; Zazas, I.; Daley, S. (2010). Active Control of a Frequency Varying Tonal Disturbance by Nonlinear Optimal Controller with Frequency Tracking, In: *Proceedings of the IFAC Workshop on Periodic Control Systems (Psyco 2010)*, August 2010, Antalya, Turkey.

Rao, J. D. (2000). *Vibratory Condition Monitoring of Machines*, Narosa Publishing House, New Delhi, India.

Repo, A-K. & Arkkio, A. (2006). Numerical Impulse Response Test to Estimate Circuit-Model Parameters for Induction Machines, *IEE Proceedings, Electric Power Applications*, Vol. 153, No. 6, 2006, pp. 883–890.

Sievers, L. A.; Blackwood, G. H.; Mercadal, M.; von Flotow, A. H. (1991). MIMO Narrowband Disturbance Rejection Using Frequency Shaping of Cost Functionals, In: *Proceedings of the American Control Conference*, Boston, MA, USA.

Sinha, S. C. (2005). Analysis and Control of Nonlinear Dynamical Systems with Periodic Coefficients, In: *Proceedings of the Workshop on Nonlinear Phenomena, Modeling and their Applications*, Eds. J. M. Balthazar, R. M. L. R. F. Brasil, E. E. N. Macau, B. R. Pontes and L. C. S. Goes, SP-Brazil, 2-4 May, 2005.

Tammi, K. (2007). *Active Control of Rotor Vibrations – Identification, Feedback, Feedforward and Repetitive Control Methods*, Doctoral thesis, VTT Publications 634, Espoo: Otamedia.




## **Synthesis of Variable Gain Robust Controllers for a Class of Uncertain Dynamical Systems**

Hidetoshi Oya1 and Kojiro Hagino<sup>2</sup>

<sup>1</sup>*The University of Tokushima* <sup>2</sup>*The University of Electro-Communications Japan*

### **1. Introduction**


310 Recent Advances in Robust Control – Novel Approaches and Design Methods


Robustness of control systems to uncertainties has always been a central issue in feedback control, and therefore a large number of robust controller design methods have been presented for dynamical systems with unknown parameters (e.g. (3; 37)). Also, many robust state feedback controllers achieving robust performance measures such as a quadratic cost function (28; 31) or <sup>H</sup>∞-disturbance attenuation (6) have been suggested. It is well known that most of these problems reduce to standard convex optimization problems involving linear matrix inequalities (LMIs), which can be solved numerically very efficiently. Furthermore, in the case that the full state of the system cannot be measured, control strategies via observer-based robust controllers (e.g. (12; 19; 27)) or robust output feedback ones (e.g. (9; 11)) have also been well studied. However, most of the controllers derived in the existing results have a fixed structure, and these methods result in a worst-case design. Therefore these controllers become cautious when the perturbation region of the uncertainties has been estimated to be larger than the proper region, because the robust controller designed by the existing results has only a fixed gain.

From these viewpoints, it is important to derive robust controllers with adjustable parameters which are tuned by using available information, and some researchers have proposed such controllers (18; 33). In the work of Ushida et al. (33), a quadratically stabilizing state feedback controller based on the parametrization of <sup>H</sup>∞ controllers is derived. Maki and Hagino (18) have introduced a robust controller with an adaptation mechanism for linear systems with time-varying parameter uncertainties; the controller gain in their work is tuned on-line based on the information about parameter uncertainties. On the other hand, we have proposed a robust controller with an adaptive compensation input for a class of uncertain linear systems (19; 21; 22). The adaptive compensation input is tuned by adjustable parameters based on the error between the plant trajectory and the desired one. These adaptive robust controllers achieve good control performance, and the approaches are very simple due to the application of the linear quadratic control problem. Besides, these design methods reduce the cautiousness of a fixed-gain robust controller, because utilizing the error signal between the real response of the uncertain system and the desired one is equivalent to taking the effect of the uncertainties into account as on-line information.

In this chapter, for a class of uncertain linear systems, variable gain robust controllers which achieve not only asymptotical stability but also improved transient behavior of the resulting closed-loop system are presented (23; 24; 26). The variable gain robust controllers, which consist of a fixed gain part and a variable gain part, are tuned on-line based on the information about parameter uncertainties. Firstly, a design method of variable gain state feedback controllers for linear systems with matched uncertainties is shown, and next the variable gain state feedback controller is extended to output feedback controllers. Finally, on the basis of the concept of piecewise Lyapunov functions (PLFs), an LMI-based variable gain robust controller synthesis for linear systems with both matched and unmatched uncertainties is presented.

The contents of this chapter are as follows, where the item numbers in the list accord with the section numbers.


Basic symbols are listed below.

**Z**<sup>+</sup> : the set of positive integers
**R** : the set of real numbers
**R***n* : the set of *n*-dimensional vectors
**R***n*×*m* : the set of *n* × *m* matrices
*In* : the *n*-dimensional identity matrix

Other than the above, we use the following notation and terms. For a matrix A, the transpose and the inverse are denoted by A*T* and A−1, respectively, and rank{A} represents the rank of A. For real symmetric matrices A and B, A > B (resp. A ≥ B) means that A − B is a positive (resp. nonnegative) definite matrix. For a vector *α* ∈ **R***n*, ||*α*|| denotes the standard Euclidean norm, and for a matrix A, ||A|| represents its induced norm. Besides, for a vector *α* ∈ **R***n*, ||*α*||1 denotes the 1-norm, defined as $\|\alpha\|\_1 \triangleq \sum\_{j=1}^{n} |\alpha\_j|$. The intersection of two sets *Γ* and *Υ* is denoted by *Γ* ∩ *Υ*, and the symbols "≜" and "★" mean equality by definition and symmetric blocks in matrix inequalities, respectively. Besides, for a symmetric matrix P, *λ*max{P} (resp. *λ*min{P}) represents the maximal (resp. minimal) eigenvalue.

Furthermore, the following useful lemmas are used in this chapter.

**Lemma 1.** *For arbitrary vectors λ and ξ and the matrices* G *and* H *which have appropriate dimensions, the following relation holds.*

$$\begin{aligned} 2\lambda^T \mathcal{G} \Delta(t) \mathcal{H} \xi &\le 2 \left\| \mathcal{G}^T \lambda \right\| \left\| \Delta(t) \mathcal{H} \xi \right\| \\ &\le 2 \left\| \mathcal{G}^T \lambda \right\| \left\| \mathcal{H} \xi \right\| \end{aligned}$$

*where* Δ(*t*) ∈ **R**<sup>*p*×*q*</sup> *is a time-varying unknown matrix satisfying* ||Δ(*t*)|| ≤ 1*.*

*Proof.* The above relation can be obtained directly from the Cauchy–Schwarz inequality (see (8)).
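Lemma 1 can be spot-checked numerically. The following sketch (the dimensions, the random seed, and the number of trials are arbitrary choices, not from the text) draws random λ, ξ, G, H and a norm-bounded Δ, and verifies the chain of inequalities:

```python
import numpy as np

rng = np.random.default_rng(0)

def lemma1_trial(n=4, p=3, q=5, r=2):
    """One random check of Lemma 1; all dimensions are arbitrary."""
    lam = rng.standard_normal(n)         # vector lambda
    xi = rng.standard_normal(r)          # vector xi
    G = rng.standard_normal((n, p))
    H = rng.standard_normal((q, r))
    D = rng.standard_normal((p, q))
    D /= max(np.linalg.norm(D, 2), 1.0)  # enforce ||Delta(t)|| <= 1
    lhs = 2.0 * lam @ G @ D @ H @ xi
    mid = 2.0 * np.linalg.norm(G.T @ lam) * np.linalg.norm(D @ H @ xi)
    rhs = 2.0 * np.linalg.norm(G.T @ lam) * np.linalg.norm(H @ xi)
    tol = 1e-9
    return lhs <= mid + tol and mid <= rhs + tol

all_trials_ok = all(lemma1_trial() for _ in range(200))
```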

Synthesis of Variable Gain Robust Controllers for a Class of Uncertain Dynamical Systems


**Lemma 2.** *(Schur complement) For a given constant real symmetric matrix* Ξ*, the following arguments are equivalent.*

$$\begin{aligned} (i). \ \Xi &= \begin{pmatrix} \Xi_{11} & \Xi_{12} \\ \Xi_{12}^T & \Xi_{22} \end{pmatrix} > 0 \\ (ii). \ \Xi_{11} &> 0 \ \text{and} \ \Xi_{22} - \Xi_{12}^T \Xi_{11}^{-1} \Xi_{12} > 0 \\ (iii). \ \Xi_{22} &> 0 \ \text{and} \ \Xi_{11} - \Xi_{12} \Xi_{22}^{-1} \Xi_{12}^T > 0 \end{aligned}$$

*Proof.* See Boyd et al.(4)
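The equivalence in Lemma 2 is easy to confirm numerically; a minimal sketch follows, in which the block sizes and the identity shift used to make Ξ positive definite are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def is_pd(M):
    # Positive definiteness of a symmetric matrix via its eigenvalues.
    return bool(np.all(np.linalg.eigvalsh((M + M.T) / 2.0) > 0.0))

# Random symmetric positive definite matrix Xi, partitioned into blocks.
n1, n2 = 3, 2
M = rng.standard_normal((n1 + n2, n1 + n2))
Xi = M @ M.T + np.eye(n1 + n2)            # symmetric and positive definite
Xi11, Xi12, Xi22 = Xi[:n1, :n1], Xi[:n1, n1:], Xi[n1:, n1:]

cond_i = is_pd(Xi)
cond_ii = is_pd(Xi11) and is_pd(Xi22 - Xi12.T @ np.linalg.solve(Xi11, Xi12))
cond_iii = is_pd(Xi22) and is_pd(Xi11 - Xi12 @ np.linalg.solve(Xi22, Xi12.T))
```

All three conditions evaluate identically, as the lemma asserts.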



**Lemma 3.** *(Barbalat's lemma) Let φ* : **R** → **R** *be a uniformly continuous function on* [0, ∞)*. Suppose that* $\lim_{t \to \infty} \int_0^t \varphi(\tau)\,d\tau$ *exists and is finite. Then*

$$\phi(t) \to 0 \text{ as } t \to \infty$$

*Proof.* See Khalil(13).

**Lemma 4.** *(*S*-procedure) Let* F(*x*) *and* G(*x*) *be two arbitrary quadratic forms over* **R**<sup>*n*</sup>*. Then* F(*x*) < 0 *for* ∀*x* ∈ **R**<sup>*n*</sup> *satisfying* G(*x*) ≤ 0 *if and only if there exists a nonnegative scalar τ such that*

$$\mathcal{F}(x) - \tau\mathcal{G}(x) \le 0 \ \text{ for } \forall x \in \mathbf{R}^n$$

*Proof.* See Boyd et al.(4).
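As an illustration of Lemma 4, take the hypothetical quadratic forms F(x) = x₁² − 3x₂² and G(x) = x₁² − x₂² (chosen here for the example, not from the text): F is negative on the cone where G(x) ≤ 0, so the lemma promises a multiplier τ ≥ 0 making F(x) − τG(x) ≤ 0 globally. A grid search recovers the feasible interval of multipliers:

```python
import numpy as np

# F(x) = x^T A_F x with A_F = diag(1, -3); G(x) = x^T A_G x with A_G = diag(1, -1).
# G(x) <= 0 means |x1| <= |x2|, and there F(x) <= x2^2 - 3*x2^2 < 0 for x != 0.
A_F = np.diag([1.0, -3.0])
A_G = np.diag([1.0, -1.0])

# F(x) - tau*G(x) <= 0 for all x  <=>  A_F - tau*A_G is negative semidefinite.
taus = np.linspace(0.0, 5.0, 501)
feasible = [t for t in taus
            if np.max(np.linalg.eigvalsh(A_F - t * A_G)) <= 1e-12]
# For this example the feasible multipliers fill (approximately) [1, 3].
```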

### **2. Variable gain robust state feedback controllers**

In this section, we propose a variable gain robust state feedback controller for a class of uncertain linear systems. The uncertainties under consideration are supposed to satisfy the matching condition (3), and the variable gain robust state feedback controller consists of a state feedback part with a fixed gain matrix and a compensation input with a variable gain matrix. In this section, we show a design method for the variable gain robust state feedback controller.

### **2.1 Problem formulation**

Consider the uncertain linear system described by the following state equation.

$$\frac{d}{dt}\mathbf{x}(t) = \left(A + B\Delta(t)E\right)\mathbf{x}(t) + Bu(t) \tag{2.1}$$

where *x*(*t*) ∈ **R**<sup>*n*</sup> and *u*(*t*) ∈ **R**<sup>*m*</sup> are the vectors of the state (assumed to be available for feedback) and the control input, respectively. In (2.1), the matrices *A* and *B* denote the nominal values of the system, the pair (*A*, *B*) is stabilizable, and the matrix Δ(*t*) ∈ **R**<sup>*m*×*q*</sup> denotes unknown time-varying parameters which satisfy ||Δ(*t*)|| ≤ 1. Namely, the uncertain parameter satisfies the matching condition (see e.g. (3) and references therein). The nominal system, obtained by ignoring the unknown parameter Δ(*t*) in (2.1), is given by

$$\frac{d}{dt}\overline{\mathfrak{X}}(t) = A\overline{\mathfrak{X}}(t) + B\overline{\mathfrak{u}}(t) \tag{2.2}$$

where $\overline{x}(t) \in \mathbf{R}^n$ and $\overline{u}(t) \in \mathbf{R}^m$ are the vectors of the state and the control input for the nominal system, respectively.



First of all, in order to systematically generate the desirable transient behavior in the time response of the uncertain system (2.1), we adopt the standard linear quadratic control problem (LQ control theory) for the nominal system (2.2). Note that any other design method which generates the desired response for the controlled system can also be used (e.g. pole assignment). It is well known that the optimal control input for the nominal system (2.2) can be obtained as $\overline{u}(t) = K\overline{x}(t)$ and the closed-loop system

$$\begin{split} \frac{d}{dt}\overline{\mathfrak{x}}(t) &= (A + BK)\,\overline{\mathfrak{x}}(t) \\ &= A\_K \overline{\mathfrak{x}}(t) \end{split} \tag{2.3}$$

is asymptotically stable\*.
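The footnoted Riccati-based design of *K* can be reproduced with SciPy. In this sketch, the double-integrator matrices and the weights Q = I, R = I are illustrative choices only:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative nominal system: a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)   # Q = C C^T with (A, C) detectable
R = np.eye(1)   # positive definite input weighting

# Unique stabilizing solution of A^T X + X A - X B R^{-1} B^T X + Q = 0.
X = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ X)   # K = -R^{-1} B^T X
A_K = A + B @ K                    # closed-loop matrix of (2.3)

hurwitz = bool(np.all(np.linalg.eigvals(A_K).real < 0.0))
```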

Now, in order to obtain on-line information on the parameter uncertainty, we introduce an error signal $e(t) \triangleq x(t) - \overline{x}(t)$, and for the uncertain system (2.1), we consider the following control input.

$$
u(t) \stackrel{\triangle}{=} Kx(t) + \psi(x, e, \mathcal{L}, t) \tag{2.4}
$$

where *<sup>ψ</sup>*(*x*,*e*,L, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***<sup>m</sup>* is a compensation input(21) to correct the effect of uncertainties, and it is supposed to have the following structure.

$$
\psi\left(\mathbf{x}, e, \mathcal{L}, t\right) \stackrel{\triangle}{=} \mathcal{F}e(t) + \mathcal{L}(\mathbf{x}, e, t)e(t) \tag{2.5}
$$

In (2.4), F ∈ **R**<sup>*m*×*n*</sup> and L(*x*, *e*, *t*) ∈ **R**<sup>*m*×*n*</sup> are a fixed gain matrix and an adjustable time-varying matrix, respectively. Thus from (2.1), (2.3) – (2.5), the error system can be written as

$$\frac{d}{dt}e(t) = \left(A + B\Delta(t)E\right)x(t) + B\left(Kx(t) + \mathcal{F}e(t) + \mathcal{L}(x, e, t)e(t)\right) - A_K\overline{x}(t)$$

$$= A_{\mathcal{F}}e(t) + B\Delta(t)Ex(t) + B\mathcal{L}(x, e, t)e(t) \tag{2.6}$$

In (2.6), *<sup>A</sup>*<sup>F</sup> is the matrix expressed as

$$A\_{\mathcal{F}} = A\_K + B\mathcal{F} \tag{2.7}$$

Note that, from the definition of the error signal, stability of the error system (2.6) ensures stability of the uncertain system (2.1), because the nominal system is asymptotically stable.

From the above, our control objective in this section is to derive the fixed gain matrix F ∈ **<sup>R</sup>***m*×*<sup>n</sup>* and the variable gain matrix <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>n</sup>* which stabilize the uncertain error system (2.6).

<sup>\*</sup> Using the unique solution of the algebraic Riccati equation $A^T X + XA - XBR^{-1}B^TX + Q = 0$, the gain matrix $K \in \mathbf{R}^{m \times n}$ is determined as $K = -R^{-1}B^TX$, where $Q$ and $R$ are nonnegative and positive definite matrices, respectively. Besides, $Q$ is selected such that the pair $(A, C)$ is detectable, where $C$ is any matrix satisfying $Q = CC^T$; then the matrix $A_K \triangleq A + BK$ is stable.

### **2.2 Synthesis of variable gain robust state feedback controllers**



In this subsection, we consider designing the variable matrix <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>n</sup>* and the fixed gain matrix F ∈ **<sup>R</sup>***m*×*<sup>n</sup>* such that the error system (2.6) with unknown parameters is asymptotically stable. The following theorem gives a design method of the proposed adaptive robust controller.

**Theorem 1.** *Consider the uncertain error system (2.6) with variable gain matrix* <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>n</sup> and the fixed gain matrix* F ∈ **<sup>R</sup>***m*×*n.*

*By using LQ control theory, the fixed gain matrix* F ∈ **R**<sup>*m*×*n*</sup> *is designed as* $\mathcal{F} = -\mathcal{R}_e^{-1}B^T\mathcal{P}$*, where* P ∈ **R**<sup>*n*×*n*</sup> *is the unique solution of the following algebraic Riccati equation.*

$$A\_K^T \mathcal{P} + \mathcal{P}A\_K - \mathcal{P}B\mathcal{R}\_\varepsilon^{-1}B^T \mathcal{P} + \mathcal{Q}\_\varepsilon = 0\tag{2.8}$$

*where* <sup>Q</sup>*<sup>e</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup> and* <sup>R</sup>*<sup>e</sup>* <sup>∈</sup> **<sup>R</sup>***m*×*<sup>m</sup> are positive definite matrices which are selected by designers. Besides, the variable gain matrix* <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>n</sup> is determined as*

$$\mathcal{L}(\mathbf{x}, \mathbf{e}, t) = -\frac{\left\| \mathbf{E} \mathbf{x}(t) \right\|^2}{\left\| \mathbf{B}^T \mathcal{P} \mathbf{e}(t) \right\| \left\| \mathbf{E} \mathbf{x}(t) \right\| + \sigma(t)} \mathbf{B}^T \mathcal{P} \tag{2.9}$$

*In (2.9), σ*(*t*) ∈ **R**<sup>+</sup> *is any positive, uniformly continuous and bounded function which satisfies*

$$\int\_{t\_0}^{t} \sigma(\tau) d\tau \le \sigma^\* < \infty \tag{2.10}$$

*where σ*∗ *is any positive constant and t*<sup>0</sup> *denotes an initial time. Then the uncertain error system (2.6) is bounded and*

$$\lim\_{t \to \infty} e(t; t\_0, e(t\_0)) = 0 \tag{2.11}$$

*Namely, asymptotical stability of the uncertain error system (2.6) is ensured.*
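Before turning to the proof, the controller of Theorem 1 can be exercised on a toy example. Everything numerical below (the double-integrator *A* and *B*, the choice of *E*, Δ(*t*) = sin *t*, σ(*t*) = e<sup>−*t*</sup>, the weights Q<sub>*e*</sub> = R<sub>*e*</sub> = I, and the initial conditions) is an illustrative assumption, not taken from the text. The sketch integrates the nominal system (2.2) and the uncertain system (2.1) under the input (2.4)–(2.5), with F and L(*x*, *e*, *t*) obtained from (2.8)–(2.9):

```python
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

# Illustrative matched-uncertainty system (2.1): dx/dt = (A + B Delta(t) E) x + B u.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
E = np.array([[1.0, 0.5]])
Delta = lambda t: np.array([[np.sin(t)]])   # ||Delta(t)|| <= 1
sigma = lambda t: np.exp(-t)                # satisfies (2.10) with sigma* = 1

# Nominal LQ gain K (footnote of Section 2) and fixed gain F from (2.8).
X = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = -(B.T @ X)                              # K = -R^{-1} B^T X, R = I
A_K = A + B @ K
P = solve_continuous_are(A_K, B, np.eye(2), np.eye(1))   # Q_e = R_e = I
F = -(B.T @ P)                              # F = -R_e^{-1} B^T P

def rhs(t, z):
    x, xbar = z[:2], z[2:]
    e = x - xbar
    Ex, BtPe = E @ x, B.T @ P @ e
    # Variable gain term L(x, e, t) e(t) from (2.9).
    Le = -(np.linalg.norm(Ex) ** 2
           / (np.linalg.norm(BtPe) * np.linalg.norm(Ex) + sigma(t))) * BtPe
    u = K @ x + F @ e + Le                  # control input (2.4), (2.5)
    dx = (A + B @ Delta(t) @ E) @ x + B @ u
    dxbar = A_K @ xbar                      # nominal closed loop (2.3)
    return np.concatenate([dx, dxbar])

sol = solve_ivp(rhs, (0.0, 40.0), np.array([1.0, -0.5, 0.2, 0.0]),
                rtol=1e-8, atol=1e-10)
e_final = sol.y[:2, -1] - sol.y[2:, -1]     # error e(t) at t = 40
```

In this run the error norm decays toward zero, consistent with (2.11).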

*Proof.* Using the symmetric positive definite matrix P ∈ **R**<sup>*n*×*n*</sup> which satisfies (2.8), we introduce the following quadratic function

$$\mathcal{V}(e,t) \stackrel{\triangle}{=} e^T(t)\mathcal{P}e(t) \tag{2.12}$$

Let *e*(*t*) be the solution of (2.6) for *t* ≥ *t*<sub>0</sub>. Then the time derivative of the function V(*e*, *t*) along the trajectory of (2.6) can be written as

$$\begin{split} \frac{d}{dt} \mathcal{V}(e, t) &= \mathbf{e}^{T}(t) \left( A\_{\mathcal{F}}^{T} \mathcal{P} + \mathcal{P} A\_{\mathcal{F}} \right) \mathbf{e}(t) \\ &+ 2 \mathbf{e}^{T}(t) \mathcal{P} \mathbf{B} \Delta(t) \mathbf{E} \mathbf{x}(t) + 2 \mathbf{e}^{T}(t) \mathcal{P} \mathbf{B} \mathcal{L}(\mathbf{x}, e, t) \mathbf{e}(t) \end{split} \tag{2.13}$$

Now, one can see from (2.13) and **Lemma 1** that the following inequality for the function V(*e*, *t*) holds.

$$\begin{split} \frac{d}{dt} \mathcal{V}(e, t) &\le e^{T}(t) \left( A_{\mathcal{F}}^{T} \mathcal{P} + \mathcal{P} A_{\mathcal{F}} \right) e(t) + 2 \left\| B^{T} \mathcal{P} e(t) \right\| \left\| \Delta(t) Ex(t) \right\| \\ &\quad + 2 e^{T}(t) \mathcal{P} B \mathcal{L}(x, e, t) e(t) \\ &\le e^{T}(t) \left( A_{\mathcal{F}}^{T} \mathcal{P} + \mathcal{P} A_{\mathcal{F}} \right) e(t) + 2 \left\| B^{T} \mathcal{P} e(t) \right\| \left\| Ex(t) \right\| \\ &\quad + 2 e^{T}(t) \mathcal{P} B \mathcal{L}(x, e, t) e(t) \end{split} \tag{2.14}$$



Additionally, using the relation (2.8), substituting (2.9) into (2.14), and performing some straightforward manipulations give the inequality

$$\begin{split} \frac{d}{dt} \mathcal{V}(e,t) &\le e^{T}(t) \left( A_{\mathcal{F}}^{T} \mathcal{P} + \mathcal{P} A_{\mathcal{F}} \right) e(t) + 2 \left\| B^{T} \mathcal{P} e(t) \right\| \left\| Ex(t) \right\| \\ &\quad + 2 e^{T}(t) \mathcal{P} B \left( - \frac{\left\| Ex(t) \right\|^{2}}{\left\| B^{T} \mathcal{P} e(t) \right\| \left\| Ex(t) \right\| + \sigma(t)} B^{T} \mathcal{P} \right) e(t) \\ &\le e^{T}(t) \left( A_{\mathcal{F}}^{T} \mathcal{P} + \mathcal{P} A_{\mathcal{F}} \right) e(t) + 2 \frac{\left\| B^{T} \mathcal{P} e(t) \right\| \left\| Ex(t) \right\|}{\left\| B^{T} \mathcal{P} e(t) \right\| \left\| Ex(t) \right\| + \sigma(t)} \sigma(t) \\ &= -e^{T}(t) \left\{ \mathcal{Q}_{e} + \mathcal{P} B \mathcal{R}_{e}^{-1} B^{T} \mathcal{P} \right\} e(t) + 2 \frac{\left\| B^{T} \mathcal{P} e(t) \right\| \left\| Ex(t) \right\|}{\left\| B^{T} \mathcal{P} e(t) \right\| \left\| Ex(t) \right\| + \sigma(t)} \sigma(t) \end{split} \tag{2.15}$$

Notice the fact that for ∀*α*, *β* > 0

$$0 \le \frac{\alpha \beta}{\alpha + \beta} \le \alpha \tag{2.16}$$
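This elementary bound is easy to sanity-check numerically; the sampling range below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random positive alpha, beta over an arbitrary range.
alpha = rng.uniform(1e-6, 100.0, size=10_000)
beta = rng.uniform(1e-6, 100.0, size=10_000)
val = alpha * beta / (alpha + beta)

bound_ok = bool(np.all((0.0 <= val) & (val <= alpha)))
```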

Then we can further obtain that for any *t* > *t*<sup>0</sup>

$$\frac{d}{dt}\mathcal{V}(e,t) \le -e^T(t)\Omega e(t) + \sigma(t) \tag{2.17}$$

where <sup>Ω</sup> <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup>* is the symmetric positive definite matrix given by

$$
\Omega = \mathcal{Q}\_{\mathcal{E}} + \mathcal{P}\mathcal{B}\mathcal{R}\_{\mathcal{E}}^{-1}\mathcal{B}^{\mathsf{T}}\mathcal{P} \tag{2.18}
$$

Besides, letting *ζ* � = *λ*min (Ω), we have

$$\frac{d}{dt}\mathcal{V}(e,t) \le -\zeta \left\| e(t) \right\|^2 + \sigma(t) \tag{2.19}$$

On the other hand, from the definition of the quadratic function V(*e*, *t*), there always exist two positive constants *<sup>ξ</sup>*<sup>−</sup> and *<sup>ξ</sup>*<sup>+</sup> such that for any *<sup>t</sup>* <sup>≥</sup> *<sup>t</sup>*0,

$$
\xi^- \left( ||e(t)|| \right) \le \mathcal{V} \left( e, t \right) \le \xi^+ \left( ||e(t)|| \right) \tag{2.20}
$$

where $\xi^-\left(\left\|e(t)\right\|\right)$ and $\xi^+\left(\left\|e(t)\right\|\right)$ are given by

$$\begin{array}{l} \xi^{-} \left( \left\| e(t) \right\| \right) \stackrel{\triangle}{=} \xi^{-} \left\| e(t) \right\| ^ {2} \\ \xi^{+} \left( \left\| e(t) \right\| \right) \stackrel{\triangle}{=} \xi^{+} \left\| e(t) \right\| ^ {2} \end{array} \tag{2.21}$$

From the above, we want to show that the solution *e*(*t*) is uniformly bounded, and that the error signal *e*(*t*) converges asymptotically to zero.

By continuity of the error system (2.6), it is obvious that any solution *e*(*t*; *t*0,*e*(*t*0)) of the error system is continuous. Namely, *e*(*t*) is also continuous, because the state *x*(*t*) for the nominal system is continuous. In addition, it follows from (2.19) and (2.20), that for any *t* ≥ *t*0, we have

$$0 \le \xi^- \left( ||e(t)|| \right) \le \mathcal{V}(e, t) = \mathcal{V}(e, t\_0) + \int\_{t\_0}^t \frac{d}{dt} \mathcal{V}(e, \tau) d\tau \tag{2.22}$$

$$\mathcal{V}\left(\boldsymbol{e},t\right) \le \xi^{+}\left(\left||\boldsymbol{e}(t\_{0})\right||\right) - \int\_{t\_{0}}^{t} \xi^{\*}\left(\left||\boldsymbol{e}(\tau)\right||\right)d\tau + \int\_{t\_{0}}^{t} \boldsymbol{\sigma}(\tau)d\tau\tag{2.23}$$

In (2.23), $\xi^*\left(\left\|e(t)\right\|\right)$ is defined as


$$\mathcal{L}^\*\left(\left||\boldsymbol{e}(t)\right||\right) \stackrel{\triangle}{=} \zeta \left||\boldsymbol{e}(t)\right||^2\tag{2.24}$$

Therefore, from (2.22) and (2.23) we can obtain the following two results. Firstly, taking the limit as *t* approaches infinity on both sides of inequality (2.23), we have the following relation.

$$0 \le \xi^+\left(\left\|e(t_0)\right\|\right) - \lim_{t \to \infty}\int_{t_0}^{t}\xi^{*}\left(\left\|e(\tau)\right\|\right)d\tau + \lim_{t \to \infty}\int_{t_0}^{t}\sigma(\tau)d\tau \tag{2.25}$$

Thus one can see from (2.10) and (2.25) that

$$\lim_{t \to \infty}\int_{t_0}^{t}\xi^{*}\left(\left\|e(\tau)\right\|\right)d\tau \le \xi^+\left(\left\|e(t_0)\right\|\right) + \sigma^{*} \tag{2.26}$$

On the other hand, from (2.22) and (2.23), we obtain

$$0 \le \xi^- \left( \|e(t)\| \right) \le \xi^+ \left( \|e(t\_0)\| \right) + \int\_{t\_0}^t \sigma(\tau)d\tau \tag{2.27}$$

Note that for any *t* ≥ *t*0,

$$\sup\_{t \in [t\_0, \infty)} \int\_{t\_0}^t \sigma(\tau) d\tau \le \sigma^\* \tag{2.28}$$

It follows from (2.27) and (2.28) that

$$0 \le \xi^-\left(\left\|e(t)\right\|\right) \le \xi^+\left(\left\|e(t_0)\right\|\right) + \sigma^{*} \tag{2.29}$$

The relation (2.29) implies that *e*(*t*) is uniformly bounded. Moreover, since *e*(*t*) is bounded and the right-hand side of the error system (2.6) is therefore bounded, the derivative of *e*(*t*) is also bounded; hence *e*(*t*) is uniformly continuous. One can then see from the definition (2.24) that the function *ξ*<sup>∗</sup>(||*e*(*t*)||) is also uniformly continuous. Applying **Lemma 2** ( Barbalat's lemma ) to (2.26) yields

$$\lim_{t \to \infty}\xi^{*}\left(\left\|e(t)\right\|\right) = 0 \tag{2.30}$$

Besides, since *ξ*<sup>∗</sup>(||*e*(*t*)||) is a positive definite scalar function of ||*e*(*t*)||, it is obvious that the following equation holds.

$$\lim\_{t \to \infty} \left\| e(t) \right\| = 0 \tag{2.31}$$

Namely, asymptotic stability of the uncertain error system (2.6) is ensured. Therefore the uncertain system (2.1) is also asymptotically stable, because the nominal system (2.2) is stable. It follows that the result of the theorem is true. Thus the proof of **Theorem 1** is completed.
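The Barbalat-type argument above can be illustrated numerically. The sketch below is purely illustrative (the trajectory is not taken from the chapter): for a hypothetical error signal *e*(*t*) = e<sup>−*t*</sup> with *ζ* = 1, the integral of *ξ*<sup>∗</sup>(||*e*(*t*)||) stays bounded, as in (2.26), and the integrand itself vanishes, as (2.30) asserts.

```python
import numpy as np

# Hypothetical error trajectory e(t) = exp(-t), with zeta = 1, so that
# xi_star(t) = zeta * ||e(t)||^2 = exp(-2 t).
t = np.linspace(0.0, 20.0, 200001)
xi_star = np.exp(-2.0 * t)

# Trapezoidal integral of xi_star over [0, 20]: bounded (analytically 1/2),
# mirroring the finite bound on the integral in (2.26).
integral = float(np.sum(0.5 * (xi_star[1:] + xi_star[:-1]) * np.diff(t)))

# The integrand itself tends to zero, which is the conclusion (2.30).
final_value = float(xi_star[-1])
print(integral, final_value)
```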

Synthesis of Variable Gain Robust Controllers for a Class of Uncertain Dynamical Systems



Fig. 1. Time histories of the state *x*1(*t*)

Fig. 2. Time histories of the state *x*2(*t*)

**Remark 1.** *Though the variable gain controllers in the existing results (19; 21) can also achieve good transient performance, these controllers may cause serious chattering, because the adjustment parameters in the existing results (19; 21) are adjusted on the boundary surface of the allowable parameter space (see (26) for details). On the other hand, since the variable gain matrix (2.9) of the proposed robust controller is continuous, the chattering phenomenon can be avoided.*

### **2.3 Illustrative examples**

In order to demonstrate the efficiency of the proposed control scheme, we have run a simple example.

Consider the following linear system with an unknown parameter <sup>Δ</sup>(*t*) <sup>∈</sup> **<sup>R</sup>**<sup>1</sup>.

$$\frac{d}{dt}x(t) = \begin{pmatrix} 1 & 1\\ 0 & -2 \end{pmatrix}x(t) + \begin{pmatrix} 0\\ 1 \end{pmatrix}\Delta(t)\begin{pmatrix} 5 & 4 \end{pmatrix}x(t) + \begin{pmatrix} 0\\ 1 \end{pmatrix}u(t) \tag{2.32}$$

Now we select the weighting matrices Q and R such as Q = 1.0*I*<sup>2</sup> and R = 4.0 for the standard linear quadratic control problem for the nominal system, respectively. Then solving the algebraic Riccati equation, we obtain the following optimal gain matrix

$$K = \begin{pmatrix} -6.20233 & -2.08101 \end{pmatrix} \tag{2.33}$$
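As a numerical cross-check (a sketch assuming the standard continuous-time LQ conventions, with the sign convention *u*(*t*) = *Kx*(*t*) used in (2.33)), the gain can be reproduced from the algebraic Riccati equation with scipy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Nominal part of (2.32) and the weights Q = 1.0*I2, R = 4.0 from the text.
A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[4.0]])

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0, then K = -R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ P)
print(K)  # approximately [[-6.20233, -2.08101]], matching (2.33)
```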

In addition, setting the design parameters Q*<sup>e</sup>* and R*<sup>e</sup>* such as Q*<sup>e</sup>* = 9.0*I*<sup>2</sup> and R*<sup>e</sup>* = 1.0, respectively, we have

$$\mathcal{F} = \begin{pmatrix} -2.37665 \times 10^2 & -9.83494 \times 10^1 \end{pmatrix} \tag{2.34}$$

Besides, for the variable gain matrix <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*n*, we select the following parameter *σ*(*t*).

$$
\sigma(t) = 50 \exp\left(-0.75t\right) \tag{2.35}
$$
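For this choice, the boundedness condition on the integral of *σ*(*t*) holds with *σ*<sup>∗</sup> = 50/0.75 = 200/3 ≈ 66.67, since ∫<sub>0</sub><sup>∞</sup> 50 e<sup>−0.75τ</sup> dτ = 50/0.75. A quick numerical check (a sketch, not from the chapter):

```python
import numpy as np

# sigma(t) = 50 * exp(-0.75 t) as in (2.35); integrate over a long horizon.
t = np.linspace(0.0, 60.0, 600001)
sigma = 50.0 * np.exp(-0.75 * t)

# Trapezoidal approximation of the integral; the tail beyond t = 60 is negligible.
sigma_star = float(np.sum(0.5 * (sigma[1:] + sigma[:-1]) * np.diff(t)))
print(sigma_star)  # approximately 200/3 = 66.667
```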

In this example, we consider the following two cases for the unknown parameter Δ(*t*).

• Case 1) :

$$\Delta(t) = \sin(\pi t)$$

• Case 2) :

$$\Delta(t) = \begin{cases} -1.0 & 0 \le t \le 1.0\\ \phantom{-}1.0 & 1.0 < t \le 2.0\\ -1.0 & t > 2.0 \end{cases}$$
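Both test signals satisfy ||Δ(*t*)|| ≤ 1. For simulation purposes they can be coded directly (a small sketch; the function names are ours):

```python
import math

def delta_case1(t: float) -> float:
    # Case 1): a smooth sinusoidal parameter variation.
    return math.sin(math.pi * t)

def delta_case2(t: float) -> float:
    # Case 2): a piecewise-constant (switching) parameter variation.
    if t <= 1.0:
        return -1.0
    if t <= 2.0:
        return 1.0
    return -1.0
```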

Besides, for numerical simulations, the initial values of the uncertain system (2.32) and the nominal system are selected as *x*(0) = *x̄*(0) = (1.0, −1.0)*<sup>T</sup>*.
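The "Desired" responses in Figures 1–3 come from the nominal system. A minimal reproduction sketch (assuming the nominal closed loop *d/dt x̄*(*t*) = (*A* + *BK*)*x̄*(*t*) with the gain (2.33)):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[-6.20233, -2.08101]])  # gain (2.33)

Ac = A + B @ K  # nominal closed-loop matrix

# Simulate from xbar(0) = (1.0, -1.0)^T over the plotted horizon 0 <= t <= 5.
sol = solve_ivp(lambda t, x: Ac @ x, (0.0, 5.0), [1.0, -1.0],
                rtol=1e-8, atol=1e-10)
final_norm = float(np.linalg.norm(sol.y[:, -1]))
print(final_norm)  # close to 0: the desired trajectory decays
```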


| | <sup>J</sup> <sup>1</sup>(*e*, *t*) | <sup>J</sup> <sup>2</sup>(*e*, *t*) |
|---|---|---|
| Case 1) | 1.05685 × 10<sup>−4</sup> | 1.41469 × 10<sup>−3</sup> |
| Case 2) | 2.11708 × 10<sup>−4</sup> | 2.79415 × 10<sup>−3</sup> |

Table 1. The values of the performance indices


The results of the simulation of this example are depicted in Figures 1–3 and Table 1. In these Figures, "Case 1)" and "Case 2)" represent the time-histories of the state variables *x*1(*t*) and *x*2(*t*) and the control input *u*(*t*) generated by the proposed controller, and "Desired" shows the desired time-response and the desired control input generated by the nominal system. Additionally <sup>J</sup> *<sup>k</sup>*(*e*, *<sup>t</sup>*) (*<sup>k</sup>* <sup>=</sup> 1, 2) in Table 1 represent the following performance indices.

$$\begin{aligned} \mathcal{J}^1(e, t) & \stackrel{\triangle}{=} \int\_0^\infty e^T(t)e(t)dt\\ \mathcal{J}^2(e, t) & \stackrel{\triangle}{=} \sup\_t ||e(t)||\_1 \end{aligned} \tag{2.36}$$
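Given a sampled error trajectory, both indices are straightforward to evaluate numerically. The sketch below uses a synthetic trajectory *e*(*t*) = e<sup>−*t*</sup>(1, −1)<sup>*T*</sup> in place of actual simulation data (illustrative only):

```python
import numpy as np

t = np.linspace(0.0, 30.0, 300001)
e = np.exp(-t)[None, :] * np.array([[1.0], [-1.0]])  # shape (2, N)

# J1 = integral of e^T(t) e(t) dt, via the trapezoidal rule.
integrand = np.sum(e * e, axis=0)
J1 = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)))

# J2 = sup_t ||e(t)||_1, the largest 1-norm along the trajectory.
J2 = float(np.max(np.sum(np.abs(e), axis=0)))
print(J1, J2)  # approximately 1.0 and 2.0 for this synthetic trajectory
```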

From Figures 1–3, we find that the proposed variable gain robust state feedback controller stabilizes the uncertain system (2.32) in spite of uncertainties. Besides one can also see from Figures 1 and 2 and Table 1 that the proposed variable gain robust state feedback controller achieves the good transient performance and can avoid serious chattering.



Fig. 3. Time histories of the control input *u*(*t*)

### **2.4 Summary**

In this section, a design method of a variable gain robust state feedback controller for a class of uncertain linear systems has been presented and, by numerical simulations, the effectiveness of the proposed controller has been demonstrated.

Since the proposed state feedback controller can easily be obtained by solving the standard algebraic Riccati equation, the proposed design approach is very simple. The proposed variable gain robust state feedback controller can be extended to robust servo systems and robust tracking control systems.

### **3. Variable gain robust output feedback controllers**

In section 2, it is assumed that all the states are measurable and the procedure specifies the current control input as a function of the current value of the state vector. However, it is physically and economically impractical to measure all of the states in many practical control systems. Therefore, the control input must be constructed from the measurable signals so as to achieve satisfactory control performance. In this section, for a class of uncertain linear systems, we extend the result derived in section 2 to a variable gain robust output feedback controller.

### **3.1 Problem formulation**

Consider the uncertain linear system described by the following state equation.

$$\begin{aligned} \frac{d}{dt}\mathbf{x}(t) &= \left(A + B\Delta(t)E\right)\mathbf{x}(t) + Bu(t) \\ y(t) &= \mathbf{C}\mathbf{x}(t) \end{aligned} \tag{3.1}$$

where *<sup>x</sup>*(*t*) <sup>∈</sup> **<sup>R</sup>***n*, *<sup>u</sup>*(*t*) <sup>∈</sup> **<sup>R</sup>***<sup>m</sup>* and *<sup>y</sup>*(*t*) <sup>∈</sup> **<sup>R</sup>***<sup>l</sup>* are the vectors of the state, the control input and the measured output, respectively. In (3.1), the matrices *A*, *B* and *C* are the nominal values of system parameters and the matrix <sup>Δ</sup>(*t*) <sup>∈</sup> **<sup>R</sup>***p*×*<sup>q</sup>* denotes unknown time-varying parameters which satisfy ||Δ(*t*)|| ≤ 1. In this paper, we introduce the following assumption for the system parameters (25).

$$B^T = \mathcal{T}\mathbf{C} \tag{3.2}$$

where T ∈ **<sup>R</sup>***m*×*<sup>l</sup>* is a known constant matrix.
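Assumption (3.2) is a structural matching condition between the input and output matrices. When *C* is square and invertible it can always be met by taking T = *B<sup>T</sup>C*<sup>−1</sup>; a toy check with illustrative matrices (not taken from the chapter):

```python
import numpy as np

B = np.array([[0.0],
              [1.0]])                 # n = 2, m = 1
C = np.array([[1.0, 0.0],
              [0.0, 2.0]])            # l = 2, invertible (illustrative)
T = B.T @ np.linalg.inv(C)            # then B^T = T C holds by construction

matches = bool(np.allclose(B.T, T @ C))
print(matches)  # True
```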


The nominal system, ignoring unknown parameters in (3.1), is given by

$$\begin{aligned} \frac{d}{dt}\overline{x}(t) &= A\overline{x}(t) + B\overline{u}(t) \\ \overline{y}(t) &= C\overline{x}(t) \end{aligned} \tag{3.3}$$

In this paper, the nominal system (3.3) is supposed to be stabilizable via static output feedback control. Namely, there exists an output feedback control *ū*(*t*) = *K ȳ*(*t*) (i.e. a fixed gain matrix *K* ∈ **R**<sup>m×l</sup>). In other words, since the nominal system is stabilizable via static output feedback control, the matrix *A<sub>K</sub>* ≜ *A* + *BKC* is asymptotically stable. Note that the feedback gain matrix *K* ∈ **R**<sup>m×l</sup> is designed by using the existing results (e.g. (2; 16)).
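Stabilizability via static output feedback simply means that some *K* renders *A<sub>K</sub>* = *A* + *BKC* Hurwitz. A toy check (illustrative only: we reuse the state-feedback gain (2.33) with *C* = *I*<sub>2</sub>, i.e. the full state measured, which is a special case of output feedback):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)                          # full state measured (special case)
K = np.array([[-6.20233, -2.08101]])   # gain (2.33), reused illustratively

A_K = A + B @ K @ C
max_real = float(np.max(np.linalg.eigvals(A_K).real))
print(max_real)  # negative: A_K is asymptotically stable
```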

Now, on the basis of the work of (25), we introduce the error vectors *e*(*t*) ≜ *x*(*t*) − *x̄*(*t*) and *e<sub>y</sub>*(*t*) ≜ *y*(*t*) − *ȳ*(*t*). Besides, using the fixed gain matrix *K* ∈ **R**<sup>m×l</sup>, we consider the following control input for the uncertain linear system (3.1).

$$
u(t) \stackrel{\triangle}{=} Ky(t) + \psi(e_y, \mathcal{L}, t) \tag{3.4}
$$

where *<sup>ψ</sup>*(*ey*, <sup>L</sup>,*t*) <sup>∈</sup> **<sup>R</sup>***<sup>m</sup>* is a compensation input (e.g. (25)) and has the following form.

$$
\psi(e_y, \mathcal{L}, t) \stackrel{\triangle}{=} \mathcal{L}(e_y, t)e_y(t) \tag{3.5}
$$

In (3.5), <sup>L</sup>(*ey*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>l</sup>* is a variable gain matrix. Then one can see from (3.1) and (3.3) – (3.5) that the following uncertain error system can be derived.

$$\begin{aligned} \frac{d}{dt}e(t) &= A\_K e(t) + B\Delta(t)Ex(t) + B\mathcal{L}(e\_{\mathcal{Y}}, t)e\_{\mathcal{Y}}(t) \\ e\_{\mathcal{Y}}(t) &= \mathsf{C}e(t) \end{aligned} \tag{3.6}$$

From the above, our control objective is to design the variable gain robust output feedback controller which stabilizes the uncertain error system (3.6). That is to derive the variable gain matrix <sup>L</sup>(*ey*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>l</sup>* which stabilizes the uncertain error system (3.6).

#### **3.2 Synthesis of variable gain robust output feedback controllers**

In this subsection, an LMI-based design method of the variable gain robust output feedback controller for the uncertain linear system (3.1) is presented. The following theorem gives an LMI-based design method of a variable gain robust output feedback controller.

**Theorem 2.** *Consider the uncertain error system (3.6) with the variable gain matrix* <sup>L</sup>(*ey*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>l</sup> . Suppose there exist the positive definite matrices* S ∈ **<sup>R</sup>***n*×*n*, *<sup>Θ</sup>* <sup>∈</sup> **<sup>R</sup>***l*×*<sup>l</sup> and <sup>Ψ</sup>* <sup>∈</sup> **<sup>R</sup>***l*×*<sup>l</sup> and the positive constants γ*<sup>1</sup> *and γ*<sup>2</sup> *satisfying the following LMIs.*

$$\begin{aligned} \mathcal{S}A_K + A_K^T \mathcal{S} + \gamma_1 E^T E &\le -\mathcal{Q} \\ -C^T \Theta C + \mathcal{S}C^T \mathcal{T}^T \mathcal{T} C + C^T \mathcal{T}^T \mathcal{T} C \mathcal{S} &\le 0 \\ \begin{pmatrix} -C^T \Psi C & \mathcal{S}C^T \mathcal{T}^T & \mathcal{S}C^T \mathcal{T}^T \\ \star & -\gamma_1 I_m & 0 \\ \star & \star & -\gamma_2 I_m \end{pmatrix} &\le 0 \end{aligned} \tag{3.7}$$



*Using the positive definite matrices <sup>Θ</sup>* <sup>∈</sup> **<sup>R</sup>***l*×*<sup>l</sup> and <sup>Ψ</sup>* <sup>∈</sup> **<sup>R</sup>***l*×*<sup>l</sup> , we consider the following variable gain matrix.*

$$\mathcal{L}(e_y, t) = -\frac{\left(\left\|\Psi^{1/2}Ce(t)\right\|^2 + \gamma_2\left\|E\overline{x}(t)\right\|^2\right)^2}{\left\|\Theta^{1/2}Ce(t)\right\|^2\left(\left\|\Psi^{1/2}Ce(t)\right\|^2 + \gamma_2\left\|E\overline{x}(t)\right\|^2 + \sigma(t)\right)}\mathcal{T} \tag{3.8}$$

*In (3.7),* Q ∈ **<sup>R</sup>***n*×*<sup>n</sup> is a symmetric positive definite matrix selected by designers and <sup>σ</sup>*(*t*) <sup>∈</sup> **<sup>R</sup>**<sup>1</sup> *in (3.8) is any positive uniform continuous and bounded function which satisfies*

$$\int\_{t\_0}^{t} \sigma(\tau) d\tau \le \sigma^\* < \infty \tag{3.9}$$

*where t*<sup>0</sup> *and σ*<sup>∗</sup> *are an initial time and any positive constant, respectively. Then asymptotic stability of the uncertain error system (3.6) is guaranteed.*
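The variable gain (3.8) is the matrix T scaled by a state-dependent negative coefficient; since *Θ* > 0, *Ψ* > 0 and *σ*(*t*) > 0, the denominator stays positive whenever *Ce*(*t*) ≠ 0, so the coefficient is finite and negative. A quick evaluation with illustrative numbers (not from the chapter):

```python
import numpy as np

Theta = 2.0 * np.eye(2)     # Theta > 0
Psi = np.eye(2)             # Psi > 0
gamma2 = 3.0
sigma_t = 1.0               # sigma(t) > 0
Ce = np.array([0.5, -1.0])  # C e(t), nonzero
Ex = np.array([0.2, 0.1])   # E xbar(t)

# ||Psi^{1/2} Ce||^2 = Ce^T Psi Ce, and similarly for Theta.
num = (Ce @ Psi @ Ce + gamma2 * (Ex @ Ex)) ** 2
den = (Ce @ Theta @ Ce) * (Ce @ Psi @ Ce + gamma2 * (Ex @ Ex) + sigma_t)
coef = float(-num / den)    # scalar multiplying T in (3.8)
print(coef)  # finite and negative
```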

*Proof.* Firstly, we introduce the quadratic function V(*e*, *t*) � <sup>=</sup> *<sup>e</sup>T*(*t*)S*e*(*t*). The time derivative of the quadratic function V(*e*, *t*) can be written as

$$\frac{d}{dt}\mathcal{V}(e,t) = e^T(t)\left(\mathcal{S}A_K + A_K^T \mathcal{S}\right)e(t) + 2e^T(t)\mathcal{S}B\Delta(t)Ex(t) + 2e^T(t)\mathcal{S}B\mathcal{L}(e_y,t)e_y(t) \tag{3.10}$$

Now, using **Lemma 1** and the assumption (3.2) we can obtain

$$\begin{aligned} \frac{d}{dt}\mathcal{V}(e,t) &\le e^T(t)\left(\mathcal{S}A_K + A_K^T \mathcal{S}\right)e(t) + 2e^T(t)\mathcal{S}B\Delta(t)E\left(e(t) + \overline{x}(t)\right) + 2e^T(t)\mathcal{S}B\mathcal{L}(e_y,t)e_y(t)\\ &\le e^T(t)\left(\mathcal{S}A_K + A_K^T \mathcal{S} + \gamma_1 E^T E\right)e(t) + 2e^T(t)\mathcal{S}C^T\mathcal{T}^T\mathcal{L}(e_y,t)e_y(t)\\ &\quad + \frac{1}{\gamma_1}e^T(t)\mathcal{S}C^T\mathcal{T}^T\mathcal{T}C\mathcal{S}e(t) + \frac{1}{\gamma_2}e^T(t)\mathcal{S}C^T\mathcal{T}^T\mathcal{T}C\mathcal{S}e(t) + \gamma_2\overline{x}^T(t)E^T E\overline{x}(t) \end{aligned} \tag{3.11}$$

Here we have used the well-known following relation.

$$2a^Tb \le \mu a^Ta + \frac{1}{\mu}b^Tb \tag{3.12}$$

where *a* and *b* are any vectors with appropriate dimensions and *μ* is any positive constant. Besides, we have the following inequality for the time derivative of the quadratic function V(*e*, *t*).
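The relation (3.12) follows from expanding (√μ *a* − *b*/√μ)<sup>*T*</sup>(√μ *a* − *b*/√μ) ≥ 0. A randomized numerical check (sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
checks = []
for _ in range(100):
    a = rng.normal(size=5)
    b = rng.normal(size=5)
    mu = float(rng.uniform(0.1, 10.0))
    # 2 a^T b <= mu a^T a + (1/mu) b^T b, with a small float slack.
    checks.append(2.0 * (a @ b) <= mu * (a @ a) + (b @ b) / mu + 1e-12)
holds = all(checks)
print(holds)  # True
```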

$$\begin{aligned} \frac{d}{dt}\mathcal{V}(e,t) \le\ & e^T(t)\left(\mathcal{S}A_K + A_K^T \mathcal{S} + \gamma_1 E^T E\right)e(t) + e^T(t)C^T\Psi Ce(t) + \gamma_2\overline{x}^T(t)E^T E\overline{x}(t)\\ & + 2e^T(t)\mathcal{S}C^T\mathcal{T}^T\mathcal{L}(e_y,t)e_y(t) \end{aligned} \tag{3.13}$$

because by using **Lemma 2** (Schur complement) the third LMI of (3.7) can be written as

$$-C^T\Psi C + \frac{1}{\gamma_1}\mathcal{S}C^T\mathcal{T}^T\mathcal{T}C\mathcal{S} + \frac{1}{\gamma_2}\mathcal{S}C^T\mathcal{T}^T\mathcal{T}C\mathcal{S} \le 0 \tag{3.14}$$
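The connection between the third LMI of (3.7) and (3.14) is exactly a Schur complement with respect to the blocks −γ₁*I<sub>m</sub>* and −γ₂*I<sub>m</sub>*. A numerical check of that algebraic identity on random matrices (illustrative sizes, not from the chapter):

```python
import numpy as np

rng = np.random.default_rng(1)
n, l, m = 4, 3, 2
S = rng.normal(size=(n, n))
S = S @ S.T + n * np.eye(n)            # S > 0
Psi = np.eye(l)                        # Psi > 0
C = rng.normal(size=(l, n))
T = rng.normal(size=(m, l))
g1, g2 = 2.0, 3.0

M12 = S @ C.T @ T.T                    # off-diagonal block S C^T T^T
top = -C.T @ Psi @ C                   # (1,1) block of the third LMI

# Schur complement eliminating diag(-g1*I_m, -g2*I_m):
# top - M12 (-g1 I)^{-1} M12^T - M12 (-g2 I)^{-1} M12^T
schur = top + (M12 @ M12.T) / g1 + (M12 @ M12.T) / g2
formula = (top + (S @ C.T @ T.T @ T @ C @ S) / g1
               + (S @ C.T @ T.T @ T @ C @ S) / g2)  # left-hand side of (3.14)
same = bool(np.allclose(schur, formula))
print(same)  # True
```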

Furthermore using the variable gain matrix (3.8), the LMIs (3.7) and the well-known inequality for any positive constants *α* and *β*

$$0 \le \frac{\alpha \beta}{\alpha + \beta} \le \alpha \quad \forall \alpha, \beta > 0 \tag{3.15}$$

and some trivial manipulations give the following relation.


$$\frac{d}{dt}\mathcal{V}(e,t) \le -e^T(t)\mathcal{Q}e(t) + \sigma(t) \tag{3.16}$$

In addition, by letting *ζ* ≜ *λ*min {Q}, we obtain the following inequality.

$$\frac{d}{dt}\mathcal{V}(e,t) \le -\zeta \left\| e(t) \right\|^2 + \sigma(t) \tag{3.17}$$

On the other hand, one can see from the definition of the quadratic function V(*e*, *t*) that there always exist two positive constants *δ*min and *δ*max such that for any *t* ≥ *t*0,

$$\xi^- \left( \left\| e(t) \right\| \right) \le \mathcal{V}(e, t) \le \xi^+ \left( \left\| e(t) \right\| \right) \tag{3.18}$$

where *ξ*−(‖*e*(*t*)‖) and *ξ*+(‖*e*(*t*)‖) are given by

$$\begin{aligned} \xi^- \left( \left\| e(t) \right\| \right) &\stackrel{\triangle}{=} \delta_{\min} \left\| e(t) \right\|^2 \\ \xi^+ \left( \left\| e(t) \right\| \right) &\stackrel{\triangle}{=} \delta_{\max} \left\| e(t) \right\|^2 \end{aligned} \tag{3.19}$$
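The constants *δ*min and *δ*max in (3.19) can be taken as the extreme eigenvalues of S, since V(*e*, *t*) = *e^T*(*t*)S*e*(*t*) obeys the Rayleigh-quotient bounds. A quick numerical sanity check (the matrix S below is randomly generated, not the chapter's matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive definite matrix S (illustrative stand-in).
M = rng.standard_normal((3, 3))
S = M.T @ M + np.eye(3)

# delta_min and delta_max in (3.19) can be chosen as the extreme eigenvalues of S.
eigvals = np.linalg.eigvalsh(S)          # ascending order for symmetric matrices
delta_min, delta_max = eigvals[0], eigvals[-1]

# Check delta_min*||e||^2 <= e^T S e <= delta_max*||e||^2 for random error vectors.
for _ in range(100):
    e = rng.standard_normal(3)
    V = e @ S @ e
    n2 = e @ e
    assert delta_min * n2 - 1e-9 <= V <= delta_max * n2 + 1e-9
```

This is the standard argument for why a quadratic Lyapunov function is sandwiched between two class-K functions of ‖*e*(*t*)‖.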

It is obvious that any solution *e*(*t*; *t*0,*e*(*t*0)) of the uncertain error system (3.6) is continuous. In addition, it follows from (3.17) and (3.18), that for any *t* ≥ *t*0, we have

$$\begin{split} 0 \le \xi^- \left( \left\| e(t) \right\| \right) \le \mathcal{V}(e, t) &= \mathcal{V}(e, t_0) + \int_{t_0}^{t} \frac{d}{d\tau} \mathcal{V}(e, \tau) d\tau \\ &\le \xi^+ \left( \left\| e(t_0) \right\| \right) - \int_{t_0}^{t} \xi^* \left( \left\| e(\tau) \right\| \right) d\tau + \int_{t_0}^{t} \sigma(\tau) d\tau \end{split} \tag{3.20}$$

In (3.20), *ξ*∗(‖*e*(*t*)‖) is defined as

$$\xi^* \left( \left\| e(t) \right\| \right) \stackrel{\triangle}{=} \zeta \left\| e(t) \right\|^2 \tag{3.21}$$

Therefore, from (3.20) we can obtain the following two results. Firstly, taking the limit as *t* approaches infinity on both sides of the inequality (3.20), we have

$$0 \le \xi^+ \left( \left\| e(t_0) \right\| \right) - \lim_{t \to \infty} \int_{t_0}^{t} \xi^* \left( \left\| e(\tau) \right\| \right) d\tau + \lim_{t \to \infty} \int_{t_0}^{t} \sigma(\tau) d\tau \tag{3.22}$$

Thus one can see from (3.9) and (3.22) that

$$\lim_{t \to \infty} \int_{t_0}^{t} \xi^* \left( \left\| e(\tau) \right\| \right) d\tau \le \xi^+ \left( \left\| e(t_0) \right\| \right) + \sigma^* \tag{3.23}$$

On the other hand, from (3.20), we obtain

$$0 \le \xi^- \left( \left\| e(t) \right\| \right) \le \xi^+ \left( \left\| e(t\_0) \right\| \right) + \int\_{t\_0}^t \sigma(\tau) d\tau \tag{3.24}$$

It follows from (3.9) and (3.24) that

$$0 \le \xi^- \left( \left\| e(t) \right\| \right) \le \xi^+ \left( \left\| e(t_0) \right\| \right) + \sigma^* \tag{3.25}$$

Synthesis of Variable Gain Robust Controllers for a Class of Uncertain Dynamical Systems


The relation (3.25) implies that *e*(*t*) is uniformly bounded. Since *e*(*t*) has been shown to be continuous, it follows that *e*(*t*) is uniformly continuous. Therefore, one can see from the definition that *ξ*∗(‖*e*(*t*)‖) is also uniformly continuous. Thus applying **Lemma 3** (Barbalat's lemma) to (3.23) yields

$$\lim\_{t \to \infty} \xi^\* \left( \left||e(t)\right|| \right) = \lim\_{t \to \infty} \zeta \left||e(t)\right||^2 = 0 \tag{3.26}$$

Namely, asymptotical stability of the uncertain error system (3.6) is ensured. Thus the uncertain linear system (3.1) is also stable.

It follows that the result of the theorem is true. Thus the proof of **Theorem 2** is completed.
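The step from (3.23) to (3.26) is exactly Barbalat's lemma: a uniformly continuous function whose integral stays bounded must tend to zero. A minimal numerical illustration, with f(t) = e^{−t} as an arbitrary stand-in for *ξ*∗(‖*e*(*t*)‖):

```python
import numpy as np

# f is uniformly continuous on [0, inf) and its integral over [0, inf) is finite (= 1),
# so Barbalat's lemma forces f(t) -> 0 as t grows.
def f(t):
    return np.exp(-t)

t = np.linspace(0.0, 30.0, 300_001)
y = f(t)
integral = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))  # trapezoidal rule

assert abs(integral - 1.0) < 1e-6   # the integral converges (to 1 here)
assert y[-1] < 1e-12                # and the integrand itself vanishes
```

Uniform continuity matters: an integrable function with ever-narrower spikes would satisfy the integral bound without converging to zero, which is why the proof first establishes uniform continuity of *e*(*t*).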

**Theorem 2** provides a sufficient condition for the existence of a variable gain robust output feedback controller for the uncertain linear system (3.1). Next, we consider a special case. In this case, we consider the uncertain linear system described by

$$\begin{cases} \frac{d}{dt}x(t) = \left( A + B\Delta(t)\mathcal{C} \right) x(t) + Bu(t) \\ y(t) = \mathcal{C}x(t) \end{cases} \tag{3.27}$$
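In (3.27) the uncertainty enters through the fixed structure BΔ(*t*)C with ‖Δ(*t*)‖ ≤ 1, so the size of the uncertain term is bounded a priori by the known matrices B and C. A quick check of that bound via norm submultiplicativity (the matrices below are random illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions and matrices (not the chapter's example data).
B = rng.standard_normal((3, 1))
C = rng.standard_normal((2, 3))

for _ in range(50):
    Delta = rng.standard_normal((1, 2))
    Delta /= max(1.0, np.linalg.norm(Delta, 2))   # enforce ||Delta(t)|| <= 1
    # Submultiplicativity bounds the size of the uncertain term B Delta C.
    assert (np.linalg.norm(B @ Delta @ C, 2)
            <= np.linalg.norm(B, 2) * np.linalg.norm(C, 2) + 1e-9)
```

This a priori bound is what makes a single fixed set of LMIs sufficient for the whole admissible uncertainty set.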

Thus one can see from (3.3) – (3.5) and (3.27) that we have the following uncertain error system.

$$\begin{cases} \frac{d}{dt}e(t) = A_K e(t) + B\Delta(t)\mathcal{C}x(t) + B\mathcal{L}(e_y, t)e_y(t) \\ e_y(t) = \mathcal{C}e(t) \end{cases} \tag{3.28}$$

The next theorem gives an LMI-based design method of a variable gain robust output feedback controller for this case.

**Theorem 3.** *Consider the uncertain error system (3.28) with the variable gain matrix* L(*ey*, *t*) ∈ **R***m*×*<sup>l</sup> .*

*Suppose there exist the symmetric positive definite matrices* S ∈ **R**^{n×n}, *Θ* ∈ **R**^{l×l} *and Ψ* ∈ **R**^{l×l} *and the positive constant γ satisfying the LMIs*

$$\begin{aligned} \mathcal{S}A_K + A_K^T \mathcal{S} &\le -\mathcal{Q} \quad \left( \mathcal{Q} = \mathcal{Q}^T > 0 \right) \\ -\mathcal{C}^T \Theta \mathcal{C} + \mathcal{S} \mathcal{C}^T \mathcal{T}^T \mathcal{T} \mathcal{C} + \mathcal{C}^T \mathcal{T}^T \mathcal{T} \mathcal{C} \mathcal{S} &\le 0 \\ \begin{pmatrix} -\mathcal{C}^T \Psi \mathcal{C} & \mathcal{S} \mathcal{C}^T \mathcal{T}^T \\ \star & -\gamma I_m \end{pmatrix} &\le 0 \end{aligned} \tag{3.29}$$

*Using the positive definite matrices Ψ* ∈ **R**^{l×l} *and Θ* ∈ **R**^{l×l} *and the positive scalar γ satisfying the LMIs (3.29), we consider the variable gain matrix*

$$\mathcal{L}(e_y, t) = -\frac{\left( \left\| \Psi^{1/2} e_y(t) \right\|^2 + \gamma \left\| y(t) \right\|^2 \right)^2}{\left\| \Theta^{1/2} \mathcal{C} e(t) \right\|^2 \left( \left\| \Psi^{1/2} e_y(t) \right\|^2 + \gamma \left\| y(t) \right\|^2 + \sigma(t) \right)} \mathcal{T} \tag{3.30}$$

*where <sup>σ</sup>*(*t*) <sup>∈</sup> **<sup>R</sup>**<sup>1</sup> *is any positive uniform continuous and bounded function satisfying (3.9). Then asymptotical stability of the uncertain error system (3.28) is guaranteed.*
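Read as an update law, the gain (3.30) is computable online from the measured signals *e_y*(*t*) = C*e*(*t*) and *y*(*t*). A sketch of one evaluation (Θ, Ψ, γ, T and the sample signals below are illustrative placeholders, not the chapter's design):

```python
import numpy as np

def variable_gain_L(e_y, y, sigma_t, Theta, Psi, gamma, T):
    """Evaluate the variable gain matrix of (3.30) at one time instant."""
    psi_term = e_y @ Psi @ e_y + gamma * (y @ y)   # ||Psi^{1/2} e_y||^2 + gamma ||y||^2
    theta_term = e_y @ Theta @ e_y                 # ||Theta^{1/2} C e||^2, since e_y = C e
    # sigma_t > 0 keeps the denominator positive, which avoids chattering as e -> 0.
    return (-psi_term**2 / (theta_term * (psi_term + sigma_t))) * T

# One illustrative evaluation with placeholder data.
Theta, Psi = np.eye(2), np.eye(2)
T = np.array([[2.0, 1.0]])
L = variable_gain_L(np.array([1.0, 0.0]), np.array([0.5, 0.5]), 1.0, Theta, Psi, 1.0, T)

assert np.allclose(L, [[-1.8, -0.9]])   # scalar factor -0.9 times T
```

The design choice to embed σ(*t*) in the denominator is what distinguishes this gain from a pure sliding-mode term: the gain stays bounded everywhere, at the cost of only asymptotic (rather than finite-time) convergence.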

*Proof.* By using the symmetric positive definite matrix S ∈ **<sup>R</sup>***n*×*n*, we consider the quadratic function V(*e*, *t*) � <sup>=</sup> *<sup>e</sup>T*(*t*)S*e*(*t*). Then using the assumption (3.2) we have

$$\begin{split} \frac{d}{dt} \mathcal{V}(e, t) &= e^T(t) \left( \mathcal{S} A\_K + A\_K^T \mathcal{S} \right) e(t) + 2e^T(t) \mathcal{S} \mathcal{C}^T \mathcal{T}^T \Delta(t) \mathcal{C} \mathbf{x}(t) \\ &\quad + 2e^T(t) \mathcal{S} \mathcal{C}^T \mathcal{T}^T \mathcal{L}(e\_y, t) e\_y(t) \end{split} \tag{3.31}$$

Additionally, applying the inequality (3.12) to the second term on the right hand side of (3.31) we obtain

$$\begin{split} \frac{d}{dt}\mathcal{V}(e,t) &\le e^T(t) \left( \mathcal{S}A_K + A_K^T \mathcal{S} \right) e(t) + \frac{1}{\gamma} e^T(t) \mathcal{S} \mathcal{C}^T \mathcal{T}^T \mathcal{T} \mathcal{C} \mathcal{S} e(t) + \gamma y^T(t) y(t) \\ &\quad + 2e^T(t) \mathcal{S} \mathcal{C}^T \mathcal{T}^T \mathcal{L}(e_y, t) e_y(t) \end{split} \tag{3.32}$$

Now by using the LMIs (3.29), the variable gain matrix (3.30) and the inequality (3.15), we have

$$\begin{split} \frac{d}{dt}\mathcal{V}(e,t) &\le -e^T(t)\mathcal{Q}e(t) + \sigma(t) \\ &\le -\zeta \left\| e(t) \right\|^2 + \sigma(t) \end{split} \tag{3.33}$$

where *ζ* is a positive scalar given by *ζ* = *λ*min {Q}.

Therefore, one can see from the definition of the quadratic function V(*e*, *t*) and the proof of **Theorem 2** that the rest of the proof of **Theorem 3** is straightforward.
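The third LMI of (3.29) is handled through **Lemma 2** (the Schur complement): for γ > 0, the block inequality with blocks −P and −γI is equivalent to −P + (1/γ)GGᵀ ⪯ 0. A toy numerical check of that equivalence (P, G and γ below are arbitrary, not the chapter's matrices):

```python
import numpy as np

def is_neg_semidef(M, tol=1e-9):
    """All eigenvalues of the symmetric matrix M are <= 0 (up to tol)."""
    return np.max(np.linalg.eigvalsh(M)) <= tol

# Arbitrary toy data standing in for C^T Psi C, S C^T T^T and gamma.
P = np.eye(2)                      # plays the role of C^T Psi C
G = np.array([[1.0], [0.0]])       # plays the role of S C^T T^T
gamma = 2.0

# Block matrix of the third LMI in (3.29).
block = np.block([[-P, G], [G.T, -gamma * np.eye(1)]])

# Its Schur complement with respect to the (2, 2) block.
schur = -P + (1.0 / gamma) * G @ G.T

# For gamma > 0 the two conditions are equivalent; here both hold.
assert is_neg_semidef(block)
assert is_neg_semidef(schur)
```

The Schur complement is what lets the quadratic term (1/γ)SCᵀTᵀTCS, which is nonlinear in the decision variables, be expressed as a linear matrix inequality.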

### **3.3 Illustrative examples**


Consider the uncertain linear system described by

$$\begin{aligned} \frac{d}{dt}x(t) &= \begin{pmatrix} -2.0 & 0.0 & -6.0 \\ 0.0 & 1.0 & 1.0 \\ 3.0 & 0.0 & -7.0 \end{pmatrix} x(t) + \begin{pmatrix} 2.0 \\ 1.0 \\ 0.0 \end{pmatrix} \Delta(t) \begin{pmatrix} 1.0 & 0.0 & 1.0 \\ 0.0 & 3.0 & 1.0 \end{pmatrix} x(t) + \begin{pmatrix} 2.0 \\ 1.0 \\ 0.0 \end{pmatrix} u(t) \\ y(t) &= \begin{pmatrix} 1.0 & 0.0 & 0.0 \\ 0.0 & 1.0 & 0.0 \end{pmatrix} x(t) \end{aligned} \tag{3.34}$$

Namely, the matrix T ∈ **R**^{1×2} in the assumption (3.2) can be expressed as T = (2.0 1.0). Firstly, we design an output feedback gain matrix *K* ∈ **R**^{1×2} for the nominal system. By selecting the design parameter *α* as *α* = 4.5 and applying the LMI-based design algorithm (see (2) and the Appendix of (25)), we obtain the following output feedback gain matrix *K* ∈ **R**^{1×2}.

$$K = \begin{pmatrix} 3.17745 \times 10^{-1} & -1.20809 \times 10^{1} \end{pmatrix} \tag{3.35}$$

Finally, we use **Theorem 2** to design the proposed variable gain robust output feedback controller, i.e. we solve the LMIs (3.7). By selecting the symmetric positive definite matrix Q ∈ **R**^{3×3} as Q = 0.1 × *I*3, we have

$$\begin{aligned} \mathcal{S} &= \begin{pmatrix} 7.18316 & 1.10208 & 3.02244 \times 10^{-1} \\ \star & 5.54796 & -6.10321 \times 10^{-2} \\ \star & \star & 4.74128 \end{pmatrix}, \quad \gamma_1 = 2.01669 \times 10^{3}, \quad \gamma_2 = 6.34316 \times 10^{2}, \\ \Theta &= \begin{pmatrix} 3.14338 \times 10^{1} & 1.54786 \times 10^{1} \\ \star & 8.20347 \end{pmatrix}, \quad \Psi = \begin{pmatrix} 6.73050 & 6.45459 \\ \star & 6.57618 \end{pmatrix} \end{aligned} \tag{3.36}$$
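The data of this example can be sanity-checked numerically: the matrix T relates the input and output matrices of (3.34) through B = CᵀTᵀ (this is how T enters the derivations around the assumption (3.2)), and the nominal closed loop A + BKC with the gain (3.35) is stable:

```python
import numpy as np

# System data of the example (3.34) and the gain (3.35).
A = np.array([[-2.0, 0.0, -6.0],
              [ 0.0, 1.0,  1.0],
              [ 3.0, 0.0, -7.0]])
B = np.array([[2.0], [1.0], [0.0]])
C = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
T = np.array([[2.0, 1.0]])
K = np.array([[3.17745e-1, -1.20809e1]])

# The input matrix factors through the output matrix: B = C^T T^T.
assert np.allclose(B, C.T @ T.T)

# The nominal closed-loop matrix A_K = A + B K C is Hurwitz.
A_K = A + B @ K @ C
assert np.max(np.linalg.eigvals(A_K).real) < 0.0
```

Such checks are a cheap way to catch transcription errors in printed gain matrices before running a full closed-loop simulation.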

In this example, we consider the following two cases for the unknown parameter Δ(*t*) ∈ **R**^{1×2}.

$$\text{Case 1)}: \; \Delta(t) = \begin{pmatrix} 7.30192 & -5.00436 \end{pmatrix} \times 10^{-1}$$

$$\text{Case 2)}: \; \Delta(t) = \begin{pmatrix} \sin(5\pi t) & \cos(5\pi t) \end{pmatrix}$$

Furthermore, initial values for the uncertain system (3.34) and the nominal system are selected as

$$x(0) = \begin{pmatrix} 1.5 & 2.0 & -4.5 \end{pmatrix}^T, \quad \overline{x}(0) = \begin{pmatrix} 2.0 & 2.0 & -5.0 \end{pmatrix}^T$$

respectively. Besides, we choose *σ*(*t*) ∈ **R**+ in (3.8) as *σ*(*t*) = 5.0 × 10^{12} × exp(−1.0 × 10^{−4}*t*).

The results of the simulation of this example are depicted in Figures 4–7. In these figures, "Case 1)" and "Case 2)" represent the time histories of the state variables *x*1(*t*), *x*2(*t*) and *x*3(*t*) and the control input *u*(*t*) generated by the proposed variable gain robust output feedback controller, and "Desired" shows the desired time response and the desired control input generated by the nominal system. From Figures 4–6, we find that the proposed variable gain robust output feedback controller stabilizes the uncertain linear system (3.34) in spite of plant uncertainties and achieves good transient performance.

Fig. 4. Time histories of the state *x*1(*t*)

Fig. 5. Time histories of the state *x*2(*t*)

Fig. 6. Time histories of the state *x*3(*t*)

Fig. 7. Time histories of the control input *u*(*t*)

### **3.4 Summary**
In this section, we have proposed a variable gain robust output feedback controller for a class of uncertain linear systems. Moreover, numerical simulations have demonstrated the effectiveness of the proposed controller.

The proposed design method makes it straightforward to construct a robust output feedback controller. Additionally, the proposed control scheme is applicable whenever its assumptions are satisfied, so it can be used widely in cases where only the output signal of the controlled system is available. The proposed controller is also effective for systems with larger uncertainties: namely, when the upper bound on the perturbation region of the unknown parameter Δ(*t*) is larger than 1, the proposed variable gain output feedback controller can easily be extended.

### **4. Variable gain robust controllers based on piecewise Lyapunov functions**

The quadratic stability approach is widely used for robust stability analysis of uncertain linear systems. This approach, however, may lead to conservative results. Alternatively,

non-quadratic Lyapunov functions have been used to improve the estimate of robust stability and to design robust stabilizing controllers(7; 30; 34). We have also proposed variable gain controllers and adaptive gain controllers based on Piecewise Lyapunov functions (PLFs) for a class of uncertain linear systems(23; 24). However, the resulting variable gain robust controllers may exhibit the chattering phenomenon. In this section, we propose a variable gain robust state feedback controller that avoids the chattering phenomenon for a class of uncertain linear systems via PLFs, and we derive sufficient conditions for the existence of the proposed variable gain robust state feedback controller.

### **4.1 Problem formulation**

Consider a class of linear systems with non-linear perturbations represented by the following state equation (see **Remark 2**).

$$\frac{d}{dt}x(t) = \left(A + \mathcal{D}\Delta(t)\mathcal{E}\right)x(t) + Bu(t) \tag{4.1}$$

where *x*(*t*) ∈ **R**^n and *u*(*t*) ∈ **R**^m are the vectors of the state (assumed to be available for feedback) and the control input, respectively. In (4.1), the matrices *A* and *B* denote the nominal values of the system, and the matrix *B* has full column rank. The matrices D and E, which have appropriate dimensions, represent the structure of the uncertainties. The matrix Δ(*t*) ∈ **R**^{p×q} represents unknown time-varying parameters and satisfies the relation ‖Δ(*t*)‖ ≤ 1. Note that the uncertain term DΔ(*t*)E consists of a matched part and an unmatched one. Additionally, introducing the integer N ∈ **Z**+ defined as

$$\mathcal{N} \stackrel{\triangle}{=} \arg\min_{\mathcal{Z} \in \mathbb{Z}^+} \left\{ \mathcal{Z} \mid (\mathcal{Z}m - n) \ge 0 \right\} \tag{4.2}$$

we assume that there exist symmetric positive definite matrices S*k* ∈ **R**^{n×n} (*k* = 1, ··· , N) which satisfy the following relation(23; 24).

$$\bigcap_{k=1}^{\mathcal{N}} \Omega_{\mathcal{S}_k} = \{0\} \tag{4.3}$$

where ΩS*k* represents a subspace defined as

$$\Omega_{\mathcal{S}_k} \stackrel{\triangle}{=} \left\{ x \in \mathbb{R}^n \mid B^T \mathcal{S}_k x = 0 \right\} \tag{4.4}$$

The nominal system, ignoring the unknown parameter in (4.1), is given by

$$\frac{d}{dt}\overline{x}(t) = A\overline{x}(t) + B\overline{u}(t) \tag{4.5}$$

where x̄(*t*) ∈ **R**^n and ū(*t*) ∈ **R**^m are the vectors of the state and the control input, respectively. First of all, we adopt the standard linear quadratic (LQ) control theory for the nominal system (4.5) in order to generate the desirable transient response for the plant systematically, i.e. the control input is given by ū(*t*) = *K*x̄(*t*). Note that other design methods that generate the desired response for the controlled system can also be used (e.g. pole assignment). Thus the feedback gain matrix *K* ∈ **R**^{m×n} is derived as *K* = −R^{−1}*B^T*P, where P ∈ **R**^{n×n} is the unique solution of the algebraic Riccati equation

$$A^T \mathcal{P} + \mathcal{P}A - \mathcal{P}B\mathcal{R}^{-1}B^T \mathcal{P} + \mathcal{Q} = 0 \tag{4.6}$$

In (4.6), the matrices Q ∈ **R**^{n×n} and R ∈ **R**^{m×m} are design parameters, and Q is selected such that the pair (*A*, C) is detectable, where C is any matrix satisfying Q = CC^T; then the matrix *A_K* ≜ *A* + *BK* is stable.

### **4.2 Synthesis of variable gain robust state feedback controllers via PLFs**

Now, on the basis of the works of Oya et al.(21; 22), in order to obtain on-line information on the parameter uncertainty, we introduce the error vector *e*(*t*) ≜ *x*(*t*) − x̄(*t*). Besides, using the optimal gain matrix *K* ∈ **R**^{m×n} for the nominal system (4.5), we consider the following control input.

$$u(t) \stackrel{\triangle}{=} K x(t) + \psi(x, e, \mathcal{L}, t) \tag{4.7}$$

where *ψ*(*x*, *e*, L, *t*) ∈ **R**^m is a compensation input so as to reduce the effect of uncertainties and nonlinear perturbations, and it is supposed to have the following structure.

$$\psi(x, e, \mathcal{L}, t) \stackrel{\triangle}{=} \mathcal{F}e(t) + \mathcal{L}(x, e, t)e(t) \tag{4.8}$$

where F ∈ **R**^{m×n} is a fixed gain matrix and L(*x*, *e*, *t*) ∈ **R**^{m×n} is an adjustable time-varying matrix. From (4.1), (4.5), (4.7) and (4.8), we have

$$\frac{d}{dt}e(t) = A_{\mathcal{F}} e(t) + \mathcal{D}\Delta(t)\mathcal{E}x(t) + B\mathcal{L}(x, e, t)e(t) \tag{4.9}$$

In (4.9), *A*F ∈ **R**^{n×n} is a matrix given by *A*F ≜ *A_K* + *B*F. Note that if asymptotical stability of the uncertain error system (4.9) is ensured, then the uncertain system (4.1) is robustly stable, because *e*(*t*) ≜ *x*(*t*) − x̄(*t*). Here, the fixed gain matrix F ∈ **R**^{m×n} is determined by using LQ control theory for the nominal error system. Namely, F = −R_F^{−1}*B^T*X_F, where X_F ∈ **R**^{n×n} is the unique solution of the algebraic Riccati equation

$$A_K^T \mathcal{X}_{\mathcal{F}} + \mathcal{X}_{\mathcal{F}} A_K - \mathcal{X}_{\mathcal{F}} B \mathcal{R}_{\mathcal{F}}^{-1} B^T \mathcal{X}_{\mathcal{F}} + \mathcal{Q}_{\mathcal{F}} = 0 \tag{4.10}$$

where Q_F ∈ **R**^{n×n} and R_F ∈ **R**^{m×m} are design parameters and symmetric positive definite matrices. A decision method of the time-varying matrix L(*x*, *e*, *t*) ∈ **R**^{m×n} will be stated in the next subsection.

From the above discussion, our control objective in this section is to design the robust stabilizing controller for the uncertain error system (4.9), that is, to design the variable gain matrix L(*x*, *e*, *t*) ∈ **R**^{m×n} such that the error system with uncertainties (4.9) is asymptotically stable. The following theorem gives sufficient conditions for the existence of the proposed controller.

**Theorem 4.** *Consider the uncertain error system (4.9) and the control input (4.7) and (4.8). Suppose that the matrices* S*k* ≜ P1 + P2 + ··· + PN + P*k*BB*^T*P*k* (*k* = 1, ··· , N) *satisfy the relation (4.3), where* P*k* ∈ **R**^{n×n} *are symmetric positive definite matrices*† *and γ*^{(k)}_j *are positive constants satisfying the matrix inequalities*

$$\mathcal{P}_k A_{\mathcal{F}} + A_{\mathcal{F}}^T \mathcal{P}_k + \sum_{j=1}^{\mathcal{N}-1} \gamma_j^{(k)} \mathcal{P}_k B B^T \mathcal{P}_k + \mathcal{Q}_k < 0 \quad (k = 1, \cdots, \mathcal{N}) \tag{4.11}$$

† i.e. S*k* ∈ **R**^{n×n} are symmetric positive definite matrices.

In (4.6), the matrices Q ∈ **<sup>R</sup>***n*×*<sup>n</sup>* and R ∈ **<sup>R</sup>***m*×*<sup>m</sup>* are design parameters and <sup>Q</sup> is selected such that the pair (*A*, <sup>C</sup>) is detectable, where <sup>C</sup> is any matrix satisfying <sup>Q</sup> <sup>=</sup> CC*T*, and then the matrix *AK* � = *A* + *BK* is stable.
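As a minimal numerical sketch of this LQ step, the Riccati equation (4.6) can be solved with SciPy's `solve_continuous_are` and the gain checked for stability of *A*_K. The matrices `A`, `B`, `Q`, `R` below are illustrative placeholders, not the chapter's example.

```python
# Sketch of the LQ design for the nominal system (4.5): solve the algebraic
# Riccati equation (4.6) and form K = -R^{-1} B^T P.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])   # nominal system matrix (placeholder)
B = np.array([[0.0],
              [1.0]])          # nominal input matrix, full column rank
Q = np.eye(2)                  # design parameter, Q = C C^T with (A, C) detectable
R = np.eye(1)                  # design parameter, symmetric positive definite

P = solve_continuous_are(A, B, Q, R)   # unique stabilizing solution of (4.6)
K = -np.linalg.solve(R, B.T @ P)       # K = -R^{-1} B^T P

# A_K = A + B K must be stable (all eigenvalues in the open left half-plane)
A_K = A + B @ K
assert np.all(np.linalg.eigvals(A_K).real < 0)
```

The stabilizing Riccati solution makes *A*_K stable by construction, which is what the detectability assumption on (*A*, C) guarantees.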

Now, on the basis of the works of Oya et al. (21; 22), in order to obtain on-line information on the parameter uncertainty, we introduce the error vector *e*(*t*) ≜ *x*(*t*) − x̄(*t*), where x̄(*t*) is the state of the nominal system (4.5). Besides, using the optimal gain matrix *K* ∈ **R**^(*m*×*n*) for the nominal system (4.5), we consider the following control input.

$$u(t) \stackrel{\triangle}{=} Kx(t) + \psi\left(x, e, \mathcal{L}, t\right) \tag{4.7}$$

where *<sup>ψ</sup>* (*x*,*e*,L, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***<sup>m</sup>* is a compensation input so as to reduce the effect of uncertainties and nonlinear perturbations, and it is supposed to have the following structure.

$$
\psi\left(\mathbf{x},\mathbf{e},\mathcal{L},t\right) \stackrel{\triangle}{=} \mathcal{F}e(t) + \mathcal{L}(\mathbf{x},\mathbf{e},t)e(t) \tag{4.8}
$$

where F ∈ **<sup>R</sup>***m*×*<sup>n</sup>* is a fixed gain matrix and <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>n</sup>* is an adjustable time-varying matrix. From (4.1), (4.5), (4.7) and (4.8), we have

$$\frac{d}{dt}e(t) = \left(A + \mathcal{D}\Delta(t)\mathcal{E}\right)x(t) + B\left\{Kx(t) + \psi\left(x, e, \mathcal{L}, t\right)\right\} - A\overline{x}(t) - BK\overline{x}(t)$$

$$= A_{\mathcal{F}}e(t) + \mathcal{D}\Delta(t)\mathcal{E}x(t) + B\mathcal{L}(x, e, t)e(t) \tag{4.9}$$

In (4.9), *A*_F ∈ **R**^(*n*×*n*) is a matrix given by *A*_F ≜ *A*_K + *B*F. Note that if asymptotic stability of the uncertain error system (4.9) is ensured, then the uncertain system (4.1) is robustly stable, because *e*(*t*) ≜ *x*(*t*) − x̄(*t*) and the nominal closed-loop system is stable. Here, the fixed gain matrix F ∈ **R**^(*m*×*n*) is determined by using LQ control theory for the nominal error system; namely, F = −R_F⁻¹*B*^T X_F, where X_F ∈ **R**^(*n*×*n*) is the unique solution of the algebraic Riccati equation

$$A\_K^T \mathcal{X}\_{\mathcal{F}} + \mathcal{X}\_{\mathcal{F}} A\_K - \mathcal{X}\_{\mathcal{F}} B \mathcal{R}\_{\mathcal{F}}^{-1} B^T \mathcal{X}\_{\mathcal{F}} + \mathcal{Q}\_{\mathcal{F}} = 0 \tag{4.10}$$

where QF <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup>* and RF <sup>∈</sup> **<sup>R</sup>***m*×*<sup>m</sup>* are design parameters and symmetric positive definite matrices. A decision method of the time-varying matrix <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>n</sup>* will be stated in the next subsection.
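Analogously, the fixed gain F of (4.8) can be computed by solving the Riccati equation (4.10) for the nominal error system. The sketch below uses illustrative placeholder matrices, not the chapter's example.

```python
# Sketch of the fixed gain F of (4.8): solve the Riccati equation (4.10)
# for the nominal error system and set F = -R_F^{-1} B^T X_F.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -1.0]])       # placeholder nominal matrices
B = np.array([[0.0], [1.0]])
P = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = -np.linalg.solve(np.eye(1), B.T @ P)       # LQ gain of (4.6)
A_K = A + B @ K                                # stable by the LQ design

Q_F = np.eye(2)                                # design parameters of (4.10)
R_F = np.eye(1)
X_F = solve_continuous_are(A_K, B, Q_F, R_F)   # unique stabilizing solution
F = -np.linalg.solve(R_F, B.T @ X_F)           # F = -R_F^{-1} B^T X_F
A_F = A_K + B @ F                              # error-system matrix in (4.9)
assert np.all(np.linalg.eigvals(A_F).real < 0)
```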

From the above discussion, our control objective in this section is to design a robust stabilizing controller for the uncertain error system (4.9), i.e. to design the variable gain matrix L(*x*, *e*, *t*) ∈ **R**^(*m*×*n*) such that the error system with uncertainties (4.9) is asymptotically stable.

#### **4.2 Synthesis of variable gain robust state feedback controllers via PLFs**

The following theorem gives sufficient conditions for the existence of the proposed controller.

**Theorem 4.** *Consider the uncertain error system (4.9) and the control input (4.7) and (4.8).*

*Suppose that the matrices* S*<sup>k</sup>* � <sup>=</sup> <sup>P</sup><sup>1</sup> <sup>+</sup> <sup>P</sup><sup>2</sup> <sup>+</sup> ··· <sup>+</sup> PN <sup>+</sup> <sup>P</sup>*kBB<sup>T</sup>*P*k*(*<sup>k</sup>* <sup>=</sup> 1, ··· , <sup>N</sup> ) *satisfy the relation (4.3), where* <sup>P</sup>*<sup>k</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup> are symmetric positive definite matrices*† *satisfying the matrix inequalities*

$$\begin{aligned} \left(\mathcal{P}\_1 + \mathcal{P}\_2 + \dots + \mathcal{P}\_N + \mathcal{P}\_k \mathcal{B} \mathcal{B}^T \mathcal{P}\_k\right) A\_{\mathcal{F}} + A\_{\mathcal{F}}^T \left(\mathcal{P}\_1 + \mathcal{P}\_2 + \dots + \mathcal{P}\_N + \mathcal{P}\_k \mathcal{B} \mathcal{B}^T \mathcal{P}\_k\right) \\ + \sum\_{j=1}^{N-1} \gamma\_j^{(k)} \mathcal{P}\_k \mathcal{B} \mathcal{B}^T \mathcal{P}\_k + \mathcal{Q}\_k < 0 \quad (k = 1, \dots, N) \end{aligned} \tag{4.11}$$

† i.e. <sup>S</sup>*<sup>k</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup>* are symmetric positive definite matrices.


Synthesis of Variable Gain Robust Controllers for a Class of Uncertain Dynamical Systems 331


*In (4.11), <sup>γ</sup>*(*k*) *<sup>j</sup>* (*<sup>k</sup>* <sup>=</sup> 1, ··· , <sup>N</sup> , *<sup>j</sup>* <sup>=</sup> 1, ··· , N − <sup>1</sup>) *are positive scalars and* <sup>Q</sup>*<sup>k</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup> are symmetric positive definite matrices.*

*By using the matrices* <sup>S</sup>*<sup>k</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*n,* <sup>L</sup>(*x*,*e*, *<sup>t</sup>*) <sup>∈</sup> **<sup>R</sup>***m*×*<sup>n</sup> is determined as*

$$\mathcal{L}(\mathbf{x}, e, t) = -\frac{\left( \left| \left| \mathcal{D}^T \mathcal{S}\_k e(t) \right| \left| \left| \mathcal{E} \mathbf{x}(t) \right| \right| \right)^2}{\left( \sigma(t) + \left| \left| \mathcal{D}^T \mathcal{S}\_k e(t) \right| \right| \left| \left| \mathcal{E} \mathbf{x}(t) \right| \right| \right) \left| \left| B^T \mathcal{S}\_k e(t) \right| \right|^2} B^T \mathcal{S}\_k$$
 
$$\text{for} \quad k = \arg\max\_k \left\{ e^T(t) \mathcal{P}\_k B B^T \mathcal{P}\_k e(t) \right\} \tag{4.12}$$

*In (4.12), σ*(*t*) ∈ **R**¹ *is any positive, uniformly continuous and bounded function which satisfies*

$$\int\_{t\_0}^{t} \sigma(\tau) d\tau \le \sigma^\* < \infty \tag{4.13}$$

*where σ*∗ *is any positive constant and t*₀ *denotes an initial time. Then all solutions of the uncertain error system (4.9) are bounded and*

$$\lim\_{t \to \infty} e(t; t\_0, e(t\_0)) = 0 \tag{4.14}$$

*Namely, asymptotic stability of the uncertain error system (4.9) is ensured.*
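To illustrate how the controller of **Theorem 4** would be evaluated on-line, the sketch below computes the variable gain (4.12) for placeholder matrices, with σ(*t*) = e^(−*t*) as one admissible choice for (4.13) (σ* = 1); *e*(*t*) = 0 must be excluded (see **Remark 4**).

```python
# Sketch of the variable gain (4.12): select the active index
# k = argmax_k e^T P_k B B^T P_k e, then evaluate L(x, e, t).
# B, D, E, P_k and sigma(t) below are illustrative placeholders.
import numpy as np

def variable_gain(x, e, t, B, D, E, S_list, P_list, sigma):
    # active branch of the piecewise Lyapunov function
    k = int(np.argmax([float(e @ P @ B @ B.T @ P @ e) for P in P_list]))
    Sk = S_list[k]
    a = np.linalg.norm(D.T @ Sk @ e) * np.linalg.norm(E @ x)
    b = np.linalg.norm(B.T @ Sk @ e) ** 2
    # L(x, e, t) of (4.12); requires e != 0 (Remark 4)
    return -(a ** 2) / ((sigma(t) + a) * b) * (B.T @ Sk)

B = np.array([[0.0], [1.0]])
D = np.eye(2)
E = np.eye(2)
P_list = [np.eye(2), 2.0 * np.eye(2)]
S_list = [sum(P_list) + P @ B @ B.T @ P for P in P_list]  # S_k of Theorem 4
sigma = lambda t: np.exp(-t)   # positive, uniformly continuous, integral <= 1
x = np.array([1.0, 0.0])
e = np.array([0.5, 1.0])
L_val = variable_gain(x, e, 0.0, B, D, E, S_list, P_list, sigma)
```

By construction the compensation term 2 *e*ᵀS_k B L(*x*, *e*, *t*)*e* is negative, which is what cancels the uncertainty bound in the derivation of (4.18).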

*Proof.* Using symmetric positive definite matrices <sup>P</sup>*<sup>k</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup>* (*<sup>k</sup>* <sup>=</sup> 1, ··· , <sup>N</sup> ) which satisfy (4.11), we introduce the following piecewise quadratic function.

$$\mathcal{V}(e,t) = e^T(t)\mathcal{S}\_k e(t) \quad \text{for} \quad k = \arg\max\_k \left\{ e^T(t)\mathcal{P}\_k B B^T \mathcal{P}\_k e(t) \right\} \quad \text{and} \quad k = 1, \cdots, N$$

$$= \max\_k \left\{ e^T(t)\mathcal{S}\_k e(t) \right\} \tag{4.15}$$
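As a concrete illustration, the piecewise quadratic function (4.15) can be evaluated numerically by selecting the active index *k*. The sketch below builds the matrices S_k of **Theorem 4** from placeholder P_k and B (not the chapter's example) and confirms that the selected branch attains the maximum.

```python
# Sketch of the piecewise quadratic function (4.15) with
# S_k = P_1 + ... + P_N + P_k B B^T P_k as in Theorem 4.
import numpy as np

def plf(e, P_list, B):
    S_list = [sum(P_list) + P @ B @ B.T @ P for P in P_list]
    # k = argmax_k e^T P_k B B^T P_k e selects the active branch
    k = int(np.argmax([float(e @ P @ B @ B.T @ P @ e) for P in P_list]))
    return float(e @ S_list[k] @ e), k

P_list = [np.eye(2), 2.0 * np.eye(2)]   # placeholder P_k
B = np.array([[0.0], [1.0]])            # placeholder input matrix
e = np.array([1.0, 2.0])
v, k = plf(e, P_list, B)
# with this construction the selected branch equals max_k e^T S_k e
```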

Note that the piecewise quadratic function V(*e*, *t*) is continuous and its level set is closed. The time derivative of the piecewise quadratic function V(*e*, *t*) can be written as

$$\frac{d}{dt}\mathcal{V}(e,t) = e^T(t)\left(\mathcal{S}\_k A\_{\mathcal{F}} + A\_{\mathcal{F}}^T \mathcal{S}\_k\right)e(t) + 2e^T(t)\mathcal{S}\_k \mathcal{D}\Delta(t)\mathcal{E}\mathbf{x}(t) + 2e^T(t)\mathcal{S}\_k \mathcal{B}\mathcal{L}(\mathbf{x}, e, t)e(t)$$

$$\text{for} \quad k = \arg\max\_k \left\{ e^T(t)\mathcal{P}\_k \mathcal{B}\mathcal{B}^T \mathcal{P}\_k e(t) \right\} \tag{4.16}$$

Now, using **Lemma 1**, we can obtain

$$\frac{d}{dt}\mathcal{V}(e,t) \le e^T(t) \left(\mathcal{S}\_k A\_{\mathcal{F}} + A\_{\mathcal{F}}^T \mathcal{S}\_k\right) e(t) + 2 \left\|\mathcal{D}^T \mathcal{S}\_k e(t)\right\| \left\|\mathcal{E}x(t)\right\|$$

$$+ 2e^T(t) \mathcal{S}\_k B \mathcal{L}(x, e, t) e(t) \quad \text{for} \quad k = \arg\max\_k \left\{ e^T(t) \mathcal{P}\_k B B^T \mathcal{P}\_k e(t) \right\} \tag{4.17}$$

Also, applying the time-varying gain matrix (4.12) to the relation (4.17), some trivial manipulations give the following relation for the time derivative of the piecewise quadratic function V(*e*, *t*).


$$\begin{split} \frac{d}{dt}\mathcal{V}(e,t) &\le e^{T}(t)\left(\mathcal{S}_{k}A_{\mathcal{F}}+A_{\mathcal{F}}^{T}\mathcal{S}_{k}\right)e(t)+2\left\|\mathcal{D}^{T}\mathcal{S}_{k}e(t)\right\|\left\|\mathcal{E}x(t)\right\| \\ &\quad +2e^{T}(t)\mathcal{S}_{k}B\left\{-\frac{\left(\left\|\mathcal{D}^{T}\mathcal{S}_{k}e(t)\right\|\left\|\mathcal{E}x(t)\right\|\right)^{2}}{\left(\sigma(t)+\left\|\mathcal{D}^{T}\mathcal{S}_{k}e(t)\right\|\left\|\mathcal{E}x(t)\right\|\right)\left\|B^{T}\mathcal{S}_{k}e(t)\right\|^{2}}B^{T}\mathcal{S}_{k}\right\}e(t) \\ &\qquad\qquad \text{for}\quad k=\operatorname*{arg\,max}_{k}\left\{e^{T}(t)\mathcal{P}_{k}BB^{T}\mathcal{P}_{k}e(t)\right\} \\ &\le e^{T}(t)\left(\mathcal{S}_{k}A_{\mathcal{F}}+A_{\mathcal{F}}^{T}\mathcal{S}_{k}\right)e(t)+\sigma(t) \quad \text{for}\quad k=\operatorname*{arg\,max}_{k}\left\{e^{T}(t)\mathcal{P}_{k}BB^{T}\mathcal{P}_{k}e(t)\right\} \end{split} \tag{4.18}$$

Now we consider the following inequality.

$$e^{T}(t)\left(\mathcal{S}\_{k}A\_{\mathcal{F}} + A\_{\mathcal{F}}^{T}\mathcal{S}\_{k}\right)e(t) < 0 \quad \text{for} \quad k = \arg\max\_{k}\left\{e^{T}(t)\mathcal{P}\_{k}BB^{T}\mathcal{P}\_{k}e(t)\right\} \tag{4.19}$$

By using **Lemma 4** (S-procedure), the inequality (4.19) is satisfied if and only if there exist <sup>S</sup>*<sup>k</sup>* <sup>&</sup>gt; 0 and *<sup>γ</sup>*(*k*) *<sup>j</sup>* ≥ 0 (*j* = 1, ··· , N − 1, *k* = 1, ··· , N ) satisfying

$$\begin{aligned} \mathcal{S}\_1 A\_{\mathcal{F}} + A\_{\mathcal{F}}^T \mathcal{S}\_1 + \sum\_{j=1}^{N-1} \gamma\_j^{(1)} \mathcal{P}\_1 \mathcal{B} \mathcal{B}^T \mathcal{P}\_1 - \gamma\_1^{(1)} \mathcal{P}\_2 \mathcal{B} \mathcal{B}^T \mathcal{P}\_2 - \dots - \gamma\_{N-1}^{(1)} \mathcal{P}\_N \mathcal{B} \mathcal{B}^T \mathcal{P}\_N &< 0 \\ \vdots \\ \mathcal{S}\_N A\_{\mathcal{F}} + A\_{\mathcal{F}}^T \mathcal{S}\_N + \sum\_{j=1}^{N-1} \gamma\_j^{(N)} \mathcal{P}\_N \mathcal{B} \mathcal{B}^T \mathcal{P}\_N - \gamma\_1^{(N)} \mathcal{P}\_2 \mathcal{B} \mathcal{B}^T \mathcal{P}\_2 - \dots - \gamma\_{N-1}^{(N)} \mathcal{P}\_{N-1} \mathcal{B} \mathcal{B}^T \mathcal{P}\_{N-1} &< 0 \end{aligned} \tag{4.20}$$

Since the condition (4.11) is a sufficient condition for the matrix inequalities (4.20), if the inequalities (4.11) are satisfied, then the condition (4.20) is also satisfied. Therefore, we have the following relation.

$$e^{T}(t)\left(\mathcal{S}_{k}A_{\mathcal{F}} + A_{\mathcal{F}}^{T}\mathcal{S}_{k}\right)e(t) < -e^{T}(t)\,\mathcal{Q}_{k}e(t)\tag{4.21}$$

Besides, by letting *ζ<sup>k</sup>* � = min *<sup>k</sup>* {*λ*min {Q*k*}}, we obtain

$$\frac{d}{dt}\mathcal{V}(e,t) \le -\zeta\_k \left\| e(t) \right\|^2 + \sigma(t) \quad \text{for} \quad k = \arg\max\_k \left\{ e^T(t) \mathcal{P}\_k B B^T \mathcal{P}\_k e(t) \right\} \tag{4.22}$$

On the other hand, from the definition of the piecewise quadratic function, there always exist two positive constants *δ*min and *δ*max such that for any *t* ≥ *t*0,

$$
\eta^-\left(||e(t)||\right) \le \mathcal{V}\left(e, t\right) \le \eta^+\left(||e(t)||\right)\tag{4.23}
$$

where *η*⁻(‖*e*(*t*)‖) and *η*⁺(‖*e*(*t*)‖) are given by

$$\begin{aligned} \eta^{-}\left(\|e(t)\|\right) &\stackrel{\triangle}{=} \delta_{\min}\left\|e(t)\right\|^{2} \\ \eta^{+}\left(\|e(t)\|\right) &\stackrel{\triangle}{=} \delta_{\max}\left\|e(t)\right\|^{2} \end{aligned} \tag{4.24}$$



It is obvious that any solution *e*(*t*; *t*0,*e*(*t*0)) of the error system is continuous. In addition, it follows from (4.22) and (4.23), that for any *t* ≥ *t*0, the following relation holds.

$$\begin{split} 0 \le \eta^-\left(\|e(t)\|\right) \le \mathcal{V}\left(e,t\right) &= \mathcal{V}\left(e,t_0\right) + \int_{t_0}^{t}\frac{d}{d\tau}\mathcal{V}(e,\tau)\,d\tau \\ &\le \eta^+\left(\|e(t_0)\|\right) - \int_{t_0}^{t}\eta^*\left(\|e(\tau)\|\right)d\tau + \int_{t_0}^{t}\sigma(\tau)\,d\tau \end{split} \tag{4.25}$$

In (4.25), *η**(‖*e*(*t*)‖) is defined as

$$\eta^\*\left(\left||e(t)\right||\right) \stackrel{\triangle}{=} \zeta\_k\left||e(t)\right||^2\tag{4.26}$$

Therefore, from (4.25) we can obtain the following two results. Firstly, taking the limit as *t* approaches infinity on both sides of the inequality (4.25), we have

$$0 \le \eta^+\left(||e(t\_0)||\right) - \lim\_{t \to \infty} \int\_{t\_0}^t \eta^\*\left(||e(\tau)||\right)d\tau + \lim\_{t \to \infty} \int\_{t\_0}^t \sigma(\tau)d\tau\tag{4.27}$$

Thus one can see from (4.13) and (4.27) that

$$\lim_{t \to \infty} \int_{t_0}^{t} \eta^* \left( \|e(\tau)\| \right) d\tau \le \eta^+ \left( \|e(t_0)\| \right) + \sigma^* \tag{4.28}$$

On the other hand, from (4.25), we obtain

$$0 \le \eta^- \left( \| \| e(t) \| \right) \le \eta^+ \left( \| e(t\_0) \| \right) + \int\_{t\_0}^t \sigma(\tau) d\tau \tag{4.29}$$

It follows from (4.13) and (4.29) that

$$0 \le \eta^- \left( \| \| e(t) \| \right) \le \eta^+ \left( \| e(t\_0) \| \right) + \sigma^\* \tag{4.30}$$

The relation (4.30) implies that *e*(*t*) is uniformly bounded. Since *e*(*t*) is bounded and the nominal state x̄(*t*) is bounded, *x*(*t*) = *e*(*t*) + x̄(*t*) is also bounded, and thus from (4.9) the derivative of *e*(*t*) is bounded; hence *e*(*t*) is uniformly continuous. Therefore, one can see from the definition (4.26) that *η**(‖*e*(*t*)‖) is also uniformly continuous. Applying **Lemma 3** (Barbalat's lemma) to (4.28) yields

$$\lim_{t \to \infty} \eta^* \left( \|e(t)\| \right) = \lim_{t \to \infty} \zeta_k \left\|e(t)\right\|^2 = 0 \tag{4.31}$$

Namely, asymptotic stability of the uncertain error system (4.9) is ensured, and thus the uncertain linear system (4.1) is also robustly stable. This completes the proof of **Theorem 4**.

**Remark 2.** *In this section, we consider the uncertain dynamical system (4.1) which has uncertainties in the state matrix only. The proposed robust controller can also be applied to the case that the uncertainties are included in both the system matrix and the input one. By introducing additional actuator dynamics and constituting an augmented system, uncertainties in the input matrix are embedded in the system matrix of the augmented system(36). Therefore the same design procedure can be applied.*
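The augmentation mentioned in **Remark 2** can be sketched numerically. Assuming simple integrator actuator dynamics d*u*/d*t* = *v* (an illustrative choice; (36) gives the general construction), uncertainty in the input matrix moves into the system matrix of the augmented state [*x*; *u*].

```python
# Sketch of Remark 2: for dx/dt = (A + dA) x + (B + dB) u, appending
# actuator dynamics du/dt = v yields an augmented system whose
# uncertainties appear in the system matrix only.  All matrices are
# illustrative placeholders.
import numpy as np

n, m = 2, 1
A  = np.array([[0.0, 1.0], [-2.0, -1.0]])
B  = np.array([[0.0], [1.0]])
dA = 0.1 * np.eye(n)             # uncertainty in the state matrix
dB = np.array([[0.0], [0.2]])    # uncertainty in the input matrix

# augmented nominal system and input matrices for the state z = [x; u]
A_aug = np.block([[A, B], [np.zeros((m, n)), np.zeros((m, m))]])
B_aug = np.vstack([np.zeros((n, m)), np.eye(m)])
# both uncertainties now enter through the augmented system matrix only
dA_aug = np.block([[dA, dB], [np.zeros((m, n + m))]])
```

Multiplying (A_aug + dA_aug) by [*x*; *u*] reproduces the perturbed dynamics (A + dA)*x* + (B + dB)*u* in the first *n* rows, so the design procedure of this section applies to the augmented system unchanged.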


**Remark 3.** *In order to get the proposed controller, symmetric positive definite matrices* S*<sup>k</sup>* ∈ **<sup>R</sup>***n*×*<sup>n</sup>* (*<sup>k</sup>* <sup>=</sup> 1, ··· , <sup>N</sup> ) *satisfying the assumption (4.3) are required. The condition (4.3) is reduced to the following rank condition.*

$$\text{rank}\left\{ \left( \mathcal{S}\_1 \mathcal{B} \; \mathcal{S}\_2 \mathcal{B} \; \cdots \; \mathcal{S}\_N \mathcal{B} \right)^T \right\} = n \tag{4.32}$$

*However, there is no globally effective method for obtaining matrices satisfying the condition (4.32). In future work, we will examine the assumption (4.3) and the condition (4.32).*
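The rank check (4.32) itself is straightforward to mechanize once candidate matrices are available. A minimal sketch, with placeholder S_k and B:

```python
# Sketch of the rank condition (4.32): stack (S_1 B  S_2 B ... S_N B)^T
# and verify that its rank equals n, which corresponds to the
# intersection condition (4.3).  S_k and B are illustrative placeholders.
import numpy as np

def check_rank_condition(S_list, B, n):
    M = np.hstack([S @ B for S in S_list]).T   # (N*m) x n matrix
    return np.linalg.matrix_rank(M) == n

S_list = [np.eye(2), np.array([[2.0, 1.0], [1.0, 2.0]])]
B = np.array([[0.0], [1.0]])
# here S_1 B = (0, 1)^T and S_2 B = (1, 2)^T are linearly independent
assert check_rank_condition(S_list, B, 2)
```

If all S_k map B onto the same direction, the stacked matrix loses rank and the candidate set must be rejected, which is exactly the check performed after solving the LMIs (4.34).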

**Remark 4.** *In this section, we introduce the compensation input (4.8). From (4.8) and (4.12), one can see that if e*(*t*) = 0*, then the relation ψ* (*x*, *e*, L, *t*) ≡ 0 *is satisfied. Besides, we find that the variable gain matrix* L(*x*, *e*, *t*) ∈ **R**^(*m*×*n*) *can be calculated except for e*(*t*) = 0 *(see (24)).*

Now, we consider the condition (4.11) in **Theorem 4**. The condition (4.11) requires symmetric positive definite matrices <sup>P</sup>*<sup>k</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup>* and positive scalars *<sup>γ</sup>*(*k*) *<sup>j</sup>* <sup>∈</sup> **<sup>R</sup>**<sup>1</sup> for stability. In this section, on the basis of the works of Oya et al.(23; 24), we consider the following inequalities instead of (4.11).

$$\left(\mathcal{P}\_1 + \mathcal{P}\_2 + \dots + \mathcal{P}\_N\right) A\_{\mathcal{F}} + A\_{\mathcal{F}}^T \left(\mathcal{P}\_1 + \mathcal{P}\_2 + \dots + \mathcal{P}\_N\right)$$

$$+ \sum\_{j=1}^{N-1} \gamma\_j^{(k)} \mathcal{P}\_k B B^T \mathcal{P}\_k + \mathcal{Q}\_k < 0 \quad (k = 1, \dots, N) \tag{4.33}$$

In addition, introducing complementary variables *ξ*_*j*^(*k*) ≜ (*γ*_*j*^(*k*))⁻¹ (*j* = 1, ⋯, N − 1, *k* = 1, ⋯, N) and using **Lemma 3** (Schur complement), we find that the condition (4.33) is equivalent to the following LMIs.

$$\begin{pmatrix} \Psi\left(\mathcal{P}_{1},\cdots,\mathcal{P}_{N}\right) + \mathcal{Q}_{k} & \mathcal{P}_{k}B & \mathcal{P}_{k}B & \cdots & \mathcal{P}_{k}B \\ B^{T}\mathcal{P}_{k} & -\xi_{1}^{(k)}I_{m} & 0 & \cdots & 0 \\ B^{T}\mathcal{P}_{k} & 0 & -\xi_{2}^{(k)}I_{m} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ B^{T}\mathcal{P}_{k} & 0 & 0 & \cdots & -\xi_{\mathcal{N}-1}^{(k)}I_{m} \end{pmatrix} < 0, \quad \mathcal{P}_{k} > 0,\ \xi_{j}^{(k)} > 0 \quad (j = 1,\cdots,\mathcal{N}-1,\ k = 1,\cdots,\mathcal{N}) \tag{4.34}$$

where $\Psi\left(\mathcal{P}_1, \cdots, \mathcal{P}_N\right)$ in the (1, 1)-block of the LMIs (4.34) is given by

$$\Psi\left(\mathcal{P}\_{1},\cdots,\mathcal{P}\_{N}\right) = \left(\mathcal{P}\_{1} + \mathcal{P}\_{2} + \cdots + \mathcal{P}\_{N}\right)A\_{\mathcal{F}} + A\_{\mathcal{F}}^{T}\left(\mathcal{P}\_{1} + \mathcal{P}\_{2} + \cdots + \mathcal{P}\_{N}\right)\tag{4.35}$$

Note that if there exist symmetric positive definite matrices $\mathcal{P}_k \in \mathbb{R}^{n \times n}$ and positive scalars $\gamma_j^{(k)} \in \mathbb{R}^1$ which satisfy the matrix inequalities (4.34), then the matrix inequality condition (4.11) is also satisfied (23; 24).

From the above discussion, one can see that in order to obtain the proposed robust controller, positive scalars $\gamma_j^{(k)} \in \mathbb{R}^1$ and symmetric positive definite matrices $\mathcal{P}_k \in \mathbb{R}^{n \times n}$ which satisfy the LMIs (4.34) and the assumption (4.3) are needed. Therefore, we first solve the LMIs (4.34) and then check the rank condition (4.32).


Synthesis of Variable Gain Robust Controllers for a Class of Uncertain Dynamical Systems 335


### **4.3 Illustrative examples**

Consider the following uncertain linear system, i.e. the case $N = 2$.

$$\frac{d}{dt}x(t) = \begin{pmatrix} -4 & 1 \\ 0 & 2 \end{pmatrix} x(t) + \begin{pmatrix} 5 & -1 \\ 0 & 1 \end{pmatrix} \Delta(t) \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} x(t) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} u(t) \tag{4.36}$$

By applying **Theorem 4**, we derive the proposed robust controller. First, we select the weighting matrices $\mathcal{Q} \in \mathbb{R}^{2 \times 2}$ and $\mathcal{R} \in \mathbb{R}^{1 \times 1}$ in the quadratic cost function of the standard linear quadratic control problem for the nominal system as $\mathcal{Q} = 1.0 I_2$ and $\mathcal{R} = 4.0$, respectively. Then, solving the algebraic Riccati equation (4.6), we obtain the optimal gain matrix

$$K = \begin{pmatrix} -5.15278 \times 10^{-3} & -4.06405 \end{pmatrix} \tag{4.37}$$
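As a quick sanity check, the gain in (4.37) can be reproduced with SciPy's continuous-time algebraic Riccati solver. The sketch below assumes the control law is $u(t) = Kx(t)$ with $K = -\mathcal{R}^{-1}B^T P$, which matches the signs in (4.37); the Riccati equation (4.6) itself is outside this excerpt.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Nominal system matrices of the example (4.36) and LQ weights Q = I2, R = 4.0.
A = np.array([[-4.0, 1.0],
              [ 0.0, 2.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[4.0]])

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0 for the stabilizing P > 0.
P = solve_continuous_are(A, B, Q, R)

# Assumed sign convention: u(t) = K x(t) with K = -R^{-1} B^T P.
K = -np.linalg.solve(R, B.T @ P)
print(K)  # ≈ [[-5.15278e-03  -4.06405e+00]], matching (4.37)
```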

In addition, setting the design parameters $\mathcal{Q}_{\mathcal{F}}$ and $\mathcal{R}_{\mathcal{F}}$ as $\mathcal{Q}_{\mathcal{F}} = 10.0 \times 10^6 I_2$ and $\mathcal{R}_{\mathcal{F}} = 1.0$, respectively, we have the following fixed gain matrix.

$$\mathcal{F} = \begin{pmatrix} -1.23056 & -9.99806 \end{pmatrix} \times 10^3 \tag{4.38}$$

Besides, selecting the matrices Q*<sup>k</sup>* (*k* = 1, 2) in (4.34) as

$$\mathcal{Q}_1 = \begin{pmatrix} 20.0 & 1.0 \\ 1.0 & 1.0 \end{pmatrix}, \quad \mathcal{Q}_2 = \begin{pmatrix} 1.0 & 0.0 \\ 0.0 & 20.0 \end{pmatrix} \tag{4.39}$$

and solving the LMI condition (4.34), we get

$$\begin{aligned} \mathcal{P}_1 &= \begin{pmatrix} 7.59401 \times 10^{1} & 6.82676 \times 10^{-4} \\ 6.82676 \times 10^{-4} & 2.00057 \times 10^{-3} \end{pmatrix}, \\ \mathcal{P}_2 &= \begin{pmatrix} 7.59401 \times 10^{1} & 5.96286 \times 10^{-4} \\ 5.96286 \times 10^{-4} & 5.76862 \times 10^{-2} \end{pmatrix}, \\ \gamma_1 &= 7.13182 \times 10^{-3}, \quad \gamma_2 = 7.13182 \times 10^{-3} \end{aligned} \tag{4.40}$$

From (4.36) and (4.40), $\Omega_{\mathcal{S}_k}$ $(k = 1, 2)$ can be written as

$$\begin{aligned} \Omega\_{\mathcal{S}\_1} &= \{ \mathbf{x} \in \mathbb{R}^2 \mid 1.28240 \mathbf{x}\_1 + 7.80246 \mathbf{x}\_2 = 0 \} \\ \Omega\_{\mathcal{S}\_2} &= \{ \mathbf{x} \in \mathbb{R}^2 \mid 1.28032 \mathbf{x}\_1 + 7.77319 \mathbf{x}\_2 = 0 \} \end{aligned} \tag{4.41}$$

and thus the assumption (4.3) is satisfied.
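The feasibility claim can be spot-checked numerically. The sketch below substitutes the values from (4.38)–(4.40) into the matrix inequality (4.33) and verifies that the left-hand side is negative definite for $k = 1, 2$; it assumes the closed-loop matrix is $A_{\mathcal{F}} = A + B\mathcal{F}$ with $\mathcal{F}$ from (4.38), which is not stated explicitly in this excerpt.

```python
import numpy as np

A = np.array([[-4.0, 1.0], [0.0, 2.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-1.23056e3, -9.99806e3]])   # fixed gain (4.38)
AF = A + B @ F                              # assumed closed-loop matrix

Q1 = np.array([[20.0, 1.0], [1.0, 1.0]])    # weighting matrices (4.39)
Q2 = np.array([[1.0, 0.0], [0.0, 20.0]])
P1 = np.array([[7.59401e1, 6.82676e-4],     # LMI solutions (4.40)
               [6.82676e-4, 2.00057e-3]])
P2 = np.array([[7.59401e1, 5.96286e-4],
               [5.96286e-4, 5.76862e-2]])
gamma = 7.13182e-3

Psum = P1 + P2
feasible = True
for Pk, Qk in ((P1, Q1), (P2, Q2)):
    # Left-hand side of (4.33) with N = 2 (a single gamma term per k).
    M = Psum @ AF + AF.T @ Psum + gamma * (Pk @ B @ B.T @ Pk) + Qk
    feasible &= bool(np.max(np.linalg.eigvalsh(0.5 * (M + M.T))) < 0.0)

print(feasible)  # True: both inequalities (4.33) hold for the values above
```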

On the other hand, for the uncertain linear system (4.36), a quadratic stabilizing controller based on a fixed quadratic Lyapunov function cannot be obtained, because no solution of the LMI (A.1) exists.

In this example, we consider the following two cases for the unknown parameter Δ(*t*).

$$\bullet \ \text{Case 1)}: \ \Delta(t) = \begin{pmatrix} -4.07360 & 8.06857 \\ -4.41379 & 3.81654 \end{pmatrix} \times 10^{-1}$$

$$\bullet \ \text{Case 2)}: \ \Delta(t) = \begin{pmatrix} \cos(3.0\pi t) & 0 \\ 0 & -\sin(3.0\pi t) \end{pmatrix}$$


Besides, for numerical simulations, the initial values for the uncertain linear system (4.36) and the nominal system are selected as $x(0) = \overline{x}(0) = \begin{pmatrix} 2.0 & -1.0 \end{pmatrix}^T$ (i.e. $e(0) = \begin{pmatrix} 0.0 & 0.0 \end{pmatrix}^T$), respectively, and we choose $\sigma(t) \in \mathbb{R}^{+}$ in (4.12) as $\sigma(t) = 5.0 \times 10^{12} \times \exp\left(-1.0 \times 10^{-3}\, t\right)$.

The results of the simulation of this example are depicted in Figures 8–10. In these figures, "Case 1)" and "Case 2)" represent the time histories of the state variables $x_1(t)$ and $x_2(t)$ and the control input $u(t)$ for the proposed variable gain robust controller, and "Desired" shows the desired time response and the desired control input generated by the nominal system.

From Figures 8–10, we find that the proposed robust controller stabilizes the uncertain system (4.36) in spite of the uncertainties. Moreover, one can see from Figure 10 that the proposed controller avoids serious chattering. Therefore, the effectiveness of the proposed controller is shown.
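The "Desired" responses in Figures 8–10 come from the nominal closed-loop system, and the decay they show can be reproduced with a short simulation. The sketch below assumes the nominal control law is $u(t) = Kx(t)$ with $K$ from (4.37), and uses plain forward-Euler integration over the same 0–3 s window.

```python
import numpy as np

A = np.array([[-4.0, 1.0], [0.0, 2.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-5.15278e-3, -4.06405]])    # optimal gain (4.37)

Acl = A + B @ K                             # assumed nominal closed loop
x = np.array([2.0, -1.0])                   # initial state used in the simulations
dt, T = 1.0e-3, 3.0                         # 0-3 s window as in Figs. 8-10

for _ in range(int(T / dt)):                # forward-Euler integration
    x = x + dt * (Acl @ x)

print(np.linalg.norm(x))  # small: the nominal state has essentially decayed by t = 3
```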

Fig. 8. Time histories of the state *x*1(*t*)

Fig. 9. Time histories of the state *x*2(*t*)


Fig. 10. Time histories of the control input *u*(*t*)

### **4.4 Summary**

In this section, we have proposed a design method of a variable gain robust controller for a class of uncertain nonlinear systems. The uncertainties under consideration are composed of a matched part and an unmatched one, and by using the concept of piecewise Lyapunov functions, we have shown that the proposed robust controller can be obtained by solving the LMIs (4.34) and checking the rank condition (4.32). By numerical simulations, the effectiveness of the proposed controller has been presented.

### **5. Conclusions and future works**

In this chapter, we have presented a variable gain robust controller for a class of uncertain linear systems, and through numerical illustrations the effectiveness of the proposed variable gain robust controllers has been shown. The advantage of the proposed controller synthesis is as follows: the proposed variable gain robust controller, in which the real effect of the uncertainties is reflected as on-line information, is more flexible and adaptive than a conventional robust controller with a fixed gain derived by worst-case design for the parameter variations. Additionally, the proposed control systems are constructed by renewing the parameter which represents the perturbation region of the unknown parameters, and there is no need to solve any other equation for stability.

In Section 2, for linear systems with matched uncertainties, a design problem of variable gain robust state feedback controllers achieving transient behavior as close as possible to the desirable one generated by the nominal system is considered. Section 3 extends the result for the variable gain robust state feedback controller given in Section 2 to variable gain robust output feedback controllers. In that section, some assumptions on the structure of the system parameters are introduced and, by using these assumptions, an LMI-based variable gain robust output feedback controller synthesis has been presented. In Section 4, a design method of variable gain robust state feedback controllers via piecewise Lyapunov functions has been suggested. One can see that the crucial difference between the existing results and the proposed variable gain controller based on PLFs is that the proposed design procedure can stabilize uncertain linear systems which cannot be stabilized via conventional quadratic stabilizing controllers. Besides, it is obvious that the proposed variable gain robust control scheme is more effective for linear systems with larger uncertainties.

Future research subjects include the extension of the variable gain robust state feedback controller via PLFs to output feedback control systems. Besides, the extension to broader classes of systems, such as uncertain large-scale systems and uncertain time-delay systems, should be tackled. Furthermore, in future work we will examine the condition (3.2) in Section 3 and the assumptions (4.3) and (4.32) in Section 4.

On the other hand, the design of feedback controllers is often complicated by the presence of physical constraints: saturating actuators, temperatures and pressures within safety margins, and so on. If the constraints are violated, serious consequences may ensue; for example, physical components may be damaged, or saturation may cause a loss of closed-loop stability. In particular, input saturation is a common feature of control systems, and the stabilization problems of linear systems with control input saturation have been studied (e.g. (17; 32)). Furthermore, some researchers have investigated the analysis of constrained linear systems and reference managing for linear systems subject to input and state constraints (e.g. (10; 15)). Therefore, another future research subject is to address constrained robust control problems while reducing the effect of unknown parameters.

### **6. Appendix**


### **6.1 Quadratic stabilization**

The following lemma provides an LMI-based design method of a robust controller via the Lyapunov stability criterion.

**Lemma A.1.** *Consider the uncertain linear system (4.1) and the control law $u(t) = Hx(t)$. There exists a state feedback gain matrix $H \in \mathbb{R}^{m \times n}$ such that the control law $u(t) = Hx(t)$ is a quadratic stabilizing control if there exist $\mathcal{X} > 0$, $\mathcal{Y}$ and $\delta > 0$ satisfying the LMI*

$$
\begin{pmatrix}
A\mathcal{X} + \mathcal{X}A^T + \mathcal{B}\mathcal{Y} + \mathcal{Y}^T\mathcal{B}^T + \delta\mathcal{D}\mathcal{D}^T & \mathcal{X}\mathcal{E}^T \\
\mathcal{E}\mathcal{X} & -\delta I_q
\end{pmatrix} < 0 \tag{A.1}
$$

*If a solution $\mathcal{X}$, $\mathcal{Y}$, $\delta$ of the LMI (A.1) exists, then the gain matrix $H \in \mathbb{R}^{m \times n}$ is obtained as $H = \mathcal{Y}\mathcal{X}^{-1}$.*
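The equivalence between the block LMI (A.1) and the condensed inequality (A.4) that the proof below establishes can be illustrated numerically. The sketch uses hypothetical toy data ($A$, $B$, $\mathcal{D}$, $\mathcal{E}$, $\mathcal{X}$, $\mathcal{Y}$, $\delta$ chosen by hand, not taken from this chapter) and checks that both matrices are negative definite, as the Schur complement argument predicts.

```python
import numpy as np

def is_neg_def(M):
    """True if the symmetrized matrix M is negative definite."""
    return bool(np.max(np.linalg.eigvalsh(0.5 * (M + M.T))) < 0.0)

# Hypothetical data for illustration only (not from the chapter).
A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[0.0], [1.0]])
D = np.array([[0.1], [0.1]])
E = np.array([[1.0, 0.0]])      # q = 1
X = np.eye(2)                   # candidate X = P^{-1} > 0
Y = np.zeros((1, 2))            # Y = H X with H = 0
delta = 1.0

common = A @ X + X @ A.T + B @ Y + Y.T @ B.T + delta * D @ D.T

# Block form (A.1).
lmi = np.block([[common,      X @ E.T],
                [E @ X, -delta * np.eye(1)]])
# Condensed form (A.4), i.e. the Schur complement of the (2,2)-block of (A.1).
condensed = common + (1.0 / delta) * X @ E.T @ E @ X

print(is_neg_def(lmi), is_neg_def(condensed))  # True True
```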

*Proof.* Introducing the quadratic function $\mathcal{V}(x, t) \triangleq x^T(t)\mathcal{P}x(t)$ as a Lyapunov function candidate, we have

$$\begin{split} \frac{d}{dt} \mathcal{V}(\mathbf{x}, t) &= \mathbf{x}^T(t) \left\{ \mathcal{P} \left( A + BH \right) + \left( A + BH \right)^T \mathcal{P} \right\} \mathbf{x}(t) + 2\mathbf{x}^T(t) \mathcal{P} \mathcal{D} \Delta(t) \mathcal{E} \mathbf{x}(t) \\ &\leq \mathbf{x}^T(t) \left\{ \mathcal{P} \left( A + BH \right) + \left( A + BH \right)^T \mathcal{P} \right\} \mathbf{x}(t) + \delta \mathbf{x}^T(t) \mathcal{P} \mathcal{D} \mathcal{D}^T \mathcal{P} \mathbf{x}(t) + \frac{1}{\delta} \mathbf{x}^T(t) \mathcal{E}^T \mathcal{E} \mathbf{x}(t) \end{split} \tag{A.2}$$

Here we have used the well-known relation (3.12). Thus the uncertain linear system (4.1) is robustly stable provided that the following relation is satisfied.

$$\mathcal{P}\left(A+\mathcal{B}H\right) + \left(A+\mathcal{B}H\right)^{T}\mathcal{P} + \delta\mathcal{P}\mathcal{D}\mathcal{D}^{T}\mathcal{P} + \frac{1}{\delta}\mathcal{E}^{T}\mathcal{E} < 0\tag{A.3}$$

We introduce the matrix $\mathcal{X} \triangleq \mathcal{P}^{-1}$ and consider the change of variable $\mathcal{Y} \triangleq H\mathcal{X}$. Then, by pre- and post-multiplying (A.3) by $\mathcal{X} = \mathcal{P}^{-1}$, we have

$$A\mathcal{X} + \mathcal{X}A^T + \mathcal{B}\mathcal{Y} + \mathcal{Y}^T\mathcal{B}^T + \delta\mathcal{D}\mathcal{D}^T + \frac{1}{\delta}\mathcal{X}\mathcal{E}^T\mathcal{E}\mathcal{X} < 0\tag{A.4}$$
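The "well-known relation" used to pass from the first to the second line of (A.2) is the bound $2a^T\Delta b \le \delta a^Ta + \frac{1}{\delta}b^Tb$ for any $\delta > 0$ and any $\Delta$ with $\|\Delta\| \le 1$; this phrasing of relation (3.12) is an assumption, since (3.12) itself lies outside this excerpt. The sketch below checks the bound on random data.

```python
import numpy as np

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    a = rng.standard_normal(3)
    b = rng.standard_normal(3)
    Delta = rng.standard_normal((3, 3))
    Delta /= max(1.0, np.linalg.norm(Delta, 2))   # enforce ||Delta|| <= 1
    delta = float(rng.uniform(0.1, 10.0))
    lhs = 2.0 * a @ Delta @ b
    rhs = delta * (a @ a) + (b @ b) / delta
    ok &= bool(lhs <= rhs + 1e-12)

print(ok)  # True
```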


One can see from **Lemma 2** (Schur complement) that the inequality (A.4) is equivalent to the LMI (A.1).

### **7. References**

[1] H. L. S. Almeida, A. Bhaya and D. M. Falcão, "A Team Algorithm for Robust Stability Analysis and Control Design of Uncertain Time-Varying Systems using Piecewise Quadratic Lyapunov Functions", Int. J. Robust and Nonlinear Contr., Vol.11, No.1, pp.357-371, 2001.

[2] R. E. Benton, Jr. and D. Smith, "A Non-iterative LMI-based Algorithm for Robust Static-output-feedback Stabilization", Int. J. Contr., Vol.72, No.14, pp.1322-1330, 1999.

[3] B. R. Barmish, M. Corless and G. Leitmann, "A New Class of Stabilizing Controllers for Uncertain Dynamical Systems", SIAM J. Contr. Optimiz., Vol.21, No.2, pp.246-255, 1983.

[4] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics, 1994.

[5] Y. K. Choi, M. J. Chung and Z. Bien, "An Adaptive Control Scheme for Robot Manipulators", Int. J. Contr., Vol.44, No.4, pp.1185-1191, 1986.

[6] J. C. Doyle, K. Glover, P. P. Khargonekar and B. A. Francis, "State-Space Solutions to Standard H2 and H∞ Control Problems", IEEE Trans. Automat. Contr., Vol.34, No.8, pp.831-847, 1989.

[7] P. Gahinet, P. Apkarian and M. Chilali, "Affine Parameter Dependent Lyapunov Functions and Real Parameter Uncertainty", IEEE Trans. Automat. Contr., Vol.41, No.3, pp.436-442, 1996.

[8] F. R. Gantmacher, "The Theory of Matrices", Vol.1, Chelsea Publishing Company, New York, 1960.

[9] J. C. Geromel, C. C. De Souza and R. E. Skelton, "LMI Numerical Solution for Output Feedback Stabilization", Proc. of the 1994 American Contr. Conf., Baltimore, MD, USA, pp.40-44, 1994.

[10] E. G. Gilbert and I. Kolmanovsky, "Nonlinear Tracking Control in the Presence of State and Control Constraints: A Generalized Reference Governor", Automatica, Vol.38, No.12, pp.2071-2077, 2002.

[11] T. Iwasaki, R. E. Skelton and J. C. Geromel, "Linear Quadratic Suboptimal Control with Static Output Feedback", Syst. & Contr. Lett., Vol.23, No.6, pp.421-430, 1994.

[12] F. Jabbari and W. E. Schmitendorf, "Effect of Using Observers on Stabilization of Uncertain Linear Systems", IEEE Trans. Automat. Contr., Vol.38, No.2, pp.266-271, 1993.

[13] H. K. Khalil, "Nonlinear Systems, Third Edition", Prentice Hall, 2002.

[14] P. P. Khargonekar and M. A. Rotea, "Mixed H2/H∞ Control: A Convex Optimization Approach", IEEE Trans. Automat. Contr., Vol.36, No.7, pp.824-837, 1991.

[15] K. Kogiso and K. Hirata, "Reference Governor for Constrained Systems with Time-Varying References", J. Robotics and Autonomous Syst., Vol.57, Issue 3, pp.289-295, 2009.

[16] V. Kučera and C. E. De Souza, "A Necessary and Sufficient Condition for Output Feedback Stabilizability", Automatica, Vol.31, No.9, pp.1357-1359, 1995.

[17] Z. Lin, M. Pachter and S. Band, "Toward Improvement of Tracking Performance – Nonlinear Feedback for Linear Systems", Int. J. Contr., Vol.70, No.1, pp.1-11, 1998.

[18] M. Maki and K. Hagino, "Robust Control with Adaptation Mechanism for Improving Transient Behavior", Int. J. Contr., Vol.72, No.13, pp.1218-1226, 1999.

[19] H. Oya and K. Hagino, "Robust Servo System with Adaptive Compensation Input for Linear Uncertain Systems", Proc. of the 4th Asian Contr. Conf., pp.972-977, Singapore, 2002.

[20] H. Oya and K. Hagino, "Observer-based Robust Control Giving Consideration to Transient Behavior for Linear Systems with Structured Uncertainties", Int. J. Contr., Vol.75, No.15, pp.1231-1240, 2002.

[21] H. Oya and K. Hagino, "Robust Control with Adaptive Compensation Input for Linear Uncertain Systems", IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences, Vol.E86-A, No.6, pp.1517-1524, 2003.

[22] H. Oya and K. Hagino, "Adaptive Robust Control Scheme for Linear Systems with Structured Uncertainties", IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences, Vol.E87-A, No.8, pp.2168-2173, 2004.

[23] H. Oya, K. Hagino, S. Kayo and M. Matsuoka, "Adaptive Robust Stabilization for a Class of Uncertain Linear Systems via Variable Gain Controllers", Proc. of the 45th IEEE Conf. on Decision and Contr., pp.1183-1188, San Diego, USA, 2006.

[24] H. Oya, K. Hagino and S. Kayo, "Adaptive Robust Control Based on Piecewise Lyapunov Functions for a Class of Uncertain Linear Systems", Proc. of the European Contr. Conf. 2007 (ECC2007), pp.810-815, Kos, Greece, 2007.

[25] H. Oya, K. Hagino and S. Kayo, "Synthesis of Adaptive Robust Output Feedback Controllers for a Class of Uncertain Linear Systems", Proc. of the 47th IEEE Conf. on Decision and Contr., pp.995-1000, Cancun, Mexico, 2008.

[26] H. Oya and K. Hagino, "A New Adaptive Robust Controller Avoiding Chattering Phenomenon for a Class of Uncertain Linear Systems", Proc. of the 28th IASTED Int. Conf. on Modeling, Identification and Contr., pp.236-241, Innsbruck, 2009.

[27] I. R. Petersen, "A Riccati Equation Approach to the Design of Stabilizing Controllers and Observers for a Class of Uncertain Linear Systems", IEEE Trans. Automat. Contr., Vol.30, No.9, pp.904-907, 1985.

[28] I. R. Petersen and D. C. McFarlane, "Optimal Guaranteed Cost Control and Filtering for Uncertain Linear Systems", IEEE Trans. Automat. Contr., Vol.39, No.9, pp.1971-1977, 1994.

[29] I. R. Petersen and C. C. Hollot, "A Riccati Equation Approach to the Stabilization of Uncertain Linear Systems", Automatica, Vol.22, No.4, pp.397-411, 1986.

[30] E. S. Pyatnitskii and V. I. Skorodinskii, "Numerical Methods of Lyapunov Function Construction and Their Application to the Absolute Stability Problem", Syst. & Contr. Lett., Vol.2, No.1, 1986.

[31] S. O. Reza Moheimani and I. R. Petersen, "Optimal Guaranteed Cost Control of Uncertain Systems via Static and Dynamic Output Feedback", Automatica, Vol.32, No.4, pp.575-579, 1996.

[32] M. C. Turner, I. Postlethwaite and D. J. Walker, "Non-linear Tracking Control for Multivariable Constrained Input Linear Systems", Int. J. Contr., Vol.73, No.12, pp.1160-1172, 2000.

[33] S. Ushida, S. Yamamoto and H. Kimura, "Quadratic Stabilization by H∞ State Feedback Controllers with Adjustable Parameters", Proc. of the 35th IEEE Conf. on Decision and Contr., pp.1003-1008, Kobe, Japan, 1996.

[34] V. Veselý, "Design of Robust Output Affine Quadratic Controller", Proc. of 2002 IFAC World Congress, pp.1-6, Barcelona, Spain, 2002.

[35] L. Xie, S. Shishkin and M. Fu, "Piecewise Lyapunov Functions for Robust Stability of Linear Time-Varying Systems", Syst. & Contr. Lett., Vol.31, No.3, 1997.

[36] K. Zhou and P. P. Khargonekar, "Robust Stabilization of Linear Systems with Norm Bounded Time-Varying Uncertainty", Syst. & Contr. Lett., Vol.10, No.1, pp.17-20, 1988.

[37] K. Zhou, "Essentials of Robust Control", Prentice Hall Inc., New Jersey, USA, 1998.




## **Simplified Deployment of Robust Real-Time Systems Using Multiple Model and Process Characteristic Architecture-Based Process Solutions**

Ciprian Lupu

*Department of Automatics and Computer Science, University "Politehnica" Bucharest Romania*

## **1. Introduction**


340 Recent Advances in Robust Control – Novel Approaches and Design Methods


A common industrial practice is to find specific control structures for nonlinear processes that reduce the design task, as far as possible, to classic control approaches. In many situations the design of a robust controller leads to complex hardware and software requirements. The international literature offers some interesting solutions (Kuhnen & Janocha, 2001; Dai et al., 2003; Wang & Su, 2006) for reducing the implementation complexity.

The following sections present, in the first part, some elements of the classic robust design of the RST control algorithm and, in the second, two alternative solutions based on multiple-model and nonlinearity-compensator structures.

## **2. Some elements of classic RST robust control design**

The robustness of a system is related mainly to changes in the model parameters or to structural uncertainties of the estimated model (Landau et al., 1997). A simple frequency analysis shows that the critical Nyquist point (i.e. the point (-1, 0) in the complex plane) plays an important role in assessing the robustness of the system. In this plane we can trace the hodograph (Nyquist plot) of the open-loop system, i.e. its frequency response. The distance from the hodograph to the critical point (the modulus margin), i.e. the radius of the circle centered at the critical point and tangent to the hodograph, is a measure of the intrinsic robustness of the system: the greater this distance, the more robust the system.

Fig. 1. RST control algorithm structure



For this study we use an RST algorithm, robustified using pole placement procedures (Landau et al., 1997). Fig. 1 presents the RST algorithm structure. The R, S, T polynomials are:

$$\begin{aligned} R\left(q^{-1}\right) &= r\_0 + r\_1 q^{-1} + \dots + r\_{nr} q^{-nr} \\ S\left(q^{-1}\right) &= s\_0 + s\_1 q^{-1} + \dots + s\_{ns} q^{-ns} \\ T\left(q^{-1}\right) &= t\_0 + t\_1 q^{-1} + \dots + t\_{nt} q^{-nt} \end{aligned} \tag{1}$$

The RST control algorithm is:

$$S\left(q^{-1}\right)u(k) + R\left(q^{-1}\right)y(k) = T\left(q^{-1}\right)y^\*(k)\tag{2}$$

or:

$$u(k) = \frac{1}{s\_0}\left[-\sum\_{i=1}^{n\_S} s\_i u(k-i) - \sum\_{i=0}^{n\_R} r\_i y(k-i) + \sum\_{i=0}^{n\_T} t\_i y^\*(k-i)\right]\tag{3}$$

where: u(k) - algorithm output, y(k) - process output, y\*(k) - trajectory or filtered set point. When necessary, an imposed trajectory can be generated using a trajectory model generator:
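As an illustration, relation (3) can be evaluated at each sampling instant as in the following sketch; the polynomial coefficients and signal histories in the example are hypothetical, not values from this chapter.

```python
# Sketch of the RST control law (3): u(k) is computed from past commands,
# past process outputs and the filtered set point. All numbers are illustrative.

def rst_control(u_hist, y_hist, ystar_hist, r, s, t):
    """u_hist[i] = u(k-i), y_hist[i] = y(k-i), ystar_hist[i] = y*(k-i),
    with index 0 being the current sample. Returns u(k) per relation (3)."""
    acc = 0.0
    for i in range(1, len(s)):          # - sum_{i=1..nS} s_i * u(k-i)
        acc -= s[i] * u_hist[i]
    for i in range(len(r)):             # - sum_{i=0..nR} r_i * y(k-i)
        acc -= r[i] * y_hist[i]
    for i in range(len(t)):             # + sum_{i=0..nT} t_i * y*(k-i)
        acc += t[i] * ystar_hist[i]
    return acc / s[0]

# Hypothetical first-order polynomials:
u = rst_control(u_hist=[0.0, 0.2], y_hist=[0.5, 0.4], ystar_hist=[1.0, 1.0],
                r=[0.8, -0.3], s=[1.0, -0.5], t=[0.6, -0.1])
```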

$$y^\*(k+1) = \frac{B\_m\left(q^{-1}\right)}{A\_m\left(q^{-1}\right)} r(k)\tag{4}$$

with *Am* and *Bm* of the form:

$$\begin{aligned} A\_m\left(q^{-1}\right) &= 1 + a\_{m1}q^{-1} + \dots + a\_{m\,n\_{Am}} q^{-n\_{Am}} \\ B\_m\left(q^{-1}\right) &= b\_{m0} + b\_{m1}q^{-1} + \dots + b\_{m\,n\_{Bm}} q^{-n\_{Bm}} \end{aligned}\tag{5}$$

The pole placement design procedure of the algorithm is based on the identified process model:

$$y(k) = \frac{q^{-d}B(q^{-1})}{A(q^{-1})}u(k)\tag{6}$$

where

$$\begin{aligned} B\left(q^{-1}\right) &= b\_1 q^{-1} + b\_2 q^{-2} + \dots + b\_{nb} q^{-nb} \\ A\left(q^{-1}\right) &= 1 + a\_1 q^{-1} + \dots + a\_{na} q^{-na} \end{aligned}\tag{7}$$

The identification (Landau & Karimi, 1997; Lainiotis & Magill, 1969; Foulloy et al., 2004) is made at a specific process operating point and can use the recursive least squares algorithm, exemplified by the following relations developed in (Landau et al., 1997):

$$\begin{aligned} \hat{\theta}(k+1) &= \hat{\theta}(k) + F(k+1)\phi(k)\varepsilon^{0}(k+1), \forall k \in N\\ F(k+1) &= F(k) - \frac{F(k)\phi(k)\phi^{T}(k)F(k)}{1 + \phi^{T}(k)F(k)\phi(k)}, \forall k \in N\\ \varepsilon^{0}(k+1) &= y(k+1) - \hat{\theta}^{T}(k)\phi(k), \forall k \in N\end{aligned} \tag{8}$$

with the following initial conditions:



$$F(0) = \frac{1}{\delta} I = GI \cdot I, \quad 0 < \delta < 1\tag{9}$$

The estimated vector $\hat{\theta}(k)$ contains the parameters of the polynomial plant model, and $\phi^{T}(k)$ is the measurement vector.
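The recursion (8) with the initialization (9) can be sketched as follows; the first-order model, its "true" parameters and the excitation signal are illustrative assumptions.

```python
import numpy as np

# Recursive least squares (8)-(9) sketch for a hypothetical first-order model
# y(k+1) = -a1*y(k) + b1*u(k), with regressor phi(k) = [-y(k), u(k)] and
# theta = [a1, b1]. The data are generated noise-free for illustration.
rng = np.random.default_rng(0)
a1_true, b1_true = -0.8, 0.5
delta = 0.001
theta = np.zeros(2)              # parameter estimate theta_hat(0)
F = (1.0 / delta) * np.eye(2)    # adaptation gain F(0) = (1/delta)*I, eq. (9)

y = 0.0
for k in range(200):
    u = rng.standard_normal()                 # persistently exciting input
    phi = np.array([-y, u])
    y_next = -a1_true * y + b1_true * u       # "measured" process output
    eps0 = y_next - theta @ phi               # a priori prediction error
    F = F - np.outer(F @ phi, phi @ F) / (1.0 + phi @ F @ phi)  # F(k+1)
    theta = theta + F @ phi * eps0            # theta_hat(k+1), eq. (8)
    y = y_next
```

With noise-free data, `theta` converges quickly to `[a1_true, b1_true]`.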

This approach allows the user to verify and, if necessary, to calibrate the algorithm's robustness (Landau et al., 1997). The following expression and Fig. 2 present the "disturbance-output" sensitivity function.

$$S\_{vy}\left(e^{j\omega}\right) = H\_{vy}\left(e^{j\omega}\right) = \frac{A\left(e^{j\omega}\right) S\left(e^{j\omega}\right)}{A\left(e^{j\omega}\right) S\left(e^{j\omega}\right) + B\left(e^{j\omega}\right) R\left(e^{j\omega}\right)}, \quad \forall \omega \in R \tag{10}$$

At the same time, the negative of the maximum value of the sensitivity function represents the modulus margin.

$$\Delta M \big|\_{dB} = -\max\_{\omega \in R} \left| S\_{vy}\left(e^{j\omega}\right) \right|\_{dB} \tag{11}$$

Based on this value, in an "input-output" representation (Landau et al., 1997), the process nonlinearity can be bounded inside the "conic" sector presented in Fig. 3, where *a1* and *a2* are calculated using the following expression:

$$\frac{1}{1 - \Delta M} \ge a\_1 \ge a\_2 \ge \frac{1}{1 + \Delta M} \tag{12}$$

Fig. 2. Sensitivity function graphic representation
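Relations (10)-(12) can be checked numerically as in the following sketch; the A, B, R, S polynomial coefficients are illustrative assumptions, not a design from this chapter.

```python
import numpy as np

# Evaluate the sensitivity function (10) on a frequency grid, extract the
# modulus margin (11) from its peak, and derive the extreme conic-sector
# gains of (12). All polynomial coefficients are illustrative.
A = np.array([1.0, -0.7])    # A(q^-1) = 1 - 0.7 q^-1
B = np.array([0.0, 0.3])     # B(q^-1) = 0.3 q^-1
R = np.array([0.5, -0.2])
S = np.array([1.0, -0.4])

w = np.linspace(0.0, np.pi, 2000)
z = np.exp(-1j * w)                          # q^-1 evaluated on the unit circle
ev = lambda p: np.polynomial.polynomial.polyval(z, p)
Svy = ev(A) * ev(S) / (ev(A) * ev(S) + ev(B) * ev(R))   # eq. (10)

dM_dB = -np.max(20.0 * np.log10(np.abs(Svy)))           # eq. (11), in dB
dM = 10.0 ** (dM_dB / 20.0)                             # modulus margin
a1 = 1.0 / (1.0 - dM)        # upper sector bound of (12)
a2 = 1.0 / (1.0 + dM)        # lower sector bound of (12)
```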

Fig. 3. Robust control design procedure

## **3. Nonlinear compensator control solution**

Various papers and research efforts target the inverse model control approach; a few of these can be mentioned: (Tao & Kokotovic, 1996; Yuan et al., 2007).

These works propose several types of structures based on the inverse model. Building on those results, this section proposes two very simple and efficient structures, presented in Figures 4 and 5. Here, the inverse model is reduced to the geometric inversion of the (nonlinear) static characteristic of the process, i.e. its reflection about the first bisector, as presented in Figure 6(b).

The first solution (parallel structure) considers the addition of two commands: the first, a "feedforward command" generated by the inverse model command generator, and the second generated by a classic, simple algorithm (PID, RST).

The first command, based on the static process characteristic, depends on the set point value and is designed to generate a value that drives the process output close to the imposed set point. The second (classic) algorithm generates a command that corrects the differences caused by external disturbances and, with respect to the set point, by eventual bias errors caused by mismatches between the calculated inverse process characteristic and the real process.

Fig. 4. Proposed scheme for "parallel" structure

Fig. 5. Proposed scheme for "serial" structure

The second solution (serial structure) has the inverse model command generator between the classic algorithm and the process. The inverse model command generator acts as a nonlinearity compensator and depends on the command value. The (classic) algorithm generates a command that, filtered by the nonlinearity compensator, controls the real process.

The presented solutions propose treating the inverse model mismatches that "disturb" the classic command as model mismatches of the algorithm itself. This approach requires designing the classic algorithm with a sufficient robustness reserve.

In Figures 4 and 5, the blocks and variables are as follows:

- Command calculus – unit that computes the process control law;
- Process – physical system to be controlled;
- Classic Alg. – control algorithm (PID, RST);
- u – output of the Command calculus block;
- u alg. – output of the classic algorithm;
- u i.m. – output of the inverse model block;
- r – system's set point or reference trajectory;
- y – output of the process;
- p – disturbances.



Related to classical control loops, both solutions need addressing some supplementary specific aspects: determination of the static characteristic of the process, construction of the inverse model, and robust control law design. In the next sections we will focus on the most important aspects encountered in designing the presented structures.

## **3.1 Control design procedure**

For the first structure the specific aspects of the control design procedure are:

a. determination of the process' (static) characteristic,
b. construction of the command generator,
c. robust control law design of the classic algorithm.

The second structure imposes following these steps:

a. determination of the process' characteristic,
b. construction of the nonlinearity compensator,
c. design of the classic algorithm based on the "composed process", which contains the nonlinearity compensator serialized with the real process.

These steps are more or less similar for the two structures. For steps (a) and (c) this is obvious; for (b), the command generator and the nonlinearity compensator have different functions but the same design and functioning procedure. The essential aspects of these steps are presented next.

## **3.2 Determination of process characteristic**

This operation is based on several experiments of discrete step increases and decreases of the command *u(k)*, measuring the corresponding stabilized process output *y(k)* (Figure 6(a)). The command *u(k)* covers the whole (0 to 100%) range. Because noise is present, the individual static characteristics are not identical. The final static characteristic is obtained by averaging the corresponding points of these experiments. The curve between two "mean" points is obtained by interpolation.
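The averaging procedure described above can be sketched as follows; the process law, the noise level and the number of experiments are assumptions made for illustration only.

```python
import numpy as np

# Build the final static characteristic: record the stabilized output y for
# each command level u over repeated up/down experiments, average the
# corresponding points, then interpolate between the resulting "mean" points.
rng = np.random.default_rng(1)
u_levels = np.linspace(0.0, 100.0, 11)      # command steps over 0..100 %

def stabilized_output(u):
    """Hypothetical nonlinear static law plus measurement noise."""
    return 80.0 * np.tanh(u / 40.0) + rng.normal(0.0, 0.5)

experiments = [np.array([stabilized_output(u) for u in u_levels])
               for _ in range(6)]           # e.g. 3 increasing + 3 decreasing runs
y_mean = np.mean(experiments, axis=0)       # the final static characteristic

# The curve between two "mean" points is obtained by interpolation:
y_at_33 = float(np.interp(33.0, u_levels, y_mean))
```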


Fig. 6. (a) - left - Determination of process characteristic. Continuous line represents the final characteristic. (b) - right - Construction of nonlinearity compensator

According to system identification theory, the dispersion of the process trajectory can be found using the following expression (Ljung & Söderström, 1983).

$$\sigma^2\left[n\right] \cong \frac{1}{n-1} \sum\_{i=1}^{n} y^2\left[i\right], \quad \forall n \in N^\* \setminus \{1\} \tag{13}$$

This expresses a measure of the noise superimposed on the process, of the process nonlinearity etc., and it is very important for the robust design of the control algorithm.

### **3.3 Construction of nonlinearity compensator (generator)**

This step deals with the "transposition" operation of the process's static characteristic. Figure 6(b) presents this construction. According to it, *u(k)* depends on *r(k)*. This characteristic is stored in a table; thus, for the nonlinearity compensator based controller, selecting a new set point *r(k)* imposes finding in this table the corresponding command *u(k)* that determines a process output *y(k)* close to the reference value.
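The table-based inverse lookup described above can be sketched as follows; the stored characteristic values are hypothetical, and a bijective (monotonically increasing) characteristic is assumed.

```python
import numpy as np

# Nonlinearity compensator sketch: the stored (u, y) static characteristic is
# read "backwards", so a set point r(k) yields the command u(k) expected to
# bring the process output y(k) close to r(k). Table values are illustrative.
u_table = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])   # command, %
y_table = np.array([0.0, 35.0, 58.0, 71.0, 78.0, 80.0])    # stabilized output

def compensator_command(r):
    """Inverse lookup with linear interpolation between stored points;
    requires y_table to be strictly increasing (bijective characteristic)."""
    return float(np.interp(r, y_table, u_table))

u_ff = compensator_command(58.0)    # a stored point, so exactly 40.0
```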

### **3.4 Control law design**

The control algorithm's duty is to eliminate the disturbances and the differences between the nonlinearity compensator's computed command and the real process behavior. A large variety of control algorithms can be used: PID, RST, fuzzy etc., but the goal is to have a very simple one. For this study we use an RST algorithm, designed using the pole placement procedure (Landau et al., 1997). Figure 7 presents the RST-based algorithm structure. Finally, if all the nonlinear characteristics are imposed to be (graphically) bounded by the two gains, or the gain limit is greater than or equal to the maximal distance of the process static characteristic, ΔG ≥ *mg*, the designed controller has sufficient robustness.

### **3.5 Analysis and conclusions for proposed structure**

The main advantage consists in using classic procedures for designing the control algorithm and for determining the nonlinearity compensator command block, as compared to robust control design procedures. Well-known procedures for identification and control law design are used. All procedures for identifying the inverse characteristic model can be included in a real-time software application.

Fig. 7. Parallel RST feedback-feedforward control structure


The system is very stable due to the global command that contains a "constant" component generated by the inverse static model command block, according to the set point value. This component is not influenced by the noise.

A fuzzy logic block that can "contain" human experience about some nonlinear processes can replace the inverse model command generator.

Being not very complex in terms of real-time software and hardware implementation, the control law does not need significant resources.

This structure is very difficult to use for systems that do not have a bijective static characteristic and for systems with different functioning regimes.

Another limitation is that this structure can only be used for stable processes. In situations where the process is "running", the global command is likely to not have enough flexibility to control it.

The increased number of experiments for the determination of a correct static characteristic can be another disadvantage.

## **4. Multiple model control solution**

The essential function of a real-time control system is to preserve the closed-loop performances in the presence of nonlinearity, structural disturbances or process uncertainties. A valuable way to solve these problems is the multiple-model or multicontroller structure. The first papers that mentioned the "multiple-models" structure/system were reported in the 90s. Balakrishnan and Narendra are among the first authors addressing the problems of stability, robustness, switching and design of this type of structure in their papers (Narendra & Balakrishnan, 1997).

Research refinement in this field has brought extensions to the multiple-model control concept. Parametric adaptation procedures – Closed-Loop Output Error (Landau & Karimi, 1997), the use of Kalman filter representations (Lainiotis & Magill, 1969), the use of neural networks (Balakrishnan, 1996) or of fuzzy systems are some of the important developments.

Related to classical control loops, multiple-model based systems need addressing some supplementary specific aspects:

- dimension of the multiple-model configuration;
- selection of the best algorithm;
- control law switching.

From the multiple-model control systems viewpoint, two application-oriented problem classes can be highlighted:

- the class of systems with a nonlinear characteristic, which cannot be controlled by a single algorithm;
- the class of systems with different operating regimes, where the different functioning regimes do not allow the use of a unique algorithm or impose using a very complex one with special implementation problems.

As a function of the process particularities, several multiple-model structures have been proposed (Balakrishnan, 1996). One of the most general architectures is presented in Figure 8.

In Fig. 8, the blocks and variables are as follows:

- Process – physical system to be controlled;
- Status or position identification system – component that provides information about the model–control algorithm "best" matching for the current state of the system;
- Mod. 1, Mod. 2, …, Mod. N – previously identified models of different regimes or operating points;
- Command calculus – unit that computes the process control law;
- Alg. 1, Alg. 2, …, Alg. N – control algorithms designed for the N models;
- SELECTOR – based on adequate criteria evaluations, provides information about the most appropriate model for the system's current state;
- SWITCH – mixes or switches between the control laws;
- u and u1, u2, …, uN – output of the Command calculus block and outputs of the N control algorithms, respectively;
- y and y1, y2, …, yN – output of the process and outputs of the N models;
- r – system's set point or reference trajectory;
- p – disturbances of the physical process.

Fig. 8. General scheme for multiple model structure


As noted above, depending on the process specifics and on the approach used to solve the "control algorithms switching" and/or "best model choice" problems, the scheme can be adapted to the situation by adding/eliminating some specific blocks. This section focuses on the "switching" problem.

## **4.1 Control algorithms switching**

The logic operation of the multiple model system structure implies that, after finding the best algorithm for the current operating point of the process, the next step consists in switching the control algorithm. Two essential conditions must be verified with respect to this operation:

- to be designed so that no bumps in the application of the control law are encountered;
- to be (very) fast.

Shocks determined by the switching operation cause inefficient and/or dangerous behaviors. Moreover, a switch determines a slow transition across the action area of the control algorithm, which involves at least performance degradation.

These are the main problems to be solved in designing the switching block algorithms. From a structural point of view, this block may contain all the implemented algorithms or at least the algorithm coefficients.

## **4.1.1 Classic solutions**

Present solutions (Landau et al., 1997; Dumitrache, 2005) solve this problem more or less completely and are based on maintaining all the control algorithms in an active state, also called the "warm state". This supposes that every algorithm receives information about the process output y(k) and the set point value (eventually filtered) r(k), but only the control law ui(k) chosen by the switching block is applied to the real process. This solution does not impose supplementary logic functions on the system architecture and, for this reason, the switching time between algorithms is short. The drawback of this approach is that several supplementary steps are necessary when designing the multi-model structure.

These supplementary conditions demand that the control algorithm outputs match in the neighborhood of the switching zones. This is accomplished by superposing the model identification zones, as can be seen in Fig. 9. As a result of this superposition, the multi-model structure will have an increased number of models.

Other approaches (Dussud et al., 2000; Pages et al., 2002) propose mixing the outputs of two or more algorithms. The "weighting" of each control law depends on the distance between the current process operating point and the action zone of each algorithm. Based on this, the switching from one algorithm to another is done using weighting functions with a continuous evolution in the [0, 1] interval. This technique can be easily implemented using a fuzzy approach; an example is presented in Fig. 10. This solution involves solving the control gain problems caused by mixing the algorithm outputs.


Fig. 9. Superposition of identification zones for two neighbor-models and their corresponding control actions

Fig. 10. Algorithms weighting functions for a specified operating position

### **4.1.2 Proposed solution**

In this subsection, we present a solution that provides very good results for fast processes with nonlinear characteristics. The main idea is that, during the operation of a multiple-model control system with N model–algorithm pairs, only a single algorithm, the best-matching one, is kept active, while the other N-1 algorithms remain inactive. The active and inactive states correspond, respectively, to the automatic and manual regimes of a control law. The output value of the active algorithm represents the manual command for the other N-1 inactive algorithms, as presented in Fig. 11.

Fig. 11. Proposed multiple model switching solution

In the switching situation, when a "better" algorithm *Aj* is found, the currently active algorithm *Ai* is commuted to the inactive state and *Aj* to the active state, respectively. For a bumpless commutation, the manual–automatic transfer problem must be solved; a performant solution is proposed in the next section.

The system can be implemented in two variants: first, with all inactive algorithms held in manual regime; or second, with just a single operating algorithm (the active one) and activation of the "new" one only after computing its currently corresponding manual regime and switching it to automatic regime. Both variants have advantages and disadvantages, and choosing between them requires knowledge of the hardware performance of the structure. At first sight, the first variant seems the more reasonable.

In all cases, it is considered that the output values of the active algorithm represent the manual commands for the newly selected one.

## **4.2 Manual – automatic bumpless transfer**

The "key" to the performance of the proposed multiple model switching solution is the manual-to-automatic bumpless transfer, so in this section some important elements about it are presented.

The practical implementation highlights important problems such as the manual-to-automatic (MA) and automatic-to-manual (AM) regime commutations, respectively turning out of or into the control saturation states (manual operation is the situation where the command is calculated and applied by a human operator). Of course, these problems also exist in analog systems, which have specific counteracting procedures there, but those are not applicable to numerical systems.

In real functioning, the MA transfer is preceded by "driving" the process into the nominal action zone. To avoid command switching "bumps", the following two conditions must be respected:

- the process output must be perfectly matched with the set point value;
- according to the algorithm complexity (a function of the degrees of the controller polynomials), the complete actualization of the algorithm memory must be waited for.

Neglecting these conditions leads to "bumps" in the transfer, because the control algorithm output value is computed using not only the actual values but also the past values of the command, process output and set point.

At the same time, there are situations when a perfect "match" between the process output and the set point value is very difficult to obtain and/or needs a very long time. Hence, the application of this procedure becomes impossible in the presence of important disturbances. In the following, these facts will be illustrated using an RST control algorithm (Foulloy et al., 2004), Fig. 1.

In this context, for an inactive algorithm – a possible candidate for the next active one – since the algorithm output is the manual command set by the operator (or by the active algorithm) and the process output depends on the command, the set point remains the only "free" variable in the control algorithm computation. Therefore, the proposed solution consists in modifying the set point value according to the existing control algorithm, the manual command and the process output (Lupu et al., 2006).

Updating the control algorithm memory is done similarly to the automatic regime. For the practical implementation, a supplementary memory location for the set point value is necessary. From Eq. (3), the expression for the set point value results:

$$y^\*(k) = \frac{1}{t\_0} \left[ \sum\_{i=0}^{n\_S} s\_i u(k-i) + \sum\_{i=0}^{n\_R} r\_i y(k-i) - \sum\_{i=1}^{n\_T} t\_i y^\*(k-i) \right] \tag{14}$$

When the set point (trajectory) generator of Eq. (4) exists, keeping all the data in the correct chronology requires respecting the following relation:

$$r(k) = \frac{A\_m(q^{-1})}{B\_m(q^{-1})} y^\*(k) \tag{15}$$

System operation scheme is presented in Fig. 12.

Concluding, this solution proposes computing the set point value that determines, according to the algorithm history and the process output, a control equal to the manual command applied by the operator (or by the active algorithm). At the instant of the MA switching, there are no gaps in the control algorithm memory that could determine bumps. An eventual mismatch between the set point and the process output is treated as a simple change of the set point value. Moreover, this solution can be successfully used in cases of command limitation.

The only inconvenience of this solution is the large computation power necessary for high-order systems, which is, however, not a problem nowadays.

Fig. 12. Computation of the set point value for imposed manual command

## **5. Experimental results**


We have evaluated the achieved performances of the multi-model control structure and of the nonlinear compensator control using a hardware and software experimental platform developed in National Instruments LabWindows/CVI. In Figure 13, one can see a positioning control system. The main goal is the vertical control of the position of the ball placed inside the pipe; here, the actuator is an air supply unit connected to a cDAQ family data acquisition module.

The obtained results are compared to a very complex (degree 8) robust RST algorithm. The total number of operations for the robust structure is 24 multiplications and 24 additions or subtractions.

The nonlinear relation between the position Y (%) and the actuator command U (%) is presented in Figure 14. One considers three operating points P1, P2 and P3 on the plant's nonlinear diagram (Figure 14). Three different models are identified: M1 (0-21%), M2 (21-52%) and M3 (52-100%). These will be the zones for the corresponding algorithms.

According to the models-algorithms matching zones (Lupu et al., 2008), we have identified the models M1, M2 and M3 as being appropriate for the intervals (0-25%), (15-55%) and (48-100%), respectively. For a sampling period Te = 0.2 s, the least-squares identification method from the Adaptech/WinPIM platform (Landau et al., 1997) identifies the following models:

$$\begin{aligned} M\_1 &= \frac{0.35620 - 0.05973q^{-1}}{1 - 0.454010q^{-1} - 0.09607q^{-2}} \\\\ M\_2 &= \frac{1.23779 - 0.33982q^{-1}}{1 - 0.98066q^{-1} - 0.17887q^{-2}} \\\\ M\_3 &= \frac{2.309530 - 0.089590q^{-1}}{1 - 0.827430q^{-1} - 0.006590q^{-2}} \end{aligned}$$


Fig. 13. Process experimental platform

Fig. 14. Nonlinear diagram of the process

In this case, we have computed three corresponding RST algorithms using a pole placement procedure from the Adaptech/WinREG platform (Landau et al., 1997). The same nominal performances are imposed on all systems through a second-order system defined by the dynamics ω0 = 3.0, ζ = 2.5 (tracking performances) and ω0 = 7.5, ζ = 0.8 (disturbance rejection performances), respectively, keeping the same sampling period as for identification. Each of these algorithms controls the process only in its corresponding zone.

$$R\_1(q^{-1}) = 1.670380 - 0.407140q^{-1} - 0.208017q^{-2}$$

$$S\_1(q^{-1}) = 1.000000 - 1.129331q^{-1} + 0.129331q^{-2}$$

$$T\_1(q^{-1}) = 3.373023 - 3.333734q^{-1} + 1.015934q^{-2}$$

$$R\_2(q^{-1}) = 0.434167 + 0.153665q^{-1} - 0.239444q^{-2}$$

$$S\_2(q^{-1}) = 1.000000 - 0.545100q^{-1} - 0.454900q^{-2}$$

$$T\_2(q^{-1}) = 1.113623 - 1.100651q^{-1} + 0.335417q^{-2}$$

$$R\_3(q^{-1}) = 0.231527 - 0.160386q^{-1} - 0.000879q^{-2}$$

$$S\_3(q^{-1}) = 1.000000 - 0.988050q^{-1} - 0.011950q^{-2}$$

$$T\_3(q^{-1}) = 0.416820 - 0.533847q^{-1} + 0.187289q^{-2}$$

Fig. 15. Multi-model controller real-time software application


To verify the proposed switching algorithm, a multi-model controller real-time software application that can be connected to the process was designed and implemented. The user interface is presented in Figure 15.

On the top of Figure 15, there are respectively the set point, the output and control values, manual-automatic general switch, general manual command and graphical system evolution display. On the bottom of Figure 15, one can see three graphical evolution displays corresponding to the three controllers (Ri, Si, Ti, i=1...3). The colors are as follows: yellow – set point value, red – command value, blue – process output value and green – filtered set point value.

Using this application, a few tests were done to verify the switching between two algorithms; the switching procedure is determined by the change of the set point value. These tests are:

a. from 20% (where algorithm 1 is active) to 40% (where algorithm 2 is active); the effective switching operation is done when the filtered set point (and the process output) becomes greater than 21%. Figure 16(a) presents the evolutions;
b. from 38% (where algorithm 2 is active) to 58% (where algorithm 3 is active); the effective switching operation is done when the filtered set point (and the process output) becomes greater than 52%. Figure 16(b) presents the evolutions.
In both tests, one can see that with this approach there are no shocks, or only very small oscillations, in the control evolution. Increasing the number of model-algorithm pairs to 4 or 5 could eliminate the small oscillations.

Fig. 16. a) (left) switching test; b) (right) switching test

To verify the nonlinear compensator control structure, a second real-time software application that can be connected to the process was designed and implemented. The user interface is presented in Figure 17. This application implements the scheme proposed in Fig. 7 and allows the user, in a special window, to construct the nonlinear compensator.

Fig. 17. Nonlinear compensator controller real-time software application

Using this application, which contains a simple second-order RST algorithm, a few tests were carried out to verify the structure. These tests are:

a. determination of the inverse model characteristic; Figure 18(a) presents these evolutions and contains the corresponding *r(k)-u(k)* data pairs obtained by dividing the total domain (0-100%) into 10 subintervals (0-10, 10-20, etc.);
b. testing the structure stability at different functioning points; Figure 18(b) presents these evolutions.
Test (a) illustrates the identification procedure for the nonlinear process model characteristic. Test (b) shows that there are no shocks and that the system is stable at different functioning points.

For the proposed control structure presented in Figure 7, a very simple model was identified:

$$M = \frac{1.541650}{1 - 0.790910q^{-1}}$$

In this case, we have computed the corresponding RST algorithm using a pole placement procedure from the Adaptech/WinREG platform. The nominal performances are imposed through a second-order system defined by the dynamics ω0 = 2, ζ = 0.95 (tracking performances) and ω0 = 1.1, ζ = 0.8 (disturbance rejection performances), respectively, keeping the same sampling period as for identification.

$$R(q^{-1}) = 0.083200 - 0.056842q^{-1}$$

$$S(q^{-1}) = 1.000000 - 1.000000q^{-1}$$

$$T(q^{-1}) = 0.648656 - 1.078484q^{-1} + 0.456187q^{-2}$$

To calculate the corresponding command for the single controller presented before, 7 multiplications and 7 additions or subtractions are used.

Fig. 18. a) Process static determination test; b) functioning test

For the second control structure, in addition to the command calculus there is also the calculus of the direct command. This depends on the software implementation. For PLCs in particular, and real-time process computers in general, where C code programming can be used, a solution of the following kind (or a similar implementation) can be applied:

```c
// segment determination
segment = (int)(floor(rdk/10));
// segment gain and difference determination
panta = (tab_cp[segment+1] - tab_cp[segment]) * 0.1;
// linear value calculus
val_com_tr = uk + 1.00 * (panta * (rdk - segment*10.0) + tab_cp[segment]);
```

For this, 10 multiplications and 4 additions or subtractions are necessary (the time and memory-addressing effort of a table access is considered equal to one multiplication). The total number of operations for the nonlinear compensator structure is thus 17 multiplications and 14 additions or subtractions.

Because the multi-model control structure must ensure bumpless commutations, all 3 control algorithms work in parallel (Lupu et al., 2008). So, for the multiple model structure, calculating the command of controller C1 uses 9 multiplications and 9 additions or subtractions, and the same holds for C2 and C3, giving a total of 27 multiplications and 27 additions or subtractions.

As mentioned before, the total number of operations for the classic robust structure is 24 multiplications and 24 additions or subtractions.

It is visible that the nonlinear compensator structure needs fewer multiplications and additions or subtractions than both the classic multi-model solution and the robust control approach.

At the same time, the multi-model and robust control solutions have comparable numbers of implemented operations. The choice of solution depends on the process features and the hardware used.

This means that the system with the nonlinear compensator is faster, or needs a simpler hardware and software architecture.

## **6. Conclusions**

The first proposed method (multiple models) is more elaborate and needs many precise operations, such as data acquisition, model identification and control algorithm design. For these reasons, it allows us to control a large class of nonlinear processes that can exhibit nonlinear characteristics, different functioning regimes, etc.

The second proposed method (inverse model) does not impose complex operations and is very easy to use, but it is limited with respect to the class of nonlinearities it can handle. This structure is very difficult to use for systems that do not have a bijective static characteristic or that have different functioning regimes.

## **7. Acknowledgment**

This work was supported by the CNCSIS - IDEI Research Program of the Romanian Research, Development and Integration National Plan II, Grant no. 1044/2007, and by projects of the "Automatics, Process Control and Computers" Research Center (U.P.B. – A.C.P.C.) of the University "Politehnica" of Bucharest.

## **8. References**

Balakrishnan, J. (1996). *Control System Design Using Multiple Models, Switching and Tuning*, Ph.D. Dissertation, Yale University, USA

Dai, X., He, D., Zhang, T. & Zhang, K. (2003). Generalized inversion for the linearization and decoupling control of nonlinear systems, *Proc. of IEE Control Theory & Applications*, pp. 267-277

Dumitrache, I. (2005). *Automatic Control Engineering* (*Ingineria Reglarii Automate*), Politehnica Press, Bucuresti, ISBN 973-8449-72-3

Dussud, M., Galichet, S. & Foulloy, L.P. (2000). Application of fuzzy logic control for continuous casting mold level control, *IEEE Trans. on Control Systems Technology*, vol. 6, no. 2, pp. 246-256, ISSN 1063-6536

Foulloy, L., Popescu, D. & Tanguy, G.D. (2004). *Modelisation, Identification et Commande des Systemes*, Editura Academiei Romane, Bucuresti, ISBN 973-27-1086-1

Kuhnen, K. & Janocha, H. (2001). Inverse feedforward controller for complex hysteretic nonlinearities in smart-material systems, *Proc. of the 20th IASTED Conf. on Modeling, Identification and Control*, Innsbruck, pp. 375-380, ISBN 0171-8096

Lainiotis, D.G. & Magill, D.T. (1969). Recursive algorithm for the calculation of the adaptive Kalman filter weighting coefficients, *IEEE Transactions on Automatic Control*, vol. 14, no. 2, pp. 215-218, April, ISSN 0018-9286

Landau, I.D. & Karimi, A. (1997). Recursive algorithm for identification in closed loop: a unified approach and evaluation, *Automatica*, vol. 33, no. 8, pp. 1499-1523, ISSN 0005-1098

Landau, I.D., Lozano, R. & M'Saad, M. (1997). *Adaptive Control*, Springer Verlag, London, ISBN 3-540-76187-X

Ljung, L. & Soderstrom, T. (1983). *Theory and Practice of Recursive Identification*, MIT Press, Cambridge, Massachusetts, ISBN 0-262-12095-X

Lupu, C., Popescu, D., Ciubotaru, Petrescu, C. & Florea, G. (2006). Switching Solution for Multiple Models Control Systems, *Proc. of MED'06, The 14th Mediterranean Conference on Control and Automation*, 28-30 June, 2006, Ancona, Italy, pp. 1-6, ISBN 0-9786720-0-3

Lupu, C., Popescu, D., Petrescu, C., Ticlea, Al., Dimon, C., Udrea, A. & Irimia, B. (2008). Multiple-Model Design and Switching Solution for Nonlinear Processes Control, *Proc. of ISC'08, The 6th Annual Industrial Simulation Conference*, 09-11 June, Lyon, France, pp. 71-76, ISBN 978-90-77381-4-03

Narendra, K.S. & Balakrishnan, J. (1997). Adaptive Control using multiple models, *IEEE Transactions on Automatic Control*, vol. 42, no. 2, pp. 171-187, ISSN 0018-9286

Pages, O., Mouille, P., Ordonez, R. & Caron, B. (2002). Control system design by using a multi-controller approach with a real-time experimentation for a robot wrist, *International Journal of Control*, vol. 75, no. 16 & 17, pp. 1321-1334, ISSN 1366-5820

Tao, G. & Kokotovic, P. (1996). *Adaptive Control of Systems with Actuator and Sensor Nonlinearities*, Wiley, N.Y., ISBN 0-471-15654-X

Yuan, X., Wang, Y. & Wu, L. (2007). Adaptive inverse control of excitation system with actuator uncertainty, *WSEAS Transactions on Systems and Control*, vol. 2, issue 8, August 2007, pp. 419-428, ISSN 1991-8763



## **Partially Decentralized Design Principle in Large-Scale System Control**

Anna Filasová and Dušan Krokavec
*Technical University of Košice, Slovakia*

### **1. Introduction**


A number of problems that arise in state control can be reduced to a handful of standard convex and quasi-convex problems that involve matrix inequalities. It is known that the optimal solution can be computed by using interior point methods (Nesterov & Nemirovsky (1994)), which converge in polynomial time with respect to the problem size; efficient interior point algorithms have recently been developed for these standard problems, and the further development of such algorithms is an area of active research. For this approach, the stability conditions may be expressed in terms of linear matrix inequalities (LMI), which have a notable practical interest due to the existence of powerful numerical solvers. Some progress reviews in this field can be found, e.g., in Boyd et al. (1994), Hermann et al. (2007), Skelton et al. (1998), and the references therein.

Over the past decade, H∞ norm theory has become one of the most sophisticated frameworks for robust control system design. Based on the concept of quadratic stability, which attempts to find a quadratic Lyapunov function (LF), the H∞ norm computation problem is transferred into a standard LMI optimization task, which includes the bounded real lemma (BRL) formulation (Wu et al. (2010)). A number of more or less conservative analysis methods have been presented to assess quadratic stability for linear systems using a fixed Lyapunov function. The first version of the BRL presents simple conditions under which a transfer function is contractive on the imaginary axis of the complex plane. Using it, it was possible to determine the H∞ norm of a transfer function, and the BRL became a significant element in showing and proving that the existence of feedback controllers (that result in a closed-loop transfer matrix having an H∞ norm less than a given upper bound) is equivalent to the existence of solutions of certain LMIs. The linear matrix inequality approach based on convex optimization algorithms is extensively applied to solve the above-mentioned problem (Jia (2003), Kozáková & Veselý (2009), Pipeleers et al. (2009)).

For time-varying parameters the quadratic stability approach is preferably utilized (see, e.g., Feron et al. (1996)). In this approach a quadratic Lyapunov function is used which is independent of the uncertainty and which guarantees stability for all allowable uncertainty values. Setting the Lyapunov function to be independent of the uncertainties, this approach guarantees uniform asymptotic stability when the parameter is time-varying; moreover, using a parameter-dependent Lyapunov matrix, quadratic stability may be established by LMI tests over the discrete, enumerable and bounded set of the polytope vertices which define the uncertainty domain. To include these requirements, equivalent LMI representations of the BRL for continuous-time as well as discrete-time uncertain systems were introduced (e.g. see Wu and Duan (2006), and Xie (2008)). Motivated by the underlying ideas, a simple technique for the BRL representation can be extended to state feedback controller design, enforcing the system H∞ properties of quadratic performance. When used in robust analysis of systems with polytopic uncertainties, these representations can reduce the conservatism inherent in the quadratic methods and the parameter-dependent Lyapunov function approach. Of course, the conservativeness has not been totally eliminated by this approach.

In recent years, modern control methods have found their way into the design of interconnected systems, leading to a wide variety of new concepts and results. In particular, the paradigms of LMIs and *H*∞ norm have appeared very attractive due to their good promise of handling systems with relatively high dimensions, and the design of partly decentralized schemes has substantially minimized the information exchange between subsystems of a large-scale system. With respect to the existing structure of interconnections in a large-scale system, it is generally impossible to stabilize all subsystems and the whole system simultaneously by using decentralized controllers, since the stability of interconnected systems depends not only on the stability degree of the subsystems but also, closely, on the interconnections (Jamshidi (1997), Lunze (1992), Mahmoud & Singh (1981)). By including the effects of interconnections in the design step, a special viewpoint of the decentralized control problem (Filasová & Krokavec (1999), Filasová & Krokavec (2000), Leros (1989)) can be adapted for large-scale systems with polytopic uncertainties. This approach can be viewed as pairwise-autonomous partially decentralized control of large-scale systems, and it gives the possibility to establish an LMI-based design method as a special problem of pairwise autonomous subsystem control, solved by using the parameter-dependent Lyapunov function method in the frame of equivalent BRL representations.

The chapter is devoted to studying partially decentralized control problems from the above-given viewpoint and to presenting the effectiveness of the parameter-dependent Lyapunov function method for large-scale systems with polytopic uncertainties. Sufficient stability conditions for uncertain continuous-time systems are stated as a set of linear matrix inequalities to enable the determination of parameter-independent Lyapunov matrices and to encompass the quadratic stability case. The structures used, in the presented forms, potentially enable the design of systems with reconfigurable controller structures.

The chapter is organized as follows. In Section 2, basic preliminaries concerning the H∞ norm problems are presented, along with results on the BRL, improved BRL representations and modifications, as well as quadratic stability. To generalize the properties of non-expansive systems formulated as H∞ problems in BRL forms, the main motivation of Section 3 is to present the most frequently used BRL structures for system quadratic performance analyses. Working with the formalism so introduced, in Section 4 the principle of memory-less state control design with quadratic performances, which enforces the H∞ properties of the closed-loop system, is formulated as a feasibility problem and expressed over a set of LMIs. In Section 5, the BRL-based design method is outlined to pose the sufficient conditions for the pairwise decentralized control of one class of large-scale systems, where the Lyapunov matrices are separated from the matrix parameters of the subsystem pairs. Exploiting such free Lyapunov matrices, the parameter-dependent Lyapunov method is adapted to a pairwise decentralized controller design method for uncertain large-scale systems in Section 6, namely the quadratic stability conditions and the state feedback stabilizability problem based on these conditions. Finally, some concluding remarks are given at the end. Throughout, and especially in Sections 4-6, numerical examples are given to illustrate the feasibility and properties of the different equivalent BRL representations.

### **2. Basic preliminaries**

### **2.1 System model**


The class of systems considered in this section can be formed as follows

$$
\dot{q}(t) = Aq(t) + Bu(t) \tag{1}
$$

$$y(t) = Cq(t) + Du(t) \tag{2}$$

where *q*(*t*) ∈ *IR*<sup>*n*</sup>, *u*(*t*) ∈ *IR*<sup>*r*</sup>, and *y*(*t*) ∈ *IR*<sup>*m*</sup> are vectors of the state, input and measurable output variables, respectively, and the nominal system matrices *A* ∈ *IR*<sup>*n*×*n*</sup>, *B* ∈ *IR*<sup>*n*×*r*</sup>, *C* ∈ *IR*<sup>*m*×*n*</sup> and *D* ∈ *IR*<sup>*m*×*r*</sup> are real matrices.

### **2.2 Schur complement**

**Proposition 1.** *Let Q* = *Q*<sup>*T*</sup>*, R* = *R*<sup>*T*</sup> > 0*, and S be real matrices of appropriate dimensions. Then the next inequalities are equivalent*

$$
\begin{bmatrix} \mathbf{Q} & \mathbf{S} \\ \mathbf{S}^T & -\mathbf{R} \end{bmatrix} < 0 \Leftrightarrow \begin{bmatrix} \mathbf{Q} + \mathbf{S}\mathbf{R}^{-1}\mathbf{S}^T & \mathbf{0} \\ \mathbf{0} & -\mathbf{R} \end{bmatrix} < 0 \Leftrightarrow \mathbf{Q} + \mathbf{S}\mathbf{R}^{-1}\mathbf{S}^T < 0, \ \mathbf{R} > 0 \tag{3}
$$

*Proof.* Let the linear matrix inequality take the first form in (3). Since det *R* ≠ 0, using the Gauss elimination principle it yields

$$
\begin{bmatrix} I & \mathbf{S}\mathbf{R}^{-1} \\ \mathbf{0} & I \end{bmatrix} \begin{bmatrix} \mathbf{Q} & \mathbf{S} \\ \mathbf{S}^T & -\mathbf{R} \end{bmatrix} \begin{bmatrix} I & \mathbf{0} \\ \mathbf{R}^{-1}\mathbf{S}^T & I \end{bmatrix} = \begin{bmatrix} \mathbf{Q} + \mathbf{S}\mathbf{R}^{-1}\mathbf{S}^T & \mathbf{0} \\ \mathbf{0} & -\mathbf{R} \end{bmatrix} \tag{4}$$

Since

$$\det\begin{bmatrix} I & \mathbf{S}\mathbf{R}^{-1} \\ \mathbf{0} & I \end{bmatrix} = 1 \tag{5}$$

it is evident that (4) implies (3). This concludes the proof.

Note that in the next sections the matrix notations *Q*, *R*, *S*, can be used in another context, too.

### **2.3 Bounded real lemma**

**Proposition 2.** *System (1), (2) is stable with quadratic performance* $\|C(sI-A)^{-1}B + D\|_\infty^2 \le \gamma$ *if there exist a symmetric positive definite matrix P* > 0*, P* ∈ *IR*<sup>*n*×*n*</sup> *and a positive scalar γ* > 0*, γ* ∈ *IR such that*

$$\begin{aligned} i. \quad & \begin{bmatrix} A^T P + P A & P B & C^T \\ \ast & -\gamma I_r & D^T \\ \ast & \ast & -I_m \end{bmatrix} < 0 \\ ii. \quad & \begin{bmatrix} P A^T + A P & P C^T & B \\ \ast & -\gamma I_m & D \\ \ast & \ast & -I_r \end{bmatrix} < 0 \end{aligned}$$

$$\begin{aligned} iii. \quad & \begin{bmatrix} P^{-1} A^T + A P^{-1} & B & P^{-1} C^T \\ \ast & -\gamma I_r & D^T \\ \ast & \ast & -I_m \end{bmatrix} < 0 \\ iv. \quad & \begin{bmatrix} A^T P^{-1} + P^{-1} A & C^T & P^{-1} B \\ \ast & -\gamma I_m & D \\ \ast & \ast & -I_r \end{bmatrix} < 0 \end{aligned} \tag{6}$$


*where I*<sub>*r*</sub> ∈ *IR*<sup>*r*×*r*</sup>*, I*<sub>*m*</sub> ∈ *IR*<sup>*m*×*m*</sup> *are identity matrices, respectively.* Hereafter, ∗ denotes the symmetric item in a symmetric matrix.

*Proof. i.* Defining the Lyapunov function as follows (Gahinet et al. (1996))

$$v(\boldsymbol{q}(t)) = \boldsymbol{q}^T(t)\boldsymbol{P}\boldsymbol{q}(t) + \int_0^t (\boldsymbol{y}^T(r)\boldsymbol{y}(r) - \gamma\boldsymbol{u}^T(r)\boldsymbol{u}(r))\,\mathrm{d}r > 0 \tag{7}$$

where *P* = *P*<sup>*T*</sup> > 0, *P* ∈ *IR*<sup>*n*×*n*</sup>, *γ* > 0, *γ* ∈ *IR*, and evaluating the derivative of *v*(*q*(*t*)) with respect to *t* along a system trajectory then it yields

$$\dot{v}(\boldsymbol{q}(t)) = \dot{\boldsymbol{q}}^T(t)\boldsymbol{P}\boldsymbol{q}(t) + \boldsymbol{q}^T(t)\boldsymbol{P}\dot{\boldsymbol{q}}(t) + \boldsymbol{y}^T(t)\boldsymbol{y}(t) - \gamma\boldsymbol{u}^T(t)\boldsymbol{u}(t) < 0 \tag{8}$$

Thus, substituting (1), (2) into (8) gives

$$\begin{split} \dot{v}(\boldsymbol{q}(t)) &= (\boldsymbol{A}\boldsymbol{q}(t) + \boldsymbol{B}\boldsymbol{u}(t))^T \boldsymbol{P}\boldsymbol{q}(t) + \boldsymbol{q}^T(t)\boldsymbol{P}(\boldsymbol{A}\boldsymbol{q}(t) + \boldsymbol{B}\boldsymbol{u}(t)) - \gamma\boldsymbol{u}^T(t)\boldsymbol{u}(t) + \\ &\quad + (\boldsymbol{C}\boldsymbol{q}(t) + \boldsymbol{D}\boldsymbol{u}(t))^T(\boldsymbol{C}\boldsymbol{q}(t) + \boldsymbol{D}\boldsymbol{u}(t)) < 0 \end{split} \tag{9}$$

and with the next notation

$$\boldsymbol{q}_c^T(t) = \begin{bmatrix} \boldsymbol{q}^T(t) & \boldsymbol{u}^T(t) \end{bmatrix} \tag{10}$$

it is obtained

$$
\dot{v}(\boldsymbol{q}(t)) = \boldsymbol{q}_c^T(t)\boldsymbol{P}_c\boldsymbol{q}_c(t) < 0 \tag{11}
$$

where

$$P_c = \begin{bmatrix} \boldsymbol{A}^T\boldsymbol{P} + \boldsymbol{P}\boldsymbol{A} & \boldsymbol{P}\boldsymbol{B} \\ \ast & -\gamma I_r \end{bmatrix} + \begin{bmatrix} \boldsymbol{C}^T\boldsymbol{C} & \boldsymbol{C}^T\boldsymbol{D} \\ \ast & \boldsymbol{D}^T\boldsymbol{D} \end{bmatrix} < 0 \tag{12}$$

Since

$$
\begin{bmatrix} \boldsymbol{C}^T\boldsymbol{C} & \boldsymbol{C}^T\boldsymbol{D} \\ \ast & \boldsymbol{D}^T\boldsymbol{D} \end{bmatrix} = \begin{bmatrix} \boldsymbol{C}^T \\ \boldsymbol{D}^T \end{bmatrix} \begin{bmatrix} \boldsymbol{C} & \boldsymbol{D} \end{bmatrix} \ge 0 \tag{13}
$$

Schur complement property implies

$$
\begin{bmatrix} \mathbf{0} & \mathbf{0} & \boldsymbol{C}^T \\ \ast & \mathbf{0} & \boldsymbol{D}^T \\ \ast & \ast & -I_m \end{bmatrix} \ge 0 \tag{14}
$$

and using (14) the LMI condition (12) can be written compactly as *i.* of (2).

*ii.* Since the *H*∞ norm is invariant with respect to complex conjugation and matrix transposition (Petersen et al. (2000)), then

$$\|C(sI - A)^{-1}B + D\|_\infty^2 \le \gamma \quad \Leftrightarrow \quad \|B^T(sI - A^T)^{-1}C^T + D^T\|_\infty^2 \le \gamma \tag{15}$$

and substituting the dual matrix parameters into *i.* of (2) implies *ii.* of (2). *iii.* Defining the congruence transform matrix

$$L_1 = \text{diag}\left[\, P^{-1} \ \ I_r \ \ I_m \right] \tag{16}$$

and pre-multiplying the left-hand side and right-hand side of *i.* of (2) by (16) subsequently gives *iii.* of (2).

*iv.* Analogously, substituting the matrix parameters of the dual system description form into *iii.* of (2) implies *iv.* of (2).

Note that, to design the gain matrix of a memory-free control law using the LMI principle, only conditions *ii.* and *iii.* of (2) are suitable.

Proposition 2 is quite attractive, giving a representative result of its type to conclude the asymptotic stability of a system whose *H*∞ norm is less than a real value *γ* > 0, and it can be employed in what follows for comparative purposes. However, its proof is technical, which, more or less, can bring about inconvenience in understanding and applying the results. Thus, in this chapter, some modifications are proposed to directly reach applicable solutions.

#### **2.4 Improved BRL representation**

Hereafter, ∗ denotes the symmetric item in a symmetric matrix.

**Proposition 2.** *(Bounded real lemma) System (1), (2) is stable with quadratic performance*

$$\|C(sI-A)^{-1}B+D\|_{\infty}^{2} \le \gamma \tag{6}$$

*if there exist a symmetric positive definite matrix P > 0, P ∈ IR^{n×n} and a scalar γ > 0, γ ∈ IR such that the following equivalent conditions hold*

$$i.\;\begin{bmatrix} A^TP+PA & PB & C^T \\ \ast & -\gamma I_r & D^T \\ \ast & \ast & -I_m \end{bmatrix} < 0 \qquad
ii.\;\begin{bmatrix} AP+PA^T & PC^T & B \\ \ast & -\gamma I_m & D \\ \ast & \ast & -I_r \end{bmatrix} < 0$$

$$iii.\;\begin{bmatrix} P^{-1}A^T+AP^{-1} & B & P^{-1}C^T \\ \ast & -\gamma I_r & D^T \\ \ast & \ast & -I_m \end{bmatrix} < 0 \qquad
iv.\;\begin{bmatrix} A^TP^{-1}+P^{-1}A & C^T & P^{-1}B \\ \ast & -\gamma I_m & D \\ \ast & \ast & -I_r \end{bmatrix} < 0$$

*where I_r ∈ IR^{r×r}, I_m ∈ IR^{m×m} are identity matrices, respectively.*

*Proof. i.* Defining Lyapunov function as follows (Gahinet et al. (1996))

$$v(q(t)) = q^T(t)Pq(t) + \int_0^t \left(y^T(r)y(r) - \gamma u^T(r)u(r)\right)\mathrm{d}r > 0 \tag{7}$$

where P = P^T > 0, P ∈ IR^{n×n}, γ > 0 ∈ IR, and evaluating the derivative of v(q(t)) with respect to t along a system trajectory then it yields

$$\dot v(q(t)) = \dot q^T(t)Pq(t) + q^T(t)P\dot q(t) + y^T(t)y(t) - \gamma u^T(t)u(t) < 0 \tag{8}$$

Thus, substituting (1), (2) into (8) gives

$$\dot v(q(t)) = (Aq(t)+Bu(t))^TPq(t) + q^T(t)P(Aq(t)+Bu(t)) - \gamma u^T(t)u(t) + (Cq(t)+Du(t))^T(Cq(t)+Du(t)) < 0 \tag{9}$$

and with the next notation

$$q_c^T(t) = \begin{bmatrix} q^T(t) & u^T(t) \end{bmatrix} \tag{10}$$

it is obtained

$$\dot v(q(t)) = q_c^T(t)P_cq_c(t) < 0 \tag{11}$$

where

$$P_c = \begin{bmatrix} A^TP+PA & PB \\ \ast & -\gamma I_r \end{bmatrix} + \begin{bmatrix} C^TC & C^TD \\ \ast & D^TD \end{bmatrix} < 0 \tag{12}$$

Since

$$\begin{bmatrix} C^TC & C^TD \\ \ast & D^TD \end{bmatrix} = \begin{bmatrix} C^T \\ D^T \end{bmatrix}\begin{bmatrix} C & D \end{bmatrix} \ge 0 \tag{13}$$

Schur complement property implies

$$\begin{bmatrix} C^TC & C^TD \\ \ast & D^TD \end{bmatrix} = -\begin{bmatrix} C^T \\ D^T \end{bmatrix}(-I_m)^{-1}\begin{bmatrix} C & D \end{bmatrix} \ge 0 \tag{14}$$

and using (14) the LMI condition (12) can be written compactly as *i.* of (2).

*ii.* Since H∞ norm is closed with respect to complex conjugation and matrix transposition (Petersen et al. (2000)), then

$$\|C(sI-A)^{-1}B+D\|_{\infty}^{2} \le \gamma \;\Leftrightarrow\; \|B^T(sI-A^T)^{-1}C^T+D^T\|_{\infty}^{2} \le \gamma \tag{15}$$

and applying *i.* to the transposed system (A^T, C^T, B^T, D^T) implies *ii.* of (2). Finally, pre-multiplying and post-multiplying *i.* and *ii.* by the congruence transform diag[P^{-1} I I] gives *iii.* and *iv.* of (2), respectively.

Once the representations (2) of the BRL are given, the proof of the improved BRL representation is rather easy, as given in the following.

**Theorem 1.** *System (1), (2) is stable with quadratic performance* $\|C(sI-A)^{-1}B+D\|_{\infty}^{2} \le \gamma$ *if there exist a symmetric positive definite matrix P > 0, P ∈ IR^{n×n}, matrices S_1, S_2 ∈ IR^{n×n}, and a scalar γ > 0, γ ∈ IR such that*

$$i.\;\begin{bmatrix} -S_1A-A^TS_1^T & -S_1B & P+S_1-A^TS_2^T & C^T \\ \ast & -\gamma I_r & -B^TS_2^T & D^T \\ \ast & \ast & S_2+S_2^T & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0$$

$$ii.\;\begin{bmatrix} -S_1A^T-AS_1^T & -S_1C^T & P+S_1-AS_2^T & B \\ \ast & -\gamma I_m & -CS_2^T & D \\ \ast & \ast & S_2+S_2^T & \mathbf{0} \\ \ast & \ast & \ast & -I_r \end{bmatrix} < 0 \tag{17}$$

*Proof. i.* Since (1) implies

$$
\dot{q}(t) - Aq(t) - Bu(t) = \mathbf{0} \tag{18}
$$

then with arbitrary square matrices *<sup>S</sup>*1, *<sup>S</sup>*<sup>2</sup> <sup>∈</sup> *IRn*×*<sup>n</sup>* it yields

$$\left(q^T(t)\mathbf{S}_1 + \dot q^T(t)\mathbf{S}_2\right)\left(\dot q(t) - Aq(t) - Bu(t)\right) = 0\tag{19}$$

Thus, adding (19), as well as its transposition to (8) and substituting (2) it yields

$$\begin{split} \dot{\boldsymbol{\psi}}(\boldsymbol{q}(t)) &= \\ = \dot{\boldsymbol{q}}^{T}(t)\mathbf{P}\boldsymbol{q}(t) + \boldsymbol{q}^{T}(t)\mathbf{P}\dot{\boldsymbol{q}}(t) - \boldsymbol{\gamma}\boldsymbol{u}^{T}(t)\boldsymbol{u}(t) + (\mathbf{C}\boldsymbol{q}(t) + \mathbf{D}\boldsymbol{u}(t))^{T}(\mathbf{C}\boldsymbol{q}(t) + \mathbf{D}\boldsymbol{u}(t)) + \\ &+ (\boldsymbol{q}^{T}(t)\mathbf{S}\_{1} + \dot{\boldsymbol{q}}^{T}(t)\mathbf{S}\_{2})(\dot{\boldsymbol{q}}(t) - \mathbf{A}\boldsymbol{q}(t) - \mathbf{B}\boldsymbol{u}(t)) + \\ &+ (\dot{\boldsymbol{q}}^{T}(t) - \boldsymbol{q}^{T}(t)\boldsymbol{A}^{T} - \boldsymbol{u}^{T}(t)\mathbf{B}^{T})(\mathbf{S}\_{1}^{T}\boldsymbol{q}(t) + \mathbf{S}\_{2}^{T}\dot{\boldsymbol{q}}(t)) < 0 \end{split} \tag{20}$$

and using the notation

$$\boldsymbol{q}\_c^T(t) = \begin{bmatrix} \boldsymbol{q}^T(t) \ \boldsymbol{u}^T(t) \ \dot{\boldsymbol{q}}^T(t) \end{bmatrix} \tag{21}$$


Partially Decentralized Design Principle in Large-Scale System Control 367


it can be obtained

$$
\dot{v}(\boldsymbol{q}(t)) = \boldsymbol{q}\_c^T(t)\mathbf{P}\_c^\diamond \boldsymbol{q}\_c(t) < 0 \tag{22}
$$

where

$$P_c^\diamond = \begin{bmatrix} C^TC & C^TD & \mathbf{0} \\ \ast & D^TD & \mathbf{0} \\ \ast & \ast & \mathbf{0} \end{bmatrix} + \begin{bmatrix} -S_1A-A^TS_1^T & -S_1B & P+S_1-A^TS_2^T \\ \ast & -\gamma I_r & -B^TS_2^T \\ \ast & \ast & S_2+S_2^T \end{bmatrix} < 0 \tag{23}$$

Thus, analogously to (13), (14), it follows that inequality (23) can be written compactly as *i.* of (17).

*ii.* Using the duality principle, substituting the dual matrix parameters into *i.* of (17) implies *ii.* of (17).
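The key step of the proof — that the added null term (19) leaves the derivative unchanged, so the quadratic form (22), (23) reproduces (9) along any trajectory of (1) — can be checked numerically. A small numpy sketch with arbitrary matrices and an arbitrary trajectory point (illustrative values only):

```python
import numpy as np

# arbitrary test data and an arbitrary trajectory point (illustrative only)
rng = np.random.default_rng(2)
n, r, m = 3, 2, 2
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, r))
C = rng.standard_normal((m, n)); D = rng.standard_normal((m, r))
M = rng.standard_normal((n, n)); P = M @ M.T + n * np.eye(n)
S1 = rng.standard_normal((n, n)); S2 = rng.standard_normal((n, n))
gam = 1.5
q = rng.standard_normal(n); u = rng.standard_normal(r)
qd = A @ q + B @ u                 # q_dot along a trajectory of (1)
y = C @ q + D @ u                  # output (2)

# v_dot as in (9), before the null term (19) is added
vdot = qd @ P @ q + q @ P @ qd + y @ y - gam * (u @ u)

# v_dot as the quadratic form (22) with P_c of (23), q_c = [q; u; q_dot]
Z = np.zeros
qc = np.concatenate([q, u, qd])
Pc = np.block([
    [C.T @ C,   C.T @ D,   Z((n, n))],
    [D.T @ C,   D.T @ D,   Z((r, n))],
    [Z((n, n)), Z((n, r)), Z((n, n))],
]) + np.block([
    [-S1 @ A - A.T @ S1.T,    -S1 @ B,          P + S1 - A.T @ S2.T],
    [-(S1 @ B).T,             -gam * np.eye(r), -B.T @ S2.T],
    [(P + S1 - A.T @ S2.T).T, -S2 @ B,          S2 + S2.T],
])
# the S1, S2 terms multiply (q_dot - A q - B u) = 0, so nothing changes
err = abs(qc @ Pc @ qc - vdot)
```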

### **2.5 Basic modifications**

Obviously, the aforementioned proof of Theorem 1 is rather simple, and the connection between Theorem 1 and the existing results of Proposition 2 can be established. To obtain the basic modifications, the following theorem gives alternative ways to describe the H∞-norm.

**Theorem 2.** *System (1), (2) is stable with quadratic performance* $\|C(sI-A)^{-1}B+D\|_{\infty}^{2} \le \gamma$ *if there exist a symmetric positive definite matrix P > 0, P ∈ IR^{n×n}, a matrix S_2 ∈ IR^{n×n}, and a scalar γ > 0, γ ∈ IR such that*

$$i.\;\begin{bmatrix} P^{-1}A^T+AP^{-1} & B & P^{-1}A^T & P^{-1}C^T \\ \ast & -\gamma I_r & B^T & D^T \\ \ast & \ast & S_2^{-1}+S_2^{-T} & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0$$

$$ii.\;\begin{bmatrix} PA^T+AP & PC^T & A & B \\ \ast & -\gamma I_m & C & D \\ \ast & \ast & S_2^{-1}+S_2^{-T} & \mathbf{0} \\ \ast & \ast & \ast & -I_r \end{bmatrix} < 0 \tag{24}$$

*Proof. i*. Since *S*1, *S*<sup>2</sup> are arbitrary square matrices, the selection of *S*<sup>1</sup> can now be made in the form *S*<sup>1</sup> = −*P*, and it can be supposed that det(*S*2) ≠ 0. Thus, defining the congruence transform matrix

$$L\_2 = \text{diag}\left[\mathbf{P}^{-1} \ \mathbf{I}\_r \ -\mathbf{S}\_2^{-1} \ \mathbf{I}\_m\right] \tag{25}$$

and pre-multiplying *i*. of (17) by *L*<sup>2</sup> and post-multiplying it by *L<sup>T</sup>* <sup>2</sup> leads to *i*. of (24).

*ii.* Analogously, selecting *S*<sup>1</sup> = −*P*, and considering det(*S*2) ≠ 0, the next congruence transform matrix can be introduced

$$L_3 = \text{diag}\left[I_n \; I_m \; -S_2^{-1} \; I_r\right] \tag{26}$$

and pre-multiplying *ii*. of (17) by *L*<sup>3</sup> and post-multiplying it by *L<sup>T</sup>* <sup>3</sup> leads to *ii*. of (24).
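The congruence step of the proof can be replayed numerically: building *i.* of (17) with S1 = −P, applying L2 of (25), and comparing block by block with *i.* of (24). A sketch with arbitrary test data (illustrative values only); note the (3,3) block works out to S2^{-1} + S2^{-T}:

```python
import numpy as np

# arbitrary test data (illustrative values only)
rng = np.random.default_rng(3)
n, r, m = 3, 2, 2
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, r))
C = rng.standard_normal((m, n)); D = rng.standard_normal((m, r))
M = rng.standard_normal((n, n)); P = M @ M.T + n * np.eye(n)
S2 = rng.standard_normal((n, n)) - n * np.eye(n)   # invertible by construction
gam = 1.5
S1 = -P                                            # the selection made in the proof
Z = np.zeros

# i. of (17)
M17 = np.block([
    [-S1 @ A - A.T @ S1.T,    -S1 @ B,          P + S1 - A.T @ S2.T, C.T],
    [-(S1 @ B).T,             -gam * np.eye(r), -B.T @ S2.T,         D.T],
    [(P + S1 - A.T @ S2.T).T, -S2 @ B,          S2 + S2.T,           Z((n, m))],
    [C,                       D,                Z((m, n)),           -np.eye(m)],
])
Pi, S2i = np.linalg.inv(P), np.linalg.inv(S2)
L2 = np.zeros((2 * n + r + m, 2 * n + r + m))      # L2 = diag(P^-1, I_r, -S2^-1, I_m)
L2[:n, :n] = Pi
L2[n:n + r, n:n + r] = np.eye(r)
L2[n + r:2 * n + r, n + r:2 * n + r] = -S2i
L2[2 * n + r:, 2 * n + r:] = np.eye(m)

# expected block structure of i. of (24)
M24 = np.block([
    [Pi @ A.T + A @ Pi, B,                Pi @ A.T,     Pi @ C.T],
    [B.T,               -gam * np.eye(r), B.T,          D.T],
    [A @ Pi,            B,                S2i + S2i.T,  Z((n, m))],
    [C @ Pi,            D,                Z((m, n)),    -np.eye(m)],
])
err = np.max(np.abs(L2 @ M17 @ L2.T - M24))
```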

### **2.6 Associate modifications**

Since alternate conditions of a similar type are also available, similar to the proof of Theorem 2 the following conclusions can be given.

**Corollary 1.** *Similarly, setting S*<sup>2</sup> =−*δP, where δ* > 0*, δ* ∈ *IR the inequality ii*. *given in (24) reduces to*

$$\begin{bmatrix} PA^T+AP & PC^T & A & B \\ \ast & -\gamma I_m & C & D \\ \ast & \ast & -2\delta^{-1}P^{-1} & \mathbf{0} \\ \ast & \ast & \ast & -I_r \end{bmatrix} < 0 \tag{27}$$

$$
\begin{bmatrix}
\mathbf{PA}^T + \mathbf{AP} & \mathbf{PC}^T & \mathbf{AP} & \mathbf{B} \\
\ast & -\gamma I\_m & \mathbf{CP} & \mathbf{D} \\
\ast & \ast & -2\delta^{-1}\mathbf{P} & \mathbf{0} \\
\ast & \ast & \ast & -I\_r
\end{bmatrix} < 0 \tag{28}
$$

*respectively, and using the Schur complement property, (28) can now be rewritten as*

$$
\Lambda\_1 + 0.5\,\delta\Lambda\_2 < 0\tag{29}
$$

*where*


$$
\Lambda\_1 = \begin{bmatrix}
\mathbf{AP} + \mathbf{PA}^T & \mathbf{PC}^T & \mathbf{B} \\
\ast & -\gamma I\_m & \mathbf{D} \\
\ast & \ast & -I\_r
\end{bmatrix} < 0 \tag{30}
$$

$$\mathbf{A}\_{2} = \begin{bmatrix} \mathbf{A}\mathbf{P} \\ \mathbf{C}\mathbf{P} \\ \mathbf{0} \end{bmatrix} \mathbf{P}^{-1} \begin{bmatrix} \mathbf{P} \mathbf{A}^{T} \ \mathbf{P} \mathbf{C}^{T} \ \mathbf{0} \end{bmatrix} = \begin{bmatrix} \mathbf{A}\mathbf{P} \mathbf{A}^{T} \ \mathbf{A}\mathbf{P} \mathbf{C}^{T} & \mathbf{0} \\ \mathbf{C}\mathbf{P} \mathbf{A}^{T} \ \mathbf{C}\mathbf{P} \mathbf{C}^{T} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} \end{bmatrix} \tag{31}$$

*Choosing δ as a sufficiently small scalar, where*

$$0 < \delta < 2\lambda\_1/\lambda\_2\tag{32}$$

$$
\lambda\_1 = \lambda\_{\min}(-\Lambda\_1), \qquad \lambda\_2 = \lambda\_{\max}(\Lambda\_2) \tag{33}
$$

*then (28) is negative definite for a feasible P of ii*. *of (2).*
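The reduction of (28) to (29) is a Schur complement identity, which a short numpy sketch with arbitrary P > 0, γ, δ (illustrative values only) can confirm: eliminating the third diagonal block of (28) reproduces Λ1 + 0.5 δ Λ2 of (30), (31) exactly:

```python
import numpy as np

# arbitrary test data (illustrative values only)
rng = np.random.default_rng(4)
n, r, m = 3, 2, 2
A = rng.standard_normal((n, n)); B = rng.standard_normal((n, r))
C = rng.standard_normal((m, n)); D = rng.standard_normal((m, r))
M = rng.standard_normal((n, n)); P = M @ M.T + n * np.eye(n)
gam, delta = 1.5, 0.4
Z = np.zeros

# the left-hand side of (28), block sizes n, m, n, r
M28 = np.block([
    [P @ A.T + A @ P, P @ C.T,          A @ P,          B],
    [C @ P,           -gam * np.eye(m), C @ P,          D],
    [(A @ P).T,       (C @ P).T,        -2 / delta * P, Z((n, r))],
    [B.T,             D.T,              Z((r, n)),      -np.eye(r)],
])

# Schur complement with respect to the third diagonal block
keep = np.concatenate([np.arange(n + m), np.arange(2 * n + m, 2 * n + m + r)])
i3 = np.arange(n + m, 2 * n + m)
S = (M28[np.ix_(keep, keep)]
     - M28[np.ix_(keep, i3)]
     @ np.linalg.inv(M28[np.ix_(i3, i3)])
     @ M28[np.ix_(i3, keep)])

# Lambda_1 of (30) and Lambda_2 of (31)
L1 = np.block([
    [A @ P + P @ A.T, P @ C.T,          B],
    [C @ P,           -gam * np.eye(m), D],
    [B.T,             D.T,              -np.eye(r)],
])
V = np.vstack([A @ P, C @ P, Z((r, n))])
L2 = V @ np.linalg.inv(P) @ V.T        # equals Lambda_2 of (31)
err = np.max(np.abs(S - (L1 + 0.5 * delta * L2)))
```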

**Remark 1.** *Associated with the second statement of Theorem 2, setting S*<sup>2</sup> = −*δIn, ii*. *of (24) implies*

$$\begin{bmatrix} AP+PA^T & PC^T & A & B \\ \ast & -\gamma I_m & C & D \\ \ast & \ast & -2\delta^{-1}I_n & \mathbf{0} \\ \ast & \ast & \ast & -I_r \end{bmatrix} < 0 \tag{34}$$

*and (34) can be written as (29), with (30) and with*

$$\Lambda_2 = \begin{bmatrix} AA^T & AC^T & \mathbf{0} \\ CA^T & CC^T & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} \end{bmatrix} \tag{35}$$

*Thus, satisfying (32), (33), (34) is negative definite for a feasible P of ii*. *of (2).*

Note that the form (34) is suitable for optimizing a solution with respect to both LMI variables *γ* and *δ* within an LMI structure. Conversely, the form (28) retains an LMI structure only if *δ* is a prescribed constant design parameter; then only *γ* can be optimized as an LMI variable, if possible, or else the design task must be formulated as a BMI problem.
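The small-δ argument above can be sanity-checked numerically. In this sketch λ1 is taken as the smallest eigenvalue of −Λ1 and λ2 as the largest eigenvalue of Λ2 — a sufficient reading of the bound (32), (33) — with arbitrary Λ1 < 0 and Λ2 ≥ 0 shaped like (31) (rank-deficient, with a zero block):

```python
import numpy as np

# arbitrary Lambda_1 < 0 and Lambda_2 >= 0 (illustrative values only)
rng = np.random.default_rng(5)
k = 5
Q = rng.standard_normal((k, k))
Lam1 = -(Q @ Q.T) - np.eye(k)              # negative definite
R = rng.standard_normal((k, 2))
Lam2 = R @ R.T                             # positive semi-definite, rank 2

lam1 = np.min(np.linalg.eigvalsh(-Lam1))   # smallest eigenvalue of -Lambda_1
lam2 = np.max(np.linalg.eigvalsh(Lam2))    # largest eigenvalue of Lambda_2
delta = 0.9 * (2 * lam1 / lam2)            # strictly inside the bound (32)
max_eig = np.max(np.linalg.eigvalsh(Lam1 + 0.5 * delta * Lam2))
```

Any δ strictly inside the bound keeps the largest eigenvalue of Λ1 + 0.5 δ Λ2 negative, i.e. (29) holds.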


**Corollary 2.** *In the same way, setting S*<sup>2</sup> =−*δP, where δ* > 0*, δ* ∈ *IR, the inequality i*. *given in (24) reduces to*

$$\begin{bmatrix} P^{-1}A^T+AP^{-1} & B & P^{-1}A^T & P^{-1}C^T \\ \ast & -\gamma I_r & B^T & D^T \\ \ast & \ast & -2\delta^{-1}P^{-1} & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0 \tag{36}$$

*Then (36) can be written as (29), with*

$$
\Lambda\_1 = \begin{bmatrix}
\mathbf{P}^{-1}\mathbf{A}^T + \mathbf{A}\mathbf{P}^{-1} & \mathbf{B} & \mathbf{P}^{-1}\mathbf{C}^T \\
\ast & -\gamma I\_r & \mathbf{D}^T \\
\ast & \ast & -I\_m
\end{bmatrix} \tag{37}
$$

$$
\Lambda\_2 = \begin{bmatrix}
\mathbf{P}^{-1} \mathbf{A}^T \mathbf{P} \mathbf{A} \mathbf{P}^{-1} & \mathbf{P}^{-1} \mathbf{A}^T \mathbf{P} \mathbf{B} & \mathbf{0} \\
\mathbf{B}^T \mathbf{P} \mathbf{A} \mathbf{P}^{-1} & \mathbf{B}^T \mathbf{P} \mathbf{B} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{0}
\end{bmatrix} \tag{38}
$$

*Thus, satisfying (32), (33), (36) is negative definite for a feasible P of iii*. *of (2).*

**Remark 2.** *By a similar procedure, setting S*<sup>2</sup> =−*δIn, where δ* > 0*, δ* ∈ *IR then i*. *of (24) implies the following*

$$
\begin{bmatrix}
\mathbf{P}^{-1}\mathbf{A}^T + \mathbf{A}\mathbf{P}^{-1} & \mathbf{B} & \mathbf{P}^{-1}\mathbf{A}^T & \mathbf{P}^{-1}\mathbf{C}^T \\
\ast & -\gamma I\_r & \mathbf{B}^T & \mathbf{D}^T \\
\ast & \ast & -2\delta^{-1}I\_n & \mathbf{0} \\
\ast & \ast & \ast & -I\_m
\end{bmatrix} < 0\tag{39}
$$

*It is evident that (39) yields the same* **Λ**<sup>1</sup> *as given in (37) and*

$$
\Lambda\_2 = \begin{bmatrix}
\mathbf{P}^{-1} \mathbf{A}^T \mathbf{A} \mathbf{P}^{-1} & \mathbf{P}^{-1} \mathbf{A}^T \mathbf{B} & \mathbf{0} \\
\mathbf{B}^T \mathbf{A} \mathbf{P}^{-1} & \mathbf{B}^T \mathbf{B} & \mathbf{0} \\
\mathbf{0} & \mathbf{0} & \mathbf{0}
\end{bmatrix} \tag{40}
$$

*Thus, this leads to the equivalent results as presented above, but with possible different interpretation.*

### **3. Control law parameter design**

### **3.1 Problem description**

Throughout this section the task is concerned with the computation of a state feedback *u*(*t*) which controls the linear dynamic system given by (1), (2), i.e.

$$
\dot{q}(t) = Aq(t) + Bu(t) \tag{41}
$$

$$y(t) = \mathsf{Cq}(t) + \mathsf{Du}(t)\tag{42}$$

The problem of interest is to design a stable closed-loop system with quadratic performance *γ* > 0 using a linear memoryless state-feedback controller of the form

$$u(t) = -Kq(t) \tag{43}$$

where matrix *<sup>K</sup>* <sup>∈</sup> *IRr*×*<sup>n</sup>* is a gain matrix.

Then the unforced system, formed by the state controller (43), can be written as

$$
\dot{q}(t) = (A - \mathbf{B}\mathbf{K})q(t) \tag{44}
$$

$$y(t) = (\mathbf{C} - \mathbf{D}\mathbf{K})q(t) \tag{45}$$

The state-feedback control problem is to find, for an optimized (or prescribed) scalar *γ* > 0, the state-feedback gain *<sup>K</sup>* such that the control law guarantees an upper bound of √*<sup>γ</sup>* to *<sup>H</sup>*<sup>∞</sup> norm of the closed-loop transfer function. Thus, Theorem 2 can be reformulated to solve this state-feedback control problem for linear continuous time systems.
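The guaranteed upper bound √γ on the closed-loop H∞ norm can always be audited after a design by gridding the frequency axis and taking the largest singular value of the transfer matrix. A minimal sketch (the grid range and density are arbitrary choices), exercised on a first-order lag whose H∞ norm is exactly 1:

```python
import numpy as np

def hinf_norm_sweep(A, B, C, D, npts=2000):
    """Approximate ||C (sI - A)^{-1} B + D||_inf by gridding s = j*w."""
    n = A.shape[0]
    worst = 0.0
    for w in np.concatenate(([0.0], np.logspace(-3, 3, npts))):
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        worst = max(worst, np.linalg.svd(G, compute_uv=False)[0])
    return worst

# first-order lag 1/(s + 1): its H-infinity norm is exactly 1, attained at w = 0
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])
nrm = hinf_norm_sweep(A, B, C, D)
```

For a closed loop designed below, the same function would be called with A − BK and C − DK in place of A and C, and the result compared against √γ.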

**Theorem 3.** *Closed-loop system (44), (45) is stable with performance* $\|C_c(sI-A_c)^{-1}B\|_{\infty}^{2} \le \gamma$*, A_c = A−BK, C_c = C−DK, if there exist regular square matrices T, U, V ∈ IR^{n×n}, a matrix W ∈ IR^{r×n}, and a scalar γ > 0, γ ∈ IR such that*

$$T = T^T > 0, \quad \gamma > 0 \tag{46}$$

$$\begin{bmatrix} VA^T-W^TB^T+AV^T-BW & -B & T-U^T+VA^T-W^TB^T & -VC^T+W^TD^T \\ \ast & -\gamma I_r & -B^T & D^T \\ \ast & \ast & -U-U^T & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0 \tag{47}$$

*The control law gain matrix is now given as*

$$\mathbf{K} = \mathbf{W}\mathbf{V}^{-T} \tag{48}$$

*Proof.* Considering that det *S*<sup>1</sup> ≠ 0, det *S*<sup>2</sup> ≠ 0, the congruence transform *L*<sup>4</sup> can be defined as follows

$$L_4 = \text{diag}\left[S_1^{-1} \; I_r \; S_2^{-1} \; I_m\right] \tag{49}$$

and pre-multiplying *i*. of (17) by *L*<sup>4</sup> and post-multiplying it by *L<sup>T</sup>* <sup>4</sup> gives

$$\begin{bmatrix} -AS_1^{-T}-S_1^{-1}A^T & -B & S_1^{-1}PS_2^{-T}+S_2^{-T}-S_1^{-1}A^T & S_1^{-1}C^T \\ \ast & -\gamma I_r & -B^T & D^T \\ \ast & \ast & S_2^{-1}+S_2^{-T} & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0 \tag{50}$$

Inserting *A* ← *Ac*, *C* ← *C<sup>c</sup>* into (50) and denoting

$$S_1^{-1}PS_2^{-T} = T, \qquad S_1^{-1} = -V, \qquad S_2^{-1} = -U \tag{51}$$

(50) takes the form

$$\begin{bmatrix} (A-BK)V^T+V(A-BK)^T & -B & T-U^T+V(A-BK)^T & -V(C-DK)^T \\ \ast & -\gamma I_r & -B^T & D^T \\ \ast & \ast & -U-U^T & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0 \tag{52}$$

and with


$$\mathbf{W} = \mathbf{K}\mathbf{V}^T\tag{53}$$

(50) implies (47).
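The gain recovery (48) is just the inversion of the substitution (53); a short numpy sketch with an arbitrary gain and an arbitrary regular V (illustrative values only) confirms the round trip:

```python
import numpy as np

# an arbitrary gain and an arbitrary regular V (illustrative values only)
rng = np.random.default_rng(6)
n, r = 3, 2
K0 = rng.standard_normal((r, n))
V = rng.standard_normal((n, n))
V = V @ V.T + n * np.eye(n)        # regular (here symmetric positive definite)
W = K0 @ V.T                       # the substitution (53)
K = W @ np.linalg.inv(V.T)         # the recovery rule (48)
err = np.max(np.abs(K - K0))
```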

*Y* = �

� 5.1969 7.6083 <sup>−</sup>1.2838 <sup>−</sup>0.5004 <sup>−</sup>2.5381 0.4276 �

and results the control system parameters

*K* =

**3.3 Associate modifications**

*i*.

*ii*.

**Illustrative example**

*X* =

*Y* = �

*K* =

⎡ ⎢ ⎢ ⎣ ⎡ ⎢ ⎢ ⎢ ⎢ ⎣

⎡ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣

*where feasible X, Y, γ, ξ implies the gain matrix (48).*

0.6402 −0.3918 −0.1075 −0.3918 0.7796 0.3443 −0.1075 0.3443 0.9853

0.5451 3.3471 0.6650 0.6113 <sup>−</sup>1.6481 <sup>−</sup>0.3733�

� 5.2296 7.5340 <sup>−</sup>1.3870 <sup>−</sup>0.5590 <sup>−</sup>2.6022 0.4694�

*i*. *γ* = 8.3659 *ii*. *γ* = 35.7411 *ξ* = 5.7959 *ξ* = 30.0832

The closed-loop system response concerning *ii*. of (60) is in the Fig. 2.

⎤ ⎥ ⎥ ⎦

0.4917 3.2177 0.7775 0.6100 <sup>−</sup>1.5418 <sup>−</sup>0.3739�

desired steady-state output variable values were set as [*y*<sup>1</sup> *y*2]=[1−0.5].

*as inserting the same into (34) and setting X* = *P, Y* = *KX, δ*−<sup>1</sup> = *ξ gives*

The example is shown of the closed-loop system response in the forced mode, where in the Fig. 1 the output response, as well as state variable response are presented, respectively. The

Partially Decentralized Design Principle in Large-Scale System Control 371

**Remark 3.** *Inserting <sup>A</sup>* <sup>←</sup> *<sup>A</sup>c, <sup>C</sup>* <sup>←</sup> *<sup>C</sup><sup>c</sup> into (39) and setting <sup>X</sup>* <sup>=</sup> *<sup>P</sup>*−1*, <sup>Y</sup>* <sup>=</sup> *KX, <sup>δ</sup>*−<sup>1</sup> <sup>=</sup> *<sup>ξ</sup>, as well*

*AX*+*XAT*−*BY*−*YTB<sup>T</sup> B XAT*−*YTB<sup>T</sup> XCT*−*YTD<sup>T</sup>* <sup>∗</sup> <sup>−</sup>*γI<sup>r</sup> <sup>B</sup><sup>T</sup> <sup>D</sup><sup>T</sup>* ∗ ∗ −2*ξ I<sup>n</sup>* **0** ∗ ∗∗ −*I<sup>m</sup>*

*AX*+*XAT*−*BY*−*YTB<sup>T</sup> XCT*−*YTD<sup>T</sup> AX*−*BY B*

∗ −*γI<sup>m</sup> CX*−*DY D* ∗ ∗ −2*ξ I<sup>n</sup>* **0** ∗ ∗∗ −*I<sup>r</sup>*

Considering the same parameters of (41), (42) and desired output values as is given above then solving (59), (59) with respect to LMI variables *X*, *Y*, and *γ* given task was feasible with

*X* =

*Y* = �

*K* =

*ρ*(*A<sup>c</sup>* ) = {−6.3921, −7.7931 ± 1.8646 i} *ρ*(*A<sup>c</sup>* ) = {−2.3005, −3.8535, −8.7190}

⎡ ⎢ ⎢ ⎣

, *γ* = 8.4359

, *ρ*(*Ac*) = {−5.5999, −8.3141 ± 1.6528 i}

*X* = *X<sup>T</sup>* > 0, *γ* > 0, *ξ* > 0 (59)

⎤ ⎥ ⎥ ⎥ ⎥ ⎦

⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦

8.7747 −4.7218 −1.2776 −4.7218 5.8293 0.4784 −1.2776 0.4784 8.4785

2.7793 14.7257 5.1591 3.3003 <sup>−</sup>6.8347 <sup>−</sup>1.8016�

� 3.1145 4.9836 0.7966 <sup>−</sup>0.4874 <sup>−</sup>1.5510 <sup>−</sup>0.1984�

⎤ ⎥ ⎥ ⎦

< 0

< 0

(60)

Fig. 1. System output and state response

### **3.2 Basic modification**

**Corollary 3.** *Following the same lines of that for Theorem 2 it is immediate by inserting A* ← *Ac, C* ← *C<sup>c</sup> into i*. *of (24) and denoting*

$$P^{-1} = \mathbf{X}, \qquad \mathbf{S}\_2 = \mathbf{Z} \tag{54}$$

*that*

$$
\begin{bmatrix}
\mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{A}^T - \mathbf{B}\mathbf{K}\mathbf{X} - \mathbf{X}\mathbf{K}^T\mathbf{B}^T & \mathbf{B} & -\mathbf{X}\mathbf{A}^T + \mathbf{X}\mathbf{K}^T\mathbf{B}^T \mathbf{X}\mathbf{C}^T - \mathbf{X}\mathbf{K}^T\mathbf{D}^T \\
\ast & -\gamma I\_T & -\mathbf{B}^T & \mathbf{D}^T \\
\ast & \ast & -\mathbf{Z} - \mathbf{Z}^T & \mathbf{0} \\
\ast & \ast & \ast & \ast & -I\_m
\end{bmatrix} < 0 \qquad(55)
$$

*Thus, using Schur complement equivalency, and with*

$$\mathbf{Y} = \mathbf{K}\mathbf{X} \tag{56}$$

*(58) implies*

$$\mathbf{X} = \mathbf{X}^T > \mathbf{0}, \qquad \gamma > \mathbf{0} \tag{57}$$

$$
\begin{bmatrix}
\mathbf{A}\mathbf{X} + \mathbf{X}\mathbf{A}^T - \mathbf{B}\mathbf{Y} - \mathbf{Y}^T\mathbf{B}^T & \mathbf{B} & \mathbf{X}\mathbf{A}^T - \mathbf{Y}^T\mathbf{B}^T \mathbf{X}\mathbf{C}^T - \mathbf{Y}^T\mathbf{D}^T \\
\ast & -\gamma I\_r & \mathbf{B}^T & \mathbf{D}^T \\
\ast & \ast & -\mathbf{Z} - \mathbf{Z}^T & \mathbf{0} \\
\ast & \ast & \ast & -I\_m
\end{bmatrix} < 0 \tag{58}
$$

### **Illustrative example**

The approach given above is illustrated by an example where the parameters of the (41), (42) are

$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -5 & -9 & -5 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \ 3 \\ 2 \ 1 \\ 1 \ 5 \end{bmatrix}, \quad C^T = \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ -2 & 0 \\ \mathbf{v} & \mathbf{v} & \mathbf{v} \end{bmatrix}, \quad D = \mathbf{0}$$

Solving (57), (58) with respect to the next LMI variables *X*, *Y*, *Z*, and *δ* using SeDuMi (Self-Dual-Minimization) package for Matlab (Peaucelle et al. (1994)) given task was feasible with

$$\mathbf{X} = \begin{bmatrix} 0.6276 & -0.3796 & -0.0923 \\ -0.3796 & 0.7372 & 0.3257 \\ -0.0923 & 0.3257 & 0.9507 \end{bmatrix}, \quad \mathbf{Z} = \begin{bmatrix} 5.0040 & 0.1209 & 0.4891 \\ 0.1209 & 4.9512 & 0.4888 \\ 0.4891 & 0.4888 & 5.2859 \end{bmatrix}$$

370 Recent Advances in Robust Control – Novel Approaches and Design Methods Partially Decentralized Design Principle in Large-Scale System Control <sup>11</sup> Partially Decentralized Design Principle in Large-Scale System Control 371

$$Y = \begin{bmatrix} 0.4917 & 3.2177 & 0.7775 \\ 0.6100 & -1.5418 & -0.3739 \end{bmatrix}, \qquad \gamma = 8.4359$$

and results the control system parameters

$$\mathbf{K} = \begin{bmatrix} 5.1969 & 7.6083 \ -1.2838 \\ -0.5004 \ -2.5381 & 0.4276 \end{bmatrix}, \quad \rho(\mathbf{A}\_{\mathcal{E}}) = \{-5.5999, \ -8.3141 \pm 1.6528 \,\mathrm{i}\}$$

The example is shown of the closed-loop system response in the forced mode, where in the Fig. 1 the output response, as well as state variable response are presented, respectively. The desired steady-state output variable values were set as [*y*<sup>1</sup> *y*2]=[1−0.5].

Fig. 1. System output and state response

### **3.3 Associate modifications**

**Remark 3.** *Inserting A* ← *A<sup>c</sup>, C* ← *C<sup>c</sup> into (39) and setting X* = *P*<sup>−1</sup>*, Y* = *KX, δ*<sup>−1</sup> = *ξ, as well as inserting the same into (34) and setting X* = *P, Y* = *KX, δ*<sup>−1</sup> = *ξ, gives*

$$X = X^T > 0, \quad \gamma > 0, \quad \xi > 0 \tag{59}$$

$$i. \quad \begin{bmatrix} AX + XA^T - BY - Y^TB^T & B & XA^T - Y^TB^T & XC^T - Y^TD^T \\ \ast & -\gamma I_r & B^T & D^T \\ \ast & \ast & -2\xi I_n & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0$$

$$ii. \quad \begin{bmatrix} AX + XA^T - BY - Y^TB^T & XC^T - Y^TD^T & AX - BY & B \\ \ast & -\gamma I_m & CX - DY & D \\ \ast & \ast & -2\xi I_n & \mathbf{0} \\ \ast & \ast & \ast & -I_r \end{bmatrix} < 0 \tag{60}$$

*where a feasible solution X, Y, γ, ξ yields the gain matrix (48).*

#### **Illustrative example**

Considering the same parameters of (41), (42) and the same desired output values as given above, solving (59), (60) with respect to the LMI variables *X*, *Y*, *γ*, and *ξ*, the task was feasible with

*i*. *γ* = 8.3659, *ξ* = 5.7959; *ii*. *γ* = 35.7411, *ξ* = 30.0832

$$X_{i} = \begin{bmatrix} 0.6402 & -0.3918 & -0.1075 \\ -0.3918 & 0.7796 & 0.3443 \\ -0.1075 & 0.3443 & 0.9853 \end{bmatrix}, \quad X_{ii} = \begin{bmatrix} 8.7747 & -4.7218 & -1.2776 \\ -4.7218 & 5.8293 & 0.4784 \\ -1.2776 & 0.4784 & 8.4785 \end{bmatrix}$$

$$Y_{i} = \begin{bmatrix} 0.5451 & 3.3471 & 0.6650 \\ 0.6113 & -1.6481 & -0.3733 \end{bmatrix}, \quad Y_{ii} = \begin{bmatrix} 2.7793 & 14.7257 & 5.1591 \\ 3.3003 & -6.8347 & -1.8016 \end{bmatrix}$$

$$K_{i} = \begin{bmatrix} 5.2296 & 7.5340 & -1.3870 \\ -0.5590 & -2.6022 & 0.4694 \end{bmatrix}, \quad K_{ii} = \begin{bmatrix} 3.1145 & 4.9836 & 0.7966 \\ -0.4874 & -1.5510 & -0.1984 \end{bmatrix}$$

$$\rho(A_c)_{i} = \{-6.3921, \ -7.7931 \pm 1.8646\,\mathrm{i}\}, \quad \rho(A_c)_{ii} = \{-2.3005, \ -3.8535, \ -8.7190\}$$

The closed-loop system response for case *ii*. of (60) is shown in Fig. 2.

Fig. 2. System output and state response

**Remark 4.** *The closed-loop system (44), (45) is stable with quadratic performance γ* > 0 *and the inequalities (15) are true if and only if there exist a symmetric positive definite matrix X* > 0*, X* ∈ *IR<sup>n×n</sup>, a matrix Y* ∈ *IR<sup>r×n</sup>, and a scalar γ* > 0*, γ* ∈ *IR such that*

$$X = X^T > 0, \quad \gamma > 0, \quad \xi > 0 \tag{61}$$

$$i. \quad \begin{bmatrix} AX + XA^T - BY - Y^TB^T & B & XC^T - Y^TD^T \\ \ast & -\gamma I_r & \mathbf{0} \\ \ast & \ast & -I_m \end{bmatrix} < 0$$

$$ii. \quad \begin{bmatrix} AX + XA^T - BY - Y^TB^T & XC^T - Y^TD^T & B \\ \ast & -\gamma I_m & D \\ \ast & \ast & -I_r \end{bmatrix} < 0 \tag{62}$$

### **Illustrative example**

Using the same example considerations as given above, solving (61), (62) with respect to the LMI variables *X*, *Y*, and *γ*, the task was feasible with

*i*. *γ* = 6.8386; *ii*. *γ* = 17.6519

$$X_{i} = \begin{bmatrix} 1.1852 & 0.1796 & 0.6494 \\ 0.1796 & 1.4325 & 1.1584 \\ 0.6494 & 1.1584 & 2.1418 \end{bmatrix}, \quad X_{ii} = \begin{bmatrix} 6.0755 & -0.9364 & 1.0524 \\ -0.9364 & 5.1495 & 2.4320 \\ 1.0524 & 2.4320 & 7.2710 \end{bmatrix}$$

$$Y_{i} = \begin{bmatrix} 2.0355 & 3.7878 & -3.2286 \\ 0.6142 & -2.1847 & -3.0636 \end{bmatrix}, \quad Y_{ii} = \begin{bmatrix} 6.3651 & 9.9547 & -8.7603 \\ 2.2941 & -5.3741 & -6.2975 \end{bmatrix}$$

$$K_{i} = \begin{bmatrix} 4.4043 & 7.8029 & -7.0627 \\ 1.5030 & -0.3349 & -1.7049 \end{bmatrix}, \quad K_{ii} = \begin{bmatrix} 2.0688 & 3.5863 & -2.7038 \\ 0.4033 & -0.6338 & -0.7125 \end{bmatrix}$$

$$\rho(A_c)_{i} = \{-4.3952, \ -4.6009 \pm 14.8095\,\mathrm{i}\}, \quad \rho(A_c)_{ii} = \{-2.2682, \ -3.1415 \pm 9.634\,\mathrm{i}\}$$

The simulation results, corresponding to case *i*. of (62), are shown in Fig. 3.

Fig. 3. System output and state response

It is evident that different design conditions, derived from equivalent but differently structured bounded real lemma forms, result in different numerical solutions.
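This observation can be checked directly: the two gains returned by cases *i*. and *ii*. of (62) both stabilize the same plant, yet place the closed-loop eigenvalues quite differently. A minimal NumPy sketch using the numerical values reported above:

```python
import numpy as np

# Open-loop data shared by both cases (the example of (41), (42)).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-5.0, -9.0, -5.0]])
B = np.array([[1.0, 3.0],
              [2.0, 1.0],
              [1.0, 5.0]])

# Gains obtained from cases i. and ii. of (62).
K_i = np.array([[4.4043, 7.8029, -7.0627],
                [1.5030, -0.3349, -1.7049]])
K_ii = np.array([[2.0688, 3.5863, -2.7038],
                 [0.4033, -0.6338, -0.7125]])

for K in (K_i, K_ii):
    eigs = np.linalg.eigvals(A - B @ K)
    assert eigs.real.max() < 0   # both gains stabilize the plant
    print(np.round(np.sort_complex(eigs), 4))
```

Case *i*. yields lightly damped poles near −4.6 ± 14.8 i, while case *ii*. yields a slower but differently damped spectrum, matching the values listed above.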

### **3.4 Dependent modifications**

Similar extended LMI characterizations can be derived by formulating the LMI in terms of the product *ξP*, where *ξ* is a prescribed scalar, to avoid a BMI formulation (Veselý & Rosinová (2009)).

**Theorem 4.** *The closed-loop system (1), (2) is stable with quadratic performance* ‖*C<sup>c</sup>*(*sI*−*A<sup>c</sup>*)<sup>−1</sup>*B*‖<sup>2</sup><sub>∞</sub> ≤ *γ, A<sup>c</sup>* = *A*−*BK, C<sup>c</sup>* = *C*−*DK, if for given ξ* > 0 *there exist a symmetric positive definite matrix X* > 0*, X* ∈ *IR<sup>n×n</sup>, a regular square matrix Z* ∈ *IR<sup>n×n</sup>, a matrix Y* ∈ *IR<sup>r×n</sup>, and a scalar γ* > 0*, γ* ∈ *IR such that*

$$X = X^T > 0, \quad \gamma > 0 \tag{63}$$

$$i. \quad \begin{bmatrix} AX + XA^T - BY - Y^TB^T & B & XA^T - Y^TB^T & XC^T - Y^TD^T \\ \ast & -\gamma I_r & B^T & D^T \\ \ast & \ast & -2\xi X & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0$$

$$ii. \quad \begin{bmatrix} AX + XA^T - BY - Y^TB^T & XC^T - Y^TD^T & AX - BY & B \\ \ast & -\gamma I_m & CX - DY & D \\ \ast & \ast & -2\xi X & \mathbf{0} \\ \ast & \ast & \ast & -I_r \end{bmatrix} < 0 \tag{64}$$

*where K is given in (48).*

*Proof. i*. Inserting *A* ← *A<sup>c</sup>*, *C* ← *C<sup>c</sup>* into (36) and setting *X* = *P*<sup>−1</sup>, *Y* = *KX*, and *ξ* = *δ*<sup>−1</sup>, (36) implies *ii*. of (64).

*ii*. Inserting *A* ← *A<sup>c</sup>*, *C* ← *C<sup>c</sup>* into (28) and setting *X* = *P*, *Y* = *KX*, and *ξ* = *δ*<sup>−1</sup>, (28) implies *i*. of (64).

#### **Illustrative example**

Considering the same system parameters of (1), (2) and the same desired output values as given above, solving (63), (64) with respect to the LMI variables *X*, *Y*, and *γ* with prescribed *ξ* = 10 and *ξ* = 30, respectively, the task was feasible with

*i*. *γ* = 8.3731, *ξ* = 10; *ii*. *γ* = 17.6519, *ξ* = 30

$$X_{i} = \begin{bmatrix} 0.5203 & -0.2338 & 0.0038 \\ -0.2338 & 0.7293 & 0.2359 \\ 0.0038 & 0.2359 & 0.7728 \end{bmatrix}, \quad X_{ii} = \begin{bmatrix} 0.8926 & -0.2332 & 0.0489 \\ -0.2332 & 1.2228 & 0.3403 \\ 0.0489 & 0.3403 & 1.3969 \end{bmatrix}$$

$$Y_{i} = \begin{bmatrix} 0.8689 & 3.2428 & 0.6068 \\ 0.3503 & -1.6271 & -0.1495 \end{bmatrix}, \quad Y_{ii} = \begin{bmatrix} 3.0546 & 8.8611 & 0.2482 \\ 2.0238 & -2.8097 & 3.0331 \end{bmatrix}$$

$$K_{i} = \begin{bmatrix} 4.4898 & 6.2565 & -1.1462 \\ -0.4912 & -2.5815 & 0.5968 \end{bmatrix}, \quad K_{ii} = \begin{bmatrix} 5.8920 & 8.9877 & -2.2185 \\ 1.3774 & -2.8170 & 2.8094 \end{bmatrix}$$

$$\rho(A_c)_{i} = \{-8.3448, \ -5.7203 \pm 3.6354\,\mathrm{i}\}, \quad \rho(A_c)_{ii} = \{-4.6346, \ -12.3015, \ -25.0751\}$$

The same simulation study as above was carried out; the simulation results concerning *ii*. of (64) for the states and output variables of the system are shown in Fig. 4.

Fig. 4. System output and state response

Note that other nontrivial solutions can be obtained using different settings of *S<sup>l</sup>*, *l* = 1, 2.

It should also be noted that the cost value *γ* will not be a monotonically decreasing function of decreasing *ξ* if *δ* = *ξ*<sup>−1</sup> is chosen.

### **4. Uncertain continuous-time systems**

The importance of Theorem 3 is that it separates *T* from *A*, *B*, *C*, and *D*, i.e. there are no terms containing the product of *T* and any of them. This makes it possible to derive other forms of the bounded real lemma for a system with polytopic uncertainties by using a parameter-dependent Lyapunov function.

### **4.1 Problem description**


Assuming that the matrices *A*, *B*, *C*, and *D* of (1), (2) are not precisely known but belong to a polytopic uncertainty domain O,

$$\mathcal{O} := \left\{ (A, B, C, D)(a) : (A, B, C, D)(a) = \sum_{i=1}^{s} a_i \,(A_i, B_i, C_i, D_i), \quad a \in \mathcal{Q} \right\} \tag{65}$$

$$\mathcal{Q} = \left\{ (a\_1, a\_2, \dots, a\_s) : \sum\_{i=1}^s a\_i = 1; \quad a\_i > 0, \ i = 1, 2, \dots, s \right\} \tag{66}$$

where Q is the unit simplex, *Ai*, *Bi*, *Ci*, and *D<sup>i</sup>* are constant matrices with appropriate dimensions, and *ai*, *i* = 1, 2, . . . , *s* are time-invariant uncertainties.

Since *a* is constrained to the unit simplex (66), the matrices (*A*, *B*, *C*, *D*)(*a*) are affine functions of the uncertain parameter vector *a* ∈ *IR<sup>s</sup>*, described by the convex combination of the vertex matrices (*Ai*, *Bi*, *Ci*, *Di*), *i* = 1, 2, . . . , *s*.

The state-feedback control problem is to find, for a given *γ* > 0, the state-feedback gain matrix *K* such that the control law

$$u(t) = -Kq(t) \tag{67}$$

guarantees an upper bound of √*γ* on the *H*<sup>∞</sup> norm.

By virtue of the property of convex combinations, (48) can be readily used to derive the robust performance criterion.

**Theorem 5.** *Given system (65), (66), the closed-loop H*<sup>∞</sup> *norm is less than a real value* √*γ* > 0 *if there exist positive matrices T<sup>i</sup>* ∈ *IR<sup>n×n</sup>*, *i* = 1, 2, . . . , *s, real square matrices U*, *V* ∈ *IR<sup>n×n</sup>, and a real matrix W* ∈ *IR<sup>r×n</sup> such that*

$$\gamma > 0 \tag{68}$$

$$\begin{bmatrix} VA_i^T + A_iV^T - W^TB_i^T - B_iW & -B_i & T_i - U^T + VA_i^T - W^TB_i^T & -VC_i^T + W^TD_i^T \\ \ast & -\gamma I_r & -B_i^T & D_i^T \\ \ast & \ast & -U - U^T & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0 \tag{69}$$

*If the existence is affirmative, the state-feedback gain K is given by*

$$\mathbf{K} = \mathbf{W}\mathbf{V}^{-T} \tag{70}$$

*Proof.* It is obvious that (47), (48) implies directly (69), (70).

**Remark 5.** *Thereby, robust control performance of uncertain continuous-time systems is guaranteed by a parameter-dependent Lyapunov matrix, which is constructed as*

$$T(a) = \sum\_{i=1}^{s} a\_i T\_i \tag{71}$$
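The construction in Remark 5 can be illustrated numerically: any convex combination of positive definite vertex matrices remains positive definite, so *T*(*a*) is a valid parameter-dependent Lyapunov matrix for every *a* in the unit simplex. A minimal NumPy sketch (the *T*<sub>1</sub>, *T*<sub>2</sub> values are borrowed, as an assumption, from the Section 4.2 illustrative example):

```python
import numpy as np

# Vertex Lyapunov matrices (taken from the later illustrative example).
T1 = np.array([[7.0235, 2.4579, 2.6301],
               [2.4579, 7.4564, -0.4037],
               [2.6301, -0.4037, 5.3152]])
T2 = np.array([[6.6651, 2.6832, 2.0759],
               [2.6832, 7.4909, -0.2568],
               [2.0759, -0.2568, 6.2386]])

# T(a) = a1*T1 + (1 - a1)*T2 stays positive definite on the unit simplex.
for a1 in np.linspace(0.0, 1.0, 11):
    Ta = a1 * T1 + (1.0 - a1) * T2
    assert np.linalg.eigvalsh(Ta).min() > 0
```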

### **4.2 Dependent modifications**

**Theorem 6.** *Given system (65), (66), the closed-loop H*<sup>∞</sup> *norm is less than a real value* √*γ* > 0 *if there exist positive symmetric matrices T<sup>i</sup>* ∈ *IR<sup>n×n</sup>*, *i* = 1, 2, . . . , *n, a real square matrix V* ∈ *IR<sup>n×n</sup>, a real matrix W* ∈ *IR<sup>r×n</sup>, and a positive scalar δ* > 0*, δ* ∈ *IR such that*

$$T_i > 0, \ i = 1, 2, \ldots, n, \quad \gamma > 0 \tag{72}$$

$$i. \quad \begin{bmatrix} VA_i^T + A_iV^T - W^TB_i^T - B_iW & -B_i & T_i - \delta V^T + VA_i^T - W^TB_i^T & -VC_i^T + W^TD_i^T \\ \ast & -\gamma I_r & -B_i^T & D_i^T \\ \ast & \ast & -\delta(V + V^T) & \mathbf{0} \\ \ast & \ast & \ast & -I_m \end{bmatrix} < 0$$

$$ii. \quad \begin{bmatrix} VA_i^T + A_iV^T - W^TB_i^T - B_iW & VC_i^T - W^TD_i^T & T_i - V^T + \delta A_iV - \delta B_iW & B_i \\ \ast & -\gamma I_m & \delta C_iV - \delta D_iW & D_i \\ \ast & \ast & -\delta(V + V^T) & \mathbf{0} \\ \ast & \ast & \ast & -I_r \end{bmatrix} < 0 \tag{73}$$

*If the existence is affirmative, the state-feedback gain K is given by*

$$\mathbf{K} = \mathbf{W}\mathbf{V}^{-T} \tag{74}$$

*Proof. i*. Setting *U* = *δV*, (69) implies *i*. of (73). *ii*. Setting *S*<sup>1</sup> = −*V* and *S*<sup>2</sup> = −*δV*, *ii*. of (17) implies *ii*. of (73).

### **Illustrative example**

The approach given above is illustrated by a numerical example with the system matrix parameters (*D*(*t*) = *D* = **0**)

$$A(t) = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -5 & -6r(t) & -5r(t) \end{bmatrix}, \quad B(t) = B = \begin{bmatrix} 1 & 3 \\ 2 & 1 \\ 1 & 5 \end{bmatrix}, \quad C^T(t) = C^T = \begin{bmatrix} 1 & 1 \\ 2 & -1 \\ -2 & 0 \end{bmatrix}$$

where the time-varying uncertain parameter *r*(*t*) lies within the interval [0.5, 1.5]. In order to represent the uncertainty in *r*(*t*), it is assumed that the matrix parameters belong to the polytopic uncertainty domain O,

$$\mathcal{O} := \left\{ (A, B, C, D)(a) : (A, B, C, D)(a) = \sum_{i=1}^{2} a_i \,(A_i, B_i, C_i, D_i), \quad a \in \mathcal{Q} \right\}$$

$$\mathcal{Q} = \left\{ (a\_1, a\_2) : a\_2 = 1 - a\_1; \quad 0 < a\_1 < 1 \right\}$$

$$A\_1 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -5 & -3 & -2.5 \end{bmatrix} \qquad A\_2 = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -5 & -9 & -7.5 \end{bmatrix}$$

$$B_1 = B_2 = B, \qquad C_1^T = C_2^T = C^T, \qquad D_1 = D_2 = \mathbf{0}$$

$$A = a_1 A_1 + (1 - a_1) A_2, \quad A_c = A - BK, \quad A_{c0} = A_0 - BK$$
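Since only the last row of *A*(*t*) depends on *r*(*t*), the convex combination reproduces *A*(*t*) exactly: *a*<sub>1</sub>*A*<sub>1</sub> + (1 − *a*<sub>1</sub>)*A*<sub>2</sub> equals *A*(*t*) evaluated at *r* = 0.5*a*<sub>1</sub> + 1.5(1 − *a*<sub>1</sub>) = 1.5 − *a*<sub>1</sub> (this closed-form mapping is our reading of the example, not stated explicitly in the text). A short NumPy check:

```python
import numpy as np

def A_of_r(r):
    # A(t) with the uncertain parameter r(t) in [0.5, 1.5].
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [-5.0, -6.0 * r, -5.0 * r]])

A1, A2 = A_of_r(0.5), A_of_r(1.5)    # polytope vertices

for a1 in (0.2, 0.5, 0.9):
    A_combo = a1 * A1 + (1.0 - a1) * A2
    r = 1.5 - a1                      # induced parameter value
    assert np.allclose(A_combo, A_of_r(r))
```

For the feasibility point *a*<sub>1</sub> = 0.2 used below, this corresponds to *r* = 1.3.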

Thus, solving (72) and *i*. of (73) with respect to the LMI variables *T*<sup>1</sup>, *T*<sup>2</sup>, *V*, and *W*, the task was feasible for *a*<sup>1</sup> = 0.2, *δ* = 20. Subsequently, with

$$\gamma = 10.5304$$


$$T_1 = \begin{bmatrix} 7.0235 & 2.4579 & 2.6301 \\ 2.4579 & 7.4564 & -0.4037 \\ 2.6301 & -0.4037 & 5.3152 \end{bmatrix}, \quad T_2 = \begin{bmatrix} 6.6651 & 2.6832 & 2.0759 \\ 2.6832 & 7.4909 & -0.2568 \\ 2.0759 & -0.2568 & 6.2386 \end{bmatrix}$$

$$V = \begin{bmatrix} 0.2250 & -0.0758 & -0.0350 \\ 0.0940 & 0.1801 & -0.0241 \\ 0.1473 & 0.0375 & 0.1992 \end{bmatrix}, \quad W = \begin{bmatrix} 0.7191 & 3.0209 & 0.2881 \\ 0.1964 & -0.7401 & 0.7382 \end{bmatrix}$$

the control law parameters were computed as

$$\mathbf{K} = \begin{bmatrix} 6.5392 & 12.5891 & -5.7581 \\ 0.2809 & -3.6944 & 4.1922 \end{bmatrix}, \quad \|\mathbf{K}\| = 16.3004$$

and, inserting this gain into the state control law, the closed-loop system matrix eigenvalue set was obtained as

$$
\rho(A\_{c0}) = \{-2.0598, -22.2541, -24.7547\}
$$
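The gain recovery step (74) is a plain matrix computation; a minimal numpy sketch of evaluating it with the *V* and *W* values reported above:

```python
import numpy as np

# LMI solution blocks reported in the example above.
V = np.array([[0.2250, -0.0758, -0.0350],
              [0.0940,  0.1801, -0.0241],
              [0.1473,  0.0375,  0.1992]])
W = np.array([[0.7191,  3.0209, 0.2881],
              [0.1964, -0.7401, 0.7382]])

# (74): K = W V^{-T}. Transposing gives V K^T = W^T, so a linear
# solve avoids forming the matrix inverse explicitly.
K = np.linalg.solve(V, W.T).T

print(K)                   # the state-feedback gain K
print(np.linalg.norm(K))   # its matrix norm ||K||
```

The same computation applies unchanged to the second solution reported below.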

Solving (72) and *ii*. of (73) with respect to the LMI variables *T*1, *T*2, *V*, *W*, and *δ* given task was feasible for *a*<sup>1</sup> = 0.2, *δ* = 20, too, and subsequently, with

$$\gamma = 10.5304$$

$$\begin{aligned} T\_1 &= \begin{bmatrix} 239.1234 & 108.9248 & 250.1206 \\ 108.9248 & 307.9712 & 13.8497 \\ 250.1206 & 13.8497 & 397.1333 \end{bmatrix}, & \quad T\_2 &= \begin{bmatrix} 222.8598 & 121.9115 & 251.6458 \\ 121.9115 & 341.0193 & 63.4202 \\ 251.6458 & 63.4202 & 445.9279 \end{bmatrix}, \\\ V &= \begin{bmatrix} 6.5513 & -2.0718 & -0.2451 \\ 2.1635 & 2.2173 & 0.1103 \\ 0.2448 & 0.2964 & 0.4568 \end{bmatrix}, & \quad W &= \begin{bmatrix} 4.6300 & 6.6167 & -2.6780 \\ 1.7874 & -0.7898 & 4.3214 \end{bmatrix} \end{aligned}$$

the closed-loop parameters were computed as

$$\mathbf{K} = \begin{bmatrix} 1.1296 & 2.2771 & -7.9446 \\ 0.2888 & -1.1375 & 10.0427 \end{bmatrix}, \quad \|\mathbf{K}\| = 13.1076$$

$$\rho(A_{c0}) = \{-50.4633,\; -1.1090 \pm 2.1623\,\mathrm{i}\}$$

Partially Decentralized Design Principle in Large-Scale System Control 379

Fig. 6. System output and state response

It is evident that the eigenvalue spectrum *ρ*(*Ac*0) of the closed control loop is stable in both cases. However, taking the same values of *γ*, the solutions differ especially in the closed-loop dominant eigenvalues, as well as in the control law gain matrix norm, which together determine the closed-loop system matrix eigenstructure. To prefer one of them is not as easy as it seems at first sight, and the smaller gain norm may not be the best choice.

Fig. 5 illustrates the simulation results with respect to a solution of *i*. of (73) and (72). The initial system state was set as [*q*<sub>1</sub> *q*<sub>2</sub> *q*<sub>3</sub>]<sup>T</sup> = [0.5 1 0]<sup>T</sup>, the desired steady-state output variable values were set as [*y*<sub>1</sub> *y*<sub>2</sub>]<sup>T</sup> = [1 −0.5]<sup>T</sup>, and the system matrix parameter change from *p* = 1 to *p* = 0.54 was realized 5 seconds after the state control start-up.

The same simulation study was carried out using the control parameters obtained by solving *ii*. of (73), (72), and the simulation results are shown in Fig. 6. It can be seen that the presented control scheme partly eliminates the effects of parameter uncertainties and guarantees the quadratic stability of the closed-loop system.

### **5. Pairwise-autonomous principle in control design**

### **5.1 Problem description**

Considering the system model of the form (1), (2), i.e.

$$
\dot{q}(t) = Aq(t) + Bu(t) \tag{75}
$$

$$\mathbf{y}(t) = \mathbf{C}\boldsymbol{q}(t) + \mathbf{D}\boldsymbol{u}(t) \tag{76}$$

but reordered in such a way that

$$\mathbf{A} = \begin{bmatrix} \mathbf{A}\_{i,l} \end{bmatrix}, \ \mathbf{C} = \begin{bmatrix} \mathbf{C}\_{i,l} \end{bmatrix}, \ \mathbf{B} = \text{diag}\begin{bmatrix} \mathbf{B}\_i \end{bmatrix}, \ \mathbf{D} = \mathbf{0} \tag{77}$$

where *i*, *l* = 1, 2, . . . , *p*, and all parameters and variables have the same dimensions as given in Subsection 2.1. Thus, respecting the above given matrix structures, it yields

$$\dot{\boldsymbol{q}}_h(t) = \mathbf{A}_{hh}\boldsymbol{q}_h(t) + \sum_{l=1,\, l \neq h}^{p} \mathbf{A}_{hl}\boldsymbol{q}_l(t) + \mathbf{B}_h\boldsymbol{u}_h(t) \tag{78}$$

$$\mathbf{y}\_h(t) = \mathbf{C}\_{hh}\mathbf{q}\_h(t) + \sum\_{\substack{l=1,\ l \neq h}}^p \mathbf{C}\_{hl}\mathbf{q}\_l(t) \tag{79}$$

where *q*<sub>h</sub>(*t*) ∈ IR<sup>n<sub>h</sub></sup>, *u*<sub>h</sub>(*t*) ∈ IR<sup>r<sub>h</sub></sup>, *y*<sub>h</sub>(*t*) ∈ IR<sup>m<sub>h</sub></sup>, *A*<sub>hl</sub> ∈ IR<sup>n<sub>h</sub>×n<sub>l</sub></sup>, *B*<sub>h</sub> ∈ IR<sup>n<sub>h</sub>×r<sub>h</sub></sup>, and *C*<sub>hl</sub> ∈ IR<sup>m<sub>h</sub>×n<sub>l</sub></sup>, respectively, and *n* = ∑<sup>p</sup><sub>l=1</sub> *n*<sub>l</sub>, *r* = ∑<sup>p</sup><sub>l=1</sub> *r*<sub>l</sub>, *m* = ∑<sup>p</sup><sub>l=1</sub> *m*<sub>l</sub>.
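The block structure in (77)-(79) amounts to index slicing along the subsystem dimensions; a small numpy sketch (the dimensions and matrix values are illustrative, not from the chapter):

```python
import numpy as np

# Hypothetical subsystem state dimensions n_1, ..., n_p; n is their sum.
dims = [2, 3, 1]
offs = np.concatenate(([0], np.cumsum(dims)))    # block offsets into the global state
n = int(offs[-1])
A = np.arange(n * n, dtype=float).reshape(n, n)  # stand-in for the global matrix A

def block(M, h, l):
    """Return the block M_hl of M partitioned along dims (1-based indices h, l)."""
    return M[offs[h - 1]:offs[h], offs[l - 1]:offs[l]]

A_12 = block(A, 1, 2)   # A_12 couples subsystem 2 into subsystem 1, as in (78)
print(A_12.shape)       # (n_1, n_2) = (2, 3)
```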

The problem of interest is to design the closed-loop system using a linear memoryless state feedback controller of the form

$$\boldsymbol{u}(t) = -\mathbf{K}\boldsymbol{q}(t) \tag{80}$$

in such a way that the large-scale system is stable, and

$$\mathbf{K} = \begin{bmatrix} \mathbf{K}_{11} & \mathbf{K}_{12} & \dots & \mathbf{K}_{1p} \\ \mathbf{K}_{21} & \mathbf{K}_{22} & \dots & \mathbf{K}_{2p} \\ & \vdots & & \\ \mathbf{K}_{p1} & \mathbf{K}_{p2} & \dots & \mathbf{K}_{pp} \end{bmatrix}, \qquad \mathbf{K}_{hh} = \sum_{l=1,\, l \neq h}^{p} \mathbf{K}_{h}^{l} \tag{81}$$

$$\boldsymbol{u}_h(t) = -\mathbf{K}_{hh}\boldsymbol{q}_h(t) - \sum_{l=1,\, l \neq h}^{p} \mathbf{K}_{hl}\boldsymbol{q}_l(t), \quad h = 1, 2, \ldots, p \tag{82}$$

**Lemma 1.** *Unforced (autonomous) system (75)-(77) is stable if there exists a set of symmetric matrices*

$$\boldsymbol{P}\_{hk}^{\diamond} = \begin{bmatrix} \mathbf{P}\_{h}^{k} & \mathbf{P}\_{hk} \\ \mathbf{P}\_{kh} & \mathbf{P}\_{k}^{h} \end{bmatrix} \tag{83}$$

*such that*



$$\sum\_{h=1}^{p-1} \sum\_{k=h+1}^{p} \left( \dot{\boldsymbol{q}}\_{hk}^{T}(t) \begin{bmatrix} \mathbf{P}\_{h}^{k} \ \mathbf{P}\_{hk} \\ \mathbf{P}\_{kh} \ \mathbf{P}\_{k}^{h} \end{bmatrix} \boldsymbol{q}\_{hk}(t) + \boldsymbol{q}\_{hk}^{T}(t) \begin{bmatrix} \mathbf{P}\_{h}^{k} \ \mathbf{P}\_{hk} \\ \mathbf{P}\_{kh} \ \mathbf{P}\_{k}^{h} \end{bmatrix} \dot{\boldsymbol{q}}\_{hk}(t) \right) < 0 \tag{84}$$

*where*

$$
\dot{\boldsymbol{q}}_{hk}(t) = \begin{bmatrix} \mathbf{A}_{hh} & \mathbf{A}_{hk} \\ \mathbf{A}_{kh} & \mathbf{A}_{kk} \end{bmatrix} \boldsymbol{q}_{hk}(t) + \sum_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \mathbf{A}_{hl} \\ \mathbf{A}_{kl} \end{bmatrix} \boldsymbol{q}_{l}(t) \tag{85}
$$

$$\boldsymbol{q}_{hk}^{T}(t) = \begin{bmatrix} \boldsymbol{q}_{h}^{T}(t) & \boldsymbol{q}_{k}^{T}(t) \end{bmatrix} \tag{86}$$

*Proof.* Defining the Lyapunov function as

$$v(q(t)) = q^T(t)Pq(t) > 0\tag{87}$$

where *<sup>P</sup>* <sup>=</sup> *<sup>P</sup><sup>T</sup>* <sup>&</sup>gt; 0, *<sup>P</sup>* <sup>∈</sup> *IRn*×*n*, then the time rate of change of *<sup>v</sup>*(*q*(*t*)) along a solution of the system (75), (77) is

$$
\dot{\boldsymbol{v}}(\boldsymbol{q}(t)) = \dot{\boldsymbol{q}}^T(t)\mathbf{P}\boldsymbol{q}(t) + \boldsymbol{q}^T(t)\mathbf{P}\dot{\boldsymbol{q}}(t) < 0 \tag{88}
$$
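The stability test behind (87), (88) can be checked numerically by solving the Lyapunov equation A<sup>T</sup>P + PA = −Q for P; a numpy-only sketch via Kronecker vectorization, using an illustrative stable A (not data from the chapter):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # illustrative stable matrix, not from the text
n = A.shape[0]
Q = np.eye(n)

# vec(A^T P + P A) = (I kron A^T + A^T kron I) vec(P), with column-major vec.
M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
p_vec = np.linalg.solve(M, (-Q).flatten(order="F"))
P = p_vec.reshape((n, n), order="F")

# P = P^T > 0 certifies (87), and (88) holds since A^T P + P A = -Q < 0.
print(P)   # [[1.25 0.25], [0.25 0.25]]
```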

Considering the same form of *P* with respect to *K*, i.e.

$$\mathbf{P} = \begin{bmatrix} \mathbf{P}_{11} & \mathbf{P}_{12} & \dots & \mathbf{P}_{1p} \\ \mathbf{P}_{21} & \mathbf{P}_{22} & \dots & \mathbf{P}_{2p} \\ & \vdots & & \\ \mathbf{P}_{p1} & \mathbf{P}_{p2} & \dots & \mathbf{P}_{pp} \end{bmatrix}, \qquad \mathbf{P}_{hh} = \sum_{l=1,\, l \neq h}^{p} \mathbf{P}_{h}^{l} \tag{89}$$


then the next separation is possible

$$P = \begin{bmatrix} \mathbf{P}_1^2 & \mathbf{P}_{12} & \mathbf{0} & \dots & \mathbf{0} \\ \mathbf{P}_{21} & \mathbf{P}_2^1 & \mathbf{0} & \dots & \mathbf{0} \\ & & \vdots & & \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & \dots & \mathbf{0} \end{bmatrix} + \dots + \begin{bmatrix} \mathbf{P}_1^p & \mathbf{0} & \dots & \mathbf{0} & \mathbf{P}_{1p} \\ \mathbf{0} & \mathbf{0} & \dots & \mathbf{0} & \mathbf{0} \\ & & \vdots & & \\ \mathbf{P}_{p1} & \mathbf{0} & \dots & \mathbf{0} & \mathbf{P}_p^1 \end{bmatrix} + \dots + \begin{bmatrix} \mathbf{0} & \dots & \mathbf{0} & \mathbf{0} \\ & \vdots & & \\ \mathbf{0} & \dots & \mathbf{P}_{p-1}^p & \mathbf{P}_{p-1,p} \\ \mathbf{0} & \dots & \mathbf{P}_{p,p-1} & \mathbf{P}_p^{p-1} \end{bmatrix} \tag{90}$$

Writing (78) as

$$\dot{\boldsymbol{q}}_{hk}(t) = \begin{bmatrix} \mathbf{A}_{hh} & \mathbf{A}_{hk} \\ \mathbf{A}_{kh} & \mathbf{A}_{kk} \end{bmatrix} \boldsymbol{q}_{hk}(t) + \sum_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \mathbf{A}_{hl} \\ \mathbf{A}_{kl} \end{bmatrix} \boldsymbol{q}_{l}(t) + \begin{bmatrix} \mathbf{B}_{h} & \mathbf{0} \\ \mathbf{0} & \mathbf{B}_{k} \end{bmatrix} \begin{bmatrix} \boldsymbol{u}_{h}(t) \\ \boldsymbol{u}_{k}(t) \end{bmatrix} \tag{91}$$

and considering that for the unforced system *u*<sub>l</sub>(*t*) = **0**, *l* = 1, . . . , *p*, then (91) implies (85). Subsequently, with (90), (91), the inequality (88) implies (84).
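The separation (90) just embeds each pairwise matrix P°<sub>hk</sub> of (83) into the global P at the rows and columns of subsystems h and k; a sketch of the accumulation, assuming scalar subsystems purely for illustration:

```python
import numpy as np

p = 3                            # number of subsystems (illustrative)
rng = np.random.default_rng(0)
P = np.zeros((p, p))

for h in range(p - 1):
    for k in range(h + 1, p):
        blk = rng.standard_normal((2, 2))
        blk = blk + blk.T        # a symmetric pairwise block, as in (83)
        idx = np.ix_([h, k], [h, k])
        P[idx] += blk            # embed at (h, k) and accumulate, as in (90)

print(np.allclose(P, P.T))       # True: the assembled P stays symmetric
```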

#### **5.2 Pairwise system description**

Supposing that the partitioned structure of *K* defined in (81), (82) exists, it yields

$$\begin{aligned} \boldsymbol{\mathfrak{u}}\_{h}(t) &= -\sum\_{\substack{l=1,\,l\neq h}}^{p} \left[ \mathbf{K}\_{h}^{l} \, \mathbf{K}\_{hl} \right] \begin{bmatrix} \boldsymbol{q}\_{h}(t) \\ \boldsymbol{q}\_{l}(t) \end{bmatrix} = \\ &= -\left[ \mathbf{K}\_{h}^{k} \, \mathbf{K}\_{hk} \right] \begin{bmatrix} \boldsymbol{q}\_{h}(t) \\ \boldsymbol{q}\_{k}(t) \end{bmatrix} - \sum\_{\substack{l=1,\,l\neq h,k}}^{p} \left[ \mathbf{K}\_{h}^{l} \, \mathbf{K}\_{hl} \right] \begin{bmatrix} \boldsymbol{q}\_{h}(t) \\ \boldsymbol{q}\_{l}(t) \end{bmatrix} = \mathbf{u}\_{h}^{k}(t) + \sum\_{\substack{l=1,\,l\neq h,k}}^{p} \mathbf{u}\_{h}^{l}(t) \end{aligned} \tag{92}$$

where for *l* = 1, 2, . . . , *p*, *l* ≠ *h*, *k*

$$\boldsymbol{u}_{h}^{l}(t) = -\begin{bmatrix} \mathbf{K}_{h}^{l} & \mathbf{K}_{hl} \end{bmatrix} \begin{bmatrix} \boldsymbol{q}_{h}(t) \\ \boldsymbol{q}_{l}(t) \end{bmatrix} \tag{93}$$

Defining with *h* = 1, 2 . . . , *p* − 1, *k* = *h* + 1, *h* + 2 ... , *p*

$$
\begin{bmatrix} \boldsymbol{u}_{h}^{k}(t) \\ \boldsymbol{u}_{k}^{h}(t) \end{bmatrix} = - \begin{bmatrix} \mathbf{K}_{h}^{k} & \mathbf{K}_{hk} \\ \mathbf{K}_{kh} & \mathbf{K}_{k}^{h} \end{bmatrix} \begin{bmatrix} \boldsymbol{q}_{h}(t) \\ \boldsymbol{q}_{k}(t) \end{bmatrix} = -\mathbf{K}_{hk}^{\diamond} \begin{bmatrix} \boldsymbol{q}_{h}(t) \\ \boldsymbol{q}_{k}(t) \end{bmatrix} \tag{94}$$

$$\mathbf{K}\_{hk}^{\diamond} = \begin{bmatrix} \mathbf{K}\_h^k & \mathbf{K}\_{hk} \\ \mathbf{K}\_{kh} & \mathbf{K}\_k^h \end{bmatrix} \tag{95}$$
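Assembled per (95), the pairwise gain acts through (94); a toy numpy illustration (the block values are hypothetical, with scalar subsystems for brevity):

```python
import numpy as np

# Hypothetical gain blocks of the pair (h, k).
K_h_k, K_hk = 2.0, 1.0    # K_h^k and K_hk
K_kh, K_k_h = 0.5, 3.0    # K_kh and K_k^h

# (95): assemble the pairwise gain; (94): [u_h^k; u_k^h] = -K_hk [q_h; q_k]
K_pair = np.array([[K_h_k, K_hk],
                   [K_kh,  K_k_h]])
q_hk = np.array([1.0, -1.0])
u = -K_pair @ q_hk
print(u)   # [-1.   2.5]
```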

and combining (92) for *h* and *k* it is obtained

$$
\begin{bmatrix} \boldsymbol{u}_{h}(t) \\ \boldsymbol{u}_{k}(t) \end{bmatrix} = -\begin{bmatrix} \mathbf{K}_{h}^{k} & \mathbf{K}_{hk} \\ \mathbf{K}_{kh} & \mathbf{K}_{k}^{h} \end{bmatrix} \begin{bmatrix} \boldsymbol{q}_{h}(t) \\ \boldsymbol{q}_{k}(t) \end{bmatrix} - \begin{bmatrix} \sum\limits_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \mathbf{K}_{h}^{l} & \mathbf{K}_{hl} \end{bmatrix} \begin{bmatrix} \boldsymbol{q}_{h}(t) \\ \boldsymbol{q}_{l}(t) \end{bmatrix} \\ \sum\limits_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \mathbf{K}_{k}^{l} & \mathbf{K}_{kl} \end{bmatrix} \begin{bmatrix} \boldsymbol{q}_{k}(t) \\ \boldsymbol{q}_{l}(t) \end{bmatrix} \end{bmatrix} \tag{96}
$$

$$
\begin{bmatrix} \boldsymbol{u}_{h}(t) \\ \boldsymbol{u}_{k}(t) \end{bmatrix} = \begin{bmatrix} \boldsymbol{u}_{h}^{k}(t) \\ \boldsymbol{u}_{k}^{h}(t) \end{bmatrix} + \sum_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \boldsymbol{u}_{h}^{l}(t) \\ \boldsymbol{u}_{k}^{l}(t) \end{bmatrix} \tag{97}
$$

respectively. Then substituting (97) in (91) gives

$$\dot{\boldsymbol{q}}_{hk}(t) = \left( \begin{bmatrix} \mathbf{A}_{hh} & \mathbf{A}_{hk} \\ \mathbf{A}_{kh} & \mathbf{A}_{kk} \end{bmatrix} - \begin{bmatrix} \mathbf{B}_{h} & \mathbf{0} \\ \mathbf{0} & \mathbf{B}_{k} \end{bmatrix} \begin{bmatrix} \mathbf{K}_{h}^{k} & \mathbf{K}_{hk} \\ \mathbf{K}_{kh} & \mathbf{K}_{k}^{h} \end{bmatrix} \right) \boldsymbol{q}_{hk}(t) + \sum_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \mathbf{B}_{h}\boldsymbol{u}_{h}^{l}(t) + \mathbf{A}_{hl}\boldsymbol{q}_{l}(t) \\ \mathbf{B}_{k}\boldsymbol{u}_{k}^{l}(t) + \mathbf{A}_{kl}\boldsymbol{q}_{l}(t) \end{bmatrix} \tag{98}$$

Using the next notations

$$\mathbf{A}_{hkc}^{\diamond} = \begin{bmatrix} \mathbf{A}_{hh} & \mathbf{A}_{hk} \\ \mathbf{A}_{kh} & \mathbf{A}_{kk} \end{bmatrix} - \begin{bmatrix} \mathbf{B}_{h} & \mathbf{0} \\ \mathbf{0} & \mathbf{B}_{k} \end{bmatrix} \begin{bmatrix} \mathbf{K}_{h}^{k} & \mathbf{K}_{hk} \\ \mathbf{K}_{kh} & \mathbf{K}_{k}^{h} \end{bmatrix} = \mathbf{A}_{hk}^{\diamond} - \mathbf{B}_{hk}^{\diamond}\mathbf{K}_{hk}^{\diamond} \tag{99}$$

$$\boldsymbol{\omega}_{hk}^{\diamond}(t) = \sum_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \mathbf{B}_{h}\boldsymbol{u}_{h}^{l}(t) + \mathbf{A}_{hl}\boldsymbol{q}_{l}(t) \\ \mathbf{B}_{k}\boldsymbol{u}_{k}^{l}(t) + \mathbf{A}_{kl}\boldsymbol{q}_{l}(t) \end{bmatrix} = \sum_{l=1,\, l \neq h,k}^{p} \left( \mathbf{B}_{hk}^{\diamond} \begin{bmatrix} \boldsymbol{u}_{h}^{l}(t) \\ \boldsymbol{u}_{k}^{l}(t) \end{bmatrix} + \begin{bmatrix} \mathbf{A}_{hl} \\ \mathbf{A}_{kl} \end{bmatrix} \boldsymbol{q}_{l}(t) \right) = \mathbf{B}_{hk}^{\diamond}\boldsymbol{\omega}_{hk}(t) + \sum_{l=1,\, l \neq h,k}^{p} \mathbf{A}_{hk}^{l\diamond}\boldsymbol{q}_{l}(t) \tag{100}$$

where


$$\boldsymbol{\omega}_{hk}(t) = \sum_{l=1,\, l \neq h,k}^{p} \begin{bmatrix} \boldsymbol{u}_{h}^{l}(t) \\ \boldsymbol{u}_{k}^{l}(t) \end{bmatrix}, \qquad \mathbf{A}_{hk}^{l\diamond} = \begin{bmatrix} \mathbf{A}_{hl} \\ \mathbf{A}_{kl} \end{bmatrix} \tag{101}$$

$$\mathbf{A}_{hk}^{\diamond} = \begin{bmatrix} \mathbf{A}_{hh} & \mathbf{A}_{hk} \\ \mathbf{A}_{kh} & \mathbf{A}_{kk} \end{bmatrix}, \quad \mathbf{B}_{hk}^{\diamond} = \begin{bmatrix} \mathbf{B}_{h} & \mathbf{0} \\ \mathbf{0} & \mathbf{B}_{k} \end{bmatrix}, \quad \mathbf{K}_{hk}^{\diamond} = \begin{bmatrix} \mathbf{K}_{h}^{k} & \mathbf{K}_{hk} \\ \mathbf{K}_{kh} & \mathbf{K}_{k}^{h} \end{bmatrix} \tag{102}$$

(98) can be written as

$$\dot{\boldsymbol{q}}\_{\rm hk}(t) = \boldsymbol{A}\_{\rm hkc}^{\diamond} \boldsymbol{q}\_{\rm hk}(t) + \sum\_{l=1, l \neq h, k}^{p} \boldsymbol{A}\_{\rm hk}^{l\diamond} \boldsymbol{q}\_{l}(t) + \boldsymbol{B}\_{\rm hk}^{\diamond} \boldsymbol{\omega}\_{\rm hk}(t) \tag{103}$$

where *ωhk*(*t*) can be considered as a generalized auxiliary disturbance acting on the pair *h*, *k* of the subsystems.

On the other hand, if

$$\mathbf{C}_{hh} = \sum_{l=1,\, l \neq h}^{p} \mathbf{C}_{h}^{l}, \quad \mathbf{C}_{hk}^{\diamond} = \begin{bmatrix} \mathbf{C}_{h}^{k} & \mathbf{C}_{hk} \\ \mathbf{C}_{kh} & \mathbf{C}_{k}^{h} \end{bmatrix}, \quad \mathbf{C}_{hk}^{l\diamond} = \begin{bmatrix} \mathbf{C}_{hl} \\ \mathbf{C}_{kl} \end{bmatrix} \tag{104}$$

then

$$\mathbf{y}(t) = \sum_{h=1}^{p-1} \sum_{k=h+1}^{p} \left( \mathbf{C}_{hk}^{\diamond}\boldsymbol{q}_{hk}(t) + \sum_{l=1,\, l \neq h}^{p} \mathbf{C}_{h}^{l}\boldsymbol{q}_{l}(t) \right) \tag{105}$$

$$\mathbf{y}_{hk}(t) = \mathbf{C}_{hk}^{\diamond}\boldsymbol{q}_{hk}(t) + \sum_{l=1,\, l \neq h}^{p} \mathbf{C}_{hk}^{l\diamond}\boldsymbol{q}_{l}(t) + \mathbf{0}\,\boldsymbol{\omega}_{hk}(t) \tag{106}$$

Now, taking (103) and (106), the considered pair of controlled subsystems is fully described as

$$\dot{\boldsymbol{q}}\_{hk}(t) = \mathbf{A}\_{hk\mathbf{c}}^{\diamond} \boldsymbol{q}\_{hk}(t) + \sum\_{l=1, l \neq h, k}^{p} \mathbf{A}\_{hk}^{l\diamond} \boldsymbol{q}\_{l}(t) + \mathbf{B}\_{hk}^{\diamond} \boldsymbol{\omega}\_{hk}(t) \tag{107}$$

$$\mathbf{y}\_{hk}(t) = \mathbf{C}\_{hk}^{\diamond} \mathbf{q}\_{hk}(t) + \sum\_{\substack{l=1,\,l\neq h}}^{p} \mathbf{C}\_{hk}^{l\diamond} \mathbf{q}\_{l}(t) + \mathbf{0}\,\boldsymbol{\omega}\_{hk}(t) \tag{108}$$

Analogously, (106) can be rewritten as

$$\mathbf{y}_{hk}(t) = \mathbf{C}_{hk}^{\diamond}\boldsymbol{q}_{hk}(t) + \sum_{l=1,\, l \neq h,k}^{p} \mathbf{C}_{hk}^{l\diamond}\boldsymbol{q}_{l}(t) + \mathbf{0}\,\boldsymbol{\omega}_{hk}(t) = \mathbf{C}_{hk}^{\diamond}\boldsymbol{q}_{hk}(t) + \mathbf{D}_{hk}^{l\diamond}\boldsymbol{\omega}_{hk}^{l\diamond} \tag{117}$$

where

$$\mathbf{D}_{hk}^{l\diamond} = \begin{bmatrix} \{\mathbf{C}_{hk}^{l\diamond}\}_{l=1,\, l \neq h,k}^{p} & \mathbf{0} \end{bmatrix} \tag{118}$$

Therefore, defining

$$\boldsymbol{\Gamma}_{hk}^{\diamond} = \mathrm{diag}\begin{bmatrix} \{\varepsilon_{hkl}\mathbf{I}_{n_l}\}_{l=1,\, l \neq h,k}^{p} & \gamma_{hk}\mathbf{I}_{(r_h+r_k)} \end{bmatrix} \tag{119}$$

and inserting them appropriately into (57), (58), then (109), (110) are obtained.

**Illustrative example**

To demonstrate the properties of this approach, a simple system with four inputs and four outputs is used in the example. The parameters of (75)-(77) are

$$\mathbf{A} = \begin{bmatrix} 3 & 1 & 2 & -1 \\ -1 & 2 & 0 & 1 \\ 1 & -1 & 1 & 3 \\ 1 & -2 & -2 & 2 \end{bmatrix}, \quad \mathbf{B} = \mathrm{diag}\begin{bmatrix} \mathbf{B}_i \end{bmatrix}, \quad \mathbf{B}_{hk} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \; h = 1, 2, 3, \; k = 2, 3, 4, \; h < k$$

To solve this problem, the next separations were done

$$\mathbf{A}_{12}^{\diamond} = \begin{bmatrix} 3 & 1 \\ -1 & 2 \end{bmatrix}, \; \mathbf{A}_{12}^{3\diamond} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \; \mathbf{A}_{12}^{4\diamond} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \; \mathbf{C}_{12}^{\diamond} = \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}, \; \mathbf{C}_{12}^{3\diamond} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \; \mathbf{C}_{12}^{4\diamond} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

$$\mathbf{A}_{13}^{\diamond} = \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}, \; \mathbf{A}_{13}^{2\diamond} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \; \mathbf{A}_{13}^{4\diamond} = \begin{bmatrix} -1 \\ 3 \end{bmatrix}, \; \mathbf{C}_{13}^{\diamond} = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}, \; \mathbf{C}_{13}^{2\diamond} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}, \; \mathbf{C}_{13}^{4\diamond} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

$$\mathbf{A}_{14}^{\diamond} = \begin{bmatrix} 3 & -1 \\ 1 & 2 \end{bmatrix}, \; \mathbf{A}_{14}^{2\diamond} = \begin{bmatrix} 1 \\ -2 \end{bmatrix}, \; \mathbf{A}_{14}^{3\diamond} = \begin{bmatrix} 2 \\ -2 \end{bmatrix}, \; \mathbf{C}_{14}^{\diamond} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \; \mathbf{C}_{14}^{2\diamond} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \; \mathbf{C}_{14}^{3\diamond} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$$

$$\mathbf{A}_{23}^{\diamond} = \begin{bmatrix} 2 & 0 \\ -1 & 1 \end{bmatrix}, \; \mathbf{A}_{23}^{1\diamond} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \; \mathbf{A}_{23}^{4\diamond} = \begin{bmatrix} 1 \\ 3 \end{bmatrix}, \; \mathbf{C}_{23}^{\diamond} = \begin{bmatrix} 2 & 1 \\ -1 & 1 \end{bmatrix}, \; \mathbf{C}_{23}^{1\diamond} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}, \; \mathbf{C}_{23}^{4\diamond} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

$$\mathbf{A}_{24}^{\diamond} = \begin{bmatrix} 2 & 1 \\ -2 & 2 \end{bmatrix}, \; \mathbf{A}_{24}^{1\diamond} = \begin{bmatrix} -1 \\ 1 \end{bmatrix}, \; \mathbf{A}_{24}^{3\diamond} = \begin{bmatrix} 0 \\ -2 \end{bmatrix}, \; \mathbf{C}_{24}^{\diamond} = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \; \mathbf{C}_{24}^{1\diamond} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \; \mathbf{C}_{24}^{3\diamond} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

$$\mathbf{A}_{34}^{\diamond} = \begin{bmatrix} 1 & 3 \\ -2 & 2 \end{bmatrix}, \; \mathbf{A}_{34}^{1\diamond} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \; \mathbf{A}_{34}^{2\diamond} = \begin{bmatrix} -1 \\ -2 \end{bmatrix}, \; \mathbf{C}_{34}^{\diamond} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, \; \mathbf{C}_{34}^{1\diamond} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \; \mathbf{C}_{34}^{2\diamond} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$$

Solving e.g. with respect to $\mathbf{X}_{23}^{\diamond}$, $\mathbf{Y}_{23}^{\diamond}$, $\mathbf{Z}_{23}^{\diamond}$, $\varepsilon_{231}$, $\varepsilon_{234}$, $\gamma_{23}$, it means to rewrite (109)-(111) as

$$\mathbf{X}_{23}^{\diamond} = \mathbf{X}_{23}^{\diamond T} > 0, \quad \varepsilon_{231} > 0, \quad \varepsilon_{234} > 0, \quad \gamma_{23} > 0$$

$$\begin{bmatrix}
\boldsymbol{\Phi}_{23}^{\diamond} & \mathbf{A}_{23}^{1\diamond} & \mathbf{A}_{23}^{4\diamond} & \mathbf{B}_{23}^{\diamond} & \mathbf{X}_{23}^{\diamond}\mathbf{A}_{23}^{\diamond T} - \mathbf{Y}_{23}^{\diamond T}\mathbf{B}_{23}^{\diamond T} & \mathbf{X}_{23}^{\diamond}\mathbf{C}_{23}^{\diamond T} \\
* & -\varepsilon_{231} & \mathbf{0} & \mathbf{0} & \mathbf{A}_{23}^{1\diamond T} & \mathbf{C}_{23}^{1\diamond T} \\
* & * & -\varepsilon_{234} & \mathbf{0} & \mathbf{A}_{23}^{4\diamond T} & \mathbf{C}_{23}^{4\diamond T} \\
* & * & * & -\gamma_{23}\mathbf{I}_2 & \mathbf{B}_{23}^{\diamond T} & \mathbf{0} \\
* & * & * & * & -\mathbf{Z}_{23}^{\diamond} - \mathbf{Z}_{23}^{\diamond T} & \mathbf{0} \\
* & * & * & * & * & -\mathbf{I}_2
\end{bmatrix} < 0$$

with

$$\boldsymbol{\Phi}_{23}^{\diamond} = \mathbf{X}_{23}^{\diamond}\mathbf{A}_{23}^{\diamond T} + \mathbf{A}_{23}^{\diamond}\mathbf{X}_{23}^{\diamond} - \mathbf{B}_{23}^{\diamond}\mathbf{Y}_{23}^{\diamond} - \mathbf{Y}_{23}^{\diamond T}\mathbf{B}_{23}^{\diamond T}$$
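The pairwise separations of the example are plain submatrix selections from the global A; a numpy check, assuming the 4×4 A of this example and scalar subsystems:

```python
import numpy as np

A = np.array([[ 3.0,  1.0,  2.0, -1.0],
              [-1.0,  2.0,  0.0,  1.0],
              [ 1.0, -1.0,  1.0,  3.0],
              [ 1.0, -2.0, -2.0,  2.0]])

def pair(M, h, k):
    """The pair block: rows and columns of subsystems h and k (1-based)."""
    return M[np.ix_([h - 1, k - 1], [h - 1, k - 1])]

def coupling(M, h, k, l):
    """[A_hl; A_kl]: the column coupling subsystem l into the pair (h, k)."""
    return M[np.ix_([h - 1, k - 1], [l - 1])]

print(pair(A, 2, 3))          # the (2,3) pair block listed above
print(coupling(A, 2, 3, 4))   # the coupling of subsystem 4 into that pair
```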

### **5.3 Controller parameter design**

**Theorem 7.** *The subsystem pair (91) in the system (75), (77), controlled by the control law (97), is stable with the quadratic performances* $\|\mathbf{C}_{hk}^{\diamond}(s\mathbf{I}-\mathbf{A}_{hkc}^{\diamond})^{-1}\mathbf{B}_{hk}^{\diamond}\|_{\infty}^{2} \leq \gamma_{hk}$, $\|\mathbf{C}_{hk}^{l\diamond}(s\mathbf{I}-\mathbf{A}_{hkc}^{\diamond})^{-1}\mathbf{B}_{hk}^{l\diamond}\|_{\infty}^{2} \leq \varepsilon_{hkl}$ *if for* $h = 1, 2, \ldots, p-1$, $k = h+1, h+2, \ldots, p$, $l = 1, 2, \ldots, p$, $l \neq h, k$, *there exist a symmetric positive definite matrix* $\mathbf{X}_{hk}^{\diamond} \in \mathrm{IR}^{(n_h+n_k)\times(n_h+n_k)}$*, matrices* $\mathbf{Z}_{hk}^{\diamond} \in \mathrm{IR}^{(n_h+n_k)\times(n_h+n_k)}$, $\mathbf{Y}_{hk}^{\diamond} \in \mathrm{IR}^{(r_h+r_k)\times(n_h+n_k)}$*, and positive scalars* $\gamma_{hk}, \varepsilon_{hkl} \in \mathrm{IR}$ *such that*

$$\mathbf{X}_{hk}^{\diamond} = \mathbf{X}_{hk}^{\diamond T} > 0, \quad \varepsilon_{hkl} > 0, \quad \gamma_{hk} > 0, \quad h, l = 1, \ldots, p,\; l \neq h, k,\; h < k \leq p \tag{109}$$

$$\begin{bmatrix}
\boldsymbol{\Phi}_{hk}^{\diamond} & \mathbf{A}_{hk}^{1\diamond} & \cdots & \mathbf{A}_{hk}^{p\diamond} & \mathbf{B}_{hk}^{\diamond} & \mathbf{X}_{hk}^{\diamond}\mathbf{A}_{hk}^{\diamond T} - \mathbf{Y}_{hk}^{\diamond T}\mathbf{B}_{hk}^{\diamond T} & \mathbf{X}_{hk}^{\diamond}\mathbf{C}_{hk}^{\diamond T} \\
* & -\varepsilon_{hk1}\mathbf{I}_{n_1} & \cdots & \mathbf{0} & \mathbf{0} & \mathbf{A}_{hk}^{1\diamond T} & \mathbf{C}_{hk}^{1\diamond T} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\
* & * & \cdots & -\varepsilon_{hkp}\mathbf{I}_{n_p} & \mathbf{0} & \mathbf{A}_{hk}^{p\diamond T} & \mathbf{C}_{hk}^{p\diamond T} \\
* & * & \cdots & * & -\gamma_{hk}\mathbf{I}_{(r_h+r_k)} & \mathbf{B}_{hk}^{\diamond T} & \mathbf{0} \\
* & * & \cdots & * & * & -\mathbf{Z}_{hk}^{\diamond} - \mathbf{Z}_{hk}^{\diamond T} & \mathbf{0} \\
* & * & \cdots & * & * & * & -\mathbf{I}_{(m_h+m_k)}
\end{bmatrix} < 0 \tag{110}$$

*where* $\mathbf{A}_{hk}^{\diamond}$, $\mathbf{B}_{hk}^{\diamond}$, $\mathbf{A}_{hk}^{l\diamond}$, $\mathbf{C}_{hk}^{\diamond}$, $\mathbf{C}_{hk}^{l\diamond}$ *are defined in (99), (101), (104), respectively,*

$$\boldsymbol{\Phi}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond} = \mathbf{X}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond}\boldsymbol{A}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond\top} + \mathbf{A}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond}\mathbf{X}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond} - \mathbf{B}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond}\mathbf{Y}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond} - \mathbf{Y}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond\top}\mathbf{B}\_{\boldsymbol{h}\boldsymbol{k}}^{\diamond\top} \tag{111}$$

*and where* $\mathbf{A}_{hk}^{h\diamond}$, $\mathbf{A}_{hk}^{k\diamond}$*, as well as* $\mathbf{C}_{hk}^{h\diamond}$, $\mathbf{C}_{hk}^{k\diamond}$*, are not included in the structure of (110). Then* $\mathbf{K}_{hk}^{\diamond}$ *is given as*

$$\mathbf{K}\_{hk}^{\diamond} = \mathbf{Y}\_{hk}^{\diamond} \mathbf{X}\_{hk}^{\diamond -1} \tag{112}$$

Note that, using the above principle based on the pairwise decentralized design of control, the global system is stable. The proof can be found in Filasová & Krokavec (2011).

*Proof.* Considering $\omega_{hk}^{\diamond}(t)$ given in (100) as a generalized input into the subsystem pair (107), (108), and then using (83)–(86) and (107), it can be written

$$\sum\_{h=1}^{p-1} \sum\_{k=h+1}^{p} \left( \dot{\boldsymbol{q}}\_{hk}^T(t) \mathbf{P}\_{hk}^\circ \boldsymbol{q}\_{hk}(t) + \boldsymbol{q}\_{hk}^T(t) \mathbf{P}\_{hk}^\circ \dot{\boldsymbol{q}}\_{hk}(t) \right) < 0 \tag{113}$$

$$\sum\_{h=1}^{p-1} \sum\_{k=h+1}^{p} \begin{pmatrix} \left(\mathbf{A}\_{hk\mathbf{c}}^{\diamond} \boldsymbol{q}\_{hk}(t) + \sum\_{\substack{l=1,\,l\neq h,k}}^{p} \mathbf{A}\_{hk}^{l\diamond} \boldsymbol{q}\_{l}(t) + \mathbf{B}\_{hk}^{\diamond} \boldsymbol{\omega}\_{hk}(t)\right)^{T} \mathbf{P}\_{hk}^{\diamond} \boldsymbol{q}\_{hk}(t) + \\\ + \boldsymbol{q}\_{hk}^{T}(t) \mathbf{P}\_{hk}^{\diamond} \left(\mathbf{A}\_{hk\mathbf{c}}^{\diamond} \boldsymbol{q}\_{hk}(t) + \sum\_{\substack{l=1,\,l\neq h,k}}^{p} \mathbf{A}\_{hk}^{l\diamond} \boldsymbol{q}\_{l}(t) + \mathbf{B}\_{hk}^{\diamond} \boldsymbol{\omega}\_{hk}(t)\right) \end{pmatrix} < 0 \tag{114}$$

respectively. Introducing the notations

$$\mathbf{B}\_{\mathrm{hk}}^{l\diamond} = \left[ \left\{ \mathbf{A}\_{\mathrm{hk}}^{l\diamond} \right\}\_{l=1, \, l \neq \mathtt{h}, \mathbf{k}}^{p} \mathbf{B}\_{\mathrm{hk}}^{\diamond} \right], \qquad \boldsymbol{\omega}\_{\mathrm{hk}}^{l\diamond T} = \left[ \left\{ \mathbf{q}\_{l}^{T} \right\}\_{l=1, \, l \neq \mathtt{h}, \mathbf{k}}^{p} \boldsymbol{\omega}\_{\mathrm{hk}}^{T} (t) \right] \tag{115}$$

(114) can be written as

$$\sum\_{h=1}^{p-1} \sum\_{k=h+1}^{p} \left( (\mathbf{A}\_{hkc}^{\diamond} \boldsymbol{q}\_{hk}(\mathbf{t}) + \mathbf{B}\_{hk}^{l\diamond} \boldsymbol{\omega}\_{hk}^{l\diamond})^T \mathbf{P}\_{hk}^{\diamond} \boldsymbol{q}\_{hk}(\mathbf{t}) + \mathbf{q}\_{hk}^T(\mathbf{t}) \mathbf{P}\_{hk}^{\diamond} (\mathbf{A}\_{hkc}^{\diamond} \boldsymbol{q}\_{hk}(\mathbf{t}) + \mathbf{B}\_{hk}^{l\diamond} \boldsymbol{\omega}\_{hk}^{l\diamond}) \right) < 0 \tag{116}$$

Analogously, (106) can be rewritten as

$$\mathbf{y}_{hk}(t) = \mathbf{C}_{hk}^{\diamond}\mathbf{q}_{hk}(t) + \sum_{\substack{l=1,\;l\neq h,k}}^{p} \mathbf{C}_{hk}^{l\diamond}\mathbf{q}_{l}(t) + \mathbf{0}\,\boldsymbol{\omega}_{hk}(t) = \mathbf{C}_{hk}^{\diamond}\mathbf{q}_{hk}(t) + \mathbf{D}_{hk}^{l\diamond}\boldsymbol{\omega}_{hk}^{l\diamond} \tag{117}$$

where

$$\mathbf{D}_{hk}^{l\diamond} = \left[ \left\{ \mathbf{C}_{hk}^{l\diamond} \right\}_{l=1,\ l \neq h,k}^{p} \;\; \mathbf{0} \right] \tag{118}$$

Therefore, defining

$$\mathbf{T}_{hk}^{\diamond} = \mathrm{diag}\left[ \left\{ \varepsilon_{hkl} \mathbf{I}_{n_l} \right\}_{l=1,\, l \neq h,k}^{p} \;\; \gamma_{hk} \mathbf{I}_{(r_h + r_k)} \right] \tag{119}$$

and inserting these appropriately into (57), (58), then (109), (110) are obtained.
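As a quick structural illustration, the block-diagonal scaling matrix (119) can be assembled mechanically; a minimal numpy sketch for the pair (2, 3) of the example below, with placeholder values for $\varepsilon_{hkl}$ and $\gamma_{hk}$ (they are not solver output):

```python
import numpy as np

def t_block(eps, gamma, n_sizes, r_size):
    """Assemble the block-diagonal matrix of (119):
    T = diag(eps_l * I_{n_l}, ..., gamma * I_r)."""
    blocks = [e * np.eye(n) for e, n in zip(eps, n_sizes)] + [gamma * np.eye(r_size)]
    dim = sum(n_sizes) + r_size
    T = np.zeros((dim, dim))
    i = 0
    for b in blocks:
        k = b.shape[0]
        T[i:i+k, i:i+k] = b
        i += k
    return T

# Pair (2,3): the remaining subsystems l = 1 and l = 4 have n_1 = n_4 = 1,
# and r_2 + r_3 = 2.  The eps/gamma values here are placeholders.
T23 = t_block(eps=[2.0, 3.0], gamma=5.0, n_sizes=[1, 1], r_size=2)
# 4x4 block diagonal: diag(2, 3, 5, 5)
```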

### **Illustrative example**

To demonstrate the properties of this approach, a simple system with four inputs and four outputs is used in the example. The parameters of (75)–(77) are

$$A = \begin{bmatrix} 3 & 1 & 2 & -1 \\ -1 & 2 & 0 & 1 \\ 1 & -1 & 1 & 3 \\ 1 & -2 & -2 & 2 \end{bmatrix}, \quad C = \begin{bmatrix} 3 & 1 & 2 & 1 \\ 0 & 6 & 1 & 0 \\ 2 & -1 & 3 & 0 \\ 0 & 0 & 1 & 3 \end{bmatrix}, \quad B = \mathrm{diag}\begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}.$$

To solve this problem, the following separations were made

$$B_{hk} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad h = 1, 2, 3, \; k = 2, 3, 4, \; h < k$$

$$\begin{aligned}
A_{12}^{\diamond} &= \begin{bmatrix} 3 & 1 \\ -1 & 2 \end{bmatrix}, & A_{12}^{3\diamond} &= \begin{bmatrix} 2 \\ 0 \end{bmatrix}, & A_{12}^{4\diamond} &= \begin{bmatrix} -1 \\ 1 \end{bmatrix}, & C_{12}^{\diamond} &= \begin{bmatrix} 1 & 1 \\ 0 & 2 \end{bmatrix}, & C_{12}^{3\diamond} &= \begin{bmatrix} 2 \\ 1 \end{bmatrix}, & C_{12}^{4\diamond} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
A_{13}^{\diamond} &= \begin{bmatrix} 3 & 2 \\ 1 & 1 \end{bmatrix}, & A_{13}^{2\diamond} &= \begin{bmatrix} 1 \\ -1 \end{bmatrix}, & A_{13}^{4\diamond} &= \begin{bmatrix} -1 \\ 3 \end{bmatrix}, & C_{13}^{\diamond} &= \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}, & C_{13}^{2\diamond} &= \begin{bmatrix} 1 \\ -1 \end{bmatrix}, & C_{13}^{4\diamond} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix} \\
A_{14}^{\diamond} &= \begin{bmatrix} 3 & -1 \\ 1 & 2 \end{bmatrix}, & A_{14}^{2\diamond} &= \begin{bmatrix} 1 \\ -2 \end{bmatrix}, & A_{14}^{3\diamond} &= \begin{bmatrix} 2 \\ -2 \end{bmatrix}, & C_{14}^{\diamond} &= \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, & C_{14}^{2\diamond} &= \begin{bmatrix} 1 \\ 0 \end{bmatrix}, & C_{14}^{3\diamond} &= \begin{bmatrix} 2 \\ 1 \end{bmatrix} \\
A_{23}^{\diamond} &= \begin{bmatrix} 2 & 0 \\ -1 & 1 \end{bmatrix}, & A_{23}^{1\diamond} &= \begin{bmatrix} -1 \\ 1 \end{bmatrix}, & A_{23}^{4\diamond} &= \begin{bmatrix} 1 \\ 3 \end{bmatrix}, & C_{23}^{\diamond} &= \begin{bmatrix} 2 & 1 \\ -1 & 1 \end{bmatrix}, & C_{23}^{1\diamond} &= \begin{bmatrix} 0 \\ 2 \end{bmatrix}, & C_{23}^{4\diamond} &= \begin{bmatrix} 0 \\ 0 \end{bmatrix} \\
A_{24}^{\diamond} &= \begin{bmatrix} 2 & 1 \\ -2 & 2 \end{bmatrix}, & A_{24}^{1\diamond} &= \begin{bmatrix} -1 \\ 1 \end{bmatrix}, & A_{24}^{3\diamond} &= \begin{bmatrix} 0 \\ -2 \end{bmatrix}, & C_{24}^{\diamond} &= \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, & C_{24}^{1\diamond} &= \begin{bmatrix} 0 \\ 0 \end{bmatrix}, & C_{24}^{3\diamond} &= \begin{bmatrix} 1 \\ 1 \end{bmatrix} \\
A_{34}^{\diamond} &= \begin{bmatrix} 1 & 3 \\ -2 & 2 \end{bmatrix}, & A_{34}^{1\diamond} &= \begin{bmatrix} 1 \\ 1 \end{bmatrix}, & A_{34}^{2\diamond} &= \begin{bmatrix} -1 \\ -2 \end{bmatrix}, & C_{34}^{\diamond} &= \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}, & C_{34}^{1\diamond} &= \begin{bmatrix} 2 \\ 0 \end{bmatrix}, & C_{34}^{2\diamond} &= \begin{bmatrix} -1 \\ 0 \end{bmatrix}
\end{aligned}$$
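The pairwise $A$ blocks can be read off directly as submatrices of $A$: $A^{\diamond}_{hk}$ takes rows and columns $h, k$, and each interaction block $A^{l\diamond}_{hk}$ takes column $l$ of rows $h, k$. A small numpy sketch of this extraction (note that the diagonal $C^{\diamond}_{hk}$ blocks listed above are not plain submatrices of $C$ — the diagonal of $C$ is split across the pairs — so only the $A$ blocks are reproduced here):

```python
import numpy as np

A = np.array([[ 3,  1,  2, -1],
              [-1,  2,  0,  1],
              [ 1, -1,  1,  3],
              [ 1, -2, -2,  2]], dtype=float)

def pair_blocks(A, h, k):
    """Extract the pairwise blocks: A_hk (rows/cols h, k) and the
    interaction columns A^l_hk, l != h, k.  h, k are 1-based as in
    the chapter."""
    idx = [h - 1, k - 1]
    Ahk = A[np.ix_(idx, idx)]
    inter = {l: A[idx, l - 1].reshape(2, 1)
             for l in range(1, A.shape[0] + 1) if l not in (h, k)}
    return Ahk, inter

A12, inter12 = pair_blocks(A, 1, 2)
# A12 reproduces A°_12 = [[3, 1], [-1, 2]];
# inter12[3], inter12[4] reproduce A^{3°}_12 = [2; 0] and A^{4°}_12 = [-1; 1].
```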

Solving, e.g., with respect to $X_{23}^{\diamond}$, $Y_{23}^{\diamond}$, $Z_{23}^{\diamond}$, $\varepsilon_{231}$, $\varepsilon_{234}$, $\gamma_{23}$ means rewriting (109)–(111) as

$$X_{23}^{\diamond} = X_{23}^{\diamond T} > 0, \quad \varepsilon_{231} > 0, \quad \varepsilon_{234} > 0, \quad \gamma_{23} > 0$$

$$\begin{bmatrix}
\Phi_{23}^{\diamond} & A_{23}^{1\diamond} & A_{23}^{4\diamond} & B_{23}^{\diamond} & X_{23}^{\diamond}A_{23}^{\diamond T}-Y_{23}^{\diamond T}B_{23}^{\diamond T} & X_{23}^{\diamond}C_{23}^{\diamond T} \\
* & -\varepsilon_{231}I_1 & 0 & \mathbf{0} & A_{23}^{1\diamond T} & C_{23}^{1\diamond T} \\
* & * & -\varepsilon_{234}I_1 & \mathbf{0} & A_{23}^{4\diamond T} & C_{23}^{4\diamond T} \\
* & * & * & -\gamma_{23}I_2 & B_{23}^{\diamond T} & \mathbf{0} \\
* & * & * & * & -Z_{23}^{\diamond}-Z_{23}^{\diamond T} & \mathbf{0} \\
* & * & * & * & * & -I_2
\end{bmatrix} < 0$$

Partially Decentralized Design Principle in Large-Scale System Control 385
where

$$\Phi_{23}^{\diamond} = X_{23}^{\diamond}A_{23}^{\diamond T} + A_{23}^{\diamond}X_{23}^{\diamond} - B_{23}^{\diamond}Y_{23}^{\diamond} - Y_{23}^{\diamond T}B_{23}^{\diamond T}$$
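The left-hand side of this LMI can be assembled numerically before handing it to a semidefinite solver; a numpy sketch of the block structure for the (2, 3) pair, using placeholder values for the decision variables $X$, $Y$, $Z$, $\varepsilon$, $\gamma$ (it only illustrates the structure, not a feasible point):

```python
import numpy as np

# Pair (2,3) data from the example (p = 4, n_1 = n_4 = 1).
A23  = np.array([[2., 0.], [-1., 1.]])
A123 = np.array([[-1.], [1.]])   # A^{1°}_23
A423 = np.array([[1.], [3.]])    # A^{4°}_23
B23  = np.eye(2)
C23  = np.array([[2., 1.], [-1., 1.]])
C123 = np.array([[0.], [2.]])
C423 = np.array([[0.], [0.]])

def lmi_110(X, Y, Z, e1, e4, g):
    """Left-hand side of (110) for the (2,3) pair: a symmetric
    10x10 block matrix (block heights 2,1,1,2,2,2)."""
    Phi = X @ A23.T + A23 @ X - B23 @ Y - Y.T @ B23.T
    Z2  = np.zeros((2, 2))
    z21 = np.zeros((2, 1))
    rows = [
        np.hstack([Phi, A123, A423, B23, X @ A23.T - Y.T @ B23.T, X @ C23.T]),
        np.hstack([A123.T, [[-e1]], [[0.]], z21.T, A123.T, C123.T]),
        np.hstack([A423.T, [[0.]], [[-e4]], z21.T, A423.T, C423.T]),
        np.hstack([B23.T, z21, z21, -g * np.eye(2), B23.T, Z2]),
        np.hstack([A23 @ X - B23 @ Y, A123, A423, B23, -Z - Z.T, Z2]),
        np.hstack([C23 @ X, C123, C423, Z2, Z2, -np.eye(2)]),
    ]
    return np.vstack(rows)

# Placeholder decision variables, just to show the assembled structure.
L = lmi_110(X=np.eye(2), Y=np.zeros((2, 2)), Z=np.eye(2), e1=1., e4=1., g=1.)
```

A solver such as SeDuMi (used in the chapter) or any SDP interface would then search for $X, Y, Z, \varepsilon, \gamma$ making this matrix negative definite.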

Using the SeDuMi package for Matlab, the given task was feasible with

$$\varepsilon\_{231} = 9.3761, \qquad \varepsilon\_{234} = 6.7928, \qquad \gamma\_{23} = 6.2252$$

$$X_{23}^{\diamond} = \begin{bmatrix} 0.5383 & -0.0046 \\ -0.0046 & 0.8150 \end{bmatrix}, \; Y_{23}^{\diamond} = \begin{bmatrix} 4.8075 & -0.0364 \\ -0.4196 & 5.1783 \end{bmatrix}, \; Z_{23}^{\diamond} = \begin{bmatrix} 4.2756 & 0.1221 \\ 0.1221 & 4.5297 \end{bmatrix}$$

$$\mathbf{K}\_{23}^{\diamond} = \begin{bmatrix} 1.1255 & -0.0384 \\ -0.1309 & 1.1467 \end{bmatrix}$$

Computing the remaining gain matrices in the same way, the gain matrix set is

$$K_{12}^{\diamond} = \begin{bmatrix} 7.3113 & 3.8869 \\ 1.4002 & 10.0216 \end{bmatrix}, \; K_{13}^{\diamond} = \begin{bmatrix} 7.9272 & 4.0712 \\ 4.2434 & 8.8245 \end{bmatrix}, \; K_{14}^{\diamond} = \begin{bmatrix} 7.4529 & 1.5651 \\ 1.6990 & 5.6584 \end{bmatrix}$$

$$K_{24}^{\diamond} = \begin{bmatrix} 7.2561 & 0.7243 \\ -2.7951 & 4.4839 \end{bmatrix}, \qquad K_{34}^{\diamond} = \begin{bmatrix} 6.3680 & 4.1515 \\ 0.8099 & 5.2661 \end{bmatrix}$$

Note that the control laws are realized in the partly-autonomous structure (94), (95), where every subsystem pair is stable, and the large-scale system is stable, too. For comparison, the gain matrix (81) equivalent to centralized control can be constructed

$$\mathbf{K} = \begin{bmatrix} 22.6914 & 3.8869 & 4.0712 & 1.5651 \\ 1.4002 & 18.4032 & -0.0384 & 0.7243 \\ 4.2434 & -0.1309 & 16.3393 & 4.1515 \\ 1.6990 & -2.7951 & 0.8099 & 15.4084 \end{bmatrix}$$

Thus, the resulting closed-loop eigenvalue spectrum is

$$\rho(A - BK) = \left\{ -13.0595 \pm 0.4024\,\mathrm{i}, \; -16.2717, \; -22.4515 \right\}$$

The structure of matrix $K$ evidently implies that the control gain is diagonally dominant.
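The composition rule behind (81) — each pairwise gain expanded to full dimension and summed, so the diagonal accumulates over all pairs containing a subsystem — can be checked numerically; a numpy sketch reproducing $K$ from the pairwise gains above and verifying the closed loop:

```python
import numpy as np

A = np.array([[ 3,  1,  2, -1],
              [-1,  2,  0,  1],
              [ 1, -1,  1,  3],
              [ 1, -2, -2,  2]], dtype=float)
B = np.eye(4)

# Pairwise gains K°_hk from the example (1-based pair labels).
K_pair = {
    (1, 2): np.array([[7.3113, 3.8869], [1.4002, 10.0216]]),
    (1, 3): np.array([[7.9272, 4.0712], [4.2434, 8.8245]]),
    (1, 4): np.array([[7.4529, 1.5651], [1.6990, 5.6584]]),
    (2, 3): np.array([[1.1255, -0.0384], [-0.1309, 1.1467]]),
    (2, 4): np.array([[7.2561, 0.7243], [-2.7951, 4.4839]]),
    (3, 4): np.array([[6.3680, 4.1515], [0.8099, 5.2661]]),
}

# Expand each pairwise gain into the full 4x4 gain and sum the expansions.
K = np.zeros((4, 4))
for (h, k), Khk in K_pair.items():
    idx = np.ix_([h - 1, k - 1], [h - 1, k - 1])
    K[idx] += Khk

# Closed-loop spectrum: all eigenvalues lie in the open left half-plane.
rho = np.linalg.eigvals(A - B @ K)
```

With these values, `K[0, 0]` accumulates $7.3113 + 7.9272 + 7.4529 = 22.6914$, matching the composed matrix printed above.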

### **6. Pairwise decentralized design of control for uncertain systems**

Consider for simplicity that only the system matrix blocks are uncertain, and that one or no uncertain function is associated with a system matrix block. Then the structure of the pairwise system description implies

$$A_{hk}r(t) \in \begin{cases} A_{hk}^{\diamond} \cup \{A_{lh}^{k\diamond}\}_{l=1}^{h-1} \cup \{A_{hl}^{k\diamond}\}_{l=h+1}^{p} & \text{; upper triangular blocks } (h < k) \\[4pt] \{A_{lh}^{\diamond}\}_{l=1}^{h-1} \cup \{A_{hl}^{\diamond}\}_{l=h}^{p} & \text{; diagonal blocks } (h = k) \\[4pt] A_{kh}^{\diamond} \cup \{A_{lk}^{h\diamond}\}_{l=1}^{k-1} \cup \{A_{kl}^{h\diamond}\}_{l=k+1}^{p} & \text{; lower triangular blocks } (h > k) \end{cases} \tag{120}$$

Analogously, equivalent expressions can be obtained with respect to $B_{hk}r(t)$ and $C_{hk}r(t)$, respectively. Thus, it is evident already in this simple case that a single uncertainty affects $p-1$ of the $q = \binom{p}{2}$ linear matrix inequalities which have to be included in the design. Generally, the following theorem can be formulated.
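The count $p-1$ can be checked mechanically: an uncertain entry in row $h$ of $A$ enters the blocks of pair $(h,k)$ either as part of $A^{\diamond}_{hk}$ (if the column index is also in the pair) or as part of an interaction column, so exactly the pairs containing $h$ are affected. A short sketch for the entry $a_{34}$ used in the example below:

```python
from itertools import combinations

p = 4
row, col = 3, 4          # uncertain entry a_34, 1-based as in the chapter

pairs = list(combinations(range(1, p + 1), 2))   # q = C(p,2) pairwise LMIs
# a_{row,col} appears in pair (h,k) whenever the row index belongs to the
# pair: inside A°_hk if col is also in the pair, otherwise inside the
# interaction column A^{col°}_hk.
affected = [hk for hk in pairs if row in hk]

print(len(pairs), len(affected))  # prints: 6 3
```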

**Theorem 8.** *Uncertain subsystem pair (91) in system (75), (77), controlled by control law (97), is stable with quadratic performances* $\|C_{hk}^{\diamond}(sI-A_{hkc}^{\diamond})^{-1}B_{hk}^{\diamond}\|_{\infty}^{2} \leq \gamma_{hk}$, $\|C_{hk}^{l\diamond}(sI-A_{hkc}^{\diamond})^{-1}B_{hk}^{l\diamond}\|_{\infty}^{2} \leq \varepsilon_{hkl}$ *if for* $\delta > 0$, $\delta \in \mathbb{R}$, $h = 1, 2, \ldots, p-1$, $k = h+1, h+2, \ldots, p$, $l = 1, 2, \ldots, p$, $l \neq h, k$*, there exist symmetric positive definite matrices* $T_{hki}^{\diamond} \in \mathbb{R}^{(n_h+n_k)\times(n_h+n_k)}$*, matrices* $V_{hk}^{\diamond} \in \mathbb{R}^{(n_h+n_k)\times(n_h+n_k)}$, $W_{hk}^{\diamond} \in \mathbb{R}^{(r_h+r_k)\times(n_h+n_k)}$*, and positive scalars* $\gamma_{hk}, \varepsilon_{hkl} \in \mathbb{R}$ *such that for* $i = 1, 2, \ldots, s$

$$\mathbf{T}\_{hki}^{o} = \mathbf{T}\_{hki}^{oT} > 0,\ \boldsymbol{\varepsilon}\_{hkl} > 0,\ \gamma\_{hk} > 0,\ h,l = 1,\ldots,p,\ l \neq h,k,\ h < k \le p,\ i = 1,2,\ldots,s \quad \text{(121)}$$

$$\begin{bmatrix}
\Phi_{hki}^{\diamond} & A_{hki}^{1\diamond} & \cdots & A_{hki}^{p\diamond} & B_{hki}^{\diamond} & T_{hki}^{\diamond}-\delta V_{hk}^{\diamond T}+V_{hk}^{\diamond}A_{hki}^{\diamond T}-W_{hk}^{\diamond T}B_{hki}^{\diamond T} & V_{hk}^{\diamond}C_{hki}^{\diamond T} \\
* & -\varepsilon_{hk1}I_{n_1} & \ddots & \mathbf{0} & \mathbf{0} & A_{hki}^{1\diamond T} & C_{hki}^{1\diamond T} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots \\
* & * & \cdots & -\varepsilon_{hkp}I_{n_p} & \mathbf{0} & A_{hki}^{p\diamond T} & C_{hki}^{p\diamond T} \\
* & * & \cdots & * & -\gamma_{hk}I_{(r_h+r_k)} & B_{hki}^{\diamond T} & \mathbf{0} \\
* & * & \cdots & * & * & -\delta(V_{hk}^{\diamond}+V_{hk}^{\diamond T}) & \mathbf{0} \\
* & * & \cdots & * & * & * & -I_{(m_h+m_k)}
\end{bmatrix} < 0 \tag{122}$$

*where* $A_{hk}^{\diamond}$, $B_{hk}^{\diamond}$, $A_{hk}^{l\diamond}$, $C_{hk}^{\diamond}$, $C_{hk}^{l\diamond}$ *are equivalently defined as in (99), (101), (104), respectively,*

$$\boldsymbol{\Phi}\_{hki}^{\diamond} = \mathbf{V}\_{hk}^{\diamond}\boldsymbol{A}\_{hki}^{\diamond T} + \mathbf{A}\_{hki}^{\diamond}\mathbf{V}\_{hk}^{\diamond T} - \mathbf{B}\_{hki}^{\diamond}\mathbf{W}\_{hk}^{\diamond} - \mathbf{W}\_{hk}^{\diamond T}\mathbf{B}\_{hki}^{\diamond T} \tag{123}$$

*and where* $A_{hki}^{h\diamond}$, $A_{hki}^{k\diamond}$*, as well as* $C_{hki}^{h\diamond}$, $C_{hki}^{k\diamond}$ *are not included in the structure of (122). Then* $K_{hk}^{\diamond}$ *is given as*

$$\mathbf{K}\_{hk}^{\diamond} = \mathbf{W}\_{hk}^{\diamond} \mathbf{V}\_{hk}^{\diamond T - 1} \tag{124}$$


*Proof.* Considering (109)–(112) and inserting these appropriately into (72), the $i$-th instance of (73), and (74), then (121)–(124) are obtained.

#### **Illustrative example**


Considering the same system parameters as those given in the example presented in Subsection 5.3, but with $A_{34}r(t)$, where $r(t)$ lies within the interval $[0.8, 1.2]$, the following matrix parameters have to be included in the solution

$$A_{131}^{4\diamond} = \begin{bmatrix} -1 \\ 2.4 \end{bmatrix}, \; A_{132}^{4\diamond} = \begin{bmatrix} -1 \\ 3.6 \end{bmatrix}, \; A_{231}^{4\diamond} = \begin{bmatrix} 1 \\ 2.4 \end{bmatrix}, \; A_{232}^{4\diamond} = \begin{bmatrix} 1 \\ 3.6 \end{bmatrix}$$

$$A\_{341}^{\circ} = \begin{bmatrix} 1 \ 2.4 \\ -2 \ 2 \end{bmatrix}, \ A\_{342}^{\circ} = \begin{bmatrix} 1 \ 3.6 \\ -2 \ 2 \end{bmatrix},$$

i.e. a solution is associated with $T_{13i}^{\diamond}$, $T_{23i}^{\diamond}$, and $T_{34i}^{\diamond}$, $i = 1, 2$, while in the other cases only one matrix inequality is computed ($T_{12}^{\diamond}$, $T_{14}^{\diamond}$, $T_{24}^{\diamond}$).
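The vertex systems can be generated mechanically by evaluating the uncertain entry at the interval endpoints; a numpy sketch reproducing the vertex blocks above:

```python
import numpy as np

A = np.array([[ 3,  1,  2, -1],
              [-1,  2,  0,  1],
              [ 1, -1,  1,  3],
              [ 1, -2, -2,  2]], dtype=float)

# The uncertain entry a_34 = 3 is scaled by r(t) in [0.8, 1.2]; evaluating
# the two interval endpoints gives the s = 2 polytope vertices.
vertices = []
for r in (0.8, 1.2):
    Ar = A.copy()
    Ar[2, 3] = 3.0 * r          # a_34 -> 3 r
    vertices.append(Ar)

A4_13 = [Ar[[0, 2], 3] for Ar in vertices]               # A^{4°}_{13i}
A4_23 = [Ar[[1, 2], 3] for Ar in vertices]               # A^{4°}_{23i}
A_34  = [Ar[np.ix_([2, 3], [2, 3])] for Ar in vertices]  # A°_{34i}
```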

The task is feasible, and the Lyapunov matrices are computed as follows

$$T_{131}^{\diamond} = \begin{bmatrix} 5.7244 & -0.3591 \\ 0.1748 & 5.6673 \end{bmatrix}, \; T_{132}^{\diamond} = \begin{bmatrix} 5.0484 & 0.0232 \\ 0.0232 & 5.0349 \end{bmatrix}, \; T_{12}^{\diamond} = \begin{bmatrix} 6.3809 & 0.5280 \\ -0.6811 & 6.3946 \end{bmatrix}$$

$$T_{231}^{\diamond} = \begin{bmatrix} 6.1360 & 0.0841 \\ 0.0090 & 6.2377 \end{bmatrix}, \; T_{232}^{\diamond} = \begin{bmatrix} 5.5035 & 0.0258 \\ 0.0258 & 5.5252 \end{bmatrix}, \; T_{14}^{\diamond} = \begin{bmatrix} 7.2453 & 0.9196 \\ -1.0352 & 7.5124 \end{bmatrix}$$

**7. Concluding remarks**

complexity is increased.

**8. Acknowledgments**

**9. References**

Oxford.

The main difficulty of solving the decentralized control problem comes from the fact that the feedback gain is subject to structural constraints. At the beginning study of large scale system theory, some people thought that a large scale system is decentrally stabilizable under controllability condition by strengthening the stability degree of subsystems, but because of the existence of decentralized fixed modes, some large scale systems can not be decentrally stabilized at all. In this chapter the idea to stabilize all subsystems and the whole system simultaneously by using decentralized controllers is replaced by another one, to stabilize all subsystems pairs and the whole system simultaneously by using partly decentralized control. In this sense the final scope of this chapter are quadratic performances of one class of uncertain continuous-time large-scale systems with polytopic convex uncertainty domain. It is shown how to expand the Lyapunov condition for pairwise control by using additive matrix variables in LMIs based on equivalent BRL formulations. As mentioned above, such matrix inequalities are linear with respect to the subsystem variables, and does not involve any product of the Lyapunov matrices and the subsystem ones. This enables to derive a sufficient condition for quadratic performances, and provides one way for determination of parameter-dependent Lyapunov functions by solving LMI problems. Numerical examples demonstrate the principle effectiveness, although some computational

Partially Decentralized Design Principle in Large-Scale System Control 387

The work presented in this paper was supported by VEGA, Grant Agency of Ministry of Education and Academy of Sciences of Slovak Republic under Grant No. 1/0256/11, as well as by Research & Development Operational Programme Grant No. 26220120030 realized in Development of Center of Information and Communication Technologies for Knowledge

Boyd, D.; El Ghaoui, L.; Peron, E. & Balakrishnan, V. (1994). *Linear Matrix Inequalities in System*

Duan, Z.; Wang, J.Z. & Huang, L. (1994). Special decentralized control problems and

Filasová, A. & Krokavec, D. (1999). Pair-wise decentralized control of large-scale systems. *Journal of Electrical Engineering* Vol. 50, No. 3-4, 1999, 1-10, ISSN 1335-3632 Filasová, A. & Krokavec, D. (2000). Pair-wise partially decentralized Kalman estimator. In

Filasová, A. & Krokavec, D. (2011). Pairwise control principle in large-scale systems. *Archives*

Gahinet, P.; Nemirovski, A.; Laub, A.J. & Chilali, M. (1995). *LMI Control Toolbox User's Guide*.

*the American Control Conference 2005*, pp. 1697-1702, Portland, June 2005. Feron, E.; Apkarian, P. & Gahinet, P. (1996). Analysis and synthesis of robust control systems

effectiveness of parameter-dependent Lyapunov function method. In *Proceedings of*

via parameter-dependent Lyapunov functions, *IEEE Transactions on Automatic*

*Control Systems Design (CSD 2000): A Proceedings Volume from IFAC Conference*, Kozák, Š., Huba, M. (Ed.), pp. 125-130, ISBN 00-08-043546-7, Bratislava, June, 2000, Elsevier,

*and Control Theory*. SIAM, ISBN 0-89871-334-X, Philadelphia.

*Control*, Vol. 41, No. 7, 1996, 1041-1046, ISSN 0018-9286

Systems. These supports are very gratefully acknowledged.

*of Control Sciences*, 21, 2011 (in print).

The MathWorks, Inc., Natick, 1995.

$$\begin{bmatrix} T\_{341}^{\diamond} = \begin{bmatrix} 2.4585 \ 3.9935 \\ -3.7569 \ 1.5487 \end{bmatrix}, \; T\_{342}^{\diamond} = \begin{bmatrix} 5.7297 \\ & 5.7249 \end{bmatrix}, \; T\_{24}^{\diamond} = \begin{bmatrix} 2.5560 \ 2.1220 \\ -1.9076 \ 2.9055 \end{bmatrix} $$

the control law matrices takes form

$$\begin{aligned} \mathbf{K}\_{12}^{\complement} &= \begin{bmatrix} 13.2095 & 0.7495 \\ 2.2753 & 14.1033 \end{bmatrix}, \; \mathbf{K}\_{13}^{\complement} = \begin{bmatrix} 14.2051 & 4.4679 \\ 1.9440 & 13.4616 \end{bmatrix}, \; \mathbf{K}\_{14}^{\complement} = \begin{bmatrix} 12.6360 & -1.6407 \\ 2.9881 & 10.6109 \end{bmatrix}, \\\ \mathbf{K}\_{23}^{\complement} &= \begin{bmatrix} 14.3977 & -0.4237 \\ -1.0494 & 12.3509 \end{bmatrix}, \; \mathbf{K}\_{24}^{\complement} = \begin{bmatrix} -2.9867 & 5.9950 \\ -6.8459 & -2.6627 \end{bmatrix}, \; \mathbf{K}\_{34}^{\complement} = \begin{bmatrix} 5.3699 & 2.7480 \\ -0.6542 & 6.1362 \end{bmatrix} \end{aligned}$$

and with the common *δ* = 10 the subsystem interaction transfer functions *H*∞-norm upper-bound squares are

$$\begin{aligned} \varepsilon\_{123} &= 10.9960, \ \varepsilon\_{124} = 7.6712, \ \gamma\_{12} = 7.1988, \ \varepsilon\_{132} = 7.7242, \ \varepsilon\_{134} = 8.7654, \ \gamma\_{13} = 6.4988 \\ \varepsilon\_{142} &= 8.9286, \ \varepsilon\_{143} = 12.1338, \ \gamma\_{14} = 8.1536, \ \varepsilon\_{231} = 10.3916, \ \varepsilon\_{234} = 8.2081, \ \gamma\_{23} = 7.0939 \\ \varepsilon\_{241} &= 5.3798, \ \varepsilon\_{243} = 6.6286, \ \gamma\_{24} = 5.4780, \ \varepsilon\_{341} = 16.1618, \ \varepsilon\_{342} = 15.0874, \ \gamma\_{34} = 9.0965 \end{aligned}$$

In the same sense as given above, the control laws are realized in the partly-autonomous structure (94), (95), too, and as every subsystem pair as the large-scale system be stable.

Only for comparison reason, the composed gain matrix (defined as in (81)), and the resulting closed-loop system matrix eigenvalue spectrum, realized using the nominal system matrix parameter *An* and the robust and the nominal equivalent gain matrices *K*, *An*, respectively, were constructed using the set of gain matrices *Khk*, *k* = 1, 2, 3, *h* = 2, 3, 4, *h* �= *k*. As it can see

$$\mathbf{K} = \begin{bmatrix} 40.0507 & 0.7495 & 4.4679 & -1.6407 \\ 2.2753 & 25.5144 & -0.4237 & 5.9950 \\ 1.9440 & -1.0494 & 31.1824 & 2.7480 \\ 2.9881 & -6.8459 & -0.6542 & 14.0844 \end{bmatrix}, \quad \rho(A\_{\mathrm{ul}} - \mathbf{B}\mathbf{K}) = \begin{bmatrix} -15.0336 \\ -20.6661 \\ -29.8475 \\ -37.2846 \end{bmatrix}$$

$$\mathbf{K}\_{\mathrm{ll}} = \begin{bmatrix} 39.6876 & 0.7495 & 4.2372 & -1.6407 \\ 2.2753 & 24.8764 & -0.4500 & 5.9950 \\ 2.3218 & -1.0008 & 30.3905 & 3.2206 \\ 2.9881 & -6.8459 & -0.6666 & 14.0725 \end{bmatrix}, \quad \rho(A\_{\mathrm{ul}} - \mathbf{B}\mathbf{K}\_{\mathrm{ll}}) = \begin{bmatrix} -15.3818 \\ -19.6260 \\ -29.0274 \\ -36.9918 \end{bmatrix}$$

and the resulted structures of both gain matrices imply that by considering parameter uncertainties in design step the control gain matrix *K* is diagonally more dominant then *Kn* reflecting only the system nominal parameters.

It is evident that Lyapunov matrices *T*◦ *hki* are separated from *A*◦ *hki*, *<sup>A</sup>l*◦ *hki*, *B*◦ *hki*, *C*◦ *hki*, and *<sup>C</sup>l*◦ *hki h* = 1, 2 . . . , *p*−1, *k* = *h*+1, *h*+2..., *p*, *l* = 1, 2 . . . , *p*, *l* �= *h*, *k*, i.e. there are no terms containing the product of *T*◦ *hki* and any of them. By introducing a new variable *V*◦ *hk*, the products of type *P*◦ *hkiA*◦ *hki* and *<sup>A</sup>*◦*<sup>T</sup> hkiP*◦ *hki* are relaxed to new products *A*◦ *hkiV*◦*<sup>T</sup> hk* and *V*◦ *hkA*◦*<sup>T</sup> hki* where *V*◦ *hk* needs not be symmetric and positive definite. This enables a robust BRL can be obtained for a system with polytopic uncertainties by using a parameter-dependent Lyapunov function, and to deal with linear systems with parametric uncertainties.

Although no common Lyapunov matrices are required the method generally leads to a larger number of linear matrix inequalities, and so more computational effort be needed to provide robust stability. However, used conditions are less restrictive than those obtained via a quadratic stability analysis (i.e. using a parameter-independent Lyapunov function), and are more close to necessity conditions. It is a very useful extension to control performance synthesis problems.

## **7. Concluding remarks**

The main difficulty in solving the decentralized control problem comes from the fact that the feedback gain is subject to structural constraints. In early studies of large-scale system theory it was thought that a large-scale system is decentrally stabilizable, under a controllability condition, by strengthening the stability degree of the subsystems; because of the existence of decentralized fixed modes, however, some large-scale systems cannot be decentrally stabilized at all. In this chapter the idea of stabilizing all subsystems and the whole system simultaneously by using decentralized controllers is replaced by another one: to stabilize all subsystem pairs and the whole system simultaneously by using partly decentralized control. In this sense the final scope of this chapter is the quadratic performance of one class of uncertain continuous-time large-scale systems with a polytopic convex uncertainty domain. It is shown how to expand the Lyapunov condition for pairwise control by using additive matrix variables in LMIs based on equivalent BRL formulations. As mentioned above, such matrix inequalities are linear with respect to the subsystem variables, and do not involve any product of the Lyapunov matrices and the subsystem ones. This makes it possible to derive a sufficient condition for quadratic performance, and provides one way to determine parameter-dependent Lyapunov functions by solving LMI problems. Numerical examples demonstrate the effectiveness of the approach, although the computational complexity is somewhat increased.

### **8. Acknowledgments**

The work presented in this chapter was supported by VEGA, the Grant Agency of the Ministry of Education and the Academy of Sciences of the Slovak Republic, under Grant No. 1/0256/11, as well as by the Research & Development Operational Programme under Grant No. 26220120030, realized in the Development of the Center of Information and Communication Technologies for Knowledge Systems. This support is gratefully acknowledged.


## **A Model-Free Design of the Youla Parameter on the Generalized Internal Model Control Structure with Stability Constraint**

Kazuhiro Yubai, Akitaka Mizutani and Junji Hirai *Mie University Japan*

### **1. Introduction**


In the design of a control system, plant perturbations and plant uncertainties can cause performance degradation and/or destabilization of the control system. The *H*∞ control synthesis and the *μ* synthesis are well known as suitable controller syntheses for plants with large perturbations and/or uncertainties (Zhou & Doyle, 1998), and many successful applications have been reported in various fields. However, these syntheses provide a controller that robustly stabilizes the closed-loop system against worst-case, overestimated disturbances and uncertainties at the expense of the nominal control performance. This means that there exists a trade-off between the nominal control performance and the robustness in the design of the control system.

Meanwhile, from the viewpoint of the control architecture, the Generalized Internal Model Control (GIMC) structure was proposed by Zhou, using the Youla parameterization (Vidyasagar, 1985), to resolve the above-mentioned trade-off (Campos-Delgado & Zhou, 2003; Zhou & Ren, 2001). The GIMC structure can be interpreted as an extension of Internal Model Control (IMC) (Morari & Zafiriou, 1997), which is applicable only to stable plants, to unstable plants by introducing coprime factorization. The GIMC structure consists of a conditional feedback structure and an outer-loop controller. The conditional feedback structure can detect model uncertainties and disturbances, which are then compensated through the Youla parameter. This means that the robustness of the control system in the GIMC structure is specified by the Youla parameter. On the other hand, in the case where there are no plant uncertainties and no disturbances, the conditional feedback structure detects nothing, and the feedback control system is governed only by the outer-loop controller. Since the nominal control performance is independent of the Youla parameter, the outer-loop controller can be designed according to various controller design techniques, and the trade-off between the nominal control performance and the robustness is resolved.

For the design of the Youla parameter, we previously proposed a design method using the dual Youla parameter, which represents the plant perturbations and/or the plant uncertainties (Matsumoto et al., 1993; Yubai et al., 2007). The design procedure is as follows: the dual Youla parameter is identified by the Hansen scheme (Hansen et al., 1989) using appropriate identification techniques, and the Youla parameter is designed based on the robust controller


synthesis. However, since it is difficult in general to give a physical interpretation to the dual Youla parameter, we must select the weighting function for identification and the order of the identified model by trial and error. From an implementation standpoint, a low-order controller is much preferable, which means that a low-order model of the dual Youla parameter should be identified. However, it is difficult to identify a low-order model of the dual Youla parameter that contains enough information about the actual dual Youla parameter to design an appropriate Youla parameter. Moreover, there may be cases where an accurate and reasonably low-order model of the dual Youla parameter cannot be obtained easily.

To avoid these difficulties in system identification of the dual Youla parameter, this article addresses a design method for the Youla parameter by model-free controller synthesis. Model-free controller syntheses have the advantage that the controller is directly synthesized or tuned only from input/output data collected from the plant, and no mathematical plant model is required for the controller design, which avoids the troublesome model identification of the dual Youla parameter. Moreover, since the order and the controller structure are specified by the designer, we can easily design a low-order Youla parameter by model-free controller synthesis.

A number of model-free controller syntheses have been proposed, e.g., Iterative Feedback Tuning (IFT) (Hjalmarsson, 1998), Virtual Reference Feedback Tuning (VRFT) (Campi et al., 2002), and Correlation-based Tuning (CbT) (Miskovic et al., 2007). These model-free controller syntheses address the model matching problem as a typical control objective. Since the IFT and the CbT basically deal with nonlinear optimization problems, they require iterative experiments to update the gradient of the cost function and the Hessian for the Gauss-Newton method at each parameter update. On the other hand, the VRFT yields controllers using only a single set of input/output data collected from the plant, provided that the controllers are linearly parameterized with respect to the parameter vector to be tuned. This article adopts the VRFT to design the Youla parameter in order to exploit this feature. However, model-free controller syntheses have a common disadvantage: the stability of the closed-loop system cannot be evaluated in advance of controller implementation, because no mathematical plant model is available to evaluate the stability and/or the control performance. From the viewpoint of safety, destabilization of the control system is not acceptable. Recently, a data-driven test of closed-loop stability before controller implementation (Karimi et al., 2007; Yubai et al., 2011) and a data-driven controller synthesis that at least guarantees closed-loop stability (Heusden et al., 2010) have been developed for the standard unity feedback control structure.
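To make the VRFT idea concrete, the following sketch tunes a linearly parameterized (PI-type) controller from a single open-loop data set for a toy first-order discrete-time plant. The plant, the reference model, and the controller class are illustrative assumptions, not taken from the article; in this noise-free setting the ideal controller happens to lie in the chosen class, so plain least squares recovers it exactly.

```python
import numpy as np

# Toy setting (assumed for this sketch): plant
#   G0(z) = 0.5 z^-1 / (1 - 0.7 z^-1),
# reference model
#   M(z)  = 0.2 z^-1 / (1 - 0.8 z^-1).
# The ideal controller is the PI law
#   u[t] = u[t-1] + th0*e[t] + th1*e[t-1],  (th0, th1) = (0.4, -0.28).
rng = np.random.default_rng(0)
T = 200
u = rng.standard_normal(T)

y = np.zeros(T)
for t in range(1, T):                  # one open-loop experiment
    y[t] = 0.7 * y[t - 1] + 0.5 * u[t - 1]

rbar = 5.0 * y[1:] - 4.0 * y[:-1]      # virtual reference: y = M * rbar
ebar = rbar - y[:-1]                   # virtual tracking error

# Least-squares fit of u[t] - u[t-1] = th0*ebar[t] + th1*ebar[t-1]
du = u[2:T - 1] - u[1:T - 2]
Phi = np.column_stack([ebar[2:], ebar[1:-1]])
theta, *_ = np.linalg.lstsq(Phi, du, rcond=None)
assert np.allclose(theta, [0.4, -0.28], atol=1e-8)
```

With noisy data, the VRFT literature additionally prefilters the signals and replaces plain least squares with instrumental variables; the skeleton above only shows the virtual-reference construction and the linear regression.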

This article derives the robust stability condition for the design of the Youla parameter; its sufficient condition is described by the *H*∞ norm of the product of the Youla and the dual Youla parameters. Moreover, the *H*∞ norm is estimated using input/output data collected from the plant in closed loop. This sufficient condition for robust stability is imposed as a stability constraint on the design problem of the Youla parameter based on the VRFT previously proposed by the authors (Sakuishi et al., 2008). Finally, the Youla parameter guaranteeing closed-loop stability is obtained by solving a convex optimization problem. The discussion is limited to SISO systems in this article.
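The idea of estimating an *H*∞ norm directly from input/output data can be sketched by frequency-sweep probing: excite the system with sinusoids, record the steady-state gain at each frequency, and take the maximum over a grid. The FIR system below is a stand-in for the unknown product of the Youla and dual Youla parameters whose norm the article bounds; the system, grid density, and tolerances are assumptions made for illustration.

```python
import numpy as np

# 'Unknown' system, probed only through input/output experiments.
h = np.array([0.5, 0.3, 0.2])          # illustrative FIR coefficients

def measure_gain(w, T=2000, skip=50):
    """Probe with a sinusoid of frequency w; return steady-state gain."""
    t = np.arange(T)
    u = np.cos(w * t)
    y = np.convolve(u, h)[:T]          # the 'experiment' on the system
    return float(np.sqrt(np.mean(y[skip:] ** 2) / np.mean(u[skip:] ** 2)))

grid = np.linspace(0.0, np.pi, 200)
hinf_est = max(measure_gain(w) for w in grid)

# Compare against the exact norm (available here only because the
# coefficients are known in this toy example).
true_hinf = float(np.abs(np.fft.fft(h, 4096)).max())
assert abs(hinf_est - true_hinf) < 0.05
```

The accuracy is limited by the grid resolution and the finite experiment length; the article's point is precisely that such a data-based estimate can stand in for the model-based norm in the stability constraint.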

Fig. 1. GIMC structure.


## **2. Robust control by the GIMC structure**

This section gives a brief review of the GIMC (Generalized Internal Model Control) structure, a control architecture that resolves the trade-off between the control performance and the robustness.

### **2.1 GIMC structure**

A linear time-invariant plant *P*<sup>0</sup> is assumed to have a coprime factorization (Vidyasagar, 1985) on RH<sup>∞</sup> as

$$P_0 = N D^{-1}, \quad N, D \in \mathcal{RH}_{\infty}, \tag{1}$$

where RH<sup>∞</sup> denotes the set of all real rational proper stable transfer functions. A nominal controller *C*<sup>0</sup> stabilizing *P*<sup>0</sup> is also assumed to have a coprime factorization on RH<sup>∞</sup> as

$$C_0 = X Y^{-1}, \quad X, Y \in \mathcal{RH}_{\infty}, \tag{2}$$

where *X* and *Y* satisfy the Bezout identity *XN* + *YD* = 1. Then the class of all stabilizing controllers *C* is parameterized as (3), which is called the Youla parameterization, by introducing the Youla parameter *Q* ∈ RH<sup>∞</sup> (Vidyasagar, 1985):

$$C = (Y - QN)^{-1}(X + QD), \tag{3}$$

where *Q* is a free parameter and is determined arbitrarily as long as

$$\det(Y(\infty) - Q(\infty)N(\infty)) \neq 0.$$
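The algebra behind (1)–(3) can be checked numerically on a toy example. The sketch below uses an illustrative coprime factorization of the unstable plant *P*0(*s*) = 1/(*s* − 1) (this plant and the particular factors are assumptions, not the article's example) and verifies pointwise that the Bezout identity holds and that the closed-loop sensitivity equals *D*(*Y* − *QN*), a product of stable factors, for a stable *Q*.

```python
import numpy as np

# Illustrative coprime factorization over RH_inf of the toy unstable
# plant P0(s) = 1/(s - 1):
N = lambda s: 1.0 / (s + 1.0)          # P0 = N * D**-1, cf. eq. (1)
D = lambda s: (s - 1.0) / (s + 1.0)
X = lambda s: 2.0 + 0.0 * s            # C0 = X * Y**-1, cf. eq. (2)
Y = lambda s: 1.0 + 0.0 * s            # Bezout: X*N + Y*D = 1

def P0(s):
    return N(s) / D(s)

def C(s, Q):
    """A stabilizing controller from eq. (3): C = (Y - QN)^-1 (X + QD)."""
    return (X(s) + Q(s) * D(s)) / (Y(s) - Q(s) * N(s))

Q = lambda s: 0.5 / (s + 2.0)          # some stable proper Q

for s in [1.0j, 0.3 + 2.0j, 5.0 + 0.0j]:
    # Bezout identity XN + YD = 1 holds pointwise.
    assert abs(X(s) * N(s) + Y(s) * D(s) - 1.0) < 1e-12
    # Sensitivity 1/(1 + P0*C) = D*(Y - Q*N): a product of stable
    # factors, so the loop is internally stable for such a Q.
    S = 1.0 / (1.0 + P0(s) * C(s, Q))
    assert abs(S - D(s) * (Y(s) - Q(s) * N(s))) < 1e-9
```

Setting *Q* = 0 in the same code recovers the nominal controller *C*0 = *XY*<sup>−1</sup> = 2, which is the conditional-feedback behavior exploited by the GIMC structure.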

Then, the GIMC structure is constructed as in Fig. 1 by using (1) and (3), where *r*, *u*, *y* and *β* represent reference inputs, control inputs, observation outputs and residual signals, respectively. The only difference between the GIMC structure and the standard unity feedback control structure shown in Fig. 2 is that the input of *D* is *y* in the GIMC structure instead of *e*. Since the GIMC structure has a conditional feedback structure, the Youla parameter *Q* is only activated in the case where disturbances are injected and/or there exist plant uncertainties. If there is no disturbance and no plant uncertainty (*β* = 0), *Q* in the GIMC structure does not generate any compensation signal and the control system is governed only by the nominal controller *C*0. This means that the nominal control performance is specified only by the nominal controller *C*0. On the other hand, if there exist disturbances and/or plant uncertainties (*β* ≠ 0),


Fig. 2. A unity feedback control structure.

the inner loop controller *Q* generates the compensation signal to suppress the effect of plant uncertainties and disturbances in addition to the nominal controller *C*0.

In this way, the role of *C*<sup>0</sup> and that of *Q* are clearly separated: *C*<sup>0</sup> can be designed to achieve a higher nominal control performance, while *Q* can be designed to attain higher robustness to plant uncertainties and disturbances. This is the reason why the GIMC structure is one of the promising control architectures that solve the trade-off between the nominal control performance and the robustness in the design of the feedback control system. In this article, we address the design problem of the Youla parameter *Q* using the input/output data set, so as to generate an appropriate compensation signal that reduces the effect of plant uncertainties and/or disturbances, on the assumption that a nominal controller *C*<sup>0</sup> meeting the given nominal control performance requirements is already available.

### **2.2 Dual Youla parameterization and robust stability condition**

For appropriate compensation of plant uncertainties, information on the plant uncertainties is essential. In the design of the Youla parameter *Q*, the following parameterization plays an important role. On the assumption that the nominal plant *P*<sup>0</sup> factorized as (1) and its deviated version *P* are both stabilized by the nominal controller *C*0, *P* is parameterized by introducing a dual Youla parameter *R* ∈ RH<sup>∞</sup> as follows:

$$P = (N + YR)(D - XR)^{-1}. \tag{4}$$

This parameterization is called the dual Youla parameterization, and it is the dual version of the Youla parameterization mentioned in the previous subsection. It says that the actual plant *P*, which deviates from the nominal plant *P*0, can be represented by the dual Youla parameter *R*. By substituting (4) into the block diagram shown in Fig. 1, we obtain the equivalent block diagram shown in Fig. 3. From this block diagram, the robust stability condition when the controlled plant deviates from *P*<sup>0</sup> to *P* is derived as

$$(1 + RQ)^{-1} \in \mathcal{RH}_{\infty}. \tag{5}$$

We must design *Q* so as to meet this stability condition.
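The role of condition (5) can be verified numerically in the scalar case: substituting (4) and the controller (3) into the loop gives 1/(1 + *PC*) = (*D* − *XR*)(*Y* − *QN*)/(1 + *RQ*), so every closed-loop factor is stable except possibly (1 + *RQ*)<sup>−1</sup>. The toy factors and the particular choices of *R* and *Q* below are illustrative assumptions, not the article's.

```python
import numpy as np

# Same illustrative factorization of P0(s) = 1/(s - 1) as before:
N = lambda s: 1.0 / (s + 1.0)
D = lambda s: (s - 1.0) / (s + 1.0)
X = lambda s: 2.0 + 0.0 * s
Y = lambda s: 1.0 + 0.0 * s

R = lambda s: 0.4 / (s + 3.0)          # dual Youla parameter in RH_inf
Q = lambda s: 0.5 / (s + 2.0)          # Youla parameter in RH_inf

def P(s):
    """Perturbed plant, eq. (4): P = (N + YR)(D - XR)^-1."""
    return (N(s) + Y(s) * R(s)) / (D(s) - X(s) * R(s))

def C(s):
    """Controller from the Youla parameterization, eq. (3)."""
    return (X(s) + Q(s) * D(s)) / (Y(s) - Q(s) * N(s))

for s in [1.0j, 0.3 + 2.0j, 5.0 + 0.0j]:
    # A nonzero R really deviates P from the nominal plant N/D.
    assert abs(P(s) - N(s) / D(s)) > 1e-6
    # Perturbed sensitivity: (1 + PC) * (D - XR)(Y - QN) = 1 + RQ,
    # so internal stability hinges on (1 + RQ)^-1 in RH_inf, eq. (5).
    lhs = (1.0 + P(s) * C(s)) * (D(s) - X(s) * R(s)) * (Y(s) - Q(s) * N(s))
    assert abs(lhs - (1.0 + R(s) * Q(s))) < 1e-9
```

For these small scalar choices |*RQ*| < 1 on the imaginary axis, so (5) holds by the small-gain argument; this is exactly the *H*∞-norm bound on the product *RQ* that the article later imposes as a constraint.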

### **3. Direct design of the Youla parameter from experimental data**

As stated in the previous subsection, the role of *Q* is to suppress plant variations and disturbances. This article addresses the design problem of *Q* so as to bring the closed-loop performance from *r* to *y*, denoted by *Gry*, close to its nominal control performance as an

Fig. 3. Equivalent block-diagram of GIMC.

example. This design problem is formulated in the frequency domain as a model matching problem:

$$Q = \arg\min_{\tilde{Q}} J_{\mathrm{MR}}(\tilde{Q}), \tag{6}$$

where


$$J\_{\rm MR}(Q) = \left\| W\_M \left( M - \frac{(N + RY)X}{1 + RQ} \right) \right\|\_2^2. \tag{7}$$

*M* is a reference model for *Gry* given by the designer and it corresponds to the nominal control performance. *WM* is a frequency weighting function.

Following the model-based controller design approach, the typical design procedure would be as follows: first, we identify the dual Youla parameter *R* using the input/output data set; second, the Youla parameter *Q* is designed based on the identified model of *R*. However, since the dual Youla parameter *R* is described as

$$R = D(P - P_0)\left\{Y(1 + PC_0)\right\}^{-1},\tag{8}$$

it depends on the coprime factors *N*, *D*, *X* and *Y*, which makes it difficult to give a physical interpretation of *R*. As a result, the identification of *R* requires trial and error in the selection of the structure and/or the order of *R*. Moreover, as is clear from (8), *R* should be modeled as a high-order model, so the designed *Q* tends to be a high-order controller, which is a serious problem for implementation.

In this article, we address the direct design problem of a fixed-order and fixed-structure *Q* from the input/output data set, minimizing the evaluation function (7) without any model identification of *R*.
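To make the algebra of (4) and (8) concrete, here is a minimal numeric sketch in which the coprime factors *N*, *D*, *X*, *Y* are replaced by constant gains satisfying the Bezout identity *XN* + *YD* = 1; all values are illustrative and not taken from this chapter.

```python
# Minimal numeric check of the dual Youla parameterization (4) and (8),
# using constant (static-gain) "coprime factors" for illustration only.
N, D = 0.5, 1.0          # nominal plant P0 = N / D
X, Y = 1.0, 0.5          # nominal controller C0 = X / Y
assert abs(X * N + Y * D - 1.0) < 1e-12   # Bezout identity holds

P0 = N / D
C0 = X / Y

R = 0.2                  # dual Youla parameter (any stable "system"; here a gain)

# Perturbed plant from (4): P = (N + Y R)(D - X R)^{-1}
P = (N + Y * R) / (D - X * R)

# Recover R from (8): R = D (P - P0) {Y (1 + P C0)}^{-1}
R_rec = D * (P - P0) / (Y * (1.0 + P * C0))

print(abs(P - 0.75) < 1e-9, abs(R_rec - R) < 1e-9)   # True True
```

The round trip (4) → (8) returns the same *R*, which is exactly the property the identification-free design below exploits.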

### **3.1 Review of the Virtual Reference Feedback Tuning (VRFT)**

The Virtual Reference Feedback Tuning (VRFT) is one of the model-free controller design methods that achieve model matching. The VRFT provides the controller parameters using only the input/output data set, so that the actual closed-loop property approaches its reference model given by the designer. In this subsection, the basic concept and the algorithm are reviewed.

A Model-Free Design of the Youla Parameter on the Generalized Internal Model Control Structure with Stability Constraint

Fig. 4. Basic concept of the VRFT.

The basic concept of the VRFT is depicted in Fig. 4. For a stable plant, assume that the input/output data set {*u*0(*t*), *y*0(*t*)} of length *N* has been already collected in open-loop manner. Introduce the virtual reference *r*˜(*t*) such that

$$y_0(t) = M\tilde{r}(t),$$

where *M* is a reference model to be achieved. Now, assume that the output of the feedback system consisting of *P* and *C*(*θ*) parameterized by the parameter vector *θ* coincides with *y*0(*t*) when the virtual reference signal *r*˜(*t*) is given as a reference signal. Then, the output of *C*(*θ*), denoted by *u*˜(*t*, *θ*) is represented as

$$\begin{aligned} \tilde{u}(t, \theta) &= C(\theta)(\tilde{r}(t) - y_0(t)) \\ &= C(\theta)(M^{-1} - 1)y_0(t). \end{aligned}$$

If *u*˜(*t*, *θ*) = *u*0(*t*), then the model matching is achieved, i.e.,

$$M = \frac{PC(\theta)}{1 + PC(\theta)}.$$

Since exact model matching is difficult in practice, due to the restricted controller structure, the measurement noise injected into the output, etc., we consider the alternative optimization problem:

$$\hat{\theta} = \arg\min_{\theta} J^{N}_{\text{VR}}(\theta),$$

where

$$\begin{aligned} J^{N}_{\text{VR}}(\theta) &= \frac{1}{N}\sum_{t=1}^{N}\left[L(u_0(t) - \tilde{u}(t,\theta))\right]^2 \\ &= \frac{1}{N}\sum_{t=1}^{N}\left[Lu_0(t) - C(\theta)L(\tilde{r}(t) - y_0(t))\right]^2. \end{aligned}$$

*L* is a prefilter given by the designer. By the selection *L* = *W<sub>M</sub>M*(1 − *M*), *θ*ˆ would be a good approximation of the exact solution *θ*¯ of the model matching problem even if *u*˜(*t*, *θ*) ≠ *u*0(*t*) (Campi et al., 2002). In particular, when the controller *C*(*θ*) is linearly parameterized with respect to *θ* using an appropriate transfer function vector *σ*, i.e., *C*(*θ*) = *σ*<sup>T</sup>*θ*, the optimal solution *θ*ˆ is calculated by the least-squares method as

$$\hat{\theta} = \left[\sum_{t=1}^{N}\varphi(t)\varphi^{\mathrm{T}}(t)\right]^{-1}\sum_{t=1}^{N}\varphi(t)u_L(t),$$

where *ϕ*(*t*) = *Lσ*(*r*˜(*t*) − *y*0(*t*)), *uL*(*t*) = *Lu*0(*t*).
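The VRFT least-squares step above can be sketched as follows. The reference model, the two-term controller basis, and the data are toy choices (with prefilter *L* = 1), constructed so that exact matching is possible and the estimate recovers the generating parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
y0 = rng.standard_normal(T)            # measured output (toy data)

# Reference model M(z) = 0.5 / (1 - 0.5 z^-1); its inverse is FIR, so the
# virtual reference is r~(t) = 2 y0(t) - y0(t-1)
r_virt = 2.0 * y0 - np.concatenate(([0.0], y0[:-1]))

e = r_virt - y0                        # virtual tracking error r~(t) - y0(t)

# Controller basis sigma: a gain term and a one-step delay (toy choice)
phi = np.stack([e, np.concatenate(([0.0], e[:-1]))], axis=1)

theta_true = np.array([1.5, -0.4])     # "unknown" controller that produced u0
u0 = phi @ theta_true                  # so exact model matching is possible

# Least-squares VRFT estimate: theta = [sum phi phi^T]^{-1} sum phi u
theta_hat = np.linalg.lstsq(phi, u0, rcond=None)[0]
print(theta_hat)                       # ≈ [1.5, -0.4]
```

Because the data were generated by a controller inside the chosen basis, the least-squares step recovers it exactly; with real data the estimate is only the best fit in the class *σ*<sup>T</sup>*θ*.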

### **3.2 Direct tuning of** *Q* **from experimental data by the VRFT**

This subsection describes the application of the VRFT to the design of the Youla parameter *Q* without any model identification of the dual Youla parameter *R*. The experimental data set used in the controller design, {*r*0(*t*), *u*0(*t*), *y*0(*t*)}, is collected from the closed-loop system composed of the perturbed plant *P* and the nominal controller *C*0. Define the Youla parameter *Q*(*z*, *θ*) linearly parameterized with respect to *θ* as

$$Q(z, \theta) = \sigma(z)^{\mathsf{T}} \theta,\tag{9}$$

where *σ*(*z*) is a discrete-time transfer function vector defined as

$$\sigma(z) = \left[\sigma_1(z), \sigma_2(z), \cdots, \sigma_n(z)\right]^{\mathrm{T}},\tag{10}$$

and *θ* is a parameter vector of length *n* defined as

$$\boldsymbol{\theta} = [\theta\_1, \theta\_2, \dots, \theta\_n]^\mathrm{T}.\tag{11}$$

Then the model matching problem formulated as (6) can be rewritten with respect to *θ* as

$$\bar{\theta} = \arg\min_{\theta} J_{\text{MR}}(\theta),\tag{12}$$

where


$$J\_{\rm MR}(\theta) = \left\| W\_M \left( M - \frac{(N + RY)X}{1 + RQ(\theta)} \right) \right\|\_2^2. \tag{13}$$

Under the condition that the dual Youla parameter *R* is unknown, we will obtain the minimizer *θ*¯ of *J*MR(*θ*) using the closed-loop experimental data set {*r*0(*t*), *u*0(*t*), *y*0(*t*)}.

Firstly, we obtain the input and the output data of *R* denoted by *α*(*t*), and *β*(*t*), respectively. In Fig. 1, we treat the actual plant *P* as the perturbed plant described by (4) and set *Q* = 0 since *Q* is a parameter to be designed. Then, we calculate *α*(*t*) and *β*(*t*) using the input/output data, {*u*0(*t*), *y*0(*t*)} collected from the plant when the appropriate reference signal *r*0(*t*) is applied to the standard unity feedback control structure as shown in Fig. 5. The signals *α*(*t*) and *β*(*t*) are calculated as follows:

$$\begin{aligned} \alpha(t) &= Xy_0(t) + Yu_0(t) \\ &= Xr_0(t), \end{aligned}\tag{14}$$

$$\beta(t) = D y\_0(t) - N u\_0(t). \tag{15}$$

Although *α*(*t*) is an internal signal of the feedback control system, *α*(*t*) is a function of the external signal *r*0(*t*) given by the designer, as is clear from (14). This means that the loop-gain from *β* to *α* is equal to 0 and that the input-output characteristic from *β* to *α* is an open-loop system, which is also understood from Fig. 3 with *Q* = 0. Moreover, since *R* belongs to RH<sup>∞</sup> according to the dual Youla parameterization, the input/output data set of *R* is always available from an open-loop experiment. As a result, the basic requirement for the VRFT is always satisfied in this parameterization.
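Continuing the static-gain illustration (constant *N*, *D*, *X*, *Y* satisfying *XN* + *YD* = 1; toy values, not from the chapter), the following sketch checks that *α*(*t*) computed by (14) equals *Xr*0(*t*) and that *β*(*t*) = *Rα*(*t*), i.e., the map from *α* to *β* is exactly the dual Youla parameter.

```python
import numpy as np

# Static-gain illustration of (14)-(15), reusing the Bezout factors
N, D, X, Y = 0.5, 1.0, 1.0, 0.5      # P0 = N/D, C0 = X/Y, X N + Y D = 1
R = 0.2                              # dual Youla parameter of the true plant
P = (N + Y * R) / (D - X * R)        # perturbed plant from (4), here 0.75
C0 = X / Y

r0 = np.random.default_rng(1).standard_normal(200)   # reference signal
y0 = (P * C0 / (1 + P * C0)) * r0    # closed-loop output of the unity loop
u0 = (C0 / (1 + P * C0)) * r0        # closed-loop plant input

alpha = X * y0 + Y * u0              # (14): the input of R
beta = D * y0 - N * u0               # (15): the output of R

print(np.allclose(alpha, X * r0), np.allclose(beta, R * alpha))  # True True
```

The first check confirms that *α* depends only on the external reference (the loop-gain from *β* to *α* is zero), so the pair (*α*, *β*) is open-loop data of *R* even though it was measured in closed loop.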

Secondly, we regard *y*0(*t*) as the output of the reference model *M*, and obtain the virtual reference *r*˜(*t*) such that

$$y_0(t) = M\tilde{r}(t).\tag{16}$$

Fig. 5. Data acquisition of *α*(*t*) and *β*(*t*).


If there exists the parameter *θ* such that *α*(*t*) = *Xr*˜(*t*) − *Q*(*θ*)*β*(*t*), the exact model matching is achieved (*Gry* = *M*). According to the concept of the VRFT, the approximated solution of the model matching problem, *θ*ˆ, is obtained by solving the following optimization problem:

$$\hat{\theta} = \arg\min_{\theta} J^{N}_{\text{VR}}(\theta),\tag{17}$$

where

$$J^{N}_{\text{VR}}(\theta) = \frac{1}{N}\sum_{t=1}^{N}\left[L_M(\alpha(t) - X\tilde{r}(t) + Q(\theta)\beta(t))\right]^2.$$

Since *Q*(*θ*) is linear with respect to the parameter vector *θ* as defined in (9), *J*<sup>N</sup><sub>VR</sub>(*θ*) is rewritten as

$$J^{N}_{\text{VR}}(\theta) = \frac{1}{N}\sum_{t=1}^{N}\left[y_L(t) - \varphi(t)^{\mathrm{T}}\theta\right]^2,\tag{18}$$

where

$$\begin{aligned} \varphi(t) &= -L_M\sigma\beta(t), \\ y_L(t) &= L_M(\alpha(t) - X\tilde{r}(t)). \end{aligned}$$

The minimizer of *J*<sup>N</sup><sub>VR</sub>(*θ*) is then calculated using the least-squares method as

$$\hat{\boldsymbol{\theta}} = \left[ \sum\_{t=1}^{N} \boldsymbol{\varphi}(t) \boldsymbol{\varphi}^{\mathrm{T}}(t) \right]^{-1} \sum\_{t=1}^{N} \boldsymbol{\varphi}(t) \boldsymbol{y}\_{L}(t). \tag{19}$$

The filter *L<sub>M</sub>* is specified by the designer. By selecting *L<sub>M</sub>* = *W<sub>M</sub>MY*Φ*α*(*ω*)<sup>−1</sup>, *θ*ˆ can be a good approximation of *θ*¯ as *N* → ∞, where Φ*α*(*ω*) is the spectral density function of *α*(*t*). Moreover, this design approach needs an inverse system of the reference model, *M*<sup>−1</sup>, when *r*˜(*t*) is generated; by introducing *L<sub>M</sub>*, however, we can avoid the overemphasis of high-frequency components caused by the derivative action in *M*<sup>−1</sup> when the noise-corrupted data *y*0(*t*) is used.

### **3.3 Stability constraint on the design of** *Q* **by the VRFT**

The design method of *Q* based on the VRFT stated in the previous subsection does not explicitly address the stability of the resulting closed-loop system. Therefore, we cannot evaluate whether the resulting Youla parameter *Q*(*θ*) actually stabilizes the closed-loop system in advance of its implementation. To avoid instability, a data-based stability constraint should be introduced into the optimization problem (17).

As stated in subsection 2.2, the robust stability condition when the plant perturbs from *P*0 to *P* is described using *R* and *Q* as (5). However, (5) is non-convex with respect to the parameter *θ*, and it is difficult to incorporate this stability condition into the least-squares-based VRFT as a constraint. Using the small-gain theorem, a sufficient condition for robust stability is derived as

$$\delta = \|RQ(\theta)\|\_{\infty} < 1. \tag{20}$$

The alternative constraint (20) is imposed instead of (5), and the original constrained optimization problem is reduced to a tractable one (Matsumoto et al., 1993). Since model information on the plant is not available in model-free controller syntheses such as the VRFT, we must evaluate (20) using only the input/output data set {*u*0(*t*), *y*0(*t*)} obtained from the closed-loop system. As is clear from Fig. 3, since the input and the output data of *R* are *α*(*t*) and *β*(*t*), respectively, the open-loop transfer function from *α* to *ξ*(*θ*) corresponds to *RQ*(*θ*) by introducing the virtual signal *ξ*(*t*, *θ*) = *Q*(*θ*)*β*(*t*). Assuming that *α*(*t*) consists of *p* repetitions of a periodic signal with period *T*, i.e., *α*(*t*) is of length *N* = *pT*, the *H*<sup>∞</sup> norm of *RQ*(*θ*), denoted by *δ*(*θ*), can be estimated via the spectral analysis method as the ratio between the power spectral density function of *α*(*t*), denoted by Φ*α*(*ωk*), and the cross power spectral density function between *α*(*t*) and *ξ*(*t*, *θ*), denoted by Φ*αξ* (*ωk*) (Ljung, 1999).

From the Wiener-Khinchin Theorem, Φ*α*(*ωk*) is represented as a discrete Fourier transform (DFT) of an auto-correlation of *α*(*t*), denoted by *Rα*(*τ*):

$$\Phi_\alpha(\omega_k) = \frac{1}{T}\sum_{\tau=0}^{T-1} R_\alpha(\tau)e^{-i\tau\omega_k},\tag{21}$$

where


$$R_\alpha(\tau) = \frac{1}{T}\sum_{t=1}^{T}\alpha(t-\tau)\alpha(t),$$

*ωk* = 2*πk*/(*TTs*) (*k* = 0, ··· , (*T* − 1)/2), and *Ts* is the sampling time. The frequency points *ωk* must form a sequence with a sufficiently narrow interval for a good estimate of *δ*(*θ*). A shorter sampling time *Ts* is preferable for estimating *δ*(*θ*) at higher frequencies, and a longer period *T* improves the frequency resolution.

Similarly, Φ*αξ* (*ωk*, *θ*) is estimated as a DFT of the cross-correlation between *α*(*t*) and *ξ*(*t*, *θ*), denoted by *Rαξ* (*τ*):

$$\hat{\Phi}_{\alpha\xi}(\omega_k, \theta) = \frac{1}{T}\sum_{\tau=0}^{T-1}\hat{R}_{\alpha\xi}(\tau,\theta)e^{-i\tau\omega_k},\tag{22}$$


$$\hat{R}_{\alpha\xi}(\tau,\theta) = \frac{1}{N}\sum_{t=1}^{N}\alpha(t-\tau)\xi(t,\theta).$$

Using the *p*-period cyclic signal *α*(*t*) in the estimate of *R*ˆ*αξ* (*τ*, *θ*), the effect of the measurement noise involved in *ξ*(*t*, *θ*) is averaged out, and the estimation error in Φ*αξ* (*ωk*, *θ*) is then reduced. In particular, when the measurement noise is zero-mean, its effect on the estimate of Φ*αξ* (*ωk*, *θ*) is asymptotically reduced to 0.

Since *Q*(*θ*) is linearly defined with respect to *θ*, *R*ˆ *αξ* (*τ*, *θ*) and Φˆ *αξ* (*ωk*, *θ*) are also linear with respect to *θ*. As a result, the stability constraint of (20) is evaluated using only the input/output data as

$$\hat{\delta}(\theta) = \max_{\{\omega_k \mid \Phi_\alpha(\omega_k)\neq 0\}}\left|\frac{\hat{\Phi}_{\alpha\xi}(\omega_k,\theta)}{\Phi_\alpha(\omega_k)}\right| < 1.\tag{23}$$

Since this constraint is convex with respect to *θ* at each frequency point *ωk*, we can integrate this *H*∞ norm constraint into the optimization problem (17) and solve it as a convex optimization problem.
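A sketch of the spectral estimate (21)-(23) over a single period: by the Wiener-Khinchin relation, the DFTs of the circular auto- and cross-correlations equal the periodogram quantities used below, so the ratio reduces to Ξ(*ωk*)/*A*(*ωk*) per bin. The signal *ξ* = *g*·*α*(*t* − *d*) stands in for *Q*(*θ*)*Rα* with a known gain *g*, so the estimated norm should come out as *g*; all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 256
alpha = rng.standard_normal(T)              # one period of the excitation

g, d = 0.4, 3
xi = g * np.roll(alpha, d)                  # xi plays the role of R Q alpha

# Periodogram estimates over one period; by Wiener-Khinchin these equal the
# DFTs of the circular auto-/cross-correlations in (21)-(22).
A = np.fft.rfft(alpha)
Xi = np.fft.rfft(xi)
Phi_a = (A * np.conj(A)).real / T           # power spectral density of alpha
Phi_ax = np.conj(A) * Xi / T                # cross spectral density alpha -> xi

mask = Phi_a > 1e-12                        # {w_k | Phi_a(w_k) != 0} in (23)
delta = np.max(np.abs(Phi_ax[mask] / Phi_a[mask]))   # estimate of ||RQ||_inf
print(delta)                                # ≈ 0.4, the known loop gain g
```

Because a circular shift only rotates the phase of each bin, the per-bin ratio has magnitude exactly *g*, so the max over frequencies recovers the H-infinity norm of this toy loop.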

### **3.4 Design algorithm**

This subsection describes the design algorithm of *Q*(*θ*) imposing the stability constraint.


**[step 1]** Collect the input/output data set {*u*0(*t*), *y*0(*t*)} of length *N* in the closed-loop manner in the unity feedback control structure shown in Fig. 5 when the appropriate reference signal *r*0(*t*) is applied.

**[step 2]** Calculate *α*(*t*) and *β*(*t*) using the data set {*r*0(*t*), *u*0(*t*), *y*0(*t*)} as

$$\begin{aligned} \alpha(t) &= Xr_0(t), \\ \beta(t) &= Dy_0(t) - Nu_0(t). \end{aligned}$$

**[step 3]** Generate the virtual reference *r*˜(*t*) such that

$$y_0(t) = M\tilde{r}(t).$$

**[step 4]** Solve the following convex optimization problem:

$$\hat{\theta} = \arg\min_{\theta} J^{N}_{\text{VR}}(\theta),$$

subject to

$$\left|\frac{1}{T}\sum_{\tau=0}^{T-1}\hat{R}_{\alpha\xi}(\tau,\theta)e^{-i\tau\omega_k}\right| < \left|\frac{1}{T}\sum_{\tau=0}^{T-1}R_{\alpha}(\tau)e^{-i\tau\omega_k}\right|, \quad \omega_k = \frac{2\pi k}{T}, \quad k = 0,\ldots,\frac{T-1}{2}.$$
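One way to realize [step 4] numerically is to treat (23) as a set of convex quadratic constraints, one per frequency bin, and hand the problem to a generic solver. The sketch below uses synthetic data, a delay-line basis for *σ*, and SciPy's SLSQP; the 0.99 factor adds a small stability margin. This is an illustration of the constraint structure under invented data, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
T = 128
alpha = rng.standard_normal(T)       # one period of the excitation
beta = 0.8 * alpha                   # toy data: beta = R alpha with R a pure gain

# Basis outputs sigma_i applied to beta: simple delays (toy choice)
B = np.stack([np.roll(beta, i) for i in range(3)], axis=1)   # shape (T, 3)

# A least-squares objective standing in for J_VR^N; the target asks for a
# loop gain of 0.8 * 2.0 = 1.6, which violates the small-gain bound and
# forces the constraint to become active.
y_L = B @ np.array([2.0, 0.0, 0.0])
J = lambda th: float(np.sum((y_L - B @ th) ** 2) / T)

# Frequency-domain constraint (23): |Phi_ax(w_k, th)| < Phi_a(w_k) per bin.
# Phi_ax is linear in th, so each bin gives a convex quadratic inequality.
A_f = np.fft.rfft(alpha)
Phi_a = np.abs(A_f) ** 2 / T
C = np.conj(A_f)[:, None] * np.fft.rfft(B, axis=0) / T       # per-basis cross spectra

def stab(th):                        # >= 0 elementwise when th is admissible
    return 0.99 ** 2 - np.abs(C @ th) ** 2 / Phi_a ** 2

res = minimize(J, x0=np.zeros(3), method="SLSQP",
               constraints=[{"type": "ineq", "fun": stab}])
ratio = np.max(np.abs(C @ res.x) / Phi_a)    # data-based estimate of ||RQ||_inf
print(ratio)                         # stays below 1, near the 0.99 margin
```

The unconstrained minimizer would push the estimated loop gain to 1.6; the constrained solution trades some matching error for a loop gain held under the small-gain bound.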

### **4. Design example**

To verify the effectiveness of the proposed design method, we address a velocity control problem of a belt-driven two-mass system frequently encountered in many industrial processes.

### **4.1 Controlled plant**

Fig. 6. Experimental set-up of a belt-driven two-mass system.

The plant to be controlled is depicted in Fig. 6. The velocity of the drive disk is controlled by the drive motor connected to the drive disk. The pulley is connected to the load disk through the flexible belt; the restoring force of the flexible belt affects the velocity of the drive disk, which causes resonant vibration of the drive disk. The resonant frequency strongly depends on the position and the number of the weights mounted on the drive disk and the load disk. We treat this two-mass resonant system as the controlled plant *P*. Since the position and the number of the weights mainly change the resonant frequency, a rigid-body model, which is easy to identify and changes little in response to load changes, is treated as the nominal plant *P*0. The nominal plant *P*0 is identified by a simple frequency response test as

$$P\_0 = \frac{4964}{s^2 + 136.1s + 8.16}.\tag{24}$$

Moreover, a delay time of 14 ms is emulated in software as the plant perturbation in *P*, but it is not reflected in *P*0. Due to this delay time, the closed-loop system tends to be destabilized when the gain of the feedback controller is high. This means that if a reference model with a high cut-off frequency is given, the closed-loop system is readily destabilized.

The Youla parameter *Q*(*s*, *θ*) is defined in the continuous-time domain so that the properness of *Q*(*s*, *θ*) is directly imposed:

$$Q(s,\theta) = \frac{\theta_1 s + \theta_2 s^2 + \theta_3 s^3 + \theta_4 s^4 + \theta_5 s^5}{(0.06s+1)^5} = \sigma(s)^{\mathrm{T}}\theta.\tag{25}$$

The parameter vector obtained without the stability constraint is denoted by *θ*ˆ<sub>w/o</sub> in (26), and the one obtained with the stability constraint by *θ*ˆ<sub>w/</sub> in (27). The frequency responses of *Q*(*z*, *θ*ˆ<sub>w/o</sub>) and *Q*(*z*, *θ*ˆ<sub>w/</sub>) are shown in Fig. 7. For *Q*(*z*, *θ*ˆ<sub>w/o</sub>), *δ*ˆ(*θ*ˆ<sub>w/o</sub>) = 7.424; since the sufficient stability condition is not satisfied around 60 rad/s, the closed-loop system might be destabilized if the Youla parameter *Q*(*z*, *θ*ˆ<sub>w/o</sub>) were implemented. In contrast, *Q*(*z*, *θ*ˆ<sub>w/</sub>) satisfies the stability constraint.
The discrete-time Youla parameter *Q*(*z*, *θ*) is defined by discretization of *Q*(*s*, *θ*), i.e., *σ*(*s*), with the sampling time *Ts* = 1 [ms]. In order to construct the type-I servo system even if the plant perturbs, the constant term of the numerator of *Q*(*s*, *θ*) is set to 0 such that *Q*(*s*, *θ*)|*s*=<sup>0</sup> =

The VRFT can be regarded as the open-loop identification problem of the controller parameter by the least-squares method. We select the pseudo random binary signal (PRBS) as the input for identification of the controller parameter as same as in the general open-loop identification problem, since the identification input should have certain power spectrum in all frequencies. The PRBS is generated through a 12-bit shift register (i.e., *<sup>T</sup>* <sup>=</sup> 212 <sup>−</sup> <sup>1</sup> <sup>=</sup> 4095 samples), the reference signal *r*<sup>0</sup> is constructed by repeating this PRBS 10 times (i.e., *p* = 10, *N* = 40950).

> <sup>−</sup>2.878 <sup>×</sup> <sup>10</sup>−<sup>2</sup> 1.429 <sup>×</sup> <sup>10</sup>−<sup>2</sup> <sup>−</sup>1.594 <sup>×</sup> <sup>10</sup>−<sup>3</sup> 1.184 <sup>×</sup> <sup>10</sup>−<sup>5</sup> 1.339 <sup>×</sup> <sup>10</sup>−<sup>6</sup>

1.263 <sup>×</sup> <sup>10</sup>−<sup>1</sup> 1.261 <sup>×</sup> <sup>10</sup>−<sup>2</sup> 7.425 <sup>×</sup> <sup>10</sup>−<sup>4</sup> 4.441 <sup>×</sup> <sup>10</sup>−<sup>6</sup> 4.286 <sup>×</sup> <sup>10</sup>−<sup>7</sup>

condition for the robust stability is not satisfied, we can predict in advance of implementation

*δ*(*θ*ˆ w/) = 0.9999 for *Q*(*z*, *θ*ˆ

and the relation, *Q* ∈ RH∞, are satisfied as

0 in the continuous-time (Sakuishi et al., 2008).

**4.3 Experimental result**

Firstly, we obtain the parameter *θ*ˆ

Secondly, we obtain the parameter *θ*ˆ

The estimates of *δ*(*θ*) for *Q*(*z*, *θ*ˆ

implemented. On the other hand, ˆ

### **4.2 Experimental condition**

For simplicity, the design problem in the previous section was restricted to matching *Gry* to its reference model *M*. However, the proposed method readily addresses the model matching of multiple characteristics. In practical situations, we must solve the trade-off between several closed-loop properties. In this experimental set-up, we show the design result of a simultaneous optimization problem that matches the tracking characteristic *Gry* and the noise attenuation characteristic *Gny* to their reference models *M* and *T*, respectively. The evaluation function is defined as

$$J\_{\rm MR}(\theta) = \left\| \mathcal{W}\_{\rm M} \left( M - \frac{(N + RY)X}{1 + RQ(\theta)} \right) \right\|\_{2}^{2} + \left\| \mathcal{W}\_{T} \left( T - \frac{(N + RY)(X + Q(\theta)D)}{1 + RQ(\theta)} \right) \right\|\_{2}^{2}. \tag{25}$$

To deal with the above multiobjective optimization problem, we redefine *ϕ*(*t*) and *yL*(*t*) in (18) as

$$\begin{aligned} \boldsymbol{\phi}(t) &= \begin{bmatrix} -L\_M \sigma \beta(t), \ -L\_T \sigma \left( \beta(t) - D \tilde{n}(t) \right) \end{bmatrix}, \\ \boldsymbol{y}\_L(t) &= \begin{bmatrix} L\_M (\alpha(t) - X \tilde{r}(t)), \ L\_T (\alpha(t) - X \tilde{n}(t)) \end{bmatrix}^{\mathsf{T}} \end{aligned}$$

where *n*˜(*t*) is a virtual reference such that *y*0(*t*) = *Tn*˜(*t*), and *LT* is a filter selected as *LT* = *WTT*Φ*α*(*ω*)<sup>−1</sup>. The reference models for *Gry* and *Gny* are given by discretization of
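The two weighted model-matching terms in (25) reduce to a single least-squares problem by stacking the filtered regressors and targets; a schematic numpy sketch with placeholder data (the random arrays merely stand in for the *LM*- and *LT*-filtered signals):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 200, 5                              # samples, parameter dimension (dim of theta)
phi_M = rng.standard_normal((N, p))        # placeholder L_M-filtered regressor
phi_T = rng.standard_normal((N, p))        # placeholder L_T-filtered regressor
y_M = phi_M @ np.ones(p)                   # placeholder filtered targets
y_T = phi_T @ np.ones(p)

# stacking the rows sums the two squared-error terms of the cost
Phi = np.vstack([phi_M, phi_T])
y = np.concatenate([y_M, y_T])
theta_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
```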

$$M = \frac{50^2}{(s+50)^2}, \text{ and }$$

$$T = \frac{50^2}{(s+50)^2}$$

with the sampling time *Ts* = 1 [ms].

The nominal controller stabilizing *P*<sup>0</sup> is evaluated from the relation

$$M = \frac{P\_0 C\_0}{1 + P\_0 C\_0}$$

as

$$\begin{aligned} C\_0 &= \frac{M}{(1 - M)P\_0} \\ &= \frac{0.5036s^2 + 68.52s + 4.110}{s(s + 100)}. \end{aligned}$$

The weighting functions *WM* and *WT* are given to improve the tracking performance in low frequencies and the noise attenuation performance in high frequencies as

$$W\_M = \frac{200^2}{(s+200)^2}, \text{ and }$$

$$W\_T = \frac{s^2}{(s+200)^2}.$$

The Youla parameter *Q*(*s*, *θ*) is defined in continuous time so that the properness of *Q*(*s*, *θ*) and the relation *Q* ∈ RH∞ are satisfied:

$$\begin{split} Q(s,\theta) &= \frac{\theta\_1 s + \theta\_2 s^2 + \theta\_3 s^3 + \theta\_4 s^4 + \theta\_5 s^5}{(0.06 s + 1)^5} \\ &= \frac{1}{(0.06 s + 1)^5} \begin{bmatrix} s \ s^2 \ s^3 \ s^4 \ s^5 \end{bmatrix} \begin{bmatrix} \theta\_1 \\ \theta\_2 \\ \theta\_3 \\ \theta\_4 \\ \theta\_5 \end{bmatrix} \\ &= \sigma(s)^{\mathsf{T}} \theta. \end{split}$$

The discrete-time Youla parameter *Q*(*z*, *θ*) is defined by discretization of *Q*(*s*, *θ*), i.e., of *σ*(*s*), with the sampling time *Ts* = 1 [ms]. In order to construct a type-I servo system even if the plant perturbs, the constant term of the numerator of *Q*(*s*, *θ*) is set to 0 so that *Q*(*s*, *θ*)|*s*=<sup>0</sup> = 0 in continuous time (Sakuishi et al., 2008).

### **4.3 Experimental result**


The VRFT can be regarded as an open-loop identification problem in which the controller parameter is estimated by the least-squares method. We select a pseudo-random binary signal (PRBS) as the input for identification of the controller parameter, as in the general open-loop identification problem, since the identification input should have sufficient power spectrum at all frequencies. The PRBS is generated through a 12-bit shift register (i.e., *T* = 2<sup>12</sup> − 1 = 4095 samples), and the reference signal *r*<sup>0</sup> is constructed by repeating this PRBS 10 times (i.e., *p* = 10, *N* = 40950). First, we obtain the parameter *θ*ˆw/o as (26) when the stability constraint is not imposed.

$$
\hat{\boldsymbol{\theta}}\_{\text{w}/\text{o}} = \begin{bmatrix} -2.878 \times 10^{-2} \\ 1.429 \times 10^{-2} \\ -1.594 \times 10^{-3} \\ 1.184 \times 10^{-5} \\ 1.339 \times 10^{-6} \end{bmatrix} \tag{26}
$$

Second, we obtain the parameter *θ*ˆw/ as (27) when the stability constraint is imposed.

$$
\hat{\boldsymbol{\theta}}\_{\text{w}/} = \begin{bmatrix}
1.263 \times 10^{-1} \\
1.261 \times 10^{-2} \\
7.425 \times 10^{-4} \\
4.441 \times 10^{-6} \\
4.286 \times 10^{-7}
\end{bmatrix} \tag{27}
$$
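The 12-bit shift-register PRBS described above can be generated with a linear feedback shift register; a minimal sketch (the tap set (12, 6, 4, 1) is one standard maximal-length choice and is an assumption, since the chapter does not state its taps):

```python
def prbs12(taps=(12, 6, 4, 1), nbits=12, seed=1):
    """One period of a PRBS from a Fibonacci LFSR, mapped to a +/-1 sequence."""
    state, out = seed, []
    for _ in range(2 ** nbits - 1):           # T = 2^12 - 1 = 4095 samples
        out.append(2 * (state & 1) - 1)       # output bit mapped {0, 1} -> {-1, +1}
        fb = 0
        for t in taps:                        # feedback = XOR of the tapped bits
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = prbs12()
r0 = seq * 10                                 # reference r0: the PRBS repeated p = 10 times
```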

The estimates of *δ*(*θ*) for *Q*(*z*, *θ*ˆw/o) and *Q*(*z*, *θ*ˆw/) are shown in Fig. 7. For *Q*(*z*, *θ*ˆw/o), the stability constraint is not satisfied around 60 rad/s, and *δ*ˆ(*θ*ˆw/o) = 7.424. Since the sufficient condition for robust stability is not satisfied, we can predict in advance of implementation that the closed-loop system might be destabilized if the Youla parameter *Q*(*z*, *θ*ˆw/o) were implemented. On the other hand, *δ*ˆ(*θ*ˆw/) = 0.9999 for *Q*(*z*, *θ*ˆw/), which satisfies the stability constraint. Therefore, we can predict in advance of implementation that the closed-loop system would be stabilized if the Youla parameter *Q*(*z*, *θ*ˆw/) were implemented.

Fig. 7. Estimate of *δ*(*θ*): *δ*ˆ(*θ*ˆw/o) and *δ*ˆ(*θ*ˆw/).

Figure 8 shows the step responses of the GIMC structure with *Q*(*z*, *θ*ˆw/o) and *Q*(*z*, *θ*ˆw/) implemented. In the case of *Q*(*z*, *θ*ˆw/o), the response vibrates persistently, and the tracking performance *Gry* degrades compared with the case where the control system is governed by only the nominal controller *C*0, i.e., *Q* = 0. On the other hand, in the case of *Q*(*z*, *θ*ˆw/), the response does not coincide with the output of the reference model due to the long delay time, but Fig. 8 shows that the control system is at least stabilized. Moreover, we can confirm that the vibration is suppressed compared with the case of *Q* = 0, and that the proposed method provides a Youla parameter reflecting the objective function without destabilizing the closed-loop system. Although *J<sup>N</sup>*<sub>VR</sub>(*θ*ˆw/o) < *J<sup>N</sup>*<sub>VR</sub>(*θ*ˆw/), the response for *Q*(*z*, *θ*ˆw/) is much closer to the output of the reference model than that for *Q*(*z*, *θ*ˆw/o). This observation implies that minimization of the 2-norm based cost function alone may not provide an appropriate stabilizing controller in model-free controller syntheses.

Fig. 8. Step responses for a belt-driven two-mass system with and without the stability constraint.

## **5. Conclusion**


In this article, a design method for the Youla parameter in the GIMC structure based on VRFT, a typical model-free controller design method, has been proposed. With the model-free design method, we can significantly reduce the effort for the identification of *R* and the design of *Q* compared with model-based control design methods. We can also specify the order and the structure of *Q*, which enables us to readily design a low-order controller. Moreover, the stability constraint derived from the small-gain theorem is integrated into the 2-norm based standard optimization problem. As a result, we can guarantee the closed-loop stability with the designed *Q* in advance of the controller implementation. The effectiveness of the proposed controller design method is confirmed by the experiment on the two-mass system.

As future work, we must tackle the robustness issue. The proposed method guarantees the closed-loop stability only for the specific condition under which the input/output data is collected. If the load condition changes, the closed-loop stability is no longer guaranteed. We must improve the proposed method to enhance its robustness against plant perturbations and/or plant uncertainties. Moreover, although the controller structure is currently restricted to a linearly parameterized one, a fully parameterized controller should be tuned for higher control performance.

## **6. References**
M. C. Campi, A. Lecchini and S. M. Savaresi: "Virtual Reference Feedback Tuning: a Direct Method for the Design of Feedback Controllers", *Automatica*, Vol. 38, No. 8, pp. 1337–1346 (2002)

D. U. Campos-Delgado and K. Zhou: "Reconfigurable Fault Tolerant Control Using GIMC Structure", *IEEE Transactions on Automatic Control*, Vol. 48, No. 5, pp. 832–838 (2003)

F. Hansen, G. Franklin and R. Kosut: "Closed-Loop Identification via the Fractional Representation: Experiment Design", *Proc. of American Control Conference 1989*, pp. 1422–1427 (1989)

K. van Heusden, A. Karimi and D. Bonvin: "Data-driven Controller Tuning with Integrated Stability Constraint", *Proceedings of 47th IEEE Conference on Decision and Control*, pp. 2612–2617 (2008)

K. van Heusden, A. Karimi and D. Bonvin: "Non-iterative Data-driven Controller Tuning with Guaranteed Stability: Application to Direct-drive Pick-and-place Robot", *Proc. of 2010 IEEE Multi-Conference on Systems and Control*, pp. 1005–1010 (2010)

H. Hjalmarsson, M. Gevers, S. Gunnarsson and O. Lequin: "Iterative Feedback Tuning: Theory and Applications", *IEEE Control Systems Magazine*, Vol. 18, No. 4, pp. 26–41 (1998)

A. Karimi, K. van Heusden and D. Bonvin: "Noniterative Data-driven Controller Tuning Using the Correlation Approach", *Proc. of European Control Conference 2007*, pp. 5189–5195 (2007)

L. Ljung: *System Identification: Theory for the User* (second edition), Prentice Hall (1999)

K. Matsumoto, T. Suzuki, S. Sangwongwanich and S. Okuma: "Internal Structure of Two-Degree-of-Freedom Controller and a Design Method for Free Parameter of Compensator", *IEEJ Transactions on Industry Applications*, Vol. 113-D, No. 6, pp. 768–777 (1993) (in Japanese)

L. Mišković, A. Karimi, D. Bonvin and M. Gevers: "Correlation-Based Tuning of Linear Multivariable Decoupling Controllers", *Automatica*, Vol. 43, No. 9, pp. 1481–1494 (2007)

M. Morari and E. Zafiriou: *Robust Process Control*, Prentice Hall (1997)

T. Sakuishi, K. Yubai and J. Hirai: "A Direct Design from Input/Output Data of Fault-Tolerant Control System Based on GIMC Structure", *IEEJ Transactions on Industry Applications*, Vol. 128, No. 6, pp. 758–766 (2008) (in Japanese)

M. Vidyasagar: *Control System Synthesis: A Factorization Approach*, The MIT Press (1985)

K. Yubai, S. Terada and J. Hirai: "Stability Test for Multivariable NCbT Using Input/Output Data", *IEEJ Transactions on Electronics, Information and Systems*, Vol. 130, No. 4 (2011)

K. Yubai, T. Sakuishi and J. Hirai: "Compensation of Performance Degradation Caused by Fault Based on GIMC Structure", *IEEJ Transactions on Industry Applications*, Vol. 127, No. 8, pp. 451–455 (2007) (in Japanese)

K. Zhou and J. C. Doyle: *Essentials of Robust Control*, Prentice Hall (1998)

K. Zhou and Z. Ren: "A New Controller Architecture for High Performance, Robust, and Fault-Tolerant Control", *IEEE Transactions on Automatic Control*, Vol. 46, No. 10, pp. 1613–1618 (2001)



## **Model Based *μ*-Synthesis Controller Design for Time-Varying Delay System**

Yutaka Uchimura *Shibaura Institute of Technology Japan*

### **1. Introduction**


Time delay often exists in engineering systems such as chemical plants and steel-making processes, and studies on time-delay systems have a long history. Such systems have therefore attracted many researchers' interest, and various studies have been conducted. Time delay had been a classic problem; however, the evolution of network technology and the spread of the Internet brought it back to the main stage. The rapid growth of computer network technology and the wide spread of the Internet have brought remarkable innovation to the world. They enabled not only speed-of-light information exchange but also the offering of various services via the Internet. Even people's daily lives have been changed by network-based services such as e-mail, web browsing, Twitter and social networks.

In the field of motion control engineering, computer networks are utilized to connect sensors, machines and controllers. Network applications in the machine industry are replacing bundles of traditional wiring, which are complex, heavy and require high installation costs (Farsi et al., 1999). In particular, the weight of the signal wires increases the fuel consumption of automobiles, which is nowadays not only a driving-performance issue but also an environmental one.

Much research and development is also being conducted at the application level, such as tele-surgery (Ghodoussi et al., 2002), tele-operated rescue robots (Yeh et al., 2008), and bilateral control with force feedback via a network (Uchimura & Yakoh, 2004). These applications commonly include sensors, actuators and controllers that are mutually connected and exchange information via a network.

When transmitting data over a network, transmission delays accumulate due to one or more of the following factors: signal propagation delay, the non-deterministic manner of network media access, waiting time in queuing, and so on. The delays sometimes become substantial and affect the performance of the system. In particular, delays in feedback not only weaken system performance, but can also make the system unstable in the worst case. Various studies have investigated ways to deal with systems with transmission delay. Time-delay systems belong to the class of functional differential equations, which are infinite dimensional. This means that there exists an infinite number of eigenvalues, and conventional control methods developed for linear time-invariant systems do not always reach the most optimized solution.

Therefore, many methods for time-delay systems have been proposed. A classic but prominent method is the Smith compensator (Smith, 1957). The Smith compensator essentially assumes that the time delay is constant. If the delay varies, the system may become unstable (Palmor, 1980). Vatanski et al. (Vatanski et al., 2009) proposed a modified Smith predictor method by


measuring the time-varying delays on the network, which eliminates the sensor time delay (the delay from a plant to a controller). The gain (P gain) of the controller is adjusted based on the amount of time delay to maintain the stability of the system. Passivity-based control using the scattering transformation does not require an upper bound of the delay (Anderson & Spong, 1989); however, as noted in previous research (Yokokohji et al., 1999), the method tends to be conservative and consequently to deteriorate the overall performance.

One of the typical approaches is a method based on robust control theory. Leung proposed to treat the time delay as a perturbation, and a stabilizing controller was obtained in the framework of *μ*-synthesis (Leung et al., 1997). Chen showed a robust asymptotic stability condition using a structured singular value (Chen & Latchman, 1994). That paper also discussed systems whose state variables include multiple delays.

Another typical approach is to derive a sufficient condition for stability using a Lyapunov-Krasovskii type functional (Kharitonov et al., 2003). The conditions are mostly given in the form of an LMI (Linear Matrix Inequality) (Mahmoud & Al-Muthairi, 1994)(Skelton et al., 1998). Furthermore, a stabilizing controller for a time-invariant uncertain plant has also been given in the form of an LMI (Huang & Nguang, 2007). However, Lyapunov-Krasovskii based approaches commonly face conservativeness issues. For example, if the Lyapunov functional is time independent (Verriest et al., 1993), the result tends to be very conservative. Thus, many different Lyapunov-Krasovskii functionals have been proposed to reduce the conservativeness of the controller (Yue et al., 2004)(Richard, 2003). Lyapunov-Krasovskii based methods deal with systems in the time domain, whereas robust control theory is usually described in the frequency domain.

Even though those two methods deal with the same object, their approaches seem very different. Zhang (Zhang et al., 2001) showed an interconnection between the two approaches by introducing the scaled small-gain theorem and a system named the comparison system. The paper also examined the conservativeness of several stability conditions formulated as LMIs and of a *μ*-synthesis based design, and concluded that the *μ*-synthesis based controller was less conservative than the other LMI based controllers. Details of this examination are shown in the next section.

In fact, conservativeness strongly depends on how much information about the plant is known. It is obvious that a delay-independent condition is more conservative than a delay-dependent condition. Generally, the more you know about the plant, the better the chance to improve the performance. For example, the time delay on a network is not completely uncertain; in other words, it is measurable. If the value of the delay is known and explicitly used for control, the performance can be improved. Meanwhile, in model based control, the modeling error between the plant model and the real plant can affect the performance and stability of the system. However, perfect modeling of the plant is very difficult, because the properties of the real plant may vary due to the variation of loads or deterioration by aging. Thus, modeling error is inevitable. The modeling error is considered to be a loop gain variation (multiplicative uncertainty). The error seriously affects the stability of the feedback system. In order to consider the adverse effect of the modeling error together with the time delay, we exploit *μ*-synthesis to avoid instability due to uncertainty.

This chapter proposes a model based controller design with *μ*-synthesis for a network based system with time varying delay and the plant model uncertainty. For the time delay, the explicit modeling is introduced, while uncertainty of the plant model is considered as a perturbation based on the robust control theory.

The notations in this chapter are as follows: **R** is the set of real numbers, **C** is the set of complex numbers, **<sup>R</sup>***n*×*<sup>m</sup>* is the set of all real *<sup>n</sup>* <sup>×</sup> *<sup>m</sup>* matrices, *In* is *<sup>n</sup>* <sup>×</sup> *<sup>n</sup>* identity matrix, *<sup>W</sup><sup>T</sup>* is the transpose of matrix *W*, *P* > 0 indicates that *P* is a symmetric and positive definite matrix,

�·�<sup>∞</sup> indicates *<sup>H</sup>*<sup>∞</sup> norm defined by �*G*�<sup>∞</sup> :<sup>=</sup> sup*ω*∈**<sup>R</sup>** *<sup>σ</sup>*¯[*G*(*jω*)] where *<sup>σ</sup>*¯(*M*) is the maximum singular value of complex matrix *M*. Let (*A*, *B*, *C*, *D*) be a minimal realization of *G*(*s*) with

$$G(s) = \begin{bmatrix} \frac{A \ \mid B \ \mid}{\mathbb{C} \ \mid D} \end{bmatrix}. \tag{1}$$

### **2. Related works and comparison on conservativeness**

### **2.1 Stability analysis approaches, eigen values, small gain and LMI**

Time-delay system attracts much interest of researchers and many studies have been conducted. In the manner of classic frequency domain control theory, the system seems to have infinite order, i.e. it has infinite poles, which makes it intractable problem. Since time delay is a source of instability of the system, stability analysis has been one of the main concerns. These studies roughly categorized into frequency domain based methods and time domain based methods. Frequency domain based methods include Nyquist criterion, Pade approximation and robust control theory such as *H*∞ control based approaches.

Meanwhile time domain based methods are mostly offered with conditions which are associated with Lyapunov-Krasovskii functional. The condition is formulated in terms of LMI, hence can be solved efficiently.

Consider a time-delay system in (2),

2 Will-be-set-by-IN-TECH

measuring time varying delays on the network, which eliminates the sensor time delay (the delay from a plant to a controller). The gain (P gain) of the controller is adjusted based on the amount of time delay to maintain stability of the system. Passivity based control using scattering transformation does not requires an upper bound of delay (Anderson & Spong, 1989); however, as noted in previous research (Yokokohji et al., 1999), the method tends to

One of the typical approaches is a method base on robust control theory. Leung proposed to deal with time delay as a perturbation and a stabilizing controller was obtained in the frame work of *μ*-synthesis (Leung et al., 1997). Chen showed a robust asymptotic stability condition by a structured singular value (Chen & Latchman, 1994). The paper also discussed on systems

Another typical approach is to derive a sufficient condition of stability using Lyapunov-Krasovskii type function (Kharitonov et al, 2003). The conditions are mostly shown in the form of LMI (Linear Matrix Inequality)(Mahmoud & AI-bluthairi, 1994)(Skelton et al., 1998). Furthermore, a stabilizing controller for a time invariant uncertain plant is also shown in the form of LMI (Huang & Nguang, 2007). However, Lyapunov-Krasovskii based approaches commonly face against conservative issues. For example, if the Lyapunov function is time independent (Verriest et al., 1993), the system tends to be very conservative. Thus, many different Lyapunov-Krasovskii functions are proposed to reduce the conservativeness of the controller (Yue et al., 2004)(Richard, 2003). Lyapunov-Krasovskii based methods deal with systems in the time domain, whereas robust control theory is

Even though those two methods deal with the same object, their approaches seem to be very different. Zhang (Zhang et al., 2001) showed an interconnection between those two approaches by introducing the scaled small gain theory and a system named comparison system. The paper also examined on conservativeness of several stability conditions formulated in LMI and *μ*-synthesis based design, which concluded that *μ*-synthesis based controller was less conservative than other LMI based controllers. Detail of this examination

In fact, conservativeness really depends how much information of the plant is known. It is obvious that delay-independent condition is more conservative than delay-dependent condition. Generally, the more you know the plant, you possibly gain the chance to improve. For example, time delay on a network is not completely uncertain, in other words it is measurable. If the value of delay is known and explicitly used for control, performance would be improved. Meanwhile, in the model based control, the modeling error between the plant model and the real plant can affect the performance and stability of the system. However, perfect modeling of the plant is very difficult, because the properties of the real plant may vary due to the variation of loads or deterioration by aging. Thus modeling error is inevitable. The modeling error is considered to be a loop gain variation (multiplicative uncertainty) . The error seriously affects the stability of the feedback system. In order to consider the adverse effect of the modeling error together with time delay, we exploited a *μ*-synthesis to avoid the

This chapter proposes a model based controller design with *μ*-synthesis for a network based system with time varying delay and the plant model uncertainty. For the time delay, the explicit modeling is introduced, while uncertainty of the plant model is considered as a

The notations in this chapter are as follows: **R** is the set of real numbers, **C** is the set of complex numbers, **<sup>R</sup>***n*×*<sup>m</sup>* is the set of all real *<sup>n</sup>* <sup>×</sup> *<sup>m</sup>* matrices, *In* is *<sup>n</sup>* <sup>×</sup> *<sup>n</sup>* identity matrix, *<sup>W</sup><sup>T</sup>* is the transpose of matrix *W*, *P* > 0 indicates that *P* is a symmetric and positive definite matrix,

be conservative and to consequently deteriorate overall performance.

whose state variables include multiple delays.

usually described in the frequency domain.

is shown in the next section.

instability due to uncertainty.

perturbation based on the robust control theory.

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + A\_d \mathbf{x}(t-\delta(t)) \tag{2}$$

where *<sup>x</sup>*(*t*) <sup>∈</sup> **<sup>R</sup>***<sup>n</sup>* which is a state variable, *<sup>A</sup>* <sup>∈</sup> **<sup>R</sup>***n*×*n*, *Ad* <sup>∈</sup> **<sup>R</sup>***n*×*<sup>n</sup>* are parameters of state space model of a plant and *δ*(*t*) corresponds to the delay on transmission such as network communication delay.
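To see how such a delay destabilizes (2), a minimal forward-Euler simulation can be sketched. The matrices below are those of the second order example introduced later in Section 2.5.1 with *ζ* = 0.5 (for which Table 1 lists the exact margin 1.2965); the step size, horizon and delay values are illustrative choices, not part of the original analysis.

```python
from collections import deque

# Forward-Euler integration of (2), x'(t) = A x(t) + A_d x(t - delta),
# for the second-order example of Section 2.5.1 with zeta = 0.5.
# Step size, horizon and the two delay values are illustrative choices.

def simulate(delta, zeta=0.5, dt=1e-3, t_end=60.0):
    A  = [[0.0, 1.0], [-1.0, -2.0 * zeta]]
    Ad = [[0.0, 0.0], [-1.1, 0.0]]
    lag = int(round(delta / dt))             # delay expressed in steps
    x = [1.0, 0.0]
    hist = deque([x[:]] * (lag + 1))         # x(t) = x(0) for t <= 0
    for _ in range(int(t_end / dt)):
        xd = hist.popleft()                  # delayed state x(t - delta)
        dx = [A[i][0]*x[0] + A[i][1]*x[1]
              + Ad[i][0]*xd[0] + Ad[i][1]*xd[1] for i in range(2)]
        x = [x[0] + dt*dx[0], x[1] + dt*dx[1]]
        hist.append(x[:])
        if x[0]*x[0] + x[1]*x[1] > 1e12:     # stop once divergence is clear
            break
    return (x[0]*x[0] + x[1]*x[1]) ** 0.5

# A delay below the margin decays; one well above it diverges.
final_stable   = simulate(0.5)
final_unstable = simulate(3.0)
```

A delay of 0.5 (below the margin) lets the state decay, while a delay of 3.0 (well above it) makes the same plant diverge, even though the delay-free system is well damped.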

Much interest in the literature has focused on finding less conservative conditions. Conservativeness is often measured by the admissible size of *δ*(*t*); that is, the larger the admissible delay, the better. Constraints on *δ*(*t*) therefore play an important role in this measure. Conservativeness strongly depends on the following constraints:

- whether an upper bound δ̄ on the delay *δ*(*t*) is assumed, as in (7);
- whether a bound *ν* on the rate of change *δ̇*(*t*) is assumed, as in (13).

As for the first constraint, a stability condition is accordingly referred to as delay-dependent or delay-independent. A delay-independent condition allows the amount of time delay to be infinite.

#### **2.2 Delay independent stability analysis in time domain**

Verriest (Verriest et al., 1993) showed that the system in (2) is uniformly asymptotically stable if there exist symmetric positive definite matrices *P* and *Q* such that

$$
\begin{bmatrix} PA + A^T P + Q & PA\_d \\ A\_d^T P & -Q \end{bmatrix} < 0. \tag{3}
$$

The condition (3) is a sufficient condition for delay-independent case. One may notice that the matrix form is similar to that of the bounded real lemma.
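For a scalar plant, condition (3) can be checked by hand; the sketch below uses illustrative values (*a* = −2, *a*<sub>*d*</sub> = 1, *P* = 1, *Q* = 2, all chosen here for the example, not taken from the chapter) and also confirms consistency with the small gain interpretation discussed after (4).

```python
# Scalar illustration of the delay-independent condition (3):
# for x'(t) = a x(t) + a_d x(t - delta) with a = -2, a_d = 1 (illustrative
# values), the choice P = 1, Q = 2 makes the matrix in (3) negative
# definite, so stability holds for ANY delay.

a, a_d, P, Q = -2.0, 1.0, 1.0, 2.0

m11, m12, m22 = 2*P*a + Q, P*a_d, -Q
# a symmetric 2x2 matrix is negative definite iff m11 < 0 and det > 0
lmi_holds = (m11 < 0.0) and (m11*m22 - m12*m12 > 0.0)

# Consistency with the small gain view: G(s) = a_d/(s - a) = 1/(s + 2)
# has H-infinity norm |a_d / a| = 0.5 < 1 (peak gain at w = 0).
h_inf = abs(a_d / a)
```

Here the LMI holds and, consistently, the small gain condition ‖*G*‖<sub>∞</sub> = 0.5 < 1 is satisfied.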

Model Based μ-Synthesis Controller Design for Time-Varying Delay System 409

Fig. 1. Interconnection of a plant and time delay

**Lemma 1** *(Bounded real lemma)*: Let *G*(*s*) be the transfer function of a system, i.e. *G*(*s*) := *C*(*sI* − *A*)<sup>−1</sup>*B*. Then ‖*G*(*s*)‖<sub>∞</sub> < *γ* if and only if there exists a matrix *P* > 0 such that

$$
\begin{bmatrix} PA + A^T P + \frac{C^T C}{\gamma} & PB \\ B^T P & -\gamma I\_n \end{bmatrix} < 0. \tag{4}
$$

Suppose (*A*, *B*, *C*, *D*) of system (2) is (*A*, *A*<sub>*d*</sub>, *I*<sub>*n*</sub>, 0) and let *G*(*s*) = (*sI*<sub>*n*</sub> − *A*)<sup>−1</sup>*A*<sub>*d*</sub> be the transfer function of the system; then with *γ* = 1 and *Q* = *I*<sub>*n*</sub>, (3) and (4) are identical. This fact implies that a system with time delay is stable regardless of the value of the time delay if ‖*G*(*s*)‖<sub>∞</sub> < 1. This condition corresponds to the small gain theorem.

Fig. 1 shows an interconnection of the system *G*(*s*) and a delay block Δ(*s*), where *u*(*t*) = *y*(*t* − *δ*(*t*)). In the figure, Δ(*s*) is the time delay block, whose *H*<sub>∞</sub> norm ‖Δ(*s*)‖<sub>∞</sub> is induced by (5).

$$\|\Delta(s)\|\_{\infty} = \sup\_{y \in L\_2} \frac{\sqrt{\int\_0^\infty u^T(t)u(t)\,dt}}{\sqrt{\int\_0^\infty y^T(t)y(t)\,dt}} = \sup\_{y \in L\_2} \frac{\|u\|\_2}{\|y\|\_2} \tag{5}$$

Because the input energy to the delay block is the same as its output energy, the *H*<sub>∞</sub> norm of Δ(*s*) is equal to 1, i.e. ‖Δ(*s*)‖<sub>∞</sub> = 1. Hence, by the small gain theorem, the interconnected system is stable whenever ‖*G*(*s*)Δ(*s*)‖<sub>∞</sub> < 1. If ‖*G*(*s*)Δ(*s*)‖<sub>∞</sub> > 1, the system may become unstable once the delay exceeds some limit; however, if the delay *δ*(*t*) is bounded by a maximum value δ̄, the system in (2) can be stable even if ‖*G*(*s*)Δ(*s*)‖<sub>∞</sub> > 1. Conservativeness is therefore often evaluated by the upper bound δ̄ for a given system: a condition which guarantees a larger δ̄ is regarded as less conservative.
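The small gain test ‖*G*(*s*)‖<sub>∞</sub> < 1 can be evaluated numerically by sweeping *σ̄*[*G*(*jω*)] over a frequency grid. A sketch for *G*(*s*) = (*sI* − *A*)<sup>−1</sup>*A*<sub>*d*</sub> with the second order matrices used later in (17) (*ζ* = 0.5 here; the grid is an arbitrary choice): the norm exceeds 1, so delay-independent stability cannot be concluded and only a bounded delay can be tolerated, consistent with the finite bounds in Table 1.

```python
# Frequency sweep of sigma_max[G(jw)] for G(s) = (s I - A)^{-1} A_d
# with the matrices of (17), zeta = 0.5. ||G||_inf > 1, so the small
# gain / delay-independent condition fails for this plant.

def sigma_max(M):
    # largest singular value of a 2x2 complex matrix via its Gram matrix
    (a, b), (c, d) = M
    h11 = abs(a)**2 + abs(c)**2
    h22 = abs(b)**2 + abs(d)**2
    h12 = a.conjugate()*b + c.conjugate()*d
    tr, det = h11 + h22, h11*h22 - abs(h12)**2
    return ((tr + max(tr*tr - 4.0*det, 0.0)**0.5) / 2.0) ** 0.5

def G(w, zeta=0.5):
    s = 1j * w
    det = s*(s + 2.0*zeta) + 1.0        # det(s I - A)
    inv01, inv11 = 1.0/det, s/det       # second column of (s I - A)^{-1}
    # right-multiplying by A_d = [[0, 0], [-1.1, 0]] keeps only that column
    return [[-1.1*inv01, 0.0], [-1.1*inv11, 0.0]]

grid = [10**(-2 + 3*k/200) for k in range(201)]   # w from 0.01 to 10
h_inf = max(sigma_max(G(w)) for w in grid)        # peaks around 1.6
```

The sweep gives ‖*G*‖<sub>∞</sub> ≈ 1.6 (and already *σ̄*[*G*(0)] = 1.1), so a finite delay bound δ̄ is unavoidable here.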

#### **2.3 Delay dependent stability analysis with Lyapunov-Krasovskii functionals**

A delay-independent stability condition is generally very conservative, because it allows infinite time delay and requires the gain of the system *G*(*s*) to be small. However, the given system does not always satisfy ‖*G*(*s*)‖<sub>∞</sub> < 1. For a system whose *H*<sub>∞</sub> norm exceeds one, there exists an upper bound on the delay. Generally, an upper bound of the delay is given and stability conditions for the system under that bound are derived. There have been many studies on Lyapunov-Krasovskii based analysis for time-varying delay systems, refining the form of the Lyapunov-Krasovskii functional to reduce conservativeness. The following theorems are LMI based stability conditions for a system with time-varying delay.

**Theorem 1** *(Li & de Souza, 1996)*:

The system given in (6) with time-varying delay is asymptotically stable for any delay *δ*(*t*) satisfying condition (7) if there exist a matrix *X* > 0 and constants *β*<sub>1</sub> > 0 and *β*<sub>2</sub> > 0 satisfying (8),


$$
\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + A\_d \mathbf{x}(t - \delta(t)) \tag{6}
$$

$$0 \le \delta(t) < \overline{\delta} \tag{7}$$

$$
\begin{bmatrix}
\Omega\_1 & XA\_dA & XA\_dA\_d \\
\ast & \beta\_1^{-1} X & 0 \\
\ast & \ast & \beta\_2^{-1} X
\end{bmatrix} > 0 \tag{8}
$$

where

$$\Omega\_1 = -\bar{\delta}^{-1} \left[ (A + A\_d)^T X + X(A + A\_d) \right] - (\beta\_1^{-1} + \beta\_2^{-1}) X. \tag{9}$$

**Theorem 2** *(Park, 1999)*:

The system given in (6) with time-varying delay is asymptotically stable for any delay *δ*(*t*) satisfying condition (7) if there exist matrices *P* > 0, *Q* > 0, *V* > 0 and *W* such that

$$
\begin{bmatrix}
\Omega\_2 & -W^T A\_d & A^T A\_d^T V & \Theta \\
\ast & -Q & A\_d^T A\_d^T V & 0 \\
\ast & \ast & -V & 0 \\
\ast & \ast & \ast & -V
\end{bmatrix} > 0 \tag{10}
$$

where

$$
\Omega\_2 = (A + A\_d)^T P + P(A + A\_d) + W^T A\_d + A\_d^T W + Q \tag{11}
$$

$$
\Theta = \overline{\delta}(W^T + P). \tag{12}
$$

**Theorem 3** *(Tang & Liu, 2008)*:

The system given in (6) with time-varying delay satisfying (7) is asymptotically stable for any delay *δ*(*t*) which satisfies (13), if there exist matrices *P* > 0, *Q* > 0, *Z* > 0, *Y* and *W* such that the following linear matrix inequality (LMI) holds:

$$0 \le \delta(t) < \overline{\delta}, \; \dot{\delta}(t) \le \nu < 1 \tag{13}$$

$$
\begin{bmatrix}
\Omega\_3 & -Y + PA\_d + W^T & -Y & \overline{\delta} A^T Z \\
\ast & -W - W^T - (1 - \nu)Q & -W & \overline{\delta} A\_d^T Z \\
\ast & \ast & -Z & 0 \\
\ast & \ast & \ast & -Z
\end{bmatrix} < 0 \tag{14}
$$

where

$$
\Omega\_3 = PA + A^T P + Y + Y^T + Q.\tag{15}
$$

In a packet based networked system, the condition *δ̇*(*t*) < 1 implies that a preceding packet is not overtaken by the successive packet.
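The reason is that the data available at time *t* was sent at *s*(*t*) = *t* − *δ*(*t*), and packets arrive in order exactly when *s*(*t*) is non-decreasing, which is what *δ̇*(*t*) < 1 guarantees. A small numerical check with two illustrative delay profiles (both chosen here only for the demonstration):

```python
import math

# Packets arrive in order iff s(t) = t - delta(t) is non-decreasing,
# i.e. iff delta'(t) < 1. Two illustrative delay profiles:

def sent_times(delta, t_end=10.0, dt=1e-3):
    return [k*dt - delta(k*dt) for k in range(int(t_end / dt))]

ok  = sent_times(lambda t: 0.5 + 0.4*math.sin(t))  # |delta'| <= 0.4 < 1
bad = sent_times(lambda t: 2.0 + 1.5*math.sin(t))  # delta' reaches 1.5

in_order  = all(b >= a for a, b in zip(ok, ok[1:]))   # no reordering
reordered = any(b < a for a, b in zip(bad, bad[1:]))  # overtaking occurs
```

The first profile satisfies (13) and never reorders; the second violates *δ̇*(*t*) < 1 and produces overtaking.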

#### **2.4 Delay dependent stability analysis in frequency domain**

In the frequency domain, the Nyquist criterion gives a necessary and sufficient condition, and the eigenvalue based analysis described below is another option.

**Lemma 2** The system (2) is asymptotically stable for all *δ* ∈ [0, δ̄] if and only if *ψ*(*jω*, *δ*) is nonsingular for all *ω* > 0, where *ψ*(*s*, *δ*) := (*sI*<sub>*n*</sub> − *A* − *A*<sub>*d*</sub>*e*<sup>−*sδ*</sup>).

**Corollary 1** The system (2) is asymptotically stable for all *δ* ∈ [0, δ̄] if and only if

$$\det[I\_n - G(j\omega)\Phi(j\delta\omega)] \neq 0, \; \forall \omega \ge 0, \tag{16}$$

where *G*(*s*) = *F*(*sI*<sub>*n*</sub> − *Ā*)<sup>−1</sup>*H* with *A*<sub>*d*</sub> = *HF*, *Ā* := *A* + *A*<sub>*d*</sub>, and Φ(*δs*) = *φ*(*δs*)*I*<sub>*q*</sub>, *φ*(*δs*) = *e*<sup>−*δs*</sup> − 1. Lemma 2 and Corollary 1 require solving a transcendental equation. Thus, another set Δ(*jω*) which covers Φ(*δs*) is chosen; this selection seriously affects conservativeness. Zhang proposed a much less conservative method using a modified Padé approximation, which gives an upper bound δ̄ very close to that of the Nyquist criterion.

The eigenvalue analysis, including the Padé based method, is applicable only to time-invariant delay. For time-varying delay, stability analysis with robust control based methods has been proposed.

The robust control based method regards the set Δ(*jω*) as a frequency dependent worst case gain (Leung et al., 1997). In this method, a weighting function is chosen to cover the gain of Φ(*δs*). Fig. 2 illustrates the block diagram of the robust control method. Fig. 2 (a) shows a system with a single delay, which can be converted to Fig. 2 (b), i.e. Φ(*δs*) = *φ*(*δs*) = *e*<sup>−*δs*</sup> − 1. Fig. 2 (c) represents multiplicative uncertainty with an associated weighting function *W*<sub>*d*</sub>(*s*), where Δ<sub>*u*</sub> is a unit disk (‖Δ<sub>*u*</sub>‖<sub>∞</sub> = 1).

Fig. 2. Robust control based method

*W*<sub>*d*</sub>(*s*) is chosen such that its gain covers that of *φ*(*δs*), i.e. |*φ*(*jδω*)| < |*W*<sub>*d*</sub>(*jω*)| for all *ω* in the frequency range of interest.

Fig. 3 shows the Bode plot of *φ*(*δs*) = *e*<sup>−*δs*</sup> − 1, where (a) shows the plot for *δ* = 0.1, (b) for *δ* = 1 and (c) for *δ* = 10. As the figures show, the Bode plot shifts along the frequency axis as *δ* changes: it shifts toward low frequency as *δ* becomes large.

Fig. 3. Bode plot of *e*<sup>−*δs*</sup> − 1: (a) *δ* = 0.1, (b) *δ* = 1, (c) *δ* = 10
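The shift seen in Fig. 3 has a simple closed form: |*e*<sup>−*jωδ*</sup> − 1| = 2|sin(*ωδ*/2)| depends on the product *ωδ* only, so changing *δ* only translates the gain curve along a logarithmic frequency axis. A quick numerical confirmation (sample points are arbitrary):

```python
import math

# Gain of phi(delta*s) = e^{-j w delta} - 1:
# |e^{-j theta} - 1| = sqrt(2 - 2 cos theta) = 2 |sin(theta / 2)|,
# a function of w*delta only, hence the pure frequency shift in Fig. 3.

def phi_gain(delta, w):
    return abs(complex(math.cos(w*delta) - 1.0, -math.sin(w*delta)))

identity_err = abs(phi_gain(1.0, 2.0) - 2.0*abs(math.sin(1.0)))
# same product w*delta at three different delays => same gain
shift_err = abs(phi_gain(0.1, 10.0) - phi_gain(10.0, 0.1))
low_freq = phi_gain(1.0, 1e-4)   # ~ delta*w for small w (slope +20 dB/dec)
```

At low frequency the gain grows like *δω* (a +20 dB/decade slope), and it saturates at 2 at high frequency, matching the plots.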

The robust control method gives a sufficient condition based on the small gain theorem by choosing a unit disk with a weighting function *W*<sub>*d*</sub>(*s*) for the set Δ(*jω*).

### **2.5 Conservativeness examination of LMI based methods and the robust control method**

We examined the conservativeness of the LMI based conditions of Theorems 1, 2 and 3 and of the previously introduced robust control based method.

### **2.5.1 Numerical example**

Suppose a second order LTI system whose parameters are

$$A = \begin{bmatrix} 0 & 1 \\ -1 & -2\zeta \end{bmatrix}, \qquad A\_d = \begin{bmatrix} 0 & 0 \\ -1.1 & 0 \end{bmatrix} \tag{17}$$

where *ζ* corresponds to a damping factor.



| *ζ* | Nyquist | Li'96 | Park'99 | Tang0 | Tang1 | Robust |
|------|----------|---------|---------|---------|---------|---------|
| 0.1  | 0.1838   | 0.1818  | 0.1834  | 0.1834  | 0.1818  | 0.1809  |
| 0.3  | 0.6096   | 0.5455  | 0.5933  | 0.5933  | 0.5455  | 0.5289  |
| 0.5  | 1.2965   | 0.9091  | 1.1927  | 1.1927  | 0.9091  | 0.8690  |
| 0.7  | 2.9816   | 1.2727  | 2.4815  | 2.4815  | 1.2727  | 1.4210  |
| 1.0  | 7.9927   | 1.8182  | 6.0302  | 6.0302  | 1.8182  | 3.2000  |
| 10.0 | 117.0356 | 18.1818 | 85.0562 | 85.0562 | 18.1818 | 23.0000 |

Table 1. Upper bound of *δ* (δ̄)

By using YALMIP (Lofberg, 2005) with Matlab for the problem modeling and CSDP (CSDP, 1999) as the LMI solver, we calculated the maximum value of *δ* by iteratively solving LMI feasibility problems.
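The iteration can be organized as a bisection over the bound δ̄, with each candidate accepted or rejected by an LMI feasibility check. The sketch below abstracts the solver call away: `feasible` is a placeholder oracle, not a YALMIP/CSDP interface, and the mock margin 0.8690 (the Robust entry for *ζ* = 0.5 in Table 1) is used only to exercise the loop.

```python
# Bisection over the delay bound delta_bar: each LMI feasibility check
# halves the search interval. `feasible` abstracts the solver call;
# here a mock oracle accepts any bound up to 0.8690.

def max_delay_bound(feasible, hi=100.0, tol=1e-4):
    lo = 0.0                       # delta_bar = 0 is always feasible
    if feasible(hi):
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            lo = mid               # keep the feasible side
        else:
            hi = mid
    return lo

delta_bar = max_delay_bound(lambda d: d <= 0.8690)
```

With a real solver, the oracle would build the LMI of the chosen theorem for the candidate δ̄ and return whether it is feasible.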

Table 1 shows the maximum value δ̄, which measures the conservativeness of the conditions. In the table, Li'96 and Park'99 are obtained from Theorems 1 and 2 respectively. Tang0 is the result for *ν* = 0 and Tang1 is that for *ν* = 1, where *ν* is defined in (13).
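For the example (17), the Nyquist column of Table 1 can be reproduced analytically: the characteristic function is *ψ*(*s*, *δ*) = *s*² + 2*ζs* + 1 + 1.1*e*<sup>−*sδ*</sup> (cf. Lemma 2), a crossing *s* = *jω* requires |1 − *ω*² + 2*ζjω*| = 1.1, and δ̄ is the smallest delay realizing the crossing phase. A sketch (the closed-form quadratic below is derived here, not taken from the chapter):

```python
import cmath, math

# Delay margin of (17) via the first imaginary-axis crossing of
# psi(s, delta) = s^2 + 2 zeta s + 1 + 1.1 e^{-s delta}.

def nyquist_delay_bound(zeta):
    # |1 - w^2 + 2 zeta j w|^2 = 1.21 with u = w^2 gives
    # u^2 + (4 zeta^2 - 2) u - 0.21 = 0; take the positive root.
    b = 4.0*zeta*zeta - 2.0
    u = (-b + math.sqrt(b*b + 0.84)) / 2.0
    w = math.sqrt(u)
    # phase condition: e^{-j w delta} = -(1 - u + 2 zeta j w) / 1.1
    theta = cmath.phase(-complex(1.0 - u, 2.0*zeta*w) / 1.1)
    return ((-theta) % (2.0*math.pi)) / w   # smallest positive delay
```

Evaluating this at *ζ* = 0.1, 0.3, 0.5 and 1.0 reproduces the Nyquist column (0.1838, 0.6096, 1.2965, 7.99) to within rounding.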

Fig. 4. Bode plots of *W*<sub>*d*</sub>(*s*) (blue line) and *e*<sup>−*Ts*</sup> − 1 (red dotted line)

Robust in Table 1 shows the results of the robust control method, which regards the varying delay as a perturbation, where the following weighting function was used.

$$W\_d(s) = \frac{2s(T^2 s^2/4 + (T + T/4)s + 1)}{(s + 2/T)(T^2 s^2/4 + Ts + 1)} \tag{18}$$

 


Fig. 4 shows the Bode plots of *W*<sub>*d*</sub>(*s*) and *e*<sup>−*Ts*</sup> − 1, where *T* = 1.
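The covering shown in Fig. 4 can be checked numerically: below, the gain of the weight (18) with *T* = 1 is compared against |*e*<sup>−*jωT*</sup> − 1| on a grid over the band of Fig. 4 (the grid itself is an illustrative choice; the covering is only required over the frequency range of interest).

```python
import cmath

# Check that the weight (18) with T = 1 overbounds the gain of
# e^{-T s} - 1 over the band of Fig. 4.

T = 1.0

def Wd(s):
    return (2*s*(T*T*s*s/4 + (T + T/4)*s + 1)
            / ((s + 2/T)*(T*T*s*s/4 + T*s + 1)))

covered = all(
    abs(Wd(1j*w)) > abs(cmath.exp(-1j*w*T) - 1.0)
    for w in (10**(-2 + 3*k/200) for k in range(201))  # w from 0.01 to 10
)
```

The margin is tight at low frequency (both gains grow like *ω*) and near the odd multiples of π/*T* (where |*e*<sup>−*jωT*</sup> − 1| reaches 2), which is exactly what Fig. 4 shows.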

Notice that the results of Li'96 are exactly the same as those of Tang1, and those of Park'99 the same as those of Tang0; these two pairs are thus equivalent conditions. In fact, *ν* = 0 corresponds to a constant time delay, because *ν* bounds *δ̇*(*t*) in (13). The robust control results lie between Tang0 and Tang1, i.e. between *ν* = 0 and *ν* = 1. Since the perturbation assumed by robust control includes the case *ν* = 1, these results imply that the robust control approach appears to be less conservative.

So far, Lyapunov-Krasovskii controllers have mostly been designed as (memoryless) static feedback of the plant state (Jiang & Han, 2005). From the performance point of view, static state feedback often performs worse than a dynamic controller such as an *H*<sub>∞</sub> based controller.

### **2.5.2 Examination on LMI based methods and *μ*-synthesis**

Zhang also examined the conservativeness of stability conditions formulated in LMI form and of robust control (Zhang et al., 2001); both delay-independent and delay-dependent conditions were discussed. In the examination, the system (2) with the parameters in (19) and (20) was used, motivated by the dynamics of machining chatter (Tlusty, 1985).

$$A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -(10.0 + K) & 10.0 & 1 & 0 \\ 5.0 & -15.0 & 0 & -0.25 \end{bmatrix} \tag{19}$$

Fig. 5. Delay margin versus K.

8 Will-be-set-by-IN-TECH

<sup>10</sup>−1 <sup>10</sup><sup>0</sup> <sup>101</sup> <sup>10</sup><sup>2</sup> −30

Robust in Table 1 shows the results of the robust control method which regards the varying

*Wd*(*s*) = <sup>2</sup>*s*(*T*2*s*2/4 + (*<sup>T</sup>* <sup>+</sup> *<sup>T</sup>*/4)*<sup>s</sup>* <sup>+</sup> <sup>1</sup>)

Notice that the results of Li'96 are exactly same as Tang1 and Park'99 are the same as Tang1. This implies these two pairs are equivalent conditions. In fact, *ν* = 0 corresponds that time

between *ν* = 0 and *ν* = 1. In fact, the perturbation assumed by robust control shall include the case *ν* = 1, thus these results imply that the robust control approach seems to be less

So far, Lyapunov-Krasovskii controllers are mostly designed with (memory less) static feedback of the plant state (Jiang & Han, 2005). From the performance point of view, the static state feedback performs often worse than the dynamic controller such as *H*∞ based controllers.

Zhang also examined conservativeness on stability conditions formulated in LMI form and robust control (Zhang et al., 2001), both delay independent and dependent condition were also discussed. In the examination, a system in (2) with parameters in (19) and (20) was used,

> 0 010 0 001 −(10.0 + *K*) 10.0 1 0 5.0 −15.0 0 −0.25

(*<sup>s</sup>* <sup>+</sup> 2/*T*)(*T*2*s*2/4 <sup>+</sup> *Ts* <sup>+</sup> <sup>1</sup>) (18)

*δ*(*t*). Robust control results lie between Tang0 and Tang1, i.e.

⎤ ⎥ ⎥ ⎦

(19)

Fig. 4. Bode plots of *Wd*(*s*) (blue line) and *<sup>e</sup>*−*Ts* <sup>−</sup> 1 (red dotted line)

Fig. 4 shows the bode plots of *Wd*(*s*) and *<sup>e</sup>*−*Ts* <sup>−</sup> 1 where *<sup>T</sup>* <sup>=</sup> 1.

**2.5.2 Examination on LMI based method and** *μ***-synthesis**

*A* =

which was motivated by the dynamics of machining chatter (Tlusty, 1985).

⎡ ⎢ ⎢ ⎣

delay as a perturbation, where the following weighting function was used.

Frequency (rad/s)

−25

delay is constant because *ν* = ˙

conservative.

−20

−15

−10

Gain (dB)

−5

0

5

10

15
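As a numerical sanity check of the covering property shown in Fig. 4, the following sketch (assuming only `numpy`) evaluates the weight (18) on a frequency grid and confirms that |*Wd*(*jω*)| stays above the gain of the delay uncertainty *e*<sup>−*jωT*</sup> − 1 for *T* = 1:

```python
import numpy as np

# Frequency grid (rad/s); T = 1 as in Fig. 4.
T = 1.0
w = np.logspace(-1.3, 3, 2000)
s = 1j * w

# W_d(s) from (18)
num = 2 * s * (T**2 * s**2 / 4 + (T + T / 4) * s + 1)
den = (s + 2 / T) * (T**2 * s**2 / 4 + T * s + 1)
wd_mag = np.abs(num / den)

# Gain of the delay uncertainty e^{-Ts} - 1
delay_mag = np.abs(np.exp(-T * s) - 1)

# W_d upper-bounds the delay uncertainty at every grid frequency
assert np.all(wd_mag > delay_mag)
```

The margin is tight near *ωT* ≈ 3, which is exactly what the crossing-free gap between the two curves in Fig. 4 illustrates.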

$$A\_d = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ K & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \tag{20}$$

The paper examined conservativeness with a *μ*-synthesis based method, which is a representative method of robust control. Specifically, it calculates the structured singular value *μ***Δr**(*G*(*jω*)) defined in (21) with respect to a block structure **Δr** in (22).

$$\mu_{\mathbf{\Delta}_{\mathbf{r}}}(G(j\omega)) = \left[ \min \{ \bar{\sigma}(\Delta) : \det(I - G\Delta) = 0, \Delta \in \mathbf{\Delta}_{\mathbf{r}} \} \right]^{-1} \tag{21}$$

$$\mathbf{\Delta}_{\mathbf{r}} := \left\{ \text{diag} [\lambda_1 I_{n_1}, \lambda_2 I_{n_2}] : \lambda_i \in \mathbb{C} \right\} \tag{22}$$

Because the calculation of *μ* is NP-hard (non-deterministic polynomial-time hard), its upper bound with the *D* scales defined in (23) and (24) was used.

$$\sup_{\omega \in \mathbf{R}} \inf_{D \in \mathbf{D}_{\mathbf{r}}} \bar{\sigma} \left( DG(j\omega)D^{-1} \right) < 1 \tag{23}$$

$$\mathbf{D}_{\mathbf{r}} := \left\{ \text{diag} [D_1, D_2] \mid D_i \in \mathbb{C}^{n \times n},\ D_i = D_i^{*} > 0 \right\} \tag{24}$$
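The *D*-scaled bound in (23) can be illustrated for the simplest case of two scalar uncertainty blocks, where the scaling reduces to *D* = diag(*d*, 1) with *d* > 0 and the infimum becomes a one-dimensional search. This is only an illustrative sketch with a crude log-grid search; production tools solve this minimization by convex programming:

```python
import numpy as np

def scaled_bound(G, ds=np.logspace(-4, 4, 801)):
    """Upper bound (23) at one frequency for a 2x2 matrix G with two
    scalar blocks: inf over D = diag(d, 1), d > 0, of sigma_max(D G D^-1).
    A log-spaced grid search stands in for the convex minimization."""
    vals = [np.linalg.svd(np.diag([d, 1.0]) @ G @ np.diag([1.0 / d, 1.0]),
                          compute_uv=False)[0] for d in ds]
    return min(vals)

# The scaling cannot hurt: D = I is in the grid, where the value equals
# sigma_max(G), so the minimum over the grid is at most that value.
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
assert scaled_bound(G) <= np.linalg.svd(G, compute_uv=False)[0] + 1e-9
```

Sweeping `G = G(jω)` over a frequency grid and taking the supremum of `scaled_bound` gives exactly the frequency-dependent *D* scaling that the text contrasts with the constant-*D* LMI conditions.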

The analytical results are shown in Fig. 5 (Zhang et al., 2001). In the figure, plot (1) shows the case of the Nyquist criterion, (2) shows the *μ* upper bound with frequency-dependent *D* scaling, (3) shows the upper bound by Theorem 2, and (4) shows the upper bound by Theorem 1.

Model Based μ-Synthesis Controller Design for Time-Varying Delay System 415

The results show that the LMI based conditions are more conservative than the *D*-scaled *μ* based method. The reason is that the scale matrix *D* in the *μ* method is a frequency-dependent function obtained by frequency sweeping of *G*(*jω*), whereas the LMI formed condition corresponds to fixing the *D* scale at a real constant value. Constant *D* scaling is well known to provide a more conservative result than frequency-dependent *D* scaling. This result revealed that Lyapunov-Krasovskii based conditions formulated as LMIs may suffer from conservativeness, and their robust margin may become smaller than that of a *μ* based controller. Through the investigations stated above, we decided to exploit a *μ*-synthesis based controller design, because a *μ*-synthesis based controller is designed on the basis of robust control theory. In the next section, we describe a model based *μ* controller design for a system with time delay and model uncertainty.

Fig. 6. Basic structure of network based system

Fig. 7. Time varying delay as a perturbation

### **3. Model based** *μ***-synthesis controller**

Fig. 6 shows the basic structure of a network based system, where *C*(*s*) is a controller and *Gm*(*s*) is a remote plant. The block Δ*d* is a delay factor which represents the transmission delay on the network; it represents the round trip delay, which accumulates the forward and backward delays. The time varying delay *δ*(*t*) is bounded as 0 ≤ *δ*(*t*) ≤ *δ̄*. If the time delay *δ*(*t*) is a constant value *δc*, the block can be written as Δ*d* = *e*<sup>−*δc s*</sup> in the frequency domain; however, *e*<sup>−*δs*</sup> is not an accurate expression for a time varying delay *δ*. As described in the previous section, Leung proposed to regard the time varying delay as an uncertainty, representing the delay as a perturbation associated with a weighting function (Leung et al., 1997). In particular, the time delay factor can be depicted as shown in Fig. 7, where Δ*u* is unknown but assured to be stable with ‖Δ*u*(*s*)‖∞ ≤ 1, and *Wd*(*s*) is a weighting function which satisfies |*e*<sup>−*δ̄s*</sup> − 1| < |*Wd*(*jω*)|, ∀*ω* ∈ **R**, i.e. *Wd*(*s*) covers the upper bound of the gain of *e*<sup>−*δ̄s*</sup> − 1. Applying the small gain theorem, and noting that ‖*Wd*(*s*)Δ*u*(*s*)‖∞ ≤ ‖*Wd*(*s*)‖∞, the system is stable if the condition (25) holds.

$$\|C(s)G_m(s)(1 + W_d(s))\|_{\infty} < 1 \tag{25}$$

Condition (25) can be rewritten as (26).

$$\|C(s)\|_{\infty} < \frac{1}{\|G_m(s)(1 + W_d(s))\|_{\infty}} \tag{26}$$

Fig. 8. Model based control structure

Fig. 9. Overall structure with perturbations

(26) implies that the maximum gain of *C*(*s*) is limited by the norms of *Gm*(*s*) and (1 + *Wd*(*s*)). If the gain of *C*(*s*) is small, even the norm of the sensitivity function without delay cannot become small, as shown in (27).

$$\|S(s)\|_{\infty} = \left\| \frac{1}{1 + C(s)G_m(s)} \right\|_{\infty} \tag{27}$$

In general, the norm of the sensitivity function directly represents the performance of the system, such as the servo response and disturbance attenuation. The restriction due to the bounded norm of the controller may degrade the performance of the system. In order to avoid this, we propose a unification of model based control with *μ*-synthesis robust control design.

Fig. 8 shows the proposed model based control structure, which includes models of the time delay and of the remote plant, where *G̃m*(*s*) is a model of the plant. In the real implementation, a model of the time delay is also employed, which exactly measures the value of the time delay. The measurement of the delay can be implemented by time-stamped packets and synchronization of the local and remote nodes (Uchimura et al., 2007). By introducing the plant model, the upper bound restriction on *C*(*s*) is relaxed to (28) if the model *G̃m*(*s*) is close to *Gm*(*s*), i.e. if ‖*G̃m*(*s*) − *Gm*(*s*)‖∞ is smaller than ‖*Gm*(*s*)‖∞.

$$\|C(s)\|_{\infty} < \frac{1}{\left\| (G_m(s) - \tilde{G}_m(s))(1 + W_d(s)) \right\|_{\infty}} \tag{28}$$

In fact, perfect modeling of *Gm*(*s*) is impossible, and the properties of the remote plant may vary in time due to various factors such as aging or variation of loads. Therefore we need to admit a difference between *G̃m*(*s*) and *Gm*(*s*), and deal with it as a perturbation of the remote plant *Gm*(*s*). Hence another perturbation factor, associated with a weighting function *Wm*(*s*), is added. Additionally, a further perturbation factor with *Wp*(*s*) after the remote plant is added to improve the performance of the system. *Wp*<sup>−1</sup>(*s*) works to restrict the upper bound of the norm of the sensitivity function *S*(*s*); in the experiment described later, the gain of *Wp*(*s*) is large in the low frequency range.

Fig. 9 shows the overall structure with perturbations of the proposed control system. There are three perturbations in the system, and each perturbation has no correlation with the others. Therefore we applied *μ*-synthesis to design the controller *C*(*s*). As previously mentioned, the value of *μ* is hard to calculate, thus we also employed the frequency dependent scale *D*(*jω*) to calculate the upper bound *μ***Dr** as follows.

$$\mu_{\mathbf{D}_{\mathbf{r}}} = \sup_{\omega \in \mathbf{R}} \inf_{D \in \mathbf{D}_{\mathbf{r}}} \bar{\sigma} \left( D(j\omega) P_m(j\omega) D(j\omega)^{-1} \right) \tag{29}$$

Since there exist three perturbations in the proposed method, the class **Dr** is defined in (30).

$$\mathbf{D}_{\mathbf{r}} := \{ \text{diag}[d_1, d_2, d_3] \mid d_i \in \mathbb{C} \} \tag{30}$$

*Pm*(*s*) in (29) is the transfer function matrix of the augmented plant with three inputs and three outputs. The plant *Pm*(*s*) includes the three weighting functions *Wd*(*s*), *Wm*(*s*), *Wp*(*s*) and the controller *C*(*s*). Fig. 10 shows the augmented plant *Pm*(*s*), where the area surrounded by the dotted line corresponds to *Pm*(*s*); it can be simplified to the block diagram shown in Fig. 11.

Fig. 10. Augmented plant *Pm*

Fig. 11. Simplified block diagram of the augmented plant *Pm*(*s*)

Because finding *D*(*s*) and *C*(*s*) simultaneously is difficult, the so-called *D-K* iteration is used to find an adequate combination of *D*(*s*) and *C*(*s*).
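The alternation behind the *D-K* iteration can be sketched with a toy problem: a random 3×3 matrix stands in for *Pm*(*jω*) at a single frequency, a scalar gain *k* stands in for the *H*∞ synthesis step, and positive real scalings are used (which suffice for the bound). This only shows the coordinate-descent structure and the monotone decrease of the upper bound; it is not a substitute for the Robust Control Toolbox machinery:

```python
import numpy as np

def sigma_max(M):
    return np.linalg.svd(M, compute_uv=False)[0]

# Toy "closed loop": a fixed 3x3 matrix depending on a scalar gain k.
rng = np.random.default_rng(2)
P, Q = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
M = lambda k: P - k * Q

def d_step(Mk, grid=np.logspace(-3, 3, 61)):
    """Fix the closed loop, pick D = diag(d1, d2, 1) minimizing (29)."""
    best = (np.inf, None)
    for d1 in grid:
        for d2 in grid:
            D = np.diag([d1, d2, 1.0])
            v = sigma_max(D @ Mk @ np.linalg.inv(D))
            if v < best[0]:
                best = (v, D)
    return best

def k_step(D, grid=np.linspace(-3, 3, 601)):
    """Fix D, pick the gain k minimizing the scaled norm."""
    vals = [sigma_max(D @ M(k) @ np.linalg.inv(D)) for k in grid]
    return grid[int(np.argmin(vals))]

# D-K iteration: alternate the two coordinate minimizations.
k, history = 0.0, []
for _ in range(4):
    mu_ub, D = d_step(M(k))
    history.append(mu_ub)
    k = k_step(D)

# Each sweep can only reduce (or keep) the upper bound.
assert all(b <= a + 1e-9 for a, b in zip(history, history[1:]))
```

In the real design the *K*-step is an *H*∞ synthesis and the *D*-step fits a stable rational *D*(*s*) to the pointwise-optimal scalings, but the convergence behavior (non-increasing peak *μ* bound, no global optimality guarantee) is the same as in this toy.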

Fig. 12. Experimental setup and system configuration

Fig. 13. Overview of the experimental device

## **4. Design of model based** *μ* **controller and experimental evaluation**

## **4.1 Design procedure of a controller**

In order to evaluate the performance of the proposed controller, we set up an experiment. Fig. 12 shows the configuration of the experiment. As shown in the figure, we used a wireless LAN to transmit data between the local controller and the remote plant. Fig. 13 shows an overview of the experimental device of the remote plant (a geared motor). In the experiment, we used a geared DC motor with an inertial load on the output axis. We assumed load variation, thus two different inertial loads were prepared. Through identification tests, the nominal plant *Gm*(*s*) was identified as the first order transfer function in (31).

$$G\_{\rm m}(s) = \frac{260.36}{s + 154.28} \tag{31}$$

Fig. 14. Measurement results of time delay

We intentionally chose a different transfer function for the plant model *G̃m* in (32), in order to evaluate the robust performance against unexpected load variations.

$$\tilde{G}\_m(s) = \frac{182.25}{s + 108.0} \tag{32}$$
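Plugging the identified plant (31), the deliberately mismatched model (32) and the delay weight *Wd*(*s*) = 2.1*s*/(*s* + 10) chosen in (33) below into the bounds (26) and (28) shows numerically how much the model based structure relaxes the controller gain restriction. A sketch on a frequency grid, assuming only `numpy`:

```python
import numpy as np

w = np.logspace(-2, 4, 4000)   # frequency grid, rad/s
s = 1j * w

Gm = 260.36 / (s + 154.28)        # identified plant, (31)
Gm_model = 182.25 / (s + 108.0)   # intentionally mismatched model, (32)
Wd = 2.1 * s / (s + 10.0)         # delay weight, (33)

# Controller gain bound without a plant model, (26)
bound_26 = 1.0 / np.max(np.abs(Gm * (1 + Wd)))
# Relaxed bound with the plant model in the loop, (28)
bound_28 = 1.0 / np.max(np.abs((Gm - Gm_model) * (1 + Wd)))

# The model based bound is several times larger, since the model
# error Gm - Gm_model is much smaller than Gm itself.
assert bound_28 > 2 * bound_26
print(bound_26, bound_28)
```

The grid maximum approximates the *H*∞ norm; the peaks of both magnitude responses lie well inside the chosen frequency range.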


Fig. 14 shows one of the measurement results of the time delay; the green plot shows the transmission delay from local to remote, and the blue plot shows that from remote to local. Based on measurements under various circumstances, we chose the upper bound of the time delay as 100 [msec], and the weighting function *Wd* was chosen to be

$$\mathcal{W}\_d(\mathbf{s}) = \frac{2.1s}{\mathbf{s} + 10}.\tag{33}$$

The second weighting function *Wm*(*s*), which is associated with the model uncertainty, was chosen to cover the difference between *Gm*(*s*) in (31) and *G̃m*(*s*) in (32).

$$\mathcal{W}\_{\rm m}(s) = \frac{78s^2 + 12050s}{260s^2 + 92390s + 8056000} \tag{34}$$

The third weighting function *Wp*(*s*), for performance, is determined so as to keep the sensitivity function small; it also aims to attenuate disturbances at low frequency.

$$W\_p(s) = \frac{0.421s + 4.21}{s + 0.01} \tag{35}$$

We used the Robust Control Toolbox of Matlab for the numerical computation, including the *D-K* iteration, and obtained a solution *C*(*s*) which satisfied the condition *μ***Dr** < 1. After 8 *D-K* iterations, the peak *μ* value converged to *μ* = 0.991 and a 17th order controller was obtained. The Bode plot of the obtained controller *C*(*s*) is shown in Fig. 15.
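The discretization step mentioned next can be sketched for a single first-order section via the bilinear (Tustin) transform; the numbers reuse the plant values of (31) purely as a stand-in, since the coefficients of the actual 17th order controller are not listed here:

```python
# Bilinear (Tustin) discretization of one first-order section
# H(s) = b / (s + a) with sampling time Ts = 1 ms; a high order
# controller would be handled section by section in the same way.
b, a, Ts = 260.36, 154.28, 1e-3   # stand-in values from the plant (31)
k = 2.0 / Ts                      # substitution s <- k (z - 1) / (z + 1)

# H(z) = b (z + 1) / ((k + a) z - (k - a))
b0 = b / (k + a)                  # numerator:   b0 (z + 1)
a1 = -(k - a) / (k + a)           # denominator: z + a1

# Tustin preserves the DC gain: H(z = 1) = 2 b0 / (1 + a1) = b / a
dc_discrete = 2 * b0 / (1 + a1)
assert abs(dc_discrete - b / a) < 1e-9
```

The resulting difference equation is `y[n] = -a1 * y[n-1] + b0 * (u[n] + u[n-1])`, executed once per 1 [msec] sampling period.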

#### **4.2 Experimental result**

We implemented the obtained controller on PC hardware by transforming it into a discrete-time controller with a 1 [msec] sampling time. The controller tasks and the motor control tasks were executed on RT-Linux. RT-Messenger (Sato & Yakoh, 2000) was used to implement the network data-transmission task as a Linux kernel mode process. An IEEE802.11g compliant wireless LAN device was used, connected to the PC via the USB bus. For the delay measurement, a beacon packet was used as a time stamp. A beacon packet contains a counter value of the TSF (timing synchronization function), which is a standard function of IEEE 802.11 compliant devices; the resolution of the counter is 1 [*μ*sec]. The function synchronizes the timers of the local and remote nodes every 100 [msec] (Uchimura et al., 2007).

Fig. 15. Bode plot of the controller *C*(*s*)

Fig. 16. Block diagram of the system with the conventional controller

To evaluate the performance of the proposed controller, we prepared a controller for comparison, which was also designed by *μ*-synthesis but without the remote plant model *G̃m*(*s*) and the time delay model; it is hereinafter referred to as the conventional controller. Fig. 16 shows the overall block diagram with the conventional controller. Comparing it with Fig. 9, one may notice that there is no plant model. The conventional controller corresponds to the one which appears in (Leung et al., 1997).

Fig. 17 shows the result of a step response of the velocity control. The blue plot shows the response of the proposed controller and the red plot shows the result of the conventional controller. Comparing the two plots, the proposed controller shows the better transient response. Fig. 18 shows the result when we intentionally added a 200 [msec] delay in the network transmission path. The delay was virtually emulated by buffering data in memory.

Fig. 17. Experimental results (Step response of velocity control)

Fig. 18. Experimental results (with intentionally added delay)

Comparing the two plots, the result of the conventional controller shows an unstable response, whereas the response of the proposed controller still maintains stability. In fact, we designed both controllers under the assumption of a 100 [msec] maximum delay; however, the results showed different behavior. These results can be analyzed as follows. A *μ*-synthesis based controller guarantees robust performance of the system, namely it accomplishes the required performance as long as the perturbations of delay and model uncertainty stay within the assumed worst case. In terms of robust performance, both the proposed and conventional controllers may show similar performance, because they are designed with the same performance weight *Wp*(*s*). In *μ*-synthesis based design, the obtained controller assures *μ***Δr** < 1 against all possible perturbations. However, the system may remain stable when one of the perturbations goes beyond its maximum, if it is not the critical one; that is, the stability margins for different perturbations are not always the same. As stated in the previous section, the model based controller holds more margin in loop gain; hence the difference in the delay margin may appear in the result. As a consequence, the proposed controller is more robust against time delay than the conventional controller while maintaining the same performance.

## 5. Conclusion

In this chapter, a model based controller design exploiting *μ*-synthesis is proposed, designed for a network based system with time varying delay and plant model uncertainty. The proposed controller includes models of the remote plant and of the time delay. The delay was measured by time-stamped packets. To avoid instability due to model uncertainty and variation of delays, we applied a *μ*-synthesis based robust control method to design the controller. The chapter also studied conservativeness of the stability conditions based on Lyapunov-Krasovskii functionals with LMIs and of the robust control approaches including *μ*-synthesis. Evaluation of the proposed system was carried out by experiments on a motor control system. From the results, we verified the stability and the satisfactory performance of the system with the proposed methods.

## 6. References

Anderson, R. J. & Spong, M. W. (1989). Bilateral control of teleoperators with time delay, *IEEE Trans. on Automatic Control*, Vol. 34, No. 5, pp. 494-501.

Borchers, B. (1999). CSDP, a C library for semidefinite programming, *Optimization Methods and Software*, Vol. 11, No. 1-4, pp. 613-623.

Chen, J. & Latchman, H. (1994). Asymptotic stability independent of delays, *American Control Conference*, Vol. 1, pp. 1027-1031.

Farsi, M.; Ratcliff, K. & Barbosa, M. (1999). An overview of Controller Area Network, *Computing and Control Engineering Journal*, Vol. 10, pp. 113-120.

Ghodoussi, M.; Butner, S. E. & Wang, Y. (2002). Robotic surgery - the transatlantic case, *Proc. of IEEE Int. Conf. on Robotics and Automation*, Vol. 2, pp. 1882-1888.

Gu, K.; Kharitonov, V. L. & Chen, J. (2003). Stability of Time-Delay Systems, 1st ed., Boston, MA: Birkhauser.

Huang, D. & Nguang, S. K. (2007). State feedback control of uncertain networked control systems with random time-delays, *46th IEEE Conf. on Decision and Control*, pp. 3964-3969.

Jiang, X. & Han, Q. (2005). On *H*∞ control for linear systems with interval time-varying delay, *Automatica*, Vol. 41, pp. 2099-2106.

Jun, M. & Safonov, M. G. (2000). Stability analysis of a system with time-delayed states, *American Control Conference*, pp. 949-952.

Leung, G.; Francis, B. & Apkarian, J. (1997). Bilateral controller for teleoperators with time delay via *μ*-synthesis, *IEEE Trans. on Robotics and Automation*, Vol. 11, No. 1, pp. 105-116.

Li, X. & de Souza, C. E. (1996). Robust stabilization and *H*∞ control of uncertain linear time-delay systems, *Proc. 13th IFAC World Congress*, pp. 113-118.

Lofberg, J. (2005). YALMIP: a toolbox for modeling and optimization in MATLAB, *2004 IEEE International Symposium on Computer Aided Control Systems Design*, pp. 284-289.


Mahmoud, M.S. & Al-Muthairi, N.F. (1994). Quadratic stabilization of continuous time systems with state delay and norm-bounded time-varying uncertainties, *IEEE Trans. on Automatic Control*, pp. 2135-2139.

Niculescu, S.-I.; Neto, A.T.; Dion, J.-M. & Dugard, L. (1995). Delay-dependent stability of linear systems with delayed state: An LMI approach, *Proc. 34th IEEE Conf. Decision Control*, pp. 1495-1497.

Palmor, Z. (1980). Stability properties of Smith dead-time compensator controllers, *Int. J. of Control*, Vol. 32, No. 6, pp. 937-949.

Park, P. (1999). A delay-dependent stability criterion for systems with uncertain time-invariant delays, *IEEE Trans. Automat. Contr.*, Vol. 44, pp. 876-877.

Richard, J.P. (2003). Time-delay systems: an overview of some recent advances and open problems, *Automatica*, Vol. 39, pp. 1667-1694.

Sato, H. & Yakoh, T. (2000). A real-time communication mechanism for RTLinux, *26th Annual Conference of the IEEE Industrial Electronics Society, IECON 2000*, Vol. 4, pp. 2437-2442.

Skelton, R.E.; Iwasaki, T. & Grigoriadis, K. (1998). A Unified Algebraic Approach to Linear Control Design, New York: Taylor & Francis.

Smith, O.J.M. (1957). Closer Control of Loops with Dead Time, *Chemical Engineering Progress*, Vol. 53, pp. 217-219.

Tang, M. & Liu, X. (2008). Delay-dependent stability analysis for time delay system, *7th World Congress on Intelligent Control and Automation*, pp. 7260-7263.

Tlusty, J. (1985). Machine dynamics, *Handbook of High Speed Machining Technology*, R.I. King, Ed., New York: Chapman & Hall, pp. 48-153.

Uchimura, Y.; Nasu, T. & Takahashi, M. (2007). Time synchronized wireless sensor network and its application to building vibration measurement, *Proc. of 33rd Annual Conference of the IEEE Industrial Electronics Society (IECON 2007)*, pp. 2633-2638.

Uchimura, Y. & Yakoh, T. (2004). Bilateral robot system on the real-time network structure, *IEEE Trans. on Industrial Electronics*, Vol. 51, Issue 5, pp. 940-946.

Vatanski, N.; Georges, J.P.; Aubrun, C.; Rondeau, E. & Jamsa-Jounela, S. (2009). Networked control with delay measurement and estimation, *Control Engineering Practice*, Vol. 17, No. 2.

Verriest, E.I. et al. (1993). Frequency domain robust stability criteria for linear delay systems, *Proc. 32nd IEEE Conf. Decision Control*, pp. 3473-3478.

Yeh, S.; Hsu, C.; Shih, T.; Hsiao, J. & Hsu, P. (2008). Remote control realization of distributed rescue robots via the wireless network, *Proc. of SICE Annual Conference*, pp. 2928-2932.

Yokokohji, Y.; Imaida, T. & Yoshikawa, T. (1999). Bilateral teleoperation under time-varying communication delay, *IEEE/RSJ Int. Conf. Intelligent Robots and Systems*, Vol. 3, pp. 1854-1859.

Yue, D.; Han, Q.-L. & Peng, C. (2004). State feedback controller design for networked control systems, *IEEE Trans. Circ. Sys.*, Vol. 51, No. 11, pp. 640-644.

Zhang, J.; Knopse, C.R. & Tsiotras, P. (2001). Stability of Time-Delay Systems: Equivalence between Lyapunov and Scaled Small-Gain Conditions, *IEEE Transactions on Automatic Control*, Vol. 46, Issue 3, pp. 482-486.

## **Robust Control of Nonlinear Systems with Hysteresis Based on Play-Like Operators**

Jun Fu1, Wen-Fang Xie1, Shao-Ping Wang2 and Ying Jin3

*1The Department of Mechanical & Industrial Engineering, Concordia University, Canada*
*2The Department of Mechatronic Control, Beihang University, China*
*3State Key Laboratory of Integrated Automation of Process Industry, Northeastern University, China*

## **1. Introduction**

Hysteresis phenomena occur in all smart material-based sensors and actuators, such as shape memory alloys, piezoceramics and magnetostrictive actuators (Su et al., 2000; Fu et al., 2007; Banks & Smith, 2000; Tan & Baras, 2004). When a hysteresis nonlinearity precedes a system plant, it usually causes the overall closed-loop system to exhibit inaccuracies or oscillations, and can even lead to instability (Tao & Kokotovic, 1995). This often makes traditional control methods insufficient for precision requirements, and they may fail to guarantee even basic system stability, owing to the non-smooth and multi-valued nature of the hysteresis (Tao & Levis, 2001). Hence the control of nonlinear systems in the presence of hysteresis nonlinearities is difficult and challenging (Fu et al., 2007; Tan & Baras, 2004).

Generally, there are two ways to mitigate the effects of hysteresis. One is to construct an inverse operator of the considered hysteresis model to perform inversion compensation (Tan & Baras, 2004; Tao & Kokotovic, 1995; Tao & Levis, 2001). The other is, without necessarily constructing an inverse, to fuse a suitable hysteresis model with available robust control techniques to mitigate the hysteretic effects (Su et al., 2000; Fu et al., 2007; Zhou et al., 2004; Wen & Zhou, 2007). Inversion compensation was pioneered in (Tao & Kokotovic, 1995), and other important results appear in (Tan & Baras, 2005; Iyer et al., 2005; Tan & Bennani, 2008). However, most of these results were achieved only at the actuator component level, without accounting for the overall dynamic system with actuator hysteresis nonlinearities. Essentially, constructing an inverse operator relies on a phenomenological model (such as the Preisach model), which strongly limits the practical application of the design concept (Su et al., 2000). Because of the multi-valued and non-smooth features of hysteresis, such methods are often complicated, computationally costly, and highly sensitive to measurement errors in the model parameters. These issues are directly linked to the difficulty of guaranteeing system stability except in certain special cases (Tao & Kokotovic, 1995). For the methods that mitigate hysteretic effects without constructing the inverse, two main challenges are involved. One challenge is that very few hysteresis models are suitable to be fused with available robust adaptive control techniques; the other is how to fuse a suitable hysteresis model with available control techniques so as to guarantee the stability of the dynamic system (Su et al., 2000). Hence it is usually difficult to construct new suitable hysteresis models that can be fused into control plants, and to explore new control techniques that mitigate the effects of hysteresis and ensure system stability, without necessarily constructing the hysteresis inverse.

Noticing the above challenges, we first construct a hysteresis model using play-like operators, in a manner similar to L. Prandtl's construction of the Prandtl-Ishlinskii model from play operators (Brokate & Sprekels, 1996), and thus name it the Prandtl-Ishlinskii-Like model. Because the play-like operator in (Ekanayake & Iyer, 2008) is a generalization of the backlash-like operator in (Su et al., 2000), the Prandtl-Ishlinskii-Like model is a subclass of the SSSL-PKP hysteresis models (Ekanayake & Iyer, 2008). We then develop two robust adaptive control schemes that mitigate the hysteresis while avoiding the construction of a hysteresis inverse. The new methods not only perform global stabilization and tracking tasks for the dynamic nonlinear systems, but also yield the transient performance, in terms of the *L*2 norm of the tracking error, as an explicit function of the design parameters, which allows designers to meet desired performance requirements by tuning the design parameters in an explicit way.

The main contributions of this chapter are highlighted as follows:

i. A new hysteresis model is constructed, in which the play-like operators developed in (Ekanayake & Iyer, 2008) play the role of building blocks. From the standpoint of categories of hysteresis models, this class of models is a subclass of the SSSL-PKP hysteresis models. It provides the possibility of mitigating the effects of hysteresis without necessarily constructing an inverse, which is the unique feature distinguishing this subclass from the general class of SSSL-PKP hysteresis models in the literature;

ii. The challenge of fusing a suitable hysteresis model with available robust adaptive techniques, so as to mitigate the effects of hysteresis without constructing a complicated inverse operator of the hysteresis model, is addressed;

iii. Two backstepping schemes are proposed to accomplish robust adaptive control tasks for a class of nonlinear systems preceded by the Prandtl-Ishlinskii-Like model. These control schemes not only ensure stabilization and tracking of the hysteretic dynamic nonlinear systems, but also yield the transient performance, in terms of the *L*2 norm of the tracking error, as an explicit function of the design parameters.

The organization of this chapter is as follows. Section 2 gives the problem statement. In Section 3, we construct the Prandtl-Ishlinskii-Like model and explore its properties. Two control schemes for the nonlinear systems preceded by the model of Section 3 are detailed in Section 4. Simulation results are given in Section 5. Section 6 concludes the chapter with some brief remarks.

### **2. Problem statement**

Consider a controlled system consisting of a nonlinear plant preceded by an actuator with hysteresis nonlinearity; that is, the hysteresis is presented as an input to the nonlinear plant. The hysteresis is denoted as an operator

$$w(t) = P[v](t) \tag{1}$$

with $v(t)$ as the input and $w(t)$ as the output. The operator $P[v]$ will be constructed in detail in the next section. The nonlinear dynamic system preceded by this hysteresis is described in the canonical form as

$$x^{(n)}(t) + \sum\_{i=1}^{k} a\_i Y\_i(x(t), \dot{x}(t), \dots, x^{(n-1)}(t)) = bw(t) \tag{2}$$

where $Y\_i$ are known continuous, linear or nonlinear functions. The parameters $a\_i$ and the control gain $b$ are unknown constants. It is a common assumption that the sign of $b$ is known; without loss of generality, we assume $b > 0$. It should be noted that more general classes of nonlinear systems can be transformed into this structure (Isidori, 1989).

The control objective is to design the controller $v(t)$ in (1), as shown in Figure 1, to render the plant state $x(t)$ tracking a specified desired trajectory $x\_d(t)$, i.e., $x(t) \to x\_d(t)$ as $t \to \infty$. Throughout this chapter the following assumption is made.

Fig. 1. Configuration of the hysteretic system

**Assumption:** The desired trajectory $X\_d = [x\_d, \dot{x}\_d, \ldots, x\_d^{(n-1)}]^T$ is continuous. Furthermore, $[X\_d^T, x\_d^{(n)}]^T \in \Omega\_d \subset R^{n+1}$ with $\Omega\_d$ being a compact set.
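The plant class (2) can be illustrated with a minimal numerical sketch. The code below integrates a second-order instance with $Y\_1 = x$ and $Y\_2 = \sin(x)$ by the Euler method; the function name, parameter values and this particular choice of $Y\_i$ are assumptions for illustration only, not the system studied in the chapter:

```python
import math

def simulate_plant(w, a=(1.0, 0.5), b=1.0, dt=1e-3, T=1.0):
    """Euler simulation of  x'' + a1*x + a2*sin(x) = b*w(t),
    an instance of the canonical form (2) with n = 2, Y1 = x, Y2 = sin(x).
    `w` is the (hysteresis-distorted) input signal as a function of time.
    Parameter values are illustrative assumptions."""
    x, xdot = 0.0, 0.0           # zero initial state
    steps = int(T / dt)
    for k in range(steps):
        xddot = b * w(k * dt) - a[0] * x - a[1] * math.sin(x)
        x += dt * xdot
        xdot += dt * xddot
    return x
```

With `w = 0` and a zero initial state the plant stays at rest, while a constant positive input drives the state positive, as expected from (2).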

## **3. Prandtl-Ishlinskii-Like model**

In this section, we first recall the backlash-like operator (Su et al., 2000), which serves as the elementary hysteresis operator; in other words, the backlash-like operator plays the role of a building block. We then show how the new hysteresis model is constructed from the backlash-like operator and explore some useful properties of this model.

### **3.1 Backlash-like operator**

In 2000, Su et al. proposed a continuous-time dynamic model to describe a class of backlash-like hysteresis, given by

$$\frac{dF}{dt} = \alpha \left| \frac{dv}{dt} \right| (cv - F) + B\_1 \frac{dv}{dt} \tag{3}$$


where $\alpha$, $c$ and $B\_1$ are constants, satisfying $c > B\_1$.

Equation (3) can be solved explicitly for piecewise monotone $v$ as follows

$$F(t) = cv(t) + [F\_0 - cv\_0]e^{-\alpha(v-v\_0)\operatorname{sgn}\dot{v}} + e^{-\alpha v\operatorname{sgn}\dot{v}} \int\_{v\_0}^{v} [B\_1 - c] e^{\alpha \zeta\operatorname{sgn}\dot{v}} d\zeta \tag{4}$$

for $\dot{v}$ constant and $F(v\_0) = F\_0$. Equation (4) can also be rewritten as

$$F(t) = \begin{cases} c\upsilon(t) + [F\_0 - c\upsilon\_0]e^{-\alpha(\upsilon - \upsilon\_0)} + e^{-\alpha\upsilon}\frac{B\_1 - c}{\alpha}(e^{\alpha\upsilon} - e^{\alpha\upsilon\_0}), & \dot{\upsilon} > 0 \\\\ c\upsilon(t) + [F\_0 - c\upsilon\_0]e^{\alpha(\upsilon - \upsilon\_0)} + e^{\alpha\upsilon}\frac{B\_1 - c}{-\alpha}(e^{-\alpha\upsilon} - e^{-\alpha\upsilon\_0}), & \dot{\upsilon} < 0 \end{cases} \tag{5}$$

It is worth noting that

$$\begin{aligned} \lim\_{v \to +\infty} (F(v) - c\upsilon) &= -\frac{c - B\_1}{\alpha} \\ \lim\_{v \to -\infty} (F(v) - c\upsilon) &= \frac{c - B\_1}{\alpha} \end{aligned} \tag{6}$$

Hence, the solution $F(t)$ exponentially converges to the output of a play operator with threshold $r = \frac{c - B\_1}{\alpha}$, switching between the lines $cv + \frac{c - B\_1}{\alpha}$ and $cv - \frac{c - B\_1}{\alpha}$. In the next subsection we construct a new Prandtl-Ishlinskii-Like model from the above backlash-like model, similar to the construction of the well-known Prandtl-Ishlinskii model from play operators; this analogy is indeed the motivation behind the new model.
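The exponential convergence to the play-operator asymptote can be checked numerically from the closed-form solution. The sketch below evaluates the monotonically increasing branch with $c = 1$, $v(0) = 0$ and $F(0) = 0$ (so that the increasing branch approaches the line $v - r$, consistent with (6)); the parameter values are illustrative assumptions:

```python
import math

def F_increasing(v, r=0.5, B1=0.505):
    """Backlash-like operator output on a monotonically increasing input
    started at v(0) = 0, F(0) = 0, with c = 1 and threshold
    r = (c - B1)/alpha, i.e. alpha = (1 - B1)/r.
    Illustrative sketch; r and B1 are assumed values."""
    return v - r + r * math.exp(-(1.0 - B1) / r * v)
```

At $v = 0$ the output is exactly zero, and for large $v$ it approaches $v - r$ exponentially fast.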

### **3.2 Prandtl-Ishlinskii-Like model**

We are now ready to construct the Prandtl-Ishlinskii-Like model through a weighted superposition of elementary backlash-like operators $F\_r[v](t)$, in a similar way as L. Prandtl (Brokate & Sprekels, 1996) constructed the Prandtl-Ishlinskii model using play operators.

Keeping $r = \frac{c - B\_1}{\alpha}$ in mind and, without loss of generality, setting $F(v(0) = 0) = 0$ and $c = 1$, we rewrite equation (5) as

$$F\_r(t) = \begin{cases} v(t) - r + re^{-\frac{1-B\_1}{r}v}, & \dot{v} > 0 \\ v(t) + r - re^{\frac{1-B\_1}{r}v}, & \dot{v} < 0 \end{cases} \tag{7}$$

where *r* is the threshold of the backlash-like operator.

To this end, we construct the Prandtl-Ishlinskii-Like model by

$$w(t) = \int\_0^R p(r) F\_r[v](t) dr \tag{8}$$

where $p(r)$ is a given continuous density function, satisfying $p(r) \ge 0$ with $\int\_0^\infty p(r)dr < +\infty$, which is expected to be identified from experimental data (Krasnosel'skii & Pokrovskii, 1983; Brokate & Sprekels, 1996). Since the density function $p(r)$ vanishes for large values of $r$, the choice of $R = +\infty$ as the upper limit of integration in the literature is just a matter of convenience (Brokate & Sprekels, 1996).
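The superposition (8) can be sanity-checked numerically with a simple Riemann-sum discretization over the thresholds $r$. The sketch below assumes a sample density $p(r) = e^{-r}$ and a monotonically increasing input from $v(0) = 0$; the density, grid size and parameter values are illustrative only:

```python
import math

def w_increasing(v, R=10.0, n=200, B1=0.505):
    """Riemann-sum discretization of (8) for a monotonically increasing
    input started at v(0) = 0:
        w(v) ~= sum_j p(r_j) * F_{r_j}(v) * dr,
    with an assumed density p(r) = exp(-r) (any p >= 0 with a finite
    integral would do). Illustrative sketch only."""
    dr = R / n
    total = 0.0
    for j in range(1, n + 1):
        r = j * dr
        # Increasing branch of the backlash-like operator (7), c = 1.
        F = v - r + r * math.exp(-(1.0 - B1) / r * v)
        total += math.exp(-r) * F * dr
    return total
```

Since every elementary operator starts at zero, the weighted sum is zero at $v = 0$ and grows with the input.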

Inserting (7) into (8) yields


$$w[v](t) = \begin{cases} \int\_0^R p(r)dr \cdot v(t) + \int\_0^R p(r)(-r + re^{-\frac{1-B\_1}{r}v})dr, & \dot{v} > 0\\ \int\_0^R p(r)dr \cdot v(t) + \int\_0^R p(r)(r - re^{\frac{1-B\_1}{r}v})dr, & \dot{v} < 0 \end{cases} \tag{9}$$

The hysteresis (9) can then be expressed as

$$w(t) = p\_0 v + \begin{cases} \int\_0^R p(r)(-r + re^{-\frac{1-B\_1}{r}v})dr, & \dot{v} > 0\\ \int\_0^R p(r)(r - re^{\frac{1-B\_1}{r}v})dr, & \dot{v} < 0 \end{cases} \tag{10}$$

where $p\_0 = \int\_0^R p(r)dr$ is a constant which depends on the density function $p(r)$.

**Property 1:** Let

$$d[v](t) = \begin{cases} \int\_0^R p(r)(-r + re^{-\frac{1-B\_1}{r}v})dr, & \dot{v} > 0\\ \int\_0^R p(r)(r - re^{\frac{1-B\_1}{r}v})dr, & \dot{v} < 0 \end{cases} \tag{11}$$

with $p(r) \ge 0$ satisfying $\int\_0^\infty p(r)dr < +\infty$; then for any $v(t) \in C\_{pm}[0, \infty)$, there exists a constant $M \ge 0$ such that $\left| d[v](t) \right| \le M$.

**Proof:** Since (7) can be rewritten as $F\_r(t) = v(t) + R(r, v)$, where

$$R(r, v) = \begin{cases} -r + re^{-\frac{1-B\_1}{r}v}, & \dot{v} > 0\\ r - re^{\frac{1-B\_1}{r}v}, & \dot{v} < 0 \end{cases}$$

based on the analysis in (Su et al., 2000), for each fixed $r \in (0, R)$ there always exists a positive constant $M\_1$ such that $\left| R(r, v) \right| \le M\_1$. Hence

$$\left| d[v](t) \right| = \left| \int\_0^R p(r)R(r,v)dr \right| \le \int\_0^R p(r)\left| R(r,v) \right| dr \le M\_1 \int\_0^R p(r)dr$$

Robust Control of Nonlinear Systems with Hysteresis Based on Play-Like Operators 429

**Remark 2:** From another point of an alternative one-parametric representation of Preisach operator (Krejci, 1996), the Prandtl-Ishilinskii-Like model falls into PKP-type operator (Ekanayake & Iyer, 2008), as Prandtl-Ishilinskii model into Preisach model. As a preliminary step, in the paper we explore the properties of this model and its potential to facilitate control when a system is preceded by this kind of hysteresis model, which will be demonstrated in the next section. Regarding hysteresis phenomena in which kind of smart actuator this model could characterize, it is still unclear. The future work will focus on,

**Remark 3:** It should be note that (10) decomposes the hysteresis behavior into two terms. The first term describes the linear reversible part, while the second term describes the nonlinear hysteretic behavior. This decomposition is crucial (Su, et al, 2000, Fu, et al, 2007) since it facilitates the utilization of the currently available control techniques for the

From (10) and Proposition 1 we see that the signal *w t*( ) is expressed as a linear function of input signal *v t*( ) plus a bounded term. Using the hysteresis model of (10), the nonlinear

1 2

() [ ]

{ ( ) [ ]( )}

*bpvt dv t Y b vt d v*

*<sup>n</sup> x t xt x t x t* <sup>−</sup> <sup>=</sup> " <sup>=</sup> 1 2 [ , ,, ]*<sup>T</sup>*

*p b*

Before presenting the adaptive control design using the backstepping technique in (Krisic, et al, 1995) to achieve the desired control objectives, we make the following change of

α

−

=− − = "

Where α*i*-1 is the virtual controller in the *i* th step and will be determined later. In the following, we give two control schemes. In Scheme I, the controller is discontinuous; the

*n ii n*

*x aY x t x t x t*

( ( ), ( ), , ( ))

<sup>1</sup> , 2,3, ,

*i n* <sup>−</sup>

" (13)

*<sup>k</sup>* **a** =− − − *aa a* " , and *<sup>p</sup>* <sup>0</sup> *b bp* =

(14)

<sup>0</sup> *wt pv dv t* ( ) [ ]( ) = + (12)

which is beyond of the scope of this paper.

*p p* <sup>=</sup> *r dr* ∫ and *dv t* [ ]( ) is defined by (11).

controller design, which will be clear in next section.

system dynamics described by (2), can be re-expressed as

where <sup>1</sup> *x t xt* ( ) ( ), <sup>=</sup> ( 1)

1 2 [,,,]*<sup>T</sup> Y YY Y* <sup>=</sup> " *<sup>k</sup>* , and [ ]( ) [ ]( ) *<sup>b</sup> d v t bd v t* <sup>=</sup> .

other is continuous in Scheme II.

coordinates:

1 2

*x x*

 # 

=

1

−

*x x*

*n n k*

= −

=

1 0

+ − =+ −

( 1)

*d i iid i*

*i*

=

∑

*T*

**a**

<sup>2</sup> ( ) ( ), , ( ) ( ), *<sup>n</sup>*

1 1

*zxx zxx*

= −

To this end, we can rewrite (9) into

where 0 <sup>0</sup> ( )

*R*

**4. Adaptive control design** 

By the definition of *p*( ), *r* one can conclude that <sup>1</sup> 0 ( ) *R <sup>M</sup>* <sup>=</sup> *M p r dr* ∫ .

**Property 2:** the Prandtl-Ishilinskii-Like model constructed by (9) is rate-independent. **Proof:** Following (Brokate & Sprekels, 1996), we let :[0, ] [0, ] *E E* σ *t t* → satisfying σ(0) 0 = and ( ) *E E* σ *t t* = be a continuous increasing function, i.e. σ( )⋅ is an admissible time transformation and define [ ] *w v <sup>f</sup> <sup>t</sup>* satisfying [ ] [ ]( ), [0, ] *w v wv t t t <sup>f</sup> t E* = ∈ and [0, ] *pm E vM t* ∈ where *<sup>t</sup> v* represents the truncation of *v* at *t* , defined by () () *<sup>t</sup> v v* τ = τ for 0 ≤τ ≤ *t* and ( ) () *<sup>t</sup> v vt* τ = for *<sup>E</sup> t t* ≤ ≤ τ, and *wv t* [ ]( ) constructed by (9). For the model (9), we can easily have

$$w[v \circ \sigma](t) = w\_f[(v \circ \sigma)\_t] = w\_f[v\_{\sigma(t)} \circ \sigma] = w\_f[v\_{\sigma(t)}] = w[v](\sigma(t)) = w[v](t) \circ \sigma(t)$$

Hence for all admissible time transformationσ( )⋅ , according to the definition 2.2.1 in (Brokate & Sprekels, 1996), the model constructed by (9) is rate-independent.

**Property 3:** the Prandtl-Ishilinskii-Like model constructed by (9) has the Volterra property.

**Proof:** it is obvious whenever , [0, ] *pm E vv M t* ∈ and [0, ] *<sup>E</sup> t t* ∈ , then *t t v v* = implies that ( [ ]) ( [ ]) *wv wv t t* = , so, according to (Brokate & Sprekels, 1996, Page 37), the model (8) has Volterra property.

**Lemma 1:** If a functional $w: C_{pm}[0,t_E] \to \mathrm{Map}([0,t_E])$ has both the rate-independence property and the Volterra property, then $w$ is a hysteresis operator (Brokate & Sprekels, 1996).

**Proposition 1:** the Prandtl-Ishlinskii-Like model constructed by (9) is a hysteresis operator.

**Proof:** From Properties 2 and 3 and Lemma 1, the Prandtl-Ishlinskii-Like model (9) is a hysteresis operator.

**Remark 1:** It should be mentioned that the Prandtl-Ishlinskii model is a weighted superposition of play operators, i.e. the play operator is the hysteron (Krasnosel'skii & Pokrovskii, 1983), and that the backlash-like operator can be viewed as a play-like operator arising from a first-order differential equation (Ekanayake & Iyer, 2008). Hence, the model (8) is, with a little abuse of terminology, named the Prandtl-Ishlinskii-Like model. As an illustration, Figure 2 shows $w(t)$ generated by (9), with $p(r) = 6.7e^{-0.1(r-1)^2}$, $r \in (0,50]$, $B_1 = 0.505$, and input $v(t) = 7\sin(4t)/(1+t)$, with $F[v](0) = 0$.

Fig. 2. Prandtl-Ishlinskii-Like Hysteresis curves given by (10)
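For readers who want to reproduce curves in the spirit of Fig. 2, the following sketch superposes weighted play hysterons. The density $p(r) = 6.7e^{-0.1(r-1)^2}$ and the input $v(t) = 7\sin(4t)/(1+t)$ follow Remark 1; the classical play hysteron, the truncation of $r$ to $(0,5]$, and the discretization step are assumptions made for a compact illustration, not the exact operator of (9).

```python
import math

def play(v_samples, r, w0=0.0):
    # play hysteron of half-width r
    w, out = w0, []
    for v in v_samples:
        w = min(max(w, v - r), v + r)
        out.append(w)
    return out

def pi_like(v_samples, p, radii, dr):
    # weighted superposition of play hysterons:
    #   w(t) ~ sum_j p(r_j) F_{r_j}[v](t) dr
    branches = [play(v_samples, r) for r in radii]
    return [sum(p(r) * br[k] for r, br in zip(radii, branches)) * dr
            for k in range(len(v_samples))]

p = lambda r: 6.7 * math.exp(-0.1 * (r - 1.0) ** 2)   # density from Remark 1
dr = 0.1
radii = [dr * (j + 0.5) for j in range(50)]           # r in (0, 5], truncated
t = [i / 500 for i in range(2001)]                    # t in [0, 4]
v = [7 * math.sin(4 * ti) / (1 + ti) for ti in t]
w = pi_like(v, p, radii, dr)
# plotting w against v traces nested hysteresis loops like those in Fig. 2
```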

**Remark 2:** From the viewpoint of an alternative one-parametric representation of the Preisach operator (Krejci, 1996), the Prandtl-Ishlinskii-Like model falls into the class of PKP-type operators (Ekanayake & Iyer, 2008), just as the Prandtl-Ishlinskii model falls into the Preisach class. As a preliminary step, in this paper we explore the properties of this model and its potential to facilitate control when a system is preceded by this kind of hysteresis, which will be demonstrated in the next section. It is still unclear which kinds of smart-actuator hysteresis phenomena this model can characterize; future work will focus on this question, which is beyond the scope of this paper.

To this end, we can rewrite (9) into

428 Recent Advances in Robust Control – Novel Approaches and Design Methods


$$w(t) = p\_0 v + d[v](t) \tag{10}$$

where $p_0 = \int_0^R p(r)\,dr$ and $d[v](t)$ is defined by (11).

**Remark 3:** It should be noted that (10) decomposes the hysteresis behavior into two terms. The first term describes the linear, reversible part, while the second term describes the nonlinear hysteretic behavior. This decomposition is crucial (Su, et al, 2000; Fu, et al, 2007) since it facilitates the utilization of currently available control techniques for the controller design, which will become clear in the next section.
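As a quick numerical illustration of the decomposition, the slope of the linear part, $p_0 = \int_0^R p(r)\,dr$, can be computed by quadrature. The density below is the one quoted in Remark 1 (recovered from the text, so treat its constants as assumptions), with $R = 50$.

```python
import math

# trapezoidal estimate of p0 = \int_0^R p(r) dr, the slope of the linear,
# reversible part in w(t) = p0 v(t) + d[v](t)
p = lambda r: 6.7 * math.exp(-0.1 * (r - 1.0) ** 2)   # density from Remark 1
R, n = 50.0, 100_000
h = R / n
p0 = h * (0.5 * p(0.0) + sum(p(k * h) for k in range(1, n)) + 0.5 * p(R))
```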

### **4. Adaptive control design**

From (10) and Proposition 1 we see that the signal $w(t)$ is expressed as a linear function of the input signal $v(t)$ plus a bounded term. Using the hysteresis model (10), the nonlinear system dynamics described by (2) can be re-expressed as

$$\begin{aligned} \dot{\mathbf{x}}\_1 &= \mathbf{x}\_2 \\ \vdots \\ \dot{\mathbf{x}}\_{n-1} &= \mathbf{x}\_n \\ \dot{\mathbf{x}}\_n &= -\sum\_{i=1}^k a\_i Y\_i(\mathbf{x}\_1(t), \mathbf{x}\_2(t), \dots, \mathbf{x}\_n(t)) + b\{p\_0 \upsilon(t) + d[\upsilon](t)\} \\ &= \mathbf{a}^T Y + b\_p \upsilon(t) + d\_b[\upsilon](t) \end{aligned} \tag{13}$$

where $x_1(t) = x(t)$, $x_2(t) = \dot{x}(t)$, $\ldots$, $x_n(t) = x^{(n-1)}(t)$, $\mathbf{a} = [-a_1, -a_2, \ldots, -a_k]^T$, $b_p = b p_0$, $Y = [Y_1, Y_2, \ldots, Y_k]^T$, and $d_b[v](t) = b\,d[v](t)$.

Before presenting the adaptive control design using the backstepping technique in (Krstic, et al, 1995) to achieve the desired control objectives, we make the following change of coordinates:

$$\begin{aligned} z\_1 &= \mathbf{x}\_1 - \mathbf{x}\_d\\ z\_i &= \mathbf{x}\_i - \mathbf{x}\_d^{(i-1)} - \alpha\_{i-1}, \quad i = 2, 3, \cdots, n \end{aligned} \tag{14}$$

where $\alpha_{i-1}$ is the virtual controller in the $i$-th step and will be determined later. In the following, we give two control schemes: in Scheme I the controller is discontinuous, while in Scheme II it is continuous.

### **Scheme I**

In what follows, the robust adaptive control law will be developed for Scheme I. First, we give the following definitions

$$\begin{aligned} \tilde{\mathbf{a}}(t) &= \mathbf{a} - \hat{\mathbf{a}}(t) \\ \tilde{\phi}(t) &= \phi - \hat{\phi}(t) \\ \tilde{M}(t) &= M - \hat{M}(t) \end{aligned} \tag{15}$$

Robust Control of Nonlinear Systems with Hysteresis Based on Play-Like Operators 431


where $\hat{\mathbf{a}}$ is an estimate of $\mathbf{a}$, $\hat{\phi}$ is an estimate of $\phi$, which is defined as $\phi := b_p^{-1}$, and $\hat{M}$ is an estimate of $M$.

Given the plant and the hysteresis model subject to the assumption above, we propose the following control law

$$\begin{aligned} v(t) &= \hat{\phi}(t)v\_1(t) \\ v\_1(t) &= -c\_n z\_n - z\_{n-1} - \hat{\mathbf{a}}^T Y - \text{sgn}(z\_n)\hat{M} + \mathbf{x}\_d^{(n)} + \dot{\alpha}\_{n-1} \\ \dot{\hat{\phi}}(t) &= -\eta v\_1(t)z\_n \\ \dot{\hat{\mathbf{a}}}(t) &= \Gamma Y z\_n \\ \dot{\hat{M}}(t) &= \gamma \left| z\_n \right| \end{aligned} \tag{16}$$

where $c_n$, $\eta$, and $\gamma$ are positive design parameters, and $\Gamma$ is a positive-definite matrix. These parameters provide a certain degree of freedom to determine the rates of the adaptations. The controller $\alpha_{n-1}$ and the implicit $\alpha_{i-1}$, $i = 2, 3, \ldots, n-1$, in (16) will be designed in the proof of the following theorem for stability analysis.

The stability of the closed-loop system described in (13) and (16) is established as:

**Theorem 1:** For the plant given in (2) with the hysteresis (8), subject to Assumption 1, the robust adaptive controller specified by (16) ensures that the following statements hold:

i. The resulting closed-loop system (2) and (8) is globally stable in the sense that all the signals of the closed-loop system are ultimately bounded;
ii. Asymptotic tracking is achieved, i.e., $\lim_{t\to\infty}[x(t) - x_d(t)] = 0$;
iii. The transient tracking error can be explicitly specified by

$$\left\|\mathbf{x}(t) - \mathbf{x}\_d(t)\right\|\_2 \leq \sqrt{\frac{\frac{1}{2}\tilde{\mathbf{a}}(0)^T \Gamma^{-1} \tilde{\mathbf{a}}(0) + \frac{b\_p}{2\eta}\tilde{\phi}(0)^2 + \frac{1}{2\gamma}\tilde{M}(0)^2}{c\_1}}$$

**Proof:** We will use a standard backstepping technique to prove the statements in a systematic way, as follows:

**Step 1:** The time derivative of $z_1$ can be computed as

$$
\dot{z}\_1 = z\_2 + \alpha\_1 \tag{17}
$$

The virtual control $\alpha_1$ can be designed as

$$
\alpha\_1 = -c\_1 z\_1
$$

where $c_1$ is a positive design parameter.

Hence, we obtain the first tracking-error equation

$$
\dot{z}\_1 = z\_2 - c\_1 z\_1
$$

**Step 2:** Differentiating $z_2$ gives

$$
\dot{z}\_2 = z\_3 + \alpha\_2 - \dot{\alpha}\_1
$$

The virtual control $\alpha_2$ can be designed as

$$
\alpha\_2 = -c\_2 z\_2 - z\_1 + \dot{\alpha}\_1
$$

Hence the dynamics become



$$\dot{z}\_2 = -c\_2 z\_2 - z\_1 + z\_3$$

Following this procedure step by step, we can derive the dynamics of the rest of the states until the real control appears.

**Step n:** The $n$-th dynamics are given by

$$\dot{z}\_n = b\_p \upsilon(t) + \mathbf{a}^T Y - \mathbf{x}\_d^{(n)} - \dot{\alpha}\_{n-1} + d\_b[\upsilon](t) \tag{18}$$

We design the real control as follows:

$$\begin{aligned} v(t) &= \hat{\phi}(t)v\_1(t) \\ v\_1(t) &= -c\_n z\_n - z\_{n-1} - \hat{\mathbf{a}}^T Y - \text{sgn}(z\_n) \hat{M} + \mathbf{x}\_d^{(n)} + \dot{\alpha}\_{n-1} \\ \dot{\hat{\phi}}(t) &= -\eta v\_1(t) z\_n \\ \dot{\hat{\mathbf{a}}}(t) &= \Gamma Y z\_n \\ \dot{\hat{M}}(t) &= \gamma \left| z\_n \right| \end{aligned} \tag{19}$$

Note that $b_p v(t)$ in (19) can be expressed as

$$b\_p v(t) = b\_p \hat{\phi}(t) v\_1(t) = v\_1(t) - b\_p \tilde{\phi}(t) v\_1(t) \tag{20}$$

Hence, we obtain

$$\dot{z}\_n = -c\_n z\_n - z\_{n-1} + \tilde{\mathbf{a}}^T Y - \text{sgn}(z\_n)\hat{M} + d\_b[\upsilon](t) - b\_p \tilde{\phi}(t)\upsilon\_1(t) \tag{21}$$

To this end, we define the candidate Lyapunov function as

$$V = \sum\_{i=1}^{n} \frac{1}{2} z\_i^2 + \frac{1}{2} \tilde{\mathbf{a}}^T \Gamma^{-1} \tilde{\mathbf{a}} + \frac{b\_p}{2\eta} \tilde{\phi}^2 + \frac{1}{2\gamma} \tilde{M}^2 \tag{22}$$

The derivative $\dot{V}$ is given by


$$\begin{split} \dot{V} &= \sum\_{i=1}^{n} z\_i \dot{z}\_i + \tilde{\mathbf{a}}^T \Gamma^{-1} \dot{\tilde{\mathbf{a}}} + \frac{b\_p}{\eta} \tilde{\phi}\dot{\tilde{\phi}} + \frac{1}{\gamma} \tilde{M} \dot{\tilde{M}} \\ &\leq -\sum\_{i=1}^{n} c\_i z\_i^2 + \tilde{\mathbf{a}}^T \Gamma^{-1} (\Gamma Y z\_n - \dot{\hat{\mathbf{a}}}) - \frac{b\_p}{\eta} \tilde{\phi} (\eta v\_1 z\_n + \dot{\hat{\phi}}) - |z\_n| \hat{M} + |z\_n| |d\_b[\upsilon](t)| + \frac{1}{\gamma} \tilde{M} \dot{\tilde{M}} \\ &\leq -\sum\_{i=1}^{n} c\_i z\_i^2 + \tilde{\mathbf{a}}^T \Gamma^{-1} (\Gamma Y z\_n - \dot{\hat{\mathbf{a}}}) - \frac{b\_p}{\eta} \tilde{\phi} (\eta v\_1 z\_n + \dot{\hat{\phi}}) + \frac{1}{\gamma} \tilde{M} (\gamma \left| z\_n \right| - \dot{\hat{M}}) \\ &= -\sum\_{i=1}^{n} c\_i z\_i^2 \end{split} \tag{23}$$

Equations (22) and (23) imply that $V$ is nonincreasing. Hence, the boundedness of the variables $z_1, z_2, \ldots, z_n$, $\hat{\phi}$, $\hat{\mathbf{a}}$, $\hat{M}$ is ensured. By applying the LaSalle-Yoshizawa Theorem (Krstic, et al, 1995, Theorem 2.1), it further follows that $z_i \to 0$, $i = 1, 2, \ldots, n$, as time goes to infinity, which implies $\lim_{t\to\infty}[x(t) - x_d(t)] = 0$.

We can prove the third statement of Theorem 1 in the following way. From (23), we know

$$\left\| z\_1 \right\|\_2^2 = \int\_{0}^{\infty} \left| z\_1(s) \right|^2 ds \le \frac{V(0) - V(\infty)}{c\_1} \le \frac{V(0)}{c\_1}$$

Noticing that $V(0) = \frac{1}{2}\tilde{\mathbf{a}}(0)^T\Gamma^{-1}\tilde{\mathbf{a}}(0) + \frac{b_p}{2\eta}\tilde{\phi}(0)^2 + \frac{1}{2\gamma}\tilde{M}(0)^2$ after setting $z_i(0) = 0$, $i = 1, 2, \ldots, n$, we hence have

$$\left\|\mathbf{x}(t) - \mathbf{x}\_d(t)\right\|\_2 \le \sqrt{\frac{\frac{1}{2}\tilde{\mathbf{a}}(0)^T\Gamma^{-1}\tilde{\mathbf{a}}(0) + \frac{b\_p}{2\eta}\tilde{\phi}(0)^2 + \frac{1}{2\gamma}\tilde{M}(0)^2}{c\_1}}\tag{24}$$

**Remark 4:** From (24), we know that the transient performance, in a computable explicit form, depends on the design parameters $c_1$, $\eta$, $\gamma$ and on the initial estimation errors $\tilde{\mathbf{a}}(0)$, $\tilde{\phi}(0)$, $\tilde{M}(0)$, which gives designers enough tuning freedom for transient performance.

### **Scheme II**

In the control scheme above, we notice that $\text{sgn}(z_n)$ is introduced into the controller during the design process, which makes the controller discontinuous and may cause undesirable chattering. An alternative smooth scheme is proposed to avoid possible chattering by resorting to the definition of a continuous sign function (Zhou et al, 2004). First, the definition of $\text{sg}_i(z_i)$ is introduced as follows:

$$\text{sg}\_i(z\_i) = \begin{cases} \dfrac{z\_i}{|z\_i|}, & |z\_i| \ge \delta\_i\\ \dfrac{z\_i}{|z\_i| + \left(\delta\_i^2 - z\_i^2\right)^{n-i+2}}, & |z\_i| < \delta\_i \end{cases} \tag{25}$$

where the design parameters $\delta_i$ ($i = 1, \ldots, n$) are positive. It can be shown that $\text{sg}_i(z_i)$ has derivatives up to order $(n-i+2)$. Hence we have

$$\text{sg}\_i(z\_i) f\_i(z\_i) = \begin{cases} 1, & z\_i \ge \delta\_i \\ 0, & \left| z\_i \right| < \delta\_i \\ -1, & z\_i \le -\delta\_i \end{cases}$$

where


$$f\_i(z\_i) = \begin{cases} 1, & \left| z\_i \right| \ge \delta\_i \\ 0, & \left| z\_i \right| < \delta\_i \end{cases}$$
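A minimal numerical check of these definitions is sketched below; the values of $\delta_i$ and of the exponent $n-i+2$ are arbitrary choices for the illustration.

```python
def sg(z, delta, power):
    # continuous sign function (25); power stands for n - i + 2
    if abs(z) >= delta:
        return z / abs(z)
    return z / (abs(z) + (delta ** 2 - z ** 2) ** power)

def f(z, delta):
    # switching function f_i: 1 when |z| >= delta, 0 otherwise
    return 1.0 if abs(z) >= delta else 0.0

delta, power = 0.5, 3
# sg_i * f_i realizes the three-valued switch used in Scheme II
assert sg(1.0, delta, power) * f(1.0, delta) == 1.0
assert sg(0.2, delta, power) * f(0.2, delta) == 0.0
assert sg(-1.0, delta, power) * f(-1.0, delta) == -1.0
# sg is continuous at |z| = delta, unlike sgn at 0
assert abs(sg(delta - 1e-9, delta, power) - sg(delta, delta, power)) < 1e-6
```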

Given the plant and the hysteresis model subject to the assumption above, we propose the following continuous controller:

$$\begin{aligned} v(t) &= \hat{\phi}(t)v\_1(t) \\ v\_1(t) &= -(c\_n + 1)(|z\_n| - \delta\_n) \text{sg}\_n(z\_n) - \hat{\mathbf{a}}^T Y - \text{sg}\_n(z\_n)\hat{M} + \mathbf{x}\_d^{(n)} + \dot{\alpha}\_{n-1} \\ \dot{\hat{\phi}}(t) &= -\eta v\_1(t)(|z\_n| - \delta\_n)f\_n \text{sg}\_n(z\_n) \\ \dot{\hat{\mathbf{a}}}(t) &= \Gamma Y(|z\_n| - \delta\_n)f\_n \text{sg}\_n(z\_n) \\ \dot{\hat{M}}(t) &= \gamma(|z\_n| - \delta\_n)f\_n \end{aligned} \tag{26}$$

where, similarly to Control Scheme I, $c_n$, $\eta$, and $\gamma$ are positive design parameters, $\Gamma$ is a positive-definite matrix, and $\alpha_{n-1}$ and the implicit $\alpha_{i-1}$, $i = 2, 3, \ldots, n-1$, in (26) will be designed in the proof of the following theorem for stability analysis.

**Theorem 2:** For the plant given in (2) with the hysteresis (8), subject to Assumption 1, the robust adaptive controller specified by (26) ensures that the following statements hold:

i. The resulting closed-loop system (2) and (8) is globally stable in the sense that all the signals of the closed-loop system are ultimately bounded;
ii. The tracking error can asymptotically reach $\delta_1$, i.e., $\lim_{t\to\infty}\left|x(t) - x_d(t)\right| \le \delta_1$;
iii. The transient tracking error can be explicitly specified by

$$\left\|\mathbf{x}(t) - \mathbf{x}\_d(t)\right\|\_2 \le \delta\_1 + \frac{1}{c\_1^{2n}} \left(\frac{1}{2}\tilde{\mathbf{a}}(0)^T \Gamma^{-1} \tilde{\mathbf{a}}(0) + \frac{b\_p}{2\eta}\tilde{\phi}(0)^2 + \frac{1}{2\gamma}\tilde{M}(0)^2\right)^{1/2n} \tag{27}$$

**Proof:** To guarantee the differentiability of the resultant functions, $z_i^2$ in the Lyapunov functions will be replaced by $(|z_i| - \delta_i)^{(n-i+2)} f_i(z_i)$ (as in Section 3.1), and $z_i$ in the design procedure detailed below will be replaced by $(|z_i| - \delta_i)^{n-i+1} \text{sg}_i(z_i)$, as was done in (Zhou et al, 2004).

**Step 1:** We choose a positive-definite function $V_1$ as

$$V\_1 = \frac{1}{n+1} (|z\_1| - \delta\_1)^{n+1} f\_1(z\_1)$$

and design the virtual controller $\alpha_1$ as

$$\alpha\_1 = -(c\_1 + k)(\left|z\_1\right| - \delta\_1)^{n} \text{sg}\_1(z\_1) - (\delta\_2 + 1) \text{sg}\_1(z\_1)\tag{28}$$

with a constant $k$ satisfying $0 < k \le \frac{1}{4}$ and a positive design parameter $c_1$, and then compute its time derivative by using (17) and (28):

$$\begin{split} \dot{V}\_1 &= \left( \left| z\_1 \right| - \delta\_1 \right)^n f\_1(z\_1) \text{sg}\_1(z\_1) \dot{z}\_1 \\ &\le - (c\_1 + k) (\left| z\_1 \right| - \delta\_1)^{2n} f\_1(z\_1) + (\left| z\_1 \right| - \delta\_1)^n (\left| z\_2 \right| - \delta\_2 - 1) f\_1(z\_1) \end{split} \tag{29}$$

**Step 2:** We choose a positive-definite function $V_2$ as

$$V\_2 = V\_1 + \frac{1}{n}(|z\_2| - \delta\_2)^n f\_2(z\_2) \; ,$$

and design virtual controller α<sup>2</sup> as

$$\alpha\_2 = -(c\_2 + k + 1)(\left|z\_2\right| - \delta\_2)^{n-1} s \mathbf{g}\_2(z\_2) + \dot{\alpha}\_1 - (\delta\_3 + 1) s \mathbf{g}\_2(z\_2) \tag{30}$$

with a positive design parameter $c_2$, then compute its time derivative:

$$\begin{aligned} \dot{V}\_2 &\leq -\sum\_{i=1}^2 c\_i (|z\_i| - \delta\_i)^{2(n-i+1)} f\_i(z\_i) - k (|z\_1| - \delta\_1)^{2n} f\_1(z\_1) + (|z\_1| - \delta\_1)^n (|z\_2| - \delta\_2 - 1) f\_1(z\_1) \\ &- (|z\_2| - \delta\_2)^{2(n-1)} f\_2(z\_2) + (|z\_2| - \delta\_2)^{n-1} (|z\_3| - \delta\_3 - 1) f\_2(z\_2) \end{aligned}$$

By using the inequality $2ab \le a^2 + b^2$, we have

$$\begin{aligned} \dot{V}\_2 &\le -\sum\_{i=1}^2 c\_i (\left| z\_i \right| - \delta\_i)^{2(n-i+1)} f\_i(z\_i) + \frac{1}{4k} (\left| z\_2 \right| - \delta\_2 - 1)^2 \\ &- (\left| z\_2 \right| - \delta\_2)^{2(n-1)} f\_2(z\_2) + (\left| z\_2 \right| - \delta\_2)^{n-1} (\left| z\_3 \right| - \delta\_3 - 1) f\_2(z\_2) \end{aligned}$$

For both cases $|z_2| \ge \delta_2 + 1$ and $|z_2| < \delta_2 + 1$, we can conclude that

$$\dot{V}\_2 \le -\sum\_{i=1}^{2} c\_i (\left| z\_i \right| - \delta\_i)^{2(n-i+1)} f\_i(z\_i) + (\left| z\_2 \right| - \delta\_2)^{n-1} (\left| z\_3 \right| - \delta\_3 - 1) f\_2(z\_2) \tag{31}$$

**Step n:** Following this procedure step by step, we can derive the real control

$$\begin{aligned} v(t) &= \hat{\phi}(t)v\_1(t) \\ v\_1(t) &= -(c\_n + 1)(|z\_n| - \delta\_n) \text{sg}\_n(z\_n) - \hat{\mathbf{a}}^T Y - \text{sg}\_n(z\_n)\hat{M} + \mathbf{x}\_d^{(n)} + \dot{\alpha}\_{n-1} \\ \dot{\hat{\phi}}(t) &= -\eta v\_1(t)(|z\_n| - \delta\_n)f\_n \text{sg}\_n(z\_n) \\ \dot{\hat{\mathbf{a}}}(t) &= \Gamma Y(|z\_n| - \delta\_n)f\_n \text{sg}\_n(z\_n) \\ \dot{\hat{M}}(t) &= \gamma(|z\_n| - \delta\_n)f\_n \end{aligned} \tag{32}$$

where $\alpha_{n-1}$ can be obtained from the common form of the virtual controllers $\alpha_i = -(c_i + k + 1)(|z_i| - \delta_i)^{n-i+1}\text{sg}_i(z_i) + \dot{\alpha}_{i-1} - (\delta_{i+1} + 1)\text{sg}_i(z_i)$, $(i = 3, \ldots, n-1)$, with positive design parameters $c_i$.

We define a positive-definite function as


$$V = \sum\_{i=1}^{n} \frac{1}{n - i + 2} (|z\_i| - \delta\_i)^{(n - i + 2)} f\_i(z\_i) + \frac{1}{2} \tilde{\mathbf{a}}^T \Gamma^{-1} \tilde{\mathbf{a}} + \frac{b\_p}{2\eta} \tilde{\phi}^2 + \frac{1}{2\gamma} \tilde{M}^2$$

and compute its time derivative by using (13), (28), (30) and (32),

$$\begin{aligned} \dot{V} &\le -\sum_{i=1}^{n} c_i(|z_i| - \delta_i)^{2(n-i+1)} f_i(z_i) + \tilde{\mathbf{a}}^T\!\left[\Gamma^{-1}\dot{\hat{\mathbf{a}}} - Y(|z_n| - \delta_n)f_n\,\mathrm{sg}(z_n)\right] \\ &\quad + \frac{b_p}{\eta}\,\tilde{\phi}\left[\dot{\hat{\phi}} + \eta v_1(t)(|z_n| - \delta_n)f_n\,\mathrm{sg}(z_n)\right] + \frac{1}{\gamma}\,\tilde{M}\left[\dot{\hat{M}} - \gamma(|z_n| - \delta_n)f_n\right] \\ &= -\sum_{i=1}^{n} c_i(|z_i| - \delta_i)^{2(n-i+1)} f_i(z_i) \end{aligned}$$

Thus we have proved the first statement of the theorem. The remaining statements can be proved by following the proof of Theorem 1 and are omitted here to save space.

**Remark 5:** It is now clear that the two proposed control schemes for mitigating the hysteresis nonlinearities can be applied to many systems and are not necessarily limited to the system (2). However, we should emphasize that our goal is to show the fusion of the hysteresis model with available control techniques in a simple setting that reveals its essential features.

## **5. Simulation results**

In this section, we illustrate the methodologies presented in the previous sections using a simple nonlinear system (Su et al., 2000; Zhou et al., 2004) described by

$$\dot{x} = a\,\frac{1 - e^{-x(t)}}{1 + e^{-x(t)}} + bw(t) \tag{33}$$

where $w$ represents the output of the hysteresis nonlinearity. The actual parameter values are $a = 1$ and $b = 1$. Without control, i.e., $w(t) = 0$, (33) is unstable, because $\dot{x}(t) = (1 - e^{-x(t)})/(1 + e^{-x(t)}) > 0$ for $x > 0$, and $\dot{x}(t) = (1 - e^{-x(t)})/(1 + e^{-x(t)}) < 0$ for $x < 0$. The objective is to control the system state $x$ to follow the desired trajectory $x_d = 12.5\sin(2.3t)$.
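The instability argument above is easy to check numerically. The sketch below is our own illustration (the forward-Euler step size and time horizon are arbitrary choices, not taken from the chapter); it integrates (33) with $a = b = 1$ and no control, $w(t) = 0$:

```python
import math

def x_dot(x, a=1.0, b=1.0, w=0.0):
    # Right-hand side of (33): x_dot = a*(1 - e^{-x})/(1 + e^{-x}) + b*w
    return a * (1.0 - math.exp(-x)) / (1.0 + math.exp(-x)) + b * w

def simulate(x0, t_end=5.0, dt=1e-3):
    # Forward-Euler integration of (33) with w(t) = 0 (uncontrolled system).
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * x_dot(x)
    return x

# x_dot > 0 for x > 0 and x_dot < 0 for x < 0, so any nonzero initial
# state drifts away from the origin: the uncontrolled system is unstable.
print(simulate(0.1), simulate(-0.1))
```

For small $|x|$ the right-hand side behaves like $x/2$, so a perturbation initially grows roughly exponentially before the growth rate saturates at $\pm a$.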

In the simulations, the robust adaptive control law (19) of Scheme I was used, taking $c_1 = 0.9$, $\gamma = 0.2$, $\eta = 0.1$, $\Gamma = 0.1$, $\hat{\phi}(0) = 0.8/3$, $\hat{M}(0) = 2$, $\hat{x}(0) = 3.05$, $v(0) = 0$, $B_1 = 0.505$, and

Robust Control of Nonlinear Systems with Hysteresis Based on Play-Like Operators 437

$p(r) = e^{-6.7(0.1r - 1)^2}$ for $r \in (0, 50]$. The simulation results presented in Figure 3 compare the system tracking errors for the proposed control Scheme I and for the scenario that ignores the effects of the hysteresis. For Scheme II, we choose the same initial values as before and $\delta = 0.35$. The simulation results presented in Figure 4 compare the system tracking errors for the proposed control Scheme II and for the scenario that ignores the effects of the hysteresis. Clearly, all the simulation results verify the proposed schemes and show their effectiveness.

Fig. 3. Tracking errors -- control Scheme I (solid line) and the scenario without considering hysteresis effects (dotted line)

Fig. 4. Tracking errors -- control Scheme II (solid line) and the scenario without considering hysteresis effects (dotted line)
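To make the play-based construction concrete, the following sketch superposes classical play (backlash) operators weighted by the density $p(r) = e^{-6.7(0.1r-1)^2}$ used in the simulations above. The classical play recursion and the uniform threshold grid on (0, 50] are our illustrative stand-ins for the chapter's play-like operators, not the authors' implementation:

```python
import math

def play_step(v, z_prev, r):
    # Classical play (backlash) operator with threshold r: the state z
    # follows the input v but lags inside a band of half-width r.
    return max(v - r, min(v + r, z_prev))

def pi_like_output(v_samples, radii):
    # Weighted superposition of play operators, using the density
    # p(r) = exp(-6.7*(0.1*r - 1)^2) from the simulation section.
    weights = [math.exp(-6.7 * (0.1 * r - 1.0) ** 2) for r in radii]
    states = [0.0] * len(radii)
    w_out = []
    for v in v_samples:
        states = [play_step(v, z, r) for z, r in zip(states, radii)]
        w_out.append(sum(p * z for p, z in zip(weights, states)))
    return w_out

# Drive the model with a sine input of amplitude 12.5 and frequency 2.3,
# sampled at 1 ms, as in the tracking problem above.
radii = [0.5 * k for k in range(1, 101)]            # grid on (0, 50]
v = [12.5 * math.sin(2.3 * 0.001 * k) for k in range(5000)]
w = pi_like_output(v, radii)
```

Sampling the output at two instants where the input takes (almost) the same value, once on the up-sweep and once on the down-sweep, gives clearly different outputs: this is the hysteresis loop that the control schemes have to mitigate.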

## **6. Conclusion**

436 Recent Advances in Robust Control – Novel Approaches and Design Methods


We have, for the first time, constructed a class of new hysteresis models based on play-like operators, named the Prandtl-Ishlinskii-like model, in which the play-like operators serve as building blocks. We have proposed two control schemes that accomplish robust adaptive control tasks for a class of nonlinear systems preceded by Prandtl-Ishlinskii-like models: they not only ensure stabilization and tracking of the hysteretic dynamic nonlinear systems, but also yield the transient performance, in terms of the $L_2$ norm of the tracking error, as an explicit function of the design parameters. By proposing the Prandtl-Ishlinskii-like model and using the backstepping technique, this paper has addressed the challenge of how to fuse a suitable hysteresis model with available robust adaptive techniques to mitigate the effects of hysteresis while avoiding the construction of a complicated inverse operator of the hysteresis model. Following this preliminary result, the idea in this paper is being further explored to deal with a class of perturbed strict-feedback nonlinear systems with unknown control directions preceded by this new hysteresis model.

## **7. Acknowledgement**

This work was supported by an NSERC grant, the National Natural Science Foundation of China (61004009, 61020106003), the Doctoral Fund of the Ministry of Education of China (20100042120033), and the Fundamental Research Funds for the Central Universities (N100408004, N100708001).

## **8. References**

Banks, H. T. & Smith, R. C. (2000). Hysteresis modeling in smart material systems, *J. Appl. Mech. Eng*, Vol. 5, pp. 31-45.

Brokate, M. & Sprekels, J. (1996). *Hysteresis and Phase Transitions*, New York: Springer-Verlag.

Ekanayake, D. & Iyer, V. (2008). Study of a Play-like Operator, *Physica B: Condensed Matter*, Vol. 403, No. 2-3, pp. 456-459.

Fu, J.; Xie, W. F. & Su, C. Y. (2007). Practically Adaptive Output Tracking Control of Inherently Nonlinear Systems Preceded by Unknown Hysteresis, *Proc. of the 46th IEEE Conference on Decision and Control*, pp. 1326-1331, New Orleans, LA, USA.

Isidori, A. (1989). *Nonlinear Control Systems: An Introduction*, 2nd ed., Berlin, Germany: Springer-Verlag.

Iyer, R.; Tan, X. & Krishnaprasad, P. (2005). Approximate Inversion of the Preisach Hysteresis Operator with Application to Control of Smart Actuators, *IEEE Transactions on Automatic Control*, Vol. 50, No. 6, pp. 798-810.

Krasnosel'skii, M. A. & Pokrovskii, A. V. (1983). *Systems with Hysteresis*, Moscow, Russia: Nauka.

Krejci, P. (1996). *Hysteresis, Convexity and Dissipation in Hyperbolic Equations*, Gakuto Int. Series Math. Sci. & Appl., Vol. 8, Gakkotosho, Tokyo.

Krstic, M.; Kanellakopoulos, I. & Kokotovic, P. (1995). *Nonlinear and Adaptive Control Design*, New York: Wiley.

Su, C. Y.; Stepanenko, Y.; Svoboda, J. & Leung, T. P. (2000). Robust Adaptive Control of a Class of Nonlinear Systems with Unknown Backlash-Like Hysteresis, *IEEE Transactions on Automatic Control*, Vol. 45, No. 12, pp. 2427-2432.

Tan, X. & Baras, J. S. (2004). Modelling and Control of Hysteresis in Magnetostrictive Actuators, *Automatica*, Vol. 40, No. 9, pp. 1469-1480.

Tan, X. & Baras, J. S. (2005). Adaptive Identification and Control of Hysteresis in Smart Materials, *IEEE Transactions on Automatic Control*, Vol. 50, No. 6, pp. 827-839.

Tan, X. & Bennani, O. (2008). Fast Inverse Compensation of Preisach-Type Hysteresis Operators Using Field-Programmable Gate Arrays, *Proceedings of the American Control Conference*, pp. 2365-2370, Seattle, USA.

Tao, G. & Kokotovic, P. V. (1995). Adaptive Control of Plants with Unknown Hysteresis, *IEEE Transactions on Automatic Control*, Vol. 40, No. 2, pp. 200-212.

Tao, G. & Lewis, F. (Eds.) (2001). *Adaptive Control of Nonsmooth Dynamic Systems*, New York: Springer-Verlag.

Wen, C. & Zhou, J. (2007). Decentralized Adaptive Stabilization in the Presence of Unknown Backlash-Like Hysteresis, *Automatica*, Vol. 43, No. 3, pp. 426-440.

Zhou, J.; Wen, C. & Zhang, Y. (2004). Adaptive Backstepping Control of a Class of Uncertain Nonlinear Systems with Unknown Backlash-Like Hysteresis, *IEEE Transactions on Automatic Control*, Vol. 49, No. 10, pp. 1751-1757.


**20** 



## **Identification of Linearized Models and Robust Control of Physical Systems**

Rajamani Doraiswami1 and Lahouari Cheded2

*1Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, Canada*
*2Department of Systems Engineering, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia*

## **1. Introduction**



This chapter presents the design of a controller that ensures both the robust stability and robust performance of a physical plant using a linearized identified model. The structure of the plant and the statistics of the noise and disturbances affecting the plant are assumed to be unknown. As the design of the robust controller relies on the availability of a plant model, the mathematical model of the plant is first identified, and the identified model, termed here the nominal model, is then employed in the controller design. As an effective design of the robust controller relies heavily on an accurately identified model of the plant, a reliable identification scheme is developed here to handle unknown model structures and statistics of the noise and disturbances. Using a mixed-sensitivity *H*∞ optimization framework, a robust controller is designed with the plant uncertainty modeled by additive perturbations in the numerator and denominator polynomials of the identified plant model. The proposed identification and robust controller design are evaluated extensively on simulated systems as well as on two laboratory-scale physical systems, namely the magnetic levitation and two-tank liquid level systems.

In order to appreciate the importance of the identification stage and the interplay between this stage and the robust controller design stage, let us first consider a model of an electro-mechanical system formed of a DC motor relating the input voltage to the armature and the output angular velocity. Based on the physical laws, it is a third-order closed-loop system formed of fast electrical and slow mechanical subsystems. It is very difficult to identify the fast dynamics of this system, and hence the identified model will be of second order while the true order remains three. Besides this error in the model order, there may also be errors in the estimated model parameters.

Consider now the problem of designing a controller for this electro-mechanical system. A constant-gain controller based on the identified second-order model will be stable for all values of the gain as long as negative feedback is used. If, however, the constant-gain controller is implemented on the physical system, the true closed-loop third-order system may not be stable for large values of the controller gain. This simple example clearly shows the disparity between the performance of the identified system and the real one, and hence provides a strong motivation for designing a robust controller which factors uncertainties in the model.
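The gain argument in the DC-motor example can be made concrete with a Routh-Hurwitz check. The transfer functions, time constants and gain values below are hypothetical stand-ins of ours (the chapter gives no numbers); we compare a second-order identified model against a third-order "true" plant under constant-gain negative feedback:

```python
tau_m, tau_e = 1.0, 0.05   # hypothetical mechanical and electrical time constants

def stable_2nd(a, b, c):
    # Routh-Hurwitz: a*s^2 + b*s + c is Hurwitz iff all coefficients are positive.
    return a > 0 and b > 0 and c > 0

def stable_3rd(a, b, c, d):
    # Routh-Hurwitz: a*s^3 + b*s^2 + c*s + d is Hurwitz iff all coefficients
    # are positive and b*c > a*d.
    return a > 0 and b > 0 and c > 0 and d > 0 and b * c > a * d

def identified_model_stable(K):
    # Identified 2nd-order model G(s) = 1/(s*(tau_m*s + 1)) under gain K:
    # characteristic polynomial tau_m*s^2 + s + K -- stable for every K > 0.
    return stable_2nd(tau_m, 1.0, K)

def true_plant_stable(K):
    # True 3rd-order plant G(s) = 1/(s*(tau_m*s + 1)*(tau_e*s + 1)):
    # characteristic polynomial tau_e*tau_m*s^3 + (tau_e + tau_m)*s^2 + s + K.
    return stable_3rd(tau_e * tau_m, tau_e + tau_m, 1.0, K)

# The identified model predicts stability for any positive gain, but the
# true closed loop goes unstable once K exceeds (tau_e + tau_m)/(tau_e*tau_m) = 21.
print(identified_model_stable(100.0), true_plant_stable(100.0))  # True False
```

Here the identified model predicts stability for every positive gain, while the true loop becomes unstable for large gains: exactly the disparity used above to motivate a robust design.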

Identification of Linearized Models and Robust Control of Physical Systems 441


A physical system, in general, is formed of cascade, parallel and feedback combinations of many subsystems. It may be highly complex, be of high order, and its structure may be different from the one derived from the physical laws governing its behavior. The identified model of a system is at best an approximation of the real system because of the many difficulties encountered and assumptions made in completely capturing its dynamical behavior. Factors such as the presence of noise and disturbances affecting the input and the output, the lack of persistency of excitation, and a finite number of input-output samples all contribute to the amount of uncertainty in the identified model. As a result of this, high-frequency behavior including fast dynamics may go un-captured in the identified model. The performance of the closed-loop system formed of a physical plant and a controller depends critically upon the quality of the identified model. Relying solely on the robustness of the controller to overcome the uncertainties of the identified plant will result in a poor performance. Generally, popular controllers such as proportional (P), proportional-integral (PI) or proportional-integral-derivative (PID) controllers are employed in practice as they are simple, intuitive and easy to use, and their parameters can be tuned online. When these controllers are designed using the identified model and implemented on the physical system, there is no guarantee that the closed-loop system will be stable, let alone meet the performance requirements. The design of controllers using identified models to ensure robust stability has therefore become increasingly important in recent times.
In (Cerone, Milanese, and Regruto, 2009), an interesting iterative scheme is proposed which consists of first identifying the plant and employing the identified model to design a robust controller, then implementing the designed controller on the real plant and evaluating its performance on the actual closed-loop system. However, it is difficult to establish whether this identify-control-implement-evaluate scheme will converge, and even if it does, whether it will converge to an optimal robust controller. In this work, each of these issues, namely the identification, the controller design and its implementation on an actual system, is addressed separately with the clear objective of developing a reliable identification scheme, so that the identified model will be close to the true model, hence yielding a reliable controller design scheme which will produce a controller that is robust enough to ensure both stability and robust performance of the actual closed-loop system. Crucial issues in the identification of physical systems include the unknown order of the model, the partially or totally unknown statistics of the noise and disturbances affecting the data, and the fact that the plant is operating in a closed-loop configuration. To tackle these issues, a number of schemes designed to (a) attenuate the effect of unknown noise and disturbances (Doraiswami, 2005), (b) reliably select the model order of the identified system (Doraiswami, Cheded, and Khalid, 2010) and (c) identify a plant operating in a closed loop (Shahab and Doraiswami, 2009) have been developed and are presented here for completeness. The model uncertainty associated with the identified model is itself modeled as additive perturbations in both the plant numerator and denominator polynomials, so as to develop robust controllers using the mixed-sensitivity *H∞* controller design procedure (Kwakernaak, 1993).
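To give a flavor of the model-order selection problem, the sketch below fits ARX models of increasing order by ordinary least squares to data from a simulated second-order plant and compares their one-step prediction errors. The plant, the noise level and this simple fit-error criterion are hypothetical stand-ins of ours, not the criterion of (Doraiswami, Cheded, and Khalid, 2010):

```python
import random

def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def regressor(y, u, k, n):
    # ARX regressor phi_k = [-y[k-1], ..., -y[k-n], u[k-1], ..., u[k-n]].
    return [-y[k - i] for i in range(1, n + 1)] + [u[k - i] for i in range(1, n + 1)]

def fit_and_score(y, u, n):
    # Least-squares ARX fit of order n via the normal equations;
    # returns the mean squared one-step prediction error.
    rows = [regressor(y, u, k, n) for k in range(n, len(y))]
    t = y[n:]
    p = 2 * n
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * tk for r, tk in zip(rows, t)) for i in range(p)]
    theta = solve(A, b)
    res = [tk - sum(ri * th for ri, th in zip(r, theta)) for r, tk in zip(rows, t)]
    return sum(e * e for e in res) / len(res)

# Second-order "true" plant y[k] = 1.5*y[k-1] - 0.7*y[k-2] + u[k-1] + noise.
rng = random.Random(1)
u = [rng.uniform(-1, 1) for _ in range(500)]
y = [0.0, 0.0]
for k in range(2, 500):
    y.append(1.5 * y[k - 1] - 0.7 * y[k - 2] + u[k - 1] + 0.05 * rng.gauss(0, 1))

scores = {n: fit_and_score(y, u, n) for n in (1, 2, 3)}
```

The mean squared error drops sharply when the fitted order reaches the true order (two) and then flattens; this kind of knee is what a practical order-selection criterion looks for.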
The mixed-sensitivity *H*∞ control design procedure conservatively combines and simultaneously solves both problems of robust stability and robust performance using a single *H*∞ norm. This design procedure is sound, mature, focuses on handling the problem of controller design when the plant model is uncertain, and has been successfully employed in practice in recent years (Cerone, Milanese, and Regruto, 2009), (Tan, Marquez, Chen, and Gooden,

2001). The proposed scheme is extensively tested on both simulated systems and physical laboratory-scale systems, namely a magnetic levitation system and a two-tank liquid level system.

The key contribution herein is to demonstrate the efficacy of (a) the proposed model order selection criterion to reduce the uncertainty in the plant model structure, a criterion which is simple, verifiable and reliable, (b) the two-stage closed-loop identification scheme which ensures the quality of the identification performance, and (c) the mixed-sensitivity optimization technique in the *H*∞ framework to meet the control objectives of robust performance and robust stability without violating the physical constraints imposed by components such as actuators, and in the face of uncertainties that stem from the identified model employed in the design of the robust controller. It should be noted here that the identified model used in the design of the robust controller is the linearized model of the physical system at some operating point, termed the nominal model.

The chapter is structured as follows. Section 2 discusses the stability and performance of a typical closed-loop system. In Section 3, the robust performance and robust stability problems are considered in the mixed-sensitivity *H*∞ framework. Section 4 discusses the problem of designing a robust controller using the identified model, with illustrative examples. Section 5 gives a detailed description of the complete identification scheme used to select the model order and identify the plant in a closed-loop configuration, in the presence of unknown noise and disturbances. Finally, in Section 6, evaluations of the designed robust controllers on two laboratory-scale systems are presented.

## **2. Stability and performance of a closed-loop system**

An important objective of the control system is to ensure that the output of the system tracks a given reference input signal in the face of both noise and disturbances affecting the system, and the plant model uncertainty. A further objective of the control system is to ensure that the performance of the system meets the desired time-domain and frequency-domain specifications such as the rise time, settling time, overshoot, bandwidth, and peak of the magnitude frequency response, while respecting the constraints on the control input and other variables. An issue of paramount practical importance facing the control engineer is how to design a controller which will both stabilize the plant when its model is uncertain and ensure that its performance specifications are all met. Put succinctly, we seek a controller that will ensure both stability and performance robustness in the face of model uncertainties. To achieve this dual purpose, we need to first introduce some analytical tools, as described next.

### **2.1 Key sensitivity functions**

Consider the typical closed-loop system shown in Fig. 1, where *G*<sub>0</sub> is the nominal plant and *C*<sub>0</sub> the controller that stabilizes the nominal plant *G*<sub>0</sub>; *r* and *y* are the reference input and the output, respectively; *d<sub>i</sub>* and *d*<sub>0</sub> are the disturbances at the plant input and plant output, respectively, and *v* is the measurement or sensor noise. The nominal model, heretofore referred to as the identified model, represents a mathematical model of a physical plant obtained from physical reasoning and experimental data.

Let *w* and *z* be, respectively, a (4x1) input vector comprising *r*, *d*<sup>i</sup>, *d*<sup>0</sup> and *v*, and a (3x1) output vector formed of the tracking error *e*, the control input *u*, and the plant output *y*, as given below:

Identification of Linearized Models and Robust Control of Physical Systems 443


Fig. 1. A typical control system

$$w = \begin{bmatrix} r & d\_i & d\_0 & v \end{bmatrix}^T \tag{1}$$

$$z = \begin{bmatrix} e & u & y \end{bmatrix}^T \tag{2}$$

The four key closed-loop transfer functions which play a significant role in the stability and performance of a control system are the four sensitivity functions for the nominal plant and nominal controller. They are the system's sensitivity *S*<sup>0</sup> , the input-disturbance sensitivity *Si*<sup>0</sup> , the control sensitivity *Su*<sup>0</sup> and the complementary sensitivity *T*<sup>0</sup> , given by:

$$S\_0 = \frac{1}{1 + G\_0 C\_0}, \quad S\_{i0} = \frac{G\_0}{1 + G\_0 C\_0} = S\_0 G\_0, \quad S\_{u0} = \frac{C\_0}{1 + G\_0 C\_0} = S\_0 C\_0, \quad T\_0 = \frac{G\_0 C\_0}{1 + G\_0 C\_0} \tag{3}$$
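These four functions are tightly coupled: by construction, *S*<sup>0</sup> + *T*<sup>0</sup> = 1 at every frequency, so sensitivity and complementary sensitivity cannot both be made small in the same band. A minimal numerical sketch of Equation (3), assuming a purely illustrative first-order plant *G*<sup>0</sup>(*s*) = 1/(*s* + 1) and proportional controller *C*<sup>0</sup> = 2 (neither comes from the chapter):

```python
# Sensitivity functions of Eq. (3), evaluated on the imaginary axis.
# G0(s) = 1/(s+1) and C0(s) = 2 are illustrative choices only.

def G0(s):
    return 1.0 / (s + 1.0)

def C0(s):
    return 2.0

def sensitivities(s):
    L = G0(s) * C0(s)        # loop gain G0*C0
    S0 = 1.0 / (1.0 + L)     # sensitivity
    Si0 = S0 * G0(s)         # input-disturbance sensitivity
    Su0 = S0 * C0(s)         # control sensitivity
    T0 = L / (1.0 + L)       # complementary sensitivity
    return S0, Si0, Su0, T0

# S0 + T0 = 1 at every frequency: S0 and T0 cannot both be small
# in the same frequency band.
for w in (0.1, 1.0, 10.0):
    S0_, _, _, T0_ = sensitivities(1j * w)
    assert abs(S0_ + T0_ - 1.0) < 1e-12
```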

The performance objective of a control system is to regulate the tracking error *e* = *r* − *y* so that the steady-state tracking error is acceptable and its transient response meets the time- and frequency-domain specifications while respecting the physical constraints on the control input so that, for example, the actuator does not get saturated. The outputs to be regulated, namely *e* and *u*, are given by:

$$e = S\_0(r - d\_0) + T\_0 v - S\_{i0} d\_i \tag{4}$$

$$u = S\_{u0}(r - v - d\_0) - T\_0 d\_i \tag{5}$$

The transfer matrix relating *w* to *z* is then given by:

$$\begin{bmatrix} e \\ u \end{bmatrix} = \begin{bmatrix} S\_0 & -S\_{i0} & -S\_0 & T\_0 \\ S\_{u0} & -T\_0 & -S\_{u0} & -S\_{u0} \end{bmatrix} \begin{bmatrix} r \\ d\_i \\ d\_0 \\ v \end{bmatrix} \tag{6}$$
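As a consistency check, each row of the transfer matrix in Equation (6) should reproduce Equations (4) and (5). A sketch with an illustrative loop (first-order plant, proportional controller; assumed values, not the chapter's plant):

```python
# Cross-check that the rows of the transfer matrix in Eq. (6)
# reproduce Eqs. (4) and (5).  The plant G0(s) = 1/(s+1) and the
# controller C0 = 2 are illustrative values only.

def closed_loop_outputs(s, r, d_i, d_0, v):
    G = 1.0 / (s + 1.0)
    C = 2.0
    S0 = 1.0 / (1.0 + G * C)
    Si0, Su0, T0 = S0 * G, S0 * C, 1.0 - S0
    # Row 1 of Eq. (6), identical to Eq. (4):
    e = S0 * r - Si0 * d_i - S0 * d_0 + T0 * v
    # Row 2 of Eq. (6), identical to Eq. (5):
    u = Su0 * r - T0 * d_i - Su0 * d_0 - Su0 * v
    # Eqs. (4) and (5) written directly, for comparison:
    assert abs(e - (S0 * (r - d_0) + T0 * v - Si0 * d_i)) < 1e-12
    assert abs(u - (Su0 * (r - v - d_0) - T0 * d_i)) < 1e-12
    return e, u

e, u = closed_loop_outputs(1j, 1.0, 0.5, -0.3, 0.2)
```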

### **2.2 Stability and performance**

One cannot reliably assert the stability of the closed loop by merely analyzing only one of the four sensitivity functions, such as the closed-loop transfer function *T*<sup>0</sup>(*s*), because there may be an implicit pole/zero cancellation wherein the unstable poles of the plant (or the controller) are cancelled by the zeros of the controller (or the plant). The cancellation of unstable poles may result in an unbounded output response in the time domain.

442 Recent Advances in Robust Control – Novel Approaches and Design Methods

In order to ensure that there is no unstable pole-zero cancellation, a more rigorous notion of stability, termed internal stability, needs to be defined. The closed-loop system is internally stable if and only if all eight transfer function elements of the transfer matrix of Equation (6) are stable. Since there are only four distinct sensitivity functions, *S*<sup>0</sup>, *Si*<sup>0</sup>, *Su*<sup>0</sup> and *T*<sup>0</sup>, the closed-loop system is internally stable if and only if these four sensitivity functions are all stable. Since all these sensitivity functions have a common denominator (1 + *G*<sup>0</sup>*C*<sup>0</sup>), the characteristic polynomial *φ*<sup>0</sup>(*s*) of the closed-loop system is:

$$
\varphi\_0(\mathbf{s}) = N\_{p0}(\mathbf{s}) D\_{c0}(\mathbf{s}) + D\_{p0}(\mathbf{s}) N\_{c0}(\mathbf{s}) \tag{7}
$$

where *N*<sup>p0</sup>(*s*), *D*<sup>p0</sup>(*s*) and *N*<sup>c0</sup>(*s*), *D*<sup>c0</sup>(*s*) are the numerator and denominator polynomials of *G*<sup>0</sup>(*s*) and *C*<sup>0</sup>(*s*), respectively. One may express internal stability in terms of the roots of the characteristic polynomial as follows.

**Lemma 1** (Goodwin, Graebe, and Salgado, 2001): The closed-loop system is internally stable if and only if the roots of *φ*<sup>0</sup>(*s*) all lie in the open left half of the *s*-plane.
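Lemma 1 can be applied directly to low-order loops. A sketch assuming an illustrative integrating plant *G*<sup>0</sup>(*s*) = 1/(*s*(*s* + 1)) and proportional controller *C*<sup>0</sup> = *K* (an example of ours, not the chapter's plant), for which Equation (7) gives *φ*<sup>0</sup>(*s*) = *s*<sup>2</sup> + *s* + *K*:

```python
import cmath

# Lemma 1 checked directly for an illustrative second-order loop:
#   G0(s) = 1/(s(s+1))  ->  N_p0 = 1, D_p0(s) = s^2 + s
#   C0(s) = K           ->  N_c0 = K, D_c0(s) = 1
# so Eq. (7) gives the characteristic polynomial phi0(s) = s^2 + s + K.

def internally_stable(K):
    # Roots of s^2 + s + K via the quadratic formula.
    disc = cmath.sqrt(1.0 - 4.0 * K)
    roots = ((-1.0 + disc) / 2.0, (-1.0 - disc) / 2.0)
    # Internal stability <=> every root in the open left-half s-plane.
    return all(r.real < 0.0 for r in roots)

assert internally_stable(0.5)        # positive gain: both roots in the LHP
assert not internally_stable(0.0)    # K = 0: a root at s = 0 (not open LHP)
assert not internally_stable(-1.0)   # negative gain: a right-half-plane root
```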

We will now focus on the performance of the closed-loop system by analyzing the closed-loop transfer matrix given by Equation (6). We will focus on the tracking error *e* for performance, and on the control input *u* for actuator saturation:

• The tracking error *e* is small if **(a)** *S*<sup>0</sup> is small in the frequency range where *r* and *d*<sup>0</sup> are large, **(b)** *Si*<sup>0</sup> is small in the frequency range where *d*<sup>i</sup> is large, and **(c)** *T*<sup>0</sup> is small in the frequency range where *v* is large.

• The control input *u* is small if **(a)** *Su*<sup>0</sup> is small in the frequency range where *r*, *d*<sup>0</sup> and *v* are large, and **(b)** *T*<sup>0</sup> is small in the frequency range where *d*<sup>i</sup> is large.

Thus the performance requirement must respect the physical constraint that requires the control input to be small so that the actuator does not get saturated.

## **3. Robust stability and performance**

Model uncertainty stems from the fact that it is very difficult to obtain a mathematical model that can capture completely the behavior of a physical system and which is relevant for the intended application. One may use physical laws to obtain the structure of a mathematical model of a physical system, with the parameters of this model obtained using system identification techniques. However, in practice, the structure as well as the parameters need to be identified from the input-output data, as the structure derived from the physical laws may not capture adequately the behavior of the system or, in the extreme case, the physical laws may not be known. The "true" model is a more comprehensive model that contains features not captured by the identified model, and is relevant to the application at hand, such as controller design, fault diagnosis, and condition monitoring. The difference between the nominal and true models is termed the modeling error, which includes the following:

• The structure of the nominal model, which differs from that of the true model as a result of our inability to identify features such as high-frequency behavior and fast subsystem dynamics, and of the approximation of infinite-dimensional systems by finite-dimensional ones.

• Errors in the estimates of the numerator and denominator coefficients, and in the estimate of the time delay.




• The deliberate negligence of fast dynamics to simplify sub-systems' models. This will yield a system model that is simple, yet capable enough to capture the relevant features that would facilitate the intended design.

### **3.1 Co-prime factor-based uncertainty model**

The numerator-denominator perturbation model considers the perturbation in the numerator and denominator polynomials separately, instead of lumping them together as a single perturbation of the overall transfer function. This perturbation model is useful in applications where an estimate of the model is obtained using system identification methods such as the best least-squares fit between the actual output and its estimate obtained from an assumed mathematical model. Further, an estimate of the perturbation on the numerator and denominator coefficients may be computed from the data matrix and the noise variance. Let *G*<sup>0</sup> and *G* be respectively the nominal and actual SISO rational transfer functions. The normalized co-prime factorization in this case is given by

$$\begin{aligned} G\_0 &= N\_0 D\_0^{-1} \\ G &= N D^{-1} \end{aligned} \tag{8}$$

where *N*<sup>0</sup> and *N* are the numerator polynomials, and both *D*0 and *D* the denominator polynomials. In terms of the nominal numerator and denominator polynomials, the transfer function *G* is given by:

$$G = \left(N\_0 + \Delta\_N\right) \left(D\_0 + \Delta\_D\right)^{-1} \tag{9}$$

where Δ*<sup>N</sup>* and Δ*<sup>D</sup>* ∈ *RH*<sup>∞</sup> are respectively the frequency-dependent perturbations in the numerator and denominator polynomials (Kwakernaak, 1993). Fig. 2 shows the closed-loop system driven by a reference input *r* with a perturbation in the numerator and denominator polynomials. The three relevant signals are expressed in Equations (10)-(12).

Fig. 2. Co-prime factor-based uncertainty model for a SISO plant.

$$u = \frac{C\_0}{1 + G\_0 C\_0} r - \frac{D\_0^{-1} C\_0}{1 + G\_0 C\_0} (q\_1 - q\_2) = S\_{u0} r - D\_0^{-1} S\_{u0} (q\_1 - q\_2) \tag{10}$$

$$y = T\_0 r + \frac{D\_0^{-1}}{1 + G\_0 C\_0} (q\_2 - q\_1) = T\_0 r + D\_0^{-1} S\_0 \left( q\_2 - q\_1 \right) \tag{11}$$

$$q\_1 - q\_2 = \begin{bmatrix} \Delta\_N & -\Delta\_D \end{bmatrix} \begin{bmatrix} u \\ y \end{bmatrix} \tag{12}$$
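To first order in the perturbations, Equation (9) gives *G* − *G*<sup>0</sup> ≈ *D*<sup>0</sup><sup>−1</sup>(Δ*<sup>N</sup>* − *G*<sup>0</sup>Δ*<sup>D</sup>*), a standard expansion added here as a gloss (it is not stated explicitly in the chapter) that hints at why *D*<sup>0</sup><sup>−1</sup> weights the sensitivities in the robust stability test. A numerical sketch with illustrative data (N0 = 1, D0(s) = s + 1, small constant perturbations):

```python
# First-order effect of the co-prime perturbations in Eq. (9):
#   G - G0 = (Delta_N - G0*Delta_D) / (D0 + Delta_D)
#         ~= D0^{-1} (Delta_N - G0*Delta_D)   for small Delta_D.
# Illustrative data only: N0 = 1, D0(s) = s + 1, constant perturbations.

def G_perturbed(s, dN, dD):
    return (1.0 + dN) / ((s + 1.0) + dD)   # (N0 + Delta_N)(D0 + Delta_D)^{-1}

s = 0.5j
G0 = 1.0 / (s + 1.0)
dN, dD = 1e-3, 1e-3
exact = G_perturbed(s, dN, dD) - G0
first_order = (dN - G0 * dD) / (s + 1.0)   # D0^{-1}(Delta_N - G0*Delta_D)
# Agreement to second order in the perturbation size:
assert abs(exact - first_order) < 1e-5
```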

### **3.2 Robust stability and performance**

Since the reference input does not play any role in stability robustness, it is set equal to zero, and the robust stability model then becomes as given in Fig. 3.

Fig. 3. Stability robustness model with zero reference input

The robust stability of the closed-loop system with plant model uncertainty is established using the small gain theorem.

**Theorem 1:** Assume that *C*<sup>0</sup> internally stabilizes the nominal plant *G*<sup>0</sup>. Hence *S*<sup>0</sup> ∈ *RH*<sup>∞</sup> and *Su*<sup>0</sup> ∈ *RH*<sup>∞</sup>. Then the closed-loop system stability problem is well posed, and the system is internally stable for all allowable numerator and denominator perturbations, i.e.:

$$\left\| \begin{bmatrix} \Delta\_N & \Delta\_D \end{bmatrix} \right\|\_{\infty} \le 1 / \gamma\_0 \tag{13}$$

If and only if



$$\left\| \begin{bmatrix} S\_0 & S\_{u0} \end{bmatrix} D\_0^{-1} \right\|\_{\infty} < \gamma\_0 \tag{14}$$

**Proof:** The SISO robust stability problem considered herein is a special case of the MIMO case proved in (Zhou, Doyle, & Glover, 1996).

Thus to ensure a robustly-stable closed-loop system, the nominal sensitivity *S*<sup>0</sup> should be made small in frequency regions where the denominator uncertainty Δ*<sup>D</sup>* is large, and the nominal control input sensitivity *Su*<sup>0</sup> should be made small in frequency regions where the numerator uncertainty Δ*<sup>N</sup>* is large.
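The test of Equation (14) is easy to evaluate on a frequency grid. A sketch for an illustrative loop (*N*<sup>0</sup> = 1, *D*<sup>0</sup>(*s*) = *s* + 1, *C*<sup>0</sup> = 2; assumptions for demonstration only):

```python
import math

# Evaluating the robust-stability test of Eq. (14) on a frequency grid.
# Illustrative loop only: G0 = N0/D0 with N0 = 1, D0(s) = s + 1, C0 = 2.

def rs_measure(w):
    s = 1j * w
    D0 = s + 1.0
    G0, C0 = 1.0 / D0, 2.0
    S0 = 1.0 / (1.0 + G0 * C0)
    Su0 = S0 * C0
    # | [S0  Su0] D0^{-1} | at this frequency (Euclidean norm of the row).
    return math.hypot(abs(S0 / D0), abs(Su0 / D0))

# Grid approximation of the H-infinity norm in Eq. (14).
gamma_measure = max(rs_measure(10.0 ** e) for e in
                    [x / 50.0 for x in range(-150, 151)])

# Theorem 1: robust stability for every co-prime perturbation with
# ||[Delta_N  Delta_D]||_inf <= 1/gamma_0, provided gamma_measure < gamma_0.
assert gamma_measure < 1.0
```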

Our objective here is to design a controller *C*<sup>0</sup> such that robust performance and robust stability of the system are both achieved, that is, both performance and stability hold for all allowable plant model perturbations ‖[Δ*<sup>N</sup>* Δ*<sup>D</sup>*]‖<sup>∞</sup> ≤ 1/γ<sup>0</sup> for some γ<sup>0</sup> > 0. Besides these requirements, we also need to consider the physical constraints on components such as actuators, which place limitations on the control input. From Theorem 1 and Equation (6), it is clear that the requirements for robust stability, robust performance and control input limitations are inter-related, as explained next:

• Robust performance for tracking with disturbance rejection, as well as robust stability in the face of denominator perturbations, requires a small sensitivity function *S*<sup>0</sup> in the low-frequency region, and

• Control input limitations and robust stability in the face of numerator perturbations require a small control input sensitivity function *Su*<sup>0</sup> in the relevant frequency region.

With a view to addressing these requirements, let us select the regulated outputs to be a frequency-weighted tracking error *e*<sup>w</sup> and a weighted control input *u*<sup>w</sup>, to meet the requirements of performance and control input limitation, respectively.

$$z\_w = \begin{bmatrix} e\_w & u\_w \end{bmatrix}^T \tag{15}$$



where *z*<sup>w</sup> is a (2x1) vector output to be regulated; *e*<sup>w</sup> and *u*<sup>w</sup> are defined by their respective Fourier transforms *e*<sup>w</sup>(*j*ω) = *W*<sup>S</sup>(*j*ω)*e*(*j*ω) and *u*<sup>w</sup>(*j*ω) = *W*<sup>u</sup>(*j*ω)*u*(*j*ω). The frequency weights involved, *W*<sup>S</sup>(*j*ω) and *W*<sup>u</sup>(*j*ω), are chosen such that their inverses are upper bounds of the respective sensitivity functions, so that the weighted sensitivity functions become normalized, i.e.:

$$\left| W\_S(j\omega) S\_0(j\omega) \right| \le 1, \quad \left| W\_u(j\omega) S\_{u0}(j\omega) \right| \le 1 \tag{16}$$
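Equation (16) can be verified numerically once a weight is chosen. A sketch assuming an illustrative loop *G*<sup>0</sup>(*s*) = 1/(*s* + 1), *C*<sup>0</sup> = 2 (so *S*<sup>0</sup> = (*s* + 1)/(*s* + 3)) and a common first-order performance weight; all numerical values are assumptions, not taken from the chapter:

```python
# Numerical check of the normalization in Eq. (16).  All values are
# illustrative: G0(s) = 1/(s+1) with C0 = 2 gives S0 = (s+1)/(s+3), and
# W_S(s) = (s/M + wB)/(s + wB*A) is a common first-order performance
# weight with M = 2 (high-frequency sensitivity bound), wB = 1
# (bandwidth) and A = 0.4 (allowed low-frequency sensitivity level).

def weighted_sensitivity(w):
    s = 1j * w
    S0 = (s + 1.0) / (s + 3.0)
    WS = (s / 2.0 + 1.0) / (s + 0.4)
    return abs(WS * S0)

peak = max(weighted_sensitivity(10.0 ** e)
           for e in [x / 50.0 for x in range(-150, 151)])
assert peak <= 1.0   # |W_S(jw) S0(jw)| <= 1 on the grid, as Eq. (16) requires
```

Here 1/|*W*<sup>S</sup>(*j*ω)| upper-bounds |*S*<sup>0</sup>(*j*ω)| at every grid frequency, which is exactly the normalization the weights are chosen to achieve.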

The map relating the frequency weighted output *wz* and the reference input *r* is shown in Fig. 4:

Fig. 4. Nominal closed-loop system relating the reference input and the weighted outputs

The weighting functions *W*<sup>S</sup>(*j*ω) and *W*<sup>u</sup>(*j*ω) provide the tools to specify the trade-off between robust performance and robust stability for a given application. For example, if performance robustness (and stability robustness to the denominator perturbation Δ*<sup>D</sup>*) is more important than the control input limitation, then the weighting function *W*<sup>S</sup> is chosen to be larger in magnitude than *W*<sup>u</sup>. On the other hand, to emphasize the control input limitation (and stability robustness to the numerator perturbation Δ*<sup>N</sup>*), the weighting function *W*<sup>u</sup> is chosen to be larger in magnitude than *W*<sup>S</sup>. For steady-state tracking with disturbance rejection, one may include in the weighting function *W*<sup>S</sup> an approximate but stable 'integrator' by choosing its pole close to zero for continuous-time systems, or close to unity for discrete-time systems, so as to avoid destabilizing the system (Zhou, Doyle, and Glover, 1996). Let *T*<sup>rz</sup> be the nominal transfer matrix (when the plant perturbations Δ*<sup>N</sup>* = Δ*<sup>D</sup>* = 0) relating the reference input to the frequency-weighted vector output *z*<sup>w</sup>; as a function of *G*<sup>0</sup> and *C*<sup>0</sup>, it is given by:

$$T\_{rz} = D\_0^{-1} \begin{bmatrix} \overline{W}\_S S\_0 & \overline{W}\_u S\_{u0} \end{bmatrix}^T \tag{17}$$

where *W̄*<sup>S</sup> = *D*<sup>0</sup>*W*<sup>S</sup> and *W̄*<sup>u</sup> = *D*<sup>0</sup>*W*<sup>u</sup>, so that the *D*<sup>0</sup><sup>−1</sup> term appearing in the mixed-sensitivity measure *T*<sup>rz</sup> is cancelled, yielding the simplified measure *T*<sup>rz</sup> = [*W*<sup>S</sup>*S*<sup>0</sup> *W*<sup>u</sup>*S*<sup>u0</sup>]<sup>T</sup>. The mixed-sensitivity optimization problem for robust performance and stability in the *H*<sup>∞</sup> framework is then reduced to finding the controller *C*<sup>0</sup> such that:

$$\left\| T\_{rz}(C\_0, G\_0) \right\|\_{\infty} \le \gamma < 1 \tag{18}$$

It is shown in (McFarlane & Glover, 1990) that the minimization of ‖*T*<sup>rz</sup>‖<sup>∞</sup> as given by Equation (18) guarantees not only robust stability but also robust performance for all allowable perturbations satisfying ‖[Δ*<sup>N</sup>* Δ*<sup>D</sup>*]‖<sup>∞</sup> ≤ 1/γ.
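A grid approximation of the mixed-sensitivity measure in Equation (18) can be computed directly. The sketch below uses an illustrative loop and weights (assumed values throughout, not the chapter's design):

```python
import math

# Grid estimate of the mixed-sensitivity measure of Eq. (18):
#   ||T_rz||_inf = sup_w sqrt(|W_S S0|^2 + |W_u Su0|^2).
# Illustrative loop and weights only: G0(s) = 1/(s+1), C0 = 2,
# W_S(s) = (s/2 + 1)/(s + 0.4) and a constant W_u = 0.2.

def t_rz(w):
    s = 1j * w
    S0 = (s + 1.0) / (s + 3.0)     # sensitivity of this loop
    Su0 = 2.0 * S0                 # control sensitivity (C0 = 2)
    WS = (s / 2.0 + 1.0) / (s + 0.4)
    Wu = 0.2
    return math.hypot(abs(WS * S0), abs(Wu * Su0))

gamma = max(t_rz(10.0 ** e) for e in [x / 50.0 for x in range(-150, 151)])
# gamma < 1, so Eq. (18) holds on the grid and robust performance holds
# for all perturbations with ||[Delta_N  Delta_D]||_inf <= 1/gamma.
assert gamma < 1.0
```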

## **4.** *H*<sup>∞</sup> **controller design using the identified model**

Consider the problem of designing a controller for an unknown plant *G*. We will assume, however, that the system *G* is linear and admits a rational polynomial model. A number of identification experiments are performed off-line under various operating regimes that include assumptions on the model and its environment, such as:

• The model order

• Noise statistics

• The plant operating in a closed loop, thus making the plant input correlated with both the measurement noise and disturbances

• The length of the data record

• The type of rich inputs

• Combinations of any of the above




Let *Ĝ*<sup>i</sup> be the identified model from the *i*-th experiment based on one or more of the above stated assumptions. Let *Ĉ*<sup>i</sup> be the corresponding controller which stabilizes all the plants in the neighborhood of *Ĝ*<sup>i</sup> within a ball of radius 1/γ̂<sup>i</sup>. Given an estimate of the plant model *Ĝ*<sup>i</sup>, the controller *Ĉ*<sup>i</sup> is then designed using the mixed-sensitivity *H*<sup>∞</sup> optimization scheme, with both the identified model *Ĝ*<sup>i</sup> and the controller *Ĉ*<sup>i</sup> based on it now effectively replacing the nominal plant *G*<sup>0</sup> and nominal controller *C*<sup>0</sup>, respectively. Let the controller *Ĉ*<sup>i</sup> stabilize the identified plant *Ĝ*<sup>i</sup> for all ‖Δ̂<sup>i</sup>‖<sup>∞</sup> ≤ 1/γ̂<sup>i</sup>, where Δ̂<sup>i</sup> is formed of the perturbations in the numerator and denominator of *Ĝ*<sup>i</sup>.

To illustrate the identification-based *H*<sup>∞</sup> optimization scheme, let us consider the following example. Let the true order of the system *G* be 2 and assume the noise to be colored. Let *Ĝ*<sup>i</sup>: *i* = 1, 2, 3 be the estimates obtained assuming the model order to be 2, 3, and 4, respectively, and the noise to be a zero-mean white noise process; *Ĝ*<sup>4</sup> is obtained assuming the model order to be 2 and the noise to be colored, but the input not to be rich enough; let *Ĝ*<sup>5</sup> be an estimate based on correct assumptions regarding model order, noise statistics, richness of excitation of the input, and the other factors pointed out above. Clearly the true plant *G* may not be in the neighborhood of *Ĝ*<sup>i</sup>, i.e. *G* ∉ *Ŝ*<sup>i</sup> for all *i* ≠ 5, where

$$
\hat{S}_i = \left\{ \hat{G}_i : \left\| \hat{\Delta}_i \right\|_\infty \le 1 / \hat{\gamma}_i \right\} \tag{19}
$$

Identification of Linearized Models and Robust Control of Physical Systems 449

The set $\hat{S}_i$ is a ball of radius $1/\hat{\gamma}_i$ centered at $\hat{G}_i$. Fig. 5 below shows the results of performing a number of experiments under different assumptions on the model order, types of rich inputs, length of the data record, noise statistics and their combinations. The true plant $G$, its estimates $\hat{G}_i$ and the sets $\hat{S}_i$ are all indicated in Fig. 5, each set by a circle of radius $1/\hat{\gamma}_i$ centered at $\hat{G}_i$. The true plant $G$ is located at the center of the set $\hat{S}_5$.

Fig. 5. The set $\hat{S}_i$ is a ball of radius $1/\hat{\gamma}_i$ centered at $\hat{G}_i$
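Membership in the ball (19) can be checked numerically for a candidate model: sample the perturbation on the unit circle and compare its peak magnitude against $1/\hat{\gamma}_i$. A minimal sketch with illustrative numbers (an additive-style perturbation stands in for the numerator/denominator perturbation $\hat{\Delta}_i$; the models and the level $\hat{\gamma}_i$ below are assumptions, not values from the chapter):

```python
import numpy as np

w = np.linspace(0.0, np.pi, 512)
zinv = np.exp(-1j * w)

# Assumed identified model and a perturbed "true" plant (illustrative numbers).
G_hat = 1.0 / (1 - 0.8 * zinv)
G_true = 1.02 / (1 - 0.82 * zinv)

# Additive-style perturbation between the two models on the unit circle.
delta = G_true - G_hat
gamma_hat = 2.0                      # assumed robustness level from the design

# The true plant is "in the ball" only if the peak perturbation stays
# below the guaranteed stability margin 1/gamma_hat.
inside = np.max(np.abs(delta)) <= 1.0 / gamma_hat
print("peak |Delta| =", np.max(np.abs(delta)), " in ball:", inside)
```

With these numbers the peak perturbation exceeds $1/\hat{\gamma}$, so the "true" plant falls outside the ball, i.e. the situation of the estimates $\hat{G}_1,\dots,\hat{G}_4$ in Fig. 5.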

### **4.1 Illustrative example:** *H*<sup>∞</sup> **controller design**

A plant is first identified and then the identified model is employed in designing an *H*<sup>∞</sup> controller using the mixed sensitivity performance measure. As discrete-time models and digital controllers are commonly used in system identification and controller implementation, a discrete-time equivalent of the continuous plant is used here to design a discrete-time *H*<sup>∞</sup> controller. The plant model is given by:

$$G_0(z) = \frac{0.5335\left(1 - z^{-1}\right)}{1 - 0.7859z^{-1} + 0.3679z^{-2}} \tag{20}$$

The weighting functions for the sensitivity and the control input sensitivity were chosen to be $W_s = \dfrac{0.01}{1 - 0.99z^{-1}}$ and $W_u = 0.1$, respectively. The weighting function for the sensitivity is chosen

to have a pole close to the unit circle to ensure an acceptably small steady-state error. The controller will have a pole at 0.99, approximating a stable integrator. The plant is identified for **(a)** different choices of model orders ranging from 1 to 10 when the true order is 2, and **(b)** different values of the standard deviation of the colored measurement noise, $\sigma_v$. Fig. 6 shows the step and the magnitude response of the sensitivity function. The closed-loop system is unstable when the selected order is 1 and for some realizations of the noise, and hence these cases are not included in the figures shown here. When the model order is selected to be less than the true order (in this case 1), and when the measurement noise's standard deviation $\sigma_v$ is large, the set of identified models does not contain the true model. Consequently the closed-loop system will be unstable.
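The intent of these weights can be checked directly: evaluating the plant (20) and the sensitivity weight on the unit circle shows that $W_s$ is large at low frequencies (forcing a small steady-state error) and small near the Nyquist frequency. A short numpy sketch (the *H*<sup>∞</sup> synthesis step itself is not reproduced here):

```python
import numpy as np

def freq_resp(num, den, w):
    """Evaluate a rational function in z^-1 (coeffs [a0, a1, ...]) at z = e^{jw}."""
    zinv = np.exp(-1j * w)
    return np.polyval(num[::-1], zinv) / np.polyval(den[::-1], zinv)

w = np.linspace(1e-3, np.pi, 500)        # digital frequencies (rad/sample)

# Plant (20): G0(z) = 0.5335 (1 - z^-1) / (1 - 0.7859 z^-1 + 0.3679 z^-2)
G0 = freq_resp([0.5335, -0.5335], [1.0, -0.7859, 0.3679], w)

# Sensitivity weight Ws = 0.01 / (1 - 0.99 z^-1): its pole near the unit
# circle makes it large at low frequency, penalising steady-state error.
Ws = freq_resp([0.01], [1.0, -0.99], w)

print("|Ws| at low / high frequency:", abs(Ws[0]), abs(Ws[-1]))
```

Note the $(1 - z^{-1})$ numerator of (20): the plant itself has a zero at $z = 1$, which is why the sensitivity weight, rather than the plant, must supply the low-frequency emphasis.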

**Comments:** The robust performance and the stability of the closed-loop system depend upon the accuracy of the identified model. One cannot simply rely on the robustness of the *H*<sup>∞</sup> controller to absorb the model uncertainties. The simulation results clearly show that the model error stems from an improper selection of the model order and the Signal-to-Noise Ratio (SNR) of the input-output data. The simulation results show that there is a need for an appropriate identification scheme to handle colored noise and model order selection to ensure a more robust performance and stability.

Fig. 6. Figures A and B on the left show the step responses (top) and magnitude responses of the sensitivity (bottom) when the model order is varied from 2 to 10 and the noise standard deviation is $\sigma_v = 0.001$. Similarly, figures C and D on the right show the corresponding responses when the noise standard deviation $\sigma_v$ is varied in the range $\sigma_v \in [0.02,\ 0.11]$.

## **5. Identification of the plant**

448 Recent Advances in Robust Control – Novel Approaches and Design Methods


The physical system is in general complex, high-order and nonlinear and therefore an assumed linear mathematical model of such a system is at best an approximation of the 'true model'. Nevertheless a mathematical model linearized at a given operating point can be identified and the identified model successfully used in the design of the required controller, as explained below. Some key issues in the identification of a physical system include **(a)** the unknown statistics of the noise and disturbance affecting the input-output data **(b)** the proper selection of an appropriate structure of the mathematical model, especially its order and **(c)** the plants operating in a closed-loop configuration.

For the case **(a)** a two-stage identification scheme, originally proposed in (Doraiswami, 2005) is employed here. First a high-order model is selected so as to capture both the system dynamics and any artifacts (from noise or other sources). Then, in the second stage, lowerorder models are derived from the estimated high-order model using a frequency-weighted estimation scheme. To handle the model order selection, and the identification of the plant, especially an unstable one, approaches proposed in (Doraiswami, Cheded, and Khalid, 2010) and (Shahab and Doraiswami, 2009) are employed respectively.
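The second stage can be illustrated with a simple stand-in: fit a low-order model to the high-order model's frequency response by frequency-weighted linear least squares (a Levy-style equation-error fit; the models, the weight and the fit order below are illustrative assumptions, not the authors' exact scheme):

```python
import numpy as np

w = np.linspace(0.01, np.pi, 400)
zinv = np.exp(-1j * w)

# "High-order" model from stage 1: dominant 2nd-order dynamics plus a small
# fast parasitic mode (an artifact the low-order fit should largely ignore).
H = (0.5 * zinv / (1 - 1.2 * zinv + 0.52 * zinv**2)) + 0.02 / (1 + 0.9 * zinv)

W = 1.0 / (1.0 + 5.0 * w)      # frequency weight emphasising low frequencies

# Fit N/D with N = b1 z^-1 + b2 z^-2, D = 1 + a1 z^-1 + a2 z^-2 by the
# linearised (equation-error) criterion: min || W (H*D - N) ||^2.
A = np.column_stack([zinv, zinv**2, -H * zinv, -H * zinv**2])
Aw = A * W[:, None]
bw = H * W
# Solve the complex least-squares problem via stacked real/imag parts.
Ar = np.vstack([Aw.real, Aw.imag])
br = np.concatenate([bw.real, bw.imag])
theta, *_ = np.linalg.lstsq(Ar, br, rcond=None)
b1, b2, a1, a2 = theta

Hfit = (b1 * zinv + b2 * zinv**2) / (1 + a1 * zinv + a2 * zinv**2)
err = np.max(np.abs(Hfit - H) * W)
print("weighted fit error:", err)
```

The weight concentrates the fit where the system dynamics live, so the parasitic high-frequency mode contributes little to the reduced model.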

### **5.1 Model order selection**

For mathematical tractability, the well-known criteria based on information-theoretic measures, such as the famous Akaike Information Criterion (Stoica and Selen, 2004), when applied to a physical system, may require simplifying assumptions such as long and uncorrelated data records, linearized models and a Gaussian probability distribution function (PDF) of the residuals. Because of these simplifying assumptions, the resulting criteria may not always give the correct model order. Generally, the estimated model order may be large due to the presence of artifacts arising from noise, nonlinearities, and pole-zero cancellation effects. The proposed model order selection scheme consists of selecting only the set of models which are identified using the scheme proposed in (Doraiswami, 2005), and for which all the poles are in the right-half plane (Doraiswami, Cheded, and Khalid, 2010). The remaining identified models are not selected as they contain extraneous poles.

**Proposed Criterion:** The model order selection criterion hinges on the following Lemma established in (Doraiswami, Cheded, and Khalid, 2010).

**Lemma:** If the sampling frequency $f_s$ is chosen such that $f_s \ge 4f_c$, i.e. at least twice the Nyquist rate $2f_c$, then the complex-conjugate poles of the discrete-time equivalent of a continuous-time system will all lie on the right-half of the z-plane, whereas the real ones will all lie on the positive real line.

This shows that the discrete-time poles lie on the right-half of the z-plane if the sampling rate $f_s$ is more than twice the Nyquist rate $2f_c$. Thus, to ensure that the system poles are located on the right-half and the noise poles on the left-half of the z-plane, the sampling rate $f_s$ must be larger than four times the maximum frequency $f^{s}_{\max}$ of the system, and less than four times the minimum frequency of the noise, $f^{v}_{\min}$:

$$4f\_{\text{max}}^s \le f\_s < 4f\_{\text{min}}^v \tag{21}$$
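The pole-placement effect of rule (21) can be checked directly: a continuous-time pole at frequency $f_c$ maps to $z = e^{sT}$, whose real part is positive exactly when the pole's angle $2\pi f_c / f_s$ stays below $\pi/2$. A small sketch (the pole frequency and damping are illustrative values only):

```python
import numpy as np

def discrete_pole(f_pole_hz, fs_hz, damping=0.1):
    """Map a continuous-time complex pole at f_pole_hz (Hz) to the z-plane."""
    wn = 2 * np.pi * f_pole_hz
    # Lightly damped pole: the damped frequency is slightly below wn.
    s = -damping * wn + 1j * wn * np.sqrt(1 - damping**2)
    return np.exp(s / fs_hz)          # z = e^{sT}, with T = 1/fs

f_sys = 10.0                          # assumed system pole frequency (Hz)
for fs in (25.0, 41.0, 80.0):         # below and above 4*f_sys = 40 Hz
    z = discrete_pole(f_sys, fs)
    print(f"fs = {fs:5.1f} Hz -> Re(z) = {z.real:+.3f}")
```

Sampling below $4f_c$ pushes the pole past the imaginary axis of the z-plane, which is exactly why genuine system poles and high-frequency noise artifacts separate into right- and left-half planes under (21).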

### **5.2 Identification of a plant operating in closed loop**

In practice, and for a variety of reasons (e.g. analysis, design and control), it is often necessary to identify a system that must operate in a closed-loop fashion under some type of feedback control. These reasons could also include safety issues, the need to stabilize an unstable plant and/or improve its performance, while avoiding the cost incurred through downtime if the plant were to be taken offline for testing. In these cases, it is therefore necessary to perform closed-loop identification. There are three basic approaches to closed-loop identification, namely a direct, an indirect and a two-stage one. A direct approach to identifying a plant in a closed-loop identification scheme using the plant input and output data is fraught with difficulties due to the presence of unknown and generally inaccessible noise, the complexity of the model, or a combination of both. Although computationally simple, this approach can lead to parameter estimates that may be biased, due mainly to the correlation between the input and the noise, unless the noise model is accurately represented or the signal-to-noise ratio is high (Raol, Girija, & Singh, 2004). The conventional indirect approach is based on identifying the closed-loop system using the reference input and the system (plant) output. Given an estimate of the closed-loop transfer function, the desired plant transfer function can then be deduced from the algebraic relationship between the system's open-loop and closed-loop transfer functions. However, the derivation of the plant transfer function from the closed-loop transfer function may itself be prone to errors due to inaccuracies in the model of the subsystem connected in cascade with the plant.
The two-stage approach, itself a form of an indirect method, is based on first identifying the sensitivity and the complementary sensitivity functions using a subspace Multi-Input, Multi-Output (MIMO) identification scheme (Shahab & Doraiswami, 2009). In the second stage, the plant transfer function is obtained from the estimates of the plant input and output generated by the first stage.

### **5.2.1 Two-stage identification**


In the first stage, the sensitivity function $S(z)$ and the complementary sensitivity function $T(z)$ are estimated using all three available measurements, namely the reference input $r$, the plant input $u$ and the plant output $y$, to ensure that the estimates are reliable. In other words, a Multiple-Input, Multiple-Output (MIMO) identification scheme with one input (the reference input $r$) and two outputs (the plant input $u$ and the plant output $y$) is used here, rather than a Single-Input, Single-Output (SISO) scheme using one input $u$ and one output $y$. The MIMO identification scheme is based on minimizing the performance measure, $J$, as:

$$\min\_{\hat{z}} J = \left\| z - \hat{z} \right\|^2 \tag{22}$$

where $z = [y \;\; u]^T$ and $\hat{z} = [\hat{y} \;\; \hat{u}]^T$; $\hat{u}$ is the estimated plant input and $\hat{y}$ is the estimated plant output. The plant input $u$ and the plant output $y$ are related to the reference input $r$ and the disturbance $w$ by:

$$u(z) = S(z)r(z) + S(z)w(z) \tag{23}$$

$$y(z) = T(z)r(z) + T(z)w(z) + v(z) \tag{24}$$

As pointed out earlier, the proposed MIMO identification scheme ensures that the estimates of the sensitivity and the complementary sensitivity functions are consistent (i.e. they have identical denominators), and hence also ensures that the estimates of the plant input $u$ and the plant output $y$, which are both employed in the second stage, are reliable. Note here that the reference signal $r$ is uncorrelated with the measurement noise $w$ and the disturbance $v$, unlike in the case where the plant is identified using the direct approach. This is the main reason for using the MIMO scheme in the first stage. In the second stage, the plant $G(z)$ is identified from the estimated plant input, $\hat{u}$, and plant output, $\hat{y}$, obtained from the stage-1 identification scheme, i.e.:

$$
\hat{u}(z) = \hat{S}(z)r(z) \tag{25}
$$

$$
\hat{y}(z) = \hat{T}(z)r(z) \tag{26}
$$
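Since (25)–(26) give $\hat{u} = \hat{S}r$ and $\hat{y} = \hat{T}r$, the second stage effectively recovers the plant as the ratio $\hat{T}/\hat{S}$, with $S$ the input sensitivity $u/r$ as in (23). A frequency-domain sketch with an assumed plant and controller (noise-free, to show the algebra):

```python
import numpy as np

w = np.linspace(0.01, 3.0, 300)
zinv = np.exp(-1j * w)

# Assumed example plant and stabilising controller (frequency responses).
G = 0.4 * zinv / (1 - 0.9 * zinv)
C = 2.0 + 0.5 / (1 - zinv)           # PI-type controller

L = G * C
S_in = C / (1 + L)    # input sensitivity:  u = S_in * r   (cf. (23), w = 0)
T = L / (1 + L)       # complementary sensitivity:  y = T * r   (cf. (24))

G_rec = T / S_in      # stage 2: plant recovered from the stage-1 estimates
print("max recovery error:", np.max(np.abs(G_rec - G)))
```

In the noise-free case the recovery is exact, because the common denominator $1 + L$ cancels in the ratio; with noisy data the same cancellation is what motivates estimating $S$ and $T$ with identical denominators in stage 1.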

Note that here the input $\hat{u}$ and the output $\hat{y}$ are not correlated with the noise $w$ and the disturbance term $v$. Treating $\hat{u}$ as the input and $\hat{y}$ as the output of the plant, and $\hat{\hat{y}}$ as the estimate of the plant output estimate $\hat{y}$, the identification scheme is based on minimizing the weighted frequency-domain performance measure

$$\min_{\hat{\hat{y}}} \left\| W(j\omega)\left( \hat{y}(j\omega) - \hat{\hat{y}}(j\omega) \right) \right\|^2 \tag{27}$$


where $W(j\omega)$ is the weighting function. Furthermore, it is shown that:

**Lemma:** If the closed-loop system is stable, then
• The unstable poles of the plant must be cancelled exactly by the zeros of the sensitivity function.
• The zeros of the plant form a subset of the zeros of the complementary sensitivity function if the reference input is bounded.

This provides a cross-checking of the estimates of the poles and the zeros of the plant estimated in the second stage with the zeros of the sensitivity and complementary sensitivity functions in the first stage, respectively.

### **6.1 Evaluation on a physical system: magnetic levitation system (MAGLEV)**

The physical system is a feedback magnetic levitation system (MAGLEV) (Galvao, Yoneyama, Marajo, & Machado, 2003). Identification and control of magnetic levitation systems have been a subject of research in recent times, in view of their applications to transportation systems, magnetic bearings used to eliminate friction, magnetically-levitated micro-robot systems, and magnetic levitation-based automotive engine valves. It poses a challenge for both identification and controller design.

Fig. 7. Laboratory-scale MAGLEV system

The model of the MAGLEV system, shown in Fig. 7, is unstable and nonlinear, and is modeled by:

$$\frac{y(s)}{u(s)} = \frac{\beta}{s^2 - \alpha^2} \tag{28}$$

where $y$ is the position and $u$ the voltage input. The poles, $p$, of the plant are real and symmetrically located about the imaginary axis, i.e. $p = \pm\alpha$. The linearized model of the system was identified in a closed-loop configuration using LABVIEW data captured through both A/D and D/A devices. Being unstable, the plant was identified in a closed-loop configuration using a controller which was a lead compensator. The reference input was a rich, persistently-exciting signal consisting of a random binary sequence. An appropriate sampling frequency was determined by analyzing the input-output data for different choices of the sampling frequencies. A sampling period of 5 msec was found to be the best, as it proved to be sufficiently small to capture the dynamics of the system but not the noise artifacts. The physical system was identified using the proposed two-stage MIMO identification scheme. First, the sensitivity and complementary sensitivity functions of the

closed-loop system were identified. The estimated plant input and output were employed in the second stage to estimate the plant model. The model order for identification was selected to be second order using the proposed scheme. Figure 8 below gives the pole-zero maps of both the plant and the sensitivity function on the left-hand side and, on the right-hand side, the comparison between the frequency response of the identified model $\hat{G}(j\omega)$, obtained through non-parametric identification, i.e. estimated by injecting various sinusoidal inputs of different frequencies applied to the system, and the estimate of the transfer function obtained using the proposed scheme.

Fig. 8. A and B show pole-zero maps of the plant and of the sensitivity function (left) while C and D (right) show the comparison of the frequency response of the identified model with the non-parametric model estimate, and the correlation of the residual, respectively

The nominal closed-loop input sensitivity function was identified as:

$$S\_0(z) = \frac{1.7124 \,\mathrm{z}^{-1} \left(1 - 1.116 \,\mathrm{z}^{-1}\right)}{1 - 1.7076 \,\mathrm{z}^{-1} + 0.7533 \,\mathrm{z}^{-2}}\tag{29}$$

and the nominal plant model as:

$$\mathbf{G}\_{0}(z) = \frac{N\_{0}}{D\_{0}} = \frac{0.0582 \, z^{-1} \left(1 - 0.0687 \, z^{-1}\right)}{\left(1 - 1.116 \, z^{-1}\right) \left(1 - 0.7578 \, z^{-1}\right)}\tag{30}$$
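The cross-checking Lemma stated earlier can be applied directly to the identified models: the unstable pole of $G_0$ in (30) must reappear as a zero of the sensitivity function $S_0$ in (29). A quick numpy check:

```python
import numpy as np

# Zeros of S0 (29): the factor (1 - 1.116 z^-1) contributes the nonzero zero.
s0_zeros = np.roots([1.0, -1.116])

# Poles of G0 (30): denominator (1 - 1.116 z^-1)(1 - 0.7578 z^-1).
g0_poles = np.roots(np.convolve([1.0, -1.116], [1.0, -0.7578]))

unstable = g0_poles[np.abs(g0_poles) > 1.0]
gap = np.min(np.abs(unstable[:, None] - s0_zeros[None, :]))
print("unstable pole:", unstable, " distance to nearest S0 zero:", gap)
```

Here the cancellation is exact to the printed precision of (29) and (30), which is the cross-validation the Lemma calls for.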

### **6.1.1 Model validation**

The identified model was validated using the following criteria:

• The proposed model-order selection was employed. The identifications in stages I and II were performed for orders ranging from 1 to 4. A second-order model was selected in both stages since all the poles of the identified model were located in the right half of the z-plane. Note here that the dynamics of the actuator (electrical subsystem) was not captured by the model, as it is very fast compared to that of the mechanical subsystem.
• A 4th-order model was employed in stage I to estimate the plant input and the output for the subsequent stage II identification.
• The plant has one stable pole located at 0.7580 and one unstable pole at 1.1158. The reciprocity condition is not exactly satisfied since, theoretically, the stable pole should lie at 0.8962 and not at 0.7580.
• The zeros of the sensitivity function contain the unstable pole of the plant, i.e. the unstable pole of the plant located at 1.1158 is a zero of the sensitivity function.
• The frequency responses of the plant, computed using two entirely different approaches, should be close to each other. In this case, a non-parametric approach was employed and compared to the frequency response obtained using the proposed model-based scheme, as shown on the right-hand side of Fig. 8. The non-parametric approach gives an inaccurate estimate at high frequencies due to correlation between the plant input and the noise.
• The residual is zero-mean white noise with very small variance.

### **6.1.2 Mixed-sensitivity** *H*<sup>∞</sup> **controller design**

The weighting functions are selected by giving more emphasis on robust stability and less on robust performance: *W<sub>s</sub>*(*j*ω) = 0.001 and *W<sub>u</sub>*(*j*ω) = 0.1. To improve the robustness of the closed-loop system, a feed-forward control of the reference input is used instead of including an integrator in the controller. The *H*<sup>∞</sup> controller is given by:

$$C\_0(z) = \frac{2.5734 \left(1 + 1.113z^{-1}\right) \left(1 - 0.7578z^{-1}\right)}{\left(1 - 0.2044z^{-1}\right) \left(1 + 0.7457z^{-1}\right)}\tag{31}$$

Fig. 9. The step and frequency responses of the closed-loop system with *H*<sup>∞</sup> controller

It is interesting to note here that there is a pole-zero cancelation between the nominal plant and the controller, since a plant pole and a controller zero are both equal to 0.7578. In this case, the *H*<sup>∞</sup> norm is γ = ‖[*W<sub>s</sub>S*<sub>0</sub>  *W<sub>u</sub>S*<sub>0</sub><sup>*u*</sup>]‖<sub>∞</sub> = 0.1513, and hence the performance and stability measure is ‖[Δ<sub>*N*</sub>  Δ<sub>*D*</sub>]‖<sub>∞</sub> ≤ 1/γ = 6.6087. The step response and the magnitude responses of the weighted sensitivity, complementary sensitivity and the control input sensitivity of the closed-loop control system are all shown in Fig. 9.
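Two of the small numerical checks above are easy to script (plain Python; the pole location and γ are taken from the text):

```python
# Reciprocity check: the stable pole should sit at the reciprocal of the unstable one
unstable_pole = 1.1158
print(round(1 / unstable_pole, 4))   # 0.8962, matching the location quoted in 6.1.1

# Robustness bound from the achieved H-infinity norm
gamma = 0.1513
print(round(1 / gamma, 4))           # ~6.6094; the text's 6.6087 stems from the unrounded gamma
```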

### **6.2 Evaluation on a physical sensor network: a two-tank liquid level system**

The physical system under evaluation here is formed of two tanks connected by a pipe. A dc motor-driven pump supplies fluid to the first tank, and a PI controller is used to control the fluid level in the second tank by maintaining the liquid height at a specified level, as shown in Fig. 10. The cascade connection of the dc motor and the pump, relating the motor input *u* to the inflow *Qi*, is expressed by the following first-order time-delay system:

$$\dot{Q}\_i = -a\_m Q\_i + b\_m\,\phi(u) \tag{32}$$

where *am* and *bm* are the parameters of the motor-pump subsystem and φ(*u*) is a dead-band and saturation-type nonlinearity. The Proportional-Integral (PI) controller is given by:

$$\begin{aligned} \dot{x}\_3 &= e = r - h\_2 \\ u &= k\_p e + k\_I x\_3 \end{aligned} \tag{33}$$

where *k<sub>p</sub>* and *k<sub>I</sub>* are the PI controller's gains and *r* is the reference input.
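Eq. (33) maps directly onto a discrete-time PI update. A sketch with hypothetical gains and sample time (kp, kI and dt are illustrative, not the experiment's values):

```python
def pi_step(r, h2, x3, kp=2.0, kI=0.1, dt=0.1):
    """One sample of the PI law of eq. (33): x3 integrates e, u = kp*e + kI*x3."""
    e = r - h2             # tracking error
    x3 = x3 + dt * e       # Euler integration of x3' = e
    u = kp * e + kI * x3   # control input
    return u, x3

u, x3 = pi_step(r=1.0, h2=0.0, x3=0.0)
print(u, x3)               # u = 2.01, x3 = 0.1 (up to float rounding)
```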

Fig. 10. Two-tank liquid level system


With the inclusion of the leakage, the liquid level system is now modeled by:

$$\begin{aligned} A\_1 \frac{dH\_1}{dt} &= Q\_i - C\_{12}\,\phi\left(H\_1 - H\_2\right) - C\_\ell\,\phi\left(H\_1\right) \\ A\_2 \frac{dH\_2}{dt} &= C\_{12}\,\phi\left(H\_1 - H\_2\right) - C\_0\,\phi\left(H\_2\right) \end{aligned} \tag{34}$$

where φ(·) = sign(·)√(2*g*(·)), *Q*<sub>ℓ</sub> = *C*<sub>ℓ</sub> φ(*H*<sub>1</sub>) is the leakage flow rate, *Q*<sub>0</sub> = *C*<sub>0</sub> φ(*H*<sub>2</sub>) is the output flow rate, *H*<sub>1</sub> is the height of the liquid in tank 1, *H*<sub>2</sub> the height of the liquid in tank 2, *A*<sub>1</sub> and *A*<sub>2</sub> the cross-sectional areas of the two tanks, *g* = 980 cm/sec<sup>2</sup> the gravitational constant, and *C*<sub>12</sub> and *C*<sub>0</sub> the discharge coefficients of the inter-tank and output valves, respectively. The linearized model of the entire system formed by the motor, pump, and the tanks is given by:

$$\begin{aligned} \dot{x} &= Ax + Br \\ y &= Cx \end{aligned} \tag{35}$$

where *x, A, B* and *C* are given by:

$$x = \begin{bmatrix} h\_1 \\ h\_2 \\ x\_3 \\ q\_i \end{bmatrix}, \quad A = \begin{bmatrix} -a\_1-\alpha & a\_1 & 0 & b\_1 \\ a\_2 & -a\_2-\beta & 0 & 0 \\ -1 & 0 & 0 & 0 \\ -b\_m k\_p & 0 & b\_m k\_I & -a\_m \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 & 1 & b\_m k\_p \end{bmatrix}^T, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}$$
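For intuition, the nonlinear model of eq. (34) can also be integrated directly. A forward-Euler sketch with hypothetical tank and valve constants (A1, A2, C12, Cl, C0 and Qi are illustrative, not the experiment's values):

```python
import numpy as np

g = 980.0                      # cm/s^2, as in the text

def phi(x):
    """phi(x) = sign(x) * sqrt(2 g |x|), the orifice-flow nonlinearity."""
    return np.sign(x) * np.sqrt(2.0 * g * abs(x))

# Hypothetical geometry and valve constants, for illustration only
A1 = A2 = 100.0                # cross-sectional areas (cm^2)
C12, Cl, C0 = 1.0, 0.2, 0.8    # inter-tank, leakage and output discharge coefficients
Qi = 50.0                      # constant inflow (cm^3/s)

H1 = H2 = 0.0
dt = 0.05
for _ in range(20000):         # forward-Euler integration of eq. (34)
    dH1 = (Qi - C12 * phi(H1 - H2) - Cl * phi(H1)) / A1
    dH2 = (C12 * phi(H1 - H2) - C0 * phi(H2)) / A2
    H1 += dt * dH1
    H2 += dt * dH2
print(H1, H2)                  # settles with H1 > H2 > 0
```

The steady heights reached this way define the operating point about which eq. (35) linearizes.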

where *q<sub>i</sub>*, *q*<sub>ℓ</sub>, *q*<sub>0</sub>, *h*<sub>1</sub> and *h*<sub>2</sub> are respectively the increments in *Q<sub>i</sub>*, *Q*<sub>ℓ</sub>, *Q<sub>o</sub>*, *H*<sub>1</sub><sup>0</sup> and *H*<sub>2</sub><sup>0</sup>, whereas *a*<sub>1</sub>, *a*<sub>2</sub>, α and β are parameters associated with the linearization process; α is associated with the leakage flow rate, *q*<sub>ℓ</sub> = α*h*<sub>1</sub>, and β with the output flow rate, *q<sub>o</sub>* = β*h*<sub>2</sub>. The dual-tank fluid system structure can be cast into that of an interconnected system with a sensor network composed of 3 subsystems, *G<sub>eu</sub>*, *G<sub>uq</sub>* and *G<sub>qh</sub>*, relating the measured signals, namely the error *e*, control input *u*, flow rate *Q* and the height *h*, respectively. The proposed two-stage identification scheme is employed to identify these subsystems. It consists of the following two stages:

• In Stage 1, the MIMO closed-loop system is identified using data formed of the reference input *r* and the subsystems' outputs measured by the 3 available sensors.
• In Stage 2, the subsystems *G<sub>eu</sub>*, *G<sub>uq</sub>* and *G<sub>qh</sub>* are identified using the subsystems' estimated input and output measurements obtained from the first stage.

Figure 11 shows the estimation of the 4 key signals *e*, *u*, *Q* and *h* in our two-tank experiment that are involved in the MIMO transfer function in stage I identification. Stage I identification yields the following MIMO closed-loop transfer function:

$$\begin{bmatrix} \hat{e}(z) & \hat{u}(z) & \hat{Q}(z) & \hat{h}(z) \end{bmatrix}^T = D^{-1}(z)\,N(z)\,r(z) \tag{36}$$
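Each stage ultimately reduces to a linear least-squares (ARX-type) fit. A minimal single-output sketch with illustrative coefficients (not the experiment's values), assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative first-order closed-loop relation: y[k] = 0.8 y[k-1] + 0.5 r[k-1] + noise
N = 2000
r = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = 0.8 * y[k - 1] + 0.5 * r[k - 1] + 0.01 * rng.standard_normal()

# ARX least-squares fit: regress y[k] on [y[k-1], r[k-1]]
Phi = np.column_stack([y[:-1], r[:-1]])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(theta)   # close to [0.8, 0.5]
```

The chapter's stage I fit is the MIMO analogue of this regression, with the reference *r* as input and the three sensor outputs stacked as in eq. (36).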

Fig. 11. (Left) The error and flow rate and their estimates and (Right) the control input and height and their estimates.

where

$$N(z) = \begin{bmatrix} 1.9927 - 191.5216\,z^{-1} + 380.4066\,z^{-2} - 190.8783\,z^{-3} \\ 0.0067 - 1.2751\,z^{-1} + 2.5526\,z^{-2} - 1.2842\,z^{-3} \\ -183.5624 + 472.5772\,z^{-1} - 394.4963\,z^{-2} + 105.4815\,z^{-3} \\ -0.9927 + 189.1386\,z^{-1} - 378.6386\,z^{-2} + 190.4933\,z^{-3} \end{bmatrix}$$

and

$$D(z) = 1.0000 - 2.3830\,z^{-1} + 1.7680\,z^{-2} - 0.3850\,z^{-3}$$
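Since the first row of N(z) is the numerator of the *r* → *e* transfer function and is monic-free in powers of *z*<sup>−1</sup>, the same coefficient list, read in powers of *z*, yields its zeros directly. A sketch assuming numpy:

```python
import numpy as np

# First row of N(z): coefficients of z^0 ... z^-3; multiplying by z^3 turns this
# into a cubic in z with the same coefficient list, so np.roots gives the zeros.
n1 = [1.9927, -191.5216, 380.4066, -190.8783]
zeros = np.roots(n1)
print(np.sort(zeros))   # two zeros near 1.0 and 1.02 (plus one far outside the unit circle)
```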

The zeros of the sensitivity function, relating the reference input *r* to the error *e* , are located at 1.02 and 1.0.

Fig. 12 below shows the combined plots of the actual values of the height, flow rate and control input, and their estimates from both stages 1 and 2. From this figure, we can conclude that the results are on the whole excellent, especially for both the height and control input.

Stage II identification yields the following three open-loop transfer functions that are identified using their respective input/output estimates generated by the stage-1 identification process:

$$\hat{G}\_{eu}(z) = \frac{u(z)}{e(z)} = 0.0067 + \frac{0.4576\,z^{-1}}{1 - z^{-1}} \tag{37}$$

$$G\_{uq}(z) = \frac{Q(z)}{u(z)} = \frac{0.0104\,z^{-1}}{1 - 0.9968\,z^{-1}}\tag{38}$$

$$G\_{qh}(z) = \frac{h(z)}{Q(z)} = \frac{0.7856z^{-1}}{1 - 1.0039z^{-1}}\tag{39}$$
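As a consistency check, the denominator of the cascade *Guq*·*Gqh* can be expanded and compared with the re-identified forward-path model of eq. (40) below. A sketch assuming numpy:

```python
import numpy as np

# Denominator of G_uq(z) * G_qh(z): (1 - 0.9968 z^-1)(1 - 1.0039 z^-1),
# coefficients in ascending powers of z^-1
den = np.polymul([1, -0.9968], [1, -1.0039])
print(den)                # [1, -2.0007, 1.0007] -- close to 1 - 1.997 z^-1 + 0.9968 z^-2 in eq. (40)

gain = 0.0104 * 0.7856    # cascade numerator gain before normalization
print(round(gain, 6))     # 0.00817
```

The small mismatch is expected: eq. (40) is identified afresh rather than formed by literal multiplication.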


Fig. 12. The actual height (in blue), its estimate from stage 1 (in green) and its estimate from stage 2 (in red); similarly for the flow rate and the control input.

### **Comments:**


• The two-tank level system is highly nonlinear, as can be clearly seen especially from the flow rate profile located at the top right corner of Fig. 11. There is a saturation-type nonlinearity involved in the flow process.
• The subsystems *Geu* and *Gqh*, representing respectively the PI controller and the transfer function relating the flow rate to the tank height, are both unstable with a pole at unity representing an integral action. The estimated transfer functions *Ĝeu* and *Ĝqh* have captured these unstable poles. Although the pole of *Ĝeu* is exactly equal to unity, the pole of *Ĝqh*, located at 1.0039, is only very close to unity. This slight deviation from unity may be due to the nonlinearity effects on the flow rate.
• The zeros of the sensitivity function have captured the unstable poles of the open-loop unstable plant with some error. The values of the zeros of the sensitivity function are 1.0178 and 1.0002, while those of the subsystem poles are 1 and 1.0039.

### **6.2.1 Mixed-sensitivity** *H*<sup>∞</sup> **controller design**

The identified plant is the cascade combination of the motor, pump and the two tanks, which is essentially the forward-path transfer function formed of the cascade combination of *Guq* and *Gqh*, relating the control input *u* to the tank height *h*, and which is given by:

$$\text{G}\_0(z) = \frac{N\_0(z)}{D\_0(z)} = \frac{z^{-2}}{1 - 1.997z^{-1} + 0.9968z^{-2}}\tag{40}$$

The weighting functions are selected by giving more emphasis on robust stability and less on robust performance: *W<sub>s</sub>*(*z*) = 0.01/(1 − 0.99*z*<sup>−1</sup>) and *W<sub>u</sub>*(*z*) = 1, where *z* = *e*<sup>*j*ω</sup>. The *H*<sup>∞</sup> controller is then given by:

$$C\_0(z) = \frac{0.044029(1+z^{-1})(1-1.98z^{-1}+0.9804z^{-2})}{(1-0.99z^{-1})(1-0.6093z^{-1})(1+0.6008z^{-2})}\tag{41}$$

The controller has an approximate integral action for steady-state tracking with disturbance rejection and a pole at 0.99 which is very close to unity. In this case, the *H*<sup>∞</sup> norm is γ = 0.0663 . The step response and the magnitude responses of the sensitivity, complementary sensitivity and the control input sensitivity of the closed-loop control system are all shown in Fig. 13.

Fig. 13. Step and magnitude frequency responses of the closed-loop system with *H*<sup>∞</sup> controller
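The near-integrator claim can be verified from the denominator of eq. (41): because the denominator is monic in powers of *z*<sup>−1</sup>, its coefficient list read in powers of *z* yields the poles directly. A sketch assuming numpy:

```python
import numpy as np

# Expand the denominator of C0(z): (1 - 0.99 z^-1)(1 - 0.6093 z^-1)(1 + 0.6008 z^-2)
d = np.polymul(np.polymul([1, -0.99], [1, -0.6093]), [1, 0, 0.6008])
poles = np.roots(d)
print(np.abs(poles))   # largest magnitude is 0.99 (the near-integrator); the rest are well damped
```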

### **6.2.2 Remarks on the mixed-sensitivity** *H*<sup>∞</sup> **control design**

The sensitivity is low in the low-frequency regions where the denominator perturbations are large, the control sensitivity is small in the high-frequency regions of the numerator perturbations, and the complementary sensitivity is low in the high-frequency region where the overall multiplicative model perturbations are high. As the robustness is related to performance, this will ensure robust performance for steady-state tracking with disturbance rejection, controller input limitations and measurement noise attenuation. When tight performance bounds are specified, the controller will react strongly but may be unstable when implemented on the actual physical plant. For safety reasons, the controller design is started with very loose performance bounds, resulting in a controller with very small gains that ensure stability of the controller on the actual plant. Then, the performance bounds are made tighter to gradually increase the performance of the controller. The design method based on the mixed-sensitivity criterion generalizes some classical control design techniques such as classical loop shaping, integral control to ensure tracking, performance with a specified high-frequency roll-off, and direct control over the closed-loop bandwidth and time response by means of pole placement.
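The loose-then-tighten procedure described above can be summarized as a loop. This is purely schematic; `design`, `evaluate_on_plant` and `tighten` are hypothetical callables standing in for the synthesis, the plant trial and the bound update:

```python
def tune(design, evaluate_on_plant, bounds, tighten, max_iter=10):
    """Start from loose performance bounds and tighten while the resulting
    controller remains stable on the actual plant (schematic of the text)."""
    controller = design(bounds)                 # safe, low-gain initial design
    for _ in range(max_iter):
        candidate = design(tighten(bounds))     # try tighter bounds
        if not evaluate_on_plant(candidate):    # instability observed: stop
            break
        controller, bounds = candidate, tighten(bounds)
    return controller
```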

## **7. Conclusion**

This chapter illustrates, through analysis, simulation and practical evaluation, how the two key objectives of control system design, namely robust stability and robust performance, can be achieved. Specifically, it shows that in order to ensure both robust performance and robust stability of a closed-loop system whose controller is designed from an identified model of the plant, it is of paramount importance that both the identification scheme and the controller design strategy be selected appropriately, as the tightness of the achieved robustness bound depends on the magnitude of the modeling error produced by the selected identification scheme. In view of this close dependence, a comprehensive closed-loop identification scheme was proposed here that greatly mitigates the effects of measurement noise and disturbances and relies on a novel model-order selection scheme. More specifically, the proposed identification consists of (a) a two-stage scheme to overcome the unknown noise and disturbance by first obtaining a high-order model and then deriving from it a reduced-order model, (b) a novel model-order selection criterion based on verifying the location of the poles, and (c) a two-stage scheme to identify first the closed-loop transfer functions of the subsystems, and then obtain the plant model using the estimates of the input and output from the first stage. The controller design was based on the well-known mixed-sensitivity *H*∞ design technique, which achieves robust stability and robust performance simultaneously. This technique is able to handle plant uncertainties modeled as additive perturbations in the numerator and denominator of the identified model, and provides tools to achieve a trade-off between robust stability, robust performance and control input limitations.
The identification and controller design were both successfully evaluated on a number of simulated as well as practical physical systems, including the laboratory-scale unstable magnetic levitation and two-tank liquid level systems. This study has provided us with ample encouragement to apply the powerful techniques used in this chapter to different systems, and to enrich the overall approach with other identification and robust controller design methods.

### **8. Acknowledgement**

The authors acknowledge the support of the Department of Electrical and Computer Engineering, University of New Brunswick, the Natural Sciences and Engineering Research Council (NSERC) of Canada, and the King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia.


## *Edited by Andreas Mueller*

Robust control has been a topic of active research over the last three decades, culminating in *H*<sub>2</sub>/*H*<sub>∞</sub> and μ design methods, followed by research on parametric robustness, initially motivated by Kharitonov's theorem, the extension to non-linear time-delay systems, and other more recent methods. The two volumes of Recent Advances in Robust Control give a selective overview of recent theoretical developments and present selected application examples. The volumes comprise 39 contributions covering various theoretical aspects as well as different application areas. The first volume covers selected problems in the theory of robust control and its application to robotic and electromechanical systems. The second volume is dedicated to special topics in robust control and problem-specific solutions. Recent Advances in Robust Control will be a valuable reference for those interested in recent theoretical advances and for researchers working in the broad field of robotics and mechatronics.
