88 Advances in Discrete Time Systems

steps is not necessarily guaranteed unless further assumptions are made. Because the contraction parameter implies a specific speed of convergence, its choice comes naturally to the operating personnel.

**Model predictive control with linearization.** All the methods discussed so far require a nonlinear program to be solved on-line at each time step. The effort varies somewhat because some methods require only that a feasible (and not necessarily optimal) solution be found, or that only an improvement be achieved from time step to time step. Nevertheless, the effort is usually formidable when compared to the linear case, and stopping with a feasible rather than optimal solution can have unpredictable consequences for the performance. The computational effort can be greatly reduced when the system is first linearized in some manner and the techniques developed for linear systems are then employed on-line. Some approaches have been proposed.

Linearization theory may, in some applications, be employed to transform the original nonlinear system, using state and feedback control transformations, into a linear system. Model predictive control may be applied to the transformed system [41, 42]. [42] applies feedback linearization first and then uses MPC in a cascade arrangement for the resulting linear system. The optimal control problem is not, however, transformed into a convex problem, because the transformed control and state constraint sets and the transformed cost are no longer necessarily convex. [43, 44] employ a linear transformation ($x(k+1) = Ax + Bu$ is replaced by $x(k+1) = (A+BK)x + Bv$, where $v := u - Kx$ is the re-parameterized control) to improve conditioning of the optimal control problem solved on-line.

**Conclusions.** MPC for linear constrained systems has been shown to provide an excellent control solution both theoretically and practically. The incorporation of nonlinear models poses a much more challenging problem, mainly because of computational and control-theoretical difficulties, but also holds much promise for practical applications. In this section an overview of the stability analysis of NMPC is given. As outlined, some of the challenges occurring in NMPC are already solvable. Nevertheless, in the nonlinear area a variety of issues remain which are technically complex but have potentially significant practical implications for stability and performance.

**5. Robust model predictive control**

MPC is a class of model-based control theories that use linear or nonlinear process models to forecast system behavior. The success of the MPC control performance depends on the accuracy of the open-loop predictions, which in turn depends on the accuracy of the process models. It is possible for the predicted trajectory to differ from the actual plant behavior [45]. Needless to say, control systems that provide optimal performance for a particular model may perform very poorly when implemented on a physical system that is not exactly described by the model (see e.g. [46]).

When we say that a control system is robust we mean that stability is maintained and that the performance specifications are met for a specified range of model variations (uncertainty range). To be meaningful, any statement about robustness of a particular control algorithm must make reference to a specific uncertainty range as well as specific stability and performance criteria.

Predictive controllers that explicitly consider the process and model uncertainties when determining the optimal control policies are called robust predictive controllers. The main concept of such controllers is similar to the idea of *H∞* controllers and consists in the minimization of the worst-case disturbance effect on the process behavior [47]. Several formulations of robust predictive control laws began to appear in the literature in the 1990s, focusing on both model uncertainties and disturbances.

Although a rich theory has been developed for the robust control of linear systems, very little is known about the robust control of linear systems with constraints. Most studies on robustness consider unconstrained systems. From Lyapunov theory we know that if a Lyapunov function for the nominal closed-loop system maintains its descent property when the disturbance (uncertainty) is sufficiently small, then stability is maintained in the presence of uncertainty. However, when constraints on states and controls are present, it is necessary to ensure, in addition, that disturbances do not cause transgression of the constraints. This adds an extra level of complexity.

In this section, we review two robust model predictive control (RMPC) methods and discuss their advantages and disadvantages. The basic idea of each method and some of its applications are stated.

#### **5.1. Min-Max RMPC methods**

In the mainstream robust control literature, "robust performance" is measured by determining the worst performance over the specified uncertainty range. In direct extension of this definition it is natural to set up a new RMPC objective where the control action is selected to minimize the worst value the objective function can attain as a function of the uncertain model parameters. This describes the first attempt toward an RMPC algorithm, which was proposed by [48]. They showed that for FIR models the optimization problem which must be solved on-line at each time step is a linear program of moderate size with uncertain coefficients and an *∞*-norm objective function. Unfortunately, it is well known now that robust stability is not guaranteed with this algorithm [46].

The Campo algorithm [48] fails to address the fact that only the first element of the optimal input trajectory is implemented and the whole min-max optimization is repeated at the next time step with a feedback update. In the subsequent optimization, the worst-case parameter values may change because of the feedback update. In the case of a system with uncertainties, the open-loop optimal solution differs from the feedback optimal solution, thereby violating the basic premise behind MPC. This is why robust stability cannot be assured with the Campo algorithm.

The literature [49] proposed RMPC formulations which explicitly take into account uncertainties in the prediction model

$$f\left(x_k, u_k, w_k, v_k\right) = A(w_k)\, x_k + B(w_k)\, u_k + E v_k \tag{9}$$


Discrete-Time Model Predictive Control http://dx.doi.org/10.5772/51122 91


where $A(w) = A_0 + \sum_{i=1}^{q} A_i w_i$, $B(w) = B_0 + \sum_{i=1}^{q} B_i w_i$, $w_k \in \mathsf{W} \subset \mathbb{R}^{n_w}$, and $v_k \in \mathsf{V} \subset \mathbb{R}^{n_v}$.
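The structure of the uncertain model (9) can be sketched in a few lines of numpy. The function below evaluates $A(w)$, $B(w)$ and propagates the state one step; the numeric matrices and the choice $q = 1$ are hypothetical, purely for illustration.

```python
import numpy as np

def uncertain_step(x, u, w, v, A0, A_list, B0, B_list, E):
    """One step of the uncertain model (9): x+ = A(w) x + B(w) u + E v,
    with A(w) = A0 + sum_i A_i w_i and B(w) = B0 + sum_i B_i w_i."""
    A = A0 + sum(wi * Ai for wi, Ai in zip(w, A_list))
    B = B0 + sum(wi * Bi for wi, Bi in zip(w, B_list))
    return A @ x + B @ u + E @ v

# Hypothetical 2-state example with a single uncertain parameter (q = 1).
A0 = np.array([[0.9, 0.1], [0.0, 0.8]]); A1 = np.array([[0.05, 0.0], [0.0, 0.05]])
B0 = np.array([[0.0], [1.0]]);           B1 = np.array([[0.0], [0.1]])
E  = np.array([[1.0], [0.0]])

x_next = uncertain_step(np.array([1.0, 0.0]), np.array([0.2]),
                        w=[0.5], v=np.array([0.01]),
                        A0=A0, A_list=[A1], B0=B0, B_list=[B1], E=E)
```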

Let $v_k$, $w_k$ be modeled as unknown but bounded exogenous disturbances and parametric uncertainties, and let $\mathsf{V}$, $\mathsf{W}$ be polytopes, respectively. An RMPC strategy often used is to solve a min-max problem that minimizes the worst-case performance while enforcing input and state constraints for all possible disturbances. The following min-max control problem is referred to as the *open-loop constrained robust optimal control problem* (OL-CROC).

$$\begin{aligned} &\min_{u_0,\ldots,u_{N-1}} \; \max_{\substack{v_0,\ldots,v_{N-1} \in \mathsf{V} \\ w_0,\ldots,w_{N-1} \in \mathsf{W}}} \; \sum_{k=0}^{N-1} l(x_k, u_k) + F(x_N) \\ &\text{s.t.} \quad \text{dynamics (9), constraints (3) satisfied} \quad \forall\, v_k \in \mathsf{V},\ \forall\, w_k \in \mathsf{W} \end{aligned} \tag{10}$$
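For intuition, the OL-CROC problem (10) can be solved by brute force on toy instances: enumerate candidate input sequences, evaluate the worst cost over disturbance sequences, and keep the minimizer. Everything below (the function name, the scalar system, the grids) is a hypothetical sketch; gridding $\mathsf{W}$ and $\mathsf{V}$ only approximates the inner maximum, and practical methods exploit problem structure instead of enumeration.

```python
import itertools
import numpy as np

def ol_croc_bruteforce(x0, U_grid, W_grid, V_grid, N, step, stage, terminal):
    """Approximate the open-loop min-max problem (10) by enumeration:
    min over gridded input sequences of the max cost over gridded
    disturbance/parameter sequences. Only practical for toy problems."""
    best_u, best_cost = None, np.inf
    for u_seq in itertools.product(U_grid, repeat=N):
        worst = -np.inf
        for w_seq in itertools.product(W_grid, repeat=N):
            for v_seq in itertools.product(V_grid, repeat=N):
                x, cost = x0, 0.0
                for u, w, v in zip(u_seq, w_seq, v_seq):
                    cost += stage(x, u)
                    x = step(x, u, w, v)
                cost += terminal(x)           # F(x_N)
                worst = max(worst, cost)      # inner max over (w, v)
        if worst < best_cost:                 # outer min over u
            best_cost, best_u = worst, u_seq
    return best_u, best_cost

# Hypothetical scalar system x+ = (0.9 + w) x + u + v, horizon N = 2.
u_star, J = ol_croc_bruteforce(
    x0=1.0, U_grid=np.linspace(-1, 1, 9), W_grid=[-0.1, 0.1],
    V_grid=[-0.05, 0.05], N=2,
    step=lambda x, u, w, v: (0.9 + w) * x + u + v,
    stage=lambda x, u: x**2 + 0.1 * u**2,
    terminal=lambda x: x**2)
```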

Other papers in the literature aim at explicitly or implicitly approximating the problem above by simplifying the objective and uncertainty description, making the on-line effort more manageable while still guaranteeing at least robust stability. For example, the authors of [50] use an *∞*-norm open-loop objective function and both assume FIR models with uncertain coefficients. A similar but more general technique has also been proposed for state-space systems with a bounded input matrix [51].

The authors of [52] have defined a dynamic programming problem (thus accounting for feedback) to determine the control sequence minimizing the worst case cost. They show that with the horizon set to infinity this procedure guarantees robust stability. However, the approach suffers from the curse of dimensionality and the optimization problem at each stage of the dynamic program is non-convex. Thus, in its generality the method is unsuitable for on-line (or even off-line) use except for low order systems with simple uncertainty descriptions.

These formulations may be conservative for certain problems, leading to sluggish behavior, for three reasons. First of all, arbitrarily time-varying uncertain parameters are usually not a good description of the model uncertainty encountered in practice, where the parameters may be either constant or slowly varying but unknown. Second, the computationally simple open-loop formulations neglect the effect of feedback. Third, the worst-case error minimization itself may be a conservative formulation for most problems.

The authors of [50, 53, 54] propose to optimize nominal rather than robust performance and to achieve robust stability by enforcing a robust contraction constraint, i.e. requiring the worst-case prediction of the state to contract. With this formulation robust global asymptotic stability can be guaranteed for a set of linear time-invariant stable systems. The optimization problem can be cast as a quadratic program of moderate size for a broad class of uncertainty descriptions.

To account for the effect of feedback, the authors of [55] propose to calculate at each time step not a sequence of control moves but a state feedback gain matrix, which is determined to minimize an upper bound on robust performance. For fairly general uncertainty descriptions, the optimization problem can be expressed as a set of linear matrix inequalities for which efficient solution techniques exist.
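The robust contraction idea admits a simple sufficient vertex test, sketched below for a fixed candidate gain $K$ and polytopic vertices $(A_i, B_i)$: since $A(k) + B(k)K$ is a convex combination of the vertex matrices $A_i + B_i K$, its induced 2-norm is bounded by the largest vertex norm, so a value below one certifies one-step contraction for every realization. This is only an illustrative check, not the QP formulation of [50, 53, 54], and the vertex matrices and gain are made up.

```python
import numpy as np

def robust_contraction_rate(A_list, B_list, K):
    """Sufficient vertex test for robust contraction: a return value
    rate < 1 guarantees ||x(k+1)|| <= rate * ||x(k)|| for every convex
    combination of the vertex dynamics under u = K x."""
    return max(np.linalg.norm(A + B @ K, 2) for A, B in zip(A_list, B_list))

# Hypothetical polytope vertices and feedback gain.
A1 = np.array([[0.5, 0.2], [0.0, 0.6]]); A2 = np.array([[0.6, 0.1], [0.1, 0.5]])
B1 = np.array([[0.0], [1.0]]);           B2 = np.array([[0.1], [0.9]])
K = np.array([[0.0, -0.4]])

rate = robust_contraction_rate([A1, A2], [B1, B2], K)
```

Note the test is conservative: it can fail even when a common quadratic Lyapunov function (found via the LMIs discussed next) would certify robust stability.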

#### **5.2. LMI-based RMPC methods**


In the above method, a cost function is minimized considering the worst case over all the plants described by the uncertainties. Barriers to RMPC algorithms include the computational cost and the fact that applicability depends on the speed and size of the plant on which the control will act. In this section, we present one such MPC-based technique for the control of plants with uncertainties. This technique is motivated by developments in the theory and application (to control) of optimization involving linear matrix inequalities (LMIs) [56].

In this regard, the authors in [55] used an LMI formulation to solve the optimization problem. The basic idea of LMIs is to interpret a control problem as a semi-definite program (SDP), that is, an optimization problem with a linear objective and positive-definite constraints involving symmetric matrices that are related to the decision variables.

There are two reasons why LMI optimization is relevant to MPC. Firstly, LMI-based optimization problems can be solved in polynomial time, which means that they have low computational complexity. From a practical standpoint, there are effective and powerful algorithms for the solution of these problems, that is, algorithms that rapidly compute the global optimum, with non-heuristic stopping criteria; the effort required is comparable to that of evaluating an analytical solution for a similar problem. Thus LMI optimization is well suited for on-line implementation, which is essential for MPC. Secondly, it is possible to recast much of existing robust control theory in the framework of LMIs [55].

The implication is that we can devise an MPC scheme where, at each time instant, an LMI optimization problem (as opposed to a conventional linear or quadratic program) is solved that incorporates input/output constraints and a description of the plant uncertainty. What's more, it can guarantee certain robustness properties.
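To make the LMI connection concrete in the simplest possible case: quadratic stability of a single known matrix $A$ amounts to feasibility of the LMI $A^{\mathsf T} P A - P \prec 0$, $P \succ 0$, which for a fixed $A$ can be checked without an SDP solver by solving the discrete Lyapunov equation $A^{\mathsf T} P A - P = -Q$ for some $Q \succ 0$ and testing whether $P \succ 0$. The sketch below does this with numpy via vectorization; the matrix $A$ is hypothetical, and the genuinely uncertain setting of [55] requires an actual LMI/SDP solver.

```python
import numpy as np

def discrete_lyapunov(A, Q):
    """Solve A.T @ P @ A - P = -Q by vectorization:
    vec(A.T P A) = kron(A.T, A.T) vec(P), using column-major (Fortran) vec."""
    n = A.shape[0]
    M = np.eye(n * n) - np.kron(A.T, A.T)
    return np.linalg.solve(M, Q.flatten(order="F")).reshape((n, n), order="F")

# Hypothetical Schur-stable example; P > 0 certifies the LMI A.T P A - P < 0.
A = np.array([[0.5, 0.4], [0.0, 0.3]])
Q = np.eye(2)
P = discrete_lyapunov(A, Q)
stable = bool(np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0))
```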

#### **5.3. Our works**

In recent decades, many research results on the design of RMPC have appeared; see, for example, [55-62] and the references therein. The main drawback associated with the above-mentioned MPC methods is that a single Lyapunov matrix is used to guarantee the desired closed-loop multi-objective specifications. This matrix must work for all matrices in the uncertain domain to ensure that the hard constraints on inputs and outputs are satisfied. This condition is generally conservative if used in time-invariant systems. Furthermore, the hard constraints on outputs of closed-loop systems cannot be transformed into a linear matrix inequality (LMI) form using the method proposed in [57, 58, 60].

We present a multi-model paradigm for robust control. Underlying this paradigm is a linear time-varying (LTV) system.

$$\begin{aligned} x(k+1) &= A(k)\, x(k) + B(k)\, u(k) \\ y(k) &= C(k)\, x(k) \\ \begin{bmatrix} A(k) & B(k) \\ C(k) & 0 \end{bmatrix} &\in \Omega \end{aligned} \tag{11}$$


where $u(k) \in \mathbb{R}^{n_u}$ is the control input, $x(k) \in \mathbb{R}^{n_x}$ is the state of the plant, $y(k) \in \mathbb{R}^{n_y}$ is the plant output, and $\Omega$ is some pre-specified set.

For polytopic systems, the set $\Omega$ is the polytope

$$\Omega = \operatorname{Co}\left\{ \begin{bmatrix} A_1 & B_1 \\ C_1 & 0 \end{bmatrix}, \begin{bmatrix} A_2 & B_2 \\ C_2 & 0 \end{bmatrix}, \ldots, \begin{bmatrix} A_L & B_L \\ C_L & 0 \end{bmatrix} \right\} \tag{12}$$

where $\operatorname{Co}$ denotes the convex hull. In other words, if $\begin{bmatrix} A(k) & B(k) \\ C(k) & 0 \end{bmatrix} \in \Omega$, then, for some nonnegative $\xi_1(k), \xi_2(k), \ldots, \xi_L(k)$ summing to one, we have

$$
\begin{bmatrix} A(k) & B(k) \\ C(k) & 0 \end{bmatrix} = \sum\_{i=1}^{L} \xi\_i(k) \begin{bmatrix} A\_i & B\_i \\ C\_i & 0 \end{bmatrix} \tag{13}
$$

where $L = 1$ corresponds to the nominal LTI system. The system described in equation (11) is subject to input and output constraints

$$\begin{aligned} \left| u_h(k+i \mid k) \right| &\leq u_{h,\max}, \quad i \geq 0,\ h = 1, 2, \ldots, n_u \\ \left| y_h(k+i \mid k) \right| &\leq y_{h,\max}, \quad i \geq 1,\ h = 1, 2, \ldots, n_y. \end{aligned}$$

In 2001, the authors of [63] first put forward the idea of using a parameter-dependent Lyapunov function to solve the problem of robust constrained MPC for linear continuous-time uncertain systems; this idea was later applied to linear discrete-time uncertain systems in [64, 65].

Inspired by the above-mentioned work, we addressed the problem of robust constrained MPC based on parameter-dependent Lyapunov functions with polytopic-type uncertainties in [66]. The results are based on a new extended LMI characterization of the quadratic objective, with hard constraints on inputs and outputs. The sufficient LMI conditions do not involve the product of the Lyapunov matrices and the system dynamic matrices. The state feedback control guarantees that the closed-loop system is robustly stable and that the hard constraints on inputs and outputs are satisfied. The approach provides a way to reduce the conservativeness of the existing conditions by decoupling the control parameterization from the Lyapunov matrix. An example is provided to illustrate the effectiveness of the techniques developed in [66]. As the method proposed in [55] is a special case of our results, any optimization problem that is solvable using the approach in [55] is also feasible with the method proposed in our paper. The converse, however, does not hold: the optimization may have no solution by the result in [55] while it has a solution by our result.

*Example* (Input and Output Constraints). Consider the linear discrete-time parameter-uncertain system (11) with

$$\begin{aligned} A_1 &= \begin{bmatrix} -0.90 & 0.80 \\ 0.35 & 0.45 \end{bmatrix}, & A_2 &= \begin{bmatrix} 0.90 & 0.85 \\ 0.40 & -0.85 \end{bmatrix}, & A_3 &= \begin{bmatrix} 0.96 & 0.13 \\ 0.28 & -0.90 \end{bmatrix}, \\ B_1 &= \begin{bmatrix} 1 \\ -1 \end{bmatrix}, & B_2 &= \begin{bmatrix} 1 \\ -0.8 \end{bmatrix}, & B_3 &= \begin{bmatrix} 1 \\ -0.86 \end{bmatrix}, \\ C_1 &= \begin{bmatrix} 1 & 0.3 \end{bmatrix}, & C_2 &= \begin{bmatrix} 0.8 & 0.2 \end{bmatrix}, & C_3 &= \begin{bmatrix} 1.2 & 0.4 \end{bmatrix}. \end{aligned} \tag{14}$$

It is shown that the optimization is infeasible with the method proposed in [55], even without the constraints. However, taking output constraints with $y_{1,\max} = 2$ and input constraints with $u_{1,\max} = 0.8$, and with the uncertain parameters assumed to be $\xi_1(k) = 0.5\cos^2(k)$, $\xi_2(k) = 0.6\sin^2(k)$, $\xi_3(k) = 0.5\cos^2(k) + 0.4\sin^2(k)$, it is feasible using the method proposed in this paper. The simulation results are given in Figure 2 and Figure 3.
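A short numpy sketch of this example's uncertainty model: it builds the time-varying matrices from the vertex data of (14) via the convex combination (13), and the helper names (`xi`, `realization`) are made up for illustration. Note the scheduling parameters are nonnegative and sum to one for every $k$, so the realization stays inside the polytope (12).

```python
import numpy as np

# Vertex matrices from (14).
A = [np.array([[-0.90, 0.80], [0.35, 0.45]]),
     np.array([[0.90, 0.85], [0.40, -0.85]]),
     np.array([[0.96, 0.13], [0.28, -0.90]])]
B = [np.array([[1.0], [-1.0]]),
     np.array([[1.0], [-0.8]]),
     np.array([[1.0], [-0.86]])]
C = [np.array([[1.0, 0.3]]),
     np.array([[0.8, 0.2]]),
     np.array([[1.2, 0.4]])]

def xi(k):
    """Scheduling parameters from the example; nonnegative and summing
    to one, since 0.5cos^2 + 0.6sin^2 + 0.5cos^2 + 0.4sin^2 = 1."""
    return np.array([0.5 * np.cos(k) ** 2,
                     0.6 * np.sin(k) ** 2,
                     0.5 * np.cos(k) ** 2 + 0.4 * np.sin(k) ** 2])

def realization(k):
    """Time-varying (A(k), B(k), C(k)) per the convex combination (13)."""
    w = xi(k)
    Ak = sum(wi * Ai for wi, Ai in zip(w, A))
    Bk = sum(wi * Bi for wi, Bi in zip(w, B))
    Ck = sum(wi * Ci for wi, Ci in zip(w, C))
    return Ak, Bk, Ck
```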

#### **5.4. Conclusion**


An overview of some methods for RMPC has been presented. The methods studied are based on Min-Max and LMI formulations. The basic idea and applications of the methods were stated in each part, along with their advantages and disadvantages.

**6. Recent developments in stochastic MPC**

Despite the extensive literature that exists on predictive control and robustness to uncertainty, both multiplicative (e.g. parametric) and additive (e.g. exogenous), very little attention has been paid to the case of stochastic uncertainty. Although robust predictive control can handle constrained systems that are subject to stochastic uncertainty, it propagates the effects of uncertainty over a prediction horizon, which can be computationally expensive and conservative. Yet this situation arises naturally in many control applications. The aim of this section is to review some of the recent advances in stochastic model predictive control (SMPC).

The basic SMPC problem is defined in Subsection 6.1 and a review of earlier work is given in Subsection 6.2.
