82 Advances in Discrete Time Systems

Three practical questions are immediate [1]:

**1.** When is the problem formulated above feasible, so that the algorithm yields a control action which can be implemented?

**2.** When does the sequence of computed control actions lead to a system which is closed-loop stable?

**3.** What closed-loop performance results from repeated solution of the specified open-loop optimal control problem?

These questions will be addressed in the following sections.

**3.2. Feasibility**

The constraints stipulated in (3) may render the optimization problem infeasible. It may happen, for example, because of a disturbance, that the optimization problem posed above becomes infeasible at a particular time step. It may also happen that the algorithm, which minimizes an open-loop objective, inadvertently drives the closed-loop system outside the feasible region.

**3.3. Closed loop stability**

In either the infinite or the finite horizon constrained case it is not clear under what conditions the closed-loop system is stable. Much research on linear MPC has focused on this problem. Two approaches have been proposed to guarantee stability: one based on the original problem (1), (2), and (3), and the other where a contraction constraint is added [9, 10]. With the contraction constraint the norm of the state is forced to decrease with time, and stability follows trivially, independent of the various parameters in the objective function. Without the contraction constraint the stability problem is more complicated.

General proofs of stability for constrained MPC based on the monotonicity property of the value function have been proposed by [11] and [12]. The most comprehensive and also most compact analysis has been presented by [13] and [14], whose arguments we sketch here.

To simplify the exposition we assume *p* = *m* = *N*, so that *J*(*p*, *m*) = *J<sub>N</sub>* as defined in eq. (2). The key idea is to use the optimal finite horizon cost *J<sub>N</sub>*\*(*x*(*k*)), the value function, as a Lyapunov function. One wishes to show that

$$J\_N^\*(x(k)) - J\_N^\*(x(k+1)) > 0, \quad \text{for } x \neq 0 \tag{5}$$

Rewriting *J<sub>N</sub>*\*(*x*(*k*)) − *J<sub>N</sub>*\*(*x*(*k* + 1)) gives

$$J\_N^\*(x(k)) - J\_N^\*(x(k+1)) = x^T(k)Qx(k) + u^{\*T}(x(k))Ru^\*(x(k)) + J\_{N-1}^\*(x(k+1)) - J\_N^\*(x(k+1)) \tag{6}$$

**4. Nonlinear model predictive control**

Linear MPC has, in our opinion, been developed to a stage where it has achieved sufficient maturity to warrant the active interest of researchers in nonlinear control. While linear model predictive control has been popular since the 1970s, the 1990s witnessed steadily increasing attention from control theorists as well as control practitioners in the area of nonlinear model predictive control (NMPC).

The practical interest is driven by the fact that many systems are inherently nonlinear and today's processes need to be operated under tighter performance specifications. At the same time more and more constraints, for example from environmental and safety considerations, need to be satisfied. In these cases, linear models are often inadequate to describe the process dynamics and nonlinear models have to be used. This motivates the use of nonlinear model predictive control.

The system to be controlled is described, or approximated, by the discrete-time model

$$\begin{aligned} \mathbf{x}(k+1) &= f\left(\mathbf{x}(k), u(k)\right), \\ \mathbf{y}(k) &= h\left(\mathbf{x}(k)\right), \end{aligned} \tag{7}$$

where *f*(·) is implicitly defined by the originating differential equation that has an equilibrium point at the origin (i.e. *f*(0, 0) = 0). The control and state sequences must satisfy (3).
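To make the setting concrete, the model (7) can be sketched as follows. The dynamics `f` and output map `h` below are purely illustrative stand-ins (a crudely Euler-discretized pendulum-like system), not a model taken from the text.

```python
import numpy as np

# Illustrative discrete-time model of the form (7):
#   x(k+1) = f(x(k), u(k)),  y(k) = h(x(k)),  with f(0, 0) = 0.
# The dynamics below are a hypothetical Euler-discretized pendulum-like
# system, chosen only to make the notation concrete.
dt = 0.1

def f(x, u):
    x1, x2 = x
    return np.array([x1 + dt * x2,
                     x2 + dt * (-np.sin(x1) - 0.5 * x2 + u)])

def h(x):
    return x[0]          # measure only the first state component

# The origin is an equilibrium point: f(0, 0) = 0.
x = np.array([0.3, 0.0])
for k in range(100):     # open-loop simulation with u = 0
    x = f(x, 0.0)
```

In MPC the optimizer would choose *u*(*k*) at every step; the zero input above merely exercises the model.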

#### **4.1. Difficulties of NMPC**

The same receding horizon idea which we discussed in section 3 is also the principle underlying nonlinear MPC, with the exception that the model describing the process dynamics is nonlinear. Contrary to the linear case, however, feasibility and the possible mismatch between the open-loop performance objective and the actual closed-loop performance are largely unresolved research issues in nonlinear MPC. An additional difficulty is that the optimization problems to be solved on-line are generally nonlinear programs without any redeeming features, which implies that convergence to a global optimum cannot be assured. For the quadratic programs arising in the linear case this is guaranteed. As most proofs of stability for constrained MPC are based on the monotonicity property of the value function, global optimality is usually not required, as long as the cost attained at the minimizer decreases (which is usually the case, especially when the optimization algorithm is initialized from the previous shifted optimal sequence). However, although stability is not affected by a local minimum, performance clearly deteriorates.
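The warm-starting idea mentioned above (initializing the optimizer from the previous shifted optimal sequence) can be sketched as follows; the function name and the zero padding of the final input are illustrative choices, not prescribed by the text.

```python
import numpy as np

# Warm start for the NMPC solver at time k+1: drop the input already
# applied at time k and append a guess (here zero; a local control law
# value is another common choice) for the newly exposed final stage.
def shift_warm_start(u_opt, u_tail=0.0):
    u_init = np.empty_like(u_opt)
    u_init[:-1] = u_opt[1:]   # shift the remaining optimal inputs forward
    u_init[-1] = u_tail       # guess for the new last input
    return u_init

u_opt_k = np.array([0.8, 0.4, 0.1, -0.2])  # optimal sequence at time k
u_init_k1 = shift_warm_start(u_opt_k)      # initial guess at time k+1
```

Starting the nonlinear program from this guess usually keeps the cost at the minimizer non-increasing from step to step, which is what the stability arguments rely on.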


Discrete-Time Model Predictive Control http://dx.doi.org/10.5772/51122 85


The next section focuses on system-theoretic aspects of NMPC, especially the question of closed-loop stability.

#### **4.2. Closed-loop stability**

One of the key questions in NMPC is certainly whether a finite horizon NMPC strategy leads to stability of the closed loop or not. Here only the key ideas are reviewed and no detailed proofs are given. Furthermore, notice that we will not cover all existing NMPC approaches; instead we refer the reader to the overview papers [2, 15, 16]. For all the following sections it is assumed that the prediction horizon is set equal to the control horizon, that is, *p* = *m*.

#### *4.2.1. Infinite horizon NMPC*

As pointed out, the key problem with a finite prediction and control horizon stems from the fact that the predicted open-loop and the resulting closed-loop behavior are in general different. The most intuitive way to achieve stability is the use of an infinite horizon cost [17, 18], that is, *p* in (4) is set to *∞*. As mentioned in [19], in the nominal case, feasibility at one sampling instance also implies feasibility and optimality at the next sampling instance. This follows from Bellman's Principle of Optimality [20]: the input and state trajectories computed as the solution of the NMPC optimization problem (2), (3) and (7) at a specific instance in time are in fact equal to the closed-loop trajectories of the nonlinear system, i.e. the remaining parts of the trajectories after one sampling instance are the optimal solution at the next sampling instance. This fact also implies closed-loop stability. For the infinite-horizon constrained case, [21] considered T-S fuzzy systems with PDC and non-PDC laws. New sufficient conditions were proposed in terms of LMIs. Both the corresponding PDC and non-PDC state-feedback controllers were designed, which could guarantee that the resulting closed-loop fuzzy system is asymptotically stable. In addition, the feedback controllers would meet the specifications for the fuzzy systems with input or output constraints.

#### *4.2.2. Finite horizon NMPC*

In this section, different possibilities to achieve closed-loop stability for NMPC using a finite horizon length are presented. Just as outlined for the linear case, the value function is employed as a Lyapunov function in the proofs, and a global optimum must be found at each time step to guarantee stability. As mentioned above, when the horizon is infinite, feasibility at a particular time step implies feasibility at all future time steps. Unfortunately, contrary to the linear case, the infinite horizon problem cannot be solved numerically.

Most of the approaches modify the NMPC setup such that stability of the closed loop can be guaranteed independently of the plant and performance specifications. This is usually achieved by adding suitable equality or inequality constraints and suitable additional penalty terms to the objective functional. These additional constraints are usually not motivated by physical restrictions or desired performance requirements but have the sole purpose of enforcing stability of the closed loop. Therefore, they are usually termed stability constraints.

**Terminal equality constraint.** The simplest possibility to enforce stability with a finite prediction horizon is to add a so-called zero terminal equality constraint at the end of the prediction horizon [17, 23, 28], i.e. to add the equality constraint *x*(*k* + *p* | *k*) = 0 to the optimization problem (2), (3) and (7). This leads to stability of the closed loop, if the optimal control problem possesses a solution at *k*, since feasibility at one time instance also leads to feasibility at the following time instances and to a decrease in the value function.
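A zero terminal equality constraint can be sketched with a generic NLP solver. The toy scalar model, horizon, weights and bounds below are assumptions for illustration, and SLSQP merely stands in for whatever solver an implementation would use.

```python
import numpy as np
from scipy.optimize import minimize

# One NMPC step with the zero terminal constraint x(k+N|k) = 0, posed as
# an equality-constrained NLP. Model, weights and bounds are illustrative.
def f(x, u):
    return 0.9 * x + 0.2 * np.sin(x) + u   # toy nonlinear scalar model

def rollout(u_seq, x0):
    xs = [x0]
    for u in u_seq:
        xs.append(f(xs[-1], u))
    return np.array(xs)

def cost(u_seq, x0, q=1.0, r=0.1):
    xs = rollout(u_seq, x0)
    return q * np.sum(xs[:-1] ** 2) + r * np.sum(np.asarray(u_seq) ** 2)

x0, N = 1.0, 4
res = minimize(cost, np.zeros(N), args=(x0,), method="SLSQP",
               bounds=[(-2.0, 2.0)] * N,
               constraints=[{"type": "eq",
                             "fun": lambda u: rollout(u, x0)[-1]}])
x_terminal = rollout(res.x, x0)[-1]
```

Only the first input `res.x[0]` would be applied before re-solving at the next sampling instance. Note that, as discussed below, the equality constraint is only met to solver tolerance.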

The first proposal of this form of model predictive control for time-varying, constrained, nonlinear, discrete-time systems was made by [17]. This paper is particularly important, because it provides a definitive stability analysis of this version of discrete-time receding horizon control (under mild conditions of controllability and observability) and shows that the value function *J<sub>N</sub>*\* associated with the finite horizon optimal control problem approaches that of the infinite horizon problem as the horizon approaches infinity. This paper remains a key reference on the stabilizing properties of model predictive control and subsumes much of the later literature on discrete-time MPC that uses a terminal equality constraint.

In fact, the main advantages of this version are its straightforward application and conceptual simplicity. On the other hand, one disadvantage of a zero terminal constraint is that the system must be brought to the origin in finite time. This leads in general to feasibility problems for short prediction/control horizon lengths, i.e. a small region of attraction. From a computational point of view, the optimization problem with terminal constraint can be solved in principle, but equality constraints are computationally very expensive and can only be met asymptotically [24]. In addition, one cannot guarantee convergence to a feasible solution even when a feasible solution exists, a discomforting fact. Furthermore, specifying a terminal constraint which is not met in actual operation is always somewhat artificial and may lead to aggressive behavior. Finally, to reduce the complexity of the optimization problem it is desirable to keep the control horizon small, or, more generally, to characterize the control input sequence with a small number of parameters. However, a small number of degrees of freedom may lead to quite a gap between the open-loop performance objective and the actual closed-loop performance.

**Terminal constraint set and terminal cost function.** Many schemes have been proposed [24, 26-28, 30, 31, 36, 37] to try to overcome the need for a zero terminal constraint. Most of them either use the so-called terminal region constraint

$$\mathbf{x}(k+p\mid k) \in \mathcal{X}\_f \subseteq \mathcal{X} \tag{8}$$


and/or a terminal penalty term *E*(*x*(*k* + *p* | *k*)) which is added to the cost functional. Note that the terminal penalty term is not a performance specification that can be chosen freely. Rather, *E* and the terminal region *Xf* in (8) are determined off-line such that stability is enforced.
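A minimal numerical sketch of such a cost functional, with a quadratic *E*(*x*) = *x*<sup>T</sup>*Px* and placeholder weights standing in for the off-line design, is:

```python
import numpy as np

# Finite-horizon cost with a terminal penalty term E(x_N) = x_N^T P x_N.
# Q, R, P are placeholders; in the schemes discussed here P (and X_f)
# would be designed off-line so that stability is enforced.
def objective(x_traj, u_seq, Q, R, P):
    stage = sum(x @ Q @ x + R * u * u for x, u in zip(x_traj[:-1], u_seq))
    return stage + x_traj[-1] @ P @ x_traj[-1]

Q, R, P = np.eye(2), 0.1, 2.0 * np.eye(2)
x_traj = [np.array([1.0, 0.0]), np.array([0.5, 0.0])]   # toy trajectory
u_seq = [-1.0]
J = objective(x_traj, u_seq, Q, R, P)   # 1.0 + 0.1 + 2 * 0.25 = 1.6
```

The terminal term is the only difference from the nominal cost (2); everything else in the on-line problem is unchanged.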


**•** *Terminal constraint set.* In this version of model predictive control, *Xf* is a subset of X containing a neighborhood of the origin. The purpose of the model predictive controller is to steer the state to *Xf* in finite time. Inside *Xf*, a local stabilizing controller *kf*(·) is employed. This form of model predictive control is therefore sometimes referred to as dual mode. Fixed horizon versions for constrained, nonlinear, discrete-time systems are proposed in [32] and [33].

**•** *Terminal cost function.* One of the earliest proposals for modifying (2), (3) and (7) to ensure closed-loop stability was the addition of a terminal cost. In this version of model predictive control, the terminal cost *E*(·) is nontrivial and there is no terminal constraint, so that *Xf* = ℝ<sup>*n*</sup>. The proposal [34] was made in the context of predictive control of unconstrained linear systems. Can this technique for achieving stability (by adding only a terminal cost) be successfully employed for constrained and/or nonlinear systems? From the literature the answer may appear affirmative. However, in this literature there is an implicit requirement that *x*(*k* + *p* | *k*) ∈ *Xf* is satisfied for every initial state in a given compact set, and this is automatically satisfied if *N* is chosen sufficiently large. The constraint *x*(*k* + *p* | *k*) ∈ *Xf* then need not be included explicitly in the optimal control problem actually solved on-line. Whether this type of model predictive control is regarded as having only a terminal cost or having both a terminal cost and a terminal constraint is a matter of definition. We prefer to consider it as belonging to the latter category, as the constraint is necessary even though it is automatically satisfied if *N* is chosen sufficiently large.

**Terminal cost and constraint set.** Most recent model predictive controllers belong to this category. There are a variety of good reasons for incorporating both a terminal cost and a terminal constraint set in the optimal control problem. Ideally, the terminal cost *E*(·) should be the infinite horizon value function *J<sup>∞</sup>*\*(·); if this were the case, then *J<sub>N</sub>*\*(·) = *J<sup>∞</sup>*\*(·), on-line optimization would be unnecessary, and the known advantages of an infinite horizon, such as stability and robustness, would automatically accrue. Nonlinearity and/or constraints render this impossible, but it is possible to choose *E*(·) so that it is exactly or approximately equal to *J<sup>∞</sup>*\*(·) in a suitable neighborhood of the origin. Choosing *Xf* to be an appropriate subset of this neighborhood yields many advantages and motivates the choice of *E*(·) and *Xf* in most of the examples of this form of model predictive control.

For the case when the system is nonlinear but there are no state or control constraints, [35] uses a stabilizing local control law *kf*(·), a terminal cost function *E*(·) that is a (local) Lyapunov function for the stabilized system, and a terminal constraint set *Xf* that is a level set of *E*(·) and is positively invariant for the system *x*(*k* + 1) = *f*(*x*(*k*), *kf*(*x*(*k*))). The terminal constraint is omitted from the optimization problem solved on-line, but it is nevertheless shown that this constraint is automatically satisfied for all initial states in a level set of *J<sub>N</sub>*\*(·). The resultant closed-loop system is asymptotically (or exponentially) stable, with a region of attraction that is this level set of *J<sub>N</sub>*\*(·).
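The invariance property of a level set of such an *E*(·) can be checked numerically in a simple linear sketch. The system, the gain, and the use of a discrete Lyapunov equation to build *E* are illustrative assumptions, not the construction of [35].

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Sampling check that the level set {x : E(x) <= c} of a Lyapunov function
# E(x) = x^T P x is positively invariant under x+ = (A - B K) x.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[10.0, 6.0]])                    # assumed stabilizing gain
Acl = A - B @ K
P = solve_discrete_lyapunov(Acl.T, np.eye(2))  # Acl^T P Acl - P = -I

E = lambda x: float(x @ P @ x)
c = 1.0
rng = np.random.default_rng(0)
invariant = True
for _ in range(1000):
    x = rng.normal(size=2)
    x = x * np.sqrt(c / E(x))                  # point on the set boundary
    invariant = invariant and (E(Acl @ x) <= c + 1e-9)
```

Since the Lyapunov equation guarantees *E*(*Acl x*) = *E*(*x*) − ‖*x*‖², every boundary point maps strictly inside the set; the sampling loop merely illustrates this.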

When the system is both nonlinear and constrained, *E*(·) and *Xf* include features from the example immediately above. In [36], *kf*(·) is chosen to stabilize the linearized system *x*(*k* + 1) = *Ax*(*k*) + *Bu*(*k*), where *A* := *f<sup>x</sup>*(0, 0) and *B* := *f<sup>u</sup>*(0, 0). Then the author of [36] employs a non-quadratic terminal cost *E*(·) and a terminal constraint set *Xf* that is positively invariant for the nonlinear system *x*(*k* + 1) = *f*(*x*(*k*), *kf*(*x*(*k*))) and that satisfies *Xf* ⊂ X and *kf*(*Xf*) ⊂ U.
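One concrete (illustrative) recipe in this spirit, not the specific design of [36], is to linearize *f* at the origin by finite differences and take *kf*(*x*) = −*Kx* from an LQR design, with the terminal weight solving the discrete-time Riccati equation; the model below is a hypothetical toy system.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Linearize f at the origin and derive a local stabilizing law kf(x) = -Kx
# together with an LQR terminal weight P. Model and weights are illustrative.
def f(x, u):
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (-np.sin(x[0]) + u[0])])

def linearize(f, n, m, eps=1e-6):
    x0, u0 = np.zeros(n), np.zeros(m)
    fx0 = f(x0, u0)
    A = np.column_stack([(f(x0 + eps * e, u0) - fx0) / eps for e in np.eye(n)])
    B = np.column_stack([(f(x0, u0 + eps * e) - fx0) / eps for e in np.eye(m)])
    return A, B

A, B = linearize(f, n=2, m=1)
Q, R = np.eye(2), np.eye(1)
P = solve_discrete_are(A, B, Q, R)                  # candidate terminal weight
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # kf(x) = -K x
spectral_radius = max(abs(np.linalg.eigvals(A - B @ K)))
```

The check `spectral_radius < 1` confirms that *kf* stabilizes the linearization; constructing an *Xf* that is invariant for the *nonlinear* closed loop requires the additional off-line analysis described in the text.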

**Variable horizon/Hybrid model predictive control.** These techniques were proposed by [37] and developed by [38] to deal with both the global optimality and the feasibility problems which plague nonlinear MPC with a terminal constraint. Variable horizon MPC also employs a terminal constraint, but the time horizon at the end of which this constraint must be satisfied is itself an optimization variable. It is assumed that inside this region another controller is employed which is known to asymptotically stabilize the system. Variable horizon has also been employed in contractive model predictive control (see the next section). With these modifications a global optimum is no longer needed and feasibility at a particular time step implies feasibility at all future time steps. The terminal constraint is somewhat less artificial here because it may be met in actual operation. However, a variable horizon is inconvenient to handle on-line, an exact end constraint is difficult to satisfy, and the exact determination of the terminal region is all but impossible except maybe for low order systems. In order to show that this region is invariant and that the system is asymptotically stable in this region, usually a global optimization problem needs to be solved.
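Treating the horizon as a decision variable can be sketched by an outer search over *N*; the integrator model, input bound and minimum-energy objective below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

# Variable-horizon sketch: search for the shortest horizon N for which the
# terminally constrained problem is feasible. Model and bounds are toys.
f = lambda x, u: x + u                      # toy integrator dynamics

def terminal_state(u_seq, x0):
    x = x0
    for u in u_seq:
        x = f(x, u)
    return x

def shortest_feasible_horizon(x0, u_max=0.3, N_max=15):
    for N in range(1, N_max + 1):
        res = minimize(lambda u: float(np.sum(u ** 2)), np.zeros(N),
                       method="SLSQP", bounds=[(-u_max, u_max)] * N,
                       constraints=[{"type": "eq",
                                     "fun": lambda u: terminal_state(u, x0)}])
        if res.success and abs(terminal_state(res.x, x0)) < 1e-6:
            return N                        # first horizon reaching x = 0
    return None

N_min = shortest_feasible_horizon(2.0)      # feasible once N * u_max >= 2
```

A practical scheme would of course optimize over *N* jointly with the inputs rather than re-solve from scratch, which is part of why a variable horizon is inconvenient on-line.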

**Contractive model predictive control.** The idea of contractive MPC was mentioned by [39]; the complete algorithm and stability proof were developed by [40]. In this approach a constraint is added to the usual formulation which forces the actual, and not only the predicted, state to contract at discrete intervals in the future. From this requirement a Lyapunov function can be constructed easily and stability can be established. Stability is independent of the objective function and of the convergence of the optimization algorithm, as long as a solution is found which satisfies the contraction constraint. Feasibility at future time steps is not necessarily guaranteed unless further assumptions are made. Because the contraction parameter implies a specific speed of convergence, its choice comes naturally to the operating personnel.
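A one-step contraction constraint can be sketched as follows. The fully actuated toy model (which keeps the constraint feasible from every state), the contraction factor, and the minimum-energy objective are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Contractive MPC sketch: the optimizer must satisfy
#   ||x(k+1|k)||^2 <= alpha^2 ||x(k)||^2,  alpha < 1,
# so V(x) = ||x|| decreases geometrically along the (nominal) closed loop.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
f = lambda x, u: A @ x + u                  # illustrative, fully actuated
alpha = 0.9

def contractive_step(x):
    con = {"type": "ineq",
           "fun": lambda u: alpha**2 * (x @ x) - f(x, u) @ f(x, u)}
    res = minimize(lambda u: float(u @ u), np.zeros(2),
                   method="SLSQP", constraints=[con])
    return f(x, res.x)

x = np.array([1.0, 1.0])
norms = [np.linalg.norm(x)]
for _ in range(10):
    x = contractive_step(x)
    norms.append(np.linalg.norm(x))
```

The recorded norms shrink by at least the factor `alpha` per step, which is exactly the Lyapunov decrease the stability proof builds on.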

**Model predictive control with linearization.** All the methods discussed so far require a nonlinear program to be solved on-line at each time step. The effort varies somewhat because some methods require only that a feasible (and not necessarily optimal) solution be found, or that an improvement be achieved from time step to time step. Nevertheless, the effort is usually formidable when compared to the linear case, and stopping with a feasible rather than optimal solution can have unpredictable consequences for the performance. The computational effort can be greatly reduced when the system is first linearized in some manner and the techniques developed for linear systems are then employed on-line. Some approaches have been proposed.

Linearization theory may, in some applications, be employed to transform the original nonlinear system, using state and feedback control transformations, into a linear system. Model predictive control may then be applied to the transformed system [41, 42]. [42] applies feedback linearization first and then uses MPC in a cascade arrangement for the resulting linear system. The optimal control problem is not, however, transformed into a convex problem, because the transformed control and state constraint sets and the transformed cost are no longer necessarily convex. [43, 44] employ a linear transformation (*x*(*k* + 1) = *Ax* + *Bu* is replaced by *x*(*k* + 1) = (*A* + *BK*)*x* + *Bv*, where *v* := *u* − *Kx* is the re-parameterized control) to improve conditioning of the optimal control problem solved on-line.

**Conclusions.** MPC for linear constrained systems has been shown to provide an excellent control solution both theoretically and practically. The incorporation of nonlinear models poses a much more challenging problem, mainly because of computational and control-theoretic difficulties, but also holds much promise for practical applications. In this section an overview of the stability analysis of NMPC was given. As outlined, some of the challenges occurring in NMPC are already solvable. Nevertheless, in the nonlinear area a variety of issues remain which are technically complex but have potentially significant practical implications for stability and performance.

**5. Robust model predictive control**

A control system is robust when stability is maintained and the performance specifications are met for a specified range of model variations (uncertainty range). To be meaningful, any statement about robustness of a particular control algorithm must make reference to a specific uncertainty range as well as specific stability and performance criteria.

Predictive controllers that explicitly consider the process and model uncertainties when determining the optimal control policies are called robust predictive controllers. The main concept of such controllers is similar to the idea of *H∞* controllers and consists in the minimization of the worst disturbance effect on the process behavior [47]. Several applications for the formulation of robust predictive control laws began to appear in the literature in the 1990s, focusing on both model uncertainties and disturbances.

Although a rich theory has been developed for the robust control of linear systems, very little is known about the robust control of linear systems with constraints. Most studies on robustness consider unconstrained systems. According to Lyapunov theory, we know that if a Lyapunov function for the nominal closed-loop system maintains its descent property when the disturbance (uncertainty) is sufficiently small, then stability is maintained in the presence of uncertainty. However, when constraints on states and controls are present, it is necessary to ensure, in addition, that disturbances do not cause transgression of the constraints. This adds an extra level of complexity.

In this section, we review two robust model predictive control (RMPC) methods and mention the advantages and disadvantages of the methods below. The basic idea of each method and some applications are stated.

**5.1. Min-Max RMPC methods**

In the mainstream robust control literature, "robust performance" is measured by determining the worst performance over the specified uncertainty range. In direct extension of this definition it is natural to set up a new RMPC objective where the control action is selected to minimize the worst value the objective function can attain as a function of the uncertain model parameters. This describes the first attempt toward an RMPC algorithm, which was proposed by [48]. They showed that for FIR models the optimization problem which must be solved on-line at each time step is a linear program of moderate size with uncertain coefficients and an *∞*-norm objective function. Unfortunately, it is well known now that robust stability is not guaranteed with this algorithm [46].

The Campo algorithm [48] fails to address the fact that only the first element of the optimal input trajectory is implemented and the whole min-max optimization is repeated at the next time step with a feedback update. In the subsequent optimization, the worst-case parameter values may change because of the feedback update. In the case of a system with uncertainties, the open-loop optimal solution differs from the feedback optimal solution, thereby violating the basic premise behind MPC. This is why robust stability cannot be assured with the Campo algorithm.

The literature [49] proposed RMPC formulations which explicitly take into account uncertainties in the prediction model
