**3. Linear Model Predictive Control**

MPC has become an attractive feedback strategy, especially for linear processes. By now, linear MPC theory is quite mature: the issues of feasibility of the on-line optimization, stability, and performance are largely understood for systems described by linear models.

#### **3.1. Mathematical formulation**

The idea of MPC is not limited to a particular system description, but the computation and implementation depend on the model representation. Depending on the context, one can readily switch between state space, transfer matrix and convolution type models [4]. In the research literature, however, MPC is nowadays almost always formulated in state space. We will assume the system to be described in state space by the linear discrete-time model

$$\mathbf{x}(k+1) = A\mathbf{x}(k) + B\mathbf{u}(k), \qquad \mathbf{x}(0) = \mathbf{x}_0 \tag{1}$$


Discrete-Time Model Predictive Control http://dx.doi.org/10.5772/51122

where $x(k) \in \mathbb{R}^n$ is the state vector at time $k$, and $u(k) \in \mathbb{R}^r$ is the vector of manipulated variables to be determined by the controller. The control and state sequences must satisfy $u(k) \in \mathsf{U}$, $x(k) \in \mathsf{X}$. Usually, $\mathsf{U}$ is a convex, compact subset of $\mathbb{R}^r$ and $\mathsf{X}$ a convex, closed subset of $\mathbb{R}^n$, each set containing the origin in its interior.
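As a concrete illustration, the model (1) can be simulated directly once $A$ and $B$ are fixed. The two-state system and input sequence below are assumptions made for this sketch (a discretized double integrator), not values from the text:

```python
# Pure-Python simulation of the linear model (1): x(k+1) = A x(k) + B u(k).
# Lists stand in for vectors and matrices to keep the sketch dependency-free.

def step(A, B, x, u):
    """One step of x(k+1) = A x(k) + B u(k)."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    Bu = [sum(B[i][j] * u[j] for j in range(len(u))) for i in range(n)]
    return [Ax[i] + Bu[i] for i in range(n)]

A = [[1.0, 0.1],
     [0.0, 1.0]]            # position/velocity dynamics, sample time 0.1 (assumed)
B = [[0.0],
     [0.1]]
x = [1.0, 0.0]              # initial condition x(0) = x0
for u_k in [[-1.0], [-1.0], [0.0], [1.0], [1.0]]:   # an arbitrary input sequence
    x = step(A, B, x, u_k)
```

After this symmetric accelerate/brake sequence the velocity returns to zero while the position has moved slightly, which is exactly the kind of open-loop prediction MPC performs internally.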

The control objective is usually to steer the state to the origin or to an equilibrium state $x_r$ for which the output $y_r = h(x_r) = r$, where $r$ is a constant reference. A suitable change of coordinates reduces the second problem to the first, which we therefore consider in the sequel. Assume that a full measurement of the state $x(k)$ is available at the current time $k$. Then for the event $(x, k)$ (i.e. for state $x$ at time $k$), a receding horizon implementation is typically formulated by introducing the following open-loop optimization problem:

$$\min_{\mathbf{u}(\cdot)} J_{(p,m)}(\mathbf{x}(k)) \tag{2}$$

subject to

$$u(k) \in \mathsf{U}, \qquad x(k) \in \mathsf{X}, \tag{3}$$

where $p \ge m$; $p$ denotes the length of the prediction horizon (or output horizon), and $m$ denotes the length of the control horizon (or input horizon). (When $p = \infty$ we refer to this as the infinite horizon problem and, similarly, when $p$ is finite, as a finite horizon problem.) For the problem to be meaningful we assume that the origin ($x = 0$, $u = 0$) lies in the interior of the feasible region.

Several choices of the objective function $J_{(p,m)}(x(k))$ in the optimization (2) have been reported in [4-7] and compared in [8]. In this chapter, we consider the following quadratic objective:

$$\begin{aligned} J_{(p,m)}(\mathbf{x}(k)) = \min_{\mathbf{u}(\cdot)} \Big[ &\mathbf{x}^T(k+p\mid k)P_0\,\mathbf{x}(k+p\mid k) + \sum_{i=0}^{p-1} \mathbf{x}^T(k+i\mid k)Q\,\mathbf{x}(k+i\mid k) \\ &+ \sum_{i=0}^{m-1} \mathbf{u}^T(k+i\mid k)R\,\mathbf{u}(k+i\mid k) \Big] \end{aligned} \tag{4}$$

where $u(\cdot) := [u^T(k), \ldots, u^T(k+m-1\mid k)]^T$ is the sequence of manipulated variables to be optimized; $x(k+i\mid k)$, $i = 1, 2, \ldots, p$, denote the state predictions generated by the nominal model (1) on the basis of the state information at time $k$ under the action of the control sequence $u(k), u(k+1\mid k), \ldots, u(k+i-1\mid k)$; and $P_0$, $Q$ and $R$ are strictly positive definite symmetric weighting matrices. Let $u^*_{(p,m)}(i\mid k)$, $i = k, \ldots, k+m-1$, be the minimizing control sequence for $J_{(p,m)}(x(k))$ subject to the system dynamics (1) and the constraint (3), and let $J^*_{(p,m)}$ be the optimal value function.
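Iterating the model (1) makes the predictions explicit; the following standard identity is stated here as a supplement to the text:

$$x(k+i\mid k) = A^i x(k) + \sum_{j=0}^{i-1} A^{i-1-j} B\, u(k+j\mid k), \qquad i = 1, \ldots, p,$$

valid as written for $i \le m$ (predictions beyond the control horizon depend on the convention adopted for the inputs after $k+m-1$). Each prediction is thus affine in the stacked input sequence $u(\cdot)$, so substituting these expressions into (4) turns $J_{(p,m)}(x(k))$ into a convex quadratic function of $u(\cdot)$. This is why constrained linear MPC reduces to solving a quadratic program at each time step.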

A receding horizon policy proceeds by implementing only the first control $u^*_{(p,m)}(k\mid k)$ to obtain $x(k+1) = Ax(k) + Bu^*_{(p,m)}(k\mid k)$. The rest of the control sequence $u^*_{(p,m)}(i\mid k)$, $i = k+1, \ldots, k+m-1$, is discarded, and $x(k+1)$ is used as the new initial condition of the optimization problem (2). This process is repeated at every time step: only the first control action is applied, the cost is shifted ahead one step, and the optimization is solved again. This is why MPC is also referred to as receding horizon control (RHC) or moving horizon control (MHC). Taking new measurements at each time step compensates for unmeasured disturbances and model inaccuracy, both of which cause the system output to differ from the one predicted by the model. Fig. 1 presents a conceptual picture of MPC.

**Figure 1.** Principle of model predictive control.
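The receding horizon recursion just described can be sketched with a deliberately naive optimizer: enumerate control sequences drawn from a small discretized input set, keep only the first move, and re-solve at the next state. The scalar plant, grid, and horizon below are assumptions of this sketch; the enumeration merely stands in for the quadratic program a real implementation would solve:

```python
import itertools

# Receding horizon control for a scalar model x(k+1) = a*x + b*u.
# At each step, minimize the cost (4) (with p = m) over a gridded input set U,
# apply only the first move u*(k|k), then re-solve from the measured state.

def rollout_cost(a, b, x0, u_seq, P0, Q, R):
    """J = sum of stage costs Q*x^2 + R*u^2 plus terminal penalty P0*x(k+p)^2."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += Q * x * x + R * u * u   # stage cost x^T Q x + u^T R u
        x = a * x + b * u               # nominal model (1)
    return cost + P0 * x * x            # terminal penalty

def rhc_step(a, b, x, m, u_grid, P0, Q, R):
    """Return the first element of the minimizing sequence u*_(p,m)(k|k)."""
    best = min(itertools.product(u_grid, repeat=m),
               key=lambda seq: rollout_cost(a, b, x, seq, P0, Q, R))
    return best[0]

a, b = 1.2, 1.0                          # an unstable scalar plant (assumed)
u_grid = [-1.0, -0.5, 0.0, 0.5, 1.0]     # input set U, coarsely gridded
x = 1.0
traj = [x]
for k in range(8):
    u = rhc_step(a, b, x, 3, u_grid, 1.0, 1.0, 0.1)
    x = a * x + b * u                    # plant update with the first move only
    traj.append(x)
```

Despite the open-loop plant being unstable ($a > 1$), the repeated re-optimization keeps the state bounded near the origin, which is the essence of the receding horizon mechanism.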


80 Advances in Discrete Time Systems

Three practical questions are immediate [1]:

**1.** When is the problem formulated above feasible, so that the algorithm yields a control action which can be implemented?

**2.** When is the resulting closed-loop system stable?

**3.** How good is the closed-loop performance that results from the repeated minimization of the open-loop objective?

These questions will be explained in the following sections.

If it can be shown that the right hand side of (6) is positive, then stability is proven. Due to $Q > 0$ and $R > 0$, the first term $x^T(k)Qx(k) + u^{*T}(x(k))Ru^*(x(k))$ is positive. In general, it cannot be asserted that the second term $J^*_{N-1}(x(k+1)) - J^*_N(x(k+1))$ is nonnegative.

Several approaches have been presented to assure that the right hand side of (6) is positive; please refer to [1]. The various constraints introduced to guarantee stability (end constraint for all states, end constraint for unstable modes, terminal region, etc.) may lead to feasibility problems. For instance, the terminal equality constraint may become infeasible unless a sufficiently large horizon is used.

**3.4. Open-loop performance objective versus closed loop performance**

In receding horizon control only the first of the computed control moves is implemented, and the remaining ones are discarded. Therefore the sequence of actually implemented control moves may differ significantly from the sequence of control moves calculated at a particular time step. Consequently the finite horizon objective which is minimized may have only a tentative connection with the value of the objective function as it is obtained when the control moves are implemented.

**4. Nonlinear model predictive control**

Linear MPC has been developed, in our opinion, to a stage where it has achieved sufficient maturity to warrant the active interest of researchers in nonlinear control. While linear model predictive control has been popular since the 1970s, the 1990s witnessed steadily increasing attention from control theorists as well as control practitioners in the area of nonlinear model predictive control (NMPC).

The practical interest is driven by the fact that many systems are inherently nonlinear, and today's processes need to be operated under tighter performance specifications. At the same time, more and more constraints, for example from environmental and safety considerations, need to be satisfied. In these cases, linear models are often inadequate to describe the process dynamics, and nonlinear models have to be used. This motivates the use of nonlinear model predictive control.

The system to be controlled is described, or approximated, by the discrete-time model

$$\mathbf{x}(k+1) = f(\mathbf{x}(k), \mathbf{u}(k)), \qquad \mathbf{y}(k) = h(\mathbf{x}(k)), \tag{7}$$

where $f(\cdot)$ is implicitly defined by the originating differential equation and has an equilibrium point at the origin (i.e. $f(0, 0) = 0$). The control and state sequences must satisfy (3).
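Because the model enters only through the prediction step, a receding horizon iteration for the nonlinear model (7) can be sketched exactly like the linear one, with $Ax + Bu$ replaced by a general $f(x, u)$. The particular $f$, horizon, weights, and input grid below are toy assumptions for illustration; real NMPC replaces the enumeration with a nonlinear programming solver:

```python
import itertools

# NMPC sketch: receding horizon with a nonlinear prediction model
# x(k+1) = f(x(k), u(k)), f(0, 0) = 0, as in eq. (7).

def f(x, u):
    """Assumed toy nonlinear plant with an equilibrium at the origin."""
    return 1.1 * x - 0.2 * x**3 + u

def rollout_cost(x0, u_seq, P0=1.0, Q=1.0, R=0.1):
    """Quadratic cost (4) evaluated along the nonlinear prediction."""
    x, J = x0, 0.0
    for u in u_seq:
        J += Q * x * x + R * u * u
        x = f(x, u)                 # nonlinear prediction step
    return J + P0 * x * x

def nmpc_move(x, m=3, u_grid=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    """Naive stand-in for the nonlinear program: enumerate gridded inputs."""
    best = min(itertools.product(u_grid, repeat=m),
               key=lambda seq: rollout_cost(x, seq))
    return best[0]

x = 0.8
for k in range(10):
    x = f(x, nmpc_move(x))          # apply only the first move, then re-solve
```

The on-line optimization is now non-convex in general, which is the central computational difficulty of NMPC compared with the QP of the linear case.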
