**6. Recent developments in stochastic MPC**

Despite the extensive literature on predictive control and robustness to uncertainty, both multiplicative (e.g. parametric) and additive (e.g. exogenous), very little attention has been paid to the case of stochastic uncertainty. Although robust predictive control can handle constrained systems that are subject to stochastic uncertainty, it propagates the effects of uncertainty over a prediction horizon, which can be computationally expensive and conservative. Yet this situation arises naturally in many control applications. The aim of this section is to review some of the recent advances in stochastic model predictive control (SMPC).

The basic SMPC problem is defined in Subsection 6.1, and a review of earlier work is given in Subsection 6.2.

Discrete-Time Model Predictive Control http://dx.doi.org/10.5772/51122 95


#### **6.1. Basic SMPC problem**

Consider the system described by the model

$$x(k+1) = A x(k) + B u(k) + w(k) \tag{15}$$

where *x* ∈ ℝ<sup>*n*</sup> is the state, *u* ∈ ℝ<sup>*m*</sup> is the input, and the disturbances *w*(*k*) are assumed to be independent and identically distributed (i.i.d.), with zero mean, known distribution, and

$$
-\alpha \le w_k \le \alpha \tag{16}
$$

where *α* > 0 and the inequalities apply elementwise. System (15) is subject to the probabilistic constraints

$$P(e_j^T \phi_k \le h_j) \ge p_j, \quad j = 1, \dots, \rho, \qquad \phi_k = G x_{k+1} + F u_k \tag{17}$$

where *G* ∈ ℝ<sup>*ρ*×*n*</sup>, *F* ∈ ℝ<sup>*ρ*×*m*</sup>, and *e<sub>j</sub>*<sup>*T*</sup> denotes the *j*th row of the identity matrix. This formulation covers the cases of state-only, input-only, and mixed state/input constraints, which can be probabilistic (soft) or deterministic (hard), since *p<sub>j</sub>* = 1 can be chosen for some or all *j*. Each row *j* of (17) can be invoked separately, so in this section *ϕ<sub>k</sub>* is taken to be scalar:

$$\phi_k = g^T x_{k+1} + f^T u_k \tag{18}$$
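As an aside, the probability in (17) for a scalar constraint of the form (18) can be estimated empirically by Monte Carlo sampling over the disturbance. In the sketch below, every numerical value (*A*, *B*, *g*, *f*, *h*, the bound *α*, and the level *p*) is an illustrative assumption, with *w* drawn uniformly on [−*α*, *α*], one zero-mean distribution consistent with (16):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data -- none of these values come from the text.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
alpha = np.array([0.05, 0.05])   # disturbance bound, as in (16)
g = np.array([1.0, 0.0])         # constraint acts on the first state
f = np.array([0.0])
h = 1.03                         # constraint level h_j
p = 0.75                         # required satisfaction probability p_j

x = np.array([1.0, 0.0])         # current state x_k
u = np.array([0.0])              # candidate input u_k

# Sample x_{k+1} = A x_k + B u_k + w for many disturbance draws
N = 100_000
w = rng.uniform(-alpha, alpha, size=(N, 2))
x_next = A @ x + B @ u + w            # shape (N, 2)
phi = x_next @ g + f @ u              # phi_k = g^T x_{k+1} + f^T u_k

prob = float(np.mean(phi <= h))       # estimate of P(phi_k <= h)
print(prob >= p)                      # prints True for this (x_k, u_k)
```

For this uniform disturbance the probability is 0.8 analytically (P(*w*<sub>1</sub> ≤ 0.03) with *w*<sub>1</sub> uniform on [−0.05, 0.05]), so the chance constraint holds at level *p* = 0.75.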

The problem is to devise a receding horizon MPC strategy that minimizes the cost

$$J = \sum_{k=0}^{\infty} \mathsf{E}\left(x^T(k) Q x(k) + u^T(k) R u(k)\right) \tag{19}$$

(where E denotes expectation) and guarantees that the closed-loop system is stable, while its state converges to a neighborhood of the origin subject to the constraint (17).
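The expected value in (19) can be approximated by averaging the stage cost over a long closed-loop simulation. The sketch below does this for a fixed, hand-picked linear feedback *u* = *Kx* rather than an MPC law, purely to illustrate the cost; all numerical values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data -- assumed, not taken from the text.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
K = np.array([[-0.5, -1.0]])    # a stabilizing gain chosen by hand
Q = np.eye(2)
R = np.eye(1)
alpha = 0.05                    # w uniform on [-alpha, alpha]^2

# Average the stage cost x^T Q x + u^T R u after a burn-in period;
# this approximates the stationary expected cost in (19).
T, burn = 5000, 500
x = np.zeros(2)
costs = []
for k in range(T):
    u = K @ x                                   # u_k = K x_k
    if k >= burn:
        costs.append(float(x @ Q @ x + u @ R @ u))
    w = rng.uniform(-alpha, alpha, size=2)
    x = A @ x + B @ u + w                       # model (15)

avg_cost = float(np.mean(costs))    # estimate of E(x^T Q x + u^T R u)
```

The closed-loop matrix *A* + *BK* is stable here (spectral radius about 0.97), so the average settles to a small positive value determined by the disturbance level.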

As is common in the literature on probabilistic robustness (e.g. [67]), all stochastic uncertainties are assumed to have bounded support. Not only is this necessary for asserting feasibility and stability, but it also matches the real world more closely than the mathematically convenient Gaussian assumption, which permits *w* to become arbitrarily large (albeit with small probability); noise and disturbances derived from physical processes are finite.
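The distinction can be seen numerically: a Gaussian disturbance occasionally produces samples outside any fixed bound, whereas a bounded-support distribution never does. A small sketch (the sample size and *α* are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.05
N = 1_000_000

w_bounded = rng.uniform(-alpha, alpha, size=N)   # bounded support [-alpha, alpha]
w_gauss = rng.normal(0.0, alpha, size=N)         # Gaussian with comparable scale

print(np.max(np.abs(w_bounded)) <= alpha)   # prints True: support is respected
print(np.max(np.abs(w_gauss)) > alpha)      # prints True: Gaussian tails exceed any bound
```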

#### **6.2. Earlier work**

**Figure 2.** States (*x*<sub>1</sub>, *x*<sub>2</sub>) (method in [66]).

94 Advances in Discrete Time Systems

**Figure 3.** Output *y* and input *u* (method in [66]).


Stochastic model predictive control (SMPC) is emerging as a research area of both practical and theoretical interest.

MPC has proved successful because it attains approximate optimality in the presence of constraints. In addition, RMPC can maintain a satisfactory level of performance and guarantee constraint satisfaction when the system is subject to bounded uncertainty [2]. However, such an approach does not cater for the case in which model and measurement uncertainties are stochastic in nature, subject to some statistical regularity, nor can it handle random uncertainty whose distribution does not have finite support (e.g. normal distributions). RMPC can therefore be conservative, since it ignores information on the probabilistic distribution of the uncertainty.
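One way to see this conservatism is to compare constraint back-offs. A robust approach tightens a constraint by the worst-case disturbance, while a chance constraint at level *p* < 1 needs only the *p*-quantile of the disturbance. A minimal numerical sketch, assuming a scalar disturbance uniform on [−*α*, *α*] (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

alpha = 1.0      # disturbance bound (assumed)
p = 0.9          # required satisfaction probability (assumed)

# Robust (worst-case) back-off: the full bound alpha.
robust_backoff = alpha

# Stochastic back-off: the p-quantile of w, estimated empirically.
# For w uniform on [-alpha, alpha] it equals alpha * (2p - 1) analytically.
w = rng.uniform(-alpha, alpha, size=1_000_000)
prob_backoff = float(np.quantile(w, p))

print(prob_backoff < robust_backoff)   # prints True: chance constraint is less conservative
```

Here the stochastic back-off is 0.8 versus the worst-case 1.0, i.e. the chance constraint admits a strictly larger feasible set.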

It is possible to adopt a stochastic uncertainty description (instead of a set-based description) and develop an MPC algorithm that minimizes the expected value of a cost function. In general, the same difficulties that plagued the set-based approach are encountered here. One notable exception is that, when the stochastic parameters are independent sequences, the true closed-loop optimal control problem can be solved analytically using dynamic programming [68]. In many cases, the expected error may be a more meaningful performance measure than the worst-case error.

SMPC also derives its motivation from the fact that most real-life applications are subject to stochastic uncertainty and have to obey constraints. However, not all constraints are hard (i.e. inviolable), and it may be possible to improve performance by tolerating violations of constraints, provided that the frequency of violations remains within allowable limits; such constraints are called soft constraints (see e.g. [69, 70] or [71, 73]).

These concerns are addressed by stochastic MPC. Early work [74] considered additive disturbances and ignored the presence of constraints. Later contributions [68, 75-78] took constraints into account, but suffered from either excessive computation or a high degree of conservativeness, or did not consider issues of closed-loop stability/feasibility.

An approach that arose in the context of sustainable development [70, 79] overcame some of these difficulties by using stochastic moving average models and equality stability constraints. This was extended to state space models with stochastic output maps and to inequality constraints involving terminal invariant sets [81]. The restriction of model uncertainty to the output map was removed in [81], but the need to propagate the effects of uncertainty over the prediction horizon prevented the statement of results in respect of feasibility. [82] overcomes these issues through an augmented autonomous prediction formulation, and provides a method of handling probabilistic constraints and ensuring closed-loop stability through an extension of the concept of invariance, namely invariance with probability *p*.

Recent work [83, 84] proposed SMPC algorithms that use probabilistic information on additive disturbances in order to minimize the expected value of a predicted cost subject to hard and soft (probabilistic) constraints. Stochastic tubes were used to provide a recursive guarantee of feasibility and thus ensure closed-loop stability and constraint satisfaction. Moreover, the authors of [84] proposed conditions that, for the parameterization of predictions employed, are necessary and sufficient for recursive feasibility, thereby incurring no additional conservatism. The approach was based on state feedback, which assumed that the states are measurable. In practice this is often not the case, and it is then necessary to estimate the state via an observer. The introduction of state estimation into RMPC is well understood and uses lifting to describe the combined system and observer dynamics. In [85], these ideas are extended to include probabilistic information on measurement noise and the unknown initial plant state, extending the approach of [84].

**Applications**

In the next two sections, we will show that many important practical and theoretical problems can be formulated in the MPC framework. Pursuing them will assure MPC of its stature as a vibrant research area, where theory is seen to support practice more directly than in most other areas of control research.

**7. Networked control systems**

In recent years, there has been growing interest in the design of controllers based on networked systems in several areas such as traffic, communication, aviation and spaceflight [86]. A networked control system (NCS) is defined as a feedback control system whose control loops are closed through a real-time network [96, 97], which distinguishes it from traditional control systems. For an overview, readers can refer to [97] and the references therein, which systematically address several key issues (band-limited channels, sampling and delay, packet loss, system architecture) that make NCSs distinct from other control systems.

**7.1. Characteristics of NCSs**

Traditionally, the different components (i.e., sensor, controller, and actuator) in a control system are connected via wired, point-to-point links, and the control laws are designed and operate based on local continuously-sampled process output measurements.

**Advantages.** Communication networks make the transmission of data much easier and provide a higher degree of freedom in the configuration of control systems. Network-based communication allows for easy modification of the control strategy by rerouting signals and for redundant systems that can be activated automatically when component failure occurs. In particular, NCSs allow remote monitoring and adjustment of plants over the Internet. This enables the control system to benefit from the way it retrieves data and reacts to plant fluctuations from anywhere around the world at any time; see, for example, [98-101].

**Disadvantages.** Although the network makes it convenient to control large distributed systems, new issues arise in the design of an NCS. Augmenting existing control networks with real-time wired or wireless sensor and actuator networks challenges many of the assumptions made in the development of traditional process control methods dealing with dynamical systems linked through ideal channels with flawless, continuous communication. In the context of networked control systems, key issues that need to be carefully handled at the control system design level include data losses due to field interference and time delays due to network traffic, as well as the potentially heterogeneous nature of the additional measurements (for example, continuous, asynchronous and delayed) [102]. These issues deteriorate the performance and may even cause the system to become unstable.
