**Applications**

tions). Therefore RMPC can be conservative, since it ignores information on the probabilistic distribution of the uncertainty.

It is possible to adopt a stochastic uncertainty description (instead of a set-based description) and develop an MPC algorithm that minimizes the expected value of a cost function. In general, the same difficulties that plagued the set-based approach are encountered here. One notable exception is that, when the stochastic parameters are independent sequences, the true closed-loop optimal control problem can be solved analytically using dynamic programming [68]. In many cases, the expected error may be a more meaningful performance measure than the worst-case error.

Interest in SMPC also derives from the fact that most real-life applications are subject to stochastic uncertainty and have to obey constraints. However, not all constraints are hard (i.e. inviolable), and it may be possible to improve performance by tolerating violations of constraints, provided that the frequency of violations remains within allowable limits; such constraints are called soft constraints (see e.g. [69, 70] or [71, 73]).

These concerns are addressed by stochastic MPC. Early work [74] considered additive disturbances and ignored the presence of constraints. Later contributions [68, 75-78] took constraints into account, but suffered from either excessive computation or a high degree of conservativeness, or did not consider issues of closed-loop stability and feasibility.

An approach that arose in the context of sustainable development [70, 79] overcame some of these difficulties by using stochastic moving-average models and equality stability constraints. This was extended to state-space models with stochastic output maps and to inequality constraints involving terminal invariant sets [81]. The restriction of model uncertainty to the output map was removed in [81], but the need to propagate the effects of uncertainty over the prediction horizon prevented the statement of results in respect of feasibility. These issues are overcome in [82] through an augmented autonomous prediction formulation, which provides a method of handling probabilistic constraints and ensures closed-loop stability through an extension of the concept of invariance, namely invariance with probability *p*.

Recent work [83, 84] proposed SMPC algorithms that use probabilistic information on additive disturbances in order to minimize the expected value of a predicted cost subject to hard and soft (probabilistic) constraints. Stochastic tubes were used to provide a recursive guarantee of feasibility and thus ensure closed-loop stability and constraint satisfaction. Moreover, the authors of [84] proposed conditions that, for the parameterization of predictions employed, are necessary and sufficient for recursive feasibility, thereby incurring no additional conservatism. The approach was based on state feedback, which assumed that the states are measurable. In practice this is often not the case, and it is then necessary to estimate the state via an observer. The introduction of state estimation into RMPC is well understood and uses lifting to describe the combined system and observer dynamics. In [85], these ideas are extended to include probabilistic information on measurement noise and the unknown initial plant state, extending the approach of [84].
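The idea of tolerating constraint violations up to an allowable frequency can be illustrated with a minimal Monte Carlo sketch. This is a simplified illustration only, not a reconstruction of any of the cited algorithms: the scalar dynamics, noise level, and acceptance threshold below are all hypothetical. A candidate input sequence is accepted if the empirical violation rate of a state constraint, estimated over sampled disturbance realizations, stays within the allowed level.

```python
import numpy as np

def empirical_violation_rate(A, B, x0, u_seq, w_samples, x_max):
    """Estimate the probability that the state of the scalar linear system
    x+ = A*x + B*u + w exceeds x_max somewhere along the horizon, by
    Monte Carlo over sampled disturbance trajectories."""
    violations = 0
    for w_traj in w_samples:          # each row: one disturbance realization
        x = x0
        for u, w in zip(u_seq, w_traj):
            x = A * x + B * u + w
            if x > x_max:             # state constraint violated on this run
                violations += 1
                break
    return violations / w_samples.shape[0]

# Hypothetical problem data (illustrative values only).
rng = np.random.default_rng(0)
A, B = 0.9, 1.0
x0, x_max = 0.0, 1.0
u_seq = [0.1, 0.1, 0.1]               # candidate input sequence
w_samples = rng.normal(0.0, 0.05, size=(2000, len(u_seq)))

p_viol = empirical_violation_rate(A, B, x0, u_seq, w_samples, x_max)

# Soft-constraint test: accept the candidate only if the estimated
# violation frequency stays within the allowed limit.
allowed = 0.1
feasible = p_viol <= allowed
```

In a sampling-based SMPC scheme this kind of check would appear as a probabilistic constraint inside the online optimization; the sketch above only shows the acceptance test itself, for a fixed candidate input sequence.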

In the next two sections, we will show that many important practical and theoretical prob‐ lems can be formulated in the MPC framework. Pursuing them will assure MPC of its stat‐ ure as a vibrant research area, where theory is seen to support practice more directly than in most other areas of control research.
