#### **3.2. Feasibility**

The constraints stipulated in (3) may render the optimization problem infeasible. It may happen, for example, that a disturbance pushes the state to a point from which the optimization problem posed above cannot be solved at a particular time step. It may also happen that the algorithm, which minimizes an open-loop objective, inadvertently drives the closed-loop system outside the feasible region.
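The first failure mode can be illustrated with a one-step feasibility check. The scalar system, bounds, and numbers below are purely hypothetical, chosen so that a modest disturbance-induced state already makes every admissible input violate the state constraint:

```python
import numpy as np

# Hypothetical scalar system x(k+1) = a*x(k) + b*u(k) with constraints
# |x| <= x_max and |u| <= u_max; all numbers are illustrative only.
a, b = 2.0, 1.0
x_max, u_max = 1.0, 0.1

def one_step_feasible(x):
    """Check whether any admissible input keeps the next state feasible."""
    # Interval of reachable next states under |u| <= u_max.
    lo, hi = a * x - b * u_max, a * x + b * u_max
    # Feasible iff that interval intersects [-x_max, x_max].
    return lo <= x_max and hi >= -x_max

print(one_step_feasible(0.4))  # True:  2*0.4 - 0.1 = 0.7 <= 1
print(one_step_feasible(0.6))  # False: even u = -u_max gives 2*0.6 - 0.1 = 1.1 > 1
```

A disturbance that moves the state from 0.4 to 0.6 thus turns a feasible problem into an infeasible one, even though nothing about the controller changed.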

#### **3.3. Closed loop stability**

In either the infinite or the finite horizon constrained case it is not clear under what conditions the closed loop system is stable. Much research on linear MPC has focused on this problem. Two approaches have been proposed to guarantee stability: one based on the original problem (1), (2), and (3), and the other where a contraction constraint is added [9, 10]. With the contraction constraint the norm of the state is forced to decrease with time, and stability follows trivially, independent of the various parameters in the objective function. Without the contraction constraint the stability problem is more complicated.
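The contraction argument can be sketched numerically. In the toy loop below the constraint $\|x(k+1)\| \le \alpha \|x(k)\|$ with $\alpha < 1$ is what delivers the geometric decay; the system and all numbers are made up for illustration, and the input rule is only a stand-in for whatever the optimizer would return:

```python
import numpy as np

# Illustrative sketch: a contraction constraint |x(k+1)| <= alpha*|x(k)|,
# alpha < 1, forces the state norm down geometrically regardless of the
# objective weights. Scalar system x(k+1) = a*x(k) + u(k); numbers invented.
a, alpha = 1.5, 0.8
x = 4.0
norms = [abs(x)]
for _ in range(20):
    # Among inputs satisfying |a*x + u| <= alpha*|x|, take the one of least
    # magnitude (the constraint, not the objective, yields stability here).
    target = np.clip(a * x, -alpha * abs(x), alpha * abs(x))
    u = target - a * x
    x = a * x + u
    norms.append(abs(x))

# Geometric decay: |x(k)| <= alpha^k * |x(0)|.
assert all(n2 <= alpha * n1 + 1e-12 for n1, n2 in zip(norms, norms[1:]))
```

Note that stability holds here even though the open-loop system (a = 1.5) is unstable and no weighting matrices appear anywhere in the argument.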

General proofs of stability for constrained MPC based on the monotonicity property of the value function have been proposed in [11] and [12]. The most comprehensive and also most compact analysis has been presented in [13] and [14], whose arguments we sketch here.

To simplify the exposition we assume $p = m = N$; then $J(p, m) = J_N$ as defined in eq. (2). The key idea is to use the optimal finite horizon cost $J_N^*$, the value function, as a Lyapunov function. One wishes to show that

$$J_N^*(\mathbf{x}(k)) - J_N^*(\mathbf{x}(k+1)) \ge 0, \quad \text{for } \mathbf{x}(k) \neq 0 \tag{5}$$

Rewriting $J_N^*(\mathbf{x}(k)) - J_N^*(\mathbf{x}(k+1))$ gives,

$$\begin{aligned} J_N^*(\mathbf{x}(k)) - J_N^*(\mathbf{x}(k+1)) &= \left[ \mathbf{x}^T(k) Q \mathbf{x}(k) + \mathbf{u}^{*T}(\mathbf{x}(k)) R \mathbf{u}^*(\mathbf{x}(k)) \right] \\ &\quad + \left[ J_{N-1}^*(\mathbf{x}(k+1)) - J_N^*(\mathbf{x}(k+1)) \right] \end{aligned} \tag{6}$$

If it can be shown that the right hand side of (6) is positive, then stability is proven. Since $Q > 0$ and $R > 0$, the first term $\mathbf{x}^T(k) Q \mathbf{x}(k) + \mathbf{u}^{*T}(\mathbf{x}(k)) R \mathbf{u}^*(\mathbf{x}(k))$ is positive. In general, it cannot be asserted that the second term $J_{N-1}^*(\mathbf{x}(k+1)) - J_N^*(\mathbf{x}(k+1))$ is nonnegative.
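In the unconstrained LQ case the value function is quadratic, $J_N^*(x) = x^T P_N x$, with $P_N$ generated by the backward Riccati recursion, so the decomposition in (6) can be verified numerically. The matrices, horizon, and terminal weight below are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Unconstrained finite-horizon LQ: J_N^*(x) = x^T P_N x, with P_N from the
# backward Riccati recursion. We use it to check identity (6) numerically.
# A, B, Q, R and the terminal weight are illustrative data only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[0.5]])

def riccati(N, P_terminal):
    """Return P[j] for horizons j = 0..N, so J_j^*(x) = x^T P[j] x."""
    P = [P_terminal]
    for _ in range(N):
        Pk = P[-1]
        K = np.linalg.solve(R + B.T @ Pk @ B, B.T @ Pk @ A)
        P.append(Q + A.T @ Pk @ A - A.T @ Pk @ B @ K)
    return P

N = 5
P = riccati(N, Q)  # terminal weight set to Q purely for illustration
x = np.array([[1.0], [-0.5]])

# Optimal first move and successor state for the horizon-N problem.
K = np.linalg.solve(R + B.T @ P[N - 1] @ B, B.T @ P[N - 1] @ A)
u = -K @ x
x_next = A @ x + B @ u

lhs = (x.T @ P[N] @ x - x_next.T @ P[N] @ x_next).item()
stage = (x.T @ Q @ x + u.T @ R @ u).item()
tail = (x_next.T @ P[N - 1] @ x_next - x_next.T @ P[N] @ x_next).item()
assert abs(lhs - (stage + tail)) < 1e-10  # identity (6) holds
```

The identity holds by construction here (it is just the Bellman equation); the difficulty in the constrained case is establishing the sign of the tail term, not the identity itself.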

Several approaches have been presented to assure that the right hand side of (6) is positive; see [1]. The various constraints introduced to guarantee stability (end constraint for all states, end constraint for unstable modes, terminal region, etc.) may lead to feasibility problems. For instance, the terminal equality constraint may become infeasible unless a sufficiently large horizon is used.
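The interaction between the terminal equality constraint and the horizon length is easy to see on a toy integrator with bounded input; the system and numbers below are invented for illustration:

```python
# Terminal equality constraint x(N) = 0 for the integrator x(k+1) = x(k) + u(k)
# with |u(k)| <= u_max: the state can move at most u_max per step, so from
# x(0) = 5 the constraint is feasible only for horizons N >= 5. Illustrative
# numbers only.
def terminal_constraint_feasible(x0, N, u_max=1.0):
    """Can x(N) = 0 be met? For this integrator the test is |x0| <= N * u_max."""
    return abs(x0) <= N * u_max

print([terminal_constraint_feasible(5.0, N) for N in range(3, 8)])
# [False, False, True, True, True]
```

For N below 5 the terminal constraint cannot be satisfied from this initial state, so the optimization problem has an empty feasible set no matter how the objective is weighted.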

#### **3.4. Open-loop performance objective versus closed loop performance**

In receding horizon control only the first of the computed control moves is implemented, and the remaining ones are discarded. The sequence of actually implemented control moves may therefore differ significantly from the sequence of control moves calculated at any particular time step. Consequently, the finite horizon objective that is minimized may bear only a tenuous connection to the value of the objective function obtained when the control moves are actually implemented.
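This divergence between the open-loop plan and the implemented closed-loop moves can be sketched with a scalar receding-horizon loop under a disturbance. The dead-beat "planner" below is only a stand-in for the optimization in (1)-(3), and all numbers are illustrative:

```python
import numpy as np

# Receding-horizon sketch on a scalar system x(k+1) = a*x(k) + u(k) + w(k):
# only the first planned move is applied, then the plan is recomputed. With a
# disturbance w(k), the implemented moves drift away from the plan made at
# k = 0. All numbers are illustrative.
a, N = 1.2, 4

def plan(x):
    """Open-loop N-step plan driving the nominal state to zero (a dead-beat
    stand-in for the optimizer; the exact controller is not the point)."""
    moves = []
    for _ in range(N):
        u = -a * x          # dead-beat: nominal x(k+1) = 0
        moves.append(u)
        x = a * x + u
    return moves

rng = np.random.default_rng(0)
x = 3.0
initial_plan = plan(x)          # sequence computed at time 0
implemented = []
for k in range(N):
    u = plan(x)[0]              # receding horizon: apply only the first move
    implemented.append(u)
    x = a * x + u + 0.3 * rng.standard_normal()  # disturbance w(k)

print(implemented[0] == initial_plan[0])  # True: first moves coincide
print(implemented[1] == initial_plan[1])  # False: the plans diverge after that
```

The first implemented move matches the time-0 plan by definition, but every subsequent move is recomputed from a disturbed state, so the cost actually incurred need not match the open-loop objective that was minimized.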
