optimization problem. However, this issue can be circumvented by introducing a terminal constraint for the state in the form of a Maximal Output Admissible Set (MAS) (Gilbert & Tan, 1991). This problem will be tackled in section 4. For now, it is sufficient to state that, given some reasonable assumptions on the plant dynamics (Rawlings & Muske, 1993), there exists a finite horizon within which enforcement of the constraints leads to enforcement of the constraints over an infinite horizon.

#### **3. Constraint relaxation approaches**

#### **3.1 Minimal-time approach**

Minimal-time approaches allow constraint violations during a certain period, whose length is to be minimized; there is, however, no commitment to reducing the peaks of the violations within this period. These are, respectively, the main advantage and the main drawback of such methods. The violations are usually allowed to take place at the beginning of the control task, which reduces the time taken to achieve feasibility at the cost of degrading the transient response of the control loop. Scokaert & Rawlings (1999) introduce a minimal-time approach that treats the peak constraint violation as a secondary objective, after the minimization of the time needed to enforce the constraints. This avoids unnecessarily large peak violations.

Since control constraints are usually physical ones, one possibility is to always enforce them while relaxing the operating constraints on the state. In this way, the problem always becomes feasible. One algorithm that implements a solution of this type may be stated as:

```
Data: x(k)
Result: Optimized control sequence V̂*
Solve constrained MPC problem;
if infeasible then
   Remove constraints on the state;
   Solve MPC problem;
   Find κ = κ_unc, which is the first instant at which the state constraints are all enforced;
else
   Employ obtained control sequence;
   Terminate.
end
while feasible do
   κ ← κ − 1;
   Solve MPC problem with state constraints enforced from time κ until the end of the prediction horizon;
end
Employ last feasible control sequence;
Terminate.
```
**Algorithm 1:** Minimal-time algorithm

This algorithm determines the smallest time window over which the state constraints must be removed at the beginning of the prediction horizon in order to attain feasibility.
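Algorithm 1 can be sketched in Python as follows. The routine `solve(kappa)` is a hypothetical stand-in for the MPC solver: it attempts the problem with state constraints enforced only from step `kappa` onward and returns `None` when infeasible. For simplicity, the sketch starts the search from the fully relaxed horizon rather than from κ_unc; both searches reach the same smallest feasible κ.

```python
def minimal_time_mpc(solve, N):
    """Sketch of Algorithm 1. `solve(kappa)` is a hypothetical routine that
    attempts the MPC problem with state constraints enforced from step kappa
    to the end of the horizon, returning a control sequence or None."""
    v = solve(0)                       # fully constrained problem
    if v is not None:
        return v, 0                    # already feasible: employ it
    kappa, best = N, solve(N)          # state constraints fully removed
    while kappa > 0:
        cand = solve(kappa - 1)        # enforce constraints one step earlier
        if cand is None:
            break                      # infeasible: keep last feasible result
        kappa, best = kappa - 1, cand
    return best, kappa                 # smallest kappa that remains feasible

# Toy stand-in: pretend the problem is feasible only when the first 3 steps
# are left unconstrained.
fake_solve = lambda kappa: "plan" if kappa >= 3 else None
```

With `fake_solve`, the search returns κ = 3, the smallest window over which the state constraints must be removed.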

#### **3.2 Soft-constraint approach**


In this approach, the cost function is modified to include a penalty on the violation of the operating constraints. In this way, a compromise is achieved between the duration and peak values of the violations, as well as the performance of the control loop. Scokaert & Rawlings (1999) propose penalizing the sum of the squared violation values instead of the peak as a means of reducing the duration of the violations. This can be accomplished by simply adding slack variables to the state/output constraints of Eq. (7) in case of infeasibility and adding a term to the right-hand side of Eq. (8), as follows:

$$J_{Soft} = \sum_{i=0}^{N-1} \vartheta^T(k+i|k)\,\Psi\,\vartheta(k+i|k) + \varepsilon_p^T W_{\varepsilon_p}\varepsilon_p + \varepsilon_n^T W_{\varepsilon_n}\varepsilon_n \tag{21}$$

$$x_{P,\min} - x_{ref} - \varepsilon_n \le x \le x_{P,\max} - x_{ref} + \varepsilon_p, \qquad \varepsilon_p,\ \varepsilon_n \ge 0 \tag{22}$$

where *W*<sub>*εp*</sub> and *W*<sub>*εn*</sub> are positive-definite weight matrices. The additional restrictions *ε<sub>p</sub>*, *ε<sub>n</sub>* ≥ 0 impose that the constraints are not made more restrictive than their original settings.

With the cost function of Eq. (21) subject to the constraints of Eq. (22), the amount by which each constraint is prioritized can be tuned by the choice of the weight matrices.

To this end, a rule of thumb known as "Bryson's rule" (Franklin et al., 2005), (Bryson & Ho, 1969) can be used as a guideline. It states that one may use the limits of the variables as parameters to choose their weights in the cost function so that their contribution is normalized. Therefore, the weights must be chosen so that the product between the admissible range (maximum value - minimum value) and the weight is approximately the same for all variables. However, in the present case, it is desirable that deviations of the slack variables from zero are more penalized than control deviations in order to enforce the constraints when possible. Therefore, it is reasonable to choose the weights for these variables an order of magnitude greater than the values obtained via Bryson's rule.
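As a rough numeric illustration of this guideline (the variable ranges below are toy values, not taken from the chapter's example): the weights are chosen so that the product of each variable's admissible range and its weight is the same for all variables, and the slack-variable weights are then scaled up by an order of magnitude, as suggested in the text.

```python
# Bryson-style normalization sketch: weight * admissible range is constant.
ranges = {"x1": 1.0, "x2": 0.2, "u": 0.02}        # max - min (toy values)
weights = {name: 1.0 / r for name, r in ranges.items()}

# Penalize slack variables an order of magnitude more than the controls,
# so the constraints are enforced whenever possible (factor 10 is an
# assumption following the rule of thumb above).
slack_weights = {name: 10.0 * w for name, w in weights.items()}
```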

Scokaert & Rawlings (1999) discuss the inclusion of a linear penalty term on the slack variables as a means of obtaining exact relaxations, i.e., the controller relaxes the constraints only when necessary. This can be achieved by tuning the weights of this term based on the Lagrange multipliers associated with the constrained minimization problem. An advantage of penalizing the square of the slack variables, however, is that the choice of a positive-definite weight matrix leads to a well-posed quadratic program, since the associated Hessian is positive definite.
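A one-dimensional toy problem (assumed values, not the chapter's example) illustrates the quadratic slack penalty: for a fixed control the optimal slack equals the current constraint violation, so the slack can be eliminated and the resulting cost minimized over the control alone. The quadratic penalty keeps the problem well posed for any positive weight.

```python
import numpy as np

def soft_step(x0, xmax, umax, W):
    """Minimize u**2 + W*eps**2 s.t. x0 + u <= xmax + eps, eps >= 0, |u| <= umax.
    For fixed u the optimal slack is eps = max(0, x0 + u - xmax); substitute
    it and minimize the resulting piecewise-quadratic cost over a grid of u."""
    us = np.linspace(-umax, umax, 200001)
    eps = np.maximum(0.0, x0 + us - xmax)
    i = int(np.argmin(us**2 + W * eps**2))
    return float(us[i]), float(eps[i])

u, eps = soft_step(x0=1.5, xmax=0.5, umax=0.5, W=100.0)
# The control saturates at its bound and the slack absorbs the remaining
# violation, instead of the problem becoming infeasible.
```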

#### **3.3 Hard constraint relaxation with prioritization**

There are methods which relax the operating constraints, possibly according to a priority list, in order to achieve feasibility of the optimization problem. Various techniques employ such policies, some of which resort to optimization problems run in parallel with the MPC optimization in order to determine the minimum relaxation necessary to achieve feasibility. Along these lines, the priority list can be explored by solving many Linear Programming (LP) problems, relaxing the constraints of lower priority until feasibility is achieved, or by solving a single LP problem online as proposed by Vada et al. (2001). In their work, the weights of the slack variables that relax the constraints are computed offline.

The calculated weights have the property of relaxing the constraints according to the defined priority in a single LP problem.

Fig. 2. Position (*x*1) with constraint relaxation.

Fig. 3. Velocity (*x*2) with constraint relaxation.
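The sequential-LP variant of the priority-list policy can be sketched as follows, with a user-supplied feasibility check standing in for the LP solver (all names are illustrative).

```python
def relax_by_priority(constraints, is_feasible):
    """Drop constraints from lowest to highest priority until the remaining
    set is feasible. `constraints` is ordered from highest to lowest
    priority; `is_feasible` stands in for an LP feasibility check (assumed
    interface, illustrative only)."""
    active = list(constraints)
    while active and not is_feasible(active):
        active.pop()                   # relax the lowest-priority constraint
    return active

# Toy check: suppose only the two highest-priority constraints fit together.
kept = relax_by_priority(["c1", "c2", "c3", "c4"], lambda cs: len(cs) <= 2)
```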

#### **3.4 Simulation example**

This example is based on a double integrator model with a sampling period of 1 time unit. Double integrators can be used to model a number of real-world systems, such as a vehicle moving in an environment where friction is negligible (in space, for instance).

The discrete-time model matrices are:

$$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 0.5 \\ 1 \end{bmatrix} \tag{23}$$

and the LQR weight matrices are:

$$Q_{lqr} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad R_{lqr} = 1 \tag{24}$$

The control and prediction horizons were set to *M* = 7 and *N* = 20, respectively.

The constraints are: −0.5 ≤ *x*<sub>1</sub> ≤ 0.5 (position), −0.1 ≤ *x*<sub>2</sub> ≤ 0.1 (velocity) and −0.01 ≤ *u* ≤ 0.01 (acceleration).
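The model, LQR weights, and constraints above can be set up directly; a plain fixed-point iteration of the discrete Riccati equation (a sketch — a dedicated DARE solver would serve equally well) yields the unconstrained LQR gain around which the controller operates.

```python
import numpy as np

# Double integrator of Eq. (23) and LQR weights of Eq. (24).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

# Constraints of the example (position, velocity, acceleration).
x_min, x_max = np.array([-0.5, -0.1]), np.array([0.5, 0.1])
u_min, u_max = -0.01, 0.01

# Fixed-point iteration of the discrete algebraic Riccati equation.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

eigs = np.linalg.eigvals(A - B @ K)    # closed loop should be stable
```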

A comparison between the results obtained with a minimal-time solution and a soft constraint approach is presented. Two choices of weight matrices were considered:

$$W_{\varepsilon_n}^{1} = W_{\varepsilon_p}^{1} = W^{1} = \begin{bmatrix} 10 & 0 \\ 0 & 20 \end{bmatrix}, \quad W_{\varepsilon_n}^{2} = W_{\varepsilon_p}^{2} = W^{2} = \begin{bmatrix} 100 & 0 \\ 0 & 10000 \end{bmatrix} \tag{25}$$

The application of Bryson's rule to adjust the weight matrices would require the definition of an acceptable violation of the constraints, which could be established as the difference between physical and operating state constraints. However, since this example does not discriminate between these two types of constraints, the *W*<sup>1</sup> and *W*<sup>2</sup> matrices were chosen for the sole purpose of illustrating the effect of varying the weights.

The initial state of the system is *x*<sub>0</sub> = [1.5 0]<sup>*T*</sup>, which violates the constraints on *x*<sub>1</sub>.

The first comparison involves the two infeasibility handling techniques (minimal-time and soft constraint). For this purpose, the *W*<sup>1</sup> weight matrix was employed. Figures 2 and 3 show the resulting state trajectories. It can be seen that the minimal-time approach leads to a faster recovery of feasibility, whereas the soft constraint approach takes longer to enforce all the constraints. This result can also be associated with the control profile presented in Fig. 4: the control obtained with the minimal-time approach reverses its sign earlier than the one obtained with the soft constraint approach.

The second comparison involves three scenarios: no state constraints and the soft constraint approach with weights *W*<sup>1</sup> and *W*<sup>2</sup>. Figures 5, 6 and 7 show the resulting state and control trajectories. As can be seen, a reduction in the weights tends to generate a solution closer to the unconstrained case. In fact, smaller weights on the slack variables result in a smaller penalization of the constraint violations. In the limit, if the weights are made equal to zero, the constraints can be relaxed as much as needed and therefore the unconstrained optimal solution is obtained.

Fig. 4. Acceleration (*u*) with constraint relaxation.

Fig. 5. Position (*x*1) without state constraints and with soft constraint relaxation.

Fig. 6. Velocity (*x*2) without state constraints and with soft constraint relaxation.

Fig. 7. Acceleration (*u*) without state constraints and with soft constraint relaxation.


#### **4. Setpoint management approaches**

The main idea behind setpoint management schemes is to find a new setpoint *x̃<sub>ref</sub>*(*k*) = *x<sub>ref</sub>*(*k*) − *Cμ* at each time *k* in order to make the problem feasible and to progressively steer the system state towards the original setpoint *x<sub>ref</sub>*. Here, *μ* ∈ **R**<sup>*q*</sup> is the setpoint management variable and *C* ∈ **R**<sup>*n*×*q*</sup> is a constant matrix. It is worth noting that, in the general case, changing the setpoint *x<sub>ref</sub>* would also affect the corresponding setpoint *u<sub>ref</sub>* for the control. As a result, the bounds on the control *u* would need to be changed, which would require the online recalculation of the terminal constraint set. Therefore, the class of systems considered in this study is restricted to those which require no adjustment of the control setpoint after a change in the state setpoint. This is a property of plants with integral behavior.

It is worth noting that these setpoint modifications require the MAS to be redetermined every time the value of *μ* changes. The approach presented in the following subsection introduces a parameterization of the MAS in terms of the possible values of *μ*, avoiding the need to repeat the determination of the terminal set online.

#### **4.1 Parameterization of the MAS**

The parameterization of the MAS may be carried out by employing an augmented state vector *x̄*, defined as (Almeida & Leissling, 2010)

$$\bar{x} = \begin{bmatrix} x \\ \mu \end{bmatrix}, \tag{26}$$

which evolves inside the MAS according to

$$\bar{x}(k+1) = \bar{A}\,\bar{x}(k), \quad \bar{A} = \begin{bmatrix} A - BK & 0 \\ 0 & I_q \end{bmatrix}. \tag{27}$$

It is worth noting that the identity matrix *I<sub>q</sub>* ∈ **R**<sup>*q*×*q*</sup> multiplies the additional components of the state because these are supposed to remain constant along the prediction horizon. Although *Ā* has eigenvalues on the border of the unit circle (eigenvalues at +1, associated with the block *I<sub>q</sub>*), it is still possible to determine the MAS in a finite number of steps because the dynamics given by Eq. (27) are stable in the Lyapunov sense (Gilbert & Tan, 1991).

The state constraints are altered by the management variable *μ* in the following fashion:

$$x_{P,\min} - x_{ref} + C\mu \le x \le x_{P,\max} - x_{ref} + C\mu \tag{28}$$

where *C* ∈ **R**<sup>*n*×*q*</sup> is a matrix that relates the vector *μ* ∈ **R**<sup>*q*</sup> of setpoint management variables to the corresponding components of the state vector *x* ∈ **R**<sup>*n*</sup> whose setpoints are managed.
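A small numeric sketch of Eq. (28), with toy values for two states and one managed setpoint (*q* = 1): the management variable shifts the admissible box only along the component selected by *C*.

```python
import numpy as np

# Toy values: n = 2 states, one managed setpoint (q = 1).
x_min, x_max = np.array([-1.0, -0.1]), np.array([1.0, 0.1])
x_ref = np.zeros(2)
C = np.array([1.0, 0.0])               # mu moves the position setpoint only
mu = 0.3

lower = x_min - x_ref + C * mu         # shifted bounds of Eq. (28)
upper = x_max - x_ref + C * mu
```

Only the position bounds move with *μ*; the velocity bounds are unaffected.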

In order to incorporate the constraints into the parameterization, an auxiliary output variable *z̄* may be defined as

$$\overline{z} = \begin{bmatrix} \mathbf{x} - \mathbf{C}\mu \\ -\mathbf{x} + \mathbf{C}\mu \end{bmatrix} \tag{29}$$

which is subject to the following constraints:

$$\bar{z} \le \begin{bmatrix} x_{P,\max} - x_{ref} \\ x_{ref} - x_{P,\min} \end{bmatrix} \tag{30}$$

Since *u* = −*Kx* inside the MAS, the output function for the determination of the MAS becomes *z*¯ = *C*¯*x*¯ with

$$\bar{C} = \begin{bmatrix} I_n & -C \\ -I_n & C \end{bmatrix} \tag{31}$$
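The augmented matrices of Eqs. (27) and (31) can be assembled directly, here for *n* = 2 states and *q* = 1 managed setpoint; the gain *K* below is an assumed placeholder, and *C* selects the position component.

```python
import numpy as np

# Augmented matrices for n = 2 states and q = 1 managed setpoint.
n, q = 2, 1
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
K = np.array([[0.5, 1.0]])             # placeholder stabilizing gain (assumed)
C = np.array([[1.0], [0.0]])           # mu moves the position setpoint only

A_bar = np.block([[A - B @ K, np.zeros((n, q))],
                  [np.zeros((q, n)), np.eye(q)]])   # Eq. (27): mu held constant
C_bar = np.block([[np.eye(n), -C],
                  [-np.eye(n), C]])                 # Eq. (31): output z_bar
```

The spectrum of `A_bar` is the closed-loop spectrum of *A* − *BK* plus an eigenvalue at +1 for the *μ* block, matching the Lyapunov-stability remark above.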

Having determined the MAS (**Ō**<sub>∞</sub>) associated with the dynamics of Eq. (27) and the constraints of Eq. (30), it can be particularized online by fixing the value of *μ*. The set **Ō**<sub>∞</sub> thus obtained is invariant with respect to the matrix *Ā*. It is convenient to note that the terminal constraint *x̂*(*k* + *N*|*k*) ∈ **Ō**<sub>∞</sub> for a particular choice of *μ* can replace the constraints from *i* = *N* onwards in Eqs. (14) and (15). Imposing *x̂*(*k* + *N*|*k*) ∈ **Ō**<sub>∞</sub> is equivalent to imposing the constraints *û*(*k* + *i*|*k*) ∈ **U** and *x̂*(*k* + *i*|*k*) ∈ **X** until *i* = *N* + *t*<sup>∗</sup>, with *t*<sup>∗</sup> obtained during the offline determination of the parameterized MAS. Therefore, the infinite set of constraints of Eqs. (14) and (15) is reduced to a finite one.

#### **4.2 Optimization problem formulation**

Considering the setpoint management, the optimization problem to be solved at time *k* now involves *V̂* and *μ* as decision variables. Thus, the optimization problem becomes

$$\min_{\hat{V},\,\mu}\ \hat{V}^T \Psi \hat{V} + \mu^T W_{\mu}\,\mu \tag{32}$$

s.t.


$$\begin{bmatrix} H_U \\ -H_U \\ H \\ -H \end{bmatrix} \hat{V} \le \begin{bmatrix} \left[u_{\max} - u_{ref}\right]_{N+t^*+1} - \Phi_U(x_P(k) - x_{ref} + C\mu) \\ \Phi_U(x_P(k) - x_{ref} + C\mu) - \left[u_{\min} - u_{ref}\right]_{N+t^*+1} \\ \left[x_{P,\max} - x_{ref} + C\mu\right]_{N+t^*} - \Phi(x_P(k) - x_{ref} + C\mu) \\ \Phi(x_P(k) - x_{ref} + C\mu) - \left[x_{P,\min} - x_{ref} + C\mu\right]_{N+t^*} \end{bmatrix}$$

where *W<sub>μ</sub>* is a positive-definite weight matrix, the operator [•]<sub>*j*</sub> stacks *j* copies of the vector •, and *H*, *H<sub>U</sub>*, Φ and Φ<sub>*U*</sub> are in accordance with Eq. (20).
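The stacking operator [•]<sub>*j*</sub> used above can be realized with a one-liner (a sketch, using NumPy's `tile`):

```python
import numpy as np

def stack(v, j):
    """The [.]_j operator of Eq. (32): stack j copies of a column vector."""
    return np.tile(np.asarray(v, dtype=float).reshape(-1, 1), (j, 1))

b = stack([1.0, 2.0], 3)               # column containing [1, 2, 1, 2, 1, 2]
```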

The greater the weights in *W<sup>μ</sup>* in comparison to Ψ, the closer the solution is to the one obtained without the need of setpoint management.

After the solution of the optimization problem of Eq. (32), the control signal to be applied to the plant is given by

$$u_P(k) = u_{ref} + \hat{v}^*(k|k) - K\left(x_P(k) - x_{ref} + C\mu^*\right) \tag{33}$$

Fig. 9. Velocity (*x*2) with setpoint management.

Fig. 10. Acceleration (*u*) with setpoint management.
#### **4.3 Simulation example**

The simulation scenario employed in this example is the same as that of subsection 3.4. Only the constraints on the position variable are different (−1 ≤ *x*<sub>1</sub> ≤ 1). The determination of the MAS leads to *t*<sup>∗</sup> = 7, and *M* remains equal to 7. Therefore, the constraint horizon required to guarantee that the constraints are enforced over an infinite horizon is *N* = *M* + *t*<sup>∗</sup> = 14.

The initial state is *x*<sub>0</sub> = [1 0]<sup>*T*</sup>, which respects the constraints. However, the problem is infeasible, making it mandatory to employ a feasibility-recovery technique such as the procedure described in this section. The setpoint of the position is chosen for management, meaning that *μ* ∈ **R** and

$$\mathbf{C} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \tag{34}$$

#### **4.3 Simulation example**

The simulation scenario employed in this example is the same as that of subsection 3.4. Only the constraints on the position variable are different (−1 ≤ *x*1 ≤ 1). The determination of the MAS leads to *t*∗ = 7, and *M* remains equal to 7. Therefore, the constraint horizon required to guarantee that the constraints are enforced over an infinite horizon is *N* = *M* + *t*∗ = 14.

The initial state is *x*0 = [1 0]<sup>T</sup>, which respects the constraints. However, the problem is infeasible, making the employment of a technique to recover feasibility mandatory. The procedure described in this section can be used to recover feasibility. The setpoint of the position is chosen for management, meaning that *μ* ∈ **R** and

*C* = [1 0] (34)

It is desirable to keep the setpoint management variable as close to zero as possible. To this end, the weight of the setpoint management variable is chosen as *W<sup>μ</sup>* = 1000.

Figure 8 shows the position variable, which starts at the edge of the constraint and is steered to the origin without violating the constraints.

Fig. 8. Position (*x*1) with setpoint management.

It can be seen in Fig. 9 that the velocity variable gets close to its lower bound (−0.1), but this constraint is also satisfied. Figure 10 shows that the constraints on the acceleration are active in the beginning of the maneuver, but are not violated.

Fig. 9. Velocity (*x*2) with setpoint management.

Fig. 10. Acceleration (*u*) with setpoint management.

The setpoint management variable *μ* is shown in Fig. 11. It can be seen that the management technique is applied up to time *t* = 10, which coincides with the change in the acceleration from negative to positive.

Fig. 11. Position setpoint management variable (*μ*).

#### **5. Conclusions**

In real applications of MPC controllers, noise, disturbances, model-plant mismatches and faults are commonly found. Therefore, infeasibility of the associated optimization problem can be a recurrent issue. This justifies the study of techniques capable of driving the system to a feasible region, since infeasibility may cause prediction errors, deployment of impracticable control sequences and instability of the control loop. Computational workload is also of great concern in real applications; thus the adopted techniques must be simple enough to be executed on a commercial off-the-shelf computer within the sample period and effective enough to make the problem feasible. In this chapter, a review of the literature regarding feasibility issues was presented and two of the more widely adopted approaches (constraint relaxation and setpoint management) were described. Simulation examples of some illustrative techniques were presented in order to clarify the advantages, drawbacks and difficulties in implementation of some techniques.

### **6. Acknowledgements**

The authors acknowledge the financial support of FAPESP (MSc scholarship 2009/12674-0) and CNPq (research fellowship).

#### **7. References**

Afonso, R. J. M. & Galvão, R. K. H. (2010a). Controle preditivo com garantia de estabilidade nominal aplicado a um helicóptero com três graus de liberdade empregando relaxamento de restrições de saída (Predictive control with nominal stability guarantee applied to a helicopter with three degrees of freedom employing output constraint relaxation - text in Portuguese), *Proc. XVIII Congresso Brasileiro de Automática*, pp. 1797 – 1804.

Afonso, R. J. M. & Galvão, R. K. H. (2010b). Predictive control of a helicopter model with tolerance to actuator faults, *Proc. Conf. Control and Fault-Tolerant Systems (SysTol), 2010*, pp. 744 – 751.

Almeida, F. A. & Leissling, D. (2010). Fault-tolerant model predictive control with flight-test results, *J. Guid. Control Dyn.* 33(2): 363 – 375.

Alvarez, T. & de Prada, C. (1997). Handling infeasibilities in predictive control, *Computers & Chemical Engineering* 21: S577 – S582.

Bemporad, A., Casavola, A. & Mosca, E. (1997). Nonlinear control of constrained linear systems via predictive reference management, *IEEE Trans. Automatic Control* 42(3): 340 – 349.

Bemporad, A. & Mosca, E. (1994). Constraint fulfilment in feedback control via predictive reference management, *Proc. 3rd IEEE Conf. Control Applications*, Glasgow, UK, pp. 1909 – 1914.

Bryson, A. E. & Ho, Y.-C. (1969). *Applied Optimal Control*, Blaisdell, Waltham, MA.

Chisci, L., Rossiter, J. A. & Zappa, G. (2001). Systems with persistent disturbances: predictive control with restricted constraints, *Automatica* 37(7): 1019–1028.

Franklin, G., Powell, J. & Emami-Naeini, A. (2005). *Feedback Control of Dynamic Systems*, 5th edn, Prentice Hall, Upper Saddle River, NJ.

Gilbert, E. G. & Kolmanovsky, I. (1995). Discrete-time reference governors for systems with state and control constraints and disturbance inputs, *Proc. 34th IEEE Conference on Decision and Control*.

Gilbert, E. G. & Tan, K. T. (1991). Linear systems with state and control constraints: the theory and application of maximal output admissible sets, *IEEE Trans. Automatic Control* 36(9): 1008–1020.

Kapasouris, P., Athans, M. & Stein, G. (1988). Design of feedback control systems for stable plants with saturating actuators, *Proc. 27th IEEE Conference on Decision and Control*.

Kouvaritakis, B., Rossiter, J. A. & Cannon, M. (1998). Linear quadratic feasible predictive control, *Automatica* 34(12): 1583–1592.

Limon, D., Alvarado, I., Alamo, T. & Camacho, E. (2008). MPC for tracking piecewise constant references for constrained linear systems, *Automatica* 44(9): 2382–2387.

Maciejowski, J. M. (2002). *Predictive Control with Constraints*, 1st edn, Prentice Hall, Harlow, England.

Montandon, A. G., Borges, R. M. & Henrique, H. M. (2008). Experimental application of a neural constrained model predictive controller based on reference system, *Latin American Applied Research* 38: 51 – 62.

Rawlings, J. & Muske, K. (1993). The stability of constrained receding horizon control, *IEEE Trans. Automatic Control* 38(10): 1512–1516.

Rodrigues, M. A. & Odloak, D. (2005). Robust MPC for systems with output feedback and input saturation, *Journal of Process Control* 15: 837 – 846.

Rossiter, J. A. (2003). *Model-based Predictive Control: a practical approach*, 1st edn, CRC Press, Boca Raton.

Scokaert, P. (1994). *Constrained Predictive Control*, PhD thesis, Univ. Oxford, UK.

Scokaert, P. & Rawlings, J. (1998). Constrained linear quadratic regulation, *IEEE Trans. Automatic Control* 43(8): 1163–1169.



Scokaert, P. & Rawlings, J. (1999). Feasibility issues in linear model predictive control, *AIChE Journal* 45(8): 1649 – 1659.

Vada, J., Slupphaug, O., Johansen, T. & Foss, B. (2001). Linear MPC with optimal prioritized infeasibility handling: application, computational issues and stability, *Automatica* 37(11): 1835 – 1843.

Zafiriou, E. & Chiou, H. (1993). Output constraint softening for SISO model predictive control, *Proc. American Control Conference*.

**Part 2**

**Recent Applications of MPC**

**4**

**Predictive Control Applied to Networked Control Systems**

Xunhe Yin1,2, Shunli Zhao1, Qingquan Cui1,3 and Hong Zhang4

*1School of Electric and Information Engineering, Beijing Jiaotong University, China*
*2School of Electrical and Information Engineering, University of Sydney, Sydney, Australia*
*3Yunnan Land and Resources Vocational College, Kunming, China*
*4Beijing Municipal Engineering Professional Design Institute Co.Ltd, Beijing, China*

**1. Introduction**

Research on networked control systems (NCSs) spans a broad and complex range of technologies, since NCSs involve computer networks, communication, control and other interdisciplinary fields. NCSs have become one of the hot topics in the international control community in recent years. The theoretical study of networked control systems lags far behind their application, so NCS theory currently has important academic value and economic benefits.

NCS performance depends not only on the control algorithms, but also on the network environment and the scheduling algorithms. The purpose of network scheduling is to avoid network conflicts and congestion, thereby reducing the network-induced delay, the packet loss rate and so on, which ensures a better network environment. If data cannot be properly scheduled over the network, the control algorithm alone cannot fundamentally improve the performance of the system; the transmission priorities and instants of the data must also be adjusted by the scheduling algorithms in order to make the whole system achieve the desired performance.

As research on networked control systems has progressed, it has gradually been recognized that scheduling performance must be taken into account when designing control algorithms, that is, scheduling and control must be considered jointly. The joint design of scheduling performance and control performance has attracted the attention of many researchers (Gaid et al., 2006a, 2006b; Arzen et al., 2000). Therefore, NCS resource scheduling algorithms, as well as scheduling and control co-design, are the main research directions and focus.

Generalized predictive control and the EDF (Earliest Deadline First) scheduling algorithm are adopted for the NCS co-design in this chapter. The co-design method
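The EDF policy named above can be made concrete with a short sketch. The message set, timing values and non-preemptive transmission model below are illustrative assumptions, not part of any particular NCS standard: each message carries a release time, a transmission duration and an absolute deadline, and whenever the link becomes free the scheduler transmits the pending message whose deadline is earliest.

```python
# Minimal sketch of non-preemptive EDF scheduling for network messages.
# A message is a (release, duration, deadline) tuple; all values illustrative.
import heapq

def edf_schedule(messages):
    """Return (order, missed): transmission order and deadline misses."""
    todo = sorted(messages, key=lambda m: m[0])   # by release time
    pending, order, missed = [], [], []
    t, i = 0.0, 0
    while i < len(todo) or pending:
        # admit every message released by the current time
        while i < len(todo) and todo[i][0] <= t:
            heapq.heappush(pending, (todo[i][2], todo[i]))  # keyed by deadline
            i += 1
        if not pending:                           # link idle: jump to next release
            t = todo[i][0]
            continue
        deadline, msg = heapq.heappop(pending)    # earliest deadline first
        t += msg[1]                               # transmit (non-preemptive)
        order.append(msg)
        if t > deadline:
            missed.append(msg)
    return order, missed
```

For example, with three messages released near time 0, the message with the tightest deadline is transmitted first even though a longer message was released no later, which is exactly the property the co-design exploits to keep control-critical traffic within its delay budget.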
