**Proof:**

Since $\mathcal{L}(K_*, P_*, S_*)$ is minimal in some neighborhood of $(K_*, P_*, S_*)$, it follows that $\frac{\partial \mathcal{L}}{\partial K}(K_*, P_*, S_*) = 0$, $\frac{\partial \mathcal{L}}{\partial P}(K_*, P_*, S_*) = 0$, and $\frac{\partial \mathcal{L}}{\partial S}(K_*, P_*, S_*) = Y^T(K_*, P_*)\,Y(K_*, P_*) = 0$.

*Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems*

*DOI: http://dx.doi.org/10.5772/intechopen.89319*


The condition $\frac{\partial \mathcal{L}}{\partial S}(K_*, P_*, S_*) = Y^T(K_*, P_*)\,Y(K_*, P_*) = 0$ is just

$$P_* - A_{c\ell}(K_*)^T P_* A_{c\ell}(K_*) = Q + C^T K_*^T R K_* C,$$

which, together with $P_* > 0$, $Q > 0$, and $R > 0$, implies that $A_{c\ell}(K_*) = A - BK_*C$ is stable. Now, since $\sigma(A_{c\ell}(K_*)) \subset \mathbb{D}$ (the open unit disc), it follows that $I_p \otimes I_p - A_{c\ell}(K_*)^T \otimes A_{c\ell}(K_*)^T$ is invertible, and therefore,

$$P_* = \operatorname{mat}\left(\left(I_p \otimes I_p - A_{c\ell}(K_*)^T \otimes A_{c\ell}(K_*)^T\right)^{-1} \cdot \operatorname{vec}\left(Q + C^T K_*^T R K_* C\right)\right).$$

Since $I_p \otimes I_p - A_{c\ell}(K_*) \otimes A_{c\ell}(K_*)$ is invertible, $\frac{\partial \mathcal{L}}{\partial P}(K_*, P_*, S_*) = 0$ implies that

$$S_* := \operatorname{mat}\left(\left(I_p \otimes I_p - A_{c\ell}(K_*) \otimes A_{c\ell}(K_*)\right)^{-1} \operatorname{vec}\left(x_0 x_0^T\right)\right).$$

Finally, $\frac{\partial \mathcal{L}}{\partial K}(K_*, P_*, S_*) = 0$ implies that $K_* C S_* C^T = \left(R + B^T P_* B\right)^{-1} B^T P_* A S_* C^T$, which in view of Lemma 2.1 implies $\left(R + B^T P_* B\right)^{-1} B^T P_* A S_* C^T - L\,C S_* C^T = 0$ and

$$K_* = \left(R + B^T P_* B\right)^{-1} B^T P_* A S_* C^T \left(C S_* C^T\right)^{+} + Z_* \cdot R_{C S_* C^T},$$

where $Z_*$ is some $q \times r$ matrix. ■

Note that the equations are tightly coupled, in the sense that $P_*$ and $S_*$ need $K_*$, while $K_*$ needs $P_*$ and $S_*$. Note also the cubic dependencies (which can be made quadratic by introducing new variables). These make the related QMEs non-convex and, therefore, hard to compute.
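The mat/vec formulas above translate directly into Kronecker-product code. The following is an illustrative sketch (the helper name `stein_solve` is ours, not the chapter's); it recovers $P_*$ for the optimal gain of Example 4.1 below and confirms the quoted value $\sigma_{\max}(K_*) = 5.9551$:

```python
import numpy as np

def stein_solve(Acl, W):
    """Solve P - Acl^T P Acl = W via the identity used in the proof:
    vec(P) = (I - Acl^T (x) Acl^T)^{-1} vec(W)  (column-major vec)."""
    p = Acl.shape[0]
    M = np.eye(p * p) - np.kron(Acl.T, Acl.T)
    vecP = np.linalg.solve(M, W.flatten(order="F"))
    return vecP.reshape((p, p), order="F")

# Data of Example 4.1 below, with its quoted optimal gain K_* (here C = I).
A = np.array([[2.0, 1.0], [0.0, -0.5]])
B = np.array([[1.0], [1.0]])
K = np.array([[1.09473459, 0.36138828]])
Acl = A - B @ K
W = np.eye(2) + K.T @ K          # W = Q + C^T K^T R K C with Q = I, R = 1
P = stein_solve(Acl, W)

assert np.allclose(P - Acl.T @ P @ Acl, W)   # the Stein equation holds
# max eigenvalue of P matches sigma_max(K_*) = 5.9551 quoted in Example 4.1
assert abs(np.linalg.eigvalsh(P).max() - 5.9551) < 1e-2
```

Note that the Kronecker matrix is $p^2 \times p^2$, so this direct solve is meant for small illustrative systems only.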

Remark 4.1. When $x_0$ is unknown, it is customary to assume that $x_0$ is uniformly distributed on the unit sphere, which implies that $E\left[x_0 x_0^T\right] = I_p$, where $E[\cdot]$ is the expectation operator. Thus, changing the problem to that of minimizing $E\left[J(x_0, P)\right]$ amounts to replacing $S_*$ with

$$E[S_*] = \operatorname{mat}\left(\left(I_p \otimes I_p - A_{c\ell}(K_*) \otimes A_{c\ell}(K_*)\right)^{-1} \operatorname{vec}\left(I_p\right)\right) > 0.$$

Therefore, no change in Algorithm 2 is needed.

Remark 4.2. The convergence of Algorithm 2 to a local minimum can be proved similarly to the proof appearing in [15], under the assumptions that $\mathcal{S}_\alpha^{q \times r}$ is nonempty and that $\left(Q^{\frac{1}{2}}, A\right)$ is detectable (here, we do not need this condition, because of the assumption that $Q > 0$). The convergence can actually be proved for the more general problem that adds $\|K\|_F^2 = \operatorname{trace}\left(K^T K\right)$ to the LQR functional, thus minimizing also the Frobenius norm of $K$. In this context, note that adding $\|K\|_2^2 = \sigma_{\max}\left(K^T K\right)$ to the LQR functional instead would break the argument and call for a more general proof, because the proof appearing in [15] demands a $\mathbf{C}^1$-smooth function of $K$, while $\|K\|_2^2 = \sigma_{\max}\left(K^T K\right)$ is continuous but not Lipschitz continuous. The RS algorithm can use any continuous function of $K$ and


can deal also with sparse SOFs for LQR and with regional pole-placement SOFs for LQR.
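The two norms contrasted in Remark 4.2 are easy to confuse; a minimal NumPy check of the identities used there (the gain below is an arbitrary illustrative value):

```python
import numpy as np

K = np.array([[1.0, -2.0], [0.5, 3.0]])  # arbitrary illustrative gain

# Frobenius norm: ||K||_F^2 = trace(K^T K) -- a smooth function of K
assert np.isclose(np.linalg.norm(K, "fro") ** 2, np.trace(K.T @ K))

# Spectral norm: ||K||_2^2 = sigma_max(K^T K), the largest eigenvalue of K^T K
assert np.isclose(np.linalg.norm(K, 2) ** 2, np.linalg.eigvalsh(K.T @ K).max())
```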

Example 4.1. In the following simple example, we illustrate the notions appearing in the definition of the RS algorithm, and we demonstrate the operation of the RS algorithm. Consider the unstable system

$$\begin{cases} x_{k+1} = \begin{bmatrix} 2 & 1\\ 0 & -\frac{1}{2} \end{bmatrix} x_k + \begin{bmatrix} 1\\ 1 \end{bmatrix} u_k, \quad k = 0, 1, \dots \\ y_k = x_k, \end{cases}$$

where we look for a SOF $K$ stabilizing the system while reducing the LQR functional (2) with $Q = I$, $R = 1$. Let $K = \left[\,k_1 \;\; k_2\,\right]$; then

$$A_{c\ell}(K) = A - BK = \begin{bmatrix} 2 - k_1 & 1 - k_2 \\ -k_1 & -\frac{1}{2} - k_2 \end{bmatrix},$$

with characteristic polynomial $z^2 + z\left(k_1 + k_2 - \frac{3}{2}\right) + \frac{3}{2}k_1 - 2k_2 - 1$. Applying the Y. Bistritz stability criterion (see [25]), we have

$$\upsilon = \operatorname{Var}\left\{\frac{5}{2}k_1 - k_2 - \frac{3}{2},\; -\frac{3}{2}k_1 + 2k_2 + 2,\; \frac{1}{2}k_1 - 3k_2 + \frac{3}{2}\right\},$$

where $\upsilon$ is the number of sign variations in the set. According to the Bistritz criterion, the system is stable if and only if $\upsilon = 0$. We conclude that $\mathcal{S}$ is the set of all $K$ such that $\frac{5}{2}k_1 - k_2 - \frac{3}{2} > 0$, $-\frac{3}{2}k_1 + 2k_2 + 2 > 0$, $\frac{1}{2}k_1 - 3k_2 + \frac{3}{2} > 0$, or $\frac{5}{2}k_1 - k_2 - \frac{3}{2} < 0$, $-\frac{3}{2}k_1 + 2k_2 + 2 < 0$, $\frac{1}{2}k_1 - 3k_2 + \frac{3}{2} < 0$, where the last branch is empty (which could have made the set non-convex). The set $\mathcal{S}$ appears in **Figure 1** as the blue region, where the golden star is the analytic global optimal solution $K_* = \left[\,1.09473459 \;\; 0.36138828\,\right]$ (computed by the related discrete algebraic Riccati equation).
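Because $C = I$ in this example, the SOF problem coincides with full state feedback, so the quoted $K_*$ can be reproduced from the discrete algebraic Riccati equation. A sketch using SciPy (the chapter only states the value; the computation below is ours):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Example 4.1 data
A = np.array([[2.0, 1.0], [0.0, -0.5]])
B = np.array([[1.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)
K_star = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # (R + B^T P B)^{-1} B^T P A

print(K_star)  # approximately [[1.09473459, 0.36138828]], the golden star in Figure 1
assert max(abs(np.linalg.eigvals(A - B @ K_star))) < 1.0  # K_star lies in S
```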

**Algorithm 2** The MC algorithm for LQR via SOF for discrete-time systems.

**Require:** *An algorithm for deciding α-stability, an algorithm for computing $\sigma_{\max}(K)$, and algorithms for general linear algebra operations.*

**Input:** $0 < \epsilon \le \frac{1}{2}$, $0 < \alpha < 1$, integers $m, s > 0$, controllable pairs $(A, B)$ and $\left(A^T, C^T\right)$, matrices $Q > 0$, $R > 0$, and $K^{(0)} \in \operatorname{int}\left(\mathcal{S}_\alpha\right)$.

**Output:** $K \in \mathcal{S}_\alpha$ that locally minimizes $\sigma_{\max}(K)$.

1. $j \leftarrow 0$; $A_0 \leftarrow A - BK_0C$
2. $P_0 \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_0^T \otimes A_0^T\right)^{-1} \cdot \operatorname{vec}\left(Q + C^T K_0^T R K_0 C\right)\right)$
3. $S_0 \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A_0 \otimes A_0\right)^{-1} \cdot \operatorname{vec}\left(I_p\right)\right)$; $\sigma_{\max}(K_0) \leftarrow \max\left(\sigma(P_0)\right)$
4. $\Delta K_0 \leftarrow \left(R + B^T P_0 B\right)^{-1} B^T P_0 A S_0 C^T \left(C S_0 C^T\right)^{+} - K_0$
5. $flag \leftarrow 0$
6. **for** $k = 0$ **to** $s$ **do**
7. $\quad t \leftarrow k/s$
8. $\quad K(t) \leftarrow (1-t)K_0 + t\,\Delta K_0$
9. $\quad$ **if** $K(t) \in \mathcal{S}_\alpha$ **then**
10. $\quad\quad A(t) \leftarrow A - BK(t)C$

*Control Theory in Engineering*


11. $\quad\quad P(t) \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A(t)^T \otimes A(t)^T\right)^{-1} \cdot \operatorname{vec}\left(Q + C^T K(t)^T R K(t) C\right)\right)$
12. $\quad\quad S(t) \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A(t) \otimes A(t)\right)^{-1} \cdot \operatorname{vec}\left(I_p\right)\right)$; $\sigma_{\max}(K(t)) \leftarrow \max\left(\sigma(P(t))\right)$
13. $\quad\quad$ **if** $\sigma_{\max}(K(t)) < \sigma_{\max}(K_0)$ **then**
14. $\quad\quad\quad K_1 \leftarrow K(t)$; $A_1 \leftarrow A - BK_1C$; $P_1 \leftarrow P(t)$; $S_1 \leftarrow S(t)$; $\sigma_{\max}(K_1) \leftarrow \sigma_{\max}(K(t))$
15. $\quad\quad\quad flag \leftarrow 1$
16. $\quad\quad$ **end if**
17. $\quad$ **end if**
18. **end for**
19. **if** $flag == 1$ **then**
20. $\quad$ **while** $\left|\sigma_{\max}\left(K_{j+1}\right) - \sigma_{\max}\left(K_j\right)\right| \ge \epsilon$ and $j < m$ **do**
21. $\quad\quad \Delta K_j \leftarrow \left(R + B^T P_j B\right)^{-1} B^T P_j A S_j C^T \left(C S_j C^T\right)^{+} - K_j$
22. $\quad\quad$ **for** $k = 0$ **to** $s$ **do**
23. $\quad\quad\quad t \leftarrow k/s$
24. $\quad\quad\quad K(t) \leftarrow (1-t)K_j + t\,\Delta K_j$
25. $\quad\quad\quad$ **if** $K(t) \in \mathcal{S}_\alpha$ **then**
26. $\quad\quad\quad\quad A(t) \leftarrow A - BK(t)C$
27. $\quad\quad\quad\quad P(t) \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A(t)^T \otimes A(t)^T\right)^{-1} \cdot \operatorname{vec}\left(Q + C^T K(t)^T R K(t) C\right)\right)$
28. $\quad\quad\quad\quad S(t) \leftarrow \operatorname{mat}\left(\left(I_p \otimes I_p - A(t) \otimes A(t)\right)^{-1} \cdot \operatorname{vec}\left(I_p\right)\right)$; $\sigma_{\max}(K(t)) \leftarrow \max\left(\sigma(P(t))\right)$
29. $\quad\quad\quad\quad$ **if** $\sigma_{\max}(K(t)) < \sigma_{\max}\left(K_j\right)$ **then**
30. $\quad\quad\quad\quad\quad K_{j+1} \leftarrow K(t)$; $A_{j+1} \leftarrow A - BK_{j+1}C$; $P_{j+1} \leftarrow P(t)$; $S_{j+1} \leftarrow S(t)$; $\sigma_{\max}\left(K_{j+1}\right) \leftarrow \sigma_{\max}(K(t))$
31. $\quad\quad\quad\quad$ **end if**
32. $\quad\quad\quad$ **end if**
33. $\quad\quad$ **end for**
34. $\quad\quad j \leftarrow j + 1$
35. $\quad$ **end while**
36. **end if**
37. **return** $K^{(best)} \leftarrow K_j$, $P^{(best)} \leftarrow P_j$, $\sigma_{\max}^{(best)} \leftarrow \sigma_{\max}\left(K_j\right)$
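For concreteness, Algorithm 2 can be sketched in Python. This is an illustrative reimplementation, not the authors' code: the membership test $K \in \mathcal{S}_\alpha$ is realized as a spectral-radius margin check, and since the listing's $\Delta K_j$ convention is ambiguous (it subtracts $K_j$ yet is combined as $(1-t)K_j + t\,\Delta K_j$), the sketch line-searches from $K_j$ toward the Riccati-like target itself:

```python
import numpy as np

def stein(M, W):
    # X solving X - M^T X M = W, via the Kronecker/vec identity of lines 2-3
    p = M.shape[0]
    v = np.linalg.solve(np.eye(p * p) - np.kron(M.T, M.T), W.flatten(order="F"))
    return v.reshape((p, p), order="F")

def mc_sof(A, B, C, Q, R, K0, alpha=1e-3, eps=1e-9, m=50, s=10):
    p = A.shape[0]

    def analyze(K):
        """Return (P, S, sigma_max(K)) if K is alpha-stabilizing, else None."""
        Acl = A - B @ K @ C
        if max(abs(np.linalg.eigvals(Acl))) >= 1.0 - alpha:   # K in S_alpha test
            return None
        P = stein(Acl, Q + C.T @ K.T @ R @ K @ C)             # lines 2 / 11 / 27
        S = stein(Acl.T, np.eye(p))                           # lines 3 / 12 / 28
        return P, S, np.linalg.eigvalsh(P).max()

    res = analyze(K0)
    assert res is not None, "K0 must lie in int(S_alpha)"
    K, (P, S, sig) = K0, res
    for _ in range(m):                                        # outer loop, lines 20-35
        target = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A @ S @ C.T) \
                 @ np.linalg.pinv(C @ S @ C.T)                # line 21 (as a target)
        best = None
        for k in range(s + 1):                                # ray sampling, lines 22-33
            t = k / s
            Kt = (1 - t) * K + t * target
            out = analyze(Kt)
            if out is not None and out[2] < sig and (best is None or out[2] < best[1][2]):
                best = (Kt, out)
        if best is None:
            break                                             # no improving feasible sample
        K, (P, S, new_sig) = best
        converged = abs(sig - new_sig) < eps
        sig = new_sig
        if converged:
            break
    return K, P, sig

# Demo: Example 4.1 (C = I), starting near the chapter's quoted K^(0) values
A = np.array([[2.0, 1.0], [0.0, -0.5]])
B = np.array([[1.0], [1.0]])
K_best, P_best, sig_best = mc_sof(A, B, np.eye(2), np.eye(2), np.array([[1.0]]),
                                  K0=np.array([[0.6, -0.06]]))
```

Under this descent reading the iteration approaches $\sigma_{\max}(K_*) \approx 5.9551$; whether that matches the reported MC runs depends on how $\Delta K$ in the listing is interpreted.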


In **Figure 1**, we can see how the RS algorithm works: we fix $\alpha = 10^{-3}$, $\epsilon = 10^{-16}$, $r_\infty = 2$, $h = 2$, and we set $m = 1$, $n = 5$, $s = 10$ for a single iteration, where the single cone is sampled along 5 rays and each ray is sampled 10 times. The sampled points are circled, where red circles indicate infeasible or non-improving points and black circles indicate improving points. The green star is the initial point $K^{(0)}$ found by the Ray-Shooting algorithm for minimal-norm SOF. The bold black circle is the boundary of the closed ball $B\left(K^{(0)}, h\right)$. We choose $U^{(0)}$ randomly to define the search direction, and we set $L^{(0)} = K^{(0)} + h \cdot U^{(0)}$ to be the point where the direction meets the boundary of the circle. $L$ is the tangent line at $L^{(0)}$ to the circle, and $R_\infty(\epsilon)$ is the $2r_\infty$-width segment on the line, inflated by $\epsilon$. The search cone $D^{(0)} = \operatorname{chull}\left(K^{(0)}, R_\infty(\epsilon)\right)$ is the related black triangle. Here $\mathcal{S}_\alpha^{(0)} = \mathcal{S}_\alpha \cap D^{(0)}$ is just the portion of the blue region inside the triangle, and we can see that the assumption that $K_* \in D^{(0)}$ is in force. For the current problem $\left\lceil e\sqrt{2\pi q r}\right\rceil = 10$, and therefore, by making 10 iterations, $K_*$ will be inside some triangle almost surely.


The algorithm chooses $F$ in the base of the triangle and defines $K(t)$ to be the ray from $K_0$ to $F$. The ray is sampled at 10 equally spaced points, and the best feasible point is recorded.
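The ray-sampling step just described can be sketched generically. In this illustrative code, `objective` and `feasible` stand in for $\sigma_{\max}(K)$ and the $\mathcal{S}_\alpha$ membership test; the demo uses the closed-loop spectral radius as a cheap surrogate objective on the Example 4.1 plant:

```python
import numpy as np

def best_on_ray(K0, F, objective, feasible, s=10):
    """Sample K(t) = (1 - t) K0 + t F at s + 1 equally spaced points and
    record the best feasible point, as in the RS inner loop."""
    best_K, best_val = None, np.inf
    for k in range(s + 1):
        t = k / s
        Kt = (1 - t) * K0 + t * F
        if feasible(Kt):
            val = objective(Kt)
            if val < best_val:
                best_K, best_val = Kt, val
    return best_K, best_val

# Demo: shoot from a stabilizing K0 toward a point F near K_* of Example 4.1.
A = np.array([[2.0, 1.0], [0.0, -0.5]])
B = np.array([[1.0], [1.0]])
rho = lambda K: max(abs(np.linalg.eigvals(A - B @ K)))
K_best, val = best_on_ray(np.array([[0.6, -0.06]]), np.array([[1.09, 0.36]]),
                          objective=rho, feasible=lambda K: rho(K) < 1.0)
assert K_best is not None and val < rho(np.array([[0.6, -0.06]]))
```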

In **Figure 2**, we can see that 5 iterations suffice to include $K_*$ in some triangle and to find improving points very close to $K_*$. In **Figure 3**, we can see that when we allow 20 iterations, the center $K^{(0)}$ is switched after 10 iterations to the best point found so far (see lines 24–26 in Algorithm 1). This is done in order to raise the probability of hitting $K_*$ or its $\epsilon$-neighborhood, and as we can see, the final best point (green star) is very close to $K_*$ (**Figure 4**).

The results of the algorithm for 1, 5, and 20 iterations are the following. Note that $\sigma_{\max}(K_*) = 5.9551$, and note the huge variations the function $\sigma_{\max}(K)$ exhibits. For $m = 1$ we had

$$\begin{aligned} K^{(0)} &= [\,0.58739333 \;\; -0.15823016\,], & \sigma^{(0)}_{\max} &= 25.7307, \\ \mathbf{RS}: K^{(best)} &= [\,1.17786349 \;\; 0.35034398\,], & \sigma^{(best)}_{\max} &= 6.1391, \\ \mathbf{MC}: K^{(best)} &= [\,0.58739333 \;\; -0.15823016\,], & \sigma^{(best)}_{\max} &= 25.7307, \\ \mathbf{RS}+\mathbf{MC}: K^{(best)} &= [\,1.05244278 \;\; 0.31681948\,], & \sigma^{(best)}_{\max} &= 6.0001. \end{aligned}$$

**Figure 1.** *The stability region $\mathcal{S}$.*


**Figure 2.** *Single iteration of the RS algorithm.*


**Figure 3.** *Five iterations of the RS algorithm.*

**Figure 4.** *Twenty iterations of the RS algorithm.*


For *m* = 5 we had

$$\begin{aligned}
K^{(0)} &= [0.60478870 \;\; -0.06023828], & \sigma^{(0)}_{\max} &= 36.4583, \\
\mathbf{RS}: K^{(\text{best})} &= [1.04166520 \;\; 0.40826562], & \sigma^{(\text{best})}_{\max} &= 6.1655, \\
\mathbf{MC}: K^{(\text{best})} &= [0.60478870 \;\; -0.06023828], & \sigma^{(\text{best})}_{\max} &= 36.4583, \\
\mathbf{RS} + \mathbf{MC}: K^{(\text{best})} &= [1.04166520 \;\; 0.40826562], & \sigma^{(\text{best})}_{\max} &= 6.16557843.
\end{aligned}$$

For *m* = 20 we had

*Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems DOI: http://dx.doi.org/10.5772/intechopen.89319*

$$\begin{aligned}
K^{(0)} &= [0.51029365 \;\; -0.22521376], & \sigma^{(0)}_{\max} &= 3198.8196, \\
\mathbf{RS}: K^{(\text{best})} &= [1.11453066 \;\; 0.33955607], & \sigma^{(\text{best})}_{\max} &= 5.9893, \\
\mathbf{MC}: K^{(\text{best})} &= [0.51029365 \;\; -0.22521376], & \sigma^{(\text{best})}_{\max} &= 3198.8196, \\
\mathbf{RS} + \mathbf{MC}: K^{(\text{best})} &= [1.11453066 \;\; 0.33955607], & \sigma^{(\text{best})}_{\max} &= 5.9893.
\end{aligned}$$
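Values like the *σ* entries above can be reproduced for any candidate SOF once the closed loop is formed. The following is a minimal sketch, under the assumption that *σ*<sub>max</sub>(*K*) denotes the largest eigenvalue of the solution *P* of the closed-loop discrete Lyapunov equation built from vec(*Q* + *C*<sup>T</sup>*K*<sup>T</sup>*RKC*); the scalar system at the bottom is a placeholder sanity check, not the data of Example 4.1.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def sigma_max(A, B, C, K, Q, R):
    """Cost surrogate for the SOF K: largest eigenvalue of P, where
    A_cl^T P A_cl - P + Q + C^T K^T R K C = 0 (requires a Schur-stable loop)."""
    A_cl = A + B @ K @ C
    if np.max(np.abs(np.linalg.eigvals(A_cl))) >= 1.0:
        return np.inf  # unstable closed loop: infinite cost
    Q_cl = Q + C.T @ K.T @ R @ K @ C
    # scipy solves M X M^H - X + Q = 0; take M = A_cl^T to match our equation.
    P = solve_discrete_lyapunov(A_cl.T, Q_cl)
    return float(np.max(np.linalg.eigvalsh(P)))

# Placeholder scalar check: A = 0.5, K = 0 gives 0.25*P - P + 1 = 0, so P = 4/3.
A = np.array([[0.5]]); B = np.array([[1.0]]); C = np.array([[1.0]])
K = np.array([[0.0]]); Q = np.array([[1.0]]); R = np.array([[1.0]])
print(sigma_max(A, B, C, K, Q, R))  # -> 1.333...
```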

In **Figure 5**, the initial condition response of the open-loop system is given. One can see the unstable mode related to the unstable eigenvalue 2.

**Figure 5.** *The initial condition response of the open-loop system.*

**Figure 6.**

*The initial condition response of the closed-loop system with the SOF computed by the RS algorithm (blue) compared with the global optimal response (red).*

Note that in this case, the *MC* algorithm makes no improvement, while the RS and RS + MC values are very close to the global optimal value, with a slightly better value for RS + MC than for RS.


*Control Theory in Engineering*


**5. Experiments**

In the following experiments, we applied Algorithms 1 and 2 to systems taken from the libraries [26–28]. The systems given in these libraries are real-life continuous-time systems. In order to get related discrete-time systems, we sampled the systems using the Tustin method with sampling rate *T<sub>s</sub>* = 0.01 [sec]. We took only the systems for which the RS algorithm succeeded in finding SOFs for the continuous-time systems (see [10], Table 8, p. 231). In order to initialize the MC algorithm, we also used the RS algorithm to find a starting *α*-stabilizing SOF. In all the experiments, we used *m* = 2⌈*e*√(2*πqr*)⌉, *n* = 100, *s* = 100, *h* = 100, *r*<sub>∞</sub> = 100, ϵ = 10<sup>−16</sup>, for the RS algorithm; and *m* = 200⌈*e*√(2*πqr*)⌉, *s* = 100, for the MC algorithm, in order to get the same number of total iterations and the same number *s* = 100 of iterations for the local search. We took *Q* = *I<sub>p</sub>*, *R* = *I<sub>q</sub>* in all the cases. The stability margin column of **Table 1** relates to the 0 < *α* < 1 for which the absolute value of any eigenvalue of the closed loop is less than 1 − *α*. The values of *α* in **Table 1** relate to the largest 0 < *α* < 1 for which the RS algorithm succeeded in finding a starting SOF *K*<sup>(0)</sup>. As we saw above, it is worth searching for a starting point *K*<sup>(0)</sup> that maximizes 0 < *α* < 1. This can be achieved efficiently by running a binary search on 0 < *α* < 1 and using the RS algorithm as an oracle. Note that the RS CPU time appearing in the fourth column of **Table 1** relates to running the RS algorithm for a known optimal value of 0 < *α* < 1. The RS algorithm is sufficiently fast also for this purpose, but other algorithms such as HIFOO (see [24]) and

| System | Size (*p*, *q*, *r*) | Stab. Mgn. | RS CPU time [sec] | *σ*<sup>(0)</sup><sub>max</sub> for (*A*, *B*, *C*) | *σ*<sub>max</sub>(*F*<sup>∗</sup>) for (*A*, *B*) |
|---|---|---|---|---|---|
| AC1 | (5, 3, 3) | 0.01 | 2.6226 | 1.0701 × 10<sup>4</sup> | 1.3073 × 10<sup>3</sup> |
| AC5 | (4, 2, 2) | 0.001 | 1.5468 | 1.5888 × 10<sup>9</sup> | 8.4264 × 10<sup>7</sup> |
| AC6 | (7, 2, 4) | 0.001 | 0.7094 | 3.1767 × 10<sup>3</sup> | 5.9783 × 10<sup>2</sup> |
| AC11 | (5, 2, 4) | 0.01 | 1.0575 | 1.2968 × 10<sup>4</sup> | 5.8777 × 10<sup>2</sup> |
| HE1 | (4, 2, 1) | 0.001 | 0.0872 | 1.5040 × 10<sup>3</sup> | 3.0013 × 10<sup>2</sup> |
| HE3 | (8, 4, 6) | 0.001 | 2.6845 | 5.4064 × 10<sup>6</sup> | 6.1185 × 10<sup>4</sup> |
| HE4 | (8, 4, 6) | 0.001 | 2.5633 | 4.1660 × 10<sup>6</sup> | 2.2992 × 10<sup>4</sup> |
| ROC1 | (9, 2, 2) | 10<sup>−5</sup> | 0.5279 | 1.5906 × 10<sup>7</sup> | 1.1207 × 10<sup>5</sup> |
| ROC4 | (9, 2, 2) | 10<sup>−5</sup> | 0.4677 | 1.2273 × 10<sup>6</sup> | 8.5460 × 10<sup>4</sup> |
| DIS4 | (8, 4, 6) | 0.01 | 2.5074 | 4.5133 × 10<sup>3</sup> | 1.7556 × 10<sup>2</sup> |
| DIS5 | (4, 2, 2) | 0.001 | 1.2187 | 2.8686 × 10<sup>8</sup> | 9.0756 × 10<sup>6</sup> |
| TF1 | (7, 2, 4) | 10<sup>−4</sup> | 0.8011 | 7.9884 × 10<sup>5</sup> | 5.8134 × 10<sup>3</sup> |
| NN5 | (7, 1, 2) | 10<sup>−4</sup> | 0.4138 | 5.4066 × 10<sup>6</sup> | 2.8789 × 10<sup>5</sup> |
| NN13 | (6, 2, 2) | 0.01 | 0.4876 | 7.8402 × 10<sup>2</sup> | 63.5366 |
| NN16 | (8, 4, 4) | 10<sup>−4</sup> | 3.5530 | 1.9688 × 10<sup>3</sup> | 2.3327 × 10<sup>2</sup> |
| NN17 | (3, 2, 1) | 0.001 | 0.0925 | 3.2733 × 10<sup>4</sup> | 3.1358 × 10<sup>2</sup> |

**Table 1.**
*General information of the systems and initial values.*
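The binary search on 0 < *α* < 1 with the RS algorithm as an oracle can be sketched as follows. The oracle below is a stand-in (an assumed feasibility threshold of 0.37), not an actual RS run; it only illustrates the bisection, which relies on the monotonicity assumption that success for some *α* implies success for every smaller *α*.

```python
def largest_alpha(oracle, iters=50):
    """Bisection for the largest alpha in (0, 1) accepted by the oracle.
    Assumes monotonicity: if an alpha-stabilizing SOF is found, the oracle
    also succeeds for every smaller alpha."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if oracle(mid):   # e.g. "RS found an alpha-stabilizing SOF K(0)"
            lo = mid      # feasible: try a larger stability margin
        else:
            hi = mid      # infeasible: shrink the margin
    return lo

# Stand-in oracle: pretend RS succeeds exactly for alpha <= 0.37.
alpha_feasible = lambda a: a <= 0.37
print(largest_alpha(alpha_feasible))  # -> ~0.37
```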


**Figure 7.**

*The initial condition response of the closed-loop system with the SOF computed by the MC algorithm (blue) compared with the global optimal response (red).*

**Figure 8.**

*The initial condition response of the closed-loop system with the SOF computed by the RS + MC algorithm (blue) compared with the global optimal response (red).*

In **Figures 6–8**, the initial condition responses of the closed-loop systems with the SOFs for *m* = 20, with *x*<sub>0</sub> = [3 1]<sup>T</sup> and sampling time *T<sub>s</sub>* = 0.01, are given. One can see that the responses of the closed-loop systems with the SOFs computed by RS and RS + MC are very close to the global optimal response, while the response of the closed-loop system with the SOF computed by the MC algorithm (actually with the initial SOF), although stable, is unacceptable.
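Responses like those in Figures 5–8 come from iterating the closed loop *x*<sub>*k*+1</sub> = (*A* + *BKC*)*x*<sub>*k*</sub>. A minimal sketch follows; the matrices `A`, `B`, `C`, and `K_sof` are illustrative placeholders (an unstable open-loop eigenvalue 2, as in the example, but not the chapter's actual Example 4.1 data).

```python
import numpy as np

def ic_response(A, B, C, K, x0, steps):
    """Initial condition response of x_{k+1} = (A + B K C) x_k."""
    A_cl = A + B @ K @ C
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(A_cl @ xs[-1])
    return np.array(xs)

# Placeholder system with an unstable open-loop eigenvalue 2 (illustrative only).
A = np.array([[2.0, 1.0], [0.0, 0.5]])
B = np.array([[1.0], [0.0]])
C = np.eye(2)
K_sof = np.array([[-1.8, -1.0]])  # A + B K C = diag(0.2, 0.5): Schur stable
x0 = np.array([3.0, 1.0])

open_loop = ic_response(A, B, C, np.zeros((1, 2)), x0, 50)
closed_loop = ic_response(A, B, C, K_sof, x0, 50)
print(np.linalg.norm(open_loop[-1]) > 1e10)     # True: unstable mode grows
print(np.linalg.norm(closed_loop[-1]) < 1e-10)  # True: SOF stabilizes the loop
```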

