**3. The randomized Ray-Shooting Method-based algorithm**

The Ray-Shooting Method works as follows, for general function minimization: let $f(x) \ge 0$ be a continuous function defined over some compact set $X \subset \mathbb{R}^n$. Let $\epsilon > 0$ be given and assume that we want to compute $x^* \in X$ such that $y^* := f(x^*) = \min_{x \in X} f(x)$ up to $\epsilon$, i.e., to find $(x, y)$ in the set $S(\epsilon) = \{(x, y) \mid x \in X,\; f(x^*) \le y = f(x) \le f(x^*) + \epsilon\}$. Let $x_0 \in X$ be given, let $y_0 := f(x_0)$, and let $S^{(0)} = \{(x, y) \mid x \in X,\; f(x) \le y \le y_0\}$ denote the search space, which is a subset of the epigraph of $f$. Let $D^{(0)} = \{(x, y) \mid x \in X,\; 0 \le y \le y_0\}$ denote the cylinder enclosed between $X$ and the level $y_0$. Let $L^{(0)} = \{(x, y) \mid x \in X,\, y = 0, \text{ or } x \in \partial X,\, 0 \le y \le y_0\}$. Let $z_0 := (x_0, y_0)$ and note that $z_0 \in S^{(0)}$. Then, we choose $w_0$ in $L^{(0)}$ randomly, according to some distribution, and we define the ray as $z(t) := (1 - t) z_0 + t w_0$, $0 \le t \le 1$.

We scan the ray and choose the largest $0 \le t_0 \le 1$ such that $(1 - t_0) z_0 + t_0 w_0 \in S^{(0)}$ (in practice, we scan the ray from $t = 1$ at equally spaced points and take the first $t$ for which this happens). We define $z_1 := (1 - t_0) z_0 + t_0 w_0$ and update the sets $S^{(0)}$, $D^{(0)}$, and $L^{(0)}$ by replacing $y_0$ with $y_1$, where $(x_1, y_1) = z_1$. Let $S^{(1)}$, $D^{(1)}$, and $L^{(1)}$ denote the updated sets. We continue the process similarly from $z_1 \in S^{(1)}$, obtaining a sequence $z_n \in S^{(n)}$, $n = 0, 1, \dots$. Note that $S(\epsilon) \subset S^{(n+1)} \subset S^{(n)}$ for any $n = 0, 1, \dots$, unless $z_n \in S(\epsilon)$ for some $n$ (in which case the process stops). One can show that the sequence $\{z_n\}_{n=0}^{\infty}$ converges (in probability) to a point in $S(\epsilon)$. Note that rays shot from a local-minimum point have positive probability of hitting $S(\epsilon)$ (under the mild assumption below), because any global minimum is visible from any local minimum. Moreover, for a given level of certainty, we hit $S(\epsilon)$ in a finite number of iterations (see Remark 3.1 below). Practically, we may stop the algorithm if no improvement is detected within a window of 20% of the allowed number of iterations. The function need not be smooth or even continuous: it only needs to be well defined and measurable over the compact domain $X$, and $S(\epsilon)$ should have non-negligible measure (i.e., some positive volume).

Obviously, global minimum points belong to the boundary of the search space $S^{(0)}$; in fact, such points are exactly where the distance between the compact sets $X \times \{0\}$ and $S^{(0)}$ in $\mathbb{R}^{n+1}$ is attained. This is essential for the efficiency of the Ray-Shooting Method, even though we raised the search-space dimension from $n$ to $n + 1$.
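The generic scheme above can be sketched in code. The following is a minimal illustration for a function over a box, assuming the simplest variant in which $w_0$ is drawn uniformly from the bottom face $y = 0$ of the cylinder; the names `ray_shooting_min`, `iters`, and `s` are ours, not the chapter's.

```python
import math
import random

def ray_shooting_min(f, lo, hi, x0, iters=2000, s=50, seed=0):
    """Minimize a continuous f >= 0 over the box [lo, hi]^n by ray shooting.

    Illustrative sketch: w0 is drawn only from the bottom face y = 0 of the
    cylinder D, and each ray is scanned from t = 1 back to t = 0 at s + 1
    equally spaced points, as in the text.
    """
    rng = random.Random(seed)
    n = len(x0)
    x, y = list(x0), f(x0)                    # current z = (x, y); invariant f(x) <= y
    for _ in range(iters):
        w = [rng.uniform(lo, hi) for _ in range(n)]   # w0 on the face y = 0
        for k in range(s, -1, -1):            # scan from the head (t = 1)
            t = k / s
            xt = [(1 - t) * xi + t * wi for xi, wi in zip(x, w)]
            yt = (1 - t) * y                  # height of z(t); w0 has height 0
            if f(xt) <= yt:                   # first point of the ray inside S
                x, y = xt, yt                 # shrink the search space: y0 <- y1
                break
    return x, y

# Toy usage: a non-convex function on [-2, 2]^2 with global minimum 0 at the origin.
f = lambda v: v[0] ** 2 + v[1] ** 2 + 0.3 * (1.0 - math.cos(5.0 * v[0]))
x_best, y_best = ray_shooting_min(f, -2.0, 2.0, [1.5, -1.2])
```

Note that $y$ is nonincreasing by construction, and the scan always terminates at $t = 0$ with the current point, so no iteration can make the estimate worse.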

In order to apply the Ray-Shooting Method to the LQR via SOF problem, we need the following definitions: assume that $K^{(0)} \in int(\mathcal{S}_\alpha)$ was found by the RS algorithm (see [10]) or by any other method (see [22–24]). Let $h > 0$ and let $U^{(0)}$ be a unit vector (actually a matrix, but we consider here the space of matrices as a normed vector space) with respect to the Frobenius norm, i.e., $\|U^{(0)}\|_F = 1$. Let $L^{(0)} = K^{(0)} + h \cdot U^{(0)}$ and let $\mathcal{L}$ be the hyperplane of points $L^{(0)} + V$, where $\langle V, U^{(0)} \rangle_F = 0$. Here $\mathcal{L}$ is the tangent space at $L^{(0)}$ to the closed ball $B(K^{(0)}, h)$ centered at $K^{(0)}$ with radius $h$, with respect to the Frobenius norm on $\mathbb{R}^{q \times r}$. Let $r_\infty > 0$ and let $R_\infty$ denote the set of all $F \in \mathcal{L}$ such that $\|F - L^{(0)}\|_F \le r_\infty$. Let $R_\infty(\epsilon) = R_\infty + B(0, \epsilon)$, where $B(0, \epsilon)$ denotes the closed ball centered at $0$ with radius $\epsilon$ ($0 < \epsilon \le \frac{1}{2}$). Let $D^{(0)} = chull\left(K^{(0)}, R_\infty(\epsilon)\right)$ denote the convex hull of the vertex $K^{(0)}$ with the base $R_\infty(\epsilon)$. Let $S^{(0)}_\alpha = \mathcal{S}_\alpha \cap D^{(0)}$ and note that $S^{(0)}_\alpha$ is compact (but generally not convex).

We wish to minimize the continuous function $\sigma_{max}(K)$ (or the continuous function $J(x_0, K)$, when $x_0$ is known) over the compact set $\mathcal{S}_\alpha \cap B(K^{(0)}, h)$. Let $K^*$ denote a point in $\mathcal{S}_\alpha \cap B(K^{(0)}, h)$ where the minimum of $\sigma_{max}(K)$ is attained. Obviously, $K^* \in D^{(0)}$ for some direction $U^{(0)}$ from $K^{(0)}$.
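The geometry just defined can be sampled directly. The sketch below, assuming numpy is available, draws a unit-Frobenius-norm direction $U^{(0)}$ and a point $F$ in $R_\infty(\epsilon)$; the radial distributions used here are illustrative, since the chapter only requires some distribution on $R_\infty(\epsilon)$, and the helper names are ours.

```python
import numpy as np

def sample_direction(q, r, rng):
    """A direction U0 with ||U0||_F = 1, uniform on the Frobenius sphere."""
    G = rng.standard_normal((q, r))
    return G / np.linalg.norm(G, "fro")

def sample_base_point(K0, U0, h, r_inf, eps, rng):
    """Draw F in R_inf(eps): a point L0 + V of the tangent hyperplane, with
    <V, U0>_F = 0 and ||V||_F <= r_inf, plus an eps-ball perturbation."""
    L0 = K0 + h * U0
    W = rng.standard_normal(K0.shape)
    V = W - np.sum(W * U0) * U0                  # project out the U0 component
    V *= r_inf * rng.uniform() / np.linalg.norm(V, "fro")
    E = rng.standard_normal(K0.shape)
    E *= eps * rng.uniform() / np.linalg.norm(E, "fro")
    return L0 + V + E

rng = np.random.default_rng(0)
K0 = np.zeros((2, 3))                            # a q x r gain with q = 2, r = 3
U0 = sample_direction(2, 3, rng)
F = sample_base_point(K0, U0, h=1.0, r_inf=2.0, eps=0.1, rng=rng)
```

By construction, the component of $F - L^{(0)}$ along $U^{(0)}$ is bounded by $\epsilon$, so $F$ stays within the $\epsilon$-thickened base of the cone.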

The Ray-Shooting Algorithm 1 for the LQR via SOF problem works as follows: we start with a point $K^{(0)} \in int(\mathcal{S}_\alpha)$, found by the RS algorithm (see [10]). Assuming that $K^* \in D^{(0)}$, the inner loop ($j = 1, \dots, n$) uses the Ray-Shooting Method in order to find an approximation of the global minimum of the function $\sigma_{max}(K)$ over $S^{(0)}_\alpha$, the portion of $\mathcal{S}_\alpha$ bounded in the cone $D^{(0)}$. The proof of convergence in probability of the inner loop and its complexity (under the above-mentioned assumption) can be found in [10] (see also [11]). In the inner loop, we choose a search direction by choosing a point $F$ in $R_\infty(\epsilon)$, the base of the cone $D^{(0)}$. Next, in the innermost loop ($k = 0, \dots, s$), we scan the ray $K(t) := (1 - t) K^{(0)} + t F$ and record the best controller on it. Repeating this sufficiently many times, we reach $K^*$ (or an $\epsilon$-neighborhood of it) with high probability, under the assumption that $K^* \in D^{(0)}$ (see Remark 3.1).

The reasoning behind the Ray-Shooting Method is that sampling the whole search space leads to a probabilistic method doomed to the "curse of dimensionality," which the method tries to avoid. This is achieved by slicing the search space into covering cones ($m$ is the number of cones allowed), because any point in a cone is visible from its vertex. In each cone we shoot rays ($n$ is the number of rays per cone) from its vertex toward its base, and each ray is sampled from its head toward its tail while updating the best point found so far. Note that the global minimum of $\sigma_{max}(K)$ over any compact subset of $\mathcal{S}_\alpha$ is attained on the boundary of the related portion of the epigraph of $\sigma_{max}(K)$. Therefore, we can break the innermost loop at the first moment we find an improvement in $\sigma_{max}(K)$. This bypasses the need to sample the whole search space (although we raise the search-space dimension by 1) and explains the efficiency of the Ray-Shooting Method in finding a global optimum. Another advantage of the Ray-Shooting Method, specific to the problem of LQR via SOF, is that the search is concentrated in the parameter space (the $qr$-dimensional space where $K$ resides) and not in the certificate space (the $p^2$-dimensional space where the Lyapunov matrices $P$ reside). Thus, the method avoids the need to solve any Riccati equations, LMIs, or BMIs, which can make a crucial difference for large-scale systems (i.e., where $p^2 \gg qr$).

*Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems DOI: http://dx.doi.org/10.5772/intechopen.89319*

**Algorithm 1.** The Ray-Shooting Algorithm for LQR via SOF for discrete-time systems.

**Require:** *An algorithm for deciding α-stability, an algorithm for computing $\sigma_{max}(K)$, and algorithms for general linear algebra operations.*

**Input:** $0 < \epsilon \le \frac{1}{2}$, $0 < \alpha < 1$, $h > 0$, $r_\infty > 0$, integers $m, n, s > 0$, controllable pairs $(A, B)$ and $(A^T, C^T)$, matrices $Q > 0$, $R > 0$, and $K^{(0)} \in int(\mathcal{S}_\alpha)$.

**Output:** $K \in \mathcal{S}_\alpha$ as close as possible to $K^*$.

1. compute $P(K^{(0)})$ as in (5)
2. $P^{(best)} \leftarrow P(K^{(0)})$
3. $\sigma^{(best)}_{max} \leftarrow \max\left(\sigma\left(P^{(best)}\right)\right)$
4. $\upsilon \leftarrow 1$
5. **for** $i = 1$ **to** $m$ **do**
6. choose $U^{(0)}$ such that $\|U^{(0)}\|_F = 1$, uniformly at random
7. $L^{(0)} \leftarrow K^{(0)} + h \cdot U^{(0)}$
8. **for** $j = 1$ **to** $n$ **do**
9. choose $F \in R_\infty(\epsilon)$, uniformly at random
10. **for** $k = s$ **downto** $0$ **do**
11. $t \leftarrow \frac{k}{s}$
12. $K(t) \leftarrow (1 - t) K^{(0)} + t F$
13. **if** $K(t) \in \mathcal{S}_\alpha$ **then**
14. compute $P(K(t))$ as in (5)
15. $\sigma_{max}(K(t)) \leftarrow \max\left(\sigma\left(P(K(t))\right)\right)$
16. **if** $\sigma_{max}(K(t)) < \sigma^{(best)}_{max}$ **then**
17. $K^{(best)} \leftarrow K(t)$
18. $P^{(best)} \leftarrow P(K(t))$
19. $\sigma^{(best)}_{max} \leftarrow \sigma_{max}(K(t))$
20. **end if**
21. **end if**
22. **end for**
23. **end for**
24. **if** $i > \upsilon \cdot \lceil e \sqrt{2 \pi q r} \rceil$ **then**
25. $K^{(0)} \leftarrow K^{(best)}$
26. $\upsilon \leftarrow \upsilon + 1$
27. **end if**
28. **end for**
29. **return** $K^{(best)}, P^{(best)}, \sigma^{(best)}_{max}$

Remark 3.1. In [12] it is shown that by taking $m = \lceil e \cdot \sqrt{2 \pi q r} \rceil$ iterations in the outer loop, we have $K^* \in D^{(0)}$, for some direction $U^{(0)}$, almost surely. Let $S^{(0)}_\alpha(\epsilon)$ denote the set $\left\{K \in S^{(0)}_\alpha \mid \sigma_{max}(K) \le \sigma_{max}(K^*) + \epsilon\right\}$. Then, the total number of arithmetic operations of the RS algorithm that guarantees a probability of at least $1 - \beta$ of hitting $S^{(0)}_\alpha(\epsilon)$ is given by $O\left(|\ln(\beta)| \frac{h}{\epsilon} \left(\frac{r_\infty}{r_\epsilon}\right)^{q_0 r_0} \left(\max(q, r)^3 + p^6\right)\right)$, for systems with $q \le q_0$, $r \le r_0$ for fixed $q_0, r_0$, where $r_\epsilon$ is the radius of the base of a cone with height $\epsilon$ that has the same volume as $S^{(0)}_\alpha(\epsilon)$; see [10–12]. This is a polynomial-time algorithm when the input is restricted as above and $\frac{r_\infty}{r_\epsilon}$ is regarded as the size of the problem.
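Algorithm 1 can be sketched compactly in numpy under stated assumptions: $\mathcal{S}_\alpha$ is taken as the set of $K$ for which the spectral radius of $A - BKC$ is below $\alpha$, Eq. (5) is assumed to be the closed-loop Stein equation solved here via the vec/Kronecker identity, and the cone base $R_\infty(\epsilon)$ is approximated by a Frobenius ball around $K^{(0)} + h U^{(0)}$. All helper names are ours.

```python
import numpy as np

def P_of_K(A, B, C, Q, R, K):
    """Solve P = Q + C^T K^T R K C + Acl^T P Acl (assumed form of Eq. (5))
    using vec(Acl^T P Acl) = (Acl^T kron Acl^T) vec(P)."""
    Acl = A - B @ K @ C
    p = A.shape[0]
    M = np.eye(p * p) - np.kron(Acl.T, Acl.T)
    rhs = (Q + C.T @ K.T @ R @ K @ C).reshape(-1, order="F")
    return np.linalg.solve(M, rhs).reshape((p, p), order="F")

def ray_shoot_sof(A, B, C, Q, R, K0, alpha, h, r_inf, eps, m, n, s, seed=0):
    q, r = K0.shape
    rng = np.random.default_rng(seed)
    stable = lambda K: np.max(np.abs(np.linalg.eigvals(A - B @ K @ C))) < alpha
    sig = lambda P: np.max(np.linalg.eigvalsh(P))       # sigma_max = max of sigma(P)
    K_best = K0.copy()
    P_best = P_of_K(A, B, C, Q, R, K0)
    s_best = sig(P_best)
    upsilon, restart = 1, int(np.ceil(np.e * np.sqrt(2.0 * np.pi * q * r)))
    for i in range(1, m + 1):
        U0 = rng.standard_normal((q, r))
        U0 /= np.linalg.norm(U0, "fro")                 # ||U0||_F = 1
        L0 = K0 + h * U0
        for _ in range(n):
            D = rng.standard_normal((q, r))             # F from a ball around L0
            F = L0 + (r_inf + eps) * rng.uniform() * D / np.linalg.norm(D, "fro")
            for k in range(s, -1, -1):                  # scan from the head t = 1
                t = k / s
                Kt = (1.0 - t) * K0 + t * F
                if stable(Kt):
                    P = P_of_K(A, B, C, Q, R, Kt)
                    if sig(P) < s_best:
                        K_best, P_best, s_best = Kt, P, sig(P)
                        break                           # first improvement suffices
        if i > upsilon * restart:                       # steps 24-26: recenter
            K0 = K_best.copy()
            upsilon += 1
    return K_best, P_best, s_best

# Toy usage: p = 2 states, scalar input and output; K0 = 0 is alpha-stable.
A = np.array([[0.5, 0.2], [0.0, 0.8]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.eye(1)
K_best, P_best, s_best = ray_shoot_sof(A, B, C, Q, R, np.zeros((1, 1)),
                                       alpha=0.95, h=0.5, r_inf=0.5, eps=0.05,
                                       m=10, n=10, s=10)
```

Note the break on first improvement, as justified in the discussion of the method: the minimum over the cone is attained on the boundary of the relevant portion of the epigraph.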


*Control Theory in Engineering*


## **4. The deterministic algorithm**

The deterministic algorithm we introduce here as Algorithm 2 (which we call the MC algorithm) generalizes the algorithm of Daniel D. Moerder and Anthony A. Calise (see [15]) to the case of discrete-time systems. To the best of our knowledge, this is the best algorithm for LQR via SOF published so far, in terms of rate of convergence (to a local minimum).

Here, we wish to minimize the LQR functional

$$J(\mathbf{x}\_0, P) = \mathbf{x}\_0^T P \mathbf{x}\_0,\tag{8}$$

We also have

¼ *∂*

¼ *∂*

¼ *∂*

Therefore,

Otherwise, if

then

*∂*L

8 >>>>><

>>>>>:

**63**

which is equivalent to

*<sup>∂</sup><sup>K</sup> trace SY* ð Þ

*DOI: http://dx.doi.org/10.5772/intechopen.89319*

*<sup>∂</sup><sup>K</sup> trace S Q* <sup>þ</sup> *<sup>C</sup>TKTRKC* � *<sup>P</sup>* <sup>þ</sup> *AT* � *<sup>C</sup>TKTBT* � �*P A*ð Þ � *BKC* � � � �

*<sup>∂</sup><sup>K</sup> trace SCTKTRKC* � *SATPBKC* � *SCTKTBTPA* <sup>þ</sup> *SCTKTBTPBKC* � �

*<sup>∂</sup><sup>K</sup> trace SCTKTRKC* � *SATPBKC* � *ATPTBKCST* <sup>þ</sup> *SCTKTBTPBKC* � �

<sup>⇔</sup>*RKCSC<sup>T</sup>* � *BTPASCT* <sup>þ</sup> *BTPBKCSCT* <sup>¼</sup> <sup>0</sup>

*BTPASCT:*

*BTPASCT CSC<sup>T</sup>* � ��<sup>1</sup>

*BTPASCT* � *LCSCT* <sup>¼</sup> 0,

where *Z* is arbitrary *q* � *r* matrix (and we may take *Z* ¼ 0, unless some other constraints on *K* are needed). Note that if condition (12) does not happen, then

Theorem 4.1. Assume that Lð Þ *K*, *P*, *S* given by (10) is minimized locally at some

<sup>∗</sup> *RK*∗*<sup>C</sup>* � � � �

<sup>∗</sup> ¼ *S* <sup>∗</sup> . Then

*BTPASCT* � *LCSCT* <sup>¼</sup> 0, (12)

*<sup>B</sup>TP*<sup>∗</sup> *AS* <sup>∗</sup>*C<sup>T</sup> CS* <sup>∗</sup>*C<sup>T</sup>* � �<sup>þ</sup> <sup>þ</sup> *<sup>Z</sup>* <sup>∗</sup> � *RCSCT* , *for some q* � *r matrix Z* <sup>∗</sup>

*vec x*0*x<sup>T</sup>* 0

� *vec Q* <sup>þ</sup> *CTKT*

*BTPASCT CSCT* � �<sup>þ</sup> <sup>þ</sup> *<sup>Z</sup>* � *RCSCT* , (13)

*:* (11)

(14)

<sup>⇔</sup> *<sup>R</sup>* <sup>þ</sup> *BTPB* � �*KCSCT* <sup>¼</sup> *BTPASCT*

<sup>⇔</sup>*KCSCT* <sup>¼</sup> *<sup>R</sup>* <sup>þ</sup> *<sup>B</sup>TPB* � ��<sup>1</sup>

*<sup>K</sup>* <sup>¼</sup> *<sup>R</sup>* <sup>þ</sup> *<sup>B</sup>TPB* � ��<sup>1</sup>

*<sup>R</sup>* <sup>þ</sup> *BTPB* � ��<sup>1</sup>

*<sup>K</sup>* <sup>¼</sup> *<sup>R</sup>* <sup>þ</sup> *BTPB* � ��<sup>1</sup>

*<sup>∂</sup><sup>K</sup>* 6¼ 0. We conclude with the following theorem:

*<sup>P</sup>*<sup>∗</sup> <sup>¼</sup> *mat Ip* <sup>⊗</sup> *Ip* � *Ac*ℓð Þ *<sup>K</sup>*<sup>∗</sup> *<sup>T</sup>* <sup>⊗</sup> *Ac*ℓð Þ *<sup>K</sup>*<sup>∗</sup> *<sup>T</sup>* � ��<sup>1</sup>

� ��<sup>1</sup>

� � � � ,

*S* <sup>∗</sup> ¼ *mat Ip* ⊗ *Ip* � *Ac*ℓð Þ *K* <sup>∗</sup> ⊗ *Ac*ℓð Þ *K* <sup>∗</sup>

where *Ac*ℓð Þ¼ *K*<sup>∗</sup> *A* � *BK* <sup>∗</sup>*C*.

point *K*<sup>∗</sup> , *P*<sup>∗</sup> >0, and *S* <sup>∗</sup> such that *S<sup>T</sup>*

*<sup>K</sup>*<sup>∗</sup> <sup>¼</sup> *<sup>R</sup>* <sup>þ</sup> *<sup>B</sup>TP*<sup>∗</sup> *<sup>B</sup>* � ��<sup>1</sup>

<sup>¼</sup> *<sup>R</sup>TKCSTCT* <sup>þ</sup> *RKCSC<sup>T</sup>* � *BTPTASTCT* � *BTPASCT*

*Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems*

<sup>þ</sup>*BTPTBKCSTCT* <sup>þ</sup> *BTPBKCSCT*

*∂*L *<sup>∂</sup><sup>K</sup>* <sup>¼</sup> <sup>0</sup>

Thus, if *CSCT* is invertible, then

<sup>¼</sup> <sup>2</sup>*RKCSC<sup>T</sup>* � <sup>2</sup>*BTPASCT* <sup>þ</sup> <sup>2</sup>*BTPBKCSCT:*

*∂*L *<sup>∂</sup><sup>K</sup>* <sup>¼</sup> *<sup>∂</sup>*

under the constraints

$$Y(K, P) \coloneqq Q + C^T K^T R K C - P + A\_{\varepsilon\ell}(K)^T P A\_{\varepsilon\ell}(K) = \mathbf{0}, P > \mathbf{0}. \tag{9}$$

Since *<sup>Y</sup><sup>T</sup>* <sup>¼</sup> *<sup>Y</sup>*, there exist orthogonal matrix <sup>U</sup> such that *<sup>Y</sup>*^ ¼ U*TY*<sup>U</sup> is diagonal. Now, minimizing (8) under the constraints (9) is equivalent to minimizing

$$\mathcal{L}(K, P, \mathbb{S}) = \operatorname{trace} \left( \mathbf{x}\_0^T P \mathbf{x}\_0 \right) + \sum\_{i=1}^p \hat{\mathbf{S}}\_{i,i} \hat{\mathbf{Y}}\_{i,i} \mathbf{x}\_i$$

under the constraint *P* > 0, where ^ *Si*,*<sup>i</sup>* are the Lagrange multipliers. We have

$$\begin{aligned} \mathcal{L}(K, P, S) &= \text{trace}\left(\mathbf{x}\_0^T P \mathbf{x}\_0\right) + \sum\_{i=1}^p \hat{S}\_{i,i} \hat{Y}\_{i,i} \\ &= \text{trace}\left(\mathbf{x}\_0^T P \mathbf{x}\_0\right) + \text{trace}\left(\hat{S} \hat{Y}\right) \\ &= \text{trace}\left(\mathbf{x}\_0^T P \mathbf{x}\_0\right) + \text{trace}\left(\mathcal{U} \hat{S} \mathcal{U}^T Y\right) \\ &= \text{trace}\left(\mathbf{x}\_0^T P \mathbf{x}\_0\right) + \text{trace}(\mathcal{U}) \end{aligned}$$
 
$$\begin{aligned} &= \text{trace}\left(\mathbf{x}\_0^T P \mathbf{x}\_0\right) + \text{trace}(S Y) \end{aligned}$$

where *<sup>S</sup>* ¼ U ^ *<sup>S</sup>*U*<sup>T</sup>*. Note that *ST* <sup>¼</sup> *<sup>S</sup>*. Let the Lagrangian be defined by

$$\mathcal{L}(K, P, S) = \text{trace}(\mathbf{x}\_0^T P \mathbf{x}\_0) + \text{trace}(S Y(K, P)), \tag{10}$$

for any *<sup>K</sup>* any *<sup>P</sup>* <sup>&</sup>gt;0 and any *<sup>S</sup>* such that *ST* <sup>¼</sup> *<sup>S</sup>*. The necessary conditions for optimality are *<sup>∂</sup>*<sup>L</sup> *<sup>∂</sup><sup>K</sup>* <sup>¼</sup> 0, *<sup>∂</sup>*<sup>L</sup> *<sup>∂</sup><sup>P</sup>* <sup>¼</sup> 0, and *<sup>∂</sup>*<sup>L</sup> *<sup>∂</sup><sup>S</sup>* <sup>¼</sup> *<sup>Y</sup><sup>T</sup>* <sup>¼</sup> *<sup>Y</sup>* <sup>¼</sup> 0.

Now, using Lemma 2.2, we have

$$\begin{split} \frac{\partial \mathcal{L}}{\partial P} &= \mathbf{0} \\ \Leftrightarrow & \boldsymbol{\kappa}\_{0} \mathbf{x}\_{0}^{T} - \boldsymbol{S}^{T} + \boldsymbol{A}\_{c\ell} \boldsymbol{\mathcal{S}}^{T} \boldsymbol{A}\_{c\ell}^{T} = \mathbf{0} \\ \Leftrightarrow & \boldsymbol{\kappa}\_{0} \mathbf{x}\_{0}^{T} - \boldsymbol{S} + \boldsymbol{A}\_{c\ell} \boldsymbol{\mathcal{S}} \boldsymbol{A}\_{c\ell}^{T} = \mathbf{0} \\ \Leftrightarrow & \boldsymbol{\mathcal{S}} - \boldsymbol{A}\_{c\ell} \boldsymbol{\mathcal{S}} \boldsymbol{A}\_{c\ell}^{T} = \boldsymbol{\kappa}\_{0} \mathbf{x}\_{0}^{T} \\ \Leftrightarrow & \left(\boldsymbol{I}\_{p} \otimes \boldsymbol{I}\_{p} - \boldsymbol{A}\_{c\ell} \otimes \boldsymbol{A}\_{c\ell}\right) \boldsymbol{vec}(\boldsymbol{S}) = \boldsymbol{\nu} \boldsymbol{c} \left(\boldsymbol{\kappa}\_{0} \mathbf{x}\_{0}^{T}\right) \\ \Leftrightarrow & \boldsymbol{\mathcal{S}} = \boldsymbol{\mat} \boldsymbol{\left(\left(\boldsymbol{I}\_{p} \otimes \boldsymbol{I}\_{p} - \boldsymbol{A}\_{c\ell} \otimes \boldsymbol{A}\_{c\ell}\right)^{-1} \boldsymbol{\nu} \boldsymbol{c} \left(\boldsymbol{\kappa}\_{0} \mathbf{x}\_{0}^{T}\right)\right)} \end{split}$$

where the last passage is affordable because *σ*ð Þ *Ac*<sup>ℓ</sup> ⊂ . Note that the last with the stability of *Ac*<sup>ℓ</sup> implies that *S*≥0.

*Algorithms for LQR via Static Output Feedback for Discrete-Time LTI Systems DOI: http://dx.doi.org/10.5772/intechopen.89319*

We also have

**4. The deterministic algorithm**

*Control Theory in Engineering*

convergence (to local minimum).

under the constraint *P* > 0, where ^

under the constraints

where *<sup>S</sup>* ¼ U ^

optimality are *<sup>∂</sup>*<sup>L</sup>

**62**

*<sup>∂</sup><sup>K</sup>* <sup>¼</sup> 0, *<sup>∂</sup>*<sup>L</sup>

Now, using Lemma 2.2, we have

the stability of *Ac*<sup>ℓ</sup> implies that *S*≥0.

*∂*L *<sup>∂</sup><sup>P</sup>* <sup>¼</sup> <sup>0</sup> ⇔*x*0*xT*

⇔*x*0*xT*

<sup>⇔</sup>*<sup>S</sup>* � *Ac*ℓ*SA<sup>T</sup>*

Here, we wish to minimize the LQR functional

The deterministic algorithm we introduce here as Algorithm 2 (which we call the MC algorithm) generalizes the algorithm of Daniel D. Moerder and Anthony A. Calise (see [15]) to the case of discrete-time systems. To the best of our knowledge, this is the best algorithm for LQR via SOF published so far, in terms of rate of

*Y K*ð Þ , *<sup>P</sup>* <sup>≔</sup> *<sup>Q</sup>* <sup>þ</sup> *<sup>C</sup>TKTRKC* � *<sup>P</sup>* <sup>þ</sup> *Ac*ℓð Þ *<sup>K</sup> TPAc*ℓð Þ¼ *<sup>K</sup>* 0, *<sup>P</sup>* <sup>&</sup>gt;0*:* (9)

*p*

*i*¼1 ^ *Si*,*iY*^*<sup>i</sup>*,*<sup>i</sup>*,

*p*

*i*¼1 ^ *Si*,*iY*^*<sup>i</sup>*,*<sup>i</sup>*

*Si*,*<sup>i</sup>* are the Lagrange multipliers. We have

*SY*^ � �

*<sup>S</sup>*U*TY*<sup>U</sup> � �

*<sup>S</sup>*U*TY* � �

� � <sup>þ</sup> *trace SY K* ð Þ ð Þ , *<sup>P</sup>* , (10)

0 � �

*vec x*0*x<sup>T</sup>* 0

Since *<sup>Y</sup><sup>T</sup>* <sup>¼</sup> *<sup>Y</sup>*, there exist orthogonal matrix <sup>U</sup> such that *<sup>Y</sup>*^ ¼ U*TY*<sup>U</sup> is diagonal.

<sup>0</sup> *Px*<sup>0</sup> � � þ<sup>X</sup>

<sup>0</sup> *Px*<sup>0</sup> � � þ<sup>X</sup>

<sup>0</sup> *Px*<sup>0</sup> � � <sup>þ</sup> *trace* ^

<sup>0</sup> *Px*<sup>0</sup> � � <sup>þ</sup> *trace* ^

<sup>0</sup> *Px*<sup>0</sup>

<sup>0</sup> *Px*<sup>0</sup>

<sup>0</sup> *Px*<sup>0</sup>

for any *<sup>K</sup>* any *<sup>P</sup>* <sup>&</sup>gt;0 and any *<sup>S</sup>* such that *ST* <sup>¼</sup> *<sup>S</sup>*. The necessary conditions for

*<sup>∂</sup><sup>S</sup>* <sup>¼</sup> *<sup>Y</sup><sup>T</sup>* <sup>¼</sup> *<sup>Y</sup>* <sup>¼</sup> 0.

*<sup>c</sup>*<sup>ℓ</sup> ¼ 0

*<sup>c</sup>*<sup>ℓ</sup> ¼ 0

� ��<sup>1</sup>

where the last passage is affordable because *σ*ð Þ *Ac*<sup>ℓ</sup> ⊂ . Note that the last with

� � � � ,

� �*vec S*ð Þ¼ *vec x*0*xT*

*<sup>S</sup>*U*<sup>T</sup>*. Note that *ST* <sup>¼</sup> *<sup>S</sup>*. Let the Lagrangian be defined by

� � <sup>þ</sup> *trace* <sup>U</sup> ^

� � <sup>þ</sup> *trace SY* ð Þ

<sup>0</sup> *Px*0, (8)

*J x*ð Þ¼ 0, *<sup>P</sup> xT*

Now, minimizing (8) under the constraints (9) is equivalent to minimizing

Lð Þ¼ *<sup>K</sup>*, *<sup>P</sup>*, *<sup>S</sup> trace xT*

Lð Þ¼ *<sup>K</sup>*, *<sup>P</sup>*, *<sup>S</sup> trace x<sup>T</sup>*

Lð Þ¼ *<sup>K</sup>*, *<sup>P</sup>*, *<sup>S</sup> trace x<sup>T</sup>*

*<sup>∂</sup><sup>P</sup>* <sup>¼</sup> 0, and *<sup>∂</sup>*<sup>L</sup>

<sup>0</sup> � *ST* <sup>þ</sup> *Ac*ℓ*STAT*

*<sup>c</sup>*<sup>ℓ</sup> <sup>¼</sup> *<sup>x</sup>*0*xT* 0

⇔*S* ¼ *mat Ip* ⊗ *Ip* � *Ac*<sup>ℓ</sup> ⊗ *Ac*<sup>ℓ</sup>

<sup>0</sup> � *<sup>S</sup>* <sup>þ</sup> *Ac*ℓ*SA<sup>T</sup>*

⇔ *Ip* ⊗ *Ip* � *Ac*<sup>ℓ</sup> ⊗ *Ac*<sup>ℓ</sup>

<sup>¼</sup> *trace x<sup>T</sup>*

<sup>¼</sup> *trace x<sup>T</sup>*

<sup>¼</sup> *trace x<sup>T</sup>*

<sup>¼</sup> *trace x<sup>T</sup>*

*∂*L *<sup>∂</sup><sup>K</sup>* <sup>¼</sup> *<sup>∂</sup> <sup>∂</sup><sup>K</sup> trace SY* ð Þ ¼ *∂ <sup>∂</sup><sup>K</sup> trace S Q* <sup>þ</sup> *<sup>C</sup>TKTRKC* � *<sup>P</sup>* <sup>þ</sup> *AT* � *<sup>C</sup>TKTBT* � �*P A*ð Þ � *BKC* � � � � ¼ *∂ <sup>∂</sup><sup>K</sup> trace SCTKTRKC* � *SATPBKC* � *SCTKTBTPA* <sup>þ</sup> *SCTKTBTPBKC* � � ¼ *∂ <sup>∂</sup><sup>K</sup> trace SCTKTRKC* � *SATPBKC* � *ATPTBKCST* <sup>þ</sup> *SCTKTBTPBKC* � � <sup>¼</sup> *<sup>R</sup>TKCSTCT* <sup>þ</sup> *RKCSC<sup>T</sup>* � *BTPTASTCT* � *BTPASCT* <sup>þ</sup>*BTPTBKCSTCT* <sup>þ</sup> *BTPBKCSCT* <sup>¼</sup> <sup>2</sup>*RKCSC<sup>T</sup>* � <sup>2</sup>*BTPASCT* <sup>þ</sup> <sup>2</sup>*BTPBKCSCT:*

Therefore,

$$\begin{split} \frac{\partial \mathcal{L}}{\partial K} &= \mathbf{0} \\ \Leftrightarrow & \mathbf{R} \mathbf{K} \mathbf{C} \mathbf{S} \mathbf{C}^{T} - \mathbf{B}^{T} \mathbf{P} \mathbf{A} \mathbf{S} \mathbf{C}^{T} + \mathbf{B}^{T} \mathbf{P} \mathbf{B} \mathbf{K} \mathbf{C} \mathbf{S} \mathbf{C}^{T} = \mathbf{0} \\ \Leftrightarrow & \left( \mathbf{R} + \mathbf{B}^{T} \mathbf{P} \mathbf{B} \right) \mathbf{K} \mathbf{C} \mathbf{S} \mathbf{C}^{T} = \mathbf{B}^{T} \mathbf{P} \mathbf{A} \mathbf{S} \mathbf{C}^{T} \\ \Leftrightarrow & \mathbf{K} \mathbf{C} \mathbf{S} \mathbf{C}^{T} = \left( \mathbf{R} + \mathbf{B}^{T} \mathbf{P} \mathbf{B} \right)^{-1} \mathbf{B}^{T} \mathbf{P} \mathbf{A} \mathbf{S} \mathbf{C}^{T} . \end{split}$$

Thus, if $C S C^T$ is invertible, then

$$K = \left(\mathbf{R} + \mathbf{B}^T \mathbf{P} \mathbf{B}\right)^{-1} \mathbf{B}^T \mathbf{P} \mathbf{A} \mathbf{S} \mathbf{C}^T \left(\mathbf{C} \mathbf{S} \mathbf{C}^T\right)^{-1}. \tag{11}$$

Otherwise, if

$$\left(\mathbf{R} + \mathbf{B}^T \mathbf{P} \mathbf{B}\right)^{-1} \mathbf{B}^T \mathbf{P} \mathbf{A} \mathbf{S} \mathbf{C}^T \cdot \mathbf{L}\_{\mathbf{C} \mathbf{S} \mathbf{C}^T} = \mathbf{0},$$

which is equivalent to

$$\mathbf{B}^T \mathbf{P} \mathbf{A} \mathbf{S} \mathbf{C}^T \cdot \mathbf{L}\_{\mathbf{C} \mathbf{S} \mathbf{C}^T} = \mathbf{0}, \tag{12}$$

then

$$K = \left(\mathbf{R} + \mathbf{B}^T \mathbf{P} \mathbf{B}\right)^{-1} \mathbf{B}^T \mathbf{P} \mathbf{A} \mathbf{S} \mathbf{C}^T \left(\mathbf{C} \mathbf{S} \mathbf{C}^T\right)^+ + \mathbf{Z} \cdot \mathbf{R}\_{\mathbf{C} \mathbf{S} \mathbf{C}^T},\tag{13}$$

where $Z$ is an arbitrary $q \times r$ matrix (and we may take $Z = 0$, unless some other constraints on $K$ are needed). Note that if condition (12) does not hold, then $\frac{\partial \mathcal{L}}{\partial K} \neq 0$. We conclude with the following theorem:

Theorem 4.1. Assume that $\mathcal{L}(K, P, S)$ given by (10) is minimized locally at some point $K_\ast$, $P_\ast > 0$, and $S_\ast$ such that $S_\ast^T = S_\ast$. Then

$$\begin{cases} K_\ast = \left(R + B^T P_\ast B\right)^{-1} B^T P_\ast A S_\ast C^T \left(C S_\ast C^T\right)^{+} + Z_\ast \cdot R_{C S_\ast C^T}, \text{ for some } q \times r \text{ matrix } Z_\ast, \\ P_\ast = \operatorname{mat}\left(\left(I_p \otimes I_p - A_{c\ell}(K_\ast)^T \otimes A_{c\ell}(K_\ast)^T\right)^{-1} \operatorname{vec}\left(Q + C^T K_\ast^T R K_\ast C\right)\right), \\ S_\ast = \operatorname{mat}\left(\left(I_p \otimes I_p - A_{c\ell}(K_\ast) \otimes A_{c\ell}(K_\ast)\right)^{-1} \operatorname{vec}\left(x_0 x_0^T\right)\right), \end{cases} \tag{14}$$

where $A_{c\ell}(K_\ast) = A - B K_\ast C$.
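Theorem 4.1 suggests, though does not guarantee, a fixed-point iteration: alternately solve the two vectorized Lyapunov equations for $P_\ast$ and $S_\ast$, then update $K_\ast$ from the first line of (14) with $Z_\ast = 0$. A minimal NumPy sketch for the special case $C = I$, an assumption chosen here because the update then reduces to the classical Hewer-type iteration, which converges from a stabilizing initial gain; all matrices are illustrative:

```python
import numpy as np

def dlyap(A_, W):
    # solve X = A_ X A_^T + W via vec/Kronecker; valid when rho(A_) < 1
    p = A_.shape[0]
    x = np.linalg.solve(np.eye(p * p) - np.kron(A_, A_), W.flatten(order="F"))
    return x.reshape((p, p), order="F")

# illustrative discrete-time system; C = I means static state feedback
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
B = np.array([[0.0],
              [1.0]])
C = np.eye(2)
Q = np.eye(2)
R = np.eye(1)
x0 = np.array([[1.0], [1.0]])

K = np.zeros((1, 2))  # stabilizing initial gain, since rho(A) = 0.9 < 1
for _ in range(50):
    A_cl = A - B @ K @ C
    P = dlyap(A_cl.T, Q + C.T @ K.T @ R @ K @ C)   # equation (9) for P
    S = dlyap(A_cl, x0 @ x0.T)                     # dual equation for S
    K = (np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
         @ S @ C.T @ np.linalg.pinv(C @ S @ C.T))  # first line of (14), Z = 0

# verify we reached a fixed point of (14): recompute P, S for the final K
A_cl = A - B @ K @ C
P = dlyap(A_cl.T, Q + C.T @ K.T @ R @ K @ C)
S = dlyap(A_cl, x0 @ x0.T)
K_next = (np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A
          @ S @ C.T @ np.linalg.pinv(C @ S @ C.T))
assert np.linalg.norm(K_next - K) < 1e-8
assert np.max(np.abs(np.linalg.eigvals(A_cl))) < 1  # closed loop stays stable
```

For genuine output feedback ($C \neq I$, $r < p$) no such convergence guarantee is available, which is precisely why the paper resorts to the randomized Ray-Shooting search over $K$.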

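The closed-form gradient $\frac{\partial \mathcal{L}}{\partial K} = 2\left(R K C S C^T - B^T P A S C^T + B^T P B K C S C^T\right)$ underlying (11)-(13) can also be sanity-checked by finite differences, since $\mathcal{L}$ is quadratic in $K$ for fixed $P$ and $S$, so central differences are essentially exact. A sketch with randomly generated matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, r = 4, 2, 3                    # state, input and output dimensions (assumed)
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))
C = rng.standard_normal((r, p))
Q = np.eye(p)
R = np.eye(q)
K = rng.standard_normal((q, r))
M = rng.standard_normal((p, p)); P = M @ M.T + np.eye(p)  # symmetric P > 0
N = rng.standard_normal((p, p)); S = N @ N.T + np.eye(p)  # symmetric S

def L_of_K(K):
    # trace(S * Y(K, P)) with P and S held fixed
    A_cl = A - B @ K @ C
    Y = Q + C.T @ K.T @ R @ K @ C - P + A_cl.T @ P @ A_cl
    return np.trace(S @ Y)

# closed-form gradient from the derivation
G = 2 * (R @ K @ C @ S @ C.T - B.T @ P @ A @ S @ C.T
         + B.T @ P @ B @ K @ C @ S @ C.T)

# central finite differences, entry by entry
eps = 1e-6
G_fd = np.zeros_like(K)
for i in range(q):
    for j in range(r):
        E = np.zeros_like(K); E[i, j] = eps
        G_fd[i, j] = (L_of_K(K + E) - L_of_K(K - E)) / (2 * eps)

assert np.allclose(G, G_fd, atol=1e-4)
```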