
#### **Chapter 5**

## On Parametrizations of State Feedbacks and Static Output Feedbacks and Their Applications

*Yossi Peretz*

#### **Abstract**

In this chapter, we provide an explicit free parametrization of all the stabilizing static state feedbacks for continuous-time Linear-Time-Invariant (LTI) systems, which are given in their state-space representation. The parametrization of the set of all the stabilizing static output feedbacks is next derived by imposing a linear constraint on the stabilizing static state feedbacks of a related system. The parametrizations are utilized for optimal control problems and for pole-placement and exact pole-assignment problems.

**Keywords:** control systems, continuous-time systems, state-space representation, feedback stabilization, static state feedback, static output feedback, Lyapunov equation, parametrization, optimization, optimal control, *H*∞-control, *H*2-control, linear-quadratic regulators, pole assignment, pole placement, robust control

#### **1. Introduction**

The problem of stabilization by static output feedback (SOF) is of great practical importance, for several reasons: such feedbacks are simple, cheap, and reliable, and their implementation is simple and direct. Since full-state measurements are not always available in practical applications, applying a stabilizing state feedback (SF) is not always possible. Obviously, in practical applications the entries of the needed SOFs are bounded, with bounds known in advance, but unfortunately, the problem of SOFs with interval-constrained entries is NP-hard (see [1, 2]). Exact pole assignment and simultaneous stabilization via SOF, as well as stabilization via structured SOFs, are also NP-hard problems (see [2, 3], respectively). These problems become even harder when optimal SOFs are sought, where the optimality notion can be the sparsity of the controller (see [4]) (e.g., for reliability purposes in networked control systems (NCSs)), the cost or energy consumption of the controller (which are related to various norm bounds on the controller), the *H*∞-norm, the *H*2-norm, or the linear-quadratic regulator (LQR) functional of the closed loop. The practical meaning of the NP-hardness of the aforementioned problems is that they cannot be formulated as convex problems (e.g., through LMIs or SDPs) and cannot admit efficient algorithms (under the widespread belief that P ≠ NP). Thus, one has to compromise on the exactness (which might affect the feasibility of the solution) or on the optimality of the solution. Therefore, one has to utilize the specific structure of the given problem in order to describe the set of all feasible solutions effectively, reducing the number of variables and constraints to the minimum, for the purpose of increasing the efficiency and accuracy of the available algorithms. This is the aim of the proposed method.

Several formulations and related algorithms were introduced in the literature for the constrained SOF and other hard control problems. The iterated linear matrix inequalities (ILMI), bilinear matrix inequalities (BMI), and semi-definite programming (SDP) approaches for the constrained SOF problem, for the simultaneous stabilizing SOF problem, and for robust control via SOF (with related algorithms) were studied in [5–11]. The problem of pole placement via SOF and the problem of robust pole placement via static feedback were studied in [12, 13]. In [14, 15], the method of alternating projections was utilized to solve the problems of rank minimization and pole placement via SOFs, respectively. Probabilistic and randomized methods for the constrained SOF problem and for robust stabilization via SOFs (among other hard problems) were discussed in [16–19]. In [20], the problem of minimal-gain SOF was solved efficiently by a randomized method. A nonsmooth analysis approach for *H*∞ synthesis and for the SOF problem is given in [21, 22], respectively. A MATLAB® library for multiobjective robust control problems based on the nonsmooth analysis approach was introduced in [23]. All these references (and many more not cited here) show the significance of the constrained SOF problem for control applications.

Many problems can be reduced to the constrained SOF problem, including the minimal-degree dynamic-feedback problem, robust or decentralized stabilization via static feedback, the reduced-order *H*∞ filter problem, global minimization of the LQR functional via SOF, and the design problem of optimal PID controllers (see [2, 10, 24–27], respectively). It is worth mentioning [28], where the alternating direction method of multipliers was utilized to alternate between optimizing the sparsity of the state feedback matrix and optimizing the closed-loop *H*2-norm, with the sparsity measure introduced as a penalty term and without any pre-assumed knowledge of the sparsity structure of the controller. The method of augmented Lagrangians for optimal structured static feedbacks was considered in [29], where the structure is assumed to be known in advance (otherwise, one would have to solve a combinatorial problem). The computational overhead of all the aforementioned methods can be reduced significantly if a good parametrization of all the SOFs of the given system can be found, where a parametrization may be called "good" if it takes the structure of the given specific system into account and if it cleanly separates free from dependent parameters, thus resulting in a minimal set of nonlinear nonconvex inequalities/equations that need to be solved.

In [30], a parametrization of all the SFs and SOFs of linear time-invariant (LTI) continuous-time systems is achieved by using a characterization of all the (marginally) stable matrices as dissipative Hamiltonian matrices, leading to a high-performance sequential semi-definite programming algorithm for the minimal-gain SOF problem. The method proposed there can also be applied to LTI discrete-time systems, by adding semi-definite conditions that place the closed-loop eigenvalues in the unit disk. A new parametrization for SOF control of linear parameter-varying (LPV) discrete-time systems, with guaranteed ℓ2-gain performance, is provided in [31]. The parametrization there is given in terms of an infinite set of LMIs, which becomes finite if some structure is assumed on the parameter-dependent matrices (e.g., an affine dependency). The *H*2-norm guaranteed-performance SOF control of hidden Markov jump linear systems (HMJLS) is studied in [32], where the SOFs are parameterized via convex optimization with LMI constraints, under the assumptions of full-rank sensor matrices and an efficient and accurate Markov-chain state estimator. In [33], an iterative LMI algorithm is proposed for the SOF

#### *On Parametrizations of State Feedbacks and Static Output Feedbacks and Their Applications DOI: http://dx.doi.org/10.5772/intechopen.101176*

problem for LTI continuous-time negative-imaginary (NI) systems with given *H*<sup>∞</sup> norm-bound on the closed loop, based on decoupling the dependencies between the SOF and the Lyapunov certificate matrix.

When solving an optimization problem, it is important to have a convenient parametrization of the set of feasible solutions. Otherwise, one needs to use the probabilistic "generate and check" method, which suffers severely from the "curse of dimensionality" (see [16]). In [13], a closed form of all the stabilizing state feedbacks is derived (up to a set of measure zero), for the purpose of exact pole assignment, where the location errors are optimized by lowering the condition number of the similarity matrix and the controller performance is optimized by minimizing its Frobenius norm. The parametrization in [13] is based on the assumptions that the input-to-state matrix $B$ has full rank and that at least one real state feedback leading to a diagonalizable closed-loop matrix exists, where a necessary condition for the existence of such a feedback is that the multiplicity of any assigned eigenvalue is at most $\operatorname{rank}(B)$. In this context, it is worth mentioning [34], in which a parametrization of all the exact pole-assignment state feedbacks is given, under the assumption that the set of desired closed-loop poles contains a sufficient number of real eigenvalues (which poses no problem when pole placement is of concern, where it is generally assumed that the region is symmetric with respect to the real axis and contains a real-axis segment together with its neighborhood). The results of [34] and of the current chapter are based on a controllability recursive structure that was discovered in [35].

In this chapter, using the aforementioned controllability recursive structure, we introduce a parametrization of the set of all stabilizing SOFs for continuous-time LTI systems, with no further assumptions on the given system (for discrete-time LTI systems, the parametrization is much more involved and will be treated in future work). As opposed to the notable works [36–38], where the parametrization still requires solving some LMIs in order to obtain the Lyapunov matrix, here we give an explicit recursive formula for the Lyapunov matrix and for the feedback in the case of SF, and a constrained form of the Lyapunov matrix and the feedback in the case of SOF.

The rest of the chapter goes as follows:

In Section 2, we set notions and give some basic useful lemmas, and in Section 3, we introduce the parametrization of the set of all stabilizing static-state feedbacks for LTI continuous-time systems. In Section 4, we introduce the constrained parametrization of the set of all stabilizing SOFs for LTI continuous-time systems. The effectiveness of the method is shown on a real-life system. Section 5 is based on [34] and is devoted to the problem of exact pole assignment by SF, for LTI continuous-time or discrete-time systems. The effectiveness of the method is shown on a real-life system. Finally, in Section 6, we conclude with some remarks and intentions for a future work.

#### **2. Preliminaries**

By $\mathbb{C}$ we denote the complex field and by $\mathbb{C}^-$ the open left half-plane. For $z \in \mathbb{C}$ we denote by $\mathrm{R}(z)$ its real part and by $\mathrm{I}(z)$ its imaginary part. For a square matrix $Z$, we denote by $\sigma(Z)$ the spectrum of $Z$. For a $p \times q$ matrix $Z$, we denote by $Z^T$ its transpose, and by $z_{i,j}$ or $Z_{i,j}$ its $(i,j)$'th element or block element. A square matrix $Z$ in the continuous-time context (in the discrete-time context) is said to be (asymptotically) stable if any eigenvalue $\lambda \in \sigma(Z)$ satisfies $\mathrm{R}(\lambda) < 0$, i.e., $\lambda \in \mathbb{C}^-$ (satisfies $|\lambda| < 1$, i.e., $\lambda \in \mathbb{D}$, where $\mathbb{D}$ is the open unit disk).

Consider a continuous-time system in the form:

$$\begin{cases} \frac{d}{dt}\mathbf{x}(t) = A\mathbf{x}(t) + Bu(t) \\\\ \mathbf{y}(t) = \mathbf{C}\mathbf{x}(t) \end{cases} \tag{1}$$

where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{r \times n}$, and $x, u, y$ are the state, the input, and the measurement, respectively. Assuming that the state $x$ is fully accessible and fully available for feedback, we define $u = -K^{(0)}x$ to be the state feedback (SF). When the state is not fully accessible or not fully available for feedback but the measurement $y$ is available for feedback, we define $u = -Ky$ to be the static output feedback (SOF). The problems that we consider here are the following:


The parametrizations will be used to achieve goals and performance criteria for the system other than stability, stability being the basic criterion that defines feasibility.

A square matrix $Z$ is said to be non-negative (denoted $Z \ge 0$) if $Z^T = Z$ and $v^TZv \ge 0$ for any vector $v$. A non-negative matrix $Z$ is said to be strictly non-negative (denoted $Z > 0$) if $v^TZv > 0$ for any vector $v \ne 0$. For two square matrices $Z, W$, we write $Z \ge W$ ($Z > W$) if $Z - W \ge 0$ (respectively, if $Z - W > 0$). For a matrix $Z \in \mathbb{R}^{p \times q}$, we denote by $Z^+$ the Moore-Penrose pseudo-inverse (see [39, 40] for definition and properties). By $L_Z$, $R_Z$ we denote the orthogonal projections $I_q - Z^+Z$ and $I_p - ZZ^+$, respectively, where $I_s$ denotes the identity matrix of size $s \times s$. Note that $Z^+Z$ and $ZZ^+$ (as well as $L_Z$ and $R_Z$) are symmetric and orthogonally diagonalizable, with eigenvalues from $\{0, 1\}$. By $diag$ and $bdiag$ we denote diagonal and block-diagonal matrices, respectively.

A system triplet $(A, B, C)$ is SOF stabilizable (or just stabilizable) if and only if there exist $K$ and $P > 0$ such that

$$\mathbf{E}\mathbf{P} + \mathbf{P}\mathbf{E}^T = -\mathbf{R} \tag{2}$$

for some given $R > 0$ (one can always choose $R = I$), where $E = A - BKC$. For the "if" direction, note that (2) implies the negativity of the real part of any eigenvalue of $E$, implying that the closed-loop matrix $E$ is stable. For the "only-if" direction, under the assumption that $E = A - BKC$ is stable for some given $K$, one can show that $P := \int_0^{\infty} \exp(Et)\, R\, \exp\!\left(E^T t\right) dt$ is well defined, satisfies $P^T = P$ and $P > 0$, and is the unique solution of (2).
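As a concrete illustration (a minimal numerical sketch, with an arbitrarily chosen stable matrix standing in for the closed loop $E = A - BKC$), the "only-if" direction can be checked with SciPy's Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)

# An illustrative stable closed-loop matrix E: shift a random matrix
# so that all its eigenvalues have real part -1 or less.
E = rng.standard_normal((4, 4))
E -= (np.max(np.linalg.eigvals(E).real) + 1.0) * np.eye(4)

R = np.eye(4)
# solve_continuous_lyapunov(a, q) solves a x + x a^H = q,
# so E P + P E^T = -R corresponds to q = -R.
P = solve_continuous_lyapunov(E, -R)

assert np.allclose(E @ P + P @ E.T, -R)    # P solves (2)
assert np.allclose(P, P.T)                 # P is symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)   # and strictly non-negative (P > 0)
```

The solver returns exactly the matrix given by the integral formula above, here verified through the equation it satisfies rather than by quadrature.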

Note that the set of all SOFs is given by $K = B^+XC^+ + L_BS + TR_C$, where $S, T$ are any $m \times r$ matrices and $X$ is any $n \times n$ matrix such that $E = A - BB^+XC^+C$ is stable. Thus, one can optimize $K$ by utilizing the freedom in $S, T$ without changing the closed-loop performance achieved by $X$. This characterization of the feasibility space shows its effectiveness in proving theorems, as will be seen throughout the chapter (see also [20, 35]). We also conclude that $(A, B, C)$ is stabilizable if and only if $(A, BB^+, C^+C)$ is stabilizable.
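The independence of the closed loop from the free parameters $S, T$ can be verified numerically; the dimensions and matrices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 5, 2, 3
B = rng.standard_normal((n, m))
C = rng.standard_normal((r, n))
X = rng.standard_normal((n, n))

Bp, Cp = np.linalg.pinv(B), np.linalg.pinv(C)
LB = np.eye(m) - Bp @ B          # L_B = I_m - B^+ B  (so B L_B = 0)
RC = np.eye(r) - C @ Cp          # R_C = I_r - C C^+  (so R_C C = 0)

for _ in range(3):
    S, T = rng.standard_normal((m, r)), rng.standard_normal((m, r))
    K = Bp @ X @ Cp + LB @ S + T @ RC
    # the closed loop depends only on X: B K C = B B^+ X C^+ C
    assert np.allclose(B @ K @ C, B @ Bp @ X @ Cp @ C)
```

Since $BL_B = 0$ and $R_CC = 0$, the terms in $S$ and $T$ vanish inside $BKC$, which is what the loop checks.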

In the sequel, we make use of the following lemma (see [39]):


Lemma 2.1 The matrix equation $AX = B$ has solutions if and only if $AA^+B = B$ (equivalently, $R_AB = 0$). When the condition is satisfied, the set of all solutions is given by

$$X = A^{+}B + L_{A}Z,\tag{3}$$

where $Z$ is an arbitrary matrix. Moreover, we have $\|X\|_F^2 = \|A^+B\|_F^2 + \|L_AZ\|_F^2$, implying that the minimal Frobenius-norm solution is $X = A^+B$.

Similarly, the equation $YA = B$ has solutions if and only if $BA^+A = B$ (equivalently, $BL_A = 0$). When the condition is satisfied, the set of all solutions is given by

$$Y = BA^{+} + WR_{A},\tag{4}$$

where $W$ is an arbitrary matrix. Moreover, we have $\|Y\|_F^2 = \|BA^+\|_F^2 + \|WR_A\|_F^2$, implying that the minimal Frobenius-norm solution is $Y = BA^+$.
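A quick numerical check of Lemma 2.1, with an illustrative rank-deficient $A$ and a right-hand side chosen so that $AX = B$ is solvable:

```python
import numpy as np

rng = np.random.default_rng(2)
# rank-2 matrix A (4x3) and a solvable right-hand side B = A X_true
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))
B = A @ rng.standard_normal((3, 3))

Ap = np.linalg.pinv(A)
RA = np.eye(4) - A @ Ap          # R_A = I_p - A A^+
LA = np.eye(3) - Ap @ A          # L_A = I_q - A^+ A

assert np.allclose(A @ Ap @ B, B)   # solvability condition AA^+B = B
assert np.allclose(RA @ B, 0)       # equivalently, R_A B = 0

Xmin = Ap @ B                        # minimal Frobenius-norm solution
Z = rng.standard_normal((3, 3))
X = Xmin + LA @ Z                    # a general solution, formula (3)
assert np.allclose(A @ X, B)

# the Pythagorean norm identity of Lemma 2.1
lhs = np.linalg.norm(X, 'fro')**2
rhs = np.linalg.norm(Xmin, 'fro')**2 + np.linalg.norm(LA @ Z, 'fro')**2
assert np.isclose(lhs, rhs)
```

The norm identity holds because the columns of $A^+B$ lie in the row space of $A$, while the columns of $L_AZ$ lie in its null space, so the two terms are orthogonal in the Frobenius inner product.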

#### **3. Parametrization of all the static state feedbacks**

We start with the following lemma known as the projection lemma (see [41], Theorem 3.1):

Lemma 3.1 The pair $(A, BB^+)$ is stabilizable if and only if there exists $P > 0$ such that

$$R_B\left(I + AP + PA^T\right)R_B = 0.\tag{5}$$

When (5) is satisfied, $X$ is a stabilizing SF (i.e., $A - BB^+X$ is stable) if and only if $X$ is a solution of

$$\mathbf{B}\mathbf{B}^+\mathbf{X}\mathbf{P} + \mathbf{P}\mathbf{X}^T\mathbf{B}\mathbf{B}^+ = I + \mathbf{A}\mathbf{P} + \mathbf{P}\mathbf{A}^T.\tag{6}$$

Moreover, one specific solution for (6) is given by

$$X\_0 = \left(I + AP + PA^T\right) \left(I - \frac{1}{2}BB^+\right)P^{-1}.\tag{7}$$

Similarly, $(A^T, C^+C)$ is stabilizable if and only if there exists $Q > 0$ such that

$$L\_C \left( I + A^T Q + Q A \right) L\_C = 0. \tag{8}$$

When (8) is satisfied, $Y^T$ is a stabilizing SF (i.e., $A^T - C^+CY^T$, or equivalently $A - YC^+C$, is stable) if and only if $Y$ is a solution of:

$$\mathbf{C}^{+}\mathbf{C}\mathbf{Y}^{T}\mathbf{Q} + \mathbf{Q}\mathbf{Y}\mathbf{C}^{+}\mathbf{C} = I + \mathbf{A}^{T}\mathbf{Q} + \mathbf{Q}\mathbf{A}.\tag{9}$$

One specific solution for (9) is given by

$$Y\_0 = Q^{-1} \left( I - \frac{1}{2} \mathbf{C}^+ \mathbf{C} \right) \left( I + A^T \mathbf{Q} + Q \mathbf{A} \right). \tag{10}$$

Remark 3.1 The explicit formulas (7) and (10) are our small contribution to the projection lemma, Lemma 3.1. Unfortunately, we do not have such explicit formulas for LTI discrete-time systems.
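The formulas of Lemma 3.1 can be exercised numerically. The construction below is our own illustrative way of producing a pair $(A, P)$ satisfying (5): pick a stable matrix $E_0$ and an arbitrary seed $X_s$, set $A = E_0 + BB^+X_s$, and let $P$ solve $E_0P + PE_0^T = -I$; then (5) holds, and the specific solution (7) is verified to solve (6) and to stabilize the closed loop:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n, m = 5, 2
B = rng.standard_normal((n, m))
BBp = B @ np.linalg.pinv(B)
RB = np.eye(n) - BBp

# build (A, P) satisfying (5): A = E0 + B B^+ X_s with E0 stable,
# and P the solution of E0 P + P E0^T = -I
E0 = rng.standard_normal((n, n))
E0 -= (np.max(np.linalg.eigvals(E0).real) + 1.0) * np.eye(n)
Xs = rng.standard_normal((n, n))
A = E0 + BBp @ Xs
P = solve_continuous_lyapunov(E0, -np.eye(n))

M = np.eye(n) + A @ P + P @ A.T
assert np.allclose(RB @ M @ RB, 0)                    # (5) holds

X0 = M @ (np.eye(n) - 0.5 * BBp) @ np.linalg.inv(P)   # formula (7)
assert np.allclose(BBp @ X0 @ P + P @ X0.T @ BBp, M)  # X0 solves (6)

E = A - BBp @ X0
assert np.allclose(E @ P + P @ E.T, -np.eye(n))       # hence E is stable
assert np.max(np.linalg.eigvals(E).real) < 0
```

Note that (6) forces $EP + PE^T = -I$, so stability of the closed loop follows from the Lyapunov argument of Section 2.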

In order to describe the set of all solutions for (6) and (9), we need the following lemma that can be proved easily:

Lemma 3.2 Let *P*> 0, *Q* > 0. Then, the set of all solutions for:

$$\mathbf{Z}\mathbf{P} + \mathbf{P}\mathbf{Z}^T = \mathbf{0},\tag{11}$$

is given by $Z = WP^{-1}$, where $W^T = -W$. Similarly, the set of all solutions of:

$$\mathbf{Z}^T \mathbf{Q} + \mathbf{Q} \mathbf{Z} = \mathbf{0},\tag{12}$$

is given by $Z = Q^{-1}V$, where $V^T = -V$.
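Lemma 3.2 is easy to check numerically (the matrices below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4
# a random P > 0
Phalf = rng.standard_normal((n, n))
P = Phalf @ Phalf.T + n * np.eye(n)

W = rng.standard_normal((n, n))
W = W - W.T                         # skew-symmetric: W^T = -W
Z = W @ np.linalg.inv(P)            # Lemma 3.2: Z = W P^{-1}
assert np.allclose(Z @ P + P @ Z.T, 0)   # Z solves (11)

# conversely, for any Z the matrix W = Z P satisfies
# W + W^T = Z P + P Z^T, so W is skew exactly when Z solves (11)
Z2 = rng.standard_normal((n, n))
W2 = Z2 @ P
assert np.allclose(W2 + W2.T, Z2 @ P + P @ Z2.T)
```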

The following theorem describes the set of all solutions for (6) and (9), using the controllers (7) and (10):

Theorem 3.1 Let *P*> 0 satisfy (5) and let *X*<sup>0</sup> be given by (7). Then, *X* is a solution for (6) if and only if:

$$X = X_0 + WP^{-1} + R_BL,\tag{13}$$

where $W$ satisfies $W^T = -W$, $R_BW = 0$, and $L$ is arbitrary.

Similarly, let *Q* >0 satisfy (8) and let *Y*<sup>0</sup> be given by (10). Then, *Y* is a solution for (9) if and only if:

$$Y = Y\_0 + Q^{-1}V + ML\_C,\tag{14}$$

where $V$ satisfies $V^T = -V$, $VL_C = 0$, and $M$ is arbitrary.

**Proof:**

Assume that $X$ is a solution of (6). Since $X_0$ is also a solution of (6), it follows that $BB^+(X - X_0)P + P(X - X_0)^TBB^+ = 0$. Let $Z = BB^+(X - X_0)$. Then, $R_BZ = 0$ and $ZP + PZ^T = 0$. Lemma 3.2 implies that $Z = WP^{-1}$, where $W^T = -W$, and therefore $R_BW = 0$. We conclude that $X - X_0 = WP^{-1} + R_BL$ for some $L$ (namely, $L = X - X_0$).

Conversely, let $X$ be given by (13) and let $Z = WP^{-1}$. Then, $BB^+(X - X_0) = BB^+Z = Z$, since $BB^+R_B = 0$ and since $R_BZ = 0$. Now, $ZP + PZ^T = 0$ implies that

$$BB^+(X - X_0)P + P(X - X_0)^TBB^+ = 0,$$

from which we conclude that $X$ satisfies (6), since $X_0$ satisfies (6). The second claim is proved similarly. ■

In the following, we describe the set $\mathcal{P}$ of all matrices $P > 0$ satisfying (5). Note that in Theorem 3.1 the existence of $P > 0$ satisfying (5) is guaranteed, by Lemma 3.1, by the assumption that $(A, BB^+)$ is stabilizable. Let $P \in \mathcal{P}$ and let

$$\begin{cases} X\_0 = \left(I + AP + PA^T\right) \cdot \left(I - \frac{1}{2}BB^+\right)P^{-1} \\\\ W \text{ arbitrary such that } W^T = -W, R\_B W = 0 \\\\ X = X\_0 + WP^{-1} + R\_BL \text{ where } L \text{ is arbitrary} \\\\ K = B^+X + L\_B F \text{ where } F \text{ is arbitrary.} \end{cases} \tag{15}$$


Let $\mathcal{X}(P)$ denote the set of all matrices $X$ satisfying (15) for a fixed $P \in \mathcal{P}$, and let $\mathcal{K}(P)$ denote the set of all matrices $K$ satisfying (15) for a fixed $P \in \mathcal{P}$. Note that for a fixed $P \in \mathcal{P}$, the set $\mathcal{X}(P)$ is convex (actually affine), and $\cup_{P \in \mathcal{P}}\mathcal{X}(P)$ contains all the stabilizing $X$ parameters of the stabilizable pair $(A, BB^+)$. Finally, $\cup_{P \in \mathcal{P}}\mathcal{K}(P)$ contains all the stabilizing SFs $K$ of the stabilizable pair $(A, B)$.

For a stabilizable pair $(A^T, C^T)$, let $\mathcal{Q}$ be the set of all matrices $Q > 0$ satisfying (8), and let

$$\begin{cases} Y\_0 = \mathbf{Q}^{-1} \cdot \left( I - \frac{1}{2} \mathbf{C}^+ \mathbf{C} \right) \left( I + \mathbf{A}^T \mathbf{Q} + \mathbf{Q} \mathbf{A} \right) \\\\ V \text{ arbitrary such that } \mathbf{V}^T = -\mathbf{V}, \mathbf{V} \mathbf{L}\_C = \mathbf{0} \\\\ Y = Y\_0 + \mathbf{Q}^{-1} \mathbf{V} + M \mathbf{L}\_C \text{ where } M \text{ is arbitrary} \\\\ K = \mathbf{Y} \mathbf{C}^+ + \mathbf{G} \mathbf{R}\_C \text{ where } \mathbf{G} \quad \text{is arbitrary} \end{cases} \tag{16}$$

Let $\mathcal{Y}(Q)$ denote the set of all matrices $Y$ satisfying (16) for a fixed $Q \in \mathcal{Q}$, and let $\mathcal{K}(Q)$ denote the set of all matrices $K$ satisfying (16) for a fixed $Q \in \mathcal{Q}$. Then, $\cup_{Q \in \mathcal{Q}}\mathcal{K}(Q)$ contains all the stabilizing SFs $K$ of the stabilizable pair $(A^T, C^T)$.

In the following, we assume (without loss of generality, see Remark 4.2) that $(A, BB^+)$ is controllable. Under this assumption, we recursively (going downwards) define a sequence of sub-systems of the given system $(A, BB^+)$. Since $BB^+$ is a symmetric matrix (with eigenvalues from the set $\{0, 1\}$), it is diagonalizable by an orthogonal matrix. Let $U$ denote an orthogonal matrix such that

$$\widehat{B} = U^TBB^+U = \begin{bmatrix} I_k & 0 \\ 0 & 0 \end{bmatrix} = \operatorname{bdiag}(I_k, 0) \tag{17}$$

(where $k = \operatorname{rank}(B) = \operatorname{rank}(BB^+) \ge 1$ since $(A, B)$ is controllable). Let $\widehat{A} = U^TAU = \begin{bmatrix} \widehat{A}_{1,1} & \widehat{A}_{1,2} \\ \widehat{A}_{2,1} & \widehat{A}_{2,2} \end{bmatrix}$ be partitioned accordingly. Let $U^{(0)} = U$ and let $A^{(0)} = A$, $B^{(0)} = B$, $n_0 = n$, $k_0 = \operatorname{rank}(B^{(0)})$. Similarly, let $U^{(1)}$ be an orthogonal matrix such that $U^{(1)T}B^{(1)}B^{(1)+}U^{(1)} = \operatorname{bdiag}(I_{k_1}, 0)$, where $B^{(1)} = \widehat{A}_{2,1}$. Let $A^{(1)} = \widehat{A}_{2,2}$, $n_1 = n_0 - k_0$, $k_1 = \operatorname{rank}(B^{(1)})$. Then, $(A^{(1)}, B^{(1)})$ is controllable since $(A^{(0)}, B^{(0)})$ is controllable (see [35] and Lemma 5.1 in the following).

Recursively, assume that the pair $(A^{(i)}, B^{(i)})$ was defined and is controllable. Let $U^{(i)}$ be an orthogonal matrix such that $\widehat{B^{(i)}} = U^{(i)T}B^{(i)}B^{(i)+}U^{(i)} = \operatorname{bdiag}(I_{k_i}, 0)$, where $k_i \ge 1$ (since $(A^{(i)}, B^{(i)})$ is controllable). Let $\widehat{A^{(i)}} = U^{(i)T}A^{(i)}U^{(i)} = \begin{bmatrix} \widehat{A^{(i)}}_{1,1} & \widehat{A^{(i)}}_{1,2} \\ \widehat{A^{(i)}}_{2,1} & \widehat{A^{(i)}}_{2,2} \end{bmatrix}$ be partitioned accordingly, with main-diagonal blocks of sizes $k_i \times k_i$ and $(n_i - k_i) \times (n_i - k_i)$. Let $A^{(i+1)} = \widehat{A^{(i)}}_{2,2}$, $B^{(i+1)} = \widehat{A^{(i)}}_{2,1}$, $n_{i+1} = n_i - k_i$, $k_i = \operatorname{rank}(B^{(i)})$. Then, $(A^{(i+1)}, B^{(i+1)})$ is controllable. We stop the recursion when $B^{(i)}B^{(i)+} = I_{k_i}$ for some $i = b$ (the base case, in which also $k_b = n_b$).
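The downward recursion can be sketched as follows, for an assumed controllable single-input pair; the function name is ours, and `numpy.linalg.eigh` is used to orthogonally diagonalize the projection $BB^+$:

```python
import numpy as np

def reduce_step(A, B):
    """One downward step: find orthogonal U with U^T BB^+ U = bdiag(I_k, 0)
    and return (U, k, A^(i+1), B^(i+1)) as in the recursion."""
    BBp = B @ np.linalg.pinv(B)        # orthogonal projection onto range(B)
    w, U = np.linalg.eigh(BBp)         # eigenvalues are (numerically) 0 or 1
    order = np.argsort(w)[::-1]        # put the eigenvalue-1 block first
    U = U[:, order]
    k = int(round(np.sum(w)))          # k = rank(B)
    Ah = U.T @ A @ U
    return U, k, Ah[k:, k:], Ah[k:, :k]   # A^(i+1) = Ah_{2,2}, B^(i+1) = Ah_{2,1}

# illustrative controllable pair (a single-input chain of integrators)
A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 2., 3.]])
B = np.array([[0.], [0.], [1.]])

ranks, Ai, Bi = [], A, B
while True:
    U, k, A_next, B_next = reduce_step(Ai, Bi)
    ranks.append(k)
    if k == Ai.shape[0]:    # base case: B^(i) B^(i)+ = I
        break
    Ai, Bi = A_next, B_next
print(ranks)   # → [1, 1, 1]: the recursion strips one state per level
```

For a single-input controllable pair each $B^{(i)}$ is a nonzero column, so $k_i = 1$ at every level and the recursion runs for $n$ steps.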

Now, we go upward and define the Lyapunov matrices and the related SFs of the sub-systems. For the base case $i = b$, let $P^{(b)} > 0$ be arbitrary (note that it is a free parameter!). Let

$$\begin{cases} X_0^{(b)} = \frac{1}{2}\left(I_{n_b} + A^{(b)}P^{(b)} + P^{(b)}A^{(b)T}\right)\left(P^{(b)}\right)^{-1} \\ W^{(b)} \text{ arbitrary such that } W^{(b)T} = -W^{(b)} \\ X^{(b)} = X_0^{(b)} + W^{(b)}\left(P^{(b)}\right)^{-1} \\ K^{(b)} = B^{(b)+}X^{(b)} + L_{B^{(b)}}F^{(b)} \text{ where } F^{(b)} \text{ is arbitrary,} \end{cases} \tag{18}$$

and note that $R_{B^{(b)}} = 0$ in the base case. Now, it can be checked that $E^{(b)} = A^{(b)} - B^{(b)}K^{(b)} = A^{(b)} - X^{(b)}$ is stable. We therefore have a parametrization of $\mathcal{K}^{(b)}\left(P^{(b)}\right)$ through arbitrary $P^{(b)} > 0$.

Let $\mathcal{P}^{(i+1)}$ denote the set of all $P^{(i+1)} > 0$ satisfying:

$$R_{B^{(i+1)}}\left(I_{n_{i+1}} + A^{(i+1)}P^{(i+1)} + P^{(i+1)}A^{(i+1)T}\right)R_{B^{(i+1)}} = 0,\tag{19}$$

and assume that $\mathcal{K}^{(i+1)}\left(P^{(i+1)}\right)$ was parameterized through $P^{(i+1)} > 0$ ranging in the set $\mathcal{P}^{(i+1)}$ defined by (19). Similarly, let $\mathcal{P}^{(i)}$ denote the set of all $P^{(i)} > 0$ satisfying:

$$R\_{B^{(i)}} \left( I\_{n\_i} + A^{(i)} P^{(i)} + P^{(i)} A^{(i)T} \right) R\_{B^{(i)}} = \mathbf{0},\tag{20}$$

and assume that $\mathcal{K}^{(i)}\left(P^{(i)}\right)$ was parameterized through $P^{(i)} > 0$ ranging in the set $\mathcal{P}^{(i)}$ defined by (20).

Now, we need to characterize the matrices $P^{(i)} > 0$ belonging to the set $\mathcal{P}^{(i)}$. Multiplying (20) from the left by $U^{(i)T}$ and from the right by $U^{(i)}$, we get:

$$\widehat{R_{B^{(i)}}}\left(I_{n_i} + \widehat{A^{(i)}}\,\widehat{P^{(i)}} + \widehat{P^{(i)}}\,\widehat{A^{(i)}}^T\right)\widehat{R_{B^{(i)}}} = 0,$$

where $\widehat{R_{B^{(i)}}} = \begin{bmatrix} 0 & 0 \\ 0 & I_{n_i - k_i} \end{bmatrix}$ and $\widehat{P^{(i)}} = U^{(i)T}P^{(i)}U^{(i)} = \begin{bmatrix} \widehat{P^{(i)}}_{1,1} & \widehat{P^{(i)}}_{1,2} \\ \widehat{P^{(i)}}_{1,2}^{\,T} & \widehat{P^{(i)}}_{2,2} \end{bmatrix}$ is

partitioned accordingly. The condition (20) is therefore equivalent to:

$$\begin{split} I\_{n\_i - k\_i} &+ \left( \widehat{\boldsymbol{A}^{(i)}}\_{2,2} + \widehat{\boldsymbol{A}^{(i)}}\_{2,1} \widehat{\boldsymbol{P}^{(i)}}\_{1,2} \left( \widehat{\boldsymbol{P}^{(i)}}\_{2,2} \right)^{-1} \right) \widehat{\boldsymbol{P}^{(i)}}\_{2,2} + \\ &+ \widehat{\boldsymbol{P}^{(i)}}\_{2,2} \left( \widehat{\boldsymbol{A}^{(i)}}\_{2,2} + \widehat{\boldsymbol{A}^{(i)}}\_{2,1} \widehat{\boldsymbol{P}^{(i)}}\_{1,2} \left( \widehat{\boldsymbol{P}^{(i)}}\_{2,2} \right)^{-1} \right)^{T} = \mathbf{0}, \end{split} \tag{21}$$

which is equivalent to:

$$\begin{aligned} I\_{n\_i - k\_i} &+ \left( A^{(i+1)} + B^{(i+1)} \widehat{P^{(i)}}\_{1,2} \left( \widehat{P^{(i)}}\_{2,2} \right)^{-1} \right) \widehat{P^{(i)}}\_{2,2} + \\ &+ \widehat{P^{(i)}}\_{2,2} \left( A^{(i+1)} + B^{(i+1)} \widehat{P^{(i)}}\_{1,2} \left( \widehat{P^{(i)}}\_{2,2} \right)^{-1} \right)^{T} = \mathbf{0}. \end{aligned} \tag{22}$$

Let $P^{(i+1)} \in \mathcal{P}^{(i+1)}$ and let $K^{(i+1)} \in \mathcal{K}^{(i+1)}\left(P^{(i+1)}\right)$. Set $\widehat{P^{(i)}}_{2,2} := P^{(i+1)}$ and set $\widehat{P^{(i)}}_{1,2} := -K^{(i+1)}P^{(i+1)}$. Now, since $\widehat{P^{(i)}}_{2,2} := P^{(i+1)} > 0$, (22) implies that the system:

$$A^{(i+1)} + B^{(i+1)}\widehat{P^{(i)}}_{1,2}\left(\widehat{P^{(i)}}_{2,2}\right)^{-1} = A^{(i+1)} - B^{(i+1)}K^{(i+1)},\tag{23}$$

is stable. Now,

$$\widehat{P^{(i)}} = \begin{bmatrix} \widehat{P^{(i)}}_{1,1} & -K^{(i+1)}P^{(i+1)} \\ -P^{(i+1)}K^{(i+1)T} & P^{(i+1)} \end{bmatrix} \tag{24}$$

and we need to define $\widehat{P^{(i)}}_{1,1}$ in order to complete $\widehat{P^{(i)}}$ to a strictly non-negative matrix. Since:

$$\begin{bmatrix} \widehat{P^{(i)}}_{1,1} & -K^{(i+1)}P^{(i+1)} \\ -P^{(i+1)}K^{(i+1)T} & P^{(i+1)} \end{bmatrix} = \begin{bmatrix} I_{k_i} & -K^{(i+1)} \\ 0 & I_{n_i - k_i} \end{bmatrix} \cdot \begin{bmatrix} \widehat{P^{(i)}}_{1,1} - K^{(i+1)}P^{(i+1)}K^{(i+1)T} & 0 \\ 0 & P^{(i+1)} \end{bmatrix} \cdot \begin{bmatrix} I_{k_i} & 0 \\ -K^{(i+1)T} & I_{n_i - k_i} \end{bmatrix},$$

it follows that $\widehat{P^{(i)}} > 0$ if and only if $\widehat{P^{(i)}}_{1,1} - K^{(i+1)}P^{(i+1)}K^{(i+1)T} > 0$, or equivalently, if and only if $\widehat{P^{(i)}}_{1,1} = \Delta\widehat{P^{(i)}}_{1,1} + K^{(i+1)}P^{(i+1)}K^{(i+1)T}$, where $\Delta\widehat{P^{(i)}}_{1,1}$ is an arbitrary strictly non-negative matrix (a free parameter!).

Conversely, if $P^{(i)} > 0$ satisfies (20), then (23) is stable and thus:

$$K^{(i+1)} = -\widehat{P^{(i)}}_{1,2}\left(\widehat{P^{(i)}}_{2,2}\right)^{-1} \in \mathcal{K}^{(i+1)}\left(R^{(i+1)}\right),$$

for some $R^{(i+1)} \in \mathcal{P}^{(i+1)}$. Now, $K^{(i+1)} \in \mathcal{K}^{(i+1)}\left(R^{(i+1)}\right)$ if and only if:

$$I_{n_i - k_i} + \left(A^{(i+1)} - B^{(i+1)}K^{(i+1)}\right)R^{(i+1)} + R^{(i+1)}\left(A^{(i+1)} - B^{(i+1)}K^{(i+1)}\right)^T = 0,\tag{25}$$

Since the last equation has a unique strictly non-negative solution and since $\widehat{P^{(i)}}_{2,2}$ satisfies this equation, it follows that $R^{(i+1)} = \widehat{P^{(i)}}_{2,2}$. Let $P^{(i+1)} = \widehat{P^{(i)}}_{2,2}$. Then, $K^{(i+1)} \in \mathcal{K}^{(i+1)}\left(P^{(i+1)}\right)$, and since $\widehat{P^{(i)}}_{1,2} = -K^{(i+1)}P^{(i+1)}$, it follows that $\widehat{P^{(i)}}$ has the form (24). Thus, $\widehat{P^{(i)}}_{1,1} = \Delta\widehat{P^{(i)}}_{1,1} + K^{(i+1)}P^{(i+1)}K^{(i+1)T}$, where $\Delta\widehat{P^{(i)}}_{1,1} > 0$ is arbitrary (a free parameter!) and

$$\widehat{P^{(i)}} = U^{(i)T} P^{(i)} U^{(i)} = \begin{bmatrix} \widehat{\Delta P^{(i)}}_{1,1} + K^{(i+1)} P^{(i+1)} K^{(i+1)T} & -K^{(i+1)} P^{(i+1)} \\ -P^{(i+1)} K^{(i+1)T} & P^{(i+1)} \end{bmatrix}. \tag{26}$$

Therefore, $\mathcal{P}^{(i)}$ is the set of all $P^{(i)} > 0$ such that $\widehat{P^{(i)}} = U^{(i)T} P^{(i)} U^{(i)}$ is given by (26). We thus have a parametrization of all $P^{(i)} > 0$ satisfying (20). Specifically, $\mathcal{P}^{(0)}$ is the set of all $P^{(0)} > 0$ satisfying (5).

Now, let $P^{(i)} \in \mathcal{P}^{(i)}$ and let

$$\begin{cases} \mathbf{X}\_{0}^{(i)} = \left(I\_{n\_{i}} + A^{(i)}P^{(i)} + P^{(i)}A^{(i)T}\right) \cdot \\ \qquad \cdot \left(I\_{n\_{i}} - \frac{1}{2}B^{(i)}B^{(i)+}\right) \left(P^{(i)}\right)^{-1} \\ \text{ } \mathbf{W}^{(i)} \text{ arbitrary such that } \mathbf{W}^{(i)T} = -\mathbf{W}^{(i)}, R\_{B^{(i)}}\mathbf{W}^{(i)} = \mathbf{0} \\ \mathbf{X}^{(i)} = \mathbf{X}\_{0}^{(i)} + \mathbf{W}^{(i)}\left(P^{(i)}\right)^{-1} + R\_{B^{(i)}}L^{(i)} \text{ where } L^{(i)} \text{ is arbitrary} \\ \mathbf{K}^{(i)} = \mathbf{B}^{(i)+}\mathbf{X}^{(i)} + L\_{B^{(i)}}F^{(i)} \text{ where } F^{(i)} \text{ is arbitrary.} \end{cases} \tag{27}$$

Then, it can be checked that $E^{(i)} = A^{(i)} - B^{(i)} K^{(i)} = A^{(i)} - B^{(i)} B^{(i)+} X^{(i)}$ is stable. We therefore have a parametrization of $\mathcal{K}^{(i)}\left(P^{(i)}\right)$ through $P^{(i)} \in \mathcal{P}^{(i)}$. We conclude the discussion above with the following:
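As a quick numerical sanity check of (27) (a sketch, not from the chapter): when $B$ is square and invertible we have $R_B = \mathbf{0}$ and $L_B = \mathbf{0}$, so any $P > 0$ is admissible, and the choice $W = \mathbf{0}$, $L = \mathbf{0}$, $F = \mathbf{0}$ gives $K = B^{+}X_0$; by construction the closed loop then satisfies $EP + PE^T = -I$, hence is stable. The matrices below are arbitrary illustrative data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))                  # arbitrary (possibly unstable) A
B = rng.standard_normal((n, n)) + n * np.eye(n)  # invertible B, so B B^+ = I and R_B = 0
P = 2.0 * np.eye(n)                              # any P > 0 works when R_B = 0

Bp = np.linalg.pinv(B)                           # Moore-Penrose pseudo-inverse
M = np.eye(n) + A @ P + P @ A.T
X0 = M @ (np.eye(n) - 0.5 * B @ Bp) @ np.linalg.inv(P)
K = Bp @ X0                                      # free parameters W, L, F set to zero

E = A - B @ K                                    # closed-loop matrix
# By construction E P + P E^T = -I, so E is stable
assert np.allclose(E @ P + P @ E.T, -np.eye(n))
assert np.linalg.eigvals(E).real.max() < 0
```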

Theorem 3.2 Let $(A, B)$ be a controllable pair. Then, in the above notations, for $i = b-1, \ldots, 0$, $P^{(i)} > 0$ satisfies (20) if and only if $\widehat{P^{(i)}} = U^{(i)T} P^{(i)} U^{(i)}$ has the structure (26), where $\widehat{\Delta P^{(i)}}_{1,1} > 0$ is arbitrary (a free parameter), where $K^{(i+1)} \in \mathcal{K}^{(i+1)}\left(P^{(i+1)}\right)$, where $P^{(b)} > 0$ is arbitrary (a free parameter), and $\mathcal{K}^{(b)}\left(P^{(b)}\right)$ is given by (18). Moreover, $\mathcal{K}^{(i)}\left(P^{(i)}\right)$ for $i = b-1, \ldots, 0$ is given by (27).

Similarly to the discussion above, relating to $\left(A^T, C^+C\right)$ and defining subsystems for $j = 0, \ldots, c$, we have a parametrization of all $Q^{(j)} > 0$ satisfying (8) for the related subsystem and, specifically, $\mathcal{Q}^{(0)}$ is the set of all $Q^{(0)} > 0$ satisfying (8). The parametrizations of all the stabilizing SFs of $\left(A, BB^+\right)$ and $\left(A^T, C^+C\right)$ are given in the following:

Corollary 3.1 Let $\left(A, BB^+\right)$ be a given controllable pair. Then, the set of all stabilizing SFs of $\left(A, BB^+\right)$ is given by $X = X_0 + WP^{-1} + R_B L$ where

$$X\_0 = \left(I + AP + PA^T\right) \left(I - \frac{1}{2}BB^+\right)P^{-1},$$

where $L$ is arbitrary, $W$ satisfies $W^T = -W$, $R_B W = \mathbf{0}$, and $P > 0$ satisfies

$$R\_B \left( I + AP + PA^T \right) R\_B = \mathbf{0},$$

i.e., $P \in \mathcal{P}^{(0)}$.

Similarly, let $\left(A^T, C^+C\right)$ be a given controllable pair. Then, the set of all stabilizing SFs of $\left(A^T, C^+C\right)$ is given by $Y = Y_0 + Q^{-1}V + ML_C$ where

$$Y\_0 = Q^{-1} \left( I - \frac{1}{2} \mathbf{C}^+ \mathbf{C} \right) (I + A^T \mathbf{Q} + Q A),$$

where $M$ is arbitrary, $V$ satisfies $V^T = -V$, $VL_C = \mathbf{0}$, and $Q > 0$ satisfies

$$L\_C(I + A^T Q + QA)L\_C = 0,$$

i.e., $Q \in \mathcal{Q}^{(0)}$.

*On Parametrizations of State Feedbacks and Static Output Feedbacks and Their Applications DOI: http://dx.doi.org/10.5772/intechopen.101176*

#### **4. Parametrizations of all the static output feedbacks**

In this section, we give two parametrizations for the set of all the stabilizing SOFs. We start with the following lemma, which was extensively used in [20]:

Lemma 4.1 A system $(A, B, C)$ is stabilizable if and only if $(A, B)$ and $\left(A^T, C^T\right)$ are stabilizable and there exist matrices $X, Y \in \mathbb{R}^{n \times n}$ such that $A - BB^+X$ and $A - YC^+C$ are stable and $BB^+X = YC^+C$. When the conditions hold, the set of all stabilizing SOFs related to the chosen matrices $X, Y$ is given by $K_X = B^+XC^+ + L_B S + T R_C$ or by $K_Y = B^+YC^+ + L_B F + H R_C$, respectively, where $S, T, F, H$ are arbitrary $m \times r$ matrices. The closed-loop matrix is given by $E = A - BK_XC = A - BB^+X = A - YC^+C = A - BK_YC$.

Remark 4.1 Under the hypotheses of Corollary 3.1, note that $X = X_0 + WP^{-1} + R_B L$ and $Y = Y_0 + Q^{-1}V + ML_C$ satisfy $BB^+X = YC^+C$ if and only if $BB^+X_0 + WP^{-1} = Y_0C^+C + Q^{-1}V$, since $BB^+W = W$, $BB^+R_B = \mathbf{0}$, $VC^+C = V$, and $L_CC^+C = \mathbf{0}$. Moreover, this condition can be simplified so that it does not include matrix inverses:

$$QBB^+ \left(I + AP + PA^T\right) \left(I - \frac{1}{2}BB^+\right) + QW = \left(I - \frac{1}{2}C^+C\right) \left(I + A^TQ + QA\right)C^+CP + VP. \tag{28}$$

We can state now the first parametrization for the set of all the stabilizing SOFs:

Corollary 4.1 Let $(A, B, C)$ be a given system triplet. Assume that $(A, B)$ and $\left(A^T, C^T\right)$ are controllable. Then, the system has a stabilizing static output feedback if and only if there exist $P, Q > 0$ and $W, V$ such that

$$\begin{cases} R\_B(I + AP + PA^T)R\_B = \mathbf{0} \ (i.e.P \in \mathcal{P}^{(0)}) \\\\ L\_C(I + A^T Q + QA)L\_C = \mathbf{0} \ (i.e.Q \in \mathcal{Q}^{(0)}) \\\\ W^T = -W, R\_B W = \mathbf{0} \\\\ V^T = -V, VL\_C = \mathbf{0} \\\\ QBB^+ \left(I + AP + PA^T\right) \left(I - \frac{1}{2}BB^+\right) + QW = \\\\ \begin{aligned} &= \left(I - \frac{1}{2}C^+C\right) \left(I + A^TQ + QA\right)C^+CP + VP. \end{aligned} \end{cases}$$

In this case, *A* � *BKC* is stable if and only if

$$\begin{cases} K = K_X = B^+ X C^+ + L_B S + T R_C \\ X = X_0 + W P^{-1} + R_B L \\ X_0 = \left( I + AP + PA^T \right) \left( I - \frac{1}{2} B B^+ \right) P^{-1}, \end{cases}$$

where *S*, *T*, *L* are arbitrary.

Similarly, *A* � *BKC* is stable if and only if

$$\begin{cases} \boldsymbol{K} = \boldsymbol{K}\_{\boldsymbol{Y}} = \boldsymbol{B}^+ \boldsymbol{Y} \boldsymbol{C}^+ + \boldsymbol{L}\_{\boldsymbol{B}} \boldsymbol{F} + \boldsymbol{H} \boldsymbol{R}\_{\boldsymbol{C}} \\ \boldsymbol{Y} = \boldsymbol{Y}\_0 + \boldsymbol{Q}^{-1} \boldsymbol{V} + \boldsymbol{M} \boldsymbol{L}\_{\boldsymbol{C}} \\ \boldsymbol{Y}\_0 = \boldsymbol{Q}^{-1} \left( \boldsymbol{I} - \frac{1}{2} \boldsymbol{C}^+ \boldsymbol{C} \right) \left( \boldsymbol{I} + \boldsymbol{A}^T \boldsymbol{Q} + \boldsymbol{Q} \boldsymbol{A} \right), \end{cases}$$

where *F*, *H*, *M* are arbitrary.

We conclude this section with a second SOF parametrization:

Corollary 4.2 Let $(A, B)$ and $\left(A^T, C^T\right)$ be controllable pairs. Then, $A - BKC$ is stable if and only if there exists $K^{(0)} \in \mathcal{K}^{(0)}\left(P^{(0)}\right)$ for some $P^{(0)} \in \mathcal{P}^{(0)}$, such that $K^{(0)}L_C = \mathbf{0}$. In this case, the set of all $K$'s such that $A - BKC$ is stable is given by $K = K^{(0)}C^+ + GR_C$, where $K^{(0)} \in \mathcal{K}^{(0)}\left(P^{(0)}\right)$ and $G$ is arbitrary.

**Proof:** If there exists $K^{(0)} \in \mathcal{K}^{(0)}\left(P^{(0)}\right)$ such that $K^{(0)}L_C = \mathbf{0}$ for some $P^{(0)} \in \mathcal{P}^{(0)}$ then $K^{(0)} = K^{(0)}C^+C$. Since $K^{(0)} \in \mathcal{K}^{(0)}\left(P^{(0)}\right)$, it follows that $A - BK^{(0)}$ is stable. Thus, for $K = K^{(0)}C^+$ we get that $A - BKC$ is stable.

Conversely, if $A - BKC$ is stable for some $K$ then, for $K^{(0)} = KC$, we have $K^{(0)}L_C = \mathbf{0}$ and, since $A - BK^{(0)}$ is stable, Theorem 3.2 implies that there exists $P^{(0)} \in \mathcal{P}^{(0)}$ such that $K^{(0)} \in \mathcal{K}^{(0)}\left(P^{(0)}\right)$. ■
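The mechanism behind the corollary can be illustrated numerically (a minimal sketch with hypothetical data, writing $L_C = I - C^+C$): a state feedback $K^{(0)}$ with $K^{(0)}L_C = \mathbf{0}$ has its rows in the row space of $C$, so the induced SOF $K = K^{(0)}C^+$ reproduces the same closed loop.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])          # only the first state is measured
K0 = np.array([[5.0, 0.0]])         # K0 L_C = 0 since its second column is zero

Cp = np.linalg.pinv(C)
L_C = np.eye(2) - Cp @ C            # projection onto ker(C)
assert np.allclose(K0 @ L_C, 0)

K = K0 @ Cp                         # the induced static output feedback
# K C = K0 C^+ C = K0, so the closed-loop matrices coincide
assert np.allclose(A - B @ K @ C, A - B @ K0)
```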

Remark 4.2 Note that when $\left(A, BB^+\right)$ is stabilizable, there exists an orthogonal matrix $V$ such that

$$V^TAV = \begin{bmatrix} \hat{A}\_{1,1} & \mathbf{0} \\ \hat{A}\_{2,1} & \hat{A}\_{2,2} \end{bmatrix}, V^TBB^+V = \begin{bmatrix} \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \hat{B}\_{2,2}\hat{B}\_{2,2}^+ \end{bmatrix},$$

where $\hat{A}_{1,1}$ is stable and $\left(\hat{A}_{2,2}, \hat{B}_{2,2}\hat{B}^+_{2,2}\right)$ is controllable (see [35], Lemma 3.1, p. 536). Thus, we may assume without loss of generality that the given pair is controllable. If $(A, B, C)$ is a system triplet such that $\left(A, BB^+\right)$ and $\left(A^T, C^+C\right)$ are stabilizable then there exists an orthogonal matrix $V$ such that $\left(\hat{A}_{2,2}, \hat{B}_{2,2}\hat{B}^+_{2,2}\right)$ and $\left(\hat{A}^T_{2,2}, \hat{C}^+_{2,2}\hat{C}_{2,2}\right)$ are controllable, where $\hat{C} = V^TC^+CV$ is partitioned accordingly (see [35], Theorem 4.1 and Remark 4.1, p. 539). Thus, the assumption in Corollary 4.2 that the given pairs are controllable does not cause any loss of generality of the results.

The effectiveness of the method is shown in the following example, but first, for the convenience of the reader, we summarize the whole method in Algorithm 1 (with its continuation in Algorithm 2). Let $f(K)$ denote a target function of the SOF $K$, to be minimized (e.g., $\|K\|_F$, the LQR functional, the $H_\infty$-norm or the $H_2$-norm of the closed loop, the pole-placement errors of the closed loop, or any other key performance measure that depends on $K$).

Regarding the LQR problem, let the LQR functional be defined by:

$$J(\mathbf{x}\_0, \boldsymbol{\mu}) = \int\_0^\infty \left( \mathbf{x}(t)^T \mathbf{Q} \mathbf{x}(t) + \boldsymbol{\mu}(t)^T \mathbf{R} \boldsymbol{\mu}(t) \right) dt,\tag{29}$$

where $Q > 0$ and $R \geq 0$ are given. We need to find $u(t)$ that minimizes the functional value for any initial disturbance $x_0$ from the equilibrium point $\mathbf{0}$. Assuming that $u(t)$ is realized by a stabilizing SOF, let $u(t) = -Ky(t) = -KCx(t)$. Then, by substituting the last into (29), we get:

$$J(\mathbf{x}\_0, K) = \int\_0^\infty \mathbf{x}(t)^T \left(\mathbf{Q} + \mathbf{C}^T K^T \mathbf{R} \mathbf{K} \mathbf{C}\right) \mathbf{x}(t) dt. \tag{30}$$


Now, since $Q + C^TK^TRKC > 0$ and since $E := A - BKC$ is stable, the Lyapunov equation:

$$E^T P + PE = -\left(Q + \mathbf{C}^T \mathbf{K}^T \mathbf{R} \mathbf{K} \mathbf{C}\right),\tag{31}$$

has a unique solution $P_{LQR}(K) > 0$ given by:

$$\begin{split} P_{LQR}(K) &= \int_0^\infty \exp\left(E^T t\right) \left(Q + C^T K^T R K C\right) \exp\left(Et\right) dt = \\ &= -\operatorname{mat}\left(\left(I \otimes E^T + E^T \otimes I\right)^{-1} \operatorname{vec}\left(Q + C^T K^T R K C\right)\right). \end{split} \tag{32}$$
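The Kronecker/vec identity in (32) can be checked numerically; the sketch below (with an illustrative stable $E$ and weight matrix only, not the chapter's data) solves the Lyapunov equation through the Kronecker form and cross-checks it against SciPy's `solve_continuous_lyapunov`:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def p_lqr(E, Qbar):
    """Solve E^T P + P E = -Qbar via the Kronecker/vec identity (32)."""
    n = E.shape[0]
    I = np.eye(n)
    L = np.kron(I, E.T) + np.kron(E.T, I)     # I (x) E^T + E^T (x) I
    vecP = -np.linalg.solve(L, Qbar.reshape(-1, order="F"))  # column-stacking vec
    return vecP.reshape(n, n, order="F")

E = np.array([[-1.0, 2.0], [0.0, -3.0]])      # a stable closed-loop matrix
Qbar = np.array([[2.0, 0.5], [0.5, 1.0]])     # stand-in for Q + C^T K^T R K C > 0
P = p_lqr(E, Qbar)

# cross-check: solve_continuous_lyapunov(a, q) solves a x + x a^H = q; take a = E^T, q = -Qbar
assert np.allclose(P, solve_continuous_lyapunov(E.T, -Qbar))
assert np.all(np.linalg.eigvalsh(P) > 0)      # the unique solution is positive definite
```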

By substitution of (31) into (30), we get:

$$J(\mathbf{x}\_0, \mathbf{K}) = \mathbf{x}\_0^T P\_{LQR}(\mathbf{K}) \mathbf{x}\_0 = \left\| P\_{LQR}(\mathbf{K})^\frac{1}{2} \mathbf{x}\_0 \right\|\_2^2. \tag{33}$$

#### **Algorithm 1. An Algorithm For Optimal SOF's.**

**Require:** *An algorithm for optimizing f K*ð Þ *under LMI and linear constraints, an algorithm for computing the Moore-Penrose pseudo-inverse and an algorithm for orthogonal diagonalization.*

**Input:** *System triplet $(A, B, C)$ such that $(A, B)$ and $\left(A^T, C^T\right)$ are controllable.*

**Output:** *SOF $K$ such that $A - BKC$ is stable, minimizing $f(K)$ — if one exists.*

1. $A^{(0)} \leftarrow A$
2. $B^{(0)} \leftarrow B$
3. $i \leftarrow 0$
4. $k_0 \leftarrow \operatorname{rank}\left(B^{(0)}\right)$
5. **while** $B^{(i)}B^{(i)+} \neq I_{k_i}$ **do**
6. *compute an orthogonal matrix* $U^{(i)}$ *such that* $U^{(i)T}B^{(i)}B^{(i)+}U^{(i)} = \operatorname{bdiag}\left(I_{k_i}, \mathbf{0}\right)$
7. $\widehat{A^{(i)}} \leftarrow U^{(i)T}A^{(i)}U^{(i)}$
8. *partition* $\widehat{A^{(i)}} = \begin{bmatrix} \widehat{A^{(i)}}_{1,1} & \widehat{A^{(i)}}_{1,2} \\ \widehat{A^{(i)}}_{2,1} & \widehat{A^{(i)}}_{2,2} \end{bmatrix}$
9. $A^{(i+1)} \leftarrow \widehat{A^{(i)}}_{2,2}$
10. $B^{(i+1)} \leftarrow \widehat{A^{(i)}}_{2,1}$
11. $i \leftarrow i + 1$
12. $k_i \leftarrow \operatorname{rank}\left(B^{(i)}\right)$
13. **end while**
14. $b \leftarrow i$
15. *let* $P^{(b)}$ *be a symbol for* $P^{(b)} > 0$
16. $X^{(b)}_0 \leftarrow \frac{1}{2}\left(I_{n_b} + A^{(b)}P^{(b)} + P^{(b)}A^{(b)T}\right)\left(P^{(b)}\right)^{-1}$
17. *let* $W^{(b)}$ *be a symbol for a matrix satisfying* $W^{(b)T} = -W^{(b)}$
18. $X^{(b)} \leftarrow X^{(b)}_0 + W^{(b)}\left(P^{(b)}\right)^{-1}$
19. *let* $F^{(b)}$ *be a symbol for an arbitrary matrix*
20. $K^{(b)} \leftarrow B^{(b)+}X^{(b)} + L_{B^{(b)}}F^{(b)}$
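The while-loop of Algorithm 1 (the staircase-like reduction) can be sketched as follows; `reduce_pair` is a hypothetical helper name, and the orthogonal $U^{(i)}$ is obtained from the eigendecomposition of the symmetric projection $B^{(i)}B^{(i)+}$, whose eigenvalues are 0 and 1:

```python
import numpy as np

def reduce_pair(A, B):
    """One reduction step: block-diagonalize B B^+ with an orthogonal U and
    return (U, A_next, B_next, k) where A_next = Ahat[k:, k:], B_next = Ahat[k:, :k]."""
    n = A.shape[0]
    Pi = B @ np.linalg.pinv(B)          # orthogonal projection onto range(B)
    w, V = np.linalg.eigh(Pi)           # eigenvalues are 0/1
    U = V[:, np.argsort(-w)]            # eigenvalue-1 vectors first: U^T Pi U = bdiag(I_k, 0)
    k = int(round(w.sum()))             # k = rank(B) = trace of the projection
    Ahat = U.T @ A @ U
    return U, Ahat[k:, k:], Ahat[k:, :k], k

# toy controllable pair with a rank-deficient input map
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 2.0, 3.0]])
B = np.array([[0.0], [0.0], [1.0]])
U, A1, B1, k = reduce_pair(A, B)
assert np.allclose(U.T @ U, np.eye(3))          # U is orthogonal
assert k == 1 and A1.shape == (2, 2) and B1.shape == (2, 1)
```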

Thus,

$$\begin{aligned} J(x_0, K) &= \left\| P_{LQR}(K)^{\frac{1}{2}} x_0 \right\|_2^2 \le \\ &\le \left\| P_{LQR}(K)^{\frac{1}{2}} \right\|_2^2 \left\| x_0 \right\|_2^2 = \\ &= \left\| P_{LQR}(K) \right\|_2 \left\| x_0 \right\|_2^2 = \\ &= \sigma_{\max}\left( P_{LQR}(K) \right) \left\| x_0 \right\|_2^2, \end{aligned}$$

#### **Algorithm 2. An Algorithm For Optimal SOF's, Continued.**

1. **for** $i = b-1$ *downto* $0$ **do**
2. *let* $\widehat{\Delta P^{(i)}}_{1,1}$ *be a symbol for* $\widehat{\Delta P^{(i)}}_{1,1} > 0$
3. $\widehat{P^{(i)}} \leftarrow \begin{bmatrix} \widehat{\Delta P^{(i)}}_{1,1} + K^{(i+1)}P^{(i+1)}K^{(i+1)T} & -K^{(i+1)}P^{(i+1)} \\ -P^{(i+1)}K^{(i+1)T} & P^{(i+1)} \end{bmatrix}$
4. $\left(\widehat{P^{(i)}}\right)^{-1} \leftarrow \begin{bmatrix} \left(\widehat{\Delta P^{(i)}}_{1,1}\right)^{-1} & \left(\widehat{\Delta P^{(i)}}_{1,1}\right)^{-1}K^{(i+1)} \\ K^{(i+1)T}\left(\widehat{\Delta P^{(i)}}_{1,1}\right)^{-1} & \left(P^{(i+1)}\right)^{-1} + K^{(i+1)T}\left(\widehat{\Delta P^{(i)}}_{1,1}\right)^{-1}K^{(i+1)} \end{bmatrix}$
5. $P^{(i)} \leftarrow U^{(i)}\widehat{P^{(i)}}U^{(i)T}$
6. $\left(P^{(i)}\right)^{-1} \leftarrow U^{(i)}\left(\widehat{P^{(i)}}\right)^{-1}U^{(i)T}$
7. $X^{(i)}_0 \leftarrow \left(I_{n_i} + A^{(i)}P^{(i)} + P^{(i)}A^{(i)T}\right)\left(I_{n_i} - \frac{1}{2}B^{(i)}B^{(i)+}\right)\left(P^{(i)}\right)^{-1}$
8. *let* $W^{(i)}$ *be a symbol for a matrix satisfying* $W^{(i)T} = -W^{(i)}$ *and* $R_{B^{(i)}}W^{(i)} = \mathbf{0}$
9. *let* $L^{(i)}$ *be a symbol for an arbitrary matrix*
10. *let* $F^{(i)}$ *be a symbol for an arbitrary matrix*
11. $X^{(i)} \leftarrow X^{(i)}_0 + W^{(i)}\left(P^{(i)}\right)^{-1} + R_{B^{(i)}}L^{(i)}$
12. $K^{(i)} \leftarrow B^{(i)+}X^{(i)} + L_{B^{(i)}}F^{(i)}$
13. **end for**
14. *optimize* $f(K)$ *under the matrix equation* $K^{(0)}L_C = \mathbf{0}$ *and the constraints* $\widehat{\Delta P^{(0)}}_{1,1} > 0, \ldots, \widehat{\Delta P^{(b-1)}}_{1,1} > 0$, $P^{(b)} > 0$, *with respect to* $F^{(0)}, \ldots, F^{(b)}$, *to* $W^{(0)}, \ldots, W^{(b)}$, *and to* $\widehat{\Delta P^{(0)}}_{1,1}, \ldots, \widehat{\Delta P^{(b-1)}}_{1,1}$, $P^{(b)}$ *as variables*
15. **if** *a solution was found* **then**
16. **return** $K$
17. **else**
18. **return** "*no solution was found*"
19. **end if**

where $\sigma_{\max}\left(P_{LQR}(K)\right)$ is the largest eigenvalue of $P_{LQR}(K)$. Therefore,

$$\frac{J(x_0, K)}{\left\|x_0\right\|_2^2} \le \sigma_{\max}\left(P_{LQR}(K)\right). \tag{34}$$

Now, if $x_0$ is known then we can minimize $J(x_0, K)$ by minimizing $x_0^T P_{LQR}(K) x_0$. Otherwise, if we design for the worst case, we need to minimize $\sigma_{\max}\left(P_{LQR}(K)\right)$.
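The worst-case bound (34) is just the Rayleigh-quotient bound for the symmetric matrix $P_{LQR}(K)$; a quick check on an arbitrary stand-in positive definite matrix (illustrative data, not the chapter's):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
P = M @ M.T + 4.0 * np.eye(4)              # stand-in for P_LQR(K) > 0
x0 = rng.standard_normal(4)

J = x0 @ P @ x0                            # J(x0, K) = x0^T P_LQR(K) x0
sigma_max = np.linalg.eigvalsh(P).max()    # largest eigenvalue of the symmetric P
assert J <= sigma_max * (x0 @ x0) + 1e-12  # bound (34)
```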

In the following examples, we executed the algorithm on an Intel(R) Core(TM) i5-2400 CPU @ 3.10 GHz with 8.00 GB RAM, running the 64-bit Windows 10 operating system, on the MATLAB® platform (version R2018b), using the function fmincon.

Example 4.1 A system of the Boeing B-747 aircraft (the "AC5" system in [42]; see also [43]) is given by the general model (given here with slight changes):

$$\begin{cases} \frac{d}{dt}x(t) = Ax(t) + B_1 w(t) + Bu(t) \\ z(t) = C_1 x(t) + D_{1,1} w(t) + D_{1,2} u(t) \\ y(t) = Cx(t) + D_{2,1} w(t), \end{cases}$$

where *x* is the state, *w* is the noise, *u* is the control input, *z* is the regulated output, and *y* is the measurement, where:

$$A = \begin{bmatrix} 0.9801 & 0.0003 & -0.0980 & 0.0038 \\ -0.3868 & 0.9071 & 0.0471 & -0.0080 \\ 0.1591 & -0.0015 & 0.9691 & 0.0030 \\ -0.0198 & 0.0958 & 0.0021 & 1.0000 \end{bmatrix}, \quad B = \begin{bmatrix} -0.0001 & 0.0058 \\ 0.0250 & 0.0153 \\ 0.0012 & -0.0980 \\ 0.0015 & 0.0008 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},$$

$$B_1 = B, \quad C_1 = C, \quad D_{1,1} = \mathbf{0}_{2 \times 2}, \quad D_{2,1} = \mathbf{0}_{2 \times 2}, \quad D_{1,2} = \mathbf{0}_{2 \times 2},$$

with

$$\sigma(A) = \left\{ 0.978871342065923 \pm 0.128159143146289i,\ 0.899614120838404,\ 0.998943195029751 \right\}.$$

Note that $(A, B)$ and $\left(A^T, C^T\right)$ here are controllable. Let $u = u_r - Ky$, where $u_r$ is a reference input. Then, $u = u_r - KCx - KD_{2,1}w$ and substituting the last into the system yields the closed-loop system:

$$\begin{cases} \frac{d}{dt}\mathbf{x}(t) = (A - BK\mathbf{C})\mathbf{x}(t) + (B\_1 - BKD\_{2,1})w(t) + Bu\_r(t) \\\\ z(t) = (C\_1 - D\_{1,2}KC)\mathbf{x}(t) + (D\_{1,1} - D\_{1,2}KD\_{2,1})w(t) + D\_{1,2}u\_r(t), \end{cases}$$

where the behavior of *z* is of our interest. Note that we actually have:

$$\begin{cases} \frac{d}{dt}\mathbf{x}(t) = (A - BK\mathbf{C})\mathbf{x}(t) + B\boldsymbol{w}(t) + B\boldsymbol{u}\_r(t),\\ \boldsymbol{z}(t) = \mathbf{C}\mathbf{x}(t) = \mathbf{y}(t). \end{cases}$$

For stabilization via a SOF with minimal Frobenius norm, we need to minimize $f(K) = \|K\|_F$. For the LQR problem, we need to minimize $f(K) = x_0^T P_{LQR}(K) x_0$ when $x_0$ is known, and to minimize $f(K) = \sigma_{\max}\left(P_{LQR}(K)\right)$ when $x_0$ is unknown, where $P_{LQR}(K)$ is given by (32). For the $H_\infty$ and the $H_2$ problems, we need to minimize $f(K) = \left\|T_{w,z}(s)\right\|_{H_\infty}$ and $f(K) = \left\|T_{w,z}(s)\right\|_{H_2}$, respectively, where:

$$\begin{split} T_{w,z}(s) &= \left(D_{1,1} - D_{1,2}KD_{2,1}\right) + \left(C_1 - D_{1,2}KC\right)\left(sI - A + BKC\right)^{-1}\left(B_1 - BKD_{2,1}\right) = \\ &= C\left(sI - A + BKC\right)^{-1}B. \end{split}$$

These problems need to be solved under the constraint that $A - BKC$ is stable, i.e., that $K = K^{(0)}C^+ + GR_C$, where $K^{(0)} \in \mathcal{K}^{(0)}\left(P^{(0)}\right)$ for some $P^{(0)} \in \mathcal{P}^{(0)}$, such that $K^{(0)}L_C = \mathbf{0}$.

Applying the algorithm, we obtained:

$$U^{(0)} = \begin{bmatrix} -0.063882699439918 & 0 & -0.997957414277919 & 0 \\ 0.01219587562698 & 0.998643130545343 & -0.000780700106257 & -0.050621625208610 \\ 0.997882828566422 & -0.012222870716218 & -0.063877924951025 & 0.000269421014890 \\ 0.000348971897020 & 0.05062113481318 & -0.00002238894237 & 0.99871367304654 \end{bmatrix},$$

$$A^{(1)} = \begin{bmatrix} 0.98365083853766 & -0.003772277911855\\ 0.0000914909905229 & 0.99495907227612 \end{bmatrix}, B^{(1)} = \begin{bmatrix} 0.09883972659475 & -0.001544979864136\\ 0.0009939225045070 & 0.10024897815585 \end{bmatrix}$$

The while-loop stops because $B^{(1)}B^{(1)+} = I_2$. We have

*<sup>B</sup>*ð Þþ <sup>0</sup> <sup>¼</sup> �0*:*386571795892900 33*:*468356299440529 5*:*629731696802776 1*:*694878880538571 *:*696955535886478 0*:*442712098375399 �10*:*893875111014371 0*:*025378383262705 *<sup>B</sup>*ð Þþ<sup>1</sup> <sup>¼</sup> *:*111382188357680 0*:*155830743345230 �0*:*094732770050116 9*:*973704058215121 *LB*ð Þ <sup>0</sup> ¼ 02, *RC*ð Þ <sup>0</sup> ¼ 02, *LB*ð Þ<sup>1</sup> ¼ 02, *RB*ð Þ <sup>0</sup> ¼ *:*995919000712269 0*:*000779105459367 0*:*063747448813564 0*:*000022293265130 *:*000779105459367 0*:*002563158431417 0*:*000036230973158 �0*:*050556704127861 *:*063747448813564 0*:*000036230973158 0*:*004080461883732 0*:*000270502543607 *:*000022293265130 �0*:*050556704127861 0*:*000270502543607 0*:*997437378972582 , *RB*ð Þ<sup>1</sup> ¼ 02 *<sup>I</sup>*<sup>4</sup> � <sup>1</sup> *<sup>B</sup>*ð Þ <sup>0</sup> *<sup>B</sup>*ð Þþ <sup>0</sup> <sup>¼</sup> *:*997959500356135 0*:*000389552729683 0*:*031873724406782 0*:*000011146632565 *:*000389552729683 0*:*501281579215708 0*:*000018115486579 �0*:*025278352063931 *:*031873724406782 0*:*000018115486579 0*:*502040230941866 0*:*000135251271804 *:*000011146632565 �0*:*025278352063931 0*:*000135251271804 0*:*998718689486291 *<sup>I</sup>*<sup>2</sup> � <sup>1</sup> *<sup>B</sup>*ð Þ<sup>1</sup> *<sup>B</sup>*ð Þþ<sup>1</sup> <sup>¼</sup> <sup>1</sup> *I*2*:*

Now, we parameterize all the matrices $K^{(0)}$ such that $A^{(0)} - B^{(0)}K^{(0)}$ is stable. Let $P^{(1)} = \begin{bmatrix} p_1 & p_2 \\ p_2 & p_3 \end{bmatrix}$, where $p_1, d_1 := p_1p_3 - p_2^2 > 0$. Let $w_1$ be arbitrary and let $W^{(1)} = \begin{bmatrix} 0 & w_1 \\ -w_1 & 0 \end{bmatrix}$. Let

$$S^{(1)} = B^{(1)+}\left(\frac{1}{2}\left(I_2 + A^{(1)}P^{(1)} + P^{(1)}A^{(1)T}\right) + W^{(1)}\right).$$

Then

$$K^{(1)} = \mathcal{S}^{(1)} \left( P^{(1)} \right)^{-1}.$$
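As a numerical check of this stage (a sketch using the $A^{(1)}$, $B^{(1)}$ printed above and the simple admissible choice $p_1 = p_3 = 1$, $p_2 = 0$, $w_1 = 0$): since $B^{(1)}$ is invertible, the construction forces $E^{(1)}P^{(1)} + P^{(1)}E^{(1)T} = -I$, so $E^{(1)} = A^{(1)} - B^{(1)}K^{(1)}$ is stable.

```python
import numpy as np

A1 = np.array([[0.98365083853766, -0.003772277911855],
               [0.0000914909905229, 0.99495907227612]])
B1 = np.array([[0.09883972659475, -0.001544979864136],
               [0.0009939225045070, 0.10024897815585]])
P1 = np.eye(2)            # p1 = p3 = 1, p2 = 0
W1 = np.zeros((2, 2))     # w1 = 0

S1 = np.linalg.pinv(B1) @ (0.5 * (np.eye(2) + A1 @ P1 + P1 @ A1.T) + W1)
K1 = S1 @ np.linalg.inv(P1)
E1 = A1 - B1 @ K1
# E1 P1 + P1 E1^T = -I by construction, hence E1 is stable
assert np.allclose(E1 @ P1 + P1 @ E1.T, -np.eye(2))
assert np.linalg.eigvals(E1).real.max() < 0
```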


$$\text{Let } \widehat{\Delta P^{(0)}}\_{1,1} = \begin{bmatrix} p\_4 & p\_5 \\ p\_5 & p\_6 \end{bmatrix}, \text{ where } p\_4, d\_2 \coloneqq p\_4 p\_6 - p\_5^2 > 0. \text{ Then,}$$

$$P^{(0)} = U^{(0)} \begin{bmatrix} \widehat{\Delta P^{(0)}}_{1,1} + K^{(1)} P^{(1)} K^{(1)T} & -K^{(1)} P^{(1)} \\ -P^{(1)} K^{(1)T} & P^{(1)} \end{bmatrix} U^{(0)T} = U^{(0)} \begin{bmatrix} \widehat{\Delta P^{(0)}}_{1,1} + S^{(1)} K^{(1)T} & -S^{(1)} \\ -S^{(1)T} & P^{(1)} \end{bmatrix} U^{(0)T},$$

and

$$\left(P^{(0)}\right)^{-1} = U^{(0)} \begin{bmatrix} \left(\widehat{\Delta P^{(0)}}_{1,1}\right)^{-1} & \left(\widehat{\Delta P^{(0)}}_{1,1}\right)^{-1} K^{(1)} \\ K^{(1)T} \left(\widehat{\Delta P^{(0)}}_{1,1}\right)^{-1} & \left(P^{(1)}\right)^{-1} + K^{(1)T} \left(\widehat{\Delta P^{(0)}}_{1,1}\right)^{-1} K^{(1)} \end{bmatrix} U^{(0)T}.$$

Let

$$\mathcal{S}^{(0)} = \mathcal{B}^{(0)+} \left( I\_4 + A^{(0)} P^{(0)} + P^{(0)} A^{(0)T} \right) \left( I\_4 - \frac{1}{2} \mathcal{B}^{(0)} \mathcal{B}^{(0)+} \right).$$

Then

$$\boldsymbol{K}^{(0)} = \mathcal{S}^{(0)} \left( \boldsymbol{P}^{(0)} \right)^{-1}.$$

Note that $W^{(0)T} = -W^{(0)}$, $R_{B^{(0)}}W^{(0)} = \mathbf{0}$ implies that $W^{(0)} = \mathbf{0}_4$. We have completed the parametrization of all the SFs of the system, where the parameters $W^{(1)}$, $P^{(1)} > 0$, and $\widehat{\Delta P^{(0)}}_{1,1} > 0$ are free.

Regarding the optimization stage, we obtained the following results: starting from the point (feasible for SF but not feasible for SOF):

$$\left[\, p_1 \;\; p_2 \;\; p_3 \;\; p_4 \;\; p_5 \;\; p_6 \;\; w_1 \,\right] = \left[\, 750 \;\; 0 \;\; 750 \;\; 750 \;\; 0 \;\; 750 \;\; 0 \,\right],$$

in CPU-time $= 0.59375$ [sec], the fmincon function (with the interior-point option and the default optimization parameters) converged to the optimal point

$$\left[\, p_1 \;\; p_2 \;\; p_3 \;\; p_4 \;\; p_5 \;\; p_6 \;\; w_1 \,\right] = 10^7 \cdot \left[\, 0.009103654227197 \;\; 0.000105735816664 \;\; 0.006486122495094 \;\; 1.912355216342858 \;\; 0.057053301125719 \;\; 1.792930229298237 \;\; {-0.000647092932030} \,\right],$$

resulting in the following optimal Frobenius-norm SF and SOF:

$$K^{(0)} = 10^3 \cdot \begin{bmatrix} -0.157533999747776 & -0.0000000000000000 & -0.000000000000000 & 1.27376653891793 \\ 0.332659954180600 & -0.000000000000000 & -0.000000000000000 & 0.00065986128882 \end{bmatrix}$$

$$K = 10^3 \cdot \begin{bmatrix} -0.15753399974776 & 1.27376653891793 \\ 0.332659954180600 & 0.000655866128882 \end{bmatrix},$$

with k k *<sup>K</sup> <sup>F</sup>* <sup>¼</sup> <sup>1</sup>*:*<sup>325888763265586</sup> � <sup>10</sup><sup>3</sup> . The resulting closed-loop eigenvalues are:

$$\sigma\left(A^{(0)} - B^{(0)}KC^{(0)}\right) = \left\{ -0.000004083911866 \pm 1.423412274467895i,\ -0.000005220069661 \pm 1.681989978268793i \right\}.$$

For comparison, in this (small) example we had seven scalar indeterminates, four scalar equations, and four scalar inequalities, while by the BMI method $\left(A^{(0)} - B^{(0)}KC^{(0)}\right)P + P\left(A^{(0)} - B^{(0)}KC^{(0)}\right)^T < 0$, $P > 0$, we would have 14 scalar indeterminates and eight scalar inequalities. This shows the potential of the method in reducing the number of variables and inequalities/equations, thus enabling it to deal efficiently with larger problems. Moreover, the method removes the coupling of $P$ and $K$, in the sense that $K$ now depends on $P$ while the dependence of $P$ on $K$ has been removed, thus making the problem more relaxed.

**Figures 1** and **2** show the impulse response and the step response of the closed-loop system, in terms of the regulated output $z = y$, where $w = 0$ and $u_r$ is the Dirac delta function or the unit-step function, respectively. While the amplitudes seem to be reasonable, the settling time of order $10^5$ seems unreasonable. This happens because lowering the SOF norm pushes the closed-loop eigenvalues toward the imaginary axis, as can be seen from the dense oscillations. We therefore must set a barrier on the abscissa of the closed-loop eigenvalues as a constraint. Note, however, that as a starting point for other optimization keys, where we need any stabilizing SOF that we can get, the above SOF might be sufficient.

**Figure 1.** *Impulse response of the closed loop with the minimal-norm SOF.*

*On Parametrizations of State Feedbacks and Static Output Feedbacks and Their Applications DOI: http://dx.doi.org/10.5772/intechopen.101176*

**Figure 2.** *Step response of the closed loop with the minimal-norm SOF.*

Regarding the LQR functional with $Q = I$, $R = I$, starting from:

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = \begin{bmatrix} 300 & 0 & 300 & 300 & 0 & 300 & 15 \end{bmatrix},$$

in CPU-Time = 0.90625 [sec] the fmincon function has converged to the optimal point

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = 10^2 \cdot \begin{bmatrix} 0.010603084813420 & -0.002595549083521 & 0.009614240830002 & 1.009076872432389 & -0.832493933321482 & 1.812148701422345 & 0.001750170924431 \end{bmatrix},$$

resulting in the following optimal SF and SOF:

$$K^{(0)} = 10^3 \cdot \begin{bmatrix} -1.466094499085196 & 0.000000000000002 & -0.000000000000000 & 2.73168259944369 \\ 0.703352989722306 & 0.00000000000000 & 0.000000000000000 & -0.14910163495946 \end{bmatrix}$$
 
$$K = 10^3 \cdot \begin{bmatrix} -1.466094499085196 & 2.731682599443639 \\ 0.703352998722306 & -0.149101634953946 \end{bmatrix},$$

with $\|K\|_F = 3.182523976577676 \times 10^3$ and LQR worst-case functional value $\sigma_{\max}\left(P_{LQR}(K)\right) = 1.981249586261248 \times 10^6$. The resulting closed-loop eigenvalues are:

$$
\sigma \left( A^{(0)} - B^{(0)} K C^{(0)} \right) = \begin{cases} -1.723892066022943 \pm 1.346849871126735i \\ -0.450106460827155 \pm 1.711908728695912i \end{cases}.
$$

The entries of $z = y$ under $w = 0$ and $u_r = 0$, when the closed-loop system is driven by the initial condition $x_0 = \begin{bmatrix} 1 & 1 & 1 & 1 \end{bmatrix}^T$, are depicted in **Figure 3**. The results might not be satisfactory regarding the amplitudes or the settling time; however, as a starting point for other optimization keys, where we need any stabilizing SOF that we can get, the above SOF might be sufficient.
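For context on how far an SOF can be from the unconstrained optimum, the full-state LQR gain is available in closed form from the continuous-time algebraic Riccati equation. The sketch below uses SciPy and a stand-in 2×2 system (the chapter's $A^{(0)}, B^{(0)}$ are not reproduced here):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Full-state LQR baseline with Q = I, R = I via the continuous-time algebraic
# Riccati equation. The 2x2 system below is a stand-in, not the chapter's example.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

P = solve_continuous_are(A, B, Q, R)      # stabilizing Riccati solution
K = np.linalg.solve(R, B.T @ P)           # optimal SF gain K = R^{-1} B^T P

# The LQR state feedback is guaranteed stabilizing.
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
```

Any SOF optimized for the same $Q, R$ can only do as well as this unconstrained gain, which makes the Riccati solution a useful reference value.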

For the problem of pole placement via SOF, assume that the target is to place the closed-loop eigenvalues as close as possible to $-10 \pm i$, $-1 \pm 0.1i$. Then, starting from:

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 1 & 0 \end{bmatrix},$$

in CPU-Time = 1.828125 [sec] the fmincon function has converged to the optimal point

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = 10^3 \cdot \begin{bmatrix} 0.017543717092354 & -0.025265281638022 & 0.040984285298812 & 1.855250747170489 & -3.397079720955738 & 6.251476924192442 & 0.013791203700439 \end{bmatrix},$$
resulting in the following optimal SF and SOF:

$$K^{(0)} = 10^5 \cdot \begin{bmatrix} -1.014246236498231 & 0.000000000000000 & 0.000000000000000 & 0.708417204523523 \\ -0.177349433397692 & 0.000000000000000 & 0.000000000000000 & 0.124922842398310 \end{bmatrix}$$

$$K = 10^5 \cdot \begin{bmatrix} -1.014246236498231 & 0.708417204523523 \\ -0.177349433397692 & 0.124922842398310 \end{bmatrix},$$

**Figure 3.** *Response of initial condition of the closed loop with the LQR SOF.*


with $\|K\|_F = 1.256029021159584 \times 10^5$. The resulting closed-loop eigenvalues are:

$$
\sigma \left( A^{(0)} - B^{(0)} K C^{(0)} \right) = \begin{cases} -9.682859336407926 \pm 0.940019732471932i \\ -0.157090195949127 \pm 1.083963060760387i \end{cases}.
$$

**Figures 4** and **5** depict the impulse response and the step response of the closed loop with the pole-placement SOF. The amplitudes look reasonable but the settling time might be unsatisfactory.

Regarding the *H*∞-norm of the closed loop, starting from:

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 1 & 0 \end{bmatrix},$$

in CPU-Time = 0.703125 [sec] the fmincon function has converged to the optimal point

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = \begin{bmatrix} 0.888221316790683 & 0.005450463395221 & 0.509688006534611 & 0.351367770700493 & 0.108479534948988 & 2.135683618863295 & -0.023286321711901 \end{bmatrix},$$

resulting in the following optimal SF and SOF:

$$K^{(0)} = 10^4 \cdot \begin{bmatrix} -0.434126348764817 & -0.0000000000000000 & 0.000000000000000 & 6.23042514028776 \\ 6.139781209081431 & 0.000000000000000 & -0.000000000000000 & 0.41799456159739 \end{bmatrix}$$
 
$$K = 10^4 \cdot \begin{bmatrix} -0.434126348764817 & 6.230425140286776 \\ 6.139781209081431 & 0.41799456199739 \end{bmatrix}.$$

**Figure 4.** *Impulse response of the closed loop with the pole-placement SOF.*

**Figure 5.** *Step response of the closed loop with the pole-placement SOF.*

with $\|K\|_F = 8.768026908280017 \times 10^4$ and $\|T_{w,z}(s)\|_{H_\infty} = 1.631954397074613 \times 10^{-5}$. The resulting closed-loop eigenvalues are:

$$
\sigma \left( A^{(0)} - B^{(0)} K C^{(0)} \right) = \begin{Bmatrix} -356.9401845964764 \\ -90.9530886951882 \\ -1.0236129938218 \\ -0.5685837870690 \end{Bmatrix}.
$$

The simulation results of the closed-loop system are given in **Figure 6**, where *w* is a normally distributed random disturbance with each entry $\mathcal{N}(0, 10^6)$-distributed. The maximum absolute values of the entries of $z = y$ are

$$\begin{bmatrix} 0.034719714201842 & 0.014588756724050 \end{bmatrix}^T,$$

and the maximum absolute values of the entries of *x* are

$$\begin{bmatrix} 0.034719714201842 & 0.279876853192629 & 0.549124101316666 & 0.014588756724050 \end{bmatrix}^T.$$

The results here are good.

Regarding the *H*2-norm of the closed loop, starting from:

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 1 & 1 & 0 & 1 & 0 \end{bmatrix},$$


**Figure 6.** *Response of the closed loop with the optimal H∞-norm SOF to a 10<sup>6</sup>-variance zero-mean normally distributed random disturbance.*

in CPU-Time = 8.390625 [sec] the fmincon function has converged to the optimal point

$$\begin{bmatrix} p_1 & p_2 & p_3 & p_4 & p_5 & p_6 & w_1 \end{bmatrix} = \begin{bmatrix} 0.891178477642138 & 0.006639774876451 & 0.508684007482598 & 0.000038288661546 & 0.000053652014908 & 0.000140441315775 & -0.021795268637180 \end{bmatrix},$$

resulting in the following optimal SF and SOF:

$$K^{(0)} = 10^9 \cdot \begin{bmatrix} 0.835262537587536 & 0.000000000003900 & 0.000000000012416 & 1.804218059606446 \\ 0.194244874676116 & -0.000000000002080 & 0.000000000007286 & 0.578593785387393 \end{bmatrix}$$

$$K = 10^9 \cdot \begin{bmatrix} 0.835262537587536 & 1.804218059606446 \\ 0.194244874676116 & 0.578593785387393 \end{bmatrix},$$

with $\|K\|_F = 2.895579903518702 \times 10^9$ and $\|T_{w,z}(s)\|_{H_2} = 2.289352128445973 \times 10^{-6}$. The resulting closed-loop eigenvalues are:

$$\sigma \left( A^{(0)} - B^{(0)} K C^{(0)} \right) = 10^6 \cdot \begin{Bmatrix} -8.825067937292802 \\ -1.087222799352728 \\ -0.000000561491544 \\ -0.000000982645227 \end{Bmatrix}.$$

The simulation results of the closed-loop system are given in **Figure 7**, where *w* is a normally distributed random disturbance with each entry $\mathcal{N}(0, 10^6)$-distributed. The maximum absolute values of the entries of $z = y$ are

**Figure 7.** *Response of the closed loop with the optimal H2-norm SOF to a 10<sup>6</sup>-variance zero-mean normally distributed random disturbance.*

$$10^{-5} \cdot \begin{bmatrix} 0.793706028985933 & 0.829879751812045 \end{bmatrix}^T,$$

and the maximum absolute values of the entries of *x* are

$$10^{-5} \cdot \begin{bmatrix} 0.7937060289859 & 16.3502413073901 & 12.3720896708621 & 0.8298797518120 \end{bmatrix}^T.$$

The results here are excellent.

We conclude that the best performance of the closed-loop system is achieved with the optimal *H*2-norm SOF; however, since the Frobenius-norm of the SOF controller is high, the cost of construction and of operation of the SOF controller might be high, and there are no "free meals." Note also that by minimizing the SOF Frobenius-norm, the eigenvalues of the closed loop tend to get closer to the imaginary axis (to the region of lower degree of stability), while by minimizing the *H*2-norm, the eigenvalues of the closed loop tend to escape from the imaginary axis (to the region of higher degree of stability). These are conflicting demands, and therefore, one should use some combination of the related key functions or use some multiobjective optimization algorithm in order to get the best SOF in some or all of the needed key performance measures (**Figure 7**).
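The trade-off just described can be sketched as a scalarized objective: the spectral abscissa of the closed loop (with a barrier, echoing the earlier remark about constraining the abscissa) plus a weighted Frobenius-norm term. The 2×2 system, the weight 0.1 and the barrier level −1 below are illustrative assumptions, and the optimization runs over the SOF entries directly rather than over the chapter's parametrization:

```python
import numpy as np
from scipy.optimize import minimize

# Scalarized trade-off between degree of stability and feedback norm.
# Stand-in 2x2 system; C = I makes the SOF act as an SF in this toy case.
A = np.array([[0.0, 1.0], [2.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)

def key(kvec, w=0.1):
    K = kvec.reshape(1, 2)
    alpha = np.max(np.linalg.eigvals(A - B @ K @ C).real)  # spectral abscissa
    return max(alpha, -1.0) + w * np.linalg.norm(K)        # barrier + norm cost

res = minimize(key, x0=np.array([3.0, 1.0]), method="Nelder-Mead")
K = res.x.reshape(1, 2)

# The initial guess is stabilizing and Nelder-Mead never increases the best
# objective value, so the returned K remains stabilizing.
assert np.max(np.linalg.eigvals(A - B @ K @ C).real) < 0
```

Increasing the weight pushes the optimizer toward small gains and hence toward the imaginary axis, which is exactly the conflict noted above.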

The following counterintuitive example shows that the SOF problem can be unsolvable (or hard to solve) even for small systems. The example shows how the nonexistence of a stabilizing SOF can be detected by the method:

Example 4.2 Let

$$\mathcal{A}^{(0)} = \begin{bmatrix} \mathbf{1} & \mathbf{1} \\ \mathbf{0} & \mathbf{1} \end{bmatrix}, \mathcal{B}^{(0)} = \begin{bmatrix} \mathbf{1} \\ \mathbf{1} \end{bmatrix}, \mathcal{C}^{(0)} = \begin{bmatrix} \mathbf{1} & \mathbf{1} \end{bmatrix}.$$

*On Parametrizations of State Feedbacks and Static Output Feedbacks and Their Applications DOI: http://dx.doi.org/10.5772/intechopen.101176*

Applying the algorithm we have

$$U^{(0)} = \frac{1}{\sqrt{2}} \begin{bmatrix} \mathbf{1} & -\mathbf{1} \\ \mathbf{1} & \mathbf{1} \end{bmatrix}, A^{(1)} = \frac{1}{2}, B^{(1)} = -\frac{1}{2}.$$
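This reduction step is easy to verify numerically. The sketch below (NumPy assumed) builds $U^{(0)}$ as displayed, checks $U^{(0)T} B^{(0)} B^{(0)+} U^{(0)} = \mathrm{bdiag}(I_1, 0)$, and reads off $A^{(1)}$ and $B^{(1)}$ from the partition of $U^{(0)T} A^{(0)} U^{(0)}$:

```python
import numpy as np

# Numerical check of the first reduction step of Example 4.2.
A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
B0 = np.array([[1.0], [1.0]])
U0 = (1 / np.sqrt(2)) * np.array([[1.0, -1.0], [1.0, 1.0]])

BBp = B0 @ np.linalg.pinv(B0)             # the projector B B^+
Ahat = U0.T @ A0 @ U0                     # partition: [[A11, A12], [A21, A22]]
A1, B1 = Ahat[1, 1], Ahat[1, 0]           # A^(1) = A22, B^(1) = A21

assert np.allclose(U0.T @ BBp @ U0, np.diag([1.0, 0.0]))  # bdiag(I_1, 0)
assert np.isclose(A1, 0.5) and np.isclose(B1, -0.5)       # matches the display
```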

The "while-loop" stops because $B^{(1)} B^{(1)+} = 1$. Let $P^{(1)} = p_1$, where $p_1 > 0$. Then,

$$K^{(1)} = B^{(1)+} \left(\frac{\mathbf{1}}{2} \left(\mathbf{1} + A^{(1)}P^{(1)} + P^{(1)}A^{(1)T}\right)\right) \left(P^{(1)}\right)^{-1} = -\frac{\left(\mathbf{1} + p\_1\right)}{p\_1}.$$

Let $\Delta \widehat{P^{(0)}}_{1,1} = p_2$, where $p_2 > 0$. Then

$$\begin{split} P^{(0)} &= U^{(0)} \begin{bmatrix} \Delta \widehat{P^{(0)}} \, \_{1,1} + K^{(1)} P^{(1)} K^{(1)T} & -K^{(1)} P^{(1)} \\ & -P^{(1)} K^{(1)T} & P^{(1)} \end{bmatrix} U^{(0)T} = \\ &= \frac{1}{2p\_1} \begin{bmatrix} p\_2 p\_1 + \mathbf{1} & p\_2 p\_1 + \mathbf{1} + 2p\_1 \\ p\_2 p\_1 + \mathbf{1} + 2p\_1 & p\_2 p\_1 + \mathbf{1} + 4p\_1 + 4p\_1^2 \end{bmatrix}. \end{split}$$

We therefore have:

$$\begin{split} K^{(0)} &= B^{(0)+} \left( I_2 + A^{(0)} P^{(0)} + P^{(0)} A^{(0)T} \right) \left( I_2 - \frac{1}{2} B^{(0)} B^{(0)+} \right) \left( P^{(0)} \right)^{-1} = \\ &= \frac{1}{4p_2 p_1^3} \begin{bmatrix} 4p_2 p_1^2 + 4p_1 + 6p_1^2 + 4p_1^3 + p_2^2 p_1^2 + 2p_2 p_1 + 4p_2 p_1^3 + 1 \\ -2p_2 p_1^2 - 2p_1 - 2p_1^2 - p_2^2 p_1^2 - 2p_2 p_1 - 1 + 4p_2 p_1^3 \end{bmatrix}^T, \end{split}$$

as the free parametrization of all the state feedbacks for which $A^{(0)} - B^{(0)} K^{(0)}$ is stable, for any choice of $p_1, p_2 > 0$. Now $LC^{(0)} = \frac{1}{2}\begin{bmatrix} 1 & -1 \\ -1 & 1 \end{bmatrix}$, and the equations $K^{(0)} L C^{(0)} = 0$ are equivalent to the single equation:

$$\frac{1}{4p_2p_1^3} \left( p_2^2p_1^2 + p_2\left(2p_1 + 3p_1^2\right) + \left(2p_1^3 + 4p_1^2 + 3p_1 + 1\right) \right) = 0.$$

Assuming $p_1, p_2 > 0$, the last equation implies that

$$p\_2 = \frac{-\left(2p\_1 + 3p\_1^2\right) \pm \sqrt{\left(2p\_1 + 3p\_1^2\right)^2 - 4p\_1^2\left(2p\_1^3 + 4p\_1^2 + 3p\_1 + 1\right)}}{2p\_1^2},$$

leading to a contradiction with $p_2$ being a real positive number. Hence, no stabilizing SOF exists for this system.
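The example can also be checked numerically. The sketch below rebuilds $P^{(0)}$ from the free parameters and evaluates $K^{(0)}$ by the formula above; for every $p_1, p_2 > 0$ the resulting SF is stabilizing, even though no choice satisfies the SOF constraint:

```python
import numpy as np

# Sanity check of the Example 4.2 parametrization: for any p1, p2 > 0 the
# parametrized K^(0) stabilizes A^(0) - B^(0) K^(0).
A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
B0 = np.array([[1.0], [1.0]])
Bp = np.linalg.pinv(B0)

def K0(p1, p2):
    # P^(0) built from the free parameters p1, p2 as in the example.
    P0 = (1 / (2 * p1)) * np.array(
        [[p2 * p1 + 1, p2 * p1 + 1 + 2 * p1],
         [p2 * p1 + 1 + 2 * p1, p2 * p1 + 1 + 4 * p1 + 4 * p1 ** 2]])
    M = np.eye(2) + A0 @ P0 + P0 @ A0.T
    return Bp @ M @ (np.eye(2) - 0.5 * B0 @ Bp) @ np.linalg.inv(P0)

assert np.allclose(K0(1.0, 1.0), [[6.5, -1.5]])       # hand-checked value
for p1, p2 in [(1.0, 1.0), (0.3, 2.0), (5.0, 0.1)]:
    eigs = np.linalg.eigvals(A0 - B0 @ K0(p1, p2))
    assert np.all(eigs.real < 0)                      # stabilizing, as claimed
```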

#### **5. A parametrization for exact pole assignment via SFs**

This section is based on the results reported in [34], where the proofs of the following lemma and theorem can be found. The aim of this section is to introduce a parametrization of all the SFs for the exact pole-assignment problem, when the set of eigenvalues can be given as free parameters (under some reasonable assumptions). This is done as part of the research of the problem of parametrization of all the SOFs for pole assignment. Note that the problem of exact pole assignment by SOFs is NP-hard (see [3]), meaning that an efficient algorithm for the problem probably does not exist, and therefore an effective description of the set of all solutions might not exist either. Also note that with SOFs, the feasible set Ω might exclude some open set from being a feasible set for the closed-loop spectrum (see [12]). These facts make the full aim very hard (if not impossible) to achieve. We therefore focus here on the problem of exact pole assignment via SFs.

Let the control system be given by:

$$\begin{cases} \Sigma(x(t)) = A x(t) + B u(t) \\ \quad\;\; y(t) = C x(t) \end{cases} \tag{35}$$

where $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{r \times n}$, $\Sigma(x(t)) = \frac{d}{dt}x(t)$ in the continuous-time context and $\Sigma(x(t)) = x(t+1)$ in the discrete-time context. We assume without loss of generality that $(A, B)$ is controllable. The problem of exact pole assignment by SF is defined as follows:

• (SF-EPA) Given a set $\Omega \subseteq \mathbb{C}$ with $|\Omega| = n$ (likewise in the discrete-time context), symmetric with respect to the real axis, find a state feedback $F \in \mathbb{R}^{m \times n}$ such that the closed-loop state-to-state matrix $E = A - BF$ has Ω as its complete set of eigenvalues, with their given multiplicities.

In [13], a closed form of all the exact pole-placement SFs is proved (up to a set of measure 0), based on Moore's method. In order to minimize the inaccuracy of the final placement of the eigenvalues and to minimize the Frobenius-norm of the feedback, a convex combination of the condition number of the similarity matrix and of the feedback norm was minimized. The parametrization proposed in [13] is based on the assumptions that there exists at least one real state feedback that leads to a diagonalizable state-to-state closed-loop matrix and that *B* is of full rank. A necessary condition for such an SF to exist is that the final multiplicity of any eigenvalue is less than or equal to $\mathrm{rank}(B)$. Here, we do not assume that *B* is of full rank, and we only assume that Ω contains a sufficient number of real eigenvalues. A survey of most of the methods for robust pole assignment via SFs or SOFs, and the formulation of these methods as optimization problems with necessary optimality conditions, is given in [44]. In [45], a performance comparison of most of the algorithmic methods for robust pole placement is given. A formulation of the general problem of robust exact pole assignment via SFs as an SDP problem with LMI-based linearization is introduced in [46], where the robustness is with respect to the condition number of the similarity matrix, minimized in order to hopefully reduce the inaccuracy of the final placement of the eigenvalues. Unfortunately, one probably cannot gain a parametric closed form of the SFs from such formulations. Moreover, the following proposed method is exact and therefore enables the use of the free parameters of the parametrization for other (and maybe more important) optimization purposes. Note that since the proposed method is exact, the closed-loop eigenvalues themselves can be inserted into the problem as parameters.
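For the condition-number-based robust pole placement discussed above, a readily available library baseline is SciPy's `place_poles`, which implements the KNV (Kautsky-Nichols-Van Dooren) and YT (Tits-Yang) methods and tries to keep the eigenvector matrix well conditioned. The system below is a stand-in with full-rank *B*, not one of the chapter's examples:

```python
import numpy as np
from scipy.signal import place_poles

# Robust pole placement via SF with SciPy (YT method by default).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 3.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
target = np.array([-1.0, -2.0, -3.0])

F = place_poles(A, B, target).gain_matrix  # sigma(A - B F) = target
assert np.allclose(np.sort(np.linalg.eigvals(A - B @ F).real),
                   np.sort(target), atol=1e-5)
```

Note that `place_poles` requires the multiplicity of each assigned pole to be at most $\mathrm{rank}(B)$, which is exactly the restriction that the proposed method lifts.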

A completely different notion of robustness with respect to pole placement is considered in the following works:

Robust pole placement in LMI regions and *H*<sup>∞</sup> design with pole placement in LMI regions are considered in [47, 48], respectively. An algorithm based on alternating projections is introduced in [15], which aims to solve efficiently the problem of pole placement via SOFs. A randomized algorithm for pole placement via SOFs with minimal norm, in nonconvex or unconnected regions, is considered in [20].


$$\text{Let } \Omega = \left\{ \underbrace{\alpha_1, \overline{\alpha_1}}_{c_1 \text{ times}}, \dots, \underbrace{\alpha_m, \overline{\alpha_m}}_{c_m \text{ times}}, \underbrace{\beta_1}_{r_1 \text{ times}}, \dots, \underbrace{\beta_\ell}_{r_\ell \text{ times}} \right\} \text{ be the intended}$$

closed-loop eigenvalues, where the *α*s denote the paired complex-conjugate eigenvalues (with nonzero imaginary part), the *β*s denote the real eigenvalues, and $2c_1, \dots, 2c_m, r_1, \dots, r_\ell$ denote their respective multiplicities, where $2\sum_{i=1}^{m} c_i + \sum_{j=1}^{\ell} r_j = n$. In the following, we say that the size of the set (actually, the multiset) Ω is *n* (counting multiplicities) and write $|\Omega| = n$. Note that $(A, B)$ is controllable if and only if $(A, BB^+)$ is controllable, and also note that $BB^+$ is a real symmetric matrix with eigenvalues in the set $\{0, 1\}$ and thus is an orthogonally diagonalizable matrix. Let *U* denote an orthogonal matrix such that:

$$
\widehat{B} = U^T B B^+ U = \begin{bmatrix} I_k & 0 \\ 0 & 0 \end{bmatrix} = \mathrm{bdiag}(I_k, 0), \tag{36}
$$

where $k = \mathrm{rank}(B) = \mathrm{rank}(BB^+) \geq 1$ since $(A, B)$ is controllable, and let $\widehat{A} = U^T A U = \begin{bmatrix} \widehat{A}_{1,1} & \widehat{A}_{1,2} \\ \widehat{A}_{2,1} & \widehat{A}_{2,2} \end{bmatrix}$ be partitioned accordingly. We cite here the following lemma, taken from [34], connecting the controllability of the given system with the controllability of its subsystem:

Lemma 5.1 In the notations above, $(A, BB^+)$ is controllable if and only if $\left(\widehat{A}_{2,2}, \widehat{A}_{2,1}\widehat{A}_{2,1}^{+}\right)$ is controllable.
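Since $BB^+$ is a symmetric projector, an orthogonal *U* satisfying (36) can be obtained from its eigendecomposition by ordering the eigenvalue-1 eigenvectors first. A small sketch (random full-rank *B* assumed for illustration):

```python
import numpy as np

# Construct an orthogonal U with U^T B B^+ U = bdiag(I_k, 0).
rng = np.random.default_rng(0)
n, m = 5, 2
B = rng.standard_normal((n, m))

P = B @ np.linalg.pinv(B)                 # orthogonal projector onto range(B)
w, V = np.linalg.eigh(P)                  # ascending eigenvalues, all in {0, 1}
U = V[:, ::-1]                            # eigenvalue-1 eigenvectors first
k = np.linalg.matrix_rank(B)

Bhat = U.T @ P @ U
assert np.allclose(Bhat, np.diag([1.0] * k + [0.0] * (n - k)), atol=1e-8)
```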

Again, we use the recursive controllable structure. Let $U^{(0)} = U$ and let $A^{(0)} = A$, $B^{(0)} = B$, $n_0 = n$, $k_0 = \mathrm{rank}\left(B^{(0)}\right)$. Similarly, let $U^{(1)}$ be an orthogonal matrix such that $U^{(1)T} B^{(1)} B^{(1)+} U^{(1)} = \mathrm{bdiag}(I_{k_1}, 0)$, where $B^{(1)} = \widehat{A}_{2,1}$. Let $A^{(1)} = \widehat{A}_{2,2}$, $n_1 = n_0 - k_0$, $k_1 = \mathrm{rank}\left(B^{(1)}\right)$. Now, Lemma 5.1 implies that $\left(A^{(1)}, B^{(1)}\right)$ is controllable since $\left(A^{(0)}, B^{(0)}\right)$ is controllable. Recursively, assume that the pair $\left(A^{(i)}, B^{(i)}\right)$ is controllable. Let $U^{(i)}$ be an orthogonal matrix such that $\widehat{B^{(i)}} = U^{(i)T} B^{(i)} B^{(i)+} U^{(i)} = \mathrm{bdiag}(I_{k_i}, 0)$, where $k_i \geq 1$ (since $\left(A^{(i)}, B^{(i)}\right)$ is controllable). Let $\widehat{A^{(i)}} = U^{(i)T} A^{(i)} U^{(i)} = \begin{bmatrix} \widehat{A^{(i)}}_{1,1} & \widehat{A^{(i)}}_{1,2} \\ \widehat{A^{(i)}}_{2,1} & \widehat{A^{(i)}}_{2,2} \end{bmatrix}$ be partitioned accordingly, with sizes $k_i \times k_i$ and $(n_i - k_i) \times (n_i - k_i)$ of the main block-diagonal blocks. Let $A^{(i+1)} = \widehat{A^{(i)}}_{2,2}$, $B^{(i+1)} = \widehat{A^{(i)}}_{2,1}$, $n_{i+1} = n_i - k_i$, $k_{i+1} = \mathrm{rank}\left(B^{(i+1)}\right)$. Then, Lemma 5.1 implies that $\left(A^{(i+1)}, B^{(i+1)}\right)$ is controllable. The recursion stops when $B^{(i)} B^{(i)+} = I_{k_i}$ for some $i = b$ (which we call the base case). Note that in the worst case, the recursion stops when the rank $k_b = 1$.

Theorem 5.1 In the above notations, assume that $\sum_{j=1}^{\ell} r_j \geq a$, where $a$ is the number of parity alternations in the sequence $\langle n_0, n_1, \dots, n_b \rangle$. Let $\Omega_0 = \Omega$. Then, there exists a sequence $\Omega_0 \supseteq \Omega_1 \supseteq \dots \supseteq \Omega_b$ of symmetric sets with sizes $|\Omega_i| = n_i$ (counting multiplicities), and there exist real state feedbacks $F_i = F_i(G_{i+1}, F_{i+1})$ such that $\sigma\left(A^{(i)} - B^{(i)} F_i\right) = \Omega_i$. Moreover, an explicit (recursive) formula for $F_i(G_{i+1}, F_{i+1})$ is given by:

$$\begin{cases} F_i = B^{(i)+} W^{(i)} \\ W^{(i)} = U^{(i)} \widehat{W}^{(i)} U^{(i)T} \\ \widehat{W}^{(i)} = \begin{bmatrix} \widehat{W}^{(i)}_{1,1} & \widehat{W}^{(i)}_{1,2} \\ 0 & 0 \end{bmatrix} \\ \widehat{W}^{(i)}_{1,1} = \widehat{A}^{(i)}_{1,1} + F_{i+1} \widehat{A}^{(i)}_{2,1} - G_{i+1} \\ \widehat{W}^{(i)}_{1,2} = \widehat{A}^{(i)}_{1,2} + F_{i+1} \widehat{A}^{(i)}_{2,2} - G_{i+1} F_{i+1}, \end{cases} \tag{37}$$

where *<sup>σ</sup> <sup>A</sup>*bð Þ*<sup>i</sup>* 2,2 � *<sup>A</sup>*bð Þ*<sup>i</sup>* 2,1*Fi*þ<sup>1</sup> � � <sup>¼</sup> *<sup>σ</sup> <sup>A</sup>*ð Þ *<sup>i</sup>*þ<sup>1</sup> � *<sup>B</sup>*ð Þ *<sup>i</sup>*þ<sup>1</sup> *Fi*þ<sup>1</sup> � � <sup>¼</sup> <sup>Ω</sup>*i*þ<sup>1</sup> and *Gi*þ<sup>1</sup> is arbitrary real matrix such that *σ*ð Þ¼ *Gi*þ<sup>1</sup> Ω*i*nΩ*i*þ1.

Example 5.1 Consider the problem of exact pole assignment via SF for the same system from Example 4.1. We therefore assume here that the full state is available for feedback control. Now, using the calculations from Example 4.1, we have $\langle n_0, n_1 \rangle = \langle 4, 2 \rangle$, implying that the number of parity alternations is $a = 0$. We therefore can assign by the method any symmetric set of eigenvalues to the closed loop. Let $\Omega_0 = \left\{\alpha, \overline{\alpha}, \beta, \overline{\beta}\right\}$ be the eigenvalues to be assigned, and let $\Omega_1 = \left\{\beta, \overline{\beta}\right\}$. Now,

$$F\_1 = B^{(1)+} \left( A^{(1)} - G\_2 \right),$$

where

$$G\_2 = \begin{bmatrix} \Re(\beta) & \Im(\beta) \\ -\Im(\beta) & \Re(\beta) \end{bmatrix},$$

and

$$F\_0 = \mathcal{B}^{(0)+} W^{(0)}\,,$$

where

$$\begin{aligned} W^{(0)} &= U^{(0)} \widehat{W}^{(0)} U^{(0)T} \\ \widehat{W}^{(0)} &= \begin{bmatrix} \widehat{W}^{(0)}_{1,1} & \widehat{W}^{(0)}_{1,2} \\ 0_2 & 0_2 \end{bmatrix} \\ \widehat{W}^{(0)}_{1,1} &= \widehat{A^{(0)}}_{1,1} + F_1 \widehat{A^{(0)}}_{2,1} - G_1 \\ \widehat{W}^{(0)}_{1,2} &= \widehat{A^{(0)}}_{1,2} + F_1 \widehat{A^{(0)}}_{2,2} - G_1 F_1 \\ G_1 &= \begin{bmatrix} \Re(\alpha) & \Im(\alpha) \\ -\Im(\alpha) & \Re(\alpha) \end{bmatrix}. \end{aligned}$$

We have completed the pole-assignment SF parametrization. As an application, assume that $\alpha = -10 + i$, $\beta = -1 + 0.1i$. Then,

$$F_0 = 10^3 \cdot \begin{bmatrix} -2.312944765727539 & 0.063274421635669 & -0.033598570868307 & 7.136847864481691 \\ 2.382051359668675 & -0.002249112006026 & 0.011460219946760 & 0.432788287638212 \end{bmatrix},$$

resulting with the closed-loop eigenvalues:

$$\sigma\left(A^{(0)} - B^{(0)}F_0\right) = \begin{cases} -10.00000000000000000 \pm 1.000000000000000i \\ -1.00000000000000003 \pm 0.09999999999999998i \end{cases}.$$

In our calculations, we used MATLAB®, which has a general precision of 15–17 significant digits in computing eigenvalues. Thus, we have almost no loss of digits by the method. For a comparison, see the last case in Example 4.1, and note that while exact pole assignment can be achieved by SF, in general, it cannot be achieved by SOF, because the latter is an NP-hard problem (see the introduction of this section). Even regional pole placement is hard to achieve by SOF, because of the nonconvexity of the SOF feasibility domain.
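Theorem 5.1's recursion can also be traced on a system small enough to check by hand: applying (37) with one recursion level ($b = 1$) to the pair $(A, B)$ of Example 4.2 (full state assumed available), with $\Omega_0 = \{-1, -2\}$ and $\Omega_1 = \{-2\}$:

```python
import numpy as np

# Theorem 5.1 on the pair (A, B) of Example 4.2; assign Omega_0 = {-1, -2}.
A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
B0 = np.array([[1.0], [1.0]])
U0 = (1 / np.sqrt(2)) * np.array([[1.0, -1.0], [1.0, 1.0]])  # U^T BB^+ U = bdiag(I_1, 0)

Ahat = U0.T @ A0 @ U0
A1, B1 = Ahat[1:, 1:], Ahat[1:, :1]       # reduced pair (A^(1), B^(1))

# Base case: B^(1) B^(1)+ = 1, so F_1 = B^(1)+ (A^(1) - G_2), sigma(G_2) = Omega_1.
G2 = np.array([[-2.0]])
F1 = np.linalg.pinv(B1) @ (A1 - G2)

# Recursive step (37); G_1 carries the remaining eigenvalue Omega_0 \ Omega_1.
G1 = np.array([[-1.0]])
W11 = Ahat[:1, :1] + F1 @ Ahat[1:, :1] - G1
W12 = Ahat[:1, 1:] + F1 @ Ahat[1:, 1:] - G1 @ F1
What = np.vstack([np.hstack([W11, W12]), np.zeros((1, 2))])
F0 = np.linalg.pinv(B0) @ (U0 @ What @ U0.T)

assert np.allclose(np.sort(np.linalg.eigvals(A0 - B0 @ F0).real), [-2.0, -1.0])
```

The assigned spectrum is hit to working precision, consistent with the accuracy discussion above.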

Remark 5.1 Note that the indices $\langle k_0, k_1, \dots, k_b \rangle$, as well as the indices $\langle n_0, n_1, \dots, n_b \rangle$, can be calculated from $(A, B)$ in advance. After calculating these indices and the number $a$ of parity alternations in the sequence $\langle n_0, n_1, \dots, n_b \rangle$, the designer can define Ω so as to satisfy the assumption of Theorem 5.1, i.e., being symmetric with at least $a$ real eigenvalues, in a parametric way, and get a parametrization of all the real SFs leading to Ω as the set of closed-loop eigenvalues. Next, the designer can play with the specific values of these and of other free parameters, in order to gain the needed closed-loop performance requirements. This is in contrast with other methods, where the parametrization is calculated ad hoc for a specific set of eigenvalues, and any change in the set of eigenvalues necessitates a new execution of the method.

Remark 5.2 Note that $F_{i+1}$ can be replaced by $F_{i+1} + \left(I - B^{(i)+} B^{(i)}\right) H_{i+1}$, where $H_{i+1}$ is any real matrix, without changing the closed-loop eigenvalues (if one seeks feedbacks with minimal Frobenius-norm, then one should take $H_{i+1} = 0$; otherwise, one should leave $H_{i+1}$ as another free parameter). Thus, the freeness in $\langle H_b, \dots, H_1, H_0 \rangle$ and in $\langle G_{b+1}, G_b, \dots, G_1 \rangle$ creates the freeness in $F_0$ (e.g., in order to globally optimize the $H_\infty$-norm, the $H_2$-norm or the LQR functional of the closed loop, or any other performance key thereof). Note also that the sequences $\langle F_b, \dots, F_1, F_0 \rangle$ and $\langle G_{b+1}, G_b, \dots, G_1 \rangle$ can be calculated for Ω as in Theorem 5.1, where the eigenvalues in Ω are given as free parameters. In that case, it can easily be proved by induction that the state feedbacks $\langle F_b, \dots, F_1, F_0 \rangle$ depend polynomially on the eigenvalue parameters and on the other free parameters mentioned above (for a complex eigenvalue $\alpha$, they depend polynomially on $\Re(\alpha)$, $\Im(\alpha)$).

Finally, it is worth mentioning that the complementary theorem of Theorem 5.1 was also proved in [34], meaning that under the assumptions of Theorem 5.1, any SF that solves the problem has the form given in the theorem (up to a factor of the form given in Remark 5.2).

#### **6. Concluding remarks**

In this chapter, we have introduced an explicit free parametrization of all the stabilizing SFs of a controllable pair $(A, B)$. This enables global optimization over the set of all the stabilizing SFs of such a pair, because the parametrization is free. For a system triplet $(A, B, C)$, we have shown how to get the parametrization of all the SOFs of the system by parameterizing all the SFs of $(A, B)$ and all the SFs of $\left(A^T, C^T\right)$ and then imposing the compatibility constraint (28). We have also shown a parametrization of all the SOFs of the system triplet $(A, B, C)$ by imposing the linear constraint $K^{(0)} L C^{(0)} = 0$ on the SF $K^{(0)}$ of the pair $(A, B)$, where $K^{(0)}$ was defined recursively and parameterizes the set of all SFs of $(A, B)$. This leads to a set of polynomial equations (after multiplying by the l.c.m. of the denominators of the rational entries of $K^{(0)}$) and inequalities that can be brought to polynomial equations. The resulting polynomial set of equations can be solved (parametrically) by using the Gröbner basis method (see, e.g., [49–52]). By applying the Gröbner basis method, one gets an indication of the existence of solutions and, in case solutions do exist, of which parameters are free and how the other parameters depend on the free ones. It seems that the proposed method reduces the overhead of the Gröbner basis computations (or of other methods thereof) significantly, thus enabling SOF global optimization for larger systems.
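The elimination step that the Gröbner basis method performs can be sketched on a toy polynomial system (SymPy assumed; this is not the chapter's SOF system):

```python
import sympy as sp

# Toy illustration of Groebner-basis elimination with a lex order: the basis
# contains an element in y alone, from which the remaining variables unwind.
x, y = sp.symbols("x y")
G = sp.groebner([x**2 + y**2 - 1, x - y], x, y, order="lex")
polys = list(G)

# With lex order x > y, one basis element involves y only (the elimination
# ideal); a parametric SOF system would be treated analogously.
assert any(p.free_symbols == {y} for p in polys)
```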

In view of Theorem 5.1 (with its complementary theorem proved in [34]), we have introduced a sound and complete parametrization of all the state feedbacks *F* which make the matrix $\widehat{E} = U^T(A - BF)U$ $k$-complementary $(n-k)$-invariant with respect to $\Omega_1$ (see [34] for the definition and properties), where *U* is orthogonal such that $U^T B B^+ U = \mathrm{bdiag}(I_k, 0)$, $k = \mathrm{rank}(B)$, where Ω is symmetric and has at least $a$ real eigenvalues ($a$ being the number of parity alternations in the sequence $\langle n_0, \dots, n_b \rangle$), and where $\Omega_1 \subseteq \Omega$ is symmetric with the maximum number of real eigenvalues and size $|\Omega_1| = n - k$. Assuming Ω as above, we have generalized the results of [13] in the sense that we do not assume the existence of a real state feedback *F* that brings the closed loop $E = A - BF$ to a diagonalizable matrix (which actually means that the geometric and algebraic multiplicities coincide for any eigenvalue of the closed loop), and we do not assume the restriction that the multiplicity of each eigenvalue be less than or equal to $\mathrm{rank}(B)$. However, in cases where the number of real eigenvalues in Ω is less than $a$, one should use the parametrizations given in [13], in [45] or in the references therein. Note that in communication systems, where complex SFs and SOFs are sought, the introduced method is complete (with no restrictions), since the number of parity alternations and the restriction on Ω to contain sufficiently many real eigenvalues were needed only to guarantee that $F_i$ for $i = b, \dots, 0$ is real at each stage, which is needless in communication systems.

In view of Example 5.1, one can see that the accuracy of the final location of the closed-loop eigenvalues given by the proposed method depends only on the accuracy of computing *B*^(i)+ and *U*^(i) for *i* = 0, …, *b*, and on the algorithm used to compute the closed-loop eigenvalues (see [53], for example) in order to validate their final location; it has nothing to do with the specific values of the eigenvalues given in Ω. Therefore, with the proposed method, once *B*^(i)+ and *U*^(i) for *i* = 0, …, *b* have been computed as accurately as possible, the location of the closed-loop eigenvalues will be accurate accordingly. Thus, the designer can save time, since the computation can be done parametrically only once; afterward, one only needs to vary the specific values of the eigenvalues until a satisfactory closed-loop performance is obtained, with the assurance that the accuracy of the final placement will be the same for all trials, independently of the specific values of the chosen eigenvalues. Also, the given parametrization of *F*0 depends polynomially on the free parameters and thus is very convenient for applying automatic differentiation and optimization methods.
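The validation step, recomputing the closed-loop spectrum once a feedback has been chosen, can be sketched as follows; the system matrices and the target set Ω are illustrative, and scipy's generic `place_poles` stands in for the chapter's parametrization:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative controllable pair (A, B), not taken from the chapter.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])
omega = np.array([-1.0, -2.0, -4.0])       # desired closed-loop spectrum

F = place_poles(A, B, omega).gain_matrix   # F such that eig(A - B F) = omega
achieved = np.sort(np.linalg.eigvals(A - B @ F).real)
print(achieved)                            # validate the final placement
```

Whatever method produced *F*, the eigenvalues of *A* − *BF* are recomputed with an independent eigenvalue solver and compared against Ω.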

To conclude, we have introduced parametrizations of SFs and SOFs that are based on the recursive controllable structure discovered in [35]. The results have powerful implications for real-life systems, and we expect more results in this direction. Unfortunately, for uncertain systems the method cannot work directly, because of the dependence of (*A*, *B*, *C*) on the uncertain parameters, for which we cannot compute *U*^(i) for *i* = 0, …, *b*. However, if a nominal system (*Ã*, *B̃*, *C̃*) is known accurately, then the method can be applied to that system, and the free parameters of the parametrization can be used to "catch" the uncertainty of the whole system, together with the closed-loop performance requirements. The research of this method is left for future work.

*On Parametrizations of State Feedbacks and Static Output Feedbacks and Their Applications DOI: http://dx.doi.org/10.5772/intechopen.101176*

### **Author details**

Yossi Peretz

Department of Computer Sciences, Lev Academic Center, Jerusalem College of Technology, Jerusalem, Israel

\*Address all correspondence to: yosip@g.jct.ac.il

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Nemirovskii A. Several NP-hard problems arising in robust stability analysis. Mathematics of Control, Signals, and Systems. Springer. 1993;**6** (2):99-105

[2] Blondel V, Tsitsiklis JN. NP-hardness of some linear control design problems. SIAM Journal on Control and Optimization. 1997;**35**(6):2118-2127

[3] Fu M. Pole placement via static output feedback is NP-hard. IEEE Transactions on Automatic Control. 2004;**49**(5):855-857

[4] Peretz Y. On applications of the rayshooting method for structured and structured-sparse static-outputfeedbacks. International Journal of Systems Science. 2017;**48**(9):1902-1913

[5] Kučera V, Trofino-Neto A. Stabilization via static output feedback. IEEE Transactions on Automatic Control. 1993;**38**(5):764-765

[6] de Souza CC, Geromel JC, Skelton RE. Static output feedback controllers: Stability and convexity. IEEE Transactions On Automatic Control. 1998;**43**(1):120-125

[7] Cao YY, Lam J, Sun YX. Static output feedback stabilization: An ILMI approach. Automatica. 1998;**34**(12):1641-1645

[8] Cao YY, Sun YX, Lam J. Simultaneous stabilization via static output feedback and state feedback. IEEE Transactions on Automatic Control. 1998;**44**(6):1277-1282

[9] Quoc TD, Gumussoy S, Michiels W, Diehl M. Combining convex-concave decompositions and linearization approaches for solving BMIs, with application to static output feedback. IEEE Transactions on Automatic Control. 2012;**57**(6): 1377-1390

[10] Mesbahi M. A semi-definite programming solution of the least order dynamic output feedback synthesis problem. Proceedings of the 38th IEEE Conference on Decision and Control. 1999;(2):1851-1856

[11] Henrion D, Loefberg J, Kočvara M, Stingl M. Solving Polynomial static output feedback problems with PENBMI. In: Proc. Joint IEEE Conf. Decision Control and Europ. Control Conf.; 2005; Sevilla, Spain. 2005

[12] Eremenko A, Gabrielov A. Pole placement by static output feedback for generic linear systems. SIAM Journal on Control and Optimization. 2002;**41**(1): 303-312

[13] Schmid R, Pandey A, Nguyen T. Robust pole placement with Moore's algorithm. IEEE Transactions on Automatic Control. 2014;**59**(2):500-505

[14] Fazel M, Hindi H, Boyd S. Rank minimization and applications in system theory. In: Proceedings of the American Control Conference. 2004. pp. 3273-3278

[15] Yang K, Orsi R. Generalized pole placement via static output feedback: A methodology based on projections. Automatica. 2006;**42**:2143-2150

[16] Vidyasagar M, Blondel VD. Probabilistic solutions to some NP-hard matrix problems. Automatica. 2001;**37**: 1397-1405

[17] Tempo R, Calafiore G, Dabbene F. Randomized Algorithms for Analysis and Control of Uncertain Systems. London: Springer-Verlag; 2005

[18] Tempo R, Ishii H. Monte Carlo and Las Vegas randomized algorithms for systems and control. European Journal of Control. 2007;**13**:189-203

[19] Arzelier D, Gryazina EN, Peaucelle D, Polyak BT. Mixed LMI/randomized methods for static output feedback control. In: Proceedings of the American Control Conference, IEEE Conference Publications. 2010. pp. 4683-4688

[20] Peretz Y. A randomized approximation algorithm for the minimal-norm static-output-feedback problem. Automatica. 2016;**63**:221-234

[21] Apkarian P, Noll D. Nonsmooth *H*<sup>∞</sup> synthesis. IEEE Transactions on Automatic Control. 2006;**51**(1):71-86

[22] Burke JV, Lewis AS, Overton ML. Stabilization via nonsmooth, nonconvex optimization. IEEE Transactions On Automatic Control. 2006;**51**(11): 1760-1769

[23] Gumussoy S, Henrion D, Millstone M, Overton ML. Multiobjective robust control with HIFOO 2.0. In: Proceedings of the IFAC Symposium on Robust Control Design; 2009; Haifa, Israel. 2009

[24] Borges RA, Calliero TR, Oliveira CLF, Peres PLD. Improved conditions for reduced-order *H*<sup>∞</sup> filter design as a static output feedback problem. In: American Control Conference, San-Francisco, CA, USA. 2011

[25] Peretz Y. On application of the rayshooting method for LQR via staticoutput-feedback. MDPI Algorithms Journal. 2018;**11**(1):1-13

[26] Zheng F, Wang QG, Lee TH. On the design of multivariable PID controllers via LMI approach. Automatica. 2002;**38**: 517-526

[27] Peretz Y. A randomized algorithm for optimal PID controllers. MDPI Algorithms Journal. 2018;**11**(81):1-15

[28] Lin F, Farad M, Jovanović M. Sparse feedback synthesis via the alternating direction method of multipliers. In: IEEE, Proceedings of the American Control Conference (ACC). 2012. pp. 4765-4770

[29] Lin F, Farad M, Jovanović M. Augmented Lagrangian approach to design of structured optimal state feedback gains. IEEE Transactions On Automatic Control. 2011;**56**(12): 2923-2929

[30] Gillis N, Sharma P. Minimal-norm static feedbacks using dissipative Hamiltonian matrices. Linear Algebra and its Applications. 2021;**623**:258-281

[31] Silva RN, Frezzatto L. A new parametrization for static output feedback control of LPV discrete-time systems. Automatica. 2021;**128**:109566

[32] de Oliveira AM, Costa OLV. On the *H*<sup>2</sup> static output feedback control for hidden Markov jump linear systems. Annals of the Academy of Romanian. Scientists. Series on Mathematics and its Applications. 2020;**12**(1-2)

[33] Ren D, Xiong J, Ho DW. Static output feedback negative imaginary controller synthesis with an *H*<sup>∞</sup> norm bound. Automatica. 2021;**126**:109157

[34] Peretz Y. On parametrization of all the exact pole-assignment state feedbacks for LTI systems. IEEE Transactions on Automatic Control. 2017;**62**(7):3436-3441

[35] Peretz Y. A characterization of all the static stabilizing controllers for LTI systems. Linear Algebra and its Applications. 2012;**437**(2):525-548

[36] Iwasaki T, Skelton RE. All controllers for the general *H*<sup>∞</sup> control problem: LMI existence conditions and state space formulas. Automatica. 1994; **30**(8):1307-1317

[37] Iwasaki T, Skelton RE. Parametrization of all stabilizing controllers via quadratic Lyapunov functions. Journal of Optimization Theory and Applications. 1995;**85**(2): 291-307

[38] Skelton RE, Iwasaki T, Grigoriadis KM. A Unified Algebraic Approach To Linear Control Design. London: Taylor & Francis Ltd.; 1998

[39] Piziak R, Odell PL. In: Nashed Z, Taft E, editors. Matrix Theory: From Generalized Inverses to Jordan Form. Boca Raton, FL: Chapman & Hall/CRC, Taylor & Francis Group; 2007. p. 288

[40] Karlheinz S. Abstract Algebra With Applications. Marcel Dekker, Inc.; 1994

[41] Ohara A, Kitamori T. Geometric structures of stable state feedback systems. IEEE Transactions on Automatic Control. 1993;**38**(10):1579-1583

[42] Leibfritz F. COMPleib: Constrained matrix-optimization problem library—A collection of test examples for nonlinear semidefinite programs, control system design and related problems. Tech. Report, Dept. Math., Univ. Trier, Germany; 2003

[43] Tadashi I, Hai-Jiao G, Hiroshi T. A design of discrete-time integral controllers with computation delays via loop transfer recovery. Automatica. 1992;**28**(3):599-603

[44] Chu EK. Optimization and pole assignment in control system design. International Journal of Applied Mathematics and Computer Science. 2001;**11**(5):1035-1053

[45] Pandey A, Schmid R, Nguyen T, Yang Y, Sima V, Tits AL. Performance survey of robust pole placement methods. In: IEEE 53rd Conference on Decision and Control; December 15-17; Los Angeles, California, USA. 2014. pp. 3186-3191

[46] Ait Rami M, El Faiz S, Benzaouia A, Tadeo F. Robust exact pole placement via an LMI-based algorithm. IEEE Transactions on Automatic Control. 2009;**54**(2):394-398

[47] Chilali M, Gahinet P. *H*<sup>∞</sup> design with pole placement constraints: An LMI approach. IEEE Transactions on Automatic Control. 1996;**41**(3):358-367

[48] Chilali M, Gahinet P, Apkarian P. Robust pole placement in LMI regions. IEEE Transactions on Automatic Control. 1999;**44**(12):2257-2270

[49] Buchberger B. Gröbner bases and system theory. Multidimensional Systems and Signal Processing. 2001;**12**: 223-251

[50] Shin HS, Lall S. Optimal decentralized control of linear systems via Groebner bases and variable elimination. In: American Control Conference. Baltimore, MD, USA: Marriott Waterfront; 2010. pp. 5608-5613

[51] Lin Z. Gröbner bases and applications in control and systems. In: IEEE, 7th International Conference On Control, Automation, Robotics And Vision (ICARCV'02). 2002. pp. 1077-1082

[52] Lin Z, Xu L, Bose NK. A tutorial on Gröbner bases with applications in signals and systems. IEEE Transactions on Circuits and Systems. 2008;**55**(1): 445-461

[53] Pan VY. Univariate polynomials: nearly optimal algorithms for numerical factorization and root-finding. Elsevier Journal of Symbolic Computation. 2002; **33**(5):701-733

#### **Chapter 6**

## Experimental Studies of Asynchronous Electric Drives with "Stepwise" Changes in the Active Load

*Vladimir L. Kodkin, Alexandr S. Anikin, Alexandr A. Baldenkov and Natalia A. Loginova*

#### **Abstract**

The article offers the results of experimental studies of asynchronous electric motors with a squirrel-cage rotor under frequency control. The results of bench tests of the modes of parrying stepwise changes in the load, created by a similar frequency-controlled electric drive, are presented. A preliminary qualitative analysis of the known control methods is carried out, and it is shown that the assumptions made when creating their algorithms become too significant in load-parrying modes. The reason for this lies in the fundamental inaccuracies of the vector equations of asynchronous electric motors under frequency regulation. The proposed interpretation of asynchronous electric motors by nonlinear continuous transfer functions, outlined in earlier articles by the same authors, together with the corrections proposed there, turned out to be more accurate for the operating modes under consideration than the traditional methods of interpretation and correction of the frequency control of asynchronous electric motors. This made it possible to assess as objectively as possible the effectiveness of the interpretation of asynchronous electric drives and the methods of their regulation. Numerous articles on this topic over the past 25–30 years have not provided such results.

**Keywords:** asynchronous drive, frequency regulation, dynamic positive feedbacks, active stator current, rotor current, signal spectrum

#### **1. Introduction**

A paradoxical situation has developed over the last 20 years in frequency-controlled asynchronous electric drives.

On the one hand, the frequency control of squirrel cage induction motors (SCIM) with semiconductor frequency and voltage converters (FC) is widespread in industry and energy, offers several universal control methods, and is applied in increasingly complex and accurate technological units [1–4].

On the other hand, there are currently several fundamental unresolved theoretical problems that were formed more than 100 years ago, when the theory of AC electrical machines was being developed.

The processes of AC electric drive control are described by vector equations and, following from them, by substitution schemes and vector diagrams (**Figure 1**) ([1], p. 18):

$$\begin{aligned}
u_1 &= i_1(r_1 + jx_{1\sigma}) + ji_m x_m \\
0 &= i_2\left(\frac{r_2}{s} + jx_{2\sigma}\right) + ji_m x_m \\
m &= \frac{m_1}{2} Z_P L_m \left| i_2 \times i_m \right| = \frac{m_1}{2} Z_P L_m I_{2\max} I_{m\max} \sin\eta \\
x_1 &= \omega_1 L_1; \quad x_2 = \omega_1 L_2; \quad x_m = \omega_1 L_m
\end{aligned} \tag{1}$$

These equations rest on a number of assumptions and simplifications that are acceptable for static modes but completely erroneous for dynamic ones.
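For a static operating point, the first two equations of (1) amount to solving the equivalent circuit: the stator impedance in series with the magnetizing reactance in parallel with the rotor branch. A rough numerical sketch (all parameter values are illustrative, not taken from the chapter):

```python
# Static operating point of the equivalent circuit in Eq. (1).
# All per-phase parameters below are illustrative placeholders.
r1, x1s = 0.5, 1.0        # stator resistance and leakage reactance, ohm
r2, x2s = 0.4, 1.2        # rotor values referred to the stator, ohm
xm = 30.0                 # magnetizing reactance, ohm
u1, s = 220.0, 0.03       # phase voltage (V) and slip

z1 = complex(r1, x1s)
z2 = complex(r2 / s, x2s)
zm = complex(0.0, xm)
z_total = z1 + (zm * z2) / (zm + z2)   # magnetizing // rotor branch
i1 = u1 / z_total                      # stator current phasor
i2 = -i1 * zm / (zm + z2)              # rotor current, sign convention of Eq. (1)
print(abs(i1), abs(i2))                # current magnitudes, A
```

With *i_m* = *i*1 + *i*2, both voltage equations of (1) are satisfied by this solution, which is easy to verify numerically.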

First, these equations assume the sinusoidal nature of the currents and voltages formed in asynchronous electric motors. A control theory that operates with vectors is simply unable to take into account any additional components of these variables. At the same time, the presence of such components, the so-called "higher harmonics" in the currents of the motors and FC, is recognized by all experts. Nevertheless, these components are not included in the motor equations, and of all these problems only electrical interference is taken into account.

Secondly, operating with vectors instead of sinusoidal functions to represent the currents in the stator and the rotor of the motor is valid only if the frequencies of their variation are constant. Only in this case do the differential equations "pass" into vector form and can be significantly simplified. It is important to note that even evaluating the error of such a replacement during frequency variations is analytically very difficult.

#### **Figure 1.**

*Substitution scheme, vector diagram of asynchronous motor and vector equations in traditional form [1].* X*1,* X*2 – inductive resistances of the stator and rotor;* r*1,* r*2 – active resistances of the stator and rotor;* I*1,* I*2 – stator and rotor currents; and* E*2,* U*1 – rotor EMF and stator voltage.*

*Experimental Studies of Asynchronous Electric Drives with "Stepwise"… DOI: http://dx.doi.org/10.5772/intechopen.101864*

Thirdly, even with these assumptions, the equations remain extremely nonlinear and complex. In compiling the equations of the elements of the vector control unit, additional simplifications are assumed, for example, the constancy of the rotor magnetic flux and the equality of the stator voltage frequency and the rotation speed.

"The coordinate junction block (CJB) can be constructed on the basis of the equations of the control model controlled by voltage ([1], p. 2.22). They can be put *<sup>ω</sup>*<sup>1</sup> <sup>¼</sup> *<sup>ω</sup>* and *<sup>d</sup>*Ψ<sup>2</sup> *dt* ¼ 0."

Thus, the generally accepted alternating-current equations for vector control are largely simplified equations which, in principle, incorrectly describe transient processes associated with changes in the frequency of the stator voltage. Such changes alter the substitution schemes themselves, because their parameters *x*1 = *ω*1*L*1; *x*2 = *ω*1*L*2; *xm* = *ω*1*Lm* directly depend on the frequency.

The "qualitative processes" of the drive reaction on stepped load jumps can be obtained by combining the calculations of the substitution schemes with the calculations of the mechanical characteristics. If the frequency of the stator voltage does not change, then such descriptions are sufficiently accurate, since only the element *r*2 *<sup>S</sup>* is changed in such modes in the substitution schemes, and quite quickly. In this case, the initial and final states correspond to two substitution schemes with different *<sup>r</sup>*<sup>2</sup> *<sup>S</sup>* and, respectively, by different current vectors. But with more complex frequency control algorithms, under which significant changes in the stator voltage occur—amplitude and frequency, new vector diagrams and substitution schemes are "formed" at each moment in which the frequency *ω*<sup>1</sup> changes. In these cases, high-quality analysis is significantly complicated, since all elements of the substitution scheme change, and the transition from one state to another vector equations is not described in principle. A description of such processes and their correction require a different approach or at least understanding the causes of the problems of existing approaches. This article provides several experiments, the results of which make it possible to understand the processes of stepped load jumps in asynchronous drives with different frequency control methods.

All equations for alternating-current machines were derived in the 1920s, when adjusting the frequency of the stator voltage was a problem of the distant future. In the past 30 years this future has arrived, but the simplifications remain and make the final control errors too significant, while theoretical propositions and even modeling are most often inconclusive. The most significant results of studies of asynchronous electric drives in this situation are experiments, as close as possible to real industrial conditions.

At the same time, the experiments also require special justification, since the processes in the rotor of an asynchronous electric motor are not accessible to measurement.

This state of affairs restrains the introduction of asynchronous electric drives into new areas, namely aggregates requiring speed and accuracy. In addition, it is difficult to optimize electric drives in power engineering and transport, where they are widely used. At the same time, their economy, acceptable price and high reliability remain a significant motivation for research to improve their controllability.

Let us consider the main generally accepted methods for the control of asynchronous electric drives.

#### **2. Formulation of the problem**

#### **2.1 Physics of traditional control methods**

Scalar control is customarily considered simple and reliable: for a given speed, the scalars, namely the amplitude and frequency of the voltage supplied to the motor stator, are selected. The mechanical characteristics are determined by the properties of the electric motor and the dependence *U*(*f*1). The structural scheme is shown in **Figure 2**.

**Figure 3a** shows the diagrams of the active values of the stator current and the speed during acceleration and when the load is applied, obtained during bench studies described in detail in several articles [5–7].

The transition trajectory is determined by processes in the motor. Qualitatively, these processes are as follows: when the load torque is applied, the rotation speed decreases, the currents in the rotor and the stator increase, the slip in the motor grows, and the torque developed by the motor increases until it reaches the state in which the motor torque equals the load. If the parameters of the motor and the substitution schemes are "correct," the process occurs without oscillations and is fast enough, as in the experimental examples in **Figure 3a**.

**Figure 3b** shows the mechanical characteristics of the motor and the trajectory of the transition from point *A* (with low load) to point *B* (with a large load torque).

**Figure 3c** shows the transition to the mechanical characteristic corrected by IR-compensation under load. The changes in the working points *B*1 and *B* are minor.

#### *2.1.1 Vector control sets the task of efficient drive control*

To do this, in the models embedded in the control unit of the FC, the required parameters of the stator voltage vector, which may vary at any moment, are calculated from the measured values of the stator current and stator voltage. The initial engine equations, which are significantly nonlinear, undergo many simplifications, the main of which are the constancy of the rotor flux, the equality of the stator voltage frequency and the rotation speed, and the absence of higher harmonics [2].

**Figure 2.** *Structural scheme of asynchronous electric drive with scalar control.*

**Figure 3.** *Process diagrams (a) and mechanical characteristics (b and c) of asynchronous drive with scalar control.*

$$
\omega_1 = \omega; \quad \frac{d\Psi_2}{dt} = 0 \tag{2}
$$

In this case, the sensorless vector control algorithm selects the stator voltage vector, but not the transition path from one vector to another (**Figure 4**).

(Transition trajectories are given special attention in a separate method of vector control, "Direct Torque Control." This method is used mainly by ABB. In the articles dedicated to the method, the algorithms are described mainly at the level of logical propositions. Within this work, this control method receives only a dedicated comment; special studies have not been conducted.)

The purpose of the vector control algorithms is to linearize the drive and bring its characteristics to those of a DC drive, but the assumptions and errors adopted in these derivations, as well as the mismatch between the model and reality, produce control errors. It should be noted that for linearization, vector control uses serial corrective devices that perform these functions poorly, especially under variations in the characteristics of the linearized object. The impulse nature of the power supply of the frequency converter, through which the "correction" of the nonlinearities of the asynchronous electric motor occurs, also interferes with the correction. These features of vector control are noted by almost all researchers [1–4]. In general, with open-loop vector control of the stator voltage, the load jump is not too large and the load-jump processes are not much different from scalar control (**Figures 3c** and **5b**).

Experiments have shown that when accelerating along the reference signal with a small and unchanged load, the assumptions made in (1) can be considered permissible: the engine equations account for changes in the stator voltage frequency quite correctly and form the correct transition path from the vector state at one frequency to another. In load-parrying mode, however, adaptation does not occur: the difference between the rotation speed and the stator voltage frequency is significant (absolute slip), and the transition trajectory to the other state is not corrected. Most often, sensorless vector control adjusts the voltage parameters on the stator poorly, and the engine parries the load in the same way as in scalar control mode (**Figure 5a**). The mechanical characteristics are shown in **Figure 5b**. Vector control eliminates the unstable "branches" of the mechanical characteristics; at the same time, the working areas differ little from scalar control. **Figure 5b** shows the mechanical characteristics of the drives with the reaction to the load diagram and the transition trajectory.

#### *2.1.2 Vector control with a speed loop*

By analogy with direct current drives, additional linearization should be carried out by a speed control loop with a PID regulator.

**Figure 4.** *Structural scheme of asynchronous electric drive with vector sensorless control.*

Speed sensors are quite rarely installed on general industrial mechanisms; as the bench experiments showed, the effect of their application in drives with vector control is not too significant.

With a rotation speed loop, the control signals for the FC are formed in the PID controller, whose inputs receive the speed reference and the feedback signal. At the same time, all the problems of sensorless vector control are only aggravated (**Figure 6**).

One of the main assumptions of sensorless vector control is the equality of the stator voltage frequency and the rotation speed in the derivation of the equations of the coordinate junction block (CJB) of the vector control ([1], p. 61):

$$
\omega\_1 = \omega \tag{3}
$$

When controlling the rotation speed from the reference side with the PID controller, this is quite acceptable, but when the load torque is being parried, assuming the equality of speed and frequency is fundamentally incorrect.

At the moment of the load jump, a dynamic dip of the motor speed occurs, and the output of the speed controller generates a signal to increase the frequency of the stator voltage. Following the conditions (assumptions) adopted in the drive, this leads to a softening of the mechanical characteristic (**Figure 7c**), an increase in the dynamic speed dip, and a prolonged speed recovery. This follows from the process diagrams and the transition paths on the graphs of mechanical characteristics (**Figure 5**).

"Double" linearization with very significant assumptions leads to the fact that the acceleration of the drive is similar to the acceleration of the DC drive, and the

**Figure 6.** *Structural scheme of asynchronous electric drive with vector sensorless control and speed loop.*

**Figure 7.**

*Process charts (a and b) and mechanical characteristics (c) of an asynchronous drive with vector control and a PID regulator in the speed loop.*

processes of parrying the load torque in such a drive have zero static speed error. However, the efficiency of such a drive cannot be considered significant. The transient time and the dynamic speed dip are such that this option of load parrying cannot be used for almost any industrial mechanism.

Comparison of the torque-parrying processes shows that the initial and final states in the drive can be the same, but the transitions between them have an infinite set of trajectories, which are determined by the stator voltage; the control method forms the transition path and the final vector.

#### *2.1.3 Mathematics describing modes of operation*

A significant role in the formation of control algorithms is played by the mathematical description of processes in alternating-current electric machines. In describing the operation of the SCIM [2], vector equations and dependencies with a large number of assumptions and simplifications are used.

The vector equations describing all asynchronous and synchronous motors do not take into account the variable nature of the frequency of voltages and currents. It should be recognized that if the frequency of the stator voltage is assumed to be a complex function of time, the transition from Eqs. (2.19) to (2.21) ([1], p. 56) becomes impossible, and the motor equations become so complicated that analyzing them and choosing an effective correction will be impossible.

In the works [5–11], a nonlinear transfer function was proposed linking the mechanical torque developed by the SCIM and the absolute slip, that is, the difference between the frequency of the stator voltage and the rotation speed of the engine. The formula of this function includes, as variables, the frequency of the stator voltage and the relative slip. It can be called a nonlinear transfer function or a dynamic Kloss formula. In the articles [5–8], the derivation of the proposed nonlinear transfer function is given in sufficient detail; the result is as follows:

$$W(p) = \frac{2M_k\left(T_2'p + 1\right)S_k}{\omega_1\left[\left(1 + T_2'p\right)^2 S_k^2 + \beta^2\right]}\tag{4}$$

where *ω*1 is the frequency of the stator voltage and *β* is the relative slip, depending on the load of the drive.
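As a consistency check, at steady state (*p* = 0) the torque predicted by Eq. (4) from the absolute slip *ω*1*β* reduces to the classical Kloss formula; a quick numerical sketch with illustrative values of *M_k* and *S_k*:

```python
# At p = 0, torque M = W(0) * (omega1 * beta) from Eq. (4) should
# equal the classical Kloss formula M = 2*Mk / (beta/Sk + Sk/beta).
Mk, Sk = 100.0, 0.2        # illustrative breakdown torque and critical slip
omega1 = 314.0             # stator voltage frequency, rad/s

def torque_from_tf(beta):
    w0 = 2 * Mk * Sk / (omega1 * (Sk**2 + beta**2))   # W(p) at p = 0
    return w0 * omega1 * beta                          # input: absolute slip

def torque_kloss(beta):
    return 2 * Mk / (beta / Sk + Sk / beta)

for beta in (0.05, 0.2, 0.5):
    print(beta, torque_from_tf(beta), torque_kloss(beta))
```

At *β* = *S_k* both expressions return the breakdown torque *M_k*, as expected.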

This transfer function corresponds to the structural diagram of the SCIM shown in **Figure 8**.

#### **Figure 8.**

*Block diagram of the SCIM with a nonlinear transfer function of the torque-forming link.*

The works [9–13] show how this transfer function can be linearized, that is, how the dependence of the transfer function and of the dynamics of the asynchronous drive on the stator voltage frequency and slip can be excluded or significantly weakened by a positive feedback on the developed torque. The block diagram takes the form shown in **Figure 9**:

The transfer function of the corrective link, which is necessary for positive feedback to maintain the stability of the drive, is as follows:

$$W_{\mathrm{DPF}} = \frac{\omega_1 \beta^2}{2M_k S_k\left(T_2'p + 1\right)}\tag{5}$$

The equivalent transfer function of the drive with this connection will take the form:

$$W_{\mathrm{eqv}} = \frac{2M_k S_k\left(T_2'p + 1\right)}{\omega_1\left(1 + T_2'p\right)^2 S_k^2} = \frac{2M_k}{\omega_1 S_k\left(1 + T_2'p\right)}\tag{6}$$
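The cancellation behind Eq. (6) can be verified symbolically by closing the positive feedback (5) around (4), i.e., forming *W*/(1 − *W*·*W*DPF); a sympy sketch:

```python
# Symbolic check that the positive feedback (5) around the plant (4)
# yields the first-order equivalent transfer function (6).
from sympy import symbols, simplify, cancel

p, Mk, Sk, b, w1, T2 = symbols("p M_k S_k beta omega_1 T2p", positive=True)

W = 2*Mk*(T2*p + 1)*Sk / (w1*((1 + T2*p)**2 * Sk**2 + b**2))   # Eq. (4)
W_dpf = w1*b**2 / (2*Mk*Sk*(T2*p + 1))                          # Eq. (5)

W_eqv = cancel(W / (1 - W*W_dpf))     # positive-feedback closure
target = 2*Mk / (w1*Sk*(1 + T2*p))    # Eq. (6)
print(simplify(W_eqv - target))       # 0 if the cancellation holds
```

The *β*² term in the denominator of (4) is exactly removed by the loop, leaving a first-order link independent of the relative slip.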

From the point of view of mathematics, a transfer function with parameters depending on the frequency and slip is the same kind of inexact expression as the vector equation, which was originally derived for unchanged frequencies of the harmonic variables (the motor currents, the rotor EMF and the stator voltage) and is used to analyze the dynamics of these same variables. However, there is one significant difference. While the vector equation is valid exclusively for constant frequencies of the signals entering it and in principle cannot describe a change in these frequencies, the transfer function retains its ability to describe the dynamics of processes within some region of changes of these quantities, and even with sufficient accuracy.

So, when the drive is working out a torque disturbance at a constant rotational speed (and a constant stator voltage frequency) with a slight (for the transfer function) change in the relative slip *β*, the transfer function describes the processes quite accurately and, more importantly, allows one to accurately select corrective connections that linearize the transfer function and make the parrying of the

**Figure 9.** *Block diagram of SCIM with dynamic positive feedback (DPF).*

*Experimental Studies of Asynchronous Electric Drives with "Stepwise"… DOI: http://dx.doi.org/10.5772/intechopen.101864*

disturbance in the drive much more efficient. So, the proposed positive dynamic feedback by the torque of the engine or its analogue:

$$W\_{\text{DPF}} = \frac{K}{T\_2' p + 1}\tag{7}$$

This transfer function forms the transition trajectory optimally and significantly reduces the duration of the transient processes. Dynamic speed "dips" are also reduced. Experiments exploring the reaction of the drive to a load jump under various methods of controlling the SCIM fully confirmed this (**Figure 10b**).
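As a consequence of Eq. (6), the compensated drive behaves as a first-order lag with time constant T2′, so its step response reaches about 63% at t = T2′ and about 95% by t = 3T2′. A minimal simulation sketch with illustrative values (K and T2′ here are not the stand's parameters):

```python
# Forward-Euler integration of the first-order lag T2*y' + y = K (unit step input).
K, T2, dt = 1.0, 0.04, 1e-5   # gain, time constant [s], integration step [s]
n_tau = int(T2 / dt)          # number of steps in one time constant
y = 0.0
ys = []
for _ in range(3 * n_tau):
    y += dt * (K - y) / T2
    ys.append(y)

y_at_tau = ys[n_tau - 1]      # response after one time constant
y_at_3tau = ys[-1]            # response after three time constants
assert 0.62 < y_at_tau < 0.64  # ~63%, i.e. 1 - 1/e
assert y_at_3tau > 0.94        # ~95%
```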

The processes in **Figure 10b** show that DPF corrects the static error of the drive and significantly speeds up the transients. At the same time, the nonlinear transfer function can describe the transients and justify continuous devices for their correction, namely the positive feedback on the active component of the stator current. The stator voltage "selected" by this feedback (voltage amplitude and frequency) provides parrying of step loads with minimal transient processes (**Figure 10b**).

This made it possible to formulate the hypothesis that identifying a SCIM with an FC by a nonlinear transfer function is more accurate than identifying it by the vector equations, which is confirmed by the choice of a more effective correction. In asynchronous electric drives using widely available frequency converters, it is quite problematic to introduce positive torque feedback directly. As experiments have shown [14, 15], it can be replaced by a connection on the active component of the stator current, which is measured by almost all frequency converters used in industry.

As mentioned above, the crucial importance in assessing the correctness and effectiveness should be given to experimental studies.

The stand where the research was carried out initially consisted of two identical asynchronous electric drives, each of which contains an asynchronous squirrel-cage electric motor and a frequency and voltage converter. The drives operate on one common shaft; the stand contains current sensors, a rotation-speed sensor on the common shaft of the motors, and a generator of periodic control signals. Quite a lot of different experiments were carried out, described in detail in the articles [16–20].

**Figure 10.** *Structural diagram of the actuator with dynamic feedback (a), transient processes (b), and mechanical characteristics (c).*

This study provides the results of experiments in steady state after a load jump. These modes of operation were selected because they correspond most exactly to the vector equations of asynchronous electric drives and lend themselves well to qualitative analysis using well-known methods, namely the mechanical characteristics of the drive (**Figure 11**).

The technology of the experiments is extremely simple. A certain control mode is set in the working drive: vector sensorless or with speed feedback, scalar, or with positive feedback (DPF or DPF2, where DPF2 is a dynamic positive feedback similar to DPF but with an increased transmission coefficient, *Kt* = 3). Scalar control is set in the load drive. Directly by the signal supplied to the input of the frequency converter *UZ2* of the working motor *M2*, the drive is brought to a certain rotation speed (and the corresponding stator-voltage frequency). After a certain time interval, a setting is sent to the frequency converter *UZ1* of the load motor *M1*. The operating mode of the load drive is determined by the rotation-speed setting, as an equivalent of the mechanical characteristic (**Figure 12**). The drives work counter to each other. The resulting modes are well explained by the mechanical characteristics (**Figure 12**): the working points are determined by the intersections of the mechanical characteristics of the working and load motors. Here **Figure 12a** corresponds to *U*/*f* = const. and **Figure 12b** to *U*2/*f*2 > *U*1/*f*1.
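The working point at the intersection of the two mechanical characteristics can be located numerically. The sketch below uses the Kloss formula for both machines with purely illustrative parameters (the actual curves of the stand are not reproduced here); a larger critical torque is given to the working drive to model a higher U/f:

```python
def kloss(Mk, sk, s):
    """Kloss formula: torque developed at slip s (critical torque Mk at slip sk)."""
    return 2 * Mk / (s / sk + sk / s)

def m_work(w):
    # Working drive: synchronous speed 100 rad/s, larger Mk (higher U/f).
    s = (100.0 - w) / 100.0
    return kloss(60.0, 0.2, s)

def m_load(w):
    # Counter-connected load drive (speed setting -50 rad/s): braking torque
    # whose slip grows with shaft speed.
    s = (w + 50.0) / 50.0
    return kloss(40.0, 0.2, s)

# Bisection: keep lo where the working torque exceeds the braking torque.
lo, hi = 0.0, 99.99
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if m_work(mid) > m_load(mid):
        lo = mid
    else:
        hi = mid
w_op = 0.5 * (lo + hi)          # shaft speed of the working point
assert 0.0 < w_op < 100.0
assert abs(m_work(w_op) - m_load(w_op)) < 1e-6  # torques balance at w_op
```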

The load jump is smooth enough: the rate of load increase is commensurate with the processes of torque formation in the working drive.

The parameters of the modes, the stator currents and the rotation speed (or the slip value), are determined by the stator-voltage vectors that the corresponding control algorithm "chooses." The diagrams of the speed and stator current are similar to those shown in **Figures 4**–**6**.

**Figure 11.** *Stand scheme.*


**Figure 12.** *Mechanical characteristics of the counter activation of the working drive of the stand with different parameters of the stator voltage (U, f) and the load drive. (a) U/f = const. and (b) U1/f1 > U2/f2.*

As follows from the figures, it is quite difficult to evaluate from these diagrams the efficiency of torque generation in the drive under one or another control method. Since all these algorithms control the stator-voltage frequency to a greater or lesser extent, even with a speed signal available it is difficult to estimate the slip in the motor during the experiments.

A methodology was proposed for evaluating the effectiveness of an AED control method by the slip required for torque formation in the motor.

Slip can be determined most accurately in a real drive from the frequency of the rotor current. To work with this technique, the working motor in the stand was replaced with an electric motor with a phase (wound) rotor, in which rotor current sensors were installed. **Figures 13** and **14** show diagrams of the rotor currents in the working motor of the stand in a circuit with dynamic feedback and in a circuit with vector control and a PID speed controller, respectively. The results were very telling: the frequency of the main harmonic of the rotor current in the drive with DPF (3.5 Hz) is significantly lower than in the drive with a PID speed controller (8.125 Hz).
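The slip estimate behind this comparison is simply the ratio of the rotor-current frequency to the stator-voltage frequency. In the sketch below the rotor-current frequencies are the measured ones quoted above, while the 30 Hz stator frequency is an assumption made only for illustration:

```python
# Relative slip s = f_rotor / f_stator (rotor-current frequency is the
# absolute slip frequency of an induction machine).
f_stator = 30.0          # Hz: assumed operating point, for illustration only
f_rotor_dpf = 3.5        # Hz: measured, drive with DPF (Figure 13)
f_rotor_pid = 8.125      # Hz: measured, drive with vector control + PID (Figure 14)

slip_dpf = f_rotor_dpf / f_stator
slip_pid = f_rotor_pid / f_stator
assert slip_dpf < slip_pid   # the DPF drive needs much less slip
```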

In the analysis of the experiments, the main attention was paid to the frequency of the rotor current, which in the circuit with the DPF was significantly lower. But the amplitude values of the rotor current also turned out to be smaller. To carry out a more detailed assessment of the effectiveness of the choice of the stator-voltage vector, new experiments were conducted.

**Figure 13.** *Rotor currents and the spectrum of rotor current in drive with scalar control and DPF.*

**Figure 14.** *Rotor currents and the spectrum of rotor current in drive with vector control and speed loop.*

#### **3. New experiments with static modes**

#### **3.1 Methodology and course of experiments**

The working drive, in which five control methods are implemented sequentially, is brought to the rotational speeds corresponding to the set stator-voltage frequencies of 20 and 30 Hz (**Table 1**).

The load drive generates a braking torque corresponding to the "counter" mechanical characteristic, with rotational speed settings of 10 and 15 Hz.

Signals of the currents and frequencies in the rotor and stator of the working drive are recorded without load and when the load drive is switched on, in steady-state modes. The data are given in **Tables 1**–**5**.

The values of the voltage *U*1, the amplitude of the stator current *I*1, and the rotation speed recalculated in the FC relative to the set frequency are recorded from the readings of the FC monitor; the rotor current is recorded with an Instek GDS-2062 oscilloscope.

**Table 1.**
*Parameters of the working drive at low load, the load drive is switched off.*

**Table 2.**
*Parameters of the working drive at a load of 70% of* Mn *(setting the speed of the load drive is equivalent to setting "15 Hz").*

**Table 3.**
*Parameters of the working drive at low load, the load drive is switched off.*

**Table 4.**
*Parameters of the working drive at a load of 50% of* Mn *(setting the speed of the load drive is equivalent to setting "10 Hz").*

**Table 5.**
*Parameters of the working drive at a load of 70% of* Mn *(setting the speed of the load drive is equivalent to setting "15 Hz").*

The actual values of the stator-voltage frequency *Fs* and of the ratio *U*1/*f*1 in each experiment are calculated from the measured rotational speed and the frequency of the rotor current. These quantities determine, as indicated in [6, 10], the main magnetic flux in the motor, the value of the critical torque and, according to the Kloss formula, the slip required to create the torque in this static mode. The frequency of the rotor currents determines the actual slip in the working drive.
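That last step can be sketched directly: for a demanded torque M below the critical torque Mk, solving the Kloss formula M = 2Mk/(s/sk + sk/s) on its stable branch (s < sk) gives the required slip. The Mk and sk values below are illustrative, not the stand's:

```python
import math

def kloss_torque(Mk, sk, s):
    """Kloss formula: torque developed at slip s."""
    return 2 * Mk / (s / sk + sk / s)

def slip_for_torque(Mk, sk, M):
    """Stable-branch slip (s < sk) solving the Kloss formula for M < Mk."""
    r = Mk / M
    return sk * (r - math.sqrt(r * r - 1.0))

Mk, sk = 60.0, 0.2            # illustrative critical torque and critical slip
M = 30.0                      # demanded static torque (50% of Mk here)
s = slip_for_torque(Mk, sk, M)
assert s < sk                                   # stable branch
assert abs(kloss_torque(Mk, sk, s) - M) < 1e-9  # round-trip through Kloss
```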

**Comment 1:** The stator current changes little as the load and torque of the "working" electric motor increase (about 0.5 A without load and 0.6–0.7 A under load), so its changes cannot be meaningfully analyzed. The rotor current varies significantly: from 1.3 to 2.6 A without load and from 6 to 16 A under load.

It is the rotor current that, being active, creates the torque. The DPF connection, controlling frequency and voltage simultaneously, retains the value of *U*/*f* and hence the magnitude of the main magnetic flux in the motor; the absolute slip and the amplitude of the rotor currents also change little when the control algorithms change. But the rotation speed under load is regulated better than in drives with "open" algorithms. The DPF connection simply "shifts" the mechanical characteristic parallel to the natural one (**Figure 12a**).

The DPF2 connection increases *U*/*f* and, with it, the main magnetic flux in the motor. The drive currents, the slip, and the reactive power flows are reduced compared with the electric drive with DPF (**Figure 12b** and **Tables 6** and **7**). Since, as seen from the tables, these control algorithms form different stator-voltage vectors, additional experiments were carried out to establish a clear connection between the control algorithm, this vector, and the drive mode under load, that is, the values of the currents and their frequencies.

**Table 6.**
*Parameters of the working drive at light load with only scalar control, the load drive is off.*

**Table 7.**
*Parameters of the working drive with only scalar control, at a load of 50% of* Mn *(setting the speed of the load drive is equivalent to setting "10 Hz").*

Additional experiments were carried out in the following order.

In the scalar control mode, different amplitudes of the stator voltage, from 130 to 200 V, were set for a certain rotation frequency. All process parameters in the stator and rotor were recorded at low load (**Table 6**). The parameters of the processes in the drives when the load drive, rotating in the opposite direction, is turned on are given in **Table 7**.

**Comment 2:** The analysis showed that when the amplitude and frequency of the voltage across the motor stator are set similar to the parameters "selected" in the DPF and DPF2 modes, the process parameters in the rotor and stator turn out to be the same under scalar control.

At the maximum value of *U*/*f* = 6.8, the minimum values of the rotor current (1.72 A) and slip (4.48 Hz) under load were recorded, and 0.75 A and 2.63 Hz, respectively, at low load.

This confirms the assumption that it is the parameters of the stator voltage (*U*, *f*), which determine the main magnetic flux in the motor, that determine the operating mode of the AED, namely the slip and the rotor currents. The larger the main flux, the smaller the slip and the rotor current.
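The trend can be illustrated with the Kloss formula under the common approximation that the critical torque scales with the square of U/f (i.e., with the main flux); the proportionality constant k below is an arbitrary assumption:

```python
import math

def slip_for_torque(Mk, sk, M):
    """Stable-branch slip (s < sk) solving the Kloss formula for M < Mk."""
    r = Mk / M
    return sk * (r - math.sqrt(r * r - 1.0))

sk, M = 0.2, 30.0             # illustrative critical slip and demanded torque
k = 1.2                       # assumed proportionality Mk = k * (U/f)^2

s_low = slip_for_torque(k * 6.0 ** 2, sk, M)   # smaller U/f, smaller flux
s_high = slip_for_torque(k * 6.8 ** 2, sk, M)  # U/f = 6.8, the reported maximum
assert s_high < s_low          # larger flux: less slip for the same load torque
```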

The control algorithms (SC, SVC, DPF, and DPF2) only "select" the values of *U* and *f* according to the task for the rotation speed, the load torque and in accordance with the control algorithm.

The DPF control algorithm relies on a continuous nonlinear transfer function, a more accurate interpretation of the AED, since it does not make the assumptions of vector control (only the main harmonics in the currents and EMFs of the motor, *ω*1 = *ω*, *dψ*2/*dt* = 0), and therefore "selects" the control more efficiently.

This correction method is certainly more promising both for static and quasi-static modes and for operation under complex disturbances and in complex technological systems (special vehicles, wind turbines, drones, technological complexes, power engineering, etc.).

#### **3.2 The discussion of the results**

As the experiments have shown, the static modes of operation of the AED, for given *U* and *f*, can be accurately described by vector diagrams and mechanical characteristics. At the same time, the amount of slip required to create the torque must be clarified.

The "selection" of the values of the amplitude and frequency of the stator voltage is carried out by the control algorithm—SC, SVC, and DPF. With an increase in the main flux in the motor, the stator current at low load increases, while the rotor current and slip at high load fall. That is, positive torque feedback provides low no-load stator currents and reduced rotor currents under load.

Traditional control algorithms (SC, SVC) do not choose *U* and *f* optimally, since they are based on the vector equations and on several assumptions that are very erroneous for operation under load even in static modes (*ω*1 = *ω*, for example). The control transition paths are selected incorrectly.

The use of continuous nonlinear transfer functions interpreting the AED, and of corrections by local feedbacks based on these functions, shapes the transition processes (including the transition trajectories) in complex operating modes with variable load.

This determines the advantages and prospects of using continuous transfer functions and continuous local corrections and structures in the AEDs of complex technological complexes.

This is especially important because, in terms of price, reliability and overload capacity, AED has no alternatives.

#### **3.3 About vector control**

Theoretical "understanding" of vector control problems shows that sequential correction, which is vector control (**Figure 3**) with simplifications of the original equations, passing through nonlinear blocks with delays (the pulsed power part of the FC) is very inefficient and experiments confirm this, although statics is closest to the original vector equations. We should expect even greater problems in operating modes with significant dynamics of loads and rotational speeds. The sequential correction that vector control "tries" to implement does not work, because it is based on many erroneous simplifications and is ineffective as a sequential correction of the nonlinear structure of torque generation in asynchronous motors.

#### **3.4 About direct torque control (DTC) technology**

Over the 30 years of direct torque control (DTC) technology, a lot of work has been devoted to it. An example of such a scheme is shown in **Figure 15**.

Most often, there are no detailed descriptions of the technology, including the characteristics of the flux-linkage "observer" (its accuracy and dynamics), which fundamentally affect all the processes. Another obvious problem, ignored in descriptions of the technology, is the presence of higher harmonics in all the signals. This suggests a certain conditionality of the published results.

The very appearance of this kind of vector control is probably a reaction to the poor-quality operation of vector control under torque disturbances; it represents the formation of a transition trajectory from one vector state to another that is improved compared to the plain vector transition, in which no attention is paid to this trajectory at all. The trajectory is formed by means of the "basic" vectors. The algorithm software is very complex, and companies other than ABB do not use it. This confirms that discontinuous vector control does not provide trajectories of transitions from one state to another.

**Figure 15.** *Block diagram of direct torque control (DTC) in an asynchronous electric drive.*

Regarding the speed of torque formation, which is often mentioned in descriptions of the technology [21–24]: a transition time of 1–2 ms quoted without specifying the drive power, together with the accompanying charts, is difficult to consider a convincing argument. In [21] a rise time of 50 ms is also fast, but already realistic. In our experiments (**Figure 10a**) the speed-recovery process takes 80–100 ms, which for the torque is commensurate with DTC; moreover, it is implemented in any FC and has prospects for improvement.

It should be noted that the articles on direct torque control [21–24] contain no mathematical description of the dynamics at all: no differential equations, etc.

Let us turn to the local connections based on representing the asynchronous electric motor by a nonlinear transfer function.

The transfer function of the link that forms the torque in an asynchronous electric motor, both the initial one and the corrected one, is determined by the transition conditions, that is, by the initial and final conditions of the change in the load and in the stator-voltage frequency.

The continuity of the transfer function contributes to optimizing the formation of the transition trajectory from one state to another. With the original vector equations, this transition is not determined, and an additional algorithm is required: not only the basic vectors, but also the trajectories of the transitions to them.

Since in all the experiments, that is, at different operating speeds and loads, with a deep positive connection on the stator current (more precisely, on the developed torque or the slip value), the amplitudes of the currents are smaller than with the traditional control methods (scalar and vector), it can be reasonably argued that strengthening the main magnetic flux in asynchronous motors reduces the phase shift between the reduced vectors of the rotor and stator currents of the motor.

#### **4. Conclusion**

The static modes of operation of the AED under the control methods considered are described fairly accurately by nonlinear transfer functions (NTFs), but transitions between static states are described incorrectly by these algorithms; under load, the operation of these algorithms, in closed vector circuits, is incorrect and leads to significantly inefficient modes.

The local connection formed on the basis of the NTF, a positive dynamic feedback, "selects" the parameters of the stator voltage (*U*, *f*) that are best for parrying static disturbing torques.

The discontinuous vector equations of the AED, on the basis of which correction algorithms are formed by traditional methods, do not make it possible to obtain optimal modes of parrying load surges.

The advantages in the quality of operation of control systems using the proposed local connections are even more obvious with large variable external disturbances and when working in complex ACS.

It is proposed to introduce into the drive a positive feedback on the amplitude of the rotor current as an analogue of the feedback on the torque.

The operation of the AED in complex systems and under complex disturbing influences is not described, in principle (with sufficient accuracy), by the method of vector equations; hence the endless procedures of selection and calculation of motor and drive parameters, automatic identification, and, finally, DTC technology.

Feedback on the torque, or on the value of the active component of the stator current under load, reduces the stator and rotor currents and the slip, which is extremely "useful" for drives with large and varying loads.

#### **Author details**

Vladimir L. Kodkin\*, Alexandr S. Anikin, Alexandr A. Baldenkov and Natalia A. Loginova South Ural State University, Chelyabinsk, Russian Federation

\*Address all correspondence to: kodkina2@mail.ru

© 2022 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

### **References**

[1] Usoltsev AA. Vector Control of Asynchronous Motors: Tutorial. St. Petersburg: ITMO; 2002. 120 p. Available from: http://servomotors.ru/documentation/frequency_control_of_asynchronous_motors/chastupr.pdf

[2] Park R, Robertson B. The reactances of synchronous machines. Transactions of the American Institute of Electrical Engineers. 1928;**47**(2):514-535

[3] Vas P. Vector Control of AC Machines. New York, NY, USA: Oxford University Press; 1990

[4] Mishchenko VA. The vector control method of electromechanical converters. Electrical Engineering. 2004;**7**:47-51

[5] Kodkin VL, Anikin AS, Baldenkov AA. Identification of AC drives from the families of frequency characteristics. Russian Electrical Engineering. 2020;**91**(12):756-760. DOI: 10.3103/S1068371220120081

[6] Kodkin V, Anikin A. On the physical nature of frequency control problems of induction motor drives. Energies. 2021; **14**(14):4246. DOI: 10.3390/en14144246

[7] Kodkin VL, Anikin AS, Baldenkov AA. The dynamics identification of asynchronous electric drives via frequency response. International Journal of Power Electronics and Drive Systems. 2019;**10**(1):66-73. DOI: 10.11591/ijpeds.v10n1.pp66-73

[8] Kodkin VL, Anikin AS. Experimental study of the VFD's speed stabilization efficiency under torque disturbances. International Journal of Power Electronics and Drive Systems. 2021;**12**(1):80-87. DOI: 10.11591/ijpeds.v12.i1.pp80-87

[9] Kodkin V, Baldenkov A, Anikin A. A method for assessing the stability of digital automatic control systems (ACS) with discrete elements. Hypothesis and simulation results. Energies. 2021; **14**(20):6561. DOI: 10.3390/en14206561

[10] Kodkin VL, Anikin AS, Baldenkov AA. Stabilization of the stator and rotor flux linkage of the induction motor in the asynchronous electric drives with frequency regulation. International Journal of Power Electronics and Drive Systems. 2020;**11**(1):213-219. DOI: 10.11591/ijpeds.v11.i1.pp213-219

[11] Kodkin VL, Anikin AS, Baldenkov AA. Assessing the efficiency of control systems of asynchronous electric drives using spectral analysis of rotor currents. Russian Electrical Engineering. 2021;**92**(1):32-37. DOI: 10.3103/S1068371221010065

[12] Kodkin VL, Anikin AS, Shmarin YA. Dynamic load disturbance correction for alternative current electric drives. In: 2016 2nd International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM): Proceedings. South Ural State University (National Research University), Chelyabinsk, Russia; 2016. DOI: 10.1109/SIBCON.2015.7146978

[13] Kodkin VL. Methods of optimizing the speed and accuracy of optical complex guidance systems based on equivalence of automatic control system domain of attraction and unconditional stability of their equivalent circuits. In: Proceedings of SPIE—The International Society for Optical Engineering. 2016

[14] Kodkin VL, Anikin AS. Frequency control of asynchronous electric drives in transport. In: 2015 International Siberian Conference on Control and Communications (SIBCON 2015)— Proceedings. 2015. DOI: 10.1109/ SIBCON.2015.7146978

[15] Kodkin VL, Anikin AS, Shmarin YA. Effective frequency control for induction electric drives under overloading. Russian Electrical Engineering. 2014;**85**(10):641-644. DOI: 10.3103/S1068371214100101

[16] Kodkin VL, Anikin AS, Baldenkov AA. Experimental research of asynchronous electric drive with positive dynamic feedback on stator current. In: 2017 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM): Proceedings. 2017. DOI: 10.1109/ICIEAM.2017.8076179

[17] Kodkin VL, Anikin AS, Baldenkov AA. Spectral analysis of rotor currents in frequency-controlled electric drives. In: 2nd International Conference on Automation, Mechanical and Electrical Engineering (AMEE 2017): Proceedings. 2017. DOI: 10.2991/amee-17.2017.26

[18] Kodkin VL, Anikin AS, Baldenkov AA. Families of frequency characteristics, as a basis for the identification of asynchronous electric drives. In: 2018 International Russian Automation Conference (RusAutoCon). 2018

[19] Kodkin VL, Anikin AS, Baldenkov AA. Analysis of stability of electric drives as non-linear systems according to Popov criterion adjusted to amplitude and phase frequency characteristics of its elements. In: 2nd International Conference on Applied Mathematics, Simulation and Modelling (AMSM 2017)—Proceedings. 2017. pp. 7-14. DOI: 10.12783/dtetr/amsm2017/14810

[20] Kodkin VL, Anikin AS, Baldenkov AA. The analysis of the quality of the frequency control of induction motor carried out on the basis of the processes in the rotor circuit. Journal of Physics Conference Series. 2018;**944**(1):012052

[21] Karandeev DY, Engel EA. Direct torque control of an induction motor using adaptive neurocontroller in conditions of uncertainty. Internet Journal Science. 2015;**7**(5):1-9. DOI: 10.15862/91TVN515

[22] Wang F et al. Advanced control strategies of induction machine: Field oriented control, direct torque control and model predictive control. Energies. 2018;**11**(1):120. DOI: 10.3390/en11010120

[23] Alsofyani IM, Idris NRN. Simple flux regulation for improving state estimation at very low and zero speed of a speed sensorless direct torque control of an induction motor. IEEE Transactions on Power Electronics. 2016;**31**(4):3027-3035. DOI: 10.1109/TPEL.2015.2447731

[24] Toufouti R. Direct torque control for induction motor using intelligent techniques. Journal of Theoretical and Applied Information Technology. 2007; **3**(3):35-44

*Control Systems in Engineering and Optimization Techniques. Edited by P. Balasubramaniam, Sathiyaraj Thambiayya, Kuru Ratnavelu and JinRong Wang. Published in London, UK © 2022 IntechOpen.*