**2. Computational Lyapunov stability analysis**

#### **2.1 Control Lyapunov functions**

The concept of a control Lyapunov function (CLF) is a very useful tool for solving stabilization tasks. We seek to stabilize a nonlinear system by selecting a Lyapunov function *V(x)* and then trying to find a feedback control *u(x)* that renders *V̇*(*x*, *u*(*x*)) negative definite. While this attempt may fail for an arbitrary choice of *V*, when *V(x)* is a CLF it is easier to find a stabilizing control law *u(x)*.

Similarly to the system (5), we can define a control system as follows:

$$
\dot{\mathfrak{x}} = f(\mathfrak{x}, \mathfrak{u}) \tag{13}
$$


where *u* ∈ *U* ⊂ *R<sup>m</sup>* is the control. We speak about an *open-loop control* if u is a function of time, *u = u(t)*, and about a *closed-loop control* if *u = k(x)*. The closed-loop control is in fact the *feedback control*. We also speak about a *feedback stabilized system* if the feedback has been fixed, *u = k(x)*, and the equilibrium at the origin has a desired stability property.

The system (13) is called *locally asymptotically null-controllable* [4] if for every *ρ* in a neighborhood of the origin there is an open-loop control u such that the solution of the system with initial value *ρ* tends asymptotically towards the origin, i.e. if it is possible to steer the system state asymptotically to the origin. A *control Lyapunov function* (CLF) for such a system, introduced by Sontag in 1983 [5], is a positive definite function V such that

$$\inf\_{\mathbf{u}\in U} \nabla V(\mathbf{x}) \bullet \mathbf{f}(\mathbf{x}, \mathbf{u}) \le -\gamma(\|\mathbf{x}\|) \tag{14}$$

where γ is a comparison function [4]. Asymptotic null-controllability cannot, in general, be characterized by smooth control Lyapunov functions, and one must resort to more general notions of differentiability, such as the Dini derivative or the proximal sub-differential [6].
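To make the CLF condition (14) concrete, here is a small numerical sketch on a toy example of my choosing (not from the chapter): the scalar system ẋ = u with bounded controls U = [−1, 1] and candidate V(x) = x². Here inf<sub>u</sub> ∇V(x)·f(x, u) = −2|x|, so the CLF inequality holds with the comparison function γ(r) = r.

```python
import numpy as np

def f(x, u):          # toy control system: x' = u
    return u

def gradV(x):         # V(x) = x^2, so grad V = 2x
    return 2.0 * x

gamma = lambda r: r   # comparison function gamma(r) = r

U = np.linspace(-1.0, 1.0, 201)          # admissible control values
xs = np.linspace(-2.0, 2.0, 401)
xs = xs[np.abs(xs) > 1e-9]               # exclude the origin

# CLF condition (14): inf_u grad V(x) . f(x, u) <= -gamma(|x|)
worst = max(min(gradV(x) * f(x, u) for u in U) + gamma(abs(x)) for x in xs)
clf_holds = worst <= 1e-9
print(clf_holds)
```

The grid search over u stands in for the infimum; for this example the minimizer is simply u = −sign(x).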

For asymptotically null-controllable systems the equilibrium at the origin is sometimes referred to as *weakly asymptotically stable*, in contrast to a *strongly asymptotically stable* equilibrium, for which every choice of **u** leads to states being attracted asymptotically to the equilibrium.

As mentioned in the previous section, the stability concept has many facets: we have the classic Lagrange, Dirichlet and Lyapunov stability but, depending on the context, we also have input–output stability, hyperstability and input-to-state stability [3]. The last one, input-to-state stability (ISS), was introduced by Sontag [7] and is interesting because it characterizes a certain kind of stability at the origin by imposing on the Lyapunov function V the condition:

$$
\nabla V(\mathbf{x}) \bullet f(\mathbf{x}, \mathbf{u}) \le -\gamma(\|\mathbf{x}\|) + \alpha(\|\mathbf{u}\|) \tag{15}
$$

where α and γ are comparison functions [4]. The origin is thus an asymptotically stable equilibrium of the system ẋ = *f*(*x*, 0) and a practically stable equilibrium of the system ẋ = *f*(*x*, *u*) for ∥*u*∥ ≤ *u<sub>max</sub>*, with *u<sub>max</sub>* > 0 a (not too large) constant. Moreover, the smaller *u<sub>max</sub>* is, usually interpreted as a bound on the perturbation u, the closer the solutions of the system will be to zero in the long run.
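To see condition (15) in action, consider a toy example of my choosing: ẋ = −x + u with V(x) = x². Then ∇V·f = 2x(−x + u), and Young's inequality 2xu ≤ x² + u² gives ∇V·f ≤ −x² + u², i.e. ISS with γ(r) = α(r) = r². A numerical spot check:

```python
import numpy as np

rng = np.random.default_rng(0)

def iss_margin(x, u):
    # V(x) = x^2 for the system x' = -x + u.
    # grad V . f = 2x(-x + u); claimed ISS bound: -gamma(|x|) + alpha(|u|)
    # with gamma(r) = r^2 and alpha(r) = r^2 (Young: 2xu <= x^2 + u^2).
    lhs = 2.0 * x * (-x + u)
    rhs = -x**2 + u**2
    return rhs - lhs                     # >= 0 whenever condition (15) holds

samples = rng.uniform(-10.0, 10.0, size=(10000, 2))
iss_holds = all(iss_margin(x, u) >= -1e-9 for x, u in samples)
print(iss_holds)
```

The margin simplifies algebraically to (x − u)², which is why the check never fails.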

#### **2.2 Stability analysis using sum of squares Lyapunov functions**

The stability analysis of dynamical systems is basically carried out by Lyapunov theory. For linear systems, the construction of an "energy-like function" – the Lyapunov function, fulfilling certain positivity conditions – is not difficult. For a system ẋ = *Ax* this amounts to finding a matrix P such that *A<sup>T</sup>P* + *PA* is negative definite [8]. The associated Lyapunov function is then given by *V*(*x*) = *x<sup>T</sup>Px*. Although it seems obvious in hindsight, it was noticed only recently that, in this context, both *V*(*x*) and −*V̇*(*x*) are *sum of squares functions*!
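For the linear case, the Lyapunov equation A<sup>T</sup>P + PA = −Q can be solved directly with off-the-shelf tools. A minimal sketch, assuming NumPy and SciPy are available (the matrix A below is an arbitrary illustrative choice, not from the chapter):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])            # Hurwitz: eigenvalues -1, -2
Q = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q;
# with a = A.T it yields P satisfying  A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)

residual = A.T @ P + P @ A + Q          # should be numerically zero
P_is_posdef = np.all(np.linalg.eigvalsh(P) > 0)
print(np.allclose(residual, 0), P_is_posdef)
```

Since A is Hurwitz and Q is positive definite, the solution P is guaranteed to be symmetric positive definite, so V(x) = x<sup>T</sup>Px is a valid Lyapunov function.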

Sum of squares (SOS) optimization is a relatively new technique at the interface between convex optimization and computational algebra. It has recently had a significant impact not only in optimization but in several other disciplines as well, especially

### *Qualitative Analysis for Controllable Dynamical Systems: Stability with Control Lyapunov… DOI: http://dx.doi.org/10.5772/intechopen.96872*

in control theory. Besides the stability analysis of nonlinear systems, SOS has opened a new direction in approaching different types of systems and answering different analysis questions. The SOS technique generalizes a well-known computational tool of linear robust control theory, linear matrix inequalities (LMIs). Parrilo and Ahmadi have made important contributions in this field [9]. Using LMIs in different analysis problems is advantageous, since efficient algorithms have been developed in the framework of semi-definite programming (SDP) [8, 9]. The SOS technique uses these types of algorithms, but all questions are formulated at the polynomial level, or in polynomial-matrix terms.

Regarding Lyapunov functions, constructing them "by hand" requires analytic skill from the researcher and, moreover, is feasible only in small state dimensions. When the vector field f and the Lyapunov function candidate V are both polynomial, the Lyapunov conditions are polynomial non-negativity conditions, which are quite hard to test. This could be one of the reasons for the lack of efficiency in the algorithmic construction of a Lyapunov function. But if the non-negativity conditions are replaced by SOS conditions, then constructing the Lyapunov function can be done efficiently using semi-definite programming.
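The computational core of the SOS approach is the Gram matrix characterization: a polynomial p is SOS exactly when p = z<sup>T</sup>Qz for some positive semi-definite matrix Q over a vector of monomials z, and searching for such a Q is an SDP. A small sketch on a standard textbook example (the polynomial and its Gram matrix below are classical illustrations, not taken from this chapter):

```python
import numpy as np

# p(x, y) = 2x^4 + 2x^3 y - x^2 y^2 + 5y^4, a classic SOS example:
# p = 1/2 * [(2x^2 + xy - 3y^2)^2 + (3xy + y^2)^2]
p = lambda x, y: 2*x**4 + 2*x**3*y - x**2*y**2 + 5*y**4

# Gram matrix over the monomial vector z = (x^2, xy, y^2):
# p = z^T Q z, and Q >= 0 certifies that p is a sum of squares.
Q = np.array([[ 2.0, 1.0, -3.0],
              [ 1.0, 5.0,  0.0],
              [-3.0, 0.0,  5.0]])

rng = np.random.default_rng(1)
ok = True
for x, y in rng.uniform(-3, 3, size=(1000, 2)):
    z = np.array([x*x, x*y, y*y])
    ok &= np.isclose(z @ Q @ z, p(x, y))   # identity p = z^T Q z

eigs = np.linalg.eigvalsh(Q)               # PSD check (min eigenvalue is 0)
is_sos_certificate = bool(ok) and eigs.min() >= -1e-9
print(is_sos_certificate)
```

In practice an SDP solver finds Q automatically; here Q is written down explicitly so the certificate can be verified without any solver.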

Many control problems follow the same two steps: i) recasting the primal problem as a Lyapunov-type problem and then ii) constructing a sum-of-squares relaxation of the problem. We present in what follows a theoretical tool which is basic in the sum of squares computational approach [8].

Given *x* ∈ *R<sup>n</sup>*, we denote the ring of multivariable polynomials with real coefficients by *R[x]* and the subset of sum-of-squares polynomials in the variable x by Σ*[x]*. Sometimes it may be necessary to indicate the maximum degree of a polynomial or sum-of-squares polynomial, in which case we use the subscript notation *R<sub>d</sub>[x]* or Σ*<sub>d</sub>[x]*, where d is a positive integer.

**Theorem 3**. Let *x\** = 0 ∈ *D* ⊂ *R<sup>n</sup>* be an equilibrium point of (7). If there exists a continuously differentiable function *V* : *D* → *R* such that the following hold:

$$V(\mathbf{0}) = 0; \tag{16}$$

$$V(\mathbf{x}) > 0, \; \forall \mathbf{x} \in D \setminus \{0\} \tag{17}$$

$$
\dot{V}(\mathbf{x}) \le 0, \; \forall \mathbf{x} \in D. \tag{18}
$$

Then *x\** is stable. Moreover, *x\** is asymptotically stable if (11) holds.
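As an illustration of Theorem 3 on a hand-picked toy system (my choice, not from the chapter): for the damped oscillator ẋ₁ = x₂, ẋ₂ = −x₁ − x₂ with V = x₁² + x₂², one computes V̇ = −2x₂² ≤ 0, so conditions (16)–(18) hold. A grid check:

```python
import numpy as np

# Damped linear oscillator: x1' = x2, x2' = -x1 - x2
f = lambda x1, x2: (x2, -x1 - x2)
V = lambda x1, x2: x1**2 + x2**2          # candidate Lyapunov function

def Vdot(x1, x2):
    dx1, dx2 = f(x1, x2)
    return 2*x1*dx1 + 2*x2*dx2            # grad V . f = -2*x2**2

grid = np.linspace(-5, 5, 101)
pts = [(a, b) for a in grid for b in grid if (a, b) != (0.0, 0.0)]

cond_16 = V(0.0, 0.0) == 0.0              # V(0) = 0
cond_17 = all(V(a, b) > 0 for a, b in pts)       # V > 0 on D \ {0}
cond_18 = all(Vdot(a, b) <= 1e-12 for a, b in pts)  # Vdot <= 0 on D
thm3_holds = cond_16 and cond_17 and cond_18
print(thm3_holds)
```

A grid check is of course not a proof; for polynomial data the same conditions become SOS constraints that an SDP solver can certify exactly.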

**Theorem 4**. Let *x\** = 0 be an equilibrium of (7) and assume that *D* ⊂ *R<sup>n</sup>* is a given domain which contains *x\**. Assume there exist a continuously differentiable function *V* : *D* → *R* and positive constants k<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub> such that

$$k\_1 \|\mathbf{x}\|^p \le V(\mathbf{x}) \le k\_2 \|\mathbf{x}\|^p \tag{19}$$

$$
\dot{V}(\mathbf{x}) \le -k\_3 \|\mathbf{x}\|^p, \quad p \in \mathbb{Z} \tag{20}
$$

for all *t* ≥ 0, *x* ∈ *D*. Then *x\** is exponentially stable. Furthermore, if the assumptions hold with *D* = *R<sup>n</sup>*, then *x\** is globally exponentially stable.
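A minimal numeric illustration of Theorem 4 on a toy example (constants and system are my choices): for ẋ = −x and V = x², conditions (19)–(20) hold with k₁ = k₂ = 1, k₃ = 2, p = 2, and the standard comparison argument yields the explicit exponential bound checked below:

```python
import numpy as np

# x' = -x with V(x) = x^2: (19) holds with k1 = k2 = 1, p = 2,
# and (20) with k3 = 2, since Vdot = 2x(-x) = -2x^2.
k1, k2, k3, p = 1.0, 1.0, 2.0, 2

t = np.linspace(0.0, 5.0, 501)
x0 = 3.0
x = x0 * np.exp(-t)                      # exact solution of x' = -x

# Comparison argument: V(x(t)) <= V(x0) * exp(-(k3/k2) t), hence
# |x(t)| <= (k2/k1)^(1/p) * |x0| * exp(-(k3/(k2*p)) t).
bound = (k2 / k1)**(1.0/p) * abs(x0) * np.exp(-(k3 / (k2 * p)) * t)
exp_stable = np.all(np.abs(x) <= bound + 1e-12)
print(exp_stable)
```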

The following theorem illustrates how sum-of-squares programming can be used to construct a polynomial stability certificate, in this case a Lyapunov function, for an equilibrium point of (7). The domain D must be defined; in particular, D must be representable as a semi-algebraic set. If we consider that the domain of interest D is represented by all points that satisfy *β*(*x*) ≤ 0, *β* ∈ *R[x]*, then we have the following result [8].

**Theorem 5.** If there exist a polynomial function V, sum-of-squares polynomials r<sub>1</sub>, r<sub>2</sub> and positive definite polynomial functions *φ*<sub>1</sub>, *φ*<sub>2</sub>, all of bounded degree, such that


*Advances in Dynamical Systems Theory, Models, Algorithms and Applications*


$$V(\mathbf{x}) + r_1(\mathbf{x})\beta(\mathbf{x}) - \varphi_1(\mathbf{x}) \in \Sigma[\mathbf{x}] \tag{21}$$

$$-\dot{V}(\mathbf{x}) + r_2(\mathbf{x})\beta(\mathbf{x}) - \varphi_2(\mathbf{x}) \in \Sigma[\mathbf{x}] \tag{22}$$

then the equilibrium point *x\** of (7) is asymptotically stable.

A few questions immediately come to mind: do stable polynomial systems always admit a sum-of-squares Lyapunov function? How conservative are we being by limiting ourselves to positive polynomials that admit a sum-of-squares decomposition? Can we determine a priori the degree of the Lyapunov function required? The answer to the first question is simply: no.

There is a vast literature on the SOS method for computing Lyapunov functions in various settings and for different kinds of systems [4]. Among these, Parrilo gave important results on SOS and SDP (semi-definite programming) methods for finding Lyapunov functions [9]. As specified above, he introduced an efficient LMI program for finding Lyapunov functions as sums of squares for polynomial systems. The above setting is tailored to finding global Lyapunov functions, but if we are interested in finding SOS Lyapunov functions on a compact domain, it is important to mention the "Positivstellensatz" as a useful tool [9].

In the case of a dynamical system of a simple polynomial form, the stability study and the Lyapunov function search can begin with a test function which facilitates the further analysis, as we see in section 3.

The central aim in the study of the mixing flow dynamical system was associated rather with the fluid mechanics standpoint, namely analyzing the *efficiency of mixing* [10, 11]. This is a concept which implies the analysis of deformation efficiencies in length and surface for the material mixed in the basic fluid. The associated physical phenomena are the *multiphase flow* phenomena, and the analysis and numeric simulation of these complex flows is under study. Briefly, in order to obtain a good behavior for the deformation efficiencies, the mathematical context must take into account the following three stages:

The basic deformation measure is the *deformation gradient* **F**:

$$\mathbf{F} = \left(\nabla_X \Phi_t(\mathbf{X})\right)^T, \quad F_{ij} = \frac{\partial x_i}{\partial X_j}. \tag{25}$$

For a material filament and correspondingly for a material surface, in a mixing flow, another two basic deformation measures are defined: the *length deformation λ* and the *surface deformation η*. In this context, a specific analysis for the deformations of infinitesimal elements is the so-called "*good mixing concept*", related to the bounds of the quantities λ and η. The class of flows with a special form of **F** is of very large interest in the literature, as it contains the so-called "constant stretch history motions" (CSHM flows). Details can be found in [10].

When studying mixing flow phenomena, one starts from the widespread kinematic 2d mixing flow dynamical system [11]:

$$\begin{cases} \dot{x}_1 = G x_2 \\ \dot{x}_2 = K G x_1 \end{cases}, \quad -1 < K < 1, \; G \in R. \tag{26}$$

Although this is a linear model, when associating the corresponding initial condition

$$x_1(0) = x_1(t=0) = X_1; \quad x_2(0) = x_2(t=0) = X_2, \tag{27}$$

a complex solution is obtained for the Cauchy problem (26)-(27) [11]. From the geometric standpoint, the streamlines of the above model satisfy the relation

$$x_2^2 - K x_1^2 = const. \tag{28}$$

which corresponds to ellipses with the axes ratio $\left(1/|K|\right)^{1/2}$ if K is negative, and to hyperbolas with the angle $\beta = \arctan\left(1/|K|\right)^{1/2}$ between the extension axis and x<sub>2</sub>, if K is positive [10].

To this broad isochoric flow we can easily associate the corresponding 3d dynamical system

$$\begin{cases} \dot{x}_1 = G \cdot x_2 \\ \dot{x}_2 = K \cdot G \cdot x_1 \\ \dot{x}_3 = c \end{cases}, \quad -1 < K < 1, \; c = const.$$

with the third component standing for the moving velocity of the system. Although the 2d model is linear, when associated with the corresponding initial conditions it yields a rich behavior.

In the 3d case, the non-periodic model exhibits a complicated behavior. A lot of comparative computational analysis has proved the great influence of the parameters on the model behavior, leading to far-from-equilibrium models [11]. The perturbed model was also taken into account, and it was found that its sensitivity with respect to the parameters is significant, both in the 2d and the 3d case [11, 12].

#### **3.2 Existence of a CLF for the mixing flow dynamical system in a slightly perturbed form**
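Before treating the perturbed case, a quick numerical sanity check of the unperturbed 2d kinematic mixing flow ẋ₁ = G x₂, ẋ₂ = K G x₁ is instructive: the streamline quantity x₂² − K x₁² is an exact invariant, since its time derivative is 2x₂·KGx₁ − 2Kx₁·Gx₂ = 0. A short sketch with illustrative parameter values (my choices, assuming NumPy):

```python
import numpy as np

G, K = 1.5, -0.5        # model parameters, -1 < K < 1 (illustrative values)

def rhs(x):             # 2d kinematic mixing flow: x1' = G x2, x2' = K G x1
    return np.array([G * x[1], K * G * x[0]])

def rk4_step(x, h):     # one classical Runge-Kutta 4 step
    k1 = rhs(x)
    k2 = rhs(x + h/2 * k1)
    k3 = rhs(x + h/2 * k2)
    k4 = rhs(x + h * k3)
    return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

x = np.array([1.0, 0.0])
invariant0 = x[1]**2 - K * x[0]**2       # streamline relation, constant in t

h = 0.001
for _ in range(5000):                    # integrate up to t = 5
    x = rk4_step(x, h)

invariant_err = abs((x[1]**2 - K * x[0]**2) - invariant0)
print(invariant_err < 1e-8)
```

With K < 0 the trajectory stays on an ellipse; with 0 < K < 1 it follows a hyperbola, matching the geometric description of the streamlines.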
