**1. Introduction: general outlines in feedback and stability for dynamical systems**

The term *feedback* refers to a situation in which two (or more) dynamical systems are connected together such that each system influences the other and their dynamics are thus strongly coupled. Feedback is a powerful idea whose principle is correcting the difference between desired and actual performance. It has been applied, rediscovered and patented in many different engineering contexts. Feedback methods have improved considerably over time and, owing to their remarkable properties, these improvements have had a significant impact on models throughout the applied sciences.

A basic feature of feedback is that it changes the dynamics of a system. By modifying the behavior in the way required by the application, we can stabilize a model that is initially unstable, or obtain a responsive system from a sluggish one.

A survey on control process strategies and applications shows that: (1) a variety of nonlinear controller design techniques are based on input–output linearization; (2) few experimental studies of these techniques have been presented; and (3) many important problems remain unsolved [1].

### **1.1 Feedback model outlines**

There are several types of finite-dimensional, nonlinear process models. The *continuous-time, state-space* model has the form:

$$
\dot{\boldsymbol{x}} = \boldsymbol{f}(\boldsymbol{x}) + \mathbf{g}(\boldsymbol{x})\boldsymbol{u} \tag{1}
$$

$$
\boldsymbol{y} = \boldsymbol{h}(\boldsymbol{x}).
$$

*Qualitative Analysis for Controllable Dynamical Systems: Stability with Control Lyapunov…*

*DOI: http://dx.doi.org/10.5772/intechopen.96872*

**x** is the vector of state variables, *x* ∈ *R<sup>n</sup>*; **u** is the vector of input variables and **y** the vector of controlled output variables, *u*, *y* ∈ *R<sup>m</sup>*; **f** and **h** are vectors of nonlinear functions, *f* ∈ *R<sup>n</sup>*, *h* ∈ *R<sup>m</sup>*; finally, **g** is a matrix of nonlinear functions.

The single-input, single-output (SISO) case, where *m* = 1, is generally the easiest and is well suited for introducing the basic concepts. Consider the Jacobian linearization of the nonlinear model

$$\dot{\boldsymbol{x}} = \left[ \frac{\partial \boldsymbol{f}(\boldsymbol{x}_{0})}{\partial \boldsymbol{x}} + \frac{\partial \mathbf{g}(\boldsymbol{x}_{0})}{\partial \boldsymbol{x}} \boldsymbol{u}_{0} \right] (\boldsymbol{x} - \boldsymbol{x}_{0}) + \mathbf{g}(\boldsymbol{x}_{0}) (\boldsymbol{u} - \boldsymbol{u}_{0}) \tag{2}$$

$$\boldsymbol{y} - \boldsymbol{y}_{0} = \frac{\partial \boldsymbol{h}(\boldsymbol{x}_{0})}{\partial \boldsymbol{x}} (\boldsymbol{x} - \boldsymbol{x}_{0}).$$

Using deviation variables, the Jacobian model can be written as a linear state-space system

$$
\dot{\boldsymbol{x}} = A\boldsymbol{x} + B\boldsymbol{u} \tag{3}
$$

$$
\boldsymbol{y} = C\boldsymbol{x}
$$

with obvious definitions for the matrices A, B and C. It is important to note that the Jacobian model is an exact representation of the nonlinear model only at the point (*x*<sub>0</sub>, *y*<sub>0</sub>). As a result, a control strategy based on a linearized model may show unsatisfactory performance and robustness at other operating points.
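The Jacobian matrices of Eqs. (2)–(3) can be approximated numerically by central finite differences. The sketch below is illustrative only: the pendulum-type system and the helper `linearize` are our own assumptions, not taken from the chapter.

```python
import numpy as np

def linearize(f, g, h, x0, u0, eps=1e-6):
    """Finite-difference Jacobian linearization of x' = f(x) + g(x) u, y = h(x)
    around the operating point (x0, u0), returning the matrices of Eq. (3)."""
    n = len(x0)
    F = lambda x, u: f(x) + g(x) @ u          # full right-hand side
    A = np.zeros((n, n))
    for j in range(n):                        # A = d(f + g u)/dx at (x0, u0)
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (F(x0 + dx, u0) - F(x0 - dx, u0)) / (2 * eps)
    B = g(x0)                                 # B = g(x0)
    m = len(h(x0))
    C = np.zeros((m, n))
    for j in range(n):                        # C = dh/dx at x0
        dx = np.zeros(n); dx[j] = eps
        C[:, j] = (h(x0 + dx) - h(x0 - dx)) / (2 * eps)
    return A, B, C

# Illustrative system: controlled pendulum x1' = x2, x2' = -sin x1 + u, y = x1.
f = lambda x: np.array([x[1], -np.sin(x[0])])
g = lambda x: np.array([[0.0], [1.0]])
h = lambda x: np.array([x[0]])
A, B, C = linearize(f, g, h, np.zeros(2), np.zeros(1))
# Near the origin A ≈ [[0, 1], [-1, 0]], B = [[0], [1]], C = [[1, 0]].
```

As the paragraph above warns, these matrices are exact only at (*x*<sub>0</sub>, *u*<sub>0</sub>); linearizing at another operating point gives different A, B, C.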

Roughly speaking, *feedback linearization* is a collection of techniques for transforming the original system models into equivalent models of a simpler form. The central idea of feedback linearization is to algebraically transform the nonlinear system dynamics into (fully or partly) linear dynamics, so that linear control techniques can be applied. This approach is essentially different from the classical Jacobian linearization, because feedback linearization is achieved by an exact state transformation and a feedback law, rather than by linear approximations of the model dynamics. Of particular importance is *local* feedback linearization, as it avoids the complications associated with the global problem.

After feedback linearization, the input–output model is linear:

$$
\dot{\xi} = A\xi + Bv \tag{4}
$$

$$
w = C\xi.
$$

Here ξ is the vector of transformed variables, *ξ* ∈ *R<sup>r</sup>*; v is the vector of input variables and w the vector of output variables, *w*, *v* ∈ *R<sup>m</sup>*; A, B and C are matrices with simple canonical structure. The integer r is a fundamental characteristic of the nonlinear system, called the *relative degree*; if r < n, the coordinate transformation must be completed by adding n − r additional state variables [1].
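As a minimal illustration of these ideas, consider a pendulum-type SISO system (our own example, not one from the chapter). Differentiating the output twice brings in the input, so the relative degree is r = n = 2 and an exact linearizing feedback is available:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pendulum-like SISO system: x1' = x2, x2' = -sin x1 + u, y = x1.
# y'' = -sin x1 + u depends on u, so the relative degree is r = 2 = n.
# The feedback u = sin(x1) + v cancels the nonlinearity exactly: in the
# coordinates xi = (y, y') the closed loop is the chain of integrators of Eq. (4).

def closed_loop(t, x):
    v = -2.0 * x[0] - 2.0 * x[1]   # any stabilizing linear law for y'' = v
    u = np.sin(x[0]) + v           # exact cancellation of the nonlinearity
    return [x[1], -np.sin(x[0]) + u]

sol = solve_ivp(closed_loop, (0.0, 20.0), [1.0, 0.0], rtol=1e-8)
# The closed loop is x'' = -2x - 2x' (eigenvalues -1 ± i), so x(t) -> 0.
```

The cancellation is exact, not a first-order approximation: the same law stabilizes the pendulum even far from the origin, which a Jacobian-based design cannot guarantee.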

For a dynamical system, the *controllability* problem is to determine whether there exists a forcing term, or control function u(t), such that the corresponding solution of the system passes through a desired point, *x*(*t*<sup>∗</sup>) = *x*<sup>∗</sup>. The initial form of the controlled system motivates the form of the control. A controlled system will thus have complex dynamics, and analyzing its stability therefore requires analyzing the nonlinear operators that describe the system's components. In applied sciences and engineering models, the aim of the control is to compare the system against the desired behavior and to compute corrective actions based on a model of the system's response to external inputs. Modern control techniques therefore rely on algorithms [1].
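For the linearized model (3), controllability can be tested with the Kalman rank condition: the pair (A, B) is controllable iff the matrix [B, AB, …, A<sup>n−1</sup>B] has full rank n. A small sketch (the matrices are illustrative values assumed for the example; for the nonlinear system this only checks the linearization):

```python
import numpy as np

def controllability_matrix(A, B):
    """Kalman controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Illustrative linearized system (values assumed for the example).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
rank = np.linalg.matrix_rank(controllability_matrix(A, B))
print(rank)  # 2 -> full rank, the pair (A, B) is controllable
```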

#### **1.2 Stability outlines**

*Advances in Dynamical Systems Theory, Models, Algorithms and Applications*

Let us consider the solution of a differential equation representing a physical phenomenon or the evolution of some system. There is always some uncertainty concerning the initial conditions because, when one attempts to repeat a given experiment, the reproduction of the initial conditions is never entirely identical. It is thus fundamental to be able to recognize the circumstances under which small variations in the initial conditions introduce only small variations in what follows of the phenomenon.

It is known that stability is a property of the solutions of differential equations in *R<sup>n</sup>* of the form *ẋ = f(t, x)*: given a "reference" solution *x*<sup>∗</sup>(*t*; *t*<sub>0</sub><sup>∗</sup>, *x*<sub>0</sub><sup>∗</sup>), any other solution *x*(*t*; *t*<sub>0</sub>, *x*<sub>0</sub>) starting close to *x*<sup>∗</sup>(*t*; *t*<sub>0</sub><sup>∗</sup>, *x*<sub>0</sub><sup>∗</sup>) remains close to it for long times. Thus, generally speaking, we can state the question of stability as: "small variations in the initial conditions will imply small variations in what follows for the phenomenon".
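This continuous-dependence idea can be illustrated numerically. In the sketch below (an assumed example, not from the chapter), the harmonic oscillator is Lyapunov stable, so a solution started 0.01 away from the reference stays about 0.01 away for all later times:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Harmonic oscillator x'' + x = 0: every solution is (Lyapunov) stable, so a
# small change in the initial condition stays small for all later times.
f = lambda t, x: [x[1], -x[0]]

t_eval = np.linspace(0.0, 50.0, 501)
ref = solve_ivp(f, (0.0, 50.0), [1.0, 0.0], t_eval=t_eval, rtol=1e-9)
per = solve_ivp(f, (0.0, 50.0), [1.01, 0.0], t_eval=t_eval, rtol=1e-9)
gap = np.max(np.abs(ref.y - per.y))
print(gap)  # stays of the order of the initial perturbation (about 0.01)
```

Note that mere stability does not force the gap to shrink; asymptotic stability, discussed below, is the stronger property under which nearby solutions actually converge.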

The stability concept has its beginnings far back in the past, in the analysis of planetary motion. Lagrange and Dirichlet later refined the definition, including the boundedness of trajectories. But Lyapunov's work was a cornerstone in this area: he analyzed the stability concept with the help of a positive definite function that decreases along the system trajectories. Lyapunov functions are a mainstay in control theory and in applied-sciences modeling.

Within the mathematical context in which they are employed, Lyapunov functions provide sufficient conditions for the stability of an equilibrium and for analyzing its basin of attraction or, more generally, invariant sets. They characterize the long-time behavior of the solutions in terms of their initial conditions. A natural question therefore arises: how can a Lyapunov function be computed for a particular system? Although the existence of Lyapunov functions has been established in several theorems, no general method to compute them has yet been provided. The converse theorems, which appeared around 1950, were a great help with this issue. A converse theorem generally establishes that, if a system has a certain kind of stability, then there exists a Lyapunov function for the system that characterizes that kind of stability. Still, the converse theorems are not very constructive in practice, since they use the solution trajectories of the system to construct the Lyapunov function, and the solution trajectories are usually not known. Krasovski himself noted that [2]:

*"One could hope that a method for proving the existence of a Lyapunov function might carry with it a constructive method for obtaining this function. This hope has not been realized*".

Numerous computational construction methods have been developed in the mathematical community, based on different techniques such as linear matrix inequalities, linear programming, series expansions, algebraic methods, theoretic methods and many others.

The Lyapunov theorem is of fundamental importance in system theory: it makes it possible to establish the stability or asymptotic stability of equilibrium points without explicitly computing trajectories [2].

**Theorem 1 (Lyapunov)**. Let *x<sub>e</sub>* = 0 be an equilibrium point for the system (1) and let *V* : *R<sup>n</sup>* → *R* be a positive definite, continuously differentiable function.

1. If *V̇* : *R<sup>n</sup>* → *R* is negative semi-definite, then *x<sub>e</sub>* is stable;

2. If *V̇* is negative definite, then *x<sub>e</sub>* is asymptotically stable.

The theorem assumes that a Lyapunov function is available but does not provide a method to compute one. For linear systems the question can be settled naturally, but in general computing a Lyapunov function is an open problem, giving rise to different ways to construct one.
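For the linear case, a quadratic Lyapunov function V(x) = x<sup>T</sup>Px can be computed directly by solving the Lyapunov matrix equation A<sup>T</sup>P + PA = −Q for a chosen Q > 0. A minimal sketch (the matrix A is an illustrative Hurwitz example, not taken from the chapter):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable linear system x' = Ax (eigenvalues -1 and -2, so A is Hurwitz).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
Q = np.eye(2)

# scipy solves M X + X M^T = RHS, so pass M = A^T and RHS = -Q
# to obtain A^T P + P A = -Q.
P = solve_continuous_lyapunov(A.T, -Q)
# P is symmetric positive definite, so V(x) = x^T P x is a Lyapunov function.
```

Then V̇(x) = x<sup>T</sup>(A<sup>T</sup>P + PA)x = −x<sup>T</sup>Qx < 0 for x ≠ 0, which is exactly condition 2 of Theorem 1.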

We recall in what follows the two basic Lyapunov criteria.

a. *The first Lyapunov criterion* is based on the eigenvalues analysis.

Let us consider the following continuous-time nonlinear system:

$$
\dot{\boldsymbol{x}}(t) = \boldsymbol{f}(\boldsymbol{x}(t), \boldsymbol{u}(t)). \tag{5}
$$


In the vicinity of the equilibrium point (*x*<sub>0</sub>, *u*<sub>0</sub>), let us consider the corresponding linearized system:

$$
\dot{\tilde{\mathbf{x}}}(t) = A\tilde{\mathbf{x}}(t) + B\tilde{u}(t). \tag{6}
$$

This criterion has three distinct cases for the eigenvalues λ<sub>i</sub> of the matrix A [3]:

i. If *Re λ<sub>i</sub>* < 0 for all *i*, then (*x*<sub>0</sub>, *u*<sub>0</sub>) is asymptotically stable;

ii. If there exists at least one *i* such that *Re λ<sub>i</sub>* > 0, then (*x*<sub>0</sub>, *u*<sub>0</sub>) is unstable;

iii. If there exists at least one *i* such that *Re λ<sub>i</sub>* = 0 and *Re λ<sub>j</sub>* < 0 for all other λ<sub>j</sub>, *j* ≠ *i*, then we cannot conclude anything about the stability of (*x*<sub>0</sub>, *u*<sub>0</sub>). In this case we say that the criterion is not effective.

b. *The second Lyapunov criterion*

**Theorem 2**. Consider the dynamical system in R<sup>n</sup>:

$$
\dot{\boldsymbol{x}}(t) = \boldsymbol{f}(\boldsymbol{x}(t)) \tag{7}
$$

and let *x = 0* be its unique equilibrium point. If there exists a continuously differentiable function *V* : *R<sup>n</sup>* → *R* such that:

$$V(0) = 0;\tag{8}$$

$$V(\boldsymbol{x}) > 0, \forall \boldsymbol{x} \neq 0;\tag{9}$$

$$\|\boldsymbol{x}\| \to \infty \Rightarrow V(\boldsymbol{x}) \to \infty;\tag{10}$$

$$
\dot{V}(\boldsymbol{x}) < 0, \forall \boldsymbol{x} \neq 0,\tag{11}
$$

then *x = 0* is globally asymptotically stable.

The condition (11) refers to the *monotonicity* of the Lyapunov function. We say that V is decreasing along trajectories, using the *orbital derivative* given by:

$$\dot{V}(\mathfrak{x}) = \left\langle \frac{\partial V(\mathfrak{x})}{\partial \mathfrak{x}}, f(\mathfrak{x}) \right\rangle \tag{12}$$

where ⟨·, ·⟩ is the inner product in R<sup>n</sup> and *∂V/∂x* is the gradient of V. Also, the condition (10) refers to the requirement for V to be *radially unbounded*.
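The conditions of Theorem 2 and the orbital derivative (12) can be checked symbolically for a concrete system. The vector field below is an illustrative choice of ours, not one analyzed in the chapter:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2", real=True)
f = sp.Matrix([-x1 + x2, -x1 - x2**3])   # illustrative vector field
V = x1**2 + x2**2                        # candidate Lyapunov function

# Orbital derivative (12): Vdot = <grad V, f>.
Vdot = sp.expand((sp.Matrix([V]).jacobian([x1, x2]) * f)[0])
print(Vdot)  # equals -2*x1**2 - 2*x2**4, negative for all (x1, x2) != 0
```

Here V satisfies (8)–(10) by inspection (positive definite and radially unbounded), and the computed orbital derivative shows (11), so the origin is globally asymptotically stable for this example.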

We may ask whether Lyapunov functions always exist and, if so, how such a function could be found. For the first part of the question the answer is generally positive, but finding a Lyapunov function is not immediate, since the converse theorems assume knowledge of the solutions of the system (7) [2, 3]. It was therefore necessary to refine the definition of the Lyapunov function and to establish a more specific context.

An important aim in the qualitative analysis of stability is to determine whether the solutions remain close to the equilibrium and, moreover, whether they converge towards it. The search for a Lyapunov function must therefore be more precise. The *strict Lyapunov functions* achieve this goal. Designed as a generalization of the energy of a dissipative physical system, they preserve the property of decreasing energy along trajectories and thus the solutions of the system converge to a (local) minimum of the energy.

**Definition 1**. A *strict Lyapunov function* for the equilibrium *x*<sub>0</sub> of (7) is a real-valued, continuously differentiable function *V* : *U* ⊂ *R<sup>n</sup>* → *R*, defined on a neighborhood U of *x*<sub>0</sub>, which satisfies:

a. *Minimum*. V has a minimum at *x*<sub>0</sub>, i.e. *V*(*x*) ≥ 0 for all *x* ∈ *U* and *V*(*x*) = 0 iff *x* = *x*<sub>0</sub>;

b. *Decrease*. V is strictly decreasing along solution trajectories of (7) in U, except for the equilibrium. A sufficient condition is *V̇*(*x*) < 0 for all *x* ∈ *U*∖{*x*<sub>0</sub>}.
Thus, two important properties are deduced for a strict Lyapunov function [4]:

• If we have a strict Lyapunov function, then the equilibrium is asymptotically stable;

• Compact sublevel sets of a strict Lyapunov function are subsets of the basin of attraction of the equilibrium.
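The second property can be illustrated numerically on a one-dimensional example (assumed here for illustration): for ẋ = −x + x³ the basin of attraction of the origin is (−1, 1), and every trajectory started in a compact sublevel set {V ≤ c}, c < 1, of V(x) = x² converges to the equilibrium:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative system x' = -x + x^3: the origin is asymptotically stable with
# basin of attraction (-1, 1). For V(x) = x^2 the orbital derivative is
# Vdot = -2 x^2 (1 - x^2) < 0 on 0 < |x| < 1, so every compact sublevel set
# {V <= c} with c < 1 is contained in the basin of attraction.
f = lambda t, x: [-x[0] + x[0] ** 3]

c = 0.81  # sublevel set {V <= c} = [-0.9, 0.9]
for x0 in np.linspace(-np.sqrt(c), np.sqrt(c), 7):
    sol = solve_ivp(f, (0.0, 30.0), [x0], rtol=1e-8)
    assert abs(sol.y[0, -1]) < 1e-3  # each such trajectory converges to 0
```

Starting just outside, e.g. at x₀ = 1.1, the trajectory diverges, so the sublevel-set bound is genuinely informative about the basin.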
This chapter is organized as follows. Section 2 is dedicated to the computational analysis of Lyapunov functions: the basic outlines of the CLF concept are presented, together with the related topics of the LMI approach and SOS Lyapunov functions. In Section 3, a computational Lyapunov function is sought for the mixing flow dynamical system in a slightly perturbed form; after presenting the mathematical context of the 2d mixing flow dynamical system, together with recent results in the field, the results of the search for a CLF for the mixing flow are presented. Section 4 is dedicated to conclusions and further aims on the topic. The chapter ends with the references.

**2. Computational Lyapunov stability analysis**

### **2.1 Control Lyapunov functions**

The concept of a control Lyapunov function (CLF) is a very useful tool for solving stabilization tasks. We seek to stabilize a nonlinear system by selecting a
