**Feedback and Partial Feedback Linearization of Nonlinear Systems: A Tribute to the Elders**

Issa Amadou Tall


20 Nonlinear Systems - Design, Analysis, Estimation and Control


Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/64689

### **Abstract**

Arthur Krener and Roger Brockett pioneered the feedback linearization problem for control systems, that is, the transformation of a nonlinear control system into linear dynamics via change of coordinates and feedback. While the former gave necessary and sufficient conditions to linearize a system under change of coordinates only, the latter introduced the concept of feedback and solved the problem for a particular case. Their work was soon extended in the early eighties by Jakubczyk and Respondek, and by Hunt and Su, who gave the conditions for a control system to be linearizable by change of coordinates and feedback (full rank and involutivity of the associated distributions). It turned out that those conditions are very restrictive; however, it was shown later that systems that fail to be linearizable can still be transformed into two interconnected subsystems: one linear and the other nonlinear. This fact is known as partial feedback linearization. For input-output systems with well-defined relative degree, coordinates can be found by differentiating the outputs. For systems without outputs, necessary and sufficient geometric conditions for partial linearization have been obtained in terms of the Lie algebra of the system; however, both results of linearization and partial feedback linearization lack practicability. Until recently, no one has provided a way to actually compute the linearizing coordinates and feedback. In this paper, we propose an algorithm allowing one to find the linearizing coordinates and feedback if the system is linearizable, and otherwise, to decompose a system (without outputs) while achieving the largest linear subsystem. These algorithms are built upon successive applications of the Frobenius theorem. Examples are provided as illustration.

**Keywords:** feedback, Frobenius theorem, partial linearization

© 2016 The Author(s). Licensee InTech. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## **1. Introduction**

Roger Brockett is considered the father of feedback linearization, one of the most important techniques for studying nonlinear systems. The problem of feedback linearization seeks new coordinates in which the system exhibits linear dynamics driven by new control inputs. The role of linear systems in engineering and mechanical systems has already been demonstrated in several applications. First, let us consider a linear system

$$\Lambda: \begin{cases} \dot{\mathbf{x}} = F\mathbf{x} + Gu = F\mathbf{x} + G\_1 u\_1 + \dots + G\_m u\_m \\\\ \mathbf{y} = H\mathbf{x} = H\_1 \mathbf{x}\_1 + \dots + H\_n \mathbf{x}\_n \end{cases} \tag{1}$$

where Fx, G₁, ⋯, G_m are, respectively, a linear vector field and constant vector fields on ℝⁿ, Hx is a linear map, x ∈ ℝⁿ denotes the state of the system, and u ∈ ℝᵐ the control input. To the linear system Λ, we attach two geometric objects: one called the controllability space, an n × (nm) matrix whose columns are those of the matrices F^(i−1)G for i = 1, 2, ⋯, n; the other called the observability space, a (pn) × n matrix whose rows are those of the matrices HF^(i−1) for i = 1, 2, ⋯, n. The system Λ is controllable if and only if the controllability space has rank n, and observable if and only if the observability space has rank n. By a linear change of coordinates z = Tx and a linear feedback u = Kx + Lv, where T, K, and L are matrices of appropriate sizes, T and L being invertible, the system Λ is transformed into an equivalent linear one

$$\overline{\Lambda}: \begin{cases} \dot{\mathbf{z}} = A\mathbf{z} + B\mathbf{v} = A\mathbf{z} + B\_1\mathbf{v}\_1 + \dots + B\_m\mathbf{v}\_m \\\\ \mathbf{y} = \mathbf{C}\mathbf{z} = \mathbf{C}\_1\mathbf{z}\_1 + \dots + \mathbf{C}\_n\mathbf{z}\_n \end{cases} \tag{2}$$

with A = T(F + GK)T⁻¹, B = TGL, and C = HT⁻¹.
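As a side illustration (not part of the original development), the two rank tests above can be carried out numerically. The matrices F, G, and H below are hypothetical examples chosen for the sketch:

```python
# Sketch: rank tests for controllability and observability of a linear
# system (F, G, H); the example matrices are made up for illustration.
import numpy as np

def controllability_matrix(F, G):
    """Stack [G, FG, ..., F^(n-1)G] into an n x (n*m) matrix."""
    n = F.shape[0]
    blocks = [np.linalg.matrix_power(F, i) @ G for i in range(n)]
    return np.hstack(blocks)

def observability_matrix(F, H):
    """Stack [H; HF; ...; HF^(n-1)] into an (n*p) x n matrix."""
    n = F.shape[0]
    blocks = [H @ np.linalg.matrix_power(F, i) for i in range(n)]
    return np.vstack(blocks)

F = np.array([[0.0, 1.0], [0.0, 0.0]])
G = np.array([[0.0], [1.0]])
H = np.array([[1.0, 0.0]])

C = controllability_matrix(F, G)
O = observability_matrix(F, H)
print(np.linalg.matrix_rank(C) == 2)  # controllable
print(np.linalg.matrix_rank(O) == 2)  # observable
```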

For the linear system ż = Az + Bu, where A and B are n × n and n × m matrices, respectively, we denote by 𝒞_i = [B AB ⋯ A^(i−1)B] and m_i = dim 𝒞_i. We define k_i = max{j : n_j ≥ i}, where n₀ = 0 and n_i = m_i − m_(i−1) for 1 ≤ i ≤ n. It is straightforward to notice that k₁ ≥ ⋯ ≥ k_m with k₁ + ⋯ + k_m = n. It is a classical result of linear control theory that a certain choice of the matrices T, K, and L leads to the Brunovsky canonical form Λ_BR : ż = Az + Bv, in which A = diag(A₁, ⋯, A_m) and B = diag(b₁, ⋯, b_m), where for i = 1, 2, ⋯, m the matrices (see [1])


$$A_i = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix} \text{ and } b_i = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix} \tag{3}$$

form a canonical pair of dimension k_i. Moreover, C_i = (1 0 ⋯ 0).
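The computation of the indices k_i from the rank increments can be sketched as follows; the counting convention for k_i and the pair (A, B) below are our own illustrative choices:

```python
# Sketch (assumed convention): controllability indices k_i obtained from the
# rank increments n_i = m_i - m_(i-1), where m_i = rank [B, AB, ..., A^(i-1)B].
import numpy as np

def controllability_indices(A, B):
    n, m = B.shape
    blocks, ranks = [], []
    for i in range(1, n + 1):
        blocks.append(np.linalg.matrix_power(A, i - 1) @ B)
        ranks.append(np.linalg.matrix_rank(np.hstack(blocks)))
    increments = [ranks[0]] + [ranks[i] - ranks[i - 1] for i in range(1, n)]
    # k_i counts how many rank increments are still >= i
    return [sum(1 for nj in increments if nj >= i) for i in range(1, m + 1)]

A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(controllability_indices(A, B))  # [2, 1], and 2 + 1 = n = 3
```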


Now let us consider a nonlinear control system (control-affine for simplicity)

$$\Sigma: \begin{cases} \dot{x} = f(x) + g(x)u = f(x) + g_1(x)u_1 + \dots + g_m(x)u_m, \; x \in \mathbb{R}^n, u \in \mathbb{R}^m \\ y = h(x) = \left(h_1(x), \dots, h_p(x)\right) \end{cases} \tag{4}$$

where x ∈ ℝⁿ denotes the state of the system and u ∈ ℝᵐ the control input; f, g₁, ⋯, g_m are smooth or analytic vector fields with f(0) = 0, and h₁, ⋯, h_p analytic functions on ℝⁿ.

The problem of finding new coordinates in which the system Σ, driven by new inputs v, takes the form (2) is referred to as input-output static state feedback linearization. For input-output systems, the problem of linearization is equivalent to achieving a relative degree (see details later). When the relative degree is achieved, finding the coordinate system in which the system becomes linear is a simple differentiation process. For systems without outputs, we only refer to static state linearization (Problem 1) or static state feedback linearization (Problem 2) as follows:

Problem 1: Find new coordinates z = Φ(x) that transform the system Σ : ẋ = f(x) + g(x)u into a linear controllable system ż = Az + Bu.

Problem 2: Find new coordinates z = Φ(x) and an invertible feedback u = α(x) + β(x)v that transform the system Σ : ẋ = f(x) + g(x)u into a linear controllable system ż = Az + Bv.

Arthur Krener [2] formulated and completely solved the first problem by showing that the Lie brackets of some vector fields associated with the system must vanish, that is, that a certain set of vector fields must commute. Roger Brockett [3] solved the second problem under the assumption that *m* = 1 (single input), *p* = 1 (single output), and *β* is constant. The general case of feedback linearization (Problem 2) was solved independently by Jakubczyk and Respondek [4] and by Hunt and Su [5]. Necessary and sufficient geometric conditions were obtained, showing that only a small class of nonlinear systems can be linearized by feedback. Indeed, the system should satisfy the following two strong conditions:

(F1) the distributions 𝒟^j = span{ad_f^i g_k : 0 ≤ i ≤ j, 1 ≤ k ≤ m}, 0 ≤ j ≤ n − 2, are involutive;

(F2) the distribution 𝒟^(n−1) has full rank equal to the dimension n of the system.
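The involutivity condition can be tested mechanically with a computer algebra system. The sketch below, with hand-picked vector fields, checks whether the Lie bracket of two fields stays in their span (a rank test); the helper names are ours, not the chapter's:

```python
# Sketch: testing involutivity of a distribution span{g1, g2} with sympy,
# by checking whether the Lie bracket [g1, g2] stays in the span (rank test).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

def lie_bracket(f, g):
    # [f, g]_j = L_f g_j - L_g f_j
    return [sum(f[i]*sp.diff(g[j], X[i]) - g[i]*sp.diff(f[j], X[i])
                for i in range(3)) for j in range(3)]

def is_involutive(fields):
    M = sp.Matrix([list(v) for v in fields]).T
    r = M.rank()
    for a in range(len(fields)):
        for b in range(a + 1, len(fields)):
            br = lie_bracket(fields[a], fields[b])
            if M.row_join(sp.Matrix(3, 1, br)).rank() > r:
                return False
    return True

g1 = [1, 0, 0]
g2 = [0, 1, x1]                        # bracket [g1, g2] = (0, 0, 1)
print(is_involutive([g1, g2]))         # False: the bracket leaves the span
print(is_involutive([g1, [0, 1, 0]]))  # True
```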

Those conditions are very restrictive, thus making the class of nonlinear systems that can be linearized by static state feedback very small. To enlarge the class of nonlinear systems that can be analyzed via feedback linearization, several techniques have been introduced, including dynamic feedback linearization, nonregular state feedback linearization, partial feedback linearization, orbital feedback linearization, and transverse feedback linearization. Dynamic feedback linearization differs from static state feedback linearization in that a compensator ẇ = a(x, w) + b(x, w)v, u = α(x, w) + β(x, w)v is sought, which enlarges the dimension of the system. This means that one tries to linearize the system

$$\Sigma: \begin{cases} \dot{x} = f(x) + g(x)\alpha(x, w) + g(x)\beta(x, w)v, & x \in \mathbb{R}^n, v \in \mathbb{R}^m \\ \dot{w} = a(x, w) + b(x, w)v, & w \in \mathbb{R}^q \end{cases} \tag{5}$$

using an extended state space transformation z = Φ(x, w), z ∈ ℝ^(n+q). This problem is referred to as regular feedback linearization (β(·) is an invertible matrix). More general feedbacks have been exploited to enlarge the class of linearizable systems by allowing the matrix β(·) to be noninvertible, that is, admitting fewer inputs than the original system [6, 7]. In this case, we talk about nonregular feedback linearization [8]. Orbital feedback linearization, also known as time-scale feedback linearization, introduces a new time scale τ with dt = χ(x)dτ, where χ(x) is a positive function (preserving orientation). Hence, in the new time scale τ, the problem becomes to linearize the time-scaled system (see [9] and references therein)

$$\Sigma: \begin{cases} \frac{dx}{d\tau} = \chi(x)f(x) + g(x)u = \chi(x)f(x) + g_1(x)u_1 + \dots + g_m(x)u_m, \; x \in \mathbb{R}^n, u \in \mathbb{R}^m \\ y = h(x) = \left(h_1(x), \dots, h_p(x)\right) \end{cases} \tag{6}$$

Transverse feedback linearization [10] deals with transforming a control-affine system coupled with a controlled invariant manifold into a system whose dynamics, transversal to the invariant manifold, are linear and controllable.

The feedback linearization problem has been thoroughly investigated in the past four decades but has regained interest recently, with new algorithms developed to circumvent the solving of partial differential equations associated with the linearization (see [4, 5, 21–28] and the references therein). Whenever a system fails to satisfy either condition (F1) or (F2), its dynamics contain nonlinearities in any given coordinate system. The fundamental question is in which coordinates the system exhibits the largest linear subsystem. This question was first addressed naturally for systems with outputs [6, 7, 11–20]. We propose in this paper an algorithmic way of transforming a control system into a cascade of two subsystems: one nonlinear and one linear of the largest dimension. First, we recall basics about vector fields and the Frobenius theorem; Section 3 deals with linearization of control systems with outputs; Section 4 contains the partial linearization algorithm. We end the paper with Section 5, where a few examples are given as illustration.

## **2. Vector fields and Frobenius theorem**


The theory of differential equations is one of the most productive and useful contributions of modern times. Its applications are widespread in all branches of the natural sciences, particularly in physics, biology, chemistry, engineering, ecology, and weather prediction, to name a few. It plays the role of a connector between abstract mathematical theories and real-world applications. Paraphrasing Newton, who is quoted as saying that "it is useful to solve differential equations," much effort has been devoted to solving differential equations, with various methods and techniques provided in the literature. Existence and uniqueness of solutions have been addressed in many scientific papers and textbooks. Consider the simplest expression of a linear partial differential equation

$$f\_1(\mathbf{x})\frac{\partial u}{\partial \mathbf{x}\_1} + \dots + f\_n(\mathbf{x})\frac{\partial u}{\partial \mathbf{x}\_n} = b(\mathbf{x}) \tag{7}$$

where f₁(x), ⋯, f_n(x), and b(x) are smooth or analytic functions of the variable x. This partial differential equation is referred to as a homogeneous (resp. nonhomogeneous) linear first-order partial differential equation when b ≡ 0 (resp. b ≢ 0). The vector field f whose components are f₁, ⋯, f_n is called the characteristic vector field of the homogeneous equation, and the corresponding dynamical system ẋ = f(x) its characteristic equation. The solutions of the system are the integral curves of the characteristic equation and are often obtained by solving the so-called Lagrange subsidiary equation (also called characteristic equation)

$$\frac{d\lambda_1}{f_1(\lambda)} = \dots = \frac{d\lambda_n}{f_n(\lambda)} = \frac{du}{b(\lambda)} \tag{8}$$

Several methods have been devoted to solving such a system, among them Euler's method and Natani's method. Most of the work on ordinary differential equations has been done around equilibrium points (nonregular or singular points), that is, points x₀ where f(x₀) = 0. The reason is that regular points, that is, points where f(x₀) ≠ 0, are not topologically rich: in their neighborhoods all trajectories are straight parallel lines (straightening theorem). Though this fact remains true and hence often neglected, the straightening theorem has many important applications. Indeed, a solution of the nonhomogeneous partial differential equation above can easily be found around a regular point x₀ of f by simple quadrature in new coordinates: if z = Φ(x) is a change of coordinates around x₀ that rectifies the vector field f, that is, such that Φ\*f = ∂/∂z_n, then the nonhomogeneous equation simplifies as ∂ũ/∂z_n = b̃(z), where b̃(z) = b(Φ⁻¹(z)) and ũ = u ∘ Φ⁻¹. A solution (yielding u = ũ ∘ Φ) is given by

$$\tilde{u}(z) = a(z_1, \cdots, z_{n-1}) + \int_0^{z_n} \tilde{b}(z_1, \cdots, z_{n-1}, s)\, ds \tag{9}$$

In the new coordinates, the dynamical system ẋ = f(x) takes the canonical form

$$\begin{cases} \dot{\mathbf{z}}\_1 = \mathbf{0} \\ \dot{\mathbf{z}}\_2 = \mathbf{0} \\ \vdots \\ \dot{\mathbf{z}}\_{n-1} = \mathbf{0} \\ \dot{\mathbf{z}}\_n = \mathbf{1} \end{cases} \tag{10}$$
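As a quick sanity check of the canonical form above, one can verify symbolically that a hand-picked rectifying map flattens a hand-picked field; both f and Φ below are our own illustrative choices, not taken from the chapter:

```python
# Sketch: for f(x) = (x2, 1), the map z = Phi(x) = (x1 - x2**2/2, x2)
# straightens f, so in z-coordinates the dynamics become z1' = 0, z2' = 1.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = [x2, 1]
Phi = [x1 - x2**2/2, x2]

# transported components L_f Phi_j (expressed here in x-coordinates)
ztilde = [sp.simplify(sp.diff(Pj, x1)*f[0] + sp.diff(Pj, x2)*f[1])
          for Pj in Phi]
print(ztilde)  # [0, 1]
```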

**Theorem 1:** (Flow-box) Let *f* be a vector field defined in a neighbourhood of a nonsingular point x₀, that is, f(x₀) ≠ 0. There exists a local change of coordinates z = Φ(x) in a neighbourhood of x₀ such that Φ\*f = ∂/∂z_n for all z.

The existence and proof of this theorem, as well as its general form, can be found in the literature. The only difficulty in applying the straightening theorem lies in finding the straightening diffeomorphism, as one needs to solve the system of highly nonlinear partial differential equations:

$$\begin{cases} \frac{\partial \Phi_1}{\partial x_1} f_1(x) + \dots + \frac{\partial \Phi_1}{\partial x_n} f_n(x) = 0\\ \frac{\partial \Phi_2}{\partial x_1} f_1(x) + \dots + \frac{\partial \Phi_2}{\partial x_n} f_n(x) = 0\\ \quad\vdots\\ \frac{\partial \Phi_{n-1}}{\partial x_1} f_1(x) + \dots + \frac{\partial \Phi_{n-1}}{\partial x_n} f_n(x) = 0\\ \frac{\partial \Phi_n}{\partial x_1} f_1(x) + \dots + \frac{\partial \Phi_n}{\partial x_n} f_n(x) = 1 \end{cases} \tag{11}$$

In earlier work [25], we provided a solution to this problem by giving explicit changes of coordinates, which will be recalled below. If x₀ is a singular point, that is, f(x₀) = 0, the notions of linearization, and later of normal form, were introduced by Poincaré. Before we recall those facts, let us remind the reader that dynamical systems are a subclass of a larger class, namely control systems. Indeed, a control system can be interpreted as a parameterized family of dynamical systems ẋ = f(x, u), where for each fixed value of *u*, f_u : x ↦ f(x, u) is a vector field. When *u* = 0, we recover dynamical systems. Poincaré was the first to address the problem of linearization for dynamical systems around an equilibrium point. He indeed showed that when F = ∂f/∂x(x₀) is a matrix whose spectrum λ = (λ₁, ⋯, λ_n) is not resonant, then new coordinates z = Φ(x) exist in which the dynamical system takes the linear form ż = Fz. We recall that a spectrum λ = (λ₁, ⋯, λ_n) is called resonant if there are nonnegative integers m₁, ⋯, m_n with m₁ + ⋯ + m_n ≥ 2 such that m₁λ₁ + ⋯ + m_nλ_n = λ_j for some 1 ≤ j ≤ n. He further showed that, when resonances are present, the dynamical system can be put in a normal form


$$\dot{\mathbf{z}} = F\mathbf{z} + \sum\_{\|m\|=2}^{\infty} \Theta^m \mathbf{z}\_1^{m\_1} \mathbf{z}\_2^{m\_2} \cdots \mathbf{z}\_n^{m\_n} \tag{12}$$

where Θ^m is a constant vector whose *j*-th component is zero when there is no resonance of order *m* associated with the eigenvalue *λj*.
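A brute-force numerical test for resonances of a given spectrum, up to a chosen truncation order, can be sketched as follows (the function name and truncation order are ours, chosen for illustration):

```python
# Sketch: brute-force resonance test, searching for nonnegative integers
# m1, ..., mn with m1 + ... + mn >= 2 and m1*l1 + ... + mn*ln = lj,
# up to a chosen total order.
from itertools import product

def resonances(spectrum, max_order=4):
    n = len(spectrum)
    found = []
    for m in product(range(max_order + 1), repeat=n):
        if sum(m) < 2 or sum(m) > max_order:
            continue
        s = sum(mi*li for mi, li in zip(m, spectrum))
        for j, lj in enumerate(spectrum):
            if abs(s - lj) < 1e-12:
                found.append((m, j))
    return found

print(resonances([1.0, 2.0]))   # the resonance 2*lambda_1 = lambda_2
print(resonances([1.0, -1.5]))  # no resonance up to order 4
```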

**Notations**: For a vector field f(x) = (f₁(x), ⋯, f_n(x)) on ℝⁿ and a function *h* in the *x*-coordinates x = (x₁, ⋯, x_n), we denote by

$$\mathcal{L}_{f}h(x) = \frac{\partial h}{\partial x_1} f_1(x) + \frac{\partial h}{\partial x_2} f_2(x) + \dots + \frac{\partial h}{\partial x_n} f_n(x) \tag{13}$$

the Lie derivative of *h* along the vector field *f*, and recursively, we define the Lie-derivatives

$$\mathcal{L}_f^0 h(x) = h(x), \quad \mathcal{L}_f^j h(x) = \mathcal{L}_f \mathcal{L}_f^{j-1} h(x), \quad j = 1, 2, \dots \tag{14}$$

For another vector field g(x) = (g₁(x), ⋯, g_n(x)) on ℝⁿ, we define the Lie bracket [f, g] between the two vector fields as a new vector field

$$\left[f, g\right](x) = \left(\mathcal{L}_f g_1(x) - \mathcal{L}_{g} f_1(x), \dots, \mathcal{L}_f g_n(x) - \mathcal{L}_{g} f_n(x)\right) \tag{15}$$

and, for simplicity, we denote this vector field ad_f g = [f, g]; recursively, we define

$$ad_f^0 g(x) = g(x), \quad ad_f^j g(x) = \left[f, ad_f^{j-1} g\right](x), \quad j = 1, 2, \dots \tag{16}$$
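Likewise, the bracket (15) and the iterates (16) can be computed symbolically; the example fields below are ours:

```python
# Sketch of (15)-(16): the Lie bracket [f, g] and iterated ad_f^j g.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = [x1, x2]

def bracket(f, g):
    # [f, g]_j = L_f g_j - L_g f_j
    return [sp.simplify(sum(f[i]*sp.diff(g[j], X[i]) - g[i]*sp.diff(f[j], X[i])
                            for i in range(len(X)))) for j in range(len(X))]

def ad(f, g, j):
    for _ in range(j):
        g = bracket(f, g)
    return g

f = [x2, -sp.sin(x1)]
g = [0, 1]
print(ad(f, g, 0))  # [0, 1]
print(ad(f, g, 1))  # [-1, 0]
```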

Let Φ be a local diffeomorphism with Φ(0) = 0, giving rise to new coordinates z = Φ(x). The vector field *f* is transported by Φ into a new vector field, denoted f̃(z) ≜ Φ\*f(z), whose components f̃ = (f̃₁, …, f̃_n) are given for all 1 ≤ *j* ≤ *n* by

$$\tilde{f}\_{j}(\mathbf{z}) = \mathcal{L}\_{f} \Phi\_{j}(\Phi^{-1}(\mathbf{z})) = \frac{\partial \Phi\_{j}}{\partial \mathbf{x}\_{1}} f\_{1}(\Phi^{-1}(\mathbf{z})) + \frac{\partial \Phi\_{j}}{\partial \mathbf{x}\_{2}} f\_{2}(\Phi^{-1}(\mathbf{z})) + \dots + \frac{\partial \Phi\_{j}}{\partial \mathbf{x}\_{n}} f\_{n}(\Phi^{-1}(\mathbf{z})) \tag{17}$$

Below we recall the method we provided in [25] to solve the problem of straightening a vector field around a nonsingular point. Without loss of generality, we will assume the nonsingular point to be the origin.

**Theorem 2:** Let f = (f₁, …, f_n) be an analytic vector field on ℝⁿ with f_k(0) ≠ 0, and set ν = f and σ = 1/f_k (so that σν = f/f_k).

**i.** Define z = Φ(x) by its components as follows

$$\begin{array}{ll}\Phi_{j}(x) = x_{j} + \displaystyle\sum_{s=1}^{\infty} \frac{(-1)^{s} x_{k}^{s}}{s!} \mathcal{L}_{\sigma\nu}^{s-1} (\sigma\nu_{j})(x), & j \neq k \\[6pt] \Phi_{j}(x) = \displaystyle\sum_{s=1}^{\infty} \frac{(-1)^{s+1} x_{k}^{s}}{s!} \mathcal{L}_{\sigma\nu}^{s-1} (\sigma)(x), & j = k \end{array} \tag{18}$$

The local diffeomorphism Φ satisfies Φ\*(σν) = (0, …, 0, 1, 0, …, 0) ≜ ∂/∂z_k.

**ii.** The local diffeomorphism x = Ψ(z), whose components are given by

$$\begin{array}{ll}\Psi_{j}(z) = z_{j} + \displaystyle\sum_{s=1}^{\infty} \frac{z_{k}^{s}}{s!} \left( \sum_{i=0}^{s-1} \binom{s-1}{i} \partial_{z_{k}}^{i} \mathcal{L}_{\nu}^{s-i-1} (\nu_{j}) \right), & j \neq k \\[6pt] \Psi_{j}(z) = \displaystyle\sum_{s=1}^{\infty} \frac{z_{k}^{s}}{s!} \left( \sum_{i=0}^{s-1} \binom{s-1}{i} \partial_{z_{k}}^{i} \mathcal{L}_{\nu}^{s-i-1} (\nu_{k}) \right), & j = k \end{array} \tag{19}$$

is the inverse of z = Φ(x), that is, Φ(Ψ(z)) = z and Ψ(Φ(x)) = x, and it satisfies ∂Ψ/∂z_k = (σν)(Ψ(z)).

The series proposed above are not Taylor series, nor series in the variable *xk* (resp. *zk*). Indeed, the coefficients ℒ_{σν}^(s−1)(σν_j) and ∂_{z_k}^i ℒ_ν^(s−i−1)(ν_j) are functions that depend on the variables *xk* (resp. *zk*). Above, the notation ∂_{z_k}^i h means the *i*-th derivative of *h* with respect to the variable *zk*. We refer to [tall-adjm] for more details and the generalization of the Frobenius theorem to the straightening of a set of vector fields, as stated below.

**Theorem 3:** Let g₁, …, g_m be a set of analytic vector fields on ℝⁿ such that the distribution 𝒟 = span{g₁, …, g_m} is involutive and of maximal rank m ≤ n in a neighborhood of the origin. There exist an open neighborhood of the origin and a change of coordinates z = Φ(x) such that Φ\*g_j = ∂/∂z_j for all z and j = 1, …, m.

We proposed a constructive way to find the diffeomorphism Φ through successive applications of the Frobenius theorem.

## **3. Control systems and feedback linearization**


Let us reconsider the control-affine nonlinear system with outputs

$$\Sigma: \begin{cases} \dot{\mathbf{x}} = f(\mathbf{x}) + g(\mathbf{x})u = f(\mathbf{x}) + g\_1(\mathbf{x})u\_1 + \dots + g\_m(\mathbf{x})u\_m, \ x \in \mathfrak{R}^n, u \in \mathfrak{R}^m \\\\ \mathbf{y} = h(\mathbf{x}) = (h\_1(\mathbf{x}), \dots, h\_p(\mathbf{x})) \end{cases} \tag{20}$$

The input-output feedback linearization problem as stated earlier is to find a new coordinate system and new inputs under which the system Σ has linear dynamics and linear outputs. This problem is directly connected to the notion of relative degree. Indeed, one needs to differentiate the outputs repeatedly until the inputs appear. Formally, if there exist integers $\gamma_i > 0$ such that $\mathcal{L}_{g_j}\mathcal{L}_f^k h_i(x) = 0$ for all $1 \le j \le m$ and $0 \le k \le \gamma_i - 2$, with $\mathcal{L}_{g_j}\mathcal{L}_f^{\gamma_i - 1} h_i(x) \neq 0$ for some *j*, we say that $\gamma_i$ is the relative degree of the *i*th output. In other words, $\gamma_i$ is the smallest integer *k* for which the *k*th-derivative $y_i^{(k)}$ of $y_i$ depends explicitly on the input *u*. The set $(\gamma_1, \cdots, \gamma_p)$ is called the *vector relative degree* associated to the outputs of Σ. It is well known that, taking $z_k^i = \mathcal{L}_f^{k-1} h_i(x)$ for $1 \le k \le \gamma_i$ and completing the coordinates with $z_{\gamma+1}, \cdots, z_n$, the system can be expressed as *m* subsystems of the form

$$\begin{cases} \dot{z}\_1^{i} &=& z\_2^{i} \\ \dot{z}\_2^{i} &=& z\_3^{i} \\ & \vdots & \\ \dot{z}\_{\gamma\_i - 1}^{i} &=& z\_{\gamma\_i}^{i} \\ \dot{z}\_{\gamma\_i}^{i} &=& f\_{\gamma\_i}(\mathbf{z}) + g\_{\gamma\_i}^1(\mathbf{z})v\_1 + \cdots + g\_{\gamma\_i}^m(\mathbf{z})v\_m \\ y\_i &=& z\_1^i \end{cases} \tag{21}$$

for $1 \le i \le m$ with $\gamma_1 + \cdots + \gamma_m = \gamma \le n$. Thus, the system becomes an interconnection of a linear and a nonlinear subsystem, and this has been known as partial feedback linearization. A necessary and sufficient condition for exact linearization, that is, for a multi-input multi-output system to be transformed into a chain of integrators

$$(\mathrm{BR}): \begin{cases} \dot{z}\_1^{i} &=& z\_2^{i} \\ \dot{z}\_2^{i} &=& z\_3^{i} \\ & \vdots & \\ \dot{z}\_{\gamma\_i - 1}^{i} &=& z\_{\gamma\_i}^{i} \\ \dot{z}\_{\gamma\_i}^{i} &=& v\_i \\ y\_i &=& z\_1^i \end{cases} \quad 1 \le i \le m, \tag{22}$$

is that it has a vector relative degree $(\gamma_1, \cdots, \gamma_m)$ such that $\gamma_1 + \cdots + \gamma_m = n$.
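As a quick sketch of how the relative degree is computed in practice, the loop below differentiates a single output symbolically until the input appears; the three-dimensional single-input system and the output are made-up assumptions for illustration, not taken from the chapter.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2, x3, 0])   # drift vector field (made-up example)
g = sp.Matrix([0, 0, 1])     # control vector field
h = x1                       # output y = h(x)

def lie(V, phi):
    # Lie derivative of the scalar phi along the vector field V
    return (sp.Matrix([phi]).jacobian(x) * V)[0]

# Differentiate the output until the input appears:
# y^(k) involves u once L_g L_f^{k-1} h != 0.
phi, k = h, 0
while sp.simplify(lie(g, phi)) == 0:
    phi = lie(f, phi)
    k += 1

gamma = k + 1
print(gamma)  # relative degree of the output
```

Here $y = x_1$, $\dot y = x_2$, $\ddot y = x_3$, and $y^{(3)} = u$, so the output has relative degree 3 = *n* and this toy system is exactly input-output linearizable.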

Obviously, different outputs will lead to different cascade systems: a system can be linearized with respect to some outputs and fail to be linearizable with respect to a different set of outputs. If we consider a control-affine system without outputs, then the linearization problem (Problem 2) is equivalent to solving a system of partial differential equations. Indeed, two affine control systems

$$\Sigma: \dot{\mathbf{x}} = f(\mathbf{x}) + g(\mathbf{x})u = f(\mathbf{x}) + g\_1(\mathbf{x})u\_1 + \dots + g\_m(\mathbf{x})u\_m, \ \mathbf{x} \in \mathfrak{R}^n, u \in \mathfrak{R}^m \tag{23}$$

and

$$\overline{\Sigma}: \dot{\mathbf{z}} = \overline{f}(\mathbf{z}) + \overline{g}(\mathbf{z})\mathbf{v} = \overline{f}(\mathbf{z}) + \overline{g}\_1(\mathbf{z})v\_1 + \dots + \overline{g}\_m(\mathbf{z})v\_m, \text{ z} \in \mathfrak{R}^n, \mathbf{v} \in \mathfrak{R}^m \tag{24}$$

are feedback equivalent via a static state transformation $z = \Phi(x)$ and feedback $u = \alpha(x) + \beta(x)v$ if and only if

$$(\mathrm{PDEs}): \begin{cases} \overline{f}(\Phi(\mathbf{x})) = \dfrac{\partial \Phi}{\partial \mathbf{x}}\big(f(\mathbf{x}) + g(\mathbf{x})\alpha(\mathbf{x})\big) \\ \overline{g}(\Phi(\mathbf{x})) = \dfrac{\partial \Phi}{\partial \mathbf{x}}\big(g(\mathbf{x})\beta(\mathbf{x})\big) \end{cases} \tag{25}$$

In particular, the control-affine system Σ is static state feedback equivalent to a controllable linear system if and only if the system of partial differential equations

$$(\mathrm{PDEs}): \begin{cases} A\Phi(\mathbf{x}) = \dfrac{\partial \Phi}{\partial \mathbf{x}}\big(f(\mathbf{x}) + g(\mathbf{x})\alpha(\mathbf{x})\big) \\ B = \dfrac{\partial \Phi}{\partial \mathbf{x}}\big(g(\mathbf{x})\beta(\mathbf{x})\big) \end{cases} \tag{26}$$

is solvable in Φ, *α*, and *β* with Φ a diffeomorphism around the origin and *β* invertible. A geometric characterization of feedback linearization was obtained by Jakubczyk and Respondek [4] and independently by Hunt and Su [5].

**Theorem 4:** The system Σ is feedback equivalent to a controllable linear system Λ around an equilibrium point *x*0 = 0 if and only if the following two conditions are satisfied:

(F1) the distribution $\mathcal{D}^n$ satisfies $\dim \mathcal{D}^n(0) = n$;


(F2) the distributions $\mathcal{D}^j$, $1 \le j \le n-1$, are involutive and of constant rank.

Above, $\mathcal{D}^j$ stand for the distributions defined recursively by

$$\mathcal{D}^{j}\left(\mathbf{x}\right) = \operatorname{span}\left(g\_i(\mathbf{x}),\ ad\_f g\_i(\mathbf{x}),\ \dots,\ ad\_f^{j-1} g\_i(\mathbf{x}),\ 1 \le i \le m\right) \tag{27}$$

and the bracket of two distributions as the distribution spanned by all Lie brackets of their vector fields. The first condition (F1) stands for the rank condition, while the second (F2) is referred to as the involutivity condition.

Thus, to find the largest linear subsystem, the outputs need not be predefined.
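Conditions (F1) and (F2) can be checked symbolically by building the distributions (27) from iterated Lie brackets. The sketch below does this for a made-up single-input system; the example and all names are assumptions for illustration only.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
f = sp.Matrix([x2, sp.sin(x3), 0])  # made-up drift vector field
g = sp.Matrix([0, 0, 1])            # single control vector field

def bracket(V, W):
    # Lie bracket [V, W] = (DW) V - (DV) W
    return W.jacobian(x) * V - V.jacobian(x) * W

# D^j = span{g, ad_f g, ..., ad_f^{j-1} g} as in (27)
fields = [g]
for _ in range(2):
    fields.append(bracket(f, fields[-1]))

D = sp.simplify(sp.Matrix.hstack(*fields))
print(D.rank())  # (F1): generically full rank 3

# (F2): the bracket of the spanning fields of D^2 stays inside D^2,
# so adjoining it does not increase the rank
br = bracket(fields[0], fields[1])
D2 = sp.simplify(sp.Matrix.hstack(fields[0], fields[1], br))
print(D2.rank())
```

For this toy system the first rank is 3 and the second stays at 2, so both the rank and the involutivity conditions hold away from the singularities of $\cos x_3$.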

In this paper, we consider only systems without outputs and seek such a largest linear subsystem. First, an affine system is said to be partially static state feedback linearizable if there exist a coordinate system $z = (z_1, \cdots, z_n)$ and a feedback in which the system takes the form

$$\Lambda\_p: \begin{cases} \dfrac{dz^1}{dt} &=& \tilde{f}(\mathbf{z}^1, \mathbf{z}^2) + \tilde{g}(\mathbf{z}^1, \mathbf{z}^2)v\\ \dfrac{dz^2}{dt} &=& A\mathbf{z}^2 + Bv \end{cases} \tag{28}$$

where $z^1 = (z_1, \cdots, z_k)$ and $z^2 = (z_{k+1}, \cdots, z_n)$.

**Remark 1:** Notice that the form above is also equivalent to

$$\begin{cases} \frac{dz^1}{dt} &=& A z^1 + B v \\ \frac{dz^2}{dt} &=& \tilde{f}\left(z^1, z^2\right) + \tilde{g}\left(z^1, z^2\right) v \end{cases} \tag{29}$$

by reordering the variables accordingly. In the sequel, we will refer mostly to the former form. The following result can be found in [17].

**Theorem 5:** Consider a control-affine system Σ. The system can be brought to the form (28) in which the dimension of the linear subsystem is $\dim z^2$, and, moreover, the linear subsystem is controllable.

We will provide a step-by-step procedure to write the system as a cascade of a nonlinear subsystem and a linear subsystem of highest dimension. Notice that a geometrical approach has been used in [14, 16], where the characterization depends on controllability indices associated to some Lie algebras.

## **4. Algorithm for partial feedback linearization**

We first consider a single-input control system

$$
\Sigma: \dot{\boldsymbol{x}} = f(\boldsymbol{x}) + g(\boldsymbol{x})\boldsymbol{u}, \ \boldsymbol{x} \in \mathfrak{R}^n, \boldsymbol{u} \in \mathfrak{R} \tag{30}
$$

and we assume that its linear approximation $\dot{x} = Fx + Gu$ is controllable, with $F = \frac{\partial f}{\partial x}(0)$ and $G = g(0)$. Without loss of generality, we can also assume that the pair (*F, G*) is in Brunovsky canonical form.

**Step 0**: We apply the Frobenius theorem to find coordinates $y = \varphi(x)$ that rectify the vector field *g*, that is, such that $\varphi_* g = (0, \ldots, 0, 1)^\top \triangleq b$, and transform the system as

$$
\Sigma: \quad \dot{\mathbf{y}} = \tilde{f}(\mathbf{y}) + bu,\ \mathbf{y} \in \mathfrak{R}^n, u \in \mathfrak{R}. \tag{31}
$$

Completing this step with the push-forward transformation

$$\begin{aligned} z\_1 &= y\_1 \\ &\vdots \\ z\_{n-1} &= y\_{n-1} \\ z\_n &= \tilde{f}\_{n-1}(y) \\ \upsilon &= \frac{\partial \tilde{f}\_{n-1}}{\partial y\_1} \tilde{f}\_1(y) + \dots + \frac{\partial \tilde{f}\_{n-1}}{\partial y\_n} \tilde{f}\_n(y) + \frac{\partial \tilde{f}\_{n-1}}{\partial y\_n} u \end{aligned} \tag{32}$$

the system is transformed as

$$
\Sigma: \dot{\mathbf{z}} = \overline{f}(\mathbf{z}) + bv,\ \mathbf{z} \in \mathfrak{R}^n, v \in \mathfrak{R} \tag{33}
$$

where

Feedback and Partial Feedback Linearization of Nonlinear Systems: A Tribute to the Elders http://dx.doi.org/10.5772/64689 33

$$\overline{f}(\mathbf{z}) = \begin{pmatrix}\overline{f}\_1(z\_1,\cdots,z\_n)\\\overline{f}\_2(z\_1,\cdots,z\_n)\\\vdots\\\overline{f}\_{n-2}(z\_1,\cdots,z\_n)\\ z\_n\\ 0\end{pmatrix} \tag{34}$$
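To make Step 0 concrete, the push-forward (32) can be checked symbolically on a low-dimensional toy system; the vector field $\tilde f$ below is a made-up illustration, not an example from the chapter.

```python
import sympy as sp

y1, y2, y3, u = sp.symbols('y1 y2 y3 u')
y = sp.Matrix([y1, y2, y3])
ftilde = sp.Matrix([y2, y3 + y1**2, 0])   # toy \tilde f, with b = (0, 0, 1)^T
ydot = ftilde + sp.Matrix([0, 0, 1]) * u  # \dot y = \tilde f(y) + b u

# Push-forward (32): z1 = y1, z2 = y2, z3 = \tilde f_2(y), v = dz3/dt
z3 = ftilde[1]
v = (sp.Matrix([z3]).jacobian(y) * ydot)[0]

# In the new coordinates the second equation is a pure integrator,
# since zdot_2 = ydot_2 = \tilde f_2(y) = z3:
print(sp.simplify(ydot[1] - z3))  # 0
```

The new input *v* collects all the remaining nonlinearity of the last equation, exactly as in (32).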

**Step 1:** We reset the original notation, that is, replace the variable *z* by *x* and *v* by *u*. Then, we decompose *f* as follows


$$f(\mathbf{x}) = \begin{pmatrix}f\_1(\mathbf{x})\\ f\_2(\mathbf{x})\\ \vdots\\ f\_{n-2}(\mathbf{x})\\ x\_n\\ 0\end{pmatrix} = \begin{pmatrix}f\_1(x\_1,\cdots,x\_{n-1})\\ f\_2(x\_1,\cdots,x\_{n-1})\\ \vdots\\ f\_{n-2}(x\_1,\cdots,x\_{n-1})\\ 0\\ 0\end{pmatrix} + x\_n \begin{pmatrix}g\_1(x\_1,\cdots,x\_{n-1})\\ g\_2(x\_1,\cdots,x\_{n-1})\\ \vdots\\ g\_{n-2}(x\_1,\cdots,x\_{n-1})\\ 1\\ 0\end{pmatrix} + x\_n^2 \begin{pmatrix}G\_1^\*(\mathbf{x})\\ G\_2^\*(\mathbf{x})\\ \vdots\\ G\_{n-2}^\*(\mathbf{x})\\ 0\\ 0\end{pmatrix} \tag{35}$$
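The splitting (35) is simply an expansion of each component of *f* in powers of the last variable. A minimal symbolic sketch for one made-up component (here $x_3$ plays the role of $x_n$, and the split is exact because the component has degree 2 in $x_3$):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f1 = x1*x2 + x2*x3**2   # made-up component f_1(x)

# f_1 = f_1|_{x3=0} + x3 * g_1 + x3**2 * G_1
f0 = f1.subs(x3, 0)                           # part independent of x3
g1 = sp.diff(f1, x3).subs(x3, 0)              # linear coefficient in x3
G1 = sp.simplify((f1 - f0 - x3*g1) / x3**2)   # quadratic remainder
print(f0, g1, G1)  # x1*x2 0 x2
```

For components of higher degree in $x_n$, the remainder $G_1^*$ simply absorbs all higher-order terms and then depends on $x_n$ as well, matching the dependence shown in (35).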

If $G_{n-2}^*(x_1, \cdots, x_n) \neq 0$, then the algorithm stops. This means that the dimension of the largest linear subsystem is 2. In case $G_{n-2}^*(x_1, \cdots, x_n) = 0$, we define *ρ* as the largest *j* such that $G_j^*(x_1, \cdots, x_n) \neq 0$. If $G_j^*(x_1, \cdots, x_n) = 0$ for all $1 \le j \le n-2$, then we put $\rho = 0$. We then apply the Frobenius theorem to straighten the vector field

$$\mathbf{g}\left(\mathbf{x}\right) = \begin{pmatrix} \mathbf{g}\_1\left(\mathbf{x}\_1, \dots, \mathbf{x}\_{n-1}\right) \\ \mathbf{g}\_2\left(\mathbf{x}\_1, \dots, \mathbf{x}\_{n-1}\right) \\ \vdots \\ \mathbf{g}\_{n-2}\left(\mathbf{x}\_1, \dots, \mathbf{x}\_{n-1}\right) \\ \mathbf{1} \\ \mathbf{0} \end{pmatrix} \tag{36}$$

by defining coordinates $y = \varphi(x)$ such that $\varphi_* g = (0, \ldots, 0, 1, 0)^\top \triangleq Ab$. Notice that, because *g* depends only on the variables $x_1, \cdots, x_{n-1}$, so do the first (*n*–1) components of the diffeomorphism *φ*. Thus, the system is transformed as

$$
\Sigma: \quad \dot{\mathbf{y}} = \begin{pmatrix} \tilde f\_1(y\_1, \cdots, y\_{n-1}) \\ \tilde f\_2(y\_1, \cdots, y\_{n-1}) \\ \vdots \\ \tilde f\_{n-2}(y\_1, \cdots, y\_{n-1}) \\ 0 \\ 0 \end{pmatrix} + y\_n \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \\ 0 \end{pmatrix} + y\_n^2 \begin{pmatrix} G\_1^\*(y\_1, \cdots, y\_n) \\ G\_2^\*(y\_1, \cdots, y\_n) \\ \vdots \\ G\_{n-2}^\*(y\_1, \cdots, y\_n) \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 0 \\ 1 \end{pmatrix} u \tag{37}
$$

We thus apply the push-forward transformation

$$\begin{aligned} \boldsymbol{z}\_{1} &= \boldsymbol{y}\_{1} \\ &\vdots\\ \boldsymbol{z}\_{n-2} &= \boldsymbol{y}\_{n-2} \\ \boldsymbol{z}\_{n-1} &= \boldsymbol{f}\_{n-2} \left( \boldsymbol{y}\_{1}, \cdots, \boldsymbol{y}\_{n-1} \right) \\ \boldsymbol{z}\_{n} &= \frac{\partial \boldsymbol{z}\_{n-1}}{\partial \boldsymbol{y}\_{1}} \tilde{f}\_{1} \left( \boldsymbol{y} \right) + \dots + \frac{\partial \boldsymbol{z}\_{n-1}}{\partial \boldsymbol{y}\_{n-2}} \tilde{f}\_{n-2} \left( \boldsymbol{y} \right) + \frac{\partial \boldsymbol{z}\_{n-1}}{\partial \boldsymbol{y}\_{n-1}} \boldsymbol{y}\_{n} \\ \boldsymbol{v} &= \frac{\partial \boldsymbol{z}\_{n}}{\partial \boldsymbol{y}\_{1}} \tilde{f}\_{1} \left( \boldsymbol{y} \right) + \dots + \frac{\partial \boldsymbol{z}\_{n}}{\partial \boldsymbol{y}\_{n-1}} \tilde{f}\_{n-1} \left( \boldsymbol{y} \right) + \frac{\partial \boldsymbol{z}\_{n}}{\partial \boldsymbol{y}\_{n}} \boldsymbol{u} \end{aligned} \tag{38}$$

to bring the system into the form

$$\Sigma: \dot{\mathbf{z}} = \begin{pmatrix} \overline{f}\_1(z\_1, \cdots, z\_{n-1}) \\ \vdots \\ \overline{f}\_{n-3}(z\_1, \cdots, z\_{n-1}) \\ z\_{n-1} \\ 0 \\ 0 \end{pmatrix} + z\_n \begin{pmatrix} F\_1^{n}(z\_1, \cdots, z\_n) \\ \vdots \\ F\_{\rho}^{n}(z\_1, \cdots, z\_n) \\ 0 \\ \vdots \\ 0 \end{pmatrix} + A b\, z\_n + b\, v,\ \mathbf{z} \in \mathfrak{R}^n, v \in \mathfrak{R} \tag{39}$$

or in more compact form

$$\Sigma: \quad \dot{\mathbf{z}} = f(z\_1, \dots, z\_{n-1}) + z\_n F^{n}(\mathbf{z}) + A b\, z\_n + b\, v,\ \mathbf{z} \in \mathfrak{R}^n, v \in \mathfrak{R} \tag{40}$$

with $\rho = \max\{\,j,\ 1 \le j \le n-2,\ G_j^*(x_1, \cdots, x_n) \neq 0\,\}$. Moreover, and more importantly, we also have

$$F^{n}(0) = 0 \quad \text{and} \quad \frac{\partial F\_{\rho}^{n}}{\partial z\_n} \neq 0. \tag{41}$$

### **Remark 2**



**Step 2**: We reset the original notation, that is, replace the variable *z* by *x*. Then, we decompose *f* as follows

$$f(\mathbf{x}) = \begin{pmatrix} f\_1(x\_1, \cdots, x\_{n-1}) \\ f\_2(x\_1, \cdots, x\_{n-1}) \\ \vdots \\ f\_{n-3}(x\_1, \cdots, x\_{n-1}) \\ x\_{n-1} \\ 0 \\ 0 \end{pmatrix} = \begin{pmatrix} f\_1(x\_1, \cdots, x\_{n-2}) \\ f\_2(x\_1, \cdots, x\_{n-2}) \\ \vdots \\ f\_{n-3}(x\_1, \cdots, x\_{n-2}) \\ 0 \\ 0 \\ 0 \end{pmatrix} + x\_{n-1} \begin{pmatrix} g\_1(x\_1, \cdots, x\_{n-2}) \\ g\_2(x\_1, \cdots, x\_{n-2}) \\ \vdots \\ g\_{n-3}(x\_1, \cdots, x\_{n-2}) \\ 1 \\ 0 \\ 0 \end{pmatrix} + x\_{n-1}^2 \begin{pmatrix} G\_1^\*(x\_1, \cdots, x\_{n-1}) \\ G\_2^\*(x\_1, \cdots, x\_{n-1}) \\ \vdots \\ G\_{n-3}^\*(x\_1, \cdots, x\_{n-1}) \\ 0 \\ 0 \\ 0 \end{pmatrix} \tag{42}$$

If $G_{n-3}^*(x_1, \cdots, x_{n-1}) \neq 0$, then the dimension of the largest linear subsystem is less than or equal to 3. We denote by $\rho_1$ the largest *j* such that $G_j^*(x_1, \cdots, x_{n-1}) \neq 0$. If $G_j^*(x_1, \cdots, x_{n-1}) = 0$ for all $1 \le j \le n-3$, then we put $\rho_1 = 0$. We define $\rho_1 = \max\{\rho_1, \rho\}$ as the updated largest component that cannot be cancelled or, equivalently, such that the dimension of the largest linear subsystem is less than or equal to $n - \rho_1$.

We then apply the Frobenius theorem to straighten the vector field

$$\mathcal{g}\left(\mathbf{x}\right) = \begin{pmatrix} \mathcal{g}\_1\left(\mathbf{x}\_1, \dots, \mathbf{x}\_{n-2}\right) \\ \mathcal{g}\_2\left(\mathbf{x}\_1, \dots, \mathbf{x}\_{n-2}\right) \\ \vdots \\ \mathcal{g}\_{n-3}\left(\mathbf{x}\_1, \dots, \mathbf{x}\_{n-2}\right) \\ \mathbf{1} \\ \mathbf{0} \\ \mathbf{0} \end{pmatrix} \tag{43}$$

by defining coordinates $y = \varphi(x)$ such that $\varphi_* g = (0, \ldots, 0, 1, 0, 0)^\top \triangleq A^2 b$. Notice that, because *g* depends only on the variables $x_1, \cdots, x_{n-2}$, so do the first (*n*–2) components of the diffeomorphism *φ*. Thus, the system is transformed as

$$
\Sigma: \quad \dot{\mathbf{y}} = \begin{pmatrix} \tilde f\_1(y\_1, \cdots, y\_{n-2}) \\ \vdots \\ \tilde f\_{n-3}(y\_1, \cdots, y\_{n-2}) \\ 0 \\ 0 \\ 0 \end{pmatrix} + y\_{n-1} \begin{pmatrix} F\_1^{n-1}(y\_1, \cdots, y\_{n-1}) \\ \vdots \\ F\_{\rho\_1}^{n-1}(y\_1, \cdots, y\_{n-1}) \\ 0 \\ \vdots \\ 0 \end{pmatrix} + y\_n \begin{pmatrix} F\_1^{n}(y\_1, \cdots, y\_n) \\ \vdots \\ F\_{\rho\_1}^{n}(y\_1, \cdots, y\_n) \\ 0 \\ \vdots \\ 0 \end{pmatrix} + A^2 b\, y\_{n-1} + A b\, y\_n + b\, u,\ \mathbf{y} \in \mathfrak{R}^n, u \in \mathfrak{R} \tag{44}
$$

We thus apply the push-forward transformation

$$\begin{aligned} z\_1 &= y\_1 \\ \vdots \\ z\_{n-3} &= y\_{n-3} \\ z\_{n-2} &= \tilde{f}\_{n-3} \left( y\_1, \dots, y\_{n-2} \right) \end{aligned}$$

$$\begin{cases} z\_{n-1} = \frac{\partial z\_{n-2}}{\partial y\_1} \tilde{f}\_1 \left( y \right) + \dots + \frac{\partial z\_{n-2}}{\partial y\_{n-3}} \tilde{f}\_{n-3} \left( y \right) + \frac{\partial z\_{n-2}}{\partial y\_{n-2}} y\_{n-1} \\ z\_n = \frac{\partial z\_{n-1}}{\partial y\_1} \tilde{f}\_1 \left( y \right) + \dots + \frac{\partial z\_{n-1}}{\partial y\_{n-2}} \tilde{f}\_{n-2} \left( y \right) + \frac{\partial z\_{n-1}}{\partial y\_{n-1}} y\_n \end{cases} \tag{45}$$

$$\upsilon = \frac{\partial z\_n}{\partial y\_1} \tilde{f}\_1 \left( y \right) + \dots + \frac{\partial z\_n}{\partial y\_{n-1}} \tilde{f}\_{n-1} \left( y \right) + \frac{\partial z\_n}{\partial y\_n} u$$

to bring the system into the form


$$\Sigma: \dot{\mathbf{z}} = \begin{pmatrix} \overline{f}\_1(z\_1, \cdots, z\_{n-2}) \\ \vdots \\ \overline{f}\_{n-4}(z\_1, \cdots, z\_{n-2}) \\ z\_{n-2} \\ 0 \\ 0 \\ 0 \end{pmatrix} + z\_{n-1} \begin{pmatrix} F\_1^{n-1}(z\_1, \cdots, z\_{n-1}) \\ \vdots \\ F\_{\rho\_1}^{n-1}(z\_1, \cdots, z\_{n-1}) \\ 0 \\ \vdots \\ 0 \end{pmatrix} + z\_n \begin{pmatrix} F\_1^{n}(z\_1, \cdots, z\_n) \\ \vdots \\ F\_{\rho\_1}^{n}(z\_1, \cdots, z\_n) \\ 0 \\ \vdots \\ 0 \end{pmatrix} + A^2 b\, z\_{n-1} + A b\, z\_n + b\, v \tag{46}$$

or in more compact form


$$\Sigma: \ \dot{\mathbf{z}} = f(\mathbf{z}\_1, \dots, \mathbf{z}\_{n-2}) + \mathbf{z}\_{n-1} F^{n-1}(\mathbf{z}\_1, \dots, \mathbf{z}\_{n-1}) + \mathbf{z}\_n F^n(\mathbf{z}) + A^2 b \mathbf{z}\_{n-1} + A b \mathbf{z}\_n + b u,\tag{47}$$

with $F^{n-1}(0) = F^n(0) = 0$ and either $\dfrac{\partial F^{n}\_{\rho\_1}}{\partial z\_n} \neq 0$ or $\dfrac{\partial F^{n-1}\_{\rho\_1}}{\partial z\_{n-1}} \neq 0$.

**General step**: Let us assume that the system has been transformed such that it takes the form

$$\Sigma: \quad \dot{\mathbf{x}} = f(x\_1, \dots, x\_k) + \sum\_{i=k}^{n-1}\left(x\_{i+1} F^{i+1}(x\_1, \dots, x\_{i+1}) + A^{n-i} b\, x\_{i+1}\right) + b\, u,\ \mathbf{x} \in \mathfrak{R}^n, u \in \mathfrak{R} \tag{48}$$

where $F^{i+1}(0) = 0$ for all $k \le i \le n-1$ and $\dfrac{\partial F^{i+1}\_{\rho}}{\partial x\_{i+1}} \neq 0$ for some *i*, with *ρ* being the largest nonzero component among those of the vector fields $F^{k+1}, \ldots, F^{n}$. We will write

$$f(x\_1, \dots, x\_k) = \begin{pmatrix} f\_1(x\_1, \dots, x\_k) \\ f\_2(x\_1, \dots, x\_k) \\ \vdots \\ f\_{k-2}(x\_1, \dots, x\_k) \\ x\_k \\ 0 \\ \vdots \\ 0 \end{pmatrix} \text{ and } F^{i+1}(\mathbf{x}) = \begin{pmatrix} F\_1^{i+1}(x\_1, \dots, x\_{i+1}) \\ F\_2^{i+1}(x\_1, \dots, x\_{i+1}) \\ \vdots \\ F\_{\rho\_{i+1}}^{i+1}(x\_1, \dots, x\_{i+1}) \\ 0 \\ \vdots \\ 0 \end{pmatrix} \tag{49}$$

Then, we decompose the vector field *f* as follows

$$f(\mathbf{x}) = \begin{pmatrix} f\_1(x\_1, \cdots, x\_{k-1}) \\ f\_2(x\_1, \cdots, x\_{k-1}) \\ \vdots \\ f\_{k-2}(x\_1, \cdots, x\_{k-1}) \\ 0 \\ \vdots \\ 0 \end{pmatrix} + x\_k \begin{pmatrix} g\_1(x\_1, \cdots, x\_{k-1}) \\ g\_2(x\_1, \cdots, x\_{k-1}) \\ \vdots \\ g\_{k-2}(x\_1, \cdots, x\_{k-1}) \\ 1 \\ \vdots \\ 0 \end{pmatrix} + x\_k^2 \begin{pmatrix} G\_1(x\_1, \cdots, x\_k) \\ G\_2(x\_1, \cdots, x\_k) \\ \vdots \\ G\_{k-2}(x\_1, \cdots, x\_k) \\ 0 \\ \vdots \\ 0 \end{pmatrix} \tag{50}$$

If the largest nonzero component of the vector field $G(x)$ is less than or equal to *ρ*, then we move to the next step. If that largest component is greater than *ρ*, then we update *ρ* to this component, apply the Frobenius theorem to straighten the vector field *g*(*x*), and follow with a push-forward transformation. If at any point in the process $\rho = n - 2$, the algorithm stops; otherwise, we continue until we reach the last step.
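The bookkeeping of *ρ* across steps can be sketched as a small loop. The drift below and all names are made up for illustration, and only the quadratic-coefficient inspection of a single step is shown:

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1:5')   # n = 4
f = [x2 + x1*x4**2, x3, x4, 0]        # made-up drift, already in the form (35)

rho = 0
# Inspect the quadratic coefficients G_j^* in x4 of components 1..n-2:
for j, fj in enumerate(f[:2], start=1):
    Gj = (sp.diff(fj, x4, 2) / 2).subs(x4, 0)   # coefficient of x4**2
    if sp.simplify(Gj) != 0:
        rho = j                                  # largest such j so far
print(rho)  # 1: only the first component carries an obstruction
```

In a full implementation this inspection would be repeated after each straightening and push-forward, updating *ρ* each time as described above.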

### **5. Examples**

In this section, we consider a few examples to illustrate the partial feedback linearization algorithm.

**Example 1**: Consider a simplified model of a VTOL with dynamics [29] (see **Figure 1**).

**Figure 1.** Forces acting on a VTOL aircraft.


$$\begin{cases} \ddot{x} = -\sin(\theta)\dfrac{T}{M} + \cos(\theta)\dfrac{2\sin(\alpha)}{M}F\\[4pt] \ddot{y} = -\cos(\theta)\dfrac{T}{M} + \sin(\theta)\dfrac{2\sin(\alpha)}{M}F - g\\[4pt] \ddot{\theta} = \dfrac{2l\cos(\alpha)}{J}F \end{cases} \tag{51}$$

where *M*, *J*, *l*, and *g* denote the mass, the moment of inertia, the distance between the wingtips, and the gravitational acceleration. The control inputs are the thrust *T* and the rolling moment due to the torque *F*, whose direction forms a fixed angle *α* with the horizontal body axis. The position of the center of mass and the roll angle with respect to the horizon are (*x*, *y*) and *θ*, while $(\dot{x}, \dot{y})$ and $\dot{\theta}$ stand for their respective velocities.

Let $x_1 = x$, $x_2 = \dot{x}$, $x_3 = \theta$, $x_4 = \dot{\theta}$, $x_5 = y$, $x_6 = \dot{y}$ with control inputs

$$
u\_1 = \frac{2lF}{J} \cos \alpha
\tag{52}
$$

and


$$u\_2 = -\cos(\theta)\frac{T}{M} + \sin(\theta)\frac{2\sin(\alpha)}{M}F - g \tag{53}$$

The system rewrites in the form

$$\Sigma: \dot{\mathbf{x}} = f(\mathbf{x}) + g\_1(\mathbf{x})u\_1 + g\_2(\mathbf{x})u\_2,\\ \mathbf{x} = (\mathbf{x}\_1, \dots, \mathbf{x}\_6) \in \mathfrak{R}^6 \tag{54}$$

with

$$f(\mathbf{x}) = \begin{pmatrix} x\_2 \\ g\tan x\_3 \\ x\_4 \\ 0 \\ x\_6 \\ 0 \end{pmatrix},\ g\_1(\mathbf{x}) = \begin{pmatrix} 0 \\ \eta(x\_3) \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{and}\ g\_2(\mathbf{x}) = \begin{pmatrix} 0 \\ \tan x\_3 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \tag{55}$$

where

$$\eta(x\_3) = \frac{J \tan \alpha}{M l} \left( \frac{\cos^2 x\_3 - \sin^2 x\_3}{\cos x\_3} \right) \tag{56}$$

We showed in [25] that the change of coordinates

$$\mathbf{z} = \varphi(\mathbf{x}) \triangleq \begin{cases} z\_1 = x\_1 \\ z\_2 = x\_2 - x\_4 \eta(x\_3) - x\_6 \tan x\_3 \\ z\_3 = x\_3 \\ z\_4 = x\_4 \\ z\_5 = x\_5 \\ z\_6 = x\_6 \end{cases} \tag{57}$$

transformed the system into the form (54), where

$$\overline{f}(\mathbf{z}) = \begin{pmatrix} z\_2 + z\_4 \eta(z\_3) + z\_6 \tan z\_3\\ g\tan z\_3 - \eta'(z\_3) z\_4^2 - z\_6 z\_4 \sec^2(z\_3) \\ z\_4\\ 0 \\ z\_6 \\ 0 \end{pmatrix},\ \overline{g}\_1(\mathbf{z}) = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} \quad \text{and}\ \overline{g}\_2(\mathbf{z}) = \begin{pmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{pmatrix} \tag{58}$$

The distribution generated by *g*1 and *g*2 is involutive and constant. A simple feedback

$$
\upsilon\_1 = -\mathbf{x}\_1 - 2\mathbf{x}\_3 + 2\mathbf{x}\_5^2 \mathbf{x}\_6 - 2\mathbf{x}\_6^2 + \mathbf{u}\_1 + \mathbf{u}\_2 \text{ and } \upsilon\_2 = -\mathbf{x}\_4 - \mathbf{x}\_5 + \mathbf{x}\_4 \mathbf{x}\_5 + \mathbf{u}\_2 \tag{59}
$$

transforms the system so as

$$f(\mathbf{x}) = \begin{pmatrix} \mathbf{x}\_1 + \mathbf{x}\_2 - \mathbf{x}\_4^2 + 2\mathbf{x}\_4\mathbf{x}\_5\\ \mathbf{x}\_3 - \mathbf{x}\_6^2\\ 0\\ \mathbf{x}\_5\\ 0\\ \mathbf{x}\_4 + \mathbf{x}\_5^2 - \mathbf{x}\_6 \end{pmatrix},\ g\_1(\mathbf{x}) = \begin{pmatrix} 0\\ 0\\ 1\\ 0\\ 0\\ 0 \end{pmatrix} \quad \text{and}\ g\_2(\mathbf{x}) = \begin{pmatrix} 0\\ 0\\ 0\\ 0\\ 1\\ 0 \end{pmatrix} \tag{60}$$

We then decompose the vector field *f* as

$$f(\mathbf{x}) = \begin{pmatrix} x\_1 + x\_2 - x\_4^2 + 2x\_4 x\_5 \\ x\_3 - x\_6^2 \\ 0 \\ x\_5 \\ 0 \\ x\_4 + x\_5^2 - x\_6 \end{pmatrix} = x\_3 \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} + x\_5 \begin{pmatrix} 2x\_4 \\ 0 \\ 0 \\ 1 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} x\_1 + x\_2 - x\_4^2 \\ -x\_6^2 \\ 0 \\ 0 \\ 0 \\ x\_4 + x\_5^2 - x\_6 \end{pmatrix} \tag{61}$$

Here, we rectify the two vector fields (affine in *x*3 and *x*5) and find the change of coordinates


$$\begin{cases} \mathcal{Y}\_1 = \mathbf{x}\_1 - \mathbf{x}\_4^2 \\ \mathcal{Y}\_2 = \mathbf{x}\_2 \\ \mathcal{Y}\_3 = \mathbf{x}\_3 \\ \mathcal{Y}\_4 = \mathbf{x}\_4 \\ \mathcal{Y}\_5 = \mathbf{x}\_5 \\ \mathcal{Y}\_6 = \mathbf{x}\_6 \end{cases} \tag{62}$$

to transform the system into


$$\begin{cases}
\dot{y}\_1 = y\_1 + y\_2 \\
\dot{y}\_2 = y\_3 - y\_6^2 \\
\dot{y}\_3 = u\_1 \\
\dot{y}\_4 = y\_5 \\
\dot{y}\_5 = u\_2 \\
\dot{y}\_6 = y\_4 - y\_6 + y\_5^2
\end{cases} \tag{63}$$
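This can be verified by a direct chain-rule computation. The SymPy sketch below is our own check (symbol names chosen locally, not from the chapter); it applies the change of coordinates (62) to system (60):

```python
import sympy as sp

x1, x2, x3, x4, x5, x6, u1, u2 = sp.symbols('x1:7 u1:3')
xs = [x1, x2, x3, x4, x5, x6]

# System (60): xdot_i = f_i(x) + (g1)_i*u1 + (g2)_i*u2
xdot = [x1 + x2 - x4**2 + 2*x4*x5,
        x3 - x6**2,
        u1,
        x5,
        u2,
        x4 + x5**2 - x6]

# Change of coordinates (62): y1 = x1 - x4^2, y_i = x_i otherwise
y = [x1 - x4**2, x2, x3, x4, x5, x6]

# Chain rule: ydot_i = sum_j (dy_i/dx_j) * xdot_j
ydot = [sp.expand(sum(sp.diff(yi, xj) * xd for xj, xd in zip(xs, xdot)))
        for yi in y]

# The y-dynamics match (63): ydot1 = y1 + y2, ydot2 = y3 - y6^2, ...
assert sp.simplify(ydot[0] - (y[0] + y[1])) == 0
assert ydot[2] == u1 and ydot[4] == u2
```

Note how the $-2x\_4 x\_5$ contributed by $\dot{y}\_1 = \dot{x}\_1 - 2x\_4\dot{x}\_4$ cancels the cross term $2x\_4 x\_5$ in $\dot{x}\_1$, which is exactly what the rectification was designed to do.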

If we apply the push-forward transformation given by $z\_3 = y\_3 - y\_6^2$, $z\_j = y\_j$ for $j \neq 3$, and the feedback $v\_1 = u\_1 - 2y\_6\left(y\_4 + y\_5^2 - y\_6\right)$, $v\_2 = u\_2$, we take the system into

$$\Sigma: \begin{cases} \dot{\mathbf{z}}\_1 = \mathbf{z}\_1 + \mathbf{z}\_2 \\ \dot{\mathbf{z}}\_2 = \mathbf{z}\_3 \\ \dot{\mathbf{z}}\_3 = \mathbf{v}\_1 \\ \dot{\mathbf{z}}\_4 = \mathbf{z}\_5 \\ \dot{\mathbf{z}}\_5 = \mathbf{v}\_2 \\ \dot{\mathbf{z}}\_6 = \mathbf{z}\_4 - \mathbf{z}\_6 + \mathbf{z}\_5^2 \end{cases} \tag{64}$$

with *ρ* = 4 being the dimension of the largest linear subsystem.
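The last step can be checked the same way. This SymPy sketch (ours, with assumed symbol names) confirms that the push-forward and feedback carry (63) into (64):

```python
import sympy as sp

y1, y2, y3, y4, y5, y6, u1, u2 = sp.symbols('y1:7 u1:3')
ys = [y1, y2, y3, y4, y5, y6]

# System (63)
ydot = [y1 + y2, y3 - y6**2, u1, y5, u2, y4 - y6 + y5**2]

# Push-forward z3 = y3 - y6^2 (z_j = y_j otherwise) together with the
# feedback v1 = u1 - 2*y6*(y4 + y5^2 - y6), v2 = u2
z = [y1, y2, y3 - y6**2, y4, y5, y6]
v1 = u1 - 2*y6*(y4 + y5**2 - y6)
v2 = u2

# Chain rule: zdot_i = sum_j (dz_i/dy_j) * ydot_j
zdot = [sp.expand(sum(sp.diff(zi, yj) * yd for yj, yd in zip(ys, ydot)))
        for zi in z]

# Matches (64): only the z6 equation remains nonlinear
assert sp.simplify(zdot[2] - v1) == 0
assert sp.simplify(zdot[5] - (z[3] - z[5] + z[4]**2)) == 0
```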

## **Author details**

Issa Amadou Tall

Address all correspondence to: tallia@elac.edu

East Los Angeles College, Avenida Cesar Chavez, Monterey Park, CA, USA

## **References**

[1] P. Brunovsky. A classification of linear controllable systems. Kybernetica. 1970;3(6):173–187.

[2] A. J. Krener. On the equivalence of control systems and the linearization of nonlinear systems. SIAM Journal on Control. 1973;11:670–676.

[3] R. W. Brockett. Feedback invariants for nonlinear systems. Proceedings of 7th IFAC Congress, Helsinki. 1978:1115–1120.

[4] B. Jakubczyk and W. Respondek. On linearization of control systems. Bulletin Academie Polonaise des Sciences, Series Mathematics. 1980;28:517–522.

[5] L. R. Hunt and R. Su. Linear equivalents of nonlinear time varying systems. In: Proceedings of Mathematical Theory of Networks & Systems; August 5–7; Santa Monica, CA, USA; 1981. p. 119–123.

[6] B. Charlet, J. Levine and R. Marino. Dynamic feedback linearization. SIAM Journal on Control and Optimization. 1991;29:38–57.

[7] B. Charlet, J. Levine and R. Marino. Sufficient conditions for dynamic state feedback linearization. Systems & Control Letters. 1989;13:143–151.

[8] Z. Sun and S. S. Ge. Nonregular feedback linearization: a nonsmooth approach. IEEE Transactions on Automatic Control. 2003;48(10):1772–1776.

[9] S.-J. Li and W. Respondek. Orbital feedback linearization of multi-input control systems. International Journal of Robust and Nonlinear Control. 2015;25(1):1352–1378.

[10] C. Nielsen and M. Maggiore. On local transverse feedback linearization. SIAM Journal on Control and Optimization. 2008;47(5):2227–2250.

[11] A. Isidori and A. J. Krener. On feedback equivalence of nonlinear systems. Systems & Control Letters. 1982;2(2):118–121.

[12] A. Isidori, C. Gori-Giorgi, A. J. Krener, and S. Monaco. Nonlinear decoupling via feedback: A differential geometric approach. IEEE Transactions on Automatic Control. 1982;26(2):331–345.

[13] L. R. Hunt, R. Su, and G. Meyer. Design for multi-input nonlinear systems. In: R. W. Brockett, R. S. Millman, and H. Sussmann, editors. Differential Geometric Control Theory. Boston, USA: Birkhauser; 1983. p. 268–298.

[14] R. Marino. On the largest feedback linearizable subsystem. Systems & Control Letters. 1986;6(5):345–351.

[15] A. J. Krener, A. Isidori, and W. Respondek. Partial and robust linearization by feedback. In: 22nd IEEE Conference on Decision and Control; December 14–16; San Antonio, Texas; 1983. p. 126–130.

[16] Z. Xu and L. R. Hunt. On the largest input-output linearizable subsystem. IEEE Transactions on Automatic Control. 1996;41(1):128–132.


## **Analyzing Quantum Time‐Dependent Singular Potential Systems in One Dimension**

Salah Menouar and Jeong Ryeol Choi

Additional information is available at the end of the chapter

http://dx.doi.org/10.5772/64007

## **Abstract**

Quantum states of a particle subjected to time‐dependent singular potentials in one dimension are investigated using the invariant operator method and the Nikiforov‐Uvarov method. We consider the case in which the system is governed by two singular potentials: the Coulomb potential and the inverse quadratic potential. An invariant operator that is a function of time is constructed on the basis of fundamental mechanics. This invariant operator is transformed, by means of a unitary operator, into a simple time‐independent invariant operator. By solving the Schrödinger equation in the transformed system, analytical forms of the exact eigenvalues and eigenfunctions of the invariant operator are evaluated in a simple, elegant manner with the help of the Nikiforov‐Uvarov method. Eventually, the full wave functions in the original (untransformed) system are obtained through an inverse unitary transformation from the wave functions in the transformed system. Quantum characteristics of the system associated with the wave functions are addressed in detail.

**Keywords:** time‐dependent Hamiltonian systems, singular potentials, unitary trans‐ formation, wave function, Schrödinger equation
