*Unmanned Robotic Systems and Applications*

*Advanced UAVs Nonlinear Control Systems and Applications*
*DOI: http://dx.doi.org/10.5772/intechopen.86353*

#### **4. Quadcopter control**

In this section, the control laws of the quadcopter will be derived using the aforementioned nonlinear control methods.

The control scheme of the quadcopter can be represented as in **Figure 2**. It consists of two loops: the attitude control loop (the inner loop), which produces the control commands that drive the quadcopter, and the position control loop (the outer loop), which produces the references for the inner loop.

**Figure 2.**
*The block diagram of the position control system of the quadcopter.*

#### **4.1 Quadcopter control using integral backstepping**

Control laws for the attitude and position of the quadcopter are derived using the integral backstepping approach.

#### *4.1.1 Altitude control*

We will start deriving the altitude control law by defining the altitude error and the Lyapunov function as follows:

$$e_1 = z_d - z, \quad V_1 = \frac{1}{2}e_1^2 \tag{13}$$

If the term $k_1 e_1$ is added and subtracted in the $\dot{V}_1$ function, where $k_1 > 0$, it yields

$$
\dot{V}_1 = e_1\dot{e}_1 = e_1(\dot{z}_d - v_z + k_1 e_1 - k_1 e_1) \tag{14}
$$

$$
\dot{V}_1 = -k_1 e_1^2 + e_1(\dot{z}_d - v_z + k_1 e_1) \tag{15}
$$

The term $\dot{z}_d - v_z + k_1 e_1$ in the derivative of the Lyapunov function must vanish for a negative definite derivative, which can be achieved by choosing the virtual control $v_z$ such that

$$
v_{z_d} = \dot{z}_d + k_1 e_1 + c_1\int e_1\, dt \tag{16}
$$

Similar steps are repeated here to derive the control law,

$$
e_2 = v_{z_d} - v_z, \quad V_2 = \frac{1}{2}e_2^2 \tag{17}
$$

Using a similar strategy as for $v_{z_d}$ results in


$$\dot{V}_2 = -k_2 e_2^2 + e_2\left(\dot{v}_{z_d} - \frac{c\varphi\, c\theta}{m}U_1 + g + k_2 e_2\right) \tag{18}$$

$$U_1 = \frac{m}{c\varphi\, c\theta}\left\{\ddot{z}_d + k_1\dot{e}_1 + g + k_2 e_2\right\} \tag{19}$$
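As a concrete illustration, the altitude law of Eq. (19) can be sketched in a few lines of Python. The mass and the gains $k_1$, $k_2$ below are hypothetical values, not taken from the chapter, and the integral term of Eq. (16) is omitted for brevity.

```python
import math

def altitude_control(z_d, dz_d, ddz_d, z, vz, phi, theta, m=1.2, k1=4.0, k2=8.0):
    """Integral-backstepping altitude law of Eq. (19):
    U1 = m/(cos(phi)*cos(theta)) * (ddz_d + k1*de1 + g + k2*e2)."""
    g = 9.81
    e1 = z_d - z                # altitude tracking error, Eq. (13)
    de1 = dz_d - vz             # its derivative
    vz_d = dz_d + k1 * e1       # virtual control of Eq. (16) (integral term omitted here)
    e2 = vz_d - vz              # velocity error, Eq. (17)
    return m / (math.cos(phi) * math.cos(theta)) * (ddz_d + k1 * de1 + g + k2 * e2)

# At the setpoint with level attitude and zero velocity, the law reduces to the weight m*g
U1 = altitude_control(z_d=1.0, dz_d=0.0, ddz_d=0.0, z=1.0, vz=0.0, phi=0.0, theta=0.0)
```

With all errors zero the thrust command equals the hover thrust $m g$, which is a quick sanity check of the law's structure.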

#### *4.1.2 Attitude control*


The attitude control laws of the quadcopter are derived using the integral backstepping method as follows:

$$U_2 = \frac{1}{b_1}\left\{\ddot{\varphi}_d + k_3\dot{e}_3 - a_1\dot{\theta}\dot{\psi} - a_2\dot{\theta}\Omega_r + k_4 e_4\right\} \tag{20}$$

$$U_3 = \frac{1}{b_2}\left\{\ddot{\theta}_d + k_5\dot{e}_5 - a_3\dot{\varphi}\dot{\psi} + a_4\dot{\varphi}\Omega_r\right\} \tag{21}$$

$$U_4 = \frac{1}{b_3}\left\{\ddot{\psi}_d + k_7\dot{e}_7 - a_5\dot{\varphi}\dot{\theta}\right\} \tag{22}$$
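The structure of Eqs. (20)-(22) can be sketched as a single Python function. The dictionary keys, gain values, and inertia ratios below are hypothetical placeholders for the model quantities $a_1,\dots,a_5$, $b_1,b_2,b_3$, and $\Omega_r$ defined with the quadcopter model.

```python
def attitude_control(state, refs, a, b, k, Omega_r):
    """Integral-backstepping attitude laws, Eqs. (20)-(22).
    a = (a1..a5), b = (b1, b2, b3); all numeric values are hypothetical."""
    dphi, dtheta, dpsi = state["dphi"], state["dtheta"], state["dpsi"]
    a1, a2, a3, a4, a5 = a
    b1, b2, b3 = b
    U2 = (refs["ddphi_d"] + k[3] * refs["de3"] - a1 * dtheta * dpsi
          - a2 * dtheta * Omega_r + k[4] * refs["e4"]) / b1              # Eq. (20)
    U3 = (refs["ddtheta_d"] + k[5] * refs["de5"] - a3 * dphi * dpsi
          + a4 * dphi * Omega_r) / b2                                    # Eq. (21)
    U4 = (refs["ddpsi_d"] + k[7] * refs["de7"] - a5 * dphi * dtheta) / b3  # Eq. (22)
    return U2, U3, U4

# With zero body rates, Eq. (20) reduces to (k3*de3 + k4*e4)/b1
U2, U3, U4 = attitude_control(
    state={"dphi": 0.0, "dtheta": 0.0, "dpsi": 0.0},
    refs={"ddphi_d": 0.0, "de3": 0.0, "e4": 1.0,
          "ddtheta_d": 0.0, "de5": 0.0, "ddpsi_d": 0.0, "de7": 0.0},
    a=(1.0, 1.0, 1.0, 1.0, 1.0), b=(0.5, 1.0, 1.0),
    k={3: 1.0, 4: 2.0, 5: 1.0, 7: 1.0}, Omega_r=0.0)
```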

#### *4.1.3 Position control*

The Cartesian motion of a quadcopter in the *x*-*y* plane relies on the *θ* and *ϕ* angles with respect to the *x* and *y* axes, respectively. Hence, the *θ* and *ϕ* angles have been considered as the outputs of the *x* and *y* control laws. In this chapter, exact Euler angles, rather than small-angle approximations, have been considered to obtain the position control laws on the *x* and *y* axes; this is an important criterion for high-dynamic-performance trajectory tracking control. The position control laws are derived directly from the quadcopter's model by applying the procedure of the control approaches. By applying the integral backstepping procedure to the position equations of the quadcopter, one can obtain the following control laws:

$$\theta_d = \arcsin\left(\frac{m}{c\varphi\, c\theta\, U_1}\left\{\dot{v}_{x_d} - \frac{s\varphi\, s\psi}{m}U_1 + k_{10}e_{10}\right\}\right) \tag{23}$$

$$\varphi_d = -\arcsin\left(\frac{m}{c\psi\, U_1}\left\{\dot{v}_{y_d} - \frac{c\varphi\, s\theta\, s\psi}{m}U_1 + k_{12}e_{12}\right\}\right) \tag{24}$$
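Eqs. (23)-(24) map lateral acceleration demands into desired tilt angles through an arcsine. A minimal sketch follows; the mass, gains, and the clamping of the arcsine argument to its valid domain are assumptions for illustration, not part of the chapter's derivation.

```python
import math

def position_references(dvx_d, dvy_d, phi, theta, psi, U1,
                        m=1.2, k10=1.0, e10=0.0, k12=1.0, e12=0.0):
    """Desired pitch/roll references from the x-y position laws, Eqs. (23)-(24).
    Gains and mass are hypothetical; the asin argument is clamped to [-1, 1]."""
    sx = m / (math.cos(phi) * math.cos(theta) * U1) * (
        dvx_d - math.sin(phi) * math.sin(psi) / m * U1 + k10 * e10)
    sy = m / (math.cos(psi) * U1) * (
        dvy_d - math.cos(phi) * math.sin(theta) * math.sin(psi) / m * U1 + k12 * e12)
    clamp = lambda v: max(-1.0, min(1.0, v))   # guard against saturation of asin
    theta_d = math.asin(clamp(sx))             # Eq. (23)
    phi_d = -math.asin(clamp(sy))              # Eq. (24)
    return theta_d, phi_d

# No lateral demand and level attitude -> zero desired tilt
theta_d, phi_d = position_references(0.0, 0.0, 0.0, 0.0, 0.0, U1=11.77)
```

The clamp is a practical safeguard: during aggressive maneuvers the bracketed term can momentarily exceed the arcsine domain.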

#### **4.2 Quadcopter control using feedback linearization with LQI**

The feedback linearization method is used in order to decouple the state variables of the quadcopter. This will enable us to derive the LQ-based control laws for the attitude, altitude, and position of the quadcopter.

#### *4.2.1 Altitude control*

The feedback linearization law of the altitude is given as follows:

$$Y\_3 = \mathbf{z} \tag{25}$$

$$
\ddot{Y}_3 = \ddot{z} = \frac{c\varphi\, c\theta}{m}U_1 - g \tag{26}
$$

$$U_1 = \frac{m}{c\varphi\, c\theta}(V_1 + g) \tag{27}$$

where *V1* is a virtual input, which is computed using the LQI controller that will be presented in Section 4.2.4.

#### *4.2.2 Attitude control*

The feedback linearization laws of the attitude are derived as follows:

$$U\_2 = \frac{I\_{\text{xx}}}{l} \left\{-a\_1 \dot{\theta} \dot{\psi} - a\_2 \Omega\_r \dot{\theta} + V\_2\right\} \tag{28}$$

*<sup>u</sup>* ¼ �*K x* ¼ �*R*�<sup>1</sup>

*ATP* <sup>þ</sup> *PA* � *PBR*�<sup>1</sup>

*Advanced UAVs Nonlinear Control Systems and Applications*

*DOI: http://dx.doi.org/10.5772/intechopen.86353*

*x*\_ *z*\_ " #

**4.3 Quadcopter control using sliding mode**

*4.3.1 Altitude control*

becomes

**Figure 3.**

**89**

*LQI optimal controller structure.*

<sup>¼</sup> *<sup>A</sup>* <sup>0</sup> �*C* 0 � � *x*

Hence, the control law *u* with an integral action is as follows:

sliding mode control, the steps followed are discussed below.

at first, the sliding surface should be determined as follows:

in which *<sup>P</sup>*\_ <sup>¼</sup> <sup>0</sup>

where *P* is a covariance matrix. It is the solution of the algebraic Riccati Eq. (36),

LQR controller is capable to provide a high dynamic performance when used with linear or linearized control systems. However, LQR is not capable to ensure fast tracking of time varying command signals [33, 34]. Different types of LQRs are demonstrated in literatures [32]. **Figure 3** shows an LQI regulator, with an integral action. If the model of the linear system is extended by an error vector *z*\_ such as

where *r* is a reference signal, which may represent the desired trajectory for tracking. The extended state space model of the LQI regulator is as follows:

> *z* � � þ

In order to obtain the attitude and position control laws of the quadcopter using

In order to obtain the control laws of the quadcopter using sliding mode control,

where *e*<sup>1</sup> ¼ *zd* � *z, e*\_<sup>1</sup> ¼ *z*\_*<sup>d</sup>* � *z*\_, so that the derivative of the sliding surface

*BTP x* (35)

*BTP* <sup>þ</sup> *<sup>Q</sup>* <sup>¼</sup> *<sup>P</sup>*\_ (36)

*<sup>z</sup>*\_ <sup>¼</sup> *<sup>r</sup>* � *<sup>y</sup>* <sup>¼</sup> *<sup>r</sup>* � ð Þ *Cx* <sup>þ</sup> *Du* (37)

*B* 0 *D I* � � *<sup>u</sup>*

*r*

*u* ¼ �*K x* � *KIz* (39)

*s*<sup>1</sup> ¼ *c*1*e*<sup>1</sup> þ *e*\_<sup>1</sup> (40)

*s*\_<sup>1</sup> ¼ *c*1*e*\_<sup>1</sup> þ *e*€<sup>1</sup> (41)

� � (38)

$$U_3 = \frac{I_{yy}}{l}\left\{-a_3\dot{\varphi}\dot{\psi} + a_4\Omega_r\dot{\varphi} + V_3\right\} \tag{29}$$

$$U_4 = \frac{I_{zz}}{l}\left\{-a_5\dot{\varphi}\dot{\theta} + V_4\right\} \tag{30}$$

The previous control laws linearize the mapping between the derivatives of the flat outputs $Y_4 = \varphi$, $Y_5 = \theta$, and $Y_6 = \psi$ and the virtual controls $V_2$, $V_3$, and $V_4$. The latter are again computed using an LQI optimal controller, where

$$a_1 = \frac{I_{yy}-I_{zz}}{I_{xx}},\quad a_2 = \frac{J_r}{I_{xx}},\quad a_3 = \frac{I_{zz}-I_{xx}}{I_{yy}},\quad a_4 = \frac{J_r}{I_{yy}},\quad a_5 = \frac{I_{xx}-I_{yy}}{I_{zz}}$$

#### *4.2.3 Position control*

Here, the *ϕ* and *θ* angles are computed by the control laws of the *x* and *y* motion, as is done in the integral backstepping approach. The control laws are obtained as follows:

$$\theta_d = \arcsin\left(\frac{m}{c\varphi\, c\theta\, U_1}\left\{\dot{v}_{x_d} - \frac{s\varphi\, s\psi}{m}U_1 + V_5\right\}\right) \tag{31}$$

$$\varphi_d = -\arcsin\left(\frac{m}{c\psi\, U_1}\left\{\dot{v}_{y_d} - \frac{c\varphi\, s\theta\, s\psi}{m}U_1 + V_6\right\}\right) \tag{32}$$

where *V5* and *V6* are computed by the proposed linear quadratic integral optimal controller.

#### *4.2.4 Linear quadratic integral optimal control*

The goal of optimal control is to determine the feedback control for which the proposed cost function *J* is minimized. The cost function of the linear quadratic regulator is given as follows [32]:

$$J = \int_0^\infty \left(x^T Q\, x + u^T R\, u\right) dt \tag{33}$$

where *Q* and *R* represent the weighting matrices for the state vector *x* and control law vector *u*, respectively. LQR is conveniently applied to linear control systems or linearized nonlinear control systems. The state space model of a linear control system is given as follows:

$$\begin{aligned} \dot{x} &= Ax + Bu\\ y &= Cx + Du \end{aligned} \tag{34}$$

The control law *u*, which minimizes the cost function *J*, can be derived as follows:


$$
u = -K x = -R^{-1}B^T P x \tag{35}
$$

where *P* is a symmetric, positive definite matrix; it is the solution of the algebraic Riccati equation (36), in which $\dot{P} = 0$

$$\mathbf{A}^T \mathbf{P} + \mathbf{P} \mathbf{A} - \mathbf{P} \mathbf{B} \mathbf{R}^{-1} \mathbf{B}^T \mathbf{P} + \mathbf{Q} = \dot{\mathbf{P}} \tag{36}$$
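For a scalar system, the steady-state Riccati equation (36) can be solved in closed form, which makes Eqs. (35)-(36) easy to verify by hand. The sketch below uses hypothetical scalar values for $A$, $B$, $Q$, and $R$.

```python
import math

def scalar_lqr(a, b, q, r):
    """Solve the scalar algebraic Riccati equation of Eq. (36) with Pdot = 0,
    2*a*p - (b*p)**2 / r + q = 0, for p > 0, then return the optimal gain
    K = b*p/r of Eq. (35)."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    K = b * p / r
    return p, K

# Pure integrator xdot = u with q = r = 1 gives p = 1 and K = 1, i.e. u = -x
p, K = scalar_lqr(a=0.0, b=1.0, q=1.0, r=1.0)
```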

The LQR controller can provide high dynamic performance when used with linear or linearized control systems. However, LQR cannot ensure fast tracking of time-varying command signals [33, 34]. Different types of LQRs are demonstrated in the literature [32]. **Figure 3** shows an LQI regulator, with an integral action.

If the model of the linear system is extended with an error vector $z$ whose dynamics are given by

$$
\dot{z} = r - y = r - (Cx + Du) \tag{37}
$$

where *r* is a reference signal, which may represent the desired trajectory for tracking. The extended state space model of the LQI regulator is as follows:

$$
\begin{bmatrix}\dot{x}\\ \dot{z}\end{bmatrix} =
\begin{bmatrix}A & 0\\ -C & 0\end{bmatrix}
\begin{bmatrix}x\\ z\end{bmatrix} +
\begin{bmatrix}B & 0\\ -D & I\end{bmatrix}
\begin{bmatrix}u\\ r\end{bmatrix} \tag{38}
$$

Hence, the control law *u* with an integral action is as follows:

$$
u = -K x - K_I z \tag{39}
$$
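The LQI feedback of Eqs. (37)-(39) can be sketched without any matrix library: the integral error state is propagated by Eq. (37) and fed back alongside the plant state by Eq. (39). The gain matrices and the Euler integration step below are hypothetical.

```python
def lqi_control(K, K_I, x, z):
    """LQI control law with integral action, Eq. (39): u = -K x - K_I z.
    K and K_I are lists of rows; x and z are state and integral-error vectors."""
    return [-sum(Ki * xi for Ki, xi in zip(Krow, x))
            - sum(KIi * zi for KIi, zi in zip(KIrow, z))
            for Krow, KIrow in zip(K, K_I)]

def integrate_error(z, r, y, dt):
    """One Euler step of the integral-error dynamics of Eq. (37): zdot = r - y."""
    return [zi + (ri - yi) * dt for zi, ri, yi in zip(z, r, y)]

# Single-input, single-output example with hypothetical gains
u = lqi_control(K=[[2.0]], K_I=[[0.5]], x=[1.0], z=[4.0])   # -2*1 - 0.5*4
z = integrate_error(z=[4.0], r=[1.0], y=[0.5], dt=0.1)
```

The integral state keeps growing while the output differs from the reference, which is what lets the LQI regulator remove the steady-state tracking error that plain LQR leaves behind.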

#### **4.3 Quadcopter control using sliding mode**

In order to obtain the attitude and position control laws of the quadcopter using sliding mode control, the steps followed are discussed below.

#### *4.3.1 Altitude control*


To obtain the control laws of the quadcopter using sliding mode control, the sliding surface should first be determined as follows:

$$s\_1 = c\_1 e\_1 + \dot{e}\_1 \tag{40}$$

where $e_1 = z_d - z$ and $\dot{e}_1 = \dot{z}_d - \dot{z}$, so that the derivative of the sliding surface becomes

$$
\dot{s}\_1 = c\_1 \dot{e}\_1 + \ddot{e}\_1 \tag{41}
$$

**Figure 3.** *LQI optimal controller structure.*

From the equation of motion, the second derivative of the error becomes

$$
\ddot{e}_1 = \ddot{z}_d - \ddot{z} = \ddot{z}_d - \frac{c\varphi\, c\theta}{m}U_1 + g \tag{42}
$$

By equating Eq. (41) to zero, we obtain

$$
\dot{s}_1 = \ddot{z}_d - \frac{c\varphi\, c\theta}{m}U_1 + g + c_1(\dot{z}_d - \dot{z}) = 0 \tag{43}
$$

By using the constant plus proportional rate reaching law formula

$$-K_1 s_1 - Q_1\operatorname{sgn}(s_1) = \ddot{z}_d - \frac{c\varphi\, c\theta}{m}U_1 + g + c_1(\dot{z}_d - \dot{z}) \tag{44}$$

so that the control law of the altitude becomes

$$U_1 = \frac{m}{c\varphi\, c\theta}\left\{\ddot{z}_d + g + c_1(\dot{z}_d - \dot{z}) + K_1 s_1 + Q_1\operatorname{sgn}(s_1)\right\} \tag{45}$$

#### *4.3.2 Attitude control*

By following the sliding mode control design steps for the attitude of the quadcopter, we obtain

$$U_2 = \frac{1}{b_1}\left\{\ddot{\varphi}_d - a_1\dot{\theta}\dot{\psi} - a_2\dot{\theta}\Omega_r + c_2(\dot{\varphi}_d - \dot{\varphi}) + K_2 s_2 + Q_2\operatorname{sgn}(s_2)\right\} \tag{46}$$

$$U_3 = \frac{1}{b_2}\left\{\ddot{\theta}_d - a_3\dot{\varphi}\dot{\psi} + a_4\dot{\varphi}\Omega_r + c_3(\dot{\theta}_d - \dot{\theta}) + K_3 s_3 + Q_3\operatorname{sgn}(s_3)\right\} \tag{47}$$

$$U_4 = \frac{1}{b_3}\left\{\ddot{\psi}_d - a_5\dot{\varphi}\dot{\theta} + c_4(\dot{\psi}_d - \dot{\psi}) + K_4 s_4 + Q_4\operatorname{sgn}(s_4)\right\} \tag{48}$$

with

$$s_2 = c_2 e_2 + \dot{e}_2, \quad s_3 = c_3 e_3 + \dot{e}_3, \quad s_4 = c_4 e_4 + \dot{e}_4$$

$$e_2 = \varphi_d - \varphi, \quad e_3 = \theta_d - \theta, \quad e_4 = \psi_d - \psi$$

#### *4.3.3 Position control*

The same strategy as in integral backstepping and feedback linearization is followed to derive the position control laws. The control laws of both *x* and *y* command the attitude loop with the references needed to accomplish the desired trajectory:

$$\theta_d = \arcsin\left(\frac{m}{c\varphi\, c\theta\, U_1}\left\{\ddot{x}_d - \frac{s\varphi\, s\psi}{m}U_1 + K_5 s_5 + Q_5\operatorname{sgn}(s_5)\right\}\right) \tag{49}$$

$$\varphi_d = -\arcsin\left(\frac{m}{c\psi\, U_1}\left\{\ddot{y}_d - \frac{c\varphi\, s\theta\, s\psi}{m}U_1 + K_6 s_6 + Q_6\operatorname{sgn}(s_6)\right\}\right) \tag{50}$$

with

$$s_5 = c_5 e_5 + \dot{e}_5, \quad s_6 = c_6 e_6 + \dot{e}_6$$

$$e_5 = x_d - x, \quad e_6 = y_d - y$$

#### **4.4 Results**

The discussed nonlinear approaches have been tested in MATLAB/Simulink based on the nonlinear quadcopter model of Eq. (10), and experimental verification was also conducted. For modeling and simulation of the proposed approaches, the simulation sample time was Ts = 100 μs and the solver was a fixed-step Runge-Kutta method. **Figures 4** and **5** show the system's trajectory tracking response. **Figure 4a** depicts the system response when implementing the proposed integral backstepping approach, **Figure 4b** shows the system response using the feedback linearization with LQI approach, and **Figure 4c** represents the system response using sliding mode control. **Figure 4a**–**c** demonstrates the system's tracking of a desired trajectory command signal in the presence of external disturbances, which are added to the command signals at different time instances. The initial position of the desired trajectory was (2, 0, 0), but the quadcopter was initialized at a different position, (0, 0, 0). As seen from **Figure 4a**–**c**, for the three investigated control approaches, the actual trajectory initially deviated from the desired trajectory but quickly converged to it. **Figure 5** exhibits the reference signals and the responses for the x, y, and z axes of the quadcopter in 3D space. The references on the x and y axes were selected to be sinusoidal signals with 2 m magnitude and 0.05 Hz frequency. The command along the z axis was a ramp signal with a 0.2 m/s velocity rate. **Figure 6** shows the tracking errors of the

**Figure 4.**
*Desired and actual trajectory: proposed integral backstepping response (4-a), feedback linearization with LQI response (4-b), and sliding mode control response (4-c).*

**Figure 5.**
*Desired and actual trajectory: proposed integral backstepping response (5-a), feedback linearization with LQI response (5-b), and sliding mode control response (5-c).*
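As a minimal, self-contained counterpart to these simulations, the sliding-mode altitude law of Eq. (45) can be exercised on a double-integrator altitude model in Python. The mass, surface and reaching-law gains, and integration step below are hypothetical values, not the chapter's.

```python
def smc_altitude(z_d, dz_d, ddz_d, z, dz, m=1.2, c1=2.0, K1=3.0, Q1=0.5):
    """Sliding-mode altitude law of Eq. (45), assuming level attitude
    (cos(phi) = cos(theta) = 1). All parameter values are hypothetical."""
    g = 9.81
    e1, de1 = z_d - z, dz_d - dz
    s1 = c1 * e1 + de1                                   # sliding surface, Eq. (40)
    sgn = 1.0 if s1 > 0 else (-1.0 if s1 < 0 else 0.0)   # sgn of the reaching law
    return m * (ddz_d + g + c1 * de1 + K1 * s1 + Q1 * sgn)

# Closed-loop Euler simulation of zddot = U1/m - g toward z_d = 1 from rest at z = 0
z, dz, dt = 0.0, 0.0, 0.002
for _ in range(5000):                 # 10 s of simulated time
    U1 = smc_altitude(1.0, 0.0, 0.0, z, dz)
    ddz = U1 / 1.2 - 9.81
    dz += ddz * dt
    z += dz * dt
```

The trajectory first reaches the surface $s_1 = 0$ and then slides along it toward the setpoint; the discontinuous sgn term produces the small chattering around the surface that motivates boundary-layer smoothing in practice.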
