In addition to the force exerted by the motors, gravity also exerts a force on the quadrotor. In the vehicle frame *F<sup>v</sup>*, the gravity force acting on the center of mass is given by:

$$\mathbf{f}_g^v = \begin{bmatrix} 0 \\ 0 \\ mg \end{bmatrix} \tag{22}$$

Hence, transforming $\mathbf{f}_g^v$ to the body frame *F<sup>b</sup>*, we get:

$$\mathbf{f}_g^b = R_v^b\,\mathbf{f}_g^v = \begin{bmatrix} -mg\sin\theta \\ mg\cos\theta\sin\phi \\ mg\cos\theta\cos\phi \end{bmatrix} \tag{23}$$

Therefore, transforming equation set (14), we get:

$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} c\theta\, c\psi & s\phi\, s\theta\, c\psi - c\phi\, s\psi & c\phi\, s\theta\, c\psi + s\phi\, s\psi \\ c\theta\, s\psi & s\phi\, s\theta\, s\psi + c\phi\, c\psi & c\phi\, s\theta\, s\psi - s\phi\, c\psi \\ -s\theta & s\phi\, c\theta & c\phi\, c\theta \end{bmatrix} \begin{bmatrix} u \\ v \\ w \end{bmatrix} \tag{24}$$

where $c$ and $s$ abbreviate $\cos$ and $\sin$,

$$\begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} = \begin{bmatrix} rv - qw \\ pw - ru \\ qu - pv \end{bmatrix} + \begin{bmatrix} -g\sin\theta \\ g\cos\theta\sin\phi \\ g\cos\theta\cos\phi \end{bmatrix} + \frac{1}{m}\begin{bmatrix} 0 \\ 0 \\ -F \end{bmatrix} \tag{25}$$

$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 1 & \sin\phi\tan\theta & \cos\phi\tan\theta \\ 0 & \cos\phi & -\sin\phi \\ 0 & \sin\phi\sec\theta & \cos\phi\sec\theta \end{bmatrix} \begin{bmatrix} p \\ q \\ r \end{bmatrix} \tag{26}$$

$$\begin{bmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{bmatrix} = \begin{bmatrix} \frac{J_y - J_z}{J_x}\,qr \\[4pt] \frac{J_z - J_x}{J_y}\,pr \\[4pt] \frac{J_x - J_y}{J_z}\,pq \end{bmatrix} + \begin{bmatrix} \frac{1}{J_x}\tau_\phi \\[4pt] \frac{1}{J_y}\tau_\theta \\[4pt] \frac{1}{J_z}\tau_\psi \end{bmatrix} \tag{27}$$

Eqs. (24)–(27) represent the complete non-linear model of the quadrotor. However, they are not appropriate for control design for several reasons. The first reason is that they are too complicated to yield significant insight into the motion of the quadrotor. The second reason is that the position and orientation are expressed relative to the inertial world-fixed frame, whereas the camera measures the position and orientation of the target with respect to the camera frame. Hence, the above set of equations is further simplified using the small angle approximation ($\sin\theta \approx \theta$, $\cos\theta \approx 1$, and similarly for $\phi$). We obtain:

$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} \approx \begin{bmatrix} u \\ v \\ w \end{bmatrix}, \qquad \begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} \approx \begin{bmatrix} p \\ q \\ r \end{bmatrix}, \qquad \begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} \approx \begin{bmatrix} rv - qw - g\theta \\ pw - ru + g\phi \\ qu - pv + g - F/m \end{bmatrix} \tag{28}$$
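As a quick illustration (added here, not part of the original chapter), the following Python sketch implements Eqs. (24)–(27) as a state-derivative function suitable for numerical integration; the mass, inertia values, thrust, and torques are placeholder values. Under the small angle approximation, the translational and attitude blocks reduce to Eq. (28).

```python
import numpy as np

# Illustrative parameters (placeholders, not from the chapter)
m, g = 1.0, 9.81                # mass [kg], gravity [m/s^2]
Jx, Jy, Jz = 0.01, 0.01, 0.02   # moments of inertia [kg m^2]

def dynamics(state, F, tau):
    """Right-hand side of the nonlinear model, Eqs. (24)-(27).
    state = [x, y, z, u, v, w, phi, theta, psi, p, q, r]."""
    x, y, z, u, v, w, phi, th, psi, p, q, r = state
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(th), np.sin(th)
    cp, sp = np.cos(psi), np.sin(psi)

    # Eq. (24): body-frame velocities -> inertial position rates
    R = np.array([[ct*cp, sf*st*cp - cf*sp, cf*st*cp + sf*sp],
                  [ct*sp, sf*st*sp + cf*cp, cf*st*sp - sf*cp],
                  [-st,   sf*ct,            cf*ct]])
    pos_dot = R @ np.array([u, v, w])

    # Eq. (25): translational dynamics in the body frame
    uvw_dot = (np.array([r*v - q*w, p*w - r*u, q*u - p*v])
               + np.array([-g*st, g*ct*sf, g*ct*cf])
               + np.array([0.0, 0.0, -F]) / m)

    # Eq. (26): body rates -> Euler angle rates
    T = np.array([[1.0, sf*np.tan(th), cf*np.tan(th)],
                  [0.0, cf,            -sf],
                  [0.0, sf/ct,         cf/ct]])
    ang_dot = T @ np.array([p, q, r])

    # Eq. (27): rotational dynamics
    pqr_dot = np.array([(Jy - Jz)/Jx*q*r + tau[0]/Jx,
                        (Jz - Jx)/Jy*p*r + tau[1]/Jy,
                        (Jx - Jy)/Jz*p*q + tau[2]/Jz])

    return np.concatenate([pos_dot, uvw_dot, ang_dot, pqr_dot])

# Forward-Euler usage example at hover thrust (F = mg keeps w_dot = 0):
state = np.zeros(12)
for _ in range(1000):
    state = state + 1e-3 * dynamics(state, F=m*g, tau=(0.0, 0.0, 0.0))
```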

**4. Vision algorithm development**

In order to control a system like the quadrotor, very reliable sensors are needed that can provide a good estimate of the system states. Sensors like the IMU and GPS are subject to noise, which can make them undesirable for control applications on their own. Hence, an efficient way to develop control strategies for autonomous quadrotor operations is to exploit computer vision.

Using computer vision algorithms, the on-board camera of the quadrotor can be used to confer full autonomy on the system, thereby allowing it to operate in almost any environment. This also removes the need to set up an additional set of external cameras or to calibrate the environment lighting. As long as the on-board camera has been calibrated beforehand (needed only once) and the target to be tracked is fully specified (marker size and ID), the system is ready to operate. Using ArUco markers as targets keeps the computation easy and fast, enabling real-time applications such as autonomous take-off and landing.

In this chapter, let us consider the application of autonomously landing the quadrotor on a stationary platform such as a car roof-top. To enable the quadrotor to identify the landing pad, an ArUco marker board is attached to the roof of the car. The vision algorithm must detect a specific ArUco marker ID and provide the quadrotor's pose relative to the marker. The algorithms used for detection and identification of the marker board are reviewed in the succeeding sub-section; a minimal end-to-end usage sketch is given below.
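As a concrete illustration (not part of the original chapter), the following Python sketch uses OpenCV's `cv2.aruco` module to detect a marker with a given ID and recover its pose. The dictionary choice, marker size, camera matrix, and distortion coefficients are placeholder assumptions that would come from the actual marker and camera calibration; the `ArucoDetector` class assumes OpenCV 4.7 or newer (older versions expose `cv2.aruco.detectMarkers` instead).

```python
import cv2
import numpy as np

TARGET_ID = 7            # marker ID to land on (illustrative)
MARKER_SIDE = 0.10       # printed side length d in metres (illustrative)

# Placeholder intrinsics; in practice these come from camera calibration.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_ARUCO_ORIGINAL)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def marker_offset(frame):
    """Return (rvec, tvec) of the target marker relative to the camera,
    or None if the marker is not visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    for marker_corners, marker_id in zip(corners, ids.flatten()):
        if marker_id != TARGET_ID:
            continue
        # 3D corner coordinates of a horizontal marker at the world origin,
        # matching Eq. (29) below.
        d = MARKER_SIDE
        obj = np.array([[-d/2,  d/2, 0], [ d/2,  d/2, 0],
                        [ d/2, -d/2, 0], [-d/2, -d/2, 0]], dtype=np.float32)
        ok, rvec, tvec = cv2.solvePnP(obj, marker_corners.reshape(4, 2), K, dist)
        if ok:
            return rvec, tvec
    return None
```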

#### **4.1 The ArUco library**

To detect the marker with a regular (RGB) camera, a library called ArUco is used, developed by Aplicaciones de la Visión Artificial (AVA) at the Universidad de Córdoba (UCO) [12]. This library is "a minimal library for Augmented Reality applications based on open source computer vision (OpenCV)" [13] and has a C++ API for working with markers, which is very useful in this work. A 100 mm Code 7 ArUco marker is shown in **Figure 2**.

A generic ArUco marker is a 2D bar-code that can be viewed as a 7 × 7 Boolean matrix whose outer cells are all black (which makes a perfect square that is easy to find with image processing). The remaining 5 × 5 matrix encodes a 10-bit ID (up to 1024 different IDs), where each row carries a couple of bits. Each row holds only 2 bits of information out of its 5, with the other 3 used for error detection.


**Figure 2.** *ArUco marker (ID: 7, size: 100 mm).*

**Figure 3.** *ArUco marker (ID: 1023, size: 100 mm).*

These extra 3 bits add asymmetry to the markers, i.e., only a few valid markers are symmetric (e.g., **Figure 3**), which allows unique detection of the marker's orientation. The codification used is a slight modification of the Hamming code (the first bit is inverted to avoid a fully black square being a valid code).

So, any ArUco marker can be created by converting a number to binary, splitting it into five groups of two bits, and placing each pair in one row of the marker, from top to bottom; a small sketch of this procedure is given below. For example, the marker ID of **Figure 2** is the number 7, which is (00 00 00 01 11) in binary. Using the information in **Table 1**, it can be verified that the generated marker is the same as that in **Figure 2**.
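The following Python sketch (an illustration added here, not from the chapter) builds the inner 5 × 5 bit matrix for a given ID. The 2-bit-to-5-bit mapping below is the codification of the original ArUco library, which is understood to be what **Table 1** lists; treat it as an assumption if a different dictionary is in use.

```python
import numpy as np

# Original ArUco codification: each pair of ID bits maps to a 5-bit row
# (a modified Hamming code with the first bit inverted).
CODE = {0b00: [1, 0, 0, 0, 0],
        0b01: [1, 0, 1, 1, 1],
        0b10: [0, 1, 0, 0, 1],
        0b11: [0, 1, 1, 1, 0]}

def marker_bits(marker_id):
    """Return the 5x5 inner bit matrix for a 10-bit ID (rows top to bottom).
    Which bit value renders as black or white follows the library's
    convention and is not specified here."""
    assert 0 <= marker_id < 1024
    rows = []
    for i in range(5):
        pair = (marker_id >> (2 * (4 - i))) & 0b11   # most significant pair first
        rows.append(CODE[pair])
    return np.array(rows)

# ID 7 = (00 00 00 01 11) -> rows 10000, 10000, 10000, 10111, 01110
print(marker_bits(7))
```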

The ArUco library processes the supplied image and detects the marker ID as well as its position and orientation in the 3D world, relative to the camera. The open source code of ArUco is based on OpenCV, a library highly optimized for image processing. Therefore, all the calculations are performed in a matter of seconds, so it can be used in real-time applications.


**Table 1.** *Codification of an ArUco marker.*


The main code is not very complex, and the marker detection is performed as follows (a rough OpenCV sketch of these steps is given after the list):

1. Convert the color image to a gray image.
2. Apply adaptive thresholding.
3. Detect contours.
4. Detect rectangles:
	- Detect corners.
	- Detect linked corners.
	- Consider figures with only four connected corners.
5. For detected markers:
	- Calculate the homography (from the corners).
	- Threshold the area using OTSU, which assumes a bi-modal distribution and finds the threshold that maximizes the extra-class variance while keeping a low intra-class variance.
	- Detect and identify a valid marker, which respects **Table 1**, and if it is not detected, try the four rotations.
6. Detect the extrinsic parameters (by supplying the calibration matrix, distortion matrix and the physical marker dimensions).
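A minimal Python/OpenCV sketch of steps 1-5 follows (added for illustration; the thresholds, block sizes, and the 7 × 7 sampling grid are assumptions, and real implementations add many robustness checks):

```python
import cv2
import numpy as np

def find_marker_candidates(frame):
    """Steps 1-4: gray conversion, adaptive threshold, contours, rectangles."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                     # step 1
    thr = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY_INV, 21, 7)          # step 2
    contours, _ = cv2.findContours(thr, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)            # step 3
    quads = []
    for c in contours:                                                 # step 4
        approx = cv2.approxPolyDP(c, 0.05 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.isContourConvex(approx) \
                and cv2.contourArea(approx) > 100:
            quads.append(approx.reshape(4, 2).astype(np.float32))
    return gray, quads

def read_marker_cells(gray, quad, cells=7, cell_px=10):
    """Step 5 (in part): homography, OTSU threshold, sample the 7x7 grid."""
    side = cells * cell_px
    dst = np.array([[0, 0], [side - 1, 0], [side - 1, side - 1],
                    [0, side - 1]], dtype=np.float32)
    H = cv2.getPerspectiveTransform(quad, dst)       # homography from corners
    warped = cv2.warpPerspective(gray, H, (side, side))
    _, binary = cv2.threshold(warped, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)     # OTSU
    # Majority vote inside each cell gives the 7x7 Boolean matrix.
    bits = np.zeros((cells, cells), dtype=int)
    for i in range(cells):
        for j in range(cells):
            cell = binary[i*cell_px:(i+1)*cell_px, j*cell_px:(j+1)*cell_px]
            bits[i, j] = int(cell.mean() > 127)
    return bits   # compare the inner 5x5 against Table 1 (trying 4 rotations)
```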

The extrinsic parameters are calculated with the help of an OpenCV function, *solvePnP()* [14, 15]. For the marker considered, the four corners of its image and their respective 3D coordinates are provided to the algorithm, which will be:

$$\mathbf{X}_{1-4} = \begin{bmatrix} -d/2 & d/2 & d/2 & -d/2 \\ d/2 & d/2 & -d/2 & -d/2 \\ 0 & 0 & 0 & 0 \end{bmatrix} \tag{29}$$

where *d* [m] is the side length of the printed square marker. As can be observed, all four points have coordinate *Z* = 0 and their (*X*, *Y*) coordinates are disposed as a square, which means the marker is considered to be horizontal at the origin of the world reference, as suggested in **Figure 4**, and so the extrinsic parameters will be the rotation and translation of the camera relative to the marker.

**Figure 4.** *A marker at the world's reference.*
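Since *solvePnP()* returns the rotation `rvec` and translation `tvec` that map marker coordinates into the camera frame, the camera pose relative to the marker is obtained by inverting that transform. A short sketch (illustrative, using only standard OpenCV/NumPy calls):

```python
import cv2
import numpy as np

def camera_pose_in_marker_frame(rvec, tvec):
    """Invert the solvePnP() result: X_cam = R @ X_marker + t,
    so the camera center in marker coordinates is -R^T t."""
    R, _ = cv2.Rodrigues(rvec)          # 3x3 rotation from the rotation vector
    cam_position = -R.T @ tvec.reshape(3)
    return R.T, cam_position            # cam_position[2] is the height above the pad
```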

**Figure 5** shows the real-time implementation of the algorithm for marker detection. The vision algorithm gives the position and orientation offset of the marker center with respect to the camera center. This acts as a reference error for the controller, which uses it to position the drone over the center of the ArUco marker and land it. Hence, the succeeding section of this chapter describes the control strategy development.

**Figure 5.** *Real-time implementation.*


**5. Control strategy development**

Due to uncertainties, the system trajectories may deviate from the sliding surface. This can be overcome by making the sliding surface attractive. To ensure sliding surface attractiveness, Lyapunov's theory is utilized as shown below.

Consider the Lyapunov function *V* given as:

$$V = \frac{1}{2}S^2 \tag{31}$$

To make the sliding surface attractive and to guarantee asymptotic stability, $\dot{V}$ must be negative definite. In order to make $\dot{V}$ negative definite, the following condition must be satisfied:

$$\dot{V} = S\dot{S} < 0 \tag{32}$$

In order to achieve finite-time convergence (global finite-time stability), the above condition is modified as:

$$\dot{V} = S\dot{S} \le -\eta\,V^{1/2} \tag{33}$$

To satisfy the above inequality condition, a reaching law is selected as:

$$\dot{S} = -K\,\mathrm{sign}(S) \tag{34}$$

where *K* is the gain and is always positive. The signum function, $\mathrm{sign}(S)$, may be defined as:

$$\mathrm{sign}(S) = \begin{cases} +1 & S > 0 \\ 0 & S = 0 \\ -1 & S < 0 \end{cases}$$

The control law generated using SMC has two components, defined as [17]:

$$u(t) = u_{eq}(t) + u_h(t) \tag{35}$$

where $u_{eq}(t)$ is the equivalent control, which can be derived from the invariance condition of the sliding surface, i.e., $S = 0$ and $\dot{S} = 0$, and $u_h(t)$ is a hitting control law, also called reaching law based control, which can be obtained by testing the attractiveness condition. This hitting law is basically used to overcome the effect of uncertainties and unpredictable disturbances. Chattering appears in SMC due to the signum function and can be overcome by using the boundary layer method, in which the signum function is replaced by a continuous approximation function such as a saturation or hyperbolic function [18].

To understand the basic steps of control law design using sliding mode, let us consider a second order uncertain nonlinear system [19]:

$$\begin{aligned} \dot{x}_1 &= x_2 \\ \dot{x}_2 &= f(x) + g(x)\,u(t) + d \end{aligned} \tag{36}$$

where $x = [\,x_1 \;\; x_2\,]^T$ is the system state vector, $f(x)$ and $g(x) \neq 0$ are smooth nonlinear functions, the bounded uncertain term $d$ satisfies $|d| \le d_s$ with $d_s > 0$, and $u(t)$ is the scalar control input.