**7. Vision-integrated control**

**Figure 14** presents an overview of the vision-integrated control scheme. The camera captures the image and, based on the vision algorithm, the position and orientation offset between the quadrotor and the marker is obtained. This offset is fed to the sliding mode controller, which reduces the error and lands the quadrotor at the center of the marker.

**Figure 14.** *Complete block diagram.*
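To make the loop in **Figure 14** concrete, the sketch below implements one generic iteration of the offset-to-command step in Python. It is a minimal illustration, not the chapter's actual control law: the gains are placeholders, and a standard first-order sliding surface s = ė + λe with a boundary-layer (saturated) switching term is assumed.

```python
import numpy as np

# Illustrative gains -- placeholders, not the chapter's tuned values
LAMBDA = 1.5   # sliding-surface slope
K = 0.8        # switching gain
PHI = 0.1      # boundary-layer width, softens sign() to reduce chattering

def smc_correction(e, e_dot):
    """First-order sliding mode correction for a single axis.

    The surface s = e_dot + LAMBDA * e is driven toward zero; once on
    the surface, the offset e decays to zero as well.
    """
    s = e_dot + LAMBDA * e
    return -K * np.clip(s / PHI, -1.0, 1.0)  # saturated switching term

def control_cycle(offset, prev_offset, dt):
    """One Figure-14 iteration: marker offset in, per-axis command out."""
    e = np.asarray(offset, dtype=float)
    e_dot = (e - np.asarray(prev_offset, dtype=float)) / dt
    return np.array([smc_correction(e[i], e_dot[i]) for i in range(e.size)])

# Example: a 10 cm x-offset shrinking between frames sampled at 18 Hz
print(control_cycle([0.10, 0.02], [0.12, 0.03], dt=1.0 / 18.0))
```

The saturation in place of a pure sign function is a common chattering-reduction choice; the chapter's own controller derivation earlier in the text should be consulted for the exact law and gains.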

**8. Hardware results**

This section presents the results obtained from the real-time implementation of the vision-integrated sliding mode control for the autonomous landing of the quadrotor in indoor and outdoor environments.

### **8.1 Hardware description**

As mentioned in earlier parts of this chapter, the quadrotor used for the implementation of this work is the *DJI Matrice M100*, shown in **Figure 15**. The DJI Matrice M100 is a fully customizable and programmable flight platform that lets its users perform operations such as pipeline health monitoring, surveillance, and search and rescue, as well as applications requiring an external sensor interface. A series of add-ons accompanies the M100 and makes its handling user-friendly. Like any other development drone on the market, the Matrice M100 comes with a pre-programmed flight controller.

**Figure 15.** *DJI Matrice M100.*

To aid the implementation of user-defined controllers and task maneuvers, a separate on-board computer, the *DJI Manifold*, is provided (**Figure 16**). The Manifold is an embedded Linux computer built around the NVIDIA Tegra K1 SoC (CPU + GPU + ISP in a single chip) with both standard and extended connections and interfaces. The GPU (Graphics Processing Unit) allows CUDA to be used for computationally heavy image processing operations. The Linux environment supports ROS (Robot Operating System), which is the key element for any development on the Matrice M100 and is described in detail in the upcoming subsection.

**Figure 16.** *DJI manifold.*
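Since the Tegra K1's GPU is what makes CUDA available on the Manifold, a quick way to confirm that an image-processing pipeline can actually use it is sketched below. This assumes an OpenCV build with the CUDA modules enabled (the stock pip wheels are CPU-only); `frame.png` is a placeholder input, not a file from this work.

```python
import cv2

# Returns 0 if OpenCV was built without CUDA or no GPU is visible
if cv2.cuda.getCudaEnabledDeviceCount() > 0:
    frame = cv2.imread('frame.png')        # placeholder test image
    if frame is not None:
        gpu_frame = cv2.cuda_GpuMat()
        gpu_frame.upload(frame)            # host -> device copy
        # Color conversion runs on the GPU instead of the ARM cores
        gpu_gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)
        gray = gpu_gray.download()         # device -> host copy
        print('GPU path OK, frame shape:', gray.shape)
else:
    print('No CUDA-enabled device visible to OpenCV')
```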

To gather visual data, the DJI Matrice M100 is provided with a fully controllable Zenmuse X3 gimbal, which can easily be interfaced with the DJI Manifold for image processing. In this work, however, a separate downward-facing camera is used to perform the vision-based landing task, keeping the gimbal free for other tasks such as image and video capture. The downward-facing camera chosen is the *LogiTech C310* (**Figure 17**), which is interfaced with the Manifold over USB.

**Figure 17.** *LogiTech C310.*

The landing pad is a wooden platform of dimension 4 feet × 4 feet. At its center, an ArUco marker of dimension 12.5 cm × 12.5 cm is placed. The marker chosen is a 4 × 4 matrix with marker ID 7. The marker dimension is chosen such that it is clearly detected from altitudes as high as 10 m and as low as 0.4 m. The landing pad setup, shown in **Figure 18**, is mounted on the roof of a car for the experiments.

**Figure 18.** *Landing pad setup.*
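The marker specification above maps directly onto OpenCV's ArUco module. Below is a minimal detection-and-pose sketch, assuming the classic `cv2.aruco` API from `opencv-contrib-python` (pre-4.7); the camera matrix shown is a placeholder and must be replaced with the C310's actual calibration.

```python
import cv2
import numpy as np

MARKER_LENGTH = 0.125  # marker side in metres (12.5 cm, as on the landing pad)
MARKER_ID = 7          # marker ID used on the landing pad

# Placeholder intrinsics -- replace with the calibrated C310 parameters
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)  # downward-facing USB camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary,
                                              parameters=params)
    if ids is not None and MARKER_ID in ids.flatten():
        idx = list(ids.flatten()).index(MARKER_ID)
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            [corners[idx]], MARKER_LENGTH, camera_matrix, dist_coeffs)
        # tvecs[0][0] is the marker position in the camera frame (metres):
        # x/y give the lateral offset, z the height above the pad.
        print('offset (m):', tvecs[0][0])
cap.release()
```

Because pose is recovered metrically from the known 12.5 cm side length, the same code yields usable offsets at both 10 m and 0.4 m altitude, which is the design rationale stated above.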

### **8.2 Software description**


This section briefly describes the software abstraction layer, its control paradigm, and the associated hardware flow of the Matrice M100 quadrotor. As discussed in the hardware setup, the DJI M100 uses the DJI Manifold as its on-board computer to control and communicate with the flight controller and the on-board sensors interfaced with it. The *DJI Onboard SDK (OSDK)* is an open-source software library that enables the OBC (on-board computer) to handle the input-output data coming from the on-board control unit and sensor units. To establish a reliable network among the on-board sensor units and the OBC, several serial communication protocols such as *UART1, UART2, CAN1, CAN2, USB* and *VBUS1 to VBUS5* are used. In this chapter, the main focus is on estimating the pose of the quadrotor using an on-board monocular camera connected to one of the USB ports. Other sensors, such as the *DJI Guidance*, which is connected to the *VBUS*, can be used for fusion at different frame rates if necessary. The multi-layer hardware communication block diagram is shown in **Figure 19**.

**Figure 19.** *OSDK architecture and software framework.*

The multi-layer hardware connection is described in **Figure 19**. The on-board SDK includes:

• A C++ library to access the ARM processor-based Linux OS.

• Robot Operating System (*ROS*): the interface and associated packages used to handle multiple sensor nodes (a minimal node sketch follows this list).

• *DJI Assistant 2*: a real-time flight simulator used to verify the developed algorithms.

• *DJI OSDK API*: used to asynchronously send control commands to the flight control unit and receive the acknowledgments from it.
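To illustrate how ROS glues the camera and the controller together, here is a minimal sketch of a `rospy` node that subscribes to the camera image and publishes the detected offset for the controller. The topic names (`/camera/image_raw`, `/landing/offset`) and the `detect_marker_offset` helper are assumptions for illustration, not names fixed by the chapter or by the OSDK.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from geometry_msgs.msg import Vector3
from cv_bridge import CvBridge

bridge = CvBridge()
offset_pub = None

def detect_marker_offset(frame):
    # Placeholder: plug in the ArUco pipeline sketched in Section 8.1;
    # return the (x, y, z) offset to the marker, or None if not visible.
    return None

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    offset = detect_marker_offset(frame)
    if offset is not None:
        offset_pub.publish(Vector3(*offset))

if __name__ == '__main__':
    rospy.init_node('vision_landing_offset')
    offset_pub = rospy.Publisher('/landing/offset', Vector3, queue_size=1)
    rospy.Subscriber('/camera/image_raw', Image, image_callback)
    rospy.spin()  # hand control to ROS; the callback fires once per frame
```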

The software components of the OSDK consist of APIs provided by the DJI SDK library. The OSDK supports two programming paradigms, asynchronous and synchronous, for exchanging information with the OSDK workflow. The asynchronous mechanism executes code upon receipt of an acknowledgment, independently of the main flow of execution. The components also include:

• Serial device drivers: communicate between the flight controller and the OBC via *UART*. The serial device drivers also take care of input-output handling, memory management (locking and unlocking) and interrupts.

• Thread communication: allows inter-thread communication to handle different levels of signals.

• Application layer API calls: the core of the on-board API. Flight control commands are sent from the processor to the control unit, and acknowledgments are received independently of the program flow through callback functions. Synchronous API calls, in contrast, block and return only when the CMD-ACK round trip is complete, giving assurance that the command has been executed.


This process flow is depicted in **Figure 20**.

**Figure 20.** *OSDK software flow.*
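The contrast between the two calling styles can be sketched independently of the real OSDK signatures, which are not reproduced here; `FlightLink` and its methods are hypothetical stand-ins for the serial CMD-ACK transport described above.

```python
import threading
import queue
import time

class FlightLink:
    """Hypothetical stand-in for the OSDK's CMD-ACK transport."""
    def __init__(self):
        self._acks = queue.Queue()

    def _send(self, cmd):
        # In reality the command travels over UART to the flight
        # controller; here we fake an immediate acknowledgment.
        self._acks.put(('ACK', cmd))

    def send_sync(self, cmd, timeout=1.0):
        """Synchronous style: block until the CMD-ACK round trip is done,
        which gives assurance that the command was executed."""
        self._send(cmd)
        return self._acks.get(timeout=timeout)

    def send_async(self, cmd, callback):
        """Asynchronous style: return immediately; the callback fires when
        the ACK arrives, independent of the main flow of execution."""
        self._send(cmd)
        threading.Thread(target=lambda: callback(self._acks.get()),
                         daemon=True).start()

link = FlightLink()
print(link.send_sync('takeoff'))                        # blocks until ACK
link.send_async('record_log', lambda ack: print(ack))   # main flow continues
time.sleep(0.1)  # give the daemon thread time to print before exiting
```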

### **8.3 Test environment description**

Two test environments were used to validate the developed control algorithm. To assess the quadrotor's ability to perform vision-based landing in the indoor environment, an empty plot of dimension 12 feet × 21 feet enclosed by nets was used. The plot was surrounded by obstacles on all four sides, making it essential for the drone not to stray too far from the landing pad. The test environment is shown in **Figure 21**. The first set of experiments was conducted using this setup, which also gave an opportunity to validate the shadow elimination incorporated in the drone. Note that the indoor experiment had the landing pad setup placed on the ground.

**Figure 21.** *Indoor test environment.*

The second setup had the landing pad placed on the roof of a car, as shown in **Figure 22**. The car is assumed to be stationary while the quadrotor performs the vision-based landing task.

**Figure 22.** *Outdoor test environment.*


### **8.4 Results**

#### *8.4.1 Indoor environment*

The first environment was the indoor one, with the marker placed on the ground in an enclosed space of 12 feet × 12 feet. The drone was made to lift off autonomously, after which the vision-based landing node was initiated. The node simultaneously recorded the position, velocity, acceleration and offset as the drone performed the corrections needed to align with the marker.

These results were plotted and are shown in **Figures 23–26**. Note that the data are logged at 18 samples per second; from the time of initiation to completion, the vision-based landing took 30 s in this case. Over 10 trials, an average error of 3.2 cm was observed, with a maximum error of 6 cm from the marker center.

**Figure 23.** *Acceleration profile (Indoor Testing).*

**Figure 24.** *Velocity profile (Indoor Testing).*

**Figure 25.** *Position profile (Indoor Testing).*

**Figure 26.** *Camera parameters (Indoor Testing).*
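The error statistics quoted above reduce to a few lines over the logged touchdown offsets. The sketch below uses made-up placeholder values, not the actual experimental logs, purely to show the computation.

```python
import numpy as np

# Final (x, y) touchdown offsets from the marker center, one per trial,
# in centimetres -- illustrative placeholders, not the real logs.
touchdown_offsets = np.array([
    [2.0, 1.5], [1.0, 3.0], [2.5, 2.0], [0.5, 1.0], [3.0, 4.0],
    [1.5, 2.5], [2.0, 0.5], [4.0, 3.5], [1.0, 1.5], [2.5, 3.0],
])

# Radial distance from the marker center for each landing
radial_error = np.linalg.norm(touchdown_offsets, axis=1)

print('average error: %.1f cm' % radial_error.mean())
print('maximum error: %.1f cm' % radial_error.max())

# At 18 samples per second, a 30 s landing yields 30 * 18 = 540 samples
print('samples per landing:', 30 * 18)
```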

#### *8.4.2 Outdoor environment*

The second test environment was outdoors, with the landing pad mounted on the roof of a car: a 4 feet × 4 feet wooden board was fixed to the roof with the ArUco marker affixed at its center. The test was conducted in an open ground with winds of about 10 km/h blowing from the northwest, which helped assess the robustness of the designed controller. A slight swaying of the drone was observed; however, the designed controller managed to land the quadrotor on the marker with an average error of 4 cm and a maximum error of 7 cm over 20 trials. Once again, the acceleration, velocity, position and offset values were recorded and are shown in **Figures 27–30**. The time for completion of the task from the point of initialization was found to be 43 s.

**Figure 27.** *Acceleration profile (Outdoor Testing).*

**Figure 28.** *Velocity profile (Outdoor Testing).*

**Figure 29.** *Position profile (Outdoor Testing).*

**Figure 30.** *Camera parameters (Outdoor Testing).*

**Acknowledgements**

The authors would like to thank Subhash Chand Yogi and Abhay Pratap Singh from the Indian Institute of Technology Kanpur, who helped in running the simulations and hardware experiments.

**Author details**

Archit Krishna Kamath, Vibhu Kumar Tripathi and Laxmidhar Behera\*
Department of Electrical Engineering, Indian Institute of Technology, Kanpur, India

\*Address all correspondence to: lbehera@iitk.ac.in

© 2019 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
