### **2.3 Visual-inertial sensor**

As mentioned in the introduction, the absence of GPS requires a sensor that can guarantee correct positioning inside a closed space. In particular, following the research trend in the field of computer vision, the sensor is composed of a monocular camera and an inertial measurement unit (IMU). The two sensors are connected to each other by a mechanism called a hardware trigger. This choice was made to ensure maximum precision in the acquisition of data from both sensors, which is a crucial point in obtaining a precise positioning of the UAV. This kind of sensor suite is preferable to purely visual-based techniques or any other sensor configuration for a number of advantages:

• Unlike monocular simultaneous localization and mapping (SLAM) based only on a visual sensor, the generated maps have an absolute scale.

• State estimation and feature tracking, which allow one to understand how the UAV is moving in space, are more robust to motion blur and fast rotations than exclusively visual-based systems.

• IMU data can be used to provide instantaneous estimates at over 100 Hz.

• The installation of this hardware is cheaper, smaller, lighter and lower in power consumption with respect to a three-dimensional laser scanner or even stereo configurations.


*Visual-Inertial Indoor Navigation Systems and Algorithms for UAV Inspection Vehicles*

considered the main aspect on which to base the overall system design.

ROVIO, unlike other odometry systems (e.g. mono VINS) that attempt to compensate the time's errors, requires that all timestamps be accurate in order to work properly. Considering this aspect, several manual experiments have been done in order to investigate the incidence of the time synchronization and timestamp acquisition on ROVIO. From our experiments we can see that the temporal accuracy depends on both application and the state estimator, but more generally we can say that the range of time acquisition must be between 2 and 5 milliseconds. Besides this threshold, it is no longer possible to follow rapid movements that cause the divergence and drift of the overall system. On the other side, below two milliseconds, we do not perceive huge improvements from the operational point of view. Most of the camera sensors acquire their timestamp when the image is sent to the computer companion. However, there are many potential sources of delay that can affect the accuracy of the timestamp related to an image like the exposure time of the camera, the internal data processing, internal filter (from IMU point of view), data transfer and also the OS scheduler of the camera. For most camera sensors on the market, these delays are generally including between 5 and 30 milliseconds. While some delays related to the exposure or some other parameters of the camera are constant or can be expected, unknown delays prevent, to the computer companion point of view, from providing accurate timing information to any

For this reason, we decide to use a custom-made sensor directly linked to a microcontroller that receives data from the IMU and use a trigger line to check when the camera captures images. When an image is taken, as consequence, the microcontroller transmits information about the timestamp and IMU to the computer companion that links it to the image coming from the visual sensor. **Figure 3** shows the schematic of the circuit between microcontroller, IMU, camera and computer companion, while **Figure 4** the two visual and inertial sensors.

The overall system, **Figure 5**, is designed as follow: through a web application linked to a web server, the user can select and set the parameters of the mission.

• IMU data can be used to provide instantaneous estimates at over 100 Hz.

• The installation of this hardware is cheaper, smaller, lighter and lower in power consumption with respect to the three-dimensional laser scanner or even

However, this type of approach has two main problems: the first one is related to the IMU camera timestamp synchronization that can cause large errors and drift in the state estimate of the UAV. The second one is that the system has to be able to continuously estimate and compensate the drift and the distortions of the IMU data. These problems are mainly related to the VIO algorithm chosen for this project, ROVIO [10]. ROVIO is a visual-inertial state estimator based on EFK which proposed several novelties. In addition to FAST corner features, whose 3D positions are parameterized with robot-centric bearing vectors and distances, multi-level patches are extracted from image stream around these features. These patch features are tracked and warped based on IMU predicted motion, and the photometric errors are used in the update step as innovation terms. The choice to use ROVIO is made based on the average CPU load of the visual-inertial algorithms proposed by [6]; in fact, the CPU usage—considering the limited CPU resources of the computer companion and the amount of all the operations to be performed during the UAV mission—was

*DOI: http://dx.doi.org/10.5772/intechopen.90315*

stereo configurations.

*Visual-Inertial Indoor Navigation Systems and Algorithms for UAV Inspection Vehicles DOI: http://dx.doi.org/10.5772/intechopen.90315*


However, this type of approach has two main problems. The first is the IMU-camera timestamp synchronization, which can cause large errors and drift in the state estimate of the UAV. The second is that the system has to continuously estimate and compensate for the drift and distortions of the IMU data. These problems are mainly related to the VIO algorithm chosen for this project, ROVIO [10]. ROVIO is a visual-inertial state estimator based on an extended Kalman filter (EKF) that introduces several novelties. In addition to FAST corner features, whose 3D positions are parameterized with robot-centric bearing vectors and distances, multi-level patches are extracted from the image stream around these features. These patch features are tracked and warped based on the IMU-predicted motion, and the photometric errors are used in the update step as innovation terms. The choice of ROVIO was made based on the average CPU load of the visual-inertial algorithms compared in [6]; in fact, the CPU usage, considering the limited CPU resources of the companion computer and the number of operations to be performed during the UAV mission, was considered the main aspect on which to base the overall system design.

ROVIO, unlike other odometry systems (e.g. mono VINS) that attempt to compensate for timing errors, requires accurate timestamps in order to work properly. Considering this aspect, several manual experiments were carried out to investigate the effect of time synchronization and timestamp acquisition on ROVIO. Our experiments show that the required temporal accuracy depends on both the application and the state estimator, but in general the timestamp error must stay between 2 and 5 milliseconds. Beyond this threshold it is no longer possible to follow rapid movements, which causes divergence and drift of the overall system. On the other hand, below 2 milliseconds we did not perceive significant improvements from the operational point of view.
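The 2 to 5 millisecond bound above can be monitored at runtime. The sketch below, with illustrative names and thresholds not taken from the original system, flags camera frames whose timestamp is farther than a tolerance from the nearest IMU sample:

```python
import bisect

def check_sync(camera_ts, imu_ts, max_offset=0.005):
    """Return (camera time, offset) pairs whose nearest-IMU offset
    exceeds max_offset. Both inputs are sorted lists of seconds."""
    bad = []
    for t in camera_ts:
        i = bisect.bisect_left(imu_ts, t)
        # the nearest IMU sample is either imu_ts[i - 1] or imu_ts[i]
        candidates = imu_ts[max(i - 1, 0):i + 1]
        offset = min(abs(t - s) for s in candidates)
        if offset > max_offset:
            bad.append((t, offset))
    return bad
```

In a well-triggered setup the list returned should stay empty; a growing list of flagged frames is an early sign of the divergence described above.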

Most camera sensors acquire their timestamp when the image is sent to the companion computer. However, there are many potential sources of delay that can affect the accuracy of the timestamp associated with an image, such as the exposure time of the camera, internal data processing, internal filtering (on the IMU side), data transfer and the operating system scheduler of the camera. For most camera sensors on the market, these delays generally range between 5 and 30 milliseconds. While some delays, such as those related to the exposure or other camera parameters, are constant or predictable, the unknown delays prevent the companion computer from providing accurate timing information to any visual-inertial estimator.
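For the constant, predictable part of this delay chain, a simple correction is possible: shift the host arrival time back by the measured transfer delay and to the middle of the exposure window. A minimal sketch, with illustrative parameter names and values:

```python
def corrected_timestamp(arrival_ts, exposure_s, transfer_delay_s):
    """Estimate the mid-exposure capture time from the host arrival time.

    arrival_ts       -- time the frame reached the companion computer (s)
    exposure_s       -- configured exposure time of the frame (s)
    transfer_delay_s -- measured constant readout and transfer delay (s)
    """
    # capture ended roughly transfer_delay_s before arrival; the
    # photometric "instant" of a frame is the middle of its exposure
    return arrival_ts - transfer_delay_s - exposure_s / 2.0
```

Only the unknown, variable delays remain after such a correction, which is why the hardware trigger described next is still needed.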

For this reason, we decided to use a custom-made sensor directly linked to a microcontroller that receives data from the IMU and uses a trigger line to detect when the camera captures an image. When an image is taken, the microcontroller transmits the timestamp and the IMU data to the companion computer, which associates them with the image coming from the visual sensor. **Figure 3** shows the schematic of the circuit connecting microcontroller, IMU, camera and companion computer, while **Figure 4** shows the two visual and inertial sensors.
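The association step on the companion computer can be sketched as follows. The queue names and message shapes here are illustrative, not taken from the original implementation; the idea is that trigger messages carry the authoritative timestamp and are matched to camera frames by a shared hardware sequence number:

```python
from collections import deque

class TriggerMatcher:
    def __init__(self):
        self.triggers = deque()  # (seq, timestamp) from the microcontroller
        self.images = deque()    # (seq, frame) from the camera driver
        self.matched = []        # (timestamp, frame) handed to the estimator

    def add_trigger(self, seq, timestamp):
        self.triggers.append((seq, timestamp))
        self._drain()

    def add_image(self, seq, frame):
        self.images.append((seq, frame))
        self._drain()

    def _drain(self):
        # pair messages with equal sequence numbers, dropping whichever
        # side has fallen behind (lost frame or lost trigger message)
        while self.triggers and self.images:
            t_seq, ts = self.triggers[0]
            i_seq, frame = self.images[0]
            if t_seq == i_seq:
                self.matched.append((ts, frame))
                self.triggers.popleft()
                self.images.popleft()
            elif t_seq < i_seq:
                self.triggers.popleft()   # trigger without image
            else:
                self.images.popleft()     # image without trigger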

### **2.4 Scheduling system**

The overall system, shown in **Figure 5**, is designed as follows: through a web application linked to a web server, the user can select and set the parameters of the mission.

*Industrial Robotics - New Paradigms*

**Figure 1.**
*Assembly UAV payload, first perspective.*

**Figure 2.**
*Assembly UAV payload, second perspective.*


**Figure 3.**
*How the companion computer, microcontroller and sensors are linked.*

**Figure 4.**
*Camera and IMU sensors.*

These parameters are loaded through the scheduling system, in which patterns related to the overall status check of the UAV system (e.g. battery status, sensor status, LED status) are implemented.

At the lower level there are the ROS nodes responsible for navigation, VIO and flight controller management, which execute the commands translated by the scheduling system.


**Figure 5.** *Logic behind the system.*



| Parameter | Type of mission |
|---|---|
| Tunnel diameter (m) | 1/2 |
| Distance to travel (m) | 2 |
| Position altitude (m) | 1/2 |
| Data record: camera and Lidar (on/off) | 1/2 |
| Cruise speed (m/s) | 1/2 |
| K positioning (K) | 1/2 |
| Come back (on/off) | 1/2 |
| Maximum distance (m) | 1/2 |
| Minimum distance (m) | 1/2 |

**Table 1.**
*Setting parameters for each type of mission.*

The scheduling system is based on SMACH (http://wiki.ros.org/smach), a task-level architecture based on ROS for rapidly creating complex robot behaviour. In this application there are two possible behaviours, depending on the type of mission that the user selects through the web GUI:

• Mission 1: Complete exploration of the tunnel

• Mission 2: Partial exploration of the tunnel (fixed distance chosen by the user)


For both missions it is possible to specify whether the UAV must return to the home position or land at the end of the tunnel once the exploration is completed. Moreover, there are some specifications that the user can select through the GUI. These parameters are related to the geometry of the tunnel and to some working conditions, and obviously depend on the type of mission selected (**Table 1**).
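The two behaviours and the return-home option can be sketched in plain Python. The real system builds these states with SMACH; the function name, action labels and parameters below are illustrative only:

```python
def plan_mission(mission_type, tunnel_length=None, distance=None,
                 come_back=False):
    """Return the ordered list of high-level actions for a mission."""
    if mission_type == 1:                 # complete exploration
        travel = tunnel_length
    elif mission_type == 2:               # partial, user-fixed distance
        travel = distance
    else:
        raise ValueError("unknown mission type")

    actions = ["status_check", "takeoff", f"explore:{travel}m"]
    # either fly back to the starting point or land at the end of the tunnel
    actions += ["return_home", "land_home"] if come_back else ["land_here"]
    return actions
```

A state machine adds to this sketch the failure transitions (e.g. low battery or sensor fault during any state), which is the main reason SMACH is used instead of a fixed action list.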

**Figure 6.** *K parameter logic.*

The K parameter indicates the position that the UAV must maintain during the tunnel inspection and is defined as the ratio between the distances of the UAV from the left and right walls of the tunnel (**Figure 6**).

Under the hypothesis that K is equal to 1, the drone carries out the mission remaining in a central position with respect to the left and right walls. In the same way, with K = 2, the distance held from the left wall is double the distance from the right wall.
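Writing K = d_left / d_right and d_left + d_right = w for the local tunnel width, the two wall-distance setpoints follow directly. A minimal sketch, with an illustrative function name:

```python
def wall_setpoints(k, width):
    """Return (d_left, d_right) the UAV should hold for a given K,
    where k = d_left / d_right and width = d_left + d_right."""
    if k <= 0 or width <= 0:
        raise ValueError("K and width must be positive")
    d_right = width / (1.0 + k)
    d_left = width - d_right      # equals k * width / (1 + k)
    return d_left, d_right
```

For K = 1 this gives the centred position, and for K = 2 a left distance twice the right one, matching the examples above.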

The same ratio is maintained even during return-to-home navigation, when the reference frame of the drone is rotated 180° in the *xy* plane.

This positioning system was implemented to allow a 3D reconstruction of the inspected tunnel using a single camera.

### **3. Flight system**

Navigation and obstacle avoidance are among the fundamental problems in mobile robotics and have been studied and analyzed by researchers for the past 40 years. The goal of navigation is to find an optimal path from a starting point to a goal point while avoiding obstacles. In order to guarantee autonomous navigation, the robot must maintain a certain reliability in terms of position (from IMU, GPS or other sensors) and build a map precise enough to generate a collision-free path that is faithful to the real environment.

When the robot is in a completely unknown area and has no information about its surroundings, global motion planning fails and does not produce any solution [11]. For this kind of situation, local motion planning is more suitable.

The objective of obstacle avoidance is to move the robot towards a collision-free area using the information provided by the sensors during motion execution, which is continuously updated [12].


**Figure 7.**
*Monitoring of the three distances.*


In this chapter the autonomous flight system is defined. In particular, all the aspects concerning navigation and the related logic are explained.

### **3.1 Navigation algorithm**

Given the application context, a blind tunnel of semi-circular or circular cross-section with a diameter ranging from 2 to 5 metres, it was necessary to develop a specific navigation algorithm that allows the UAV to explore the surrounding environment while avoiding obstacles that may arise during the investigation of the tunnel. The environment considered for the definition of the algorithm is a tunnel with an entrance and an exit and no bifurcations of the channel.

Within a dark and unknown environment, the use of a Lidar is crucial to carry out navigation appropriately and to implement the obstacle avoidance algorithm.

Light detection and ranging (Lidar) is a remote sensing technique that determines the distance of an object or a surface using a laser pulse. The distance of the object is obtained by measuring the time elapsed between the pulse emission and the reception of the backscattered signal. Similarly, a height sensor is necessary to define the height from the ground: it allows the UAV to stabilize and navigate at a predefined altitude with the possibility, thanks to the autopilot, of enabling terrain following, the technology that automatically maintains a constant relative distance with respect to the ground.
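The time-of-flight relation behind the Lidar measurement is simple: the pulse travels to the surface and back, so the range is half the round-trip time multiplied by the speed of light.

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_s):
    """Distance (m) from the elapsed time between emission and reception."""
    return C * round_trip_s / 2.0
```

A 20 ns round trip, for example, corresponds to a range of roughly 3 metres, which is the scale of the tunnels considered here.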

The main task of the Lidar sensor is to monitor three distances during navigation: one in front of the drone along the direction of travel and two lateral ones, inclined 20° with respect to the perpendicular of the drone (**Figure 7**).
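Assuming the 20° inclination is taken from the direction perpendicular to the flight axis, the perpendicular wall distances (and thus the K value actually flown) can be recovered from the two lateral beam ranges by a cosine projection. This sketch is illustrative and not taken from the original implementation:

```python
import math

TILT = math.radians(20.0)  # lateral beam tilt from the perpendicular

def wall_distances(r_left, r_right):
    """Project the two inclined beam ranges onto the perpendicular."""
    return r_left * math.cos(TILT), r_right * math.cos(TILT)

def measured_k(r_left, r_right):
    """K actually flown: ratio of left to right perpendicular distance."""
    d_left, d_right = wall_distances(r_left, r_right)
    return d_left / d_right
```

Comparing `measured_k` with the K requested by the user gives the lateral position error that the navigation controller has to correct.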




