**1. Introduction**

Indoor navigation is nowadays one of the most popular topics in scientific research: it is a very interesting challenge from the scientific point of view, and it has a potentially huge impact on the current market, with applications ranging from the industrial sector to relief operations in emergency situations.

As is well known, in outdoor navigation UAVs use the Global Positioning System (GPS) signal to determine their position in space easily and with good accuracy, whereas in indoor navigation this technology is no longer usable for positioning and localization [1–3]. In fact, in indoor environments the Global Navigation Satellite System (GNSS) signal is almost completely blocked, or made too weak, by buildings, walls and several other potential sources of interference.

The first approaches to indoor UAV navigation used technologies such as Wi-Fi, Bluetooth or Ultra-Wide Band [4]. The great limitation of these technologies is that the environment must be structured, or prepared in advance, before UAVs can navigate inside it.

In recent years, the miniaturization of both visual sensors and computers, which now combine good computing power with low weight, has made possible a new approach to UAV indoor navigation.

This approach is based on inertial and visual systems (see, e.g., [5–7]) and has the enormous advantage of being free from any need to structure the environment, which makes it potentially flexible and universal. The position of the UAV is estimated using inertial measurements provided by gyroscopes and accelerometers, which are now available in every smartphone with small dimensions and weight. The accuracy of these inertial measurements is good but not sufficient to guarantee precise indoor positioning. In fact, a position estimate based only on inertial systems tends to diverge and drift over time, because inertial measurement unit (IMU) measurements are corrupted by noise and bias, resulting in pose estimates that are unreliable for long-term navigation. To counter this phenomenon, the inertial system is combined with a visual one that uses a camera to collect information, extract features from the surrounding environment and track them over time in order to estimate the trajectory of the camera. This approach is usually referred to as visual-inertial odometry (VIO). Information from the camera can also be used to build a map of the environment and thus perform what is called simultaneous localization and mapping (SLAM).
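The drift of a purely inertial estimate can be illustrated with a minimal one-dimensional simulation (the bias and noise values below are purely illustrative, not taken from the sensors used in this chapter): double-integrating an accelerometer signal corrupted by a constant bias produces a position error that grows roughly quadratically with time, even for a stationary vehicle.

```python
import random

def dead_reckon(accels, dt):
    """Integrate acceleration twice to obtain position (1-D dead reckoning)."""
    v, p = 0.0, 0.0
    positions = []
    for a in accels:
        v += a * dt
        p += v * dt
        positions.append(p)
    return positions

# Stationary UAV: true acceleration is zero, but the IMU reports
# a small constant bias plus noise (illustrative values).
random.seed(0)
dt, bias = 0.01, 0.05                  # 100 Hz IMU, 0.05 m/s^2 bias
n = 1000                               # 10 s of data
measured = [bias + random.gauss(0.0, 0.02) for _ in range(n)]

drift = dead_reckon(measured, dt)
# After 10 s the position error is roughly 0.5 * bias * t^2 = 2.5 m,
# even though the vehicle never moved: this is why inertial-only
# estimates are unreliable for long-term navigation.
print(round(drift[-1], 2))
```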

In this chapter we propose a system architecture that allows a UAV to autonomously inspect a tunnel, which is a closed environment. To estimate the position of the UAV in the absence of GPS, we used Robust Visual Inertial Odometry (ROVIO), a visual-inertial state estimator based on an extended Kalman filter (EKF) that combines the visual information of a monocular camera with the measurements coming from the IMU inertial platform. At the same time, a navigation and obstacle-avoidance algorithm based purely on a Lidar sensor is proposed.
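ROVIO's actual filter tracks image-plane features inside the EKF state; the following is only a minimal one-dimensional sketch of the generic predict (IMU) / update (camera) cycle that any such filter performs, with made-up numbers, not the real ROVIO formulation.

```python
def predict(x, v, P, a, dt, q):
    """Propagate position/velocity with an IMU acceleration measurement."""
    x = x + v * dt + 0.5 * a * dt * dt
    v = v + a * dt
    P = P + q                       # process noise inflates uncertainty
    return x, v, P

def update(x, P, z, r):
    """Correct the position with a camera-derived position measurement."""
    K = P / (P + r)                 # Kalman gain
    x = x + K * (z - x)
    P = (1.0 - K) * P
    return x, P

# One predict/update cycle with illustrative numbers.
x, v, P = 0.0, 1.0, 1.0             # position, velocity, position variance
x, v, P = predict(x, v, P, a=0.0, dt=0.1, q=0.01)
x, P = update(x, P, z=0.2, r=0.05)  # camera says position is about 0.2 m
```

Note how the update step shrinks the variance `P`: fusing the camera measurement is exactly what bounds the inertial drift discussed above.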

The UAV is equipped with a companion computer on which the Robot Operating System (ROS) is installed; it allows the processing of the information coming from the monocular camera and the IMU, as well as the Lidar data used for navigation.
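In ROS terms, this processing is organized as nodes exchanging messages over named topics. The sketch below is a plain-Python stand-in for that publish/subscribe pattern (the topic names are hypothetical, since the real ones depend on the camera, IMU and Lidar drivers used):

```python
class TopicBus:
    """Minimal stand-in for ROS topic-based message passing."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers.get(topic, []):
            cb(msg)

bus = TopicBus()
received = []

# Hypothetical topic names for the three sensor streams.
bus.subscribe("/camera/image_raw", lambda m: received.append(("camera", m)))
bus.subscribe("/imu/data", lambda m: received.append(("imu", m)))
bus.subscribe("/scan", lambda m: received.append(("lidar", m)))

bus.publish("/imu/data", {"accel": [0.0, 0.0, 9.81]})
bus.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})
```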

Furthermore, a scheduling system has been implemented and embedded on the companion computer; it allows setting different strategies for approaching the inspection of the tunnel before starting the mission. The scheduling system also defines safety patterns that are activated in case of situations dangerous for the UAV or for humans.
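A mission configuration of this kind might look as follows. The parameter names and thresholds are purely illustrative assumptions, since the text does not specify the scheduler's actual interface; the point is the separation between inspection settings, navigation settings and safety patterns.

```python
# Hypothetical mission configuration for the scheduling system.
mission = {
    "inspection": {
        "strategy": "full_scan",       # how the tunnel is covered
        "frame_rate_hz": 5,            # image acquisition rate
    },
    "navigation": {
        "tunnel_width_m": 4.0,         # geometric characteristics of the tunnel
        "tunnel_height_m": 3.0,
        "cruise_speed_mps": 0.5,
    },
    "safety": {
        "min_obstacle_dist_m": 1.0,    # violating this triggers a safety pattern
        "low_battery_pct": 20,         # triggers a return/land behaviour
    },
}

def safety_triggered(obstacle_dist_m, battery_pct, cfg=mission["safety"]):
    """Return True when any safety pattern should be activated."""
    return (obstacle_dist_m < cfg["min_obstacle_dist_m"]
            or battery_pct < cfg["low_battery_pct"])
```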

The chapter is organized as follows. In Section 2, we briefly analyze and describe the sensors used and their characteristics, and we detail how the architecture of the system is defined. In Section 3, we explain which criteria characterize the navigation system and the logic behind it. In conclusion, we present the results achieved, outlining the performance of the proposed system for indoor navigation and evaluating possible improvements for future research.

### **2. System architecture**

This chapter describes the overall system architecture from different points of view. We start with a short description of ROS, the framework that manages the UAV's different operations. Then we analyze the hardware and payload of the UAV, describing all the crucial characteristics and explaining why those characteristics and components are crucial for the project. Afterwards we explain why, among the several VIO algorithms implemented in the past years, we selected and use ROVIO, and how we designed the visual-inertial sensor. Moreover, we propose a scheduling system based on some setting parameters that must be well set up at the beginning of the mission in order to define the positioning of the UAV inside the tunnel. These parameters define both inspection and navigation settings: the navigation settings are useful to change based on the geometrical characteristics of the tunnel, while the inspection parameters define the type of inspection to be performed in order to collect data or achieve the inspection's objectives.

**2.1 Robot operating system**

The heart of the whole system is the Robot Operating System (ROS), an open-source framework to manage a robot's operations, tasks and motions. Among the several features that ROS offers, the most relevant is the availability of code, packages and open-source projects. This is a key element in the development of complex systems, which often encompass different skills and concepts [8, 9].

A set of processes can be represented in a graph as nodes that receive, send and process messages, called topics, coming from other sensors, actuators and nodes.

In this system the two main topics for the construction of the algorithm are those of the Lidar and of the odometry, which give the system the information about the obstacles around the drone (coming from the Lidar) and the pose output by ROVIO, which defines the position and orientation of the UAV along the six DOF.

The information carried by these two messages is fed to the navigation algorithm, which returns the topic of the velocity to be assigned to the drone during the inspection.

**2.2 UAV's payload**

Referring to **Figures 1** and **2**, the main components of the UAV are:

1. Custom frame with a propulsion system designed for a total payload of about 4 kg
2. LED lighting system for navigation and acquisition of frames even in complete darkness and absence of light
3. Cameras for the acquisition of frames that allow the construction of a three-dimensional model of the inspected tunnel and environment
4. Visual-inertial sensor used for positioning, control and as the main source of odometry
5. Laser sensors for detecting distance from the ground
6. Voltage and current distribution system, mainly 12 and 5 V
7. LiDAR 2D laser scanner for detecting obstacles and relative distances

As a payload there is also a mini companion computer that has the necessary power to record, process and analyze data from all the sensors and to move the UAV accordingly.

*Visual-Inertial Indoor Navigation Systems and Algorithms for UAV Inspection Vehicles*
*DOI: http://dx.doi.org/10.5772/intechopen.90315*
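The navigation interface described in this chapter (Lidar scan and ROVIO pose in, velocity command out) can be sketched as a single function. This is a simplified illustrative stand-in with made-up thresholds, not the actual navigation algorithm, which is described in Section 3.

```python
def navigation_step(scan_ranges, pose, cruise_speed=0.5, min_dist=1.0):
    """Map a Lidar scan and the ROVIO pose to a velocity command.

    scan_ranges: obstacle distances (m) around the drone, from the Lidar topic
    pose: (x, y, z) position estimate from the ROVIO odometry topic
    Returns a (vx, vy, vz) velocity command in m/s.
    """
    closest = min(scan_ranges)
    if closest < min_dist:
        return (0.0, 0.0, 0.0)       # safety stop: obstacle too close
    # Slow down smoothly as obstacles get nearer.
    scale = min(1.0, (closest - min_dist) / min_dist)
    return (cruise_speed * scale, 0.0, 0.0)

# Example: nearest obstacle at 1.8 m scales the cruise speed down.
cmd = navigation_step([2.5, 3.0, 1.8], pose=(10.0, 0.0, 1.5))
```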
