Recent Advances in Robotic Systems

**1. Introduction**

Equipping unmanned aerial vehicles (UAVs) such as quadrocopters with more and more autonomous abilities is an interesting field of research. Furthermore, it is a requirement for challenging autonomous search and rescue missions, which are still a field of interest [1–14]. Fully autonomous systems are especially challenging, since they cannot rely on external systems such as the Global Positioning System (GPS) or optical tracking for accurate positioning. State of the art is the use of a laser scanner for obstacle detection and collision avoidance and, via a simultaneous localization and mapping (SLAM) algorithm, for positioning [15, 16]. However, laser scanners are heavy and expensive, and they fail in some situations, such as a smoke-filled environment. Other approaches are vision based, but their high computational burden often requires an external computer for computation [17–19].

Therefore, a solution for a fully autonomous system is presented using a new hardware design that combines optical and PMD cameras with infrared and ultrasonic distance sensors into a reliable system capable of search and rescue missions. This chapter focuses on the concept, implementation and evaluation of the search, counting and localization of red balls (example search targets) with the mentioned autonomous system based on this innovative new hardware design.

In a preliminary calibration scan, the parameters of the object are defined: a red ball is used as an example object. The scan determines the colour and radius of the ball. The implementation and principles of the object recognition and search will be described in detail. After determining the scanning parameters, the autonomous search can be executed. It is performed autonomously by the quadrocopter, which uses inertial, infrared, ultrasonic, pressure and optical flow sensors to determine and control its orientation and position in six DOF (degrees of freedom).

This research is part of the AQopterI8 project of the Chair of Aerospace Information Technology at the University of Würzburg [20].

**2. Terms and background**

This chapter defines the terms, parameters and algorithms that are used later. The main idea of the presented search approach is that the quadrocopter uses a camera directed at the ground; by flying through the search area, it scans all possible locations on the floor for a target (a red ball). If a target is detected, it is added to the list of detected targets unless a target has already been detected at this position. Thus, the whole area can be searched for targets, and the number of targets as well as their positions can be determined.

The most significant parameters for the performance of the search are the virtual field of view (VFOV), the masking area (MA) and the search strategy; these are investigated and therefore need to be defined.

Here, the field of view (FOV) is defined as the area on the floor that the camera covers (**Figure 1**). Using computer vision object detection, a target can be found in this single picture of the floor. The field of view is fixed by the camera (hardware), whereas the virtual field of view is the area that the search strategy uses in order to cover the whole search area at least once. A smaller value than the true FOV may be used for the VFOV to allow a margin for inaccuracy: detection might fail if the quadrocopter does not fly exactly as expected by the search strategy, or if a target is located between two pathways and cannot be seen completely. A smaller VFOV leads to higher coverage but also a longer search pathway (compare **Figures 2** and **3**).

**Figure 1.** Field of view (FOV).
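The trade-off between a smaller VFOV and a longer pathway can be sketched numerically. This is an illustrative sketch only: the function names, the centimetre units and the assumption of a rectangular search area swept in parallel lanes are not taken from the chapter.

```python
# Illustrative sketch (assumed names and units): how the VFOV width
# determines the number of flight lanes and hence the pathway length
# over a rectangular search area.
import math

def lane_count(area_width_cm: float, vfov_width_cm: float) -> int:
    """Number of parallel lanes needed so the lanes tile the area width."""
    return math.ceil(area_width_cm / vfov_width_cm)

def pathway_length(area_width_cm: float, area_height_cm: float,
                   vfov_width_cm: float) -> float:
    """Approximate sweep pathway: one full-height pass per lane plus the
    lateral hops (one VFOV width each) between neighbouring lanes."""
    n = lane_count(area_width_cm, vfov_width_cm)
    return n * area_height_cm + (n - 1) * vfov_width_cm
```

For a 240 cm wide area, a VFOV width of 40 cm needs six lanes where 60 cm needs only four, so the smaller VFOV produces the longer pathway, matching the comparison of Figures 2 and 3.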

**Figure 2.** BFS waypoint list (VFOV 40 × 60).

**Figure 3.** BFS waypoint list (VFOV 60 × 90).

Two different search strategies, referred to in the following as BFS (breadth-first search) and DFS (depth-first search), are investigated. They correspond to the classical algorithms of the same names used to search the nodes of a graph. For simplicity, all search algorithms start at the bottom left corner of the search area.

The idea of the BFS strategy is shown in **Figure 2**. This strategy follows the general rule that closer positions are reached before farther ones. The algorithm used here follows the iterative rule Up-Right-Down-Right-Up-Left.
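A BFS-style serpentine sweep of this kind can be sketched on a grid of waypoints spaced one VFOV apart. The grid-index representation and the function name are illustrative assumptions, not the chapter's implementation.

```python
# Illustrative sketch of a BFS-style (serpentine) waypoint list starting
# at the bottom-left corner (0, 0): fly Up along the first lane, step
# Right, fly Down, step Right, fly Up again, and so on.

def bfs_waypoints(cols: int, rows: int):
    """Return (lane, position) grid indices in serpentine sweep order."""
    waypoints = []
    for x in range(cols):
        # Even lanes are flown upwards, odd lanes downwards.
        ys = range(rows) if x % 2 == 0 else range(rows - 1, -1, -1)
        for y in ys:
            waypoints.append((x, y))
    return waypoints
```

Every grid cell appears exactly once in the list, so the whole area is covered in a single pass.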

In contrast to the BFS, the DFS searches farther positions before nearer ones. The algorithm first covers the sides of the search area and then proceeds with smaller iterations until the complete search area is covered (compare **Figures 4** and **5**).
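One plausible reading of this lane ordering (an assumption for illustration; the chapter gives no pseudocode) is to fly the outermost remaining lanes first and narrow towards the middle:

```python
# Hypothetical sketch of the DFS lane order: cover the sides of the
# search area first, then proceed inwards with ever smaller iterations
# until every lane has been visited.

def dfs_lane_order(cols: int):
    """Visit the leftmost and rightmost remaining lanes alternately."""
    order, lo, hi = [], 0, cols - 1
    while lo <= hi:
        order.append(lo)
        if hi != lo:
            order.append(hi)
        lo, hi = lo + 1, hi - 1
    return order
```

For four lanes this yields the order 0, 3, 1, 2: both sides first, then the interior.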

**Figure 4.** DFS waypoint list (VFOV 40 × 60).


**Figure 5.** DFS waypoint list (VFOV 60 × 90).

**Figure 6.** Masking area around an accepted target (red dot).

The masking area (**Figure 6**) defines a square around a found object to avoid multiple detections of the same target; its size is given by a distance named MA. During the search, a target might be seen several times from different positions. Because of errors and noise, the target is never detected twice at exactly the same position and would therefore be counted as a new object multiple times. The distance MA is subtracted from and added to the *X*-coordinate and the *Y*-coordinate of every accepted target, and it is checked whether the newly detected target lies within one of the resulting squares. If so, the newly detected target is discarded; otherwise, it is accepted. A square masking area is used instead of a circle for simplicity and because the FOV is also a square.
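The acceptance test described above can be sketched as follows. The MA value of 30 cm, the coordinate tuples and all names are illustrative assumptions, not values from the chapter.

```python
# Sketch of the masking-area test: a detection is accepted only if it
# lies outside the MA-square of every already accepted target.

MA = 30.0  # masking distance in cm (assumed value for illustration)

def is_new_target(detection, accepted, ma=MA):
    """Return True if (x, y) falls outside the square of half-width `ma`
    centred on each accepted target, i.e. it should be added to the list."""
    x, y = detection
    for ax, ay in accepted:
        # Square obtained by subtracting/adding MA to both coordinates.
        if ax - ma <= x <= ax + ma and ay - ma <= y <= ay + ma:
            return False
    return True
```

A detection at (10, 10) near an accepted target at the origin is discarded, while one at (100, 100) is accepted as a new target.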
