**5.3 Obstacle avoidance framework**

The proposed OA framework, built into the architecture of Figs. 2 and 3, is shown in Fig. 13. It consists of an environmental map, a planning module, a localization module, sensors, and actuators (Horner & Yakimenko, 2007). The environmental map can include *a priori* knowledge, such as the positions of charted underwater obstacles, but can also incorporate unexpected threats discovered by sonar. The positions of all obstacles are eventually resolved in the vehicle-centred coordinate frame with the help of the localization module. The planning module is responsible for generating collision-free trajectories for the vehicle to follow. This reference trajectory, possibly augmented with reference controls, is then used to excite the actuators.
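As an illustration of how these components interact, consider the following toy Python sketch of one sense-map-plan-act cycle. The `ObstacleMap` and `Planner` classes and their behaviour are illustrative assumptions for exposition, not the NPS implementation:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class ObstacleMap:
    # Obstacle positions resolved in the vehicle-centred frame
    # (a priori chart data plus new sonar contacts).
    obstacles: List[Point] = field(default_factory=list)

    def update(self, detections: List[Point]) -> None:
        self.obstacles.extend(detections)

@dataclass
class Planner:
    clearance: float = 5.0  # minimum stand-off distance, metres (assumed)

    def plan(self, start: Point, goal: Point, omap: ObstacleMap) -> List[Point]:
        # Toy deliberative planner: run direct to the goal, but insert a
        # dog-leg waypoint if any mapped obstacle sits within `clearance`
        # of the path midpoint. A real planner would optimize a trajectory.
        mid = ((start[0] + goal[0]) / 2, (start[1] + goal[1]) / 2)
        for ox, oy in omap.obstacles:
            if (ox - mid[0]) ** 2 + (oy - mid[1]) ** 2 < self.clearance ** 2:
                mid = (mid[0], mid[1] + 2 * self.clearance)
                break
        return [start, mid, goal]

# One cycle of the sense -> map -> plan -> act loop:
omap = ObstacleMap(obstacles=[(50.0, 0.0)])  # charted obstacle
omap.update([(25.0, 1.0)])                   # new sonar contact
trajectory = Planner().plan((0.0, 0.0), (100.0, 0.0), omap)
```

The resulting waypoint list stands in for the reference trajectory handed to the actuators.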

80 Autonomous Underwater Vehicles

Fig. 11. SeaFox sensors and control architecture

Fig. 12. SeaFox USV navigating on the Sacramento River near Rio Vista, CA

Fig. 13. Components of the NPS OA framework

The proposed OA framework supports both deliberative and reactive obstacle avoidance behaviours. Deliberative OA involves the ability to generate and follow a trajectory that avoids all known obstacles between an arbitrary start location and some desired goal location, whereas reactive OA involves the ability to avoid any previously unknown obstacles detected while following this trajectory. Since the sonar system continuously resamples the environment, this reactive behaviour can be achieved by a deliberative planner as long as i) it executes fast enough to incorporate all new obstacle information from the sonar, and ii) it generates feasible trajectories which begin with the vehicle's current state vector. Specifically, since the REMUS and SeaFox FLS have limited range and limited fields of view in both image planes, new trajectories must be generated continuously (e.g. on some fixed time interval or upon detection of a new obstacle) during execution of the current manoeuvre to ensure reactive avoidance of new obstacles.
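The two replanning conditions above can be captured in a simple trigger, sketched here under the assumption of a fixed replanning period (the function name and the one-second default are illustrative):

```python
def should_replan(now, last_plan_time, new_obstacle_detected, period=1.0):
    """Decide whether to regenerate the trajectory: replan on a fixed
    time interval, or immediately when the sonar reports a previously
    unknown obstacle. Times are in seconds."""
    return new_obstacle_detected or (now - last_plan_time) >= period
```

Each replan would then be seeded with the vehicle's current state vector, so the new trajectory is feasible from the moment it takes effect.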

As an example of deliberative OA, assume a REMUS vehicle is mapping a minefield with sidescan sonar prior to a mine clearance operation. For this mission, the goal locations are provided by the sequence of waypoints making up a typical lawn-mowing survey pattern. If an obstacle is detected along a specified track line, the preferred OA manoeuvre for this mission would be one that also minimizes the cumulative deviation from this track line, since we desire 100% sensor coverage of the survey area. Hence, deliberative OA implies the optimization of some performance index. Likewise, while digital nautical charts or previous vehicle surveys can be used to identify some obstacles *a priori*, this data is usually incomplete or outdated. Vehicles should be capable of storing in memory the locations of any uncharted obstacles discovered during their mission so that subsequent trajectories can avoid them—even when they are no longer in the sonar's current field of view. Deliberative OA, therefore, also entails the creation and maintenance of obstacle maps.
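One such performance index can be sketched as the summed perpendicular (cross-track) deviation of trajectory samples from the survey track line. This is an illustrative cost for the lawn-mowing example, not the authors' exact index:

```python
import math

def cross_track_deviation_cost(trajectory, track_p0, track_p1):
    """Cumulative perpendicular deviation of discretized trajectory
    samples from the track line through track_p0 -> track_p1."""
    (x0, y0), (x1, y1) = track_p0, track_p1
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy)
    total = 0.0
    for px, py in trajectory:
        # Perpendicular distance via the magnitude of the 2D cross product.
        total += abs((px - x0) * dy - (py - y0) * dx) / norm
    return total
```

A deliberative planner would prefer, among collision-free candidates, the trajectory minimizing this cost, thereby maximizing sensor coverage of the survey area.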

#### **5.4 Obstacle detection and mapping**

Real-Time Optimal Guidance and Obstacle Avoidance for UMVs 83

Detecting obstacles from sonar imagery is challenging because several factors affect the intensity of sonar reflections off objects in the water column. These factors include the size, material, and geometry of an object relative to the sonar head; interference from other acoustic sensors; and the composition of the acoustic background (e.g. bottom type or amount of sediment), to name a few (Masek, 2008). Once an obstacle has been detected, other image processing algorithms must measure its size and compute its location within the navigational reference frame. While localizing obstacles via the range and bearing data embedded in the sonar imagery is straightforward, computing their true size is very difficult. First, for the REMUS FLS, an obstacle's height and width can be measured directly by both sonar heads only when it is located within a narrow 12-degree by 12-degree "window" directly ahead of the vehicle. Due to this narrow beam width, most obstacles are not imaged by both the horizontal and vertical sonar at the same time. Moreover, FLS images do not contain information in the region behind an obstacle's ensonified leading edge; this portion of the image is occluded. Therefore, the true horizontal and vertical extent of each obstacle must be deduced from multiple views of the same object. For a vehicle with a fixed sensor like the REMUS, this may be accomplished by deliberately inducing vehicle motion to vary the sonar angle (Furukawa, 2006) or by generating trajectories that will image the object from a different location at a later time. For these scenarios, it is desirable to balance OA behaviours with exploration behaviours in order to maximize sensor coverage and generate more complete obstacle maps. In this way, the proposed trajectory generation framework can be adapted to produce exploratory trajectories which more accurately measure the size and extent of detected obstacles (Horner et al., 2009). Nevertheless, due to the uncertainty in sonar images arising from environmental factors, sensor geometry, or obstacle occlusion, it is prudent to make conservative assumptions about an obstacle's boundaries until other information becomes available.

For the remainder of this section, we highlight different representations for incorporating obstacle size, location, and uncertainty into an obstacle map for efficient collision detection during the trajectory optimization phase. These representations can be tailored to the working environment. For operations in a kelp forest, for example, kelp stalks often appear as point-like features in horizontal-plane sonar imagery (Fig. 14) but seldom appear in vertical-plane images. By making the reasonable assumption (for this environment) that these obstacles extend vertically from the sea floor to the surface, it may be simpler to perform horizontal-plane OA through this type of obstacle field. Nevertheless, when building an obstacle map composed primarily of point features, mapping algorithms must account for the uncertainty inherent in sonar imagery. One simple but effective technique adds spherical (3D) or circular (2D) uncertainty bounds to each point feature stored in the obstacle map. Candidate OA trajectories which penetrate these boundaries violate constraint (7). Under this construct, collision detection reduces to a simple test of whether the line segments in a discretized trajectory intersect the uncertainty circle (2D) or sphere (3D) of each obstacle in the map. In general, when checking for line segment intersections with a circle or sphere, there are five different test cases to consider (Bourke, 1992). Our application, however, requires only two computationally efficient tests to determine: i) which line segment along a discretized trajectory contains the closest point of approach (CPA) to an obstacle, and ii) whether this CPA is located inside the obstacle's uncertainty bound.
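These two tests can be sketched as follows in 2D (function names are illustrative; the segment-point projection is the standard clamped-parameter form):

```python
import math

def cpa_to_point(trajectory, obstacle):
    """Closest point of approach (CPA) between a discretized trajectory
    (list of 2D waypoints) and a point obstacle. Returns (distance,
    index of the segment containing the CPA)."""
    ox, oy = obstacle
    best = (float("inf"), -1)
    for i in range(len(trajectory) - 1):
        (x0, y0), (x1, y1) = trajectory[i], trajectory[i + 1]
        dx, dy = x1 - x0, y1 - y0
        seg2 = dx * dx + dy * dy
        # Project the obstacle onto the segment, clamping to its endpoints.
        t = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((ox - x0) * dx + (oy - y0) * dy) / seg2))
        d = math.hypot(ox - (x0 + t * dx), oy - (y0 + t * dy))
        best = min(best, (d, i))
    return best

def violates_bound(trajectory, obstacle, radius):
    # Test ii): does the CPA fall inside the obstacle's uncertainty circle?
    return cpa_to_point(trajectory, obstacle)[0] < radius
```

The same construction generalizes to spheres in 3D by adding a third coordinate to the projection and distance computations.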

Most objects appear in sonar imagery not as point features, but as complex shapes. Unlike point features, it is difficult and computationally expensive to determine exhaustively whether or not a candidate vehicle trajectory will collide with these shapes. Instead, we can bound an arbitrary shape with a minimal-area rectangle (or box, in 3D) aligned with the shape's principal axes (Fig. 15). This type of object, called an oriented bounding box, is widely used in collision-detection algorithms for video games. One technique, based on the Separating Axis Theorem from convex geometry, results in an extremely fast test for line segment intersections with an oriented bounding box (Kreuzer, 2006). With slight modification, this test can also be used to detect when a trajectory passes directly above a bounding box. In our application, we use the OpenCV computer vision library to generate a bounding box around each object detected in the horizontal image plane. For each box, we then compute its centre point, length extent, and angle relative to the vehicle's navigation frame. Due to occlusion, the width extent produced from this rectangle does not accurately convey the true size of the obstacle, so we assume a constant value for this parameter. To create a 3D (actually 2.5D) bounding box around the object, we compute its vertical extents from vertical sonar imagery. At this time, the assumption is that obstacles extend from the ocean floor to their measured height above bottom, but this method can be generalized to obstacles suspended in the water column or extending from the surface to a measured depth (e.g. ships in a harbour).
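A 2D separating-axis test of this kind can be sketched as follows. The box parameterization (centre, half-extents, rotation angle) mirrors the centre/length/angle description above, but the code is an illustrative sketch, not the cited implementation:

```python
import math

def segment_intersects_obb(p0, p1, centre, half_len, half_wid, angle):
    """Separating Axis Theorem test between the segment p0-p1 and an
    oriented bounding box (centre, half-extents, rotation in radians).
    Returns True if the two shapes overlap."""
    ca, sa = math.cos(angle), math.sin(angle)
    axes = [(ca, sa), (-sa, ca)]          # the box's two edge normals
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    n = math.hypot(dx, dy)
    if n > 0:
        axes.append((-dy / n, dx / n))    # the segment's normal
    # The box's four corners in the navigation frame.
    corners = [(centre[0] + sx * half_len * ca - sy * half_wid * sa,
                centre[1] + sx * half_len * sa + sy * half_wid * ca)
               for sx in (-1, 1) for sy in (-1, 1)]
    for ux, uy in axes:
        seg = [p0[0] * ux + p0[1] * uy, p1[0] * ux + p1[1] * uy]
        box = [cx * ux + cy * uy for (cx, cy) in corners]
        if max(seg) < min(box) or max(box) < min(seg):
            return False                  # found a separating axis
    return True
```

The test succeeds as soon as the projections of the segment and the box overlap on every candidate axis; a single axis with disjoint projections proves the shapes cannot intersect.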

Fig. 14. Horizontal FLS image of a kelp forest

Fig. 15. Example of the bounding boxes used in conservative collision detection calculations

While oriented bounding boxes work well for mapping discrete obstacles in open-water environments, they require an additional image processing step and are not easily adapted to operations in restricted waterways. For these environments, a probabilistic occupancy grid is preferable for robustly mapping large continuous obstacles (e.g. harbour breakwaters) or natural terrain (e.g. a river's banks). Occupancy grids divide the environment into a grid of cells and assign each cell a probability of being occupied by an obstacle. Given a probabilistic sensor model, Bayes' Theorem is used to compute the probability that a given cell is occupied, based upon current sensor data. By extension, an estimate for the occupancy state of each cell can be continually updated using an iterative technique that incorporates all previous measurements (Elfes, 1989). Figure 16a shows an occupancy grid map of a river generated by the SeaFox FLS system. In this image, each pixel corresponds to a 1-metre square grid cell whose colour represents the cell's probability of being occupied (red) or empty (green). For comparison, the inset portion of the occupancy grid map has been overlaid with an obstacle map of oriented bounding boxes in Fig. 16b. Clearly, using discrete bounding boxes to represent a long, continuous shoreline quickly becomes intractable as more and more boxes are required. The occupancy grid framework is a much more efficient obstacle map representation for wide-area operations in restricted waterways.

Fig. 16. Occupancy grid for a river as generated by the SeaFox FLS system

NPS has developed probabilistic sonar models for the BlueView FLS and has successfully combined separate 2D occupancy grids in order to reconstruct the 3D geometry of an obstacle imaged by the REMUS UUV's horizontal and vertical sonar arrays (Horner et al., 2009). Using this occupancy grid framework, each candidate trajectory's risk of obstacle collision is computed using the occupancy probabilities (a direct lookup operation) of the grid cells it traverses. Trajectory optimization for OA then entails minimizing the cumulative risk of collision along the entire trajectory.

The RECON interface, however, does not accept pitch rate commands (for vehicle safety reasons). Therefore, in order to use the aforementioned path-following controller to track 3D trajectories with the REMUS UUV, controller outputs must be partitioned into horizontal (turn rate) and vertical (depth or altitude) commands, as described in the following section (obviously, the SeaFox USV only uses the turn rate commands).

**6.1 Horizontal plane**

Consider the 2D problem geometry depicted in Fig. 17, which defines an inertial frame {*I*}, a Serret-Frenet error frame {*F*}, and a body-fixed reference frame {*b*}. The kinematic model of the vehicle (2)-(3) reduces to

$$\begin{bmatrix} \dot{x}(t) \\ \dot{y}(t) \end{bmatrix} = \begin{bmatrix} U(t)\cos\psi(t) \\ U(t)\sin\psi(t) \end{bmatrix} \tag{33}$$

with dynamics described by

$$\dot{\psi} = r \tag{34}$$

Fig. 17. Horizontal path-following kinematics

By construction, the local trajectory planner produces an analytic expression for each component of the spatial trajectory as a function of the virtual arc length, $\mathbf{p}_c(\tau)$. We can also compute analytic expressions for $\mathbf{p}'_c(\tau)$ and $\mathbf{p}''_c(\tau)$, the first and second derivatives, respectively, of the spatial trajectory. Using the relationships in Fig. 17, the errors can be expressed in the Serret-Frenet frame {*F*} as

$$\mathbf{q}_F = \begin{bmatrix} x_F \\ y_F \end{bmatrix} = R_I^F \left( \mathbf{q}_I - \mathbf{p}_c \right) \tag{35}$$

where $R_F^I = \left[\mathbf{T}\;\mathbf{N}\;\mathbf{B}\right]$ is a rotation matrix constructed from the tangent, normal, and binormal vectors of the Serret-Frenet error frame {*F*}. The tangent vector is computed from the expression for the trajectory's first derivative as:
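The horizontal-plane Serret-Frenet error computation described above can be sketched as follows, with the tangent taken from the trajectory's first derivative and the normal as its 90-degree rotation (a 2D sketch with the binormal dropped; names are illustrative):

```python
import math

def serret_frenet_errors(q_i, p_c, p_c_prime):
    """Express the position error in the Serret-Frenet frame {F}:
    q_F = R_I^F (q_I - p_c), 2D (horizontal-plane) version.
    q_i: vehicle position, p_c: trajectory point, p_c_prime: trajectory
    first derivative at that point, all in the inertial frame {I}."""
    tx, ty = p_c_prime
    n = math.hypot(tx, ty)
    tx, ty = tx / n, ty / n          # unit tangent vector T
    nx, ny = -ty, tx                 # unit normal vector N
    ex, ey = q_i[0] - p_c[0], q_i[1] - p_c[1]
    x_f = ex * tx + ey * ty          # along-track error
    y_f = ex * nx + ey * ny          # cross-track error
    return x_f, y_f
```

Driving the cross-track error to zero with the turn rate command is then the job of the path-following controller.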
