**3. Search-based planning on time-invariant environment**

This section demonstrates one of the best-known algorithms in the graph search family: A\*. The A\* algorithm's properties are also examined and exploited in different use cases.

#### **3.1. A\* algorithm**

There are two main approaches to discretize C-space into a graph-like structure:

• Cell decomposition

• Roadmap

In the cell decomposition approach, we divide C-space into an eight-connected square grid of arbitrary resolution. We then colour every cell that intersects an obstacle configuration black, while the remaining free cells stay white. **Figure 3** illustrates this approach.

**Figure 3.** Cell decomposition approach (a) Original Objects, (b) Encoded Objects into cells.

This approximation places few assumptions on the obstacle configuration, so the approach is widely used in practice. However, there is no concept of path optimality, because C-space can be divided into ever smaller squares; the resolution is a trade-off between optimality and computation. Cell decomposition is also expensive in high dimensions, as it has exponential growth in PSPACE.

In the roadmap approach, the idea is to avoid scanning the entire C-space by computing an undirected graph whose "road" edges are guaranteed to be collision-free. The main methods of this approach are the visibility graph [17] and the Voronoi diagram. Examples of the two methods are demonstrated in **Figure 4**.

As can be seen, this approach generates fewer vertices than cell decomposition. The visibility graph method tends to place graph vertices at the vertices of obstacles, a property that leads to finding the shortest path. However, the visibility graph's roadmaps lie close to obstacles, so collision is inevitable under small movement errors. The Voronoi diagram solves this problem by generating roadmaps that keep the robot as far away from obstacles as possible.

Although this approach constructs a graph representation efficiently for search-based algorithms, it is difficult to compute in higher dimensions or in non-polygonal environments.
68 Advanced Path Planning for Mobile Entities

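The cell decomposition encoding described above can be sketched in a few lines. This is a minimal illustration; the obstacle rectangles, workspace size, and resolution are assumed values, not taken from the chapter:

```python
# Minimal cell-decomposition sketch: rasterise axis-aligned rectangular
# obstacles into a boolean occupancy grid. The obstacle list and the
# resolution below are illustrative assumptions.

def decompose(width, height, resolution, obstacles):
    """Return a 2D grid; True marks cells that intersect an obstacle."""
    cols = int(width / resolution)
    rows = int(height / resolution)
    grid = [[False] * cols for _ in range(rows)]
    for (ox0, oy0, ox1, oy1) in obstacles:  # obstacle: (xmin, ymin, xmax, ymax)
        for r in range(rows):
            for c in range(cols):
                # cell bounds in workspace coordinates
                cx0, cy0 = c * resolution, r * resolution
                cx1, cy1 = cx0 + resolution, cy0 + resolution
                # mark the cell if its rectangle overlaps the obstacle
                if cx0 < ox1 and cx1 > ox0 and cy0 < oy1 and cy1 > oy0:
                    grid[r][c] = True
    return grid

grid = decompose(4.0, 4.0, 1.0, [(1.2, 1.2, 2.8, 1.8)])
```

Each `True` cell corresponds to a black (obstacle) cell of **Figure 3**; choosing a finer `resolution` trades computation for path quality, as noted above.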

There are three main properties of A\* [8] that are inherited from historical graph search algorithms:

• **Uniform cost:** a rule to expand the cell with the least travel cost from the start cell. This rule is formulated:

$$f(s) = g(s),\tag{1}$$

where *f*(*s*) is the priority of cell *s* in the open list *O* and *g*(*s*) is the cost travelled from the start cell to *s*; the smaller *f*(*s*), the higher the priority. The open list *O* holds the cells awaiting expansion, so this rule prioritises expanding the cells with the least travel cost. A\* inherits this property from Dijkstra's algorithm.

• **Heuristic:** a rule that guides the expanding search towards the goal cell. This rule is formulated:

$$f(s) = h(s),\tag{2}$$

where *h*(*s*) is the heuristic function that indicates, for each cell *s*, the closeness of *s* to the goal; *h*(*s*) can be the Euclidean or the Manhattan distance function in this case. In addition, *h*(*s*) must satisfy the admissibility property:

$$h(s) \le cost(s, s') + h(s'),\tag{3}$$

for any successor *s*′ of *s*, to ensure path optimality. A\* inherits this property from greedy best-first search.
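Equation (3) can be checked numerically. The sketch below is an illustrative check under the assumption that edge costs are Euclidean step lengths on an eight-connected grid (the grid size and goal are arbitrary); it verifies the inequality for the Euclidean heuristic over every edge:

```python
import math

# Check the inequality h(s) <= cost(s, s') + h(s') from Eq. (3) for the
# Euclidean heuristic on an eight-connected grid whose edge cost is the
# Euclidean step length (1 for straight moves, sqrt(2) for diagonals).
# The grid size and goal position are illustrative assumptions.

GOAL = (9, 9)

def h(s):
    # straight-line (Euclidean) distance from s to the goal
    return math.hypot(GOAL[0] - s[0], GOAL[1] - s[1])

def consistent(size=10):
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    for x in range(size):
        for y in range(size):
            for dx, dy in moves:
                nx, ny = x + dx, y + dy
                if 0 <= nx < size and 0 <= ny < size:
                    step = math.hypot(dx, dy)  # cost(s, s')
                    if h((x, y)) > step + h((nx, ny)) + 1e-12:
                        return False
    return True
```

The check succeeds because the Euclidean distance obeys the triangle inequality; with unit costs for diagonal moves instead, the same check would fail, which is why the heuristic must be matched to the cost model.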

**Figure 5** illustrates each property of A\* when applied to searching for the goal:

The total number of expanded cells indicates each algorithm's performance (i.e. how many cells are processed before the path is found). As can be seen, Dijkstra's algorithm has the worst performance due to the lack of guidance for expanding the search; it simply expands uniformly in all directions. Greedy best-first search is computationally the best; however, unlike Dijkstra's algorithm, it does not guarantee the shortest path, because its search gets trapped in the local minimum shown in the figure. A\* gains both the computational and the optimality advantages over these older algorithms by combining the uniform cost rule, which guarantees path optimality, with the heuristic rule of greedy best-first search, which guides the search towards the goal. Both rules are combined and formulated as the priority function:

$$f(s) = g(s) + h(s).\tag{4}$$


Search-Based Planning and Replanning in Robotics and Autonomous Systems

http://dx.doi.org/10.5772/intechopen.71663


Intuitively, one can read *f*(*s*) as the estimated cost of travelling from the start cell to the goal through the cell *s* in question. Hence, A\* expands towards the cells with the least estimated travel cost (**Figure 6**, line 11).

The pseudo code for A\* is shown in **Figure 6**.

**Figure 5.** Operation demonstration of properties of A\* and A\* itself (a) Map, (b) Uniform cost search, (c) Greedy Best-First Search and (d) A\*.
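As an executable companion to the pseudo code of **Figure 6**, the following is a minimal A\* sketch on an eight-connected grid with unit move cost. The map and function names are illustrative assumptions; the Chebyshev distance is used as *h*(*s*) here, rather than the Euclidean or Manhattan distance mentioned above, because it remains admissible when every move costs 1:

```python
import heapq
import math

# Minimal A* sketch on an eight-connected grid (0 = free cell, 1 = obstacle).
# Priority f(s) = g(s) + h(s) as in Eq. (4); h is the Chebyshev distance,
# admissible under unit move cost. The map below is an illustrative assumption.

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda s: max(abs(s[0] - goal[0]), abs(s[1] - goal[1]))
    open_list = [(h(start), start)]          # the open list O, keyed by f(s)
    g = {start: 0}
    parent = {start: None}
    while open_list:
        _, s = heapq.heappop(open_list)      # expand the cell with least f(s)
        if s == goal:                        # reconstruct path start -> goal
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) == (0, 0):
                    continue
                n = (s[0] + dx, s[1] + dy)
                if not (0 <= n[0] < rows and 0 <= n[1] < cols) or grid[n[0]][n[1]]:
                    continue
                if g[s] + 1 < g.get(n, math.inf):   # found a cheaper g(n)
                    g[n] = g[s] + 1
                    parent[n] = s
                    heapq.heappush(open_list, (g[n] + h(n), n))
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

On this map the only gap in the wall is at cell (1, 3), so the returned path detours through it with the minimum number of moves.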

**Figure 6.** Pseudo code of A\* algorithm.

#### **3.2. Anytime A\*: path suboptimal bound (ARA\*) algorithm**

In practice, the performance issue is more critical; the time for the robot to "think" before making a decision is limited. Therefore, a path planner with the following properties is essential:

• Quickly producing a suboptimal solution, and then gradually improving this solution as time allows by reusing previous search effort as much as possible

• Having control over the suboptimality bound, and hence indicating a bound on the processing time of each search iteration


We introduce the algorithm that is well-suited for this scenario: ARA\* [18].

ARA\* is developed from A\* and inherits all of its intrinsic properties. The idea of quickly planning a suboptimal path derives from inflating the heuristic function [18] by a factor *ε*. The search becomes greedier and provides a solution faster, and the solution is proven to be bounded:

$$g^{*}(s) \le g(s) \le \varepsilon \cdot g^{*}(s),\tag{5}$$

where *g*∗(*s*) is the optimal path cost from start to *s*.

The pseudo code for ARA\* is shown in **Figure 7**.

To understand the behaviour of ARA\*, we must keep in mind that ARA\* violates the admissibility property (*h*(*s*) ≤ *cost*(*s*, *s*′) + *h*(*s*′) for any successor *s*′ of *s*). ARA\* modifies the A\* *f*(*s*) function by inflating the heuristic function *h*(*s*):

$$f(s) = g(s) + \varepsilon \cdot h(s).\tag{6}$$

**Figure 7.** Pseudo code of ARA\* algorithm.

Hence, the computed path is no longer optimal. Moreover, due to the decreasing *ε*, each search iteration is no longer guaranteed to expand each cell at most once, as A\* does. However, to maintain efficiency and ensure the suboptimality bound, ARA\* introduces the INCONS list to store locally inconsistent cells, i.e. cells satisfying

$$g(s') > \min_{s \in pred(s')} \{g(s) + cost(s, s')\},\tag{7}$$

(**Figure 7**, line 13). These cells have already been expanded once and are processed again in the next search iteration.

In general, ARA\* executes consecutive search iterations with a decreasing suboptimality bound; each search does not recalculate the consistent cells from the previous search, so the path improvement process is efficient. The theoretical properties of ARA\* are described in [18].
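The inflation idea can be sketched with a simplified anytime loop that reruns weighted A\* with a decreasing *ε*. This is not the real ARA\* of **Figure 7**: it restarts each search from scratch instead of reusing consistent cells via the INCONS list, and it only illustrates the bound of Eq. (5). The map and the *ε* schedule are assumptions:

```python
import heapq
import math

# Simplified illustration of the anytime idea behind ARA*: run weighted A*
# with f(s) = g(s) + eps * h(s) (Eq. (6)) for a decreasing eps schedule.
# Unlike real ARA*, each search restarts from scratch; the sketch only
# demonstrates the bound g(goal) <= eps * g*(goal) from Eq. (5).
# The map and the eps schedule are illustrative assumptions.

def weighted_astar(grid, start, goal, eps):
    rows, cols = len(grid), len(grid[0])
    h = lambda s: max(abs(s[0] - goal[0]), abs(s[1] - goal[1]))
    open_list = [(eps * h(start), start)]
    g = {start: 0}
    closed = set()
    while open_list:
        _, s = heapq.heappop(open_list)
        if s == goal:
            return g[s]                    # bounded by eps * optimal cost
        if s in closed:
            continue                       # expand each cell at most once
        closed.add(s)
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) == (0, 0):
                    continue
                n = (s[0] + dx, s[1] + dy)
                if not (0 <= n[0] < rows and 0 <= n[1] < cols) or grid[n[0]][n[1]]:
                    continue
                if g[s] + 1 < g.get(n, math.inf):
                    g[n] = g[s] + 1
                    heapq.heappush(open_list, (g[n] + eps * h(n), n))
    return math.inf

grid = [[0] * 6 for _ in range(6)]
for c in range(5):
    grid[3][c] = 1                         # a wall with a gap at column 5
costs = [weighted_astar(grid, (0, 0), (5, 0), eps) for eps in (3.0, 1.5, 1.0)]
```

With *ε* = 1 the search degenerates to plain A\* and returns the optimal cost; every earlier, greedier iteration returns a cost within its *ε* factor of that optimum.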

**4. Search-based replanning on time-varying environment**

In real-world applications, there is often a scenario in which the robot initially has no a priori information about its surroundings. We cannot encode the world-space information each time the robot runs, because that is expensive, tedious, and infeasible due to rapid changes in practice. To maintain a collision-free path, one can naively rerun A\* to replan the shortest path from the point at which the robot detects changes. However, this naive approach wastes computation by reprocessing cells that are irrelevant to computing a new path, and hence increases the idle time between searches. This section demonstrates search-based algorithms that solve the mentioned problem in a time-variant environment.

#### **4.1. Incremental heuristic algorithm: D\* Lite algorithm**

*4.1.1. D\* Lite algorithm*

D\* Lite [19] is developed directly from Lifelong Planning A\* (LPA\*) [20], itself a combination of Dynamic SWSF-FP [21] and A\* [8], for application on mobile robots. Therefore, D\* Lite possesses these properties:

• Reverse search: Unlike A\*, D\* Lite expands its search from the goal; *h*(*s*) now indicates the closeness of cell *s* to the start cell, and *g*(*s*) now stores the estimated distance from the goal. After the search is finished, the path from start to goal is generated by iteratively moving from cell *s* towards the neighbour cell *s*′ that has the lowest sum *g*(*s*′) + *cost*(*s*, *s*′), in greedy style.

• Heuristics: D\* Lite inherits this property from A\*, together with the admissibility rule. Thus, D\* Lite maintains path optimality by expanding heuristically towards the start cell.

In the goal-directed navigation task, with the cell decomposition approximation, the robot always observes a limited range of the eight-connected grid. The robot is able to move in eight directions with cost one, and it assumes that unknown cells are traversable. The robot follows the initially calculated path to the goal; when it encounters blocked cells, it must be able to process only the cells that are relevant to computing the new path. The challenge is to find these relevant cells. **Figure 8** illustrates this idea.

Note that the grey cells in **Figure 8** are the cells expanded to compute the initial path, or the new path once the robot (at the yellow cell) detects the blocked cell shown in purple. Darker grey cells are processed multiple times. As can be seen, the total number of cells expanded in the replanning process of D\* Lite is 61, whereas rerunning A\* expands 75 cells.
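The reverse-search property can be illustrated in isolation. The sketch below is not D\* Lite itself: it replaces the incremental machinery with a plain backward Dijkstra that computes *g*(*s*) as the distance from *s* to the goal, then extracts the path greedily from the start by minimising *cost*(*s*, *s*′) + *g*(*s*′). The map and function names are assumptions:

```python
import heapq
import math

# Illustration of D* Lite's reverse-search property only: a backward
# Dijkstra computes g(s) = distance from s to the goal, and the path is
# then extracted greedily from the start by stepping to the neighbour
# minimising cost(s, s') + g(s'). The incremental replanning machinery of
# D* Lite is omitted; the map is an illustrative assumption.

def neighbours(s, grid):
    rows, cols = len(grid), len(grid[0])
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            n = (s[0] + dx, s[1] + dy)
            if ((dx, dy) != (0, 0) and 0 <= n[0] < rows and 0 <= n[1] < cols
                    and grid[n[0]][n[1]] == 0):
                yield n

def backward_g(grid, goal):
    g = {goal: 0}
    pq = [(0, goal)]                 # search expands outward from the goal
    while pq:
        d, s = heapq.heappop(pq)
        if d > g[s]:
            continue                 # stale queue entry
        for n in neighbours(s, grid):
            if d + 1 < g.get(n, math.inf):   # every move costs 1
                g[n] = d + 1
                heapq.heappush(pq, (g[n], n))
    return g

def extract_path(grid, start, g):
    path, s = [start], start
    while g.get(s, math.inf) > 0:    # g = 0 only at the goal
        s = min(neighbours(s, grid), key=lambda n: 1 + g.get(n, math.inf))
        path.append(s)
    return path

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
g = backward_g(grid, (2, 2))
path = extract_path(grid, (0, 0), g)
```

Because *g* stores distances to the goal, every greedy step strictly decreases *g*, so the extraction terminates at the goal; D\* Lite's contribution, omitted here, is repairing *g* incrementally when edge costs change instead of recomputing it.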

