Autonomous Mobile Mapping Robots: Key Software Components

*Janusz Będkowski and Jacek Szklarski*

#### **Abstract**

This chapter discusses key software components of autonomous mobile mapping robots equipped with an inertial measurement unit (IMU) and light detection and ranging (LiDAR). In recent years, new LiDARs with a nonrepetitive scanning pattern have appeared on the market. They are also equipped with an IMU; thus, a robust LiDAR-inertial odometry front end for simultaneous localization and mapping (SLAM) significantly improves unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) in 3D mapping scenarios. Our study incorporates FAST-LIO as the front end of SLAM. The main focus is a lightweight back-end implementation of pose graph SLAM, an alternative solution to the state-of-the-art g2o and GTSAM implementations. We also elaborate on the iterative closest point, the normal distributions transform, and their extension for multiview 3D data registration/refinement. The implementation is based on C++ and the Eigen library. This chapter also discusses path planning in an already mapped environment. All software components are available as open-source projects.

**Keywords:** multiview normal distributions transform, SLAM, path planning, coverage

### **1. Introduction**

This chapter presents key software components for autonomous mobile mapping robots, shown in **Figure 1** (the software components are available in [1, 2]). This set of consecutive functionalities is composed of the robust light detection and ranging (LiDAR)-inertial odometry FAST-LIO [3], a pairwise matching algorithm (iterative closest point [ICP] or normal distributions transform [NDT]) [1] for minimizing the error of loop closures, pose graph simultaneous localization and mapping (SLAM) [2], final refinement (multiview NDT) [1], and path planning. These functionalities are the core components for autonomous mobile mapping robots equipped with an inertial measurement unit (IMU) and LiDAR.

Autonomous mobile mapping robots have already been widely investigated within the context of commercial applications, for example, power line inspection [4], smart factory production [5], offshore oil plants [6], and nuclear power plant (NPP) inspection [7]. Robots improve rescue missions in hazardous environments [8]. The research related to COVID-19 and public support for autonomous technologies shows great interest in the artificial intelligence (AI) direction [9]. There are plenty of application areas for autonomous mobile mapping robots, such as floor scrubbing, delivery, warehouse, and service robots. The rapid improvements in autonomous mobile mapping are evident for LiDARs with a nonrepetitive scanning pattern [10]. Such a LiDAR is capable of acquiring massive 3D data in a short time within a limited field of view. Its advantages are a long range, even up to 500 meters, and a high density of points covering the entire measurement cone in a short time. The narrow field of view can be extended by multiple-LiDAR systems [11]. Owing to synchronized IMU data and robust feature classification, a robust LiDAR-inertial odometry framework significantly improves unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) in 3D mapping scenarios [3, 12]. Moreover, the overall cost of such a LiDAR is rather low compared with its competitors. The reader is advised to study [13] for a further comparative evaluation of precision and accuracy.

**Figure 1.**

*Scheme of key software components elaborated in this chapter.*

SLAM is a core component of the autonomous mobile mapping robot: it builds a map based on the estimated trajectory and estimates this set of consecutive poses based on this map [14]. SLAM is composed of a front end capable of reconstructing a smooth and continuous trajectory from onboard sensors. This trajectory is affected by a constantly growing error. To reduce this error, additional measurements should be incorporated into the back end as so-called loop closures. The back end is typically solved using the pose graph SLAM method implemented in the g2o [15] and GTSAM [16] frameworks. An alternative implementation is available in [2] that extends the possible rotation matrix parameterizations. Pose graph SLAM optimizes a graph composed of vertices (poses) and edges (consecutive odometry readings, loop closures, and other constraints); thus, it is supposed to preserve the shape of the initial trajectory (motion model) and minimize the error between observed (current relative pose) and measured (desired relative pose) loop closure edges. For the calculation of the desired relative pose between two scans, the iterative closest point [17] or normal distributions transform [18] algorithm can be used. The final step of 3D mapping can be the refinement of all 3D measurements performed with, for example, the multiview normal distributions transform [19]. A similar approach is evident in general mobile mapping applications [20, 21].

The last functionality elaborated in this chapter is path planning, a fundamental software component of autonomous mobile mapping systems dedicated to missions where full coverage is desirable [22]. Examples are nuclear power plant inspection [23] and cleaning robotics [24]. In order to perform a mapping task, a mobile robot has to act according to some kind of motion planning algorithm. This is also true for other related tasks such as inspecting, searching, cleaning, image mosaicking, etc. There are many factors that determine which algorithm should be used for path plan generation:

- whether the map of the environment is known in advance,
- whether offline planning is allowed,
- the optimization criterion (coverage time, energy, region priorities, etc.),
- the number of robots involved, and
- the kinematic properties of the robots.

All the aspects mentioned above profoundly impact the algorithms necessary to guide the mobile robot. For example, if the map of the environment is not known in advance, one should focus on a version of SLAM with an exploration algorithm. If, on the other hand, the map is known and offline planning is allowed, one may use a solver that generates a near-optimal path plan. The optimality criteria can also vary depending on the specific application: minimization of coverage time, minimization of energy, prioritization of certain regions, etc. If a group of robots is involved, the question of equal workload distribution must also be addressed.

A path planning algorithm should also take into account the kinematic properties of the robots involved in the process. A different algorithm will be applied for path generation for an aircraft taking aerial images for mosaicking than for a mobile robot cleaning the floor of a warehouse.

For the autonomous mapping of unknown environments, in order to obtain a map of the environment, a robot should be able to simultaneously localize and map and, at the same time, explore the environment. Path planning, in this context, is related to the exploration process. The robot gradually explores the environment, and the map is incrementally built for new poses, while the robot localizes itself in this map. New, temporary goal poses are often chosen by means of frontier extraction [25]. Frontier areas represent boundary regions between known (mapped) and unknown regions of the environment. Robot motion between its current position and such a temporary goal is realized by a typical path planner, which navigates between waypoints while avoiding obstacles. Details regarding frontier exploration vary depending on the application and may also include exploration in 3D, for example, [26].

One of the fundamental applications of mobile robots is to perform a coverage task. Such a task requires that the robot visit points in the environment, eventually visiting (or observing) the entire region of interest. This is necessary for tasks like cleaning, mowing, harvesting, planting, spraying, mapping, searching, painting, mosaicking, etc. Normally, the first step for such an application is to obtain the map, for example, by means of exploration with SLAM. If the map is known, and the robot is able to localize itself using the map, a coverage path planning (CPP) algorithm should be employed in order to find consecutive waypoints. The area of interest will be covered entirely after the robot has visited all the waypoints. The problem of CPP is well known and well studied in the field of robotics [27, 28]. Nevertheless, it remains challenging, and even the simplest variants, like the lawnmower problem, are NP-hard [29]. There exist a large number of exact, approximate, and heuristic algorithms that solve many variations of CPP [30–32].

The rest of this chapter is organized as follows. Section 2 discusses key software components for SLAM. Section 3 addresses path planning in known environments. Section 4 shows an example of mobile mapping applications. Finally, Section 5 concludes this chapter.

#### **2. Key software components for SLAM**

In this section, the key software components are elaborated. The fundamental element of SLAM is an observation equation. A set of observation equations builds an optimization system; finding an optimal solution results in the final map and trajectory. The proposed lightweight implementation uses symbolic computing in Python (SymPy) [33] to generate C++ code for each observation equation. An open-source project is available in [2].
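As an illustration of this code-generation step, the sketch below (assuming SymPy is available; the toy model and symbol names are illustrative, not taken from the actual project) differentiates the residual of a simple observation equation and emits C code for it:

```python
import sympy as sp

# Toy model function psi(x) = a*x + b with state vector beta = (a, b);
# the residual of one observation (x, y) is r = y - psi(x), as in Eq. (1).
a, b, x, y = sp.symbols("a b x y")
r = y - (a * x + b)

# Jacobian of the residual with respect to the optimized parameters.
J = [sp.diff(r, p) for p in (a, b)]

# Emit C code, analogous to generating C++ for each observation equation.
print(sp.ccode(r))
print([sp.ccode(j) for j in J])
```

The real project generates analogous C++ for the full 6-DoF observation equations; this sketch only shows the mechanism.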

#### **2.1 Observation equations**

Observation Eq. (1) is composed of a target value $y_i$, a model function $\Psi_{[\beta]}(x_i)$, and its *residual* $r_i$, defined as the difference between the target value and the value of the model function for $x_i$ and state vector $\beta$:

$$r_i = \underbrace{y_i}_{\text{target value}} - \underbrace{\Psi_{[\beta]}(x_i)}_{\text{model function}} \tag{1}$$

where $\beta$ is the vector of $n$ optimized parameters. The weighted nonlinear least squares optimization method finds the optimal parameter values $\beta$ by minimizing the objective function, a sum of $C$ squared residuals

$$\mathrm{Sum} = \sum_{i=1}^{C} r_i^2 = \underbrace{\sum_{i=1}^{C}\left(y_i - \Psi_{[\beta]}(x_i)\right)^2}_{\text{objective function}}. \tag{2}$$

Therefore, the optimization problem is defined as

$$\beta^{*} = \arg\min_{\beta} \sum_{i=1}^{C}\left(y_i - \Psi_{[\beta]}(x_i)\right)^2 \tag{3}$$

where there are *C* observation equations. It is efficiently solved using the iterative Levenberg-Marquardt algorithm [34]. A single *k*th iteration provides an update for *β* given as

$$\beta^{k+1} = \beta^{k} + \left(J^{\top} W J + \lambda I\right)^{-1} J^{\top} W\, r(\beta^{k}) \tag{4}$$

where $I$ is the identity matrix, $J$ is the Jacobian of the model function, and $W$ is the weight matrix modeling the impact of each observation equation on the optimization process. The damping factor $\lambda$ starts from a small initial value. During the optimization process, $\lambda$ is decreased once $\mathrm{Sum} = \sum_{i=1}^{C} r_i^2$ decreases; otherwise, $\lambda$ is increased and the optimization process restarts from the previous step. The observation equation for pose graph SLAM is given as

$$\underbrace{\begin{bmatrix} t_x^{\delta} \\ t_y^{\delta} \\ t_z^{\delta} \\ \omega^{\delta} \\ \varphi^{\delta} \\ \kappa^{\delta} \end{bmatrix}}_{\text{residuals}} = \underbrace{\begin{bmatrix} t_x \\ t_y \\ t_z \\ \omega \\ \varphi \\ \kappa \end{bmatrix}}_{\text{target values}} - \underbrace{m2v_{[\beta]}\left([\mathbf{R},\mathbf{t}]_{12}\right)}_{\text{model function}} \tag{5}$$

where $(t_x^{\delta}\; t_y^{\delta}\; t_z^{\delta}\; \omega^{\delta}\; \varphi^{\delta}\; \kappa^{\delta})^{\top}$ are *residuals*, $(t_x\; t_y\; t_z\; \omega\; \varphi\; \kappa)^{\top}$ are *target values*, and $m2v_{[\beta]}([\mathbf{R},\mathbf{t}]_{12})$ is the *model function*. Target values describe the desired edge (relative pose expressed as translation $(t_x, t_y, t_z)$ and orientation $(\omega, \varphi, \kappa)$) between two optimized vertices (poses) of the graph. The relative pose $[\mathbf{R},\mathbf{t}]_{12}$ from pose $[\mathbf{R},\mathbf{t}]_{1}$ to pose $[\mathbf{R},\mathbf{t}]_{2}$ is given as

$$
\begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{1\times 3} & 1 \end{bmatrix}_{12} = \left(\begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{1\times 3} & 1 \end{bmatrix}_{1}\right)^{-1}\begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{1\times 3} & 1 \end{bmatrix}_{2}. \tag{6}
$$

Function $m2v_{[\beta]}([\mathbf{R},\mathbf{t}]_{12})$ retrieves $\beta = (t_x, t_y, t_z, \omega, \varphi, \kappa)^{\top}$ for the Tait-Bryan parametrization of the rotation matrix. This parameterization is essential to preserve the orthonormality of the rotation matrix during the optimization process. It is important to note that other parameterizations exist, such as quaternions or Rodrigues parameters, but they are not the main topic of this chapter.
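A minimal pure-Python sketch of Eq. (6) and the $m2v$ function may look as follows (the helper names are illustrative; the actual project implements this in C++ with Eigen, and the Tait-Bryan convention assumed here is $R = R_z(\kappa)\,R_y(\varphi)\,R_x(\omega)$):

```python
import math

def pose(tx, ty, tz, om, fi, ka):
    # 4x4 pose [R, t] from translation and Tait-Bryan angles
    # (omega, phi, kappa) about x, y, z; R = Rz(kappa) Ry(phi) Rx(omega).
    co, so = math.cos(om), math.sin(om)
    cf, sf = math.cos(fi), math.sin(fi)
    ck, sk = math.cos(ka), math.sin(ka)
    return [
        [ck * cf, ck * sf * so - sk * co, ck * sf * co + sk * so, tx],
        [sk * cf, sk * sf * so + ck * co, sk * sf * co - ck * so, ty],
        [-sf,     cf * so,                cf * co,                tz],
        [0.0, 0.0, 0.0, 1.0],
    ]

def inverse(p):
    # Inverse of [R, t; 0 1] is [R^T, -R^T t; 0 1].
    r = [[p[j][i] for j in range(3)] for i in range(3)]
    t = [-sum(r[i][j] * p[j][3] for j in range(3)) for i in range(3)]
    return [r[0] + [t[0]], r[1] + [t[1]], r[2] + [t[2]], [0.0, 0.0, 0.0, 1.0]]

def compose(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def m2v(p):
    # Retrieve beta = (tx, ty, tz, omega, phi, kappa) from a 4x4 pose.
    fi = math.asin(-p[2][0])
    om = math.atan2(p[2][1], p[2][2])
    ka = math.atan2(p[1][0], p[0][0])
    return (p[0][3], p[1][3], p[2][3], om, fi, ka)

# Relative pose [R, t]_12 from pose 1 to pose 2, as in Eq. (6).
p1 = pose(1.0, 2.0, 0.0, 0.0, 0.0, 0.5)
p2 = pose(2.0, 1.0, 0.0, 0.0, 0.0, 0.9)
p12 = compose(inverse(p1), p2)
print(m2v(p12))  # relative translation and Tait-Bryan angles
```

Note that $m2v$ built on `asin` is only valid away from the gimbal-lock singularity $\varphi = \pm\pi/2$, one reason alternative parameterizations exist.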

The iterative closest point algorithm finds the relative pose between two point clouds by incorporating the following source-point-to-target-point observation equation:

$$\underbrace{\begin{bmatrix} x^{\delta} \\ y^{\delta} \\ z^{\delta} \end{bmatrix}}_{\text{residuals}} = \underbrace{\begin{bmatrix} x^{tg} \\ y^{tg} \\ z^{tg} \end{bmatrix}}_{\text{target values}} - \underbrace{\Psi_{[\mathbf{R},\mathbf{t}]}\left(\mathbf{R},\mathbf{t},x^{l},y^{l},z^{l}\right)}_{\text{model function}} \tag{7}$$

where $(x^{\delta}\; y^{\delta}\; z^{\delta})^{\top}$ are *residuals*, $(x^{tg}\; y^{tg}\; z^{tg})^{\top}$ are *target values*, and $\Psi_{[\mathbf{R},\mathbf{t}]}(\mathbf{R},\mathbf{t},x^{l},y^{l},z^{l})$ is the *model function* that transforms the 3D point $(x^{l}, y^{l}, z^{l})$ expressed in the local coordinate system into the global one.
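For a single source-target point pair, the residual of Eq. (7) can be sketched as follows (plain Python for clarity; the project itself uses C++ with Eigen):

```python
def transform(R, t, p):
    # Model function Psi: local point (x_l, y_l, z_l) -> global frame.
    return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

def icp_residual(R, t, source_local, target_global):
    # Eq. (7): residual = target value - transformed source point.
    g = transform(R, t, source_local)
    return [target_global[i] - g[i] for i in range(3)]

# Identity rotation, pure translation by (1, 0, 0): the residual measures
# how far the current pose is from aligning the source to the target.
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [1.0, 0.0, 0.0]
print(icp_residual(R, t, [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))  # [0.0, 0.0, 0.0]
```

ICP iterates between finding nearest-neighbor correspondences and minimizing the sum of squared residuals of this form over $(\mathbf{R}, \mathbf{t})$.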

The normal distributions transform algorithm is an alternative to ICP for pairwise matching, and it can be easily extended to multiview point cloud registration (final refinement of the 3D map). It decomposes the 3D scene into a regular grid where, for each cell, the centroid $\boldsymbol{\mu}$ and the covariance $\Sigma$ are calculated with formulas (8) and (9). These formulas incorporate all $m$ points $\mathbf{P}_k^{g}$ in a single cell, expressed in the global coordinate system

$$\boldsymbol{\mu} = \frac{1}{m}\sum_{k=1}^{m} \mathbf{P}_k^{g} \tag{8}$$

$$\Sigma = \frac{1}{m-1}\sum_{k=1}^{m}\left(\mathbf{P}_k^{g} - \boldsymbol{\mu}\right)\left(\mathbf{P}_k^{g} - \boldsymbol{\mu}\right)^{\top}. \tag{9}$$

The NDT observation equation is given as

$$\underbrace{\begin{bmatrix} x^{\delta} \\ y^{\delta} \\ z^{\delta} \end{bmatrix}}_{\text{residuals}} = \underbrace{\begin{bmatrix} \mu_x \\ \mu_y \\ \mu_z \end{bmatrix}}_{\text{target values}} - \underbrace{\Psi_{[\mathbf{R},\mathbf{t}]}\left(\mathbf{R},\mathbf{t},x^{l},y^{l},z^{l}\right)}_{\text{model function}} \tag{10}$$

where the target value is $\boldsymbol{\mu}$, and $\Sigma^{-1} = W$ is the weight matrix for each NDT observation equation incorporated into the Levenberg-Marquardt update of Eq. (4).
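Equations (8) and (9) amount to the sample mean and covariance of the points falling into one grid cell; a minimal sketch (plain Python, illustrative names):

```python
def ndt_cell(points):
    # Eqs. (8)-(9): centroid mu and covariance Sigma of the m points
    # P_k^g that fall into a single grid cell (global coordinates).
    m = len(points)
    mu = [sum(p[i] for p in points) / m for i in range(3)]
    sigma = [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in points) / (m - 1)
              for j in range(3)] for i in range(3)]
    return mu, sigma

pts = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, -1.0, 0.0)]
mu, sigma = ndt_cell(pts)
print(mu)     # [1.0, 0.0, 0.0]
print(sigma)  # Sigma^{-1} would then serve as the weight W in Eq. (4)
```

In practice $\Sigma$ may be singular for degenerate cells (e.g., perfectly planar points, as in this example where all $z = 0$), so implementations typically regularize it before inversion.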

#### **3. Path planning in known environments**

In this section, we will focus on a practical example of a cleaning robot whose task is to clean a large area. Therefore, one needs to apply a path planning algorithm for a single device that moves in a known environment, where the map of static obstacles is known in advance (cf. [35]).

#### **3.1 Map decomposition**

A cleaning robot has a coverage area equal to the area of its cleaning/sweeping device. For industrial cleaners, this is a dedicated brush equipped with a water and soap reservoir. Consequently, the size of the coverage area, a circle with radius $r_{\text{cov}}$, is comparable with the size of the robot. The area to be cleaned is a large warehouse with total area $A$, so it should be assumed that $\pi r_{\text{cov}}^2 \ll A$. From this assumption, it follows that any grid-based approach to planning should be avoided, because in most grid-based methods, the time for finding a solution grows significantly with the grid size (for many methods even exponentially [28]). For the discussed problem, the number of grid cells would be too large to come up with a feasible solution.

It should be noted that this is not always the case for coverage problems. In photo mosaicking, for example, UAVs equipped with cameras have a coverage radius $r_{\text{cov}}$ much larger than the size of the robot: the area photographed in a single shot is much larger than the footprint of the device. Such problems may therefore use a different planning approach than the one for cleaning robots.

Consequently, a solution based on the geometric decomposition of the grid map into a set of polygons should be considered. Afterward, a path over this set of polygons is found in a way that minimizes a given optimization criterion, in this case the total coverage time. The pipeline for the system is depicted in **Figure 2**.

After mapping the environment with SLAM and the proper optimization, a grid map of static obstacles is obtained. There are a number of ways to convert such a grid map into a set of polygons. This process is known as map decomposition. The most well-known method is the *trapezoidal decomposition*, where the grid map is "scanned" line by line along one direction and trapezoids are found that cover the free space completely. Afterward, the trapezoids are covered by simple back-and-forth motion patterns.

*Autonomous Mobile Mapping Robots: Key Software Components DOI: http://dx.doi.org/10.5772/intechopen.110549*

#### **Figure 2.**

*A processing pipeline for the generation of a coverage path plan for a single robot operating in an environment that is first mapped with the SLAM method.*

**Figure 3.**

*An example of trapezoidal and boustrophedon decompositions into cells together with their Reeb graphs (bottom). The latter decomposition allows for a better sweeping pattern fit (the dashed line).*

The main drawback of this simple decomposition is the fact that it generates only convex polygons, and therefore, it results in a large number of polygons and suboptimal sweeping patterns (cf. **Figure 3**). Another possibility is to apply the so-called *boustrophedon cellular decomposition* (BCD), which also generates nonconvex cells [36]. However, these polygons can also be covered only by zig-zag motions, usually in a more efficient way than for the trapezoidal decomposition. It should be noted that some more sophisticated decompositions have been proposed in the literature, for example, [37]. In such an approach, optimization is focused on the decomposition process itself. Here, however, we optimize the path by finding proper sweeping patterns at a later stage.
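For a single rectangular cell, the back-and-forth sweeping pattern can be sketched as follows (a deliberately simplified example: real boustrophedon cells are general, possibly nonconvex polygons, and the sweep direction is optimized per cell):

```python
def zigzag(x_min, x_max, y_min, y_max, r_cov):
    # Back-and-forth sweep waypoints covering an axis-aligned rectangle
    # with a tool of coverage radius r_cov. Adjacent sweep lines are
    # spaced 2 * r_cov apart, so the swaths just touch.
    step = 2.0 * r_cov
    waypoints, y, forward = [], y_min + r_cov, True
    while y <= y_max - r_cov + 1e-9:
        xs = (x_min, x_max) if forward else (x_max, x_min)
        waypoints.append((xs[0], y))
        waypoints.append((xs[1], y))
        y += step
        forward = not forward       # reverse direction on every line
    return waypoints

wp = zigzag(0.0, 4.0, 0.0, 1.0, 0.25)
print(wp)  # [(0.0, 0.25), (4.0, 0.25), (4.0, 0.75), (0.0, 0.75)]
```

The number of turns, and hence the coverage time, depends on the cell's extent perpendicular to the sweep direction, which is why the sweep direction and entry point are optimized in Section 3.2.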

After the decomposition, a set of polygons is obtained. The geometric relation between these polygons may be represented by the so-called Reeb graph, a special type of graph for environment representation in which each link corresponds to a polygon and each node represents an adjacency between polygons. Consequently, the problem may be treated with the help of existing solutions known from graph theory: the optimization problem can be formulated as a variant of the traveling salesman problem and handed to known efficient solvers.

#### **Figure 4.**

*For each cell, the coverage time is determined by sweeping direction and entry point into the cell (i.e., start and end vertices). The figure depicts some possible entry/exit points and directions for a sample trapezoid.*

#### **3.2 Finding near-optimal sweeping patterns**

Let us consider the covering process for a single polygon. Usually, in order to minimize the coverage time, one needs to minimize the number of turns since, for each turn, a robot must slow down, stop, turn, and accelerate again to its maximum velocity. For a single polygon, various points of entrance for the zig-zag pattern should be considered (see **Figure 4**).

After the decomposition process, the entire environment is represented as a Reeb graph. The order of visiting all the links, that is, polygons, determines entry points to each of them and, therefore, the time cost associated with covering it. It can be shown that finding the minimum of the total time required for covering all the polygons is equivalent to solving the equality generalized traveling salesman problem (EG-TSP) [38]. This is an NP-hard problem for which approximate heuristic solvers may be applied. The results presented in this section are obtained with the use of a memetic solver, as proposed in [38].
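The memetic EG-TSP solver of [38] is beyond the scope of this chapter; as a toy stand-in, a greedy nearest-neighbor tour over the cells illustrates the kind of ordering problem being solved (here each cell is reduced to a single entry point, so the "generalized" aspect of EG-TSP disappears):

```python
def greedy_tour(costs, start=0):
    # Visit all cells, always moving to the cheapest unvisited cell next.
    # costs[i][j] is the transition cost between cells i and j.
    n = len(costs)
    tour, visited = [start], {start}
    while len(tour) < n:
        cur = tour[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: costs[cur][j])
        tour.append(nxt)
        visited.add(nxt)
    return tour

# Hypothetical pairwise transition costs between four cells.
costs = [[0, 2, 9, 1],
         [2, 0, 6, 4],
         [9, 6, 0, 3],
         [1, 4, 3, 0]]
print(greedy_tour(costs))  # [0, 3, 2, 1]
```

Greedy construction gives no optimality guarantee; the memetic solver additionally chooses, for each cell, one of several entry/exit vertices (**Figure 4**) while searching for a near-optimal tour.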

#### **4. Example applications**

#### **4.1 Robust LiDAR-inertial odometry and multiview NDT**

**Figure 5** demonstrates the result of FAST-LIO [3] as the 3D point cloud of an underground garage recorded using the Livox AVIA LiDAR. This robust LiDAR-inertial odometry provides the input for the multiview NDT shown in **Figure 6**.

#### **4.2 Pose graph SLAM**

This section demonstrates the pose graph SLAM functionality available with data in [2]. **Figure 7** demonstrates the 2D case and **Figure 8** is related to the 3D case. Pose graph SLAM implementation efficiently solves the optimization problem represented as a consecutive set of poses (trajectory) connected via odometry readings (edges) and loop closure edges. This is a core component of the autonomous mobile mapping robot.

#### **4.3 Final refinement with NDT**

Multiview normal distributions transform 3D data registration is capable of increasing the accuracy of the 3D map, as shown in **Figure 6**. The implementation is rather offline, since it requires extensive computation, related mostly to the 3D data decomposition, where for each 3D bucket the mean value and covariance are calculated. This method is efficient mostly for urban environments with many planar shapes.

**Figure 5.** *Result of robust LiDAR-inertial odometry FAST-LIO [3] as the 3D point cloud of an underground garage.*

### **4.4 Path planning**

As an example of a real-world application of a coverage task, let us consider the cleaning process of an underground garage. First, the garage is scanned with 3D laser scanners; the resulting map is optimized and flattened to a 2D obstacle map (see **Figure 9**, top). Afterward, the map is used for robot motion planning and navigation.

The result of the boustrophedon decomposition for this map is shown in **Figure 9** (middle). In this particular case, it consists mostly of rectangles. The next stage is to find the covering path that connects all these polygons. In order to formally state the optimization goal, one needs to define the kinematic characteristics of the robot. Here, we assume realistic parameters: $a_{\max} = 0.3\ \mathrm{m\,s^{-2}}$ and $v_{\max} = 1\ \mathrm{m\,s^{-1}}$. This corresponds to real devices that are being used for cleaning tasks [35]. After using the memetic solver for the associated EG-TSP, a trajectory for the robot is obtained; it is shown at the bottom of the figure. For the depicted scale and the assumed kinematic model, the total coverage time is 2302 s in this case.
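Under this kinematic model, the time to traverse a straight sweep segment (starting and ending at rest, which is why minimizing the number of turns matters) can be estimated as follows (a sketch using the parameter values from the text):

```python
import math

def segment_time(d, v_max=1.0, a_max=0.3):
    # Travel time for a straight segment of length d, starting and
    # ending at rest, with limits v_max [m/s] and a_max [m/s^2].
    d_acc = v_max * v_max / a_max   # distance spent accelerating + braking
    if d >= d_acc:
        # Trapezoidal profile: accelerate, cruise at v_max, decelerate.
        return 2.0 * v_max / a_max + (d - d_acc) / v_max
    # Triangular profile: v_max is never reached on a short segment.
    return 2.0 * math.sqrt(d / a_max)

print(segment_time(10.0))  # a 10 m sweep line takes about 13.3 s
print(segment_time(1.0))   # a short hop is dominated by acceleration
```

Summing such segment times (plus turning costs) over a candidate tour yields the coverage-time objective minimized by the EG-TSP solver.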

In order to validate the approach for path planning and to estimate its usefulness, one should use a large number of maps, perform planning, and measure efficiency. One possibility is to use a synthetic albeit realistic set of layouts provided by Li et al. [39]. Based on real experiments with LiDARs, the authors have developed a method to generate about 60,000 various maps, which can be used by researchers to test various algorithms. Some example layouts together with the planned coverage paths are depicted in **Figure 10**.

#### **Figure 6.**

*Top: input data produced by robust LiDAR-inertial odometry FAST-LIO [3] from Figure 5. Bottom: result of final refinement with multiview NDT.*


#### **Figure 8.**

*Top: input data for pose graph SLAM (purple dots: graph vertices; blue lines: graph edges). Bottom: result of pose graph SLAM, 3D case.*

#### **5. Conclusion**

This chapter elaborates key software components of autonomous mobile mapping robots equipped with the Livox AVIA LiDAR, a new LiDAR with a nonrepetitive scanning pattern that is also equipped with an IMU. Since the LiDAR and IMU are synchronized, this advantage is addressed by the robust LiDAR-inertial odometry framework FAST-LIO, which improves unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) in 3D mapping scenarios. Our study incorporates FAST-LIO as the front end of SLAM. The main focus is a lightweight back-end implementation of pose graph SLAM, an alternative solution to the state-of-the-art g2o and GTSAM implementations. We also elaborate on the iterative closest point, the normal distributions transform, and their extension for multiview 3D data registration/refinement. The implementation is based on C++ and the Eigen library. This chapter also discusses path planning in an already mapped environment. All software components are available as open-source projects. This chapter provides insights into useful software components for building autonomous mobile mapping robots.

**Figure 9.**

*Top: a grid representing obstacles in an underground garage. Pixel size is 3 cm × 3 cm. Middle: the result of a boustrophedon decomposition. Bottom: a trajectory for a single robot.*

**Figure 10.**

*Covering path plans for a small subset of the realistic, synthetic dataset of building layouts [39]. The covering area is a circle with diameter equal to 0.5 m. Notice the various scales of the x and y axes and, consequently, the various times required for complete coverage (indicated above the plots).*
