*Unmanned Robotic Systems and Applications*

*A System for Continuous Underground Site Mapping and Exploration DOI: http://dx.doi.org/10.5772/intechopen.85859*

error. This becomes visible after long scan series (e.g. in the map of maxit, Krölpa, of approximately 800 m range).

*4.3.2 Global registration*

To minimize the errors of the pairwise registration of many point clouds, all point clouds are registered globally. Again, the algorithms are easily interchangeable; currently, mostly the GraphSLAM [24] algorithm after Lu and Milios [25] is used. It creates a graph of the connections between all overlapping point clouds and minimizes the alignment errors of all connections simultaneously.

*4.3.3 Conversion to 3D map*

As the representation of the 3D map, an OctoMap [26] is used. This offers the advantage of being able to query the information at various resolutions and to map the distinctions between free, occupied and unknown cells, which are important for navigation.

**Figure 4** shows the result of the 3D mapping process from an exploratory trip in the underground mine of maxit in Krölpa, Germany. The mobile robot scans a full-sphere point cloud every 10 m. The point cloud data and the odometry data are saved into a rosbag file and processed with the mapit workflow. **Figure 4a** shows a 2D occupancy grid of the mapped part of the mine, **Figure 4c** visualizes the point clouds themselves, and **Figure 4b** a 3D OctoMap computed from the point clouds.

**5. Visualization**

ROS includes the 3D visualization package RViz, which fuses sensor data, the robot model and other 3D data such as point clouds into a combined view. RViz uses several external libraries, such as the Ogre3D graphics library, for the 3D visualization. The 3D points of the point clouds can be visualized as small surface elements (surfels) or as boxes. A surface mesh could be computed with surface reconstruction algorithms, but this very expensive step is mostly avoided and approximated by choosing a suitable size for the surfels or boxes.

For the post-processing of the robot data, a graphical user interface (GUI) for mapit has been developed. The user can define a new project and select all relevant data over the client-server connection. Mapit provides algorithms to register point clouds, space decomposition algorithms for efficient computation and rendering algorithms for the visualization. All user-selected algorithms can be arranged and saved in a workflow, which can be visualized and edited in the GUI as a node graph. **Figure 5** shows the mapit GUI and the node graph used to calculate and render 13 point clouds from the ground floor of the main building of the Aachen University of Applied Sciences.

#### **Figure 5.**
*Visualization from the mapit GUI to render 13 registered point clouds.*

#### **5.1 Visualization from sensor and mapping data**

For almost all sensors attached to the exploration vehicle, ROS connections are provided for visualization. The calibration software of the SWAP platform allows a visualization of the point clouds recorded over time in near real-time. The raw data of the FARO laser scanner are also automatically processed on-board, on-site, and made available within ROS. This process takes a few seconds to a few minutes, depending on the volume of data. **Figure 6a** shows a combined visualization of the stop-and-go FARO scan point cloud data in gray values and the live sensor data over 2 seconds from the SWAP platform in color values. **Figure 6b** shows, in a top-down view, the real-time data from one Velodyne Puck scanner at 20 Hz in blue and red color values.

The exploration vehicle can scan automatically via the automation adapter of the FARO Focus laser scanner. A main disadvantage of this adapter is that all drivers for the FARO scanner support only the Windows operating system. To overcome this problem, several Windows applications were developed that can be triggered via ROS. First, a scanning application sends user-selected scan parameters over ROS to the scanner and starts the scanning process; the scanned data are stored in the proprietary FARO format on the hard disk. A second application converts this proprietary point cloud format into the open PCD format of the Point Cloud Library (PCL) and stores the result. Finally, the PCD files are loaded and visualized with RViz in ROS. Optionally, the PCD data can be filtered.
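The conversion step can be sketched as follows. This is a minimal illustrative sketch, assuming the points have already been decoded from the scanner's format into (x, y, z) tuples; the function name `write_ascii_pcd` is our own, and the real applications rely on the FARO SDK and PCL rather than hand-written I/O. The header fields follow the PCD v0.7 file format.

```python
"""Sketch: write an in-memory point cloud as an ASCII PCD v0.7 file.

Hypothetical stand-in for the conversion application described in the
text; the actual pipeline uses the FARO SDK and PCL.
"""


def write_ascii_pcd(points, path):
    """Write XYZ points (iterable of (x, y, z) tuples) as a PCD v0.7 file."""
    pts = list(points)
    n = len(pts)
    header = "\n".join([
        "# .PCD v0.7 - Point Cloud Data file format",
        "VERSION 0.7",
        "FIELDS x y z",          # one float per coordinate
        "SIZE 4 4 4",
        "TYPE F F F",
        "COUNT 1 1 1",
        f"WIDTH {n}",
        "HEIGHT 1",              # HEIGHT 1 marks an unorganized cloud
        "VIEWPOINT 0 0 0 1 0 0 0",
        f"POINTS {n}",
        "DATA ascii",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z in pts:
            f.write(f"{x} {y} {z}\n")
```

A file written this way can be loaded directly by PCL tools or an RViz point cloud display pipeline that accepts PCD input.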

The huge amount of point cloud data can be decomposed with an octree space partitioning algorithm (see **Figure 7**). This data structure is suitable for local collision detection, for downsampling the large amount of data and for representing the data at different resolutions. In particular, normal estimation requires a neighbor search around every point of the point cloud; this step, as well as the viewer-dependent resolution, can be computed more efficiently using the octree data structure.
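The subdivision-on-demand idea behind this decomposition can be sketched as follows. This is an illustrative toy, not the mapit or PCL implementation, and all names (`Octree`, `capacity`, `max_depth`) are our own: a node splits into eight children only once it holds more points than a small capacity, so empty space is never refined, and coarser tree depths double as lower-resolution views of the cloud.

```python
"""Toy octree: space is refined only where points actually fall
(illustrative sketch, not the mapit/PCL octree)."""


class Octree:
    def __init__(self, center, half_size, capacity=4, depth=0, max_depth=8):
        self.center, self.half = center, half_size
        self.capacity, self.depth, self.max_depth = capacity, depth, max_depth
        self.points = []      # points stored in this node while it is a leaf
        self.children = None  # 8 children, created only on demand

    def insert(self, p):
        if self.children is not None:
            return self._child_for(p).insert(p)
        self.points.append(p)
        if len(self.points) > self.capacity and self.depth < self.max_depth:
            self._subdivide()
        return True

    def _child_for(self, p):
        # Octant index from the sign of each coordinate relative to center.
        idx = (p[0] > self.center[0]) \
            | ((p[1] > self.center[1]) << 1) \
            | ((p[2] > self.center[2]) << 2)
        return self.children[idx]

    def _subdivide(self):
        h = self.half / 2
        self.children = [
            Octree((self.center[0] + (h if i & 1 else -h),
                    self.center[1] + (h if i & 2 else -h),
                    self.center[2] + (h if i & 4 else -h)),
                   h, self.capacity, self.depth + 1, self.max_depth)
            for i in range(8)
        ]
        pts, self.points = self.points, []
        for q in pts:                      # push points down one level
            self._child_for(q).insert(q)

    def leaf_count(self):
        if self.children is None:
            return 1
        return sum(c.leaf_count() for c in self.children)
```

Because only occupied octants are subdivided, a neighbor search for normal estimation can restrict itself to the few leaves around a query point instead of scanning the whole cloud.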

#### **Figure 6.**

*Live sensor data from the SWAP platform of the exploration vehicle at the maxit mine, Krölpa, Germany. (a) Gray values: stop-and-go data from a FARO Focus3D X 130; color values: live data from the SWAP platform over 2 s. (b) Top-down perspective of Velodyne VLP-16 Puck data, 20 Hz, in blue and red.*
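Reducing such dense clouds to a viewer-appropriate resolution can be illustrated by a simple voxel-grid filter; this is a sketch of the basic idea only (the function name `voxel_downsample` and parameter `leaf_size` are our own, not mapit or PCL API): every point is binned into a cube of edge length `leaf_size`, and each occupied cube is replaced by the centroid of its points.

```python
"""Illustrative voxel-grid downsampling: one representative point
(the centroid) per occupied cube of edge length leaf_size."""
from collections import defaultdict
from math import floor


def voxel_downsample(points, leaf_size):
    bins = defaultdict(list)
    for x, y, z in points:
        key = (floor(x / leaf_size), floor(y / leaf_size), floor(z / leaf_size))
        bins[key].append((x, y, z))
    # Centroid of each occupied voxel.
    return [
        tuple(sum(c) / len(pts) for c in zip(*pts))
        for pts in bins.values()
    ]
```

Choosing a larger `leaf_size` for distant regions is one way to realize the viewer-dependent resolution mentioned above.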

#### **Figure 7.**
*Spatial decomposition of the point cloud with an octree data structure. The points are organized in a hierarchical fashion where the resolution of space is increased only when the space actually contains data points.*

An interface from mapit to ROS allows a generated map to be returned to the context of the exploration vehicle. This allows current sensor data to be visualized in a corresponding map section with appropriate resolution during a re-exploration.

#### **5.2 Virtual reality integration with CuteVR**

The CuteVR library was developed as a bridge between the virtual reality hardware drivers and the end-user software, with the aim of providing a consistent interface. CuteVR is highly modular, builds on the cross-platform application framework Qt, and can be extended with relatively little effort thanks to its class structure. An event system and differentiated error handling also make it easier to handle highly dynamic VR scenes.

It forms the basis for a VR plugin for RViz, which allows the viewing of all sensor data in VR. This allows the user to navigate in the virtual reality world next to the exploration vehicle (see **Figure 8**). CuteVR unifies the interfaces to VR devices and can be expanded for future VR hardware.

Based on CuteVR, the ROS package vr\_tools was developed, which integrates VR hardware and VR concepts in ROS. The core component is the head-mounted display (HMD) plugin for RViz, with which one or more users can look around and move around in an RViz scene. This allows intuitive and true-to-scale viewing of 3D sensor data in the virtual space.

Since RViz itself does not provide structures for the spatial distribution of large amounts of data, such as octrees, it was decided instead to filter the data stream to RViz and adapt it on the fly.

Furthermore, the states of VR input devices are made available in ROS and can thus interact with other programme components. With additional VR setups, multiple users can view the same scenes simultaneously from different perspectives via Multi-View.

#### **Figure 8.**
*Virtual reality visualization with CuteVR using the HTC Vive VR headset.*

**6. Discussion**

In this chapter, we presented a system for continuous mapping and exploration of underground sites. Most of this work was developed as part of the project "Underground 4D+ Positioning, Navigation and Mapping System for Highly Selective, Efficient and Highly-secure Exploitation of Important Resources" (UPNS4D+), which was funded by the German Federal Ministry of Education and Research within the programme "R4–Innovative Technologies for Resource Efficiency – Research for the Provision of Raw Materials of Strategic Economic Importance". We first reported on the hardware platform that was built to acquire comparably densely populated 3D point clouds of the (underground) environment using a rotating LiDAR device. Afterwards, we reported on the framework mapit, which is used to track and execute post-processing operations on the data acquired by the robot. Namely, it allows for registering a (large) set of individual maps into one global map. The important aspect is that mapit does not just store the resulting map and discard the original data; instead, it keeps track of the operations that were performed on the data and logs them. This allows for re-applying all post-processing in case an algorithm has been improved or a misalignment in the calibration of the sensor setup has been detected. Finally, we presented the options for visualizing the resulting maps in different contexts.

The system described in this chapter provides diverse support for (first) responders in search and rescue applications. For one, the resulting maps can be used to conduct further missions with rescue robots. Also, analysis tools can be run on the maps. For example, the mapit framework supports running algorithms to compare maps from different points in time to see which changes have occurred. The versatile visualization capabilities allow for planning rescue missions and training first responders before sending them into the field.

**Acknowledgements**

This research was funded by the German Federal Ministry of Education and Research within the programme "R4–Innovative Technologies for Resource Efficiency – Research for the Provision of Raw Materials of Strategic Economic Importance".

