Unconventional Trajectories for Mobile 3D Scanning and Mapping

*Fabian Arzberger, Jasper Zevering, Anton Bredenbeck, Dorit Borrmann and Andreas Nüchter*

*DOI: http://dx.doi.org/10.5772/intechopen.108132*

#### **Abstract**

State-of-the-art LiDAR-based 3D scanning and mapping systems focus on scenarios with good sensing coverage, such as drones, wheeled robots, cars, or backpack-mounted systems. In some scenarios, however, more unconventional sensor trajectories come naturally, e.g., rolling, descending, or oscillating back and forth, and the literature on these is relatively sparse. As a result, most existing implementations are unable to solve the SLAM problem under such conditions. In this chapter, we propose a robust offline-batch SLAM system that addresses these more challenging trajectories, which are characterized by shallow angles of incidence and a limited field of view (FOV) while scanning. The proposed SLAM system is an upgraded version of our previous work and takes raw points and prior pose estimates as input; the latter, however, are subject to large amounts of drift. Our approach is a two-staged algorithm: in the first stage, coarse alignment is quickly achieved by matching planar polygons; in the second stage, a graph-based SLAM algorithm further refines the result. We evaluate the mapping accuracy of the algorithm on our own recorded datasets using high-resolution ground truth maps obtained with a terrestrial laser scanner (TLS).

**Keywords:** 3D LiDAR, mobile mapping, scanning, spherical robot, pendulum, descent, small FOV, Livox, Intel, RealSense

#### **1. Introduction**

Mobile systems increasingly gain astonishing capabilities when it comes to 3D sensing, mapping, and environment reconstruction (**Figure 1**). Nowadays, many mobile systems in different shapes and sizes are able to perform these tasks, e.g., drones, wheeled or tracked robots, or backpack-mounted systems, to name a few. A key aspect of fulfilling their purpose is the estimation of the system's local frame of reference, i.e., the system's local coordinate system, with respect to a global reference frame. This global reference frame has its origin somewhere in the respective environment and is usually initialized with the system's starting position and orientation (pose). By expressing the local measurements in the global frame, the system is able to create a map of the environment while localizing itself in it. This process is called simultaneous localization and mapping (SLAM) and only works if the initial pose estimate of the system is sufficiently accurate. The types of robot

#### **Figure 1.**

*(Left) post-processed 3D point cloud acquired by a sensor rotating freely while being descended with a crane from a fire truck. Prior pose estimates are available from three IMUs and an angular encoder, which encodes the rotation of the cable reel. (Center) the fire truck. (Right) the sensor, mounted to the crane of the fire truck.*

designs mentioned earlier are often favored since they have access to quite accurate prior pose estimates (GNSS, odometry, visual odometry, etc.) and reliable coverage while sensing the environment with cameras or laser scanners. In such conditions, it is typically easy to perform laser-based SLAM, either online (for autonomous mobile robots) or as a post-processing step. Yet, there are still many situations where conditions are poor, introducing uncertainties into the prior pose estimates and thus degrading SLAM performance. Visual feature tracking with a camera, for example, only works in good lighting conditions, and GNSS might not be available. Using IMUs as the only pose estimation device is usually not an option either, due to the large accumulation of measurement errors caused by noise and drift, which makes position estimation difficult. Even high-end devices, e.g., in aviation systems, must be combined with other references such as GNSS in order to be reliable. The ability of LiDAR-based SLAM algorithms to deal with such degradation puts constraints on current mobile system designs, as well as on their applications. Therefore, extending the capabilities of SLAM algorithms to more unusual scenarios opens up interesting design choices for mobile systems, especially in hazardous environments. The intention of this chapter is to provide a general and robust LiDAR-based solution to the SLAM problem that is independent of the executed trajectory and sensor setup. We note the utilization of a Livox Mid-40 scanner, which is considered a solid-state LiDAR in [1]. Yet in [2], the family of Livox scanners is only considered to be "semi solid-state" LiDARs, due to their non-repetitive scanning pattern. Solid-state LiDAR has received a lot of attention in the past years due to "their superiority in cost, reliability, and […] performance against the conventional mechanical spinning LiDARs […]" [3].
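The mapping step described above, expressing local measurements in the global frame using the estimated pose, can be sketched in a few lines. The following is a minimal illustration for a planar 3-DOF pose (the chapter's setups involve up to six DOF); the function names are illustrative, not part of the authors' implementation:

```python
import math

def pose_to_matrix(x, y, yaw):
    # Homogeneous 2D rigid-body transform for a pose (x, y, yaw).
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, x],
            [s,  c, y],
            [0,  0, 1]]

def to_global(points_local, pose):
    # Express sensor-local measurements in the global map frame:
    # p_global = R * p_local + t
    T = pose_to_matrix(*pose)
    out = []
    for px, py in points_local:
        gx = T[0][0] * px + T[0][1] * py + T[0][2]
        gy = T[1][0] * px + T[1][1] * py + T[1][2]
        out.append((gx, gy))
    return out

# A point measured 1 m ahead of a sensor located at (2, 0) and
# facing 90 degrees ends up at roughly (2, 1) in the map frame:
mapped = to_global([(1.0, 0.0)], (2.0, 0.0, math.pi / 2))
```

Aggregating such transformed points over all poses yields the map; errors in the poses distort it, which is exactly what the SLAM stages later in the chapter correct.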
While traditional LiDAR relies on electro-mechanical parts that move the sensor head, solid-state LiDAR relies on micro-electro-mechanical systems (MEMS), optical phased arrays (OPA), or Risley prisms. Despite their potential advantages, solid-state laser scanners impose new challenges on established SLAM algorithms, e.g., a small FOV, an irregular, non-repetitive scanning pattern that makes feature extraction more difficult, and increased motion blur. In this work, we address these challenges by assuming that planar structures are available in the environment. Due to the utilization of planes, the envisioned applications are in human-made environments, e.g., old buildings in danger of collapsing, narrow underground tunnels, construction sites, or mining shafts. Many mobile mapping systems in the current literature have access to accurate pose estimates as well as good sensing coverage. We present four notably interesting experimental setups that do not meet those conditions. The main sensor in each scenario is a Livox Mid-100 laser scanner, which is inconvenient due to its limited FOV and unique scanning pattern. **Figure 2** illustrates the different executed motions. (1) We descend a freely rotating 3D sensor from a crane. Prior pose estimates are available from three IMUs and a rotary encoder on the cable reel. (2) Further, we use the sensor as a pendulum, oscillating back and forth while walking. The robot obtains prior pose estimates from an Intel RealSense T265 tracking camera, which uses its own internal IMU. (3) Moreover, we roll the sensor around on flat ground. An IMU-only approach with three units estimates the pose of the system [4]. The approach combines two popular filters (the Madgwick and the complementary filter) and estimates the position by relating the rotational velocity and radius of the sphere to its traveled distance. Further constraints in the filter account for slippage and sliding effects of the sphere. (4) Finally, we repeat the previous experiment but substitute the 3D sensor with a 2D sensor, which measures parts of the environment in planar slices. Rotating the planar slice thus results in

#### **Figure 2.**

*Illustration of the executed sensor trajectories (left) and images of system setups during operation (right).*

a 3D reconstruction of the scene. For pose estimates, we use a rotary encoder and a single IMU. Section 4 describes all experimental setups in more detail. The prior pose estimates are subject to a significant amount of noise and drift. Some scenarios are more difficult than others, since the robot locomotion mechanism causes the sensor to move in five, or even all six, degrees of freedom (DOF). In this work, we refine our offline-batch SLAM system from [5] to make it more robust, such that it is able to address the unfavorable conditions imposed by more challenging trajectories. Section 3 introduces the two-staged SLAM algorithm, where the first stage uses a polygon-based approach for fast coarse alignment and the second stage uses a graph-based method for slower, finer refinement. We evaluate the accuracy of the resulting maps by means of ground truth point clouds, which are available from highly precise terrestrial laser scanning (TLS). Note that in this initial study, no trajectory ground truth has been recorded; this is a task for future work. In particular, the contributions of this work are as follows:
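The position estimate for the rolling-sphere setup, which relates the rotational velocity and radius of the sphere to its traveled distance, follows from the no-slip rolling constraint: the sphere center moves with velocity v = r (ω × ẑ), where ω is the world-frame angular velocity and ẑ the ground normal. The actual filter in [4] additionally fuses three IMUs and accounts for slippage and sliding; the following is only a bare kinematic sketch of the dead-reckoning idea, with an illustrative function name:

```python
def roll_step(pos, omega, radius, dt):
    """Advance the sphere-center position by one IMU sample, assuming
    rolling without slipping on flat ground (z is the world up-axis).

    The no-slip constraint gives v_center = radius * (omega x z_hat),
    and (omega x z_hat) = (wy, -wx, 0)."""
    wx, wy, wz = omega          # world-frame angular velocity, rad/s
    vx = radius * wy
    vy = -radius * wx
    x, y = pos
    return (x + vx * dt, y + vy * dt)

# Rolling about the world y-axis at 1 rad/s for 1 s with r = 0.5 m
# moves the sphere center 0.5 m along +x:
pos = (0.0, 0.0)
for _ in range(10):
    pos = roll_step(pos, (0.0, 1.0, 0.0), 0.5, 0.1)
```

Integrating noisy gyroscope samples this way accumulates drift quickly, which is why the prior pose estimates in all four setups still require the SLAM refinement described in Section 3.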

