**2. Multi-robot SLAM**

SLAM concurrently carries out two processes, called localization and mapping. Mapping acquires a map of the robot's surrounding environment so that the robot can plan a collision-free path to its goal, and localization estimates the robot's own pose within the acquired map. Unfortunately, SLAM is not easy because the two processes depend on each other: localization requires a map as a reference to estimate the pose, while mapping requires a pose, consisting of location and orientation, as a reference point with which to represent the map. Much research has been conducted on performing SLAM efficiently, and several effective solutions have been proposed in recent years. Nevertheless, SLAM remains an open problem in terms of accuracy, reliability, and computational cost.
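The interdependence described above can be made concrete with a small sketch. The function and variable names below are hypothetical, and the example shows only the mapping half of the loop: each range reading is projected into the map *through the robot pose*, so any localization error directly corrupts the map.

```python
import math

def motion_update(pose, v, w, dt):
    """Dead-reckoned pose (x, y, theta) from a velocity command (v, w)."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt,
            y + v * math.sin(th) * dt,
            th + w * dt)

def mark_hit(grid, pose, bearing, rng, cell=0.5):
    """Mark the occupancy-grid cell struck by a range reading taken at `pose`.

    The hit position depends on the pose, which is the mapping half of the
    SLAM interdependence; localization would, conversely, match readings
    against `grid` to correct the pose.
    """
    x, y, th = pose
    gx = int((x + rng * math.cos(th + bearing)) / cell)
    gy = int((y + rng * math.sin(th + bearing)) / cell)
    grid[(gx, gy)] = grid.get((gx, gy), 0) + 1
    return (gx, gy)

pose = (0.0, 0.0, 0.0)
pose = motion_update(pose, v=1.0, w=0.0, dt=1.0)   # robot now near x = 1
grid = {}
cell = mark_hit(grid, pose, bearing=0.0, rng=2.0)  # obstacle ~3 m from origin
print(pose, cell)
```

If the dead-reckoned pose drifted, the same reading would be written into the wrong cell, which is why the two processes must be estimated jointly.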

Multi-robot SLAM performs the SLAM task with multiple robots so that localization and mapping can be completed more efficiently. An example configuration of two-robot SLAM is shown in **Figure 3**. Each robot conducts SLAM with its own sensors, and the global state is updated from the individual SLAM results gathered through the communication modules.
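As a minimal sketch of how the communicated results are fused, the snippet below (all names hypothetical) transforms landmarks from robot 2's local frame into robot 1's frame, assuming the relative pose of robot 2 in robot 1's frame is known; this plays the role of the global-state update in the configuration of Figure 3.

```python
import math

def to_global(points, rel_pose):
    """Transform 2-D points from robot 2's frame into robot 1's frame.

    rel_pose = (dx, dy, dth): translation and rotation of robot 2's
    frame expressed in robot 1's frame.
    """
    dx, dy, dth = rel_pose
    c, s = math.cos(dth), math.sin(dth)
    return [(dx + c * px - s * py, dy + s * px + c * py) for px, py in points]

map1 = [(1.0, 0.0), (2.0, 0.0)]      # robot 1's landmarks, already global
map2_local = [(1.0, 0.0)]            # robot 2's landmark, in its own frame
rel_pose = (0.0, 3.0, math.pi / 2)   # robot 2 is 3 m away, rotated 90 deg

merged = map1 + to_global(map2_local, rel_pose)
print(merged)
```

In practice the relative pose is itself uncertain and is estimated jointly with the maps, which motivates the probabilistic formulation given next.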

### **Figure 3.** *An example configuration of two-robot SLAM.*

Because of sensor errors, global state estimation in multi-robot SLAM is generally performed with probabilistic formulations. The estimation of the two-robot SLAM state in **Figure 3** can be formulated as follows:

$$\begin{split} &P(\mathbf{x}\_{1:t}^{1}, \mathbf{x}\_{s:t}^{2}, \mathbf{M} \mid \mathbf{z}\_{1:t}^{1}, \mathbf{u}\_{0:t-1}^{1}, \mathbf{x}\_{0}^{1}, \mathbf{z}\_{s:t}^{2}, \mathbf{u}\_{s:t-1}^{2}, \Delta\_{s}^{21}) \\ &= P(\mathbf{M} \mid \mathbf{x}\_{1:t}^{1}, \mathbf{z}\_{1:t}^{1}, \mathbf{x}\_{s:t}^{2}, \mathbf{z}\_{s:t}^{2})\, P(\mathbf{x}\_{1:t}^{1} \mid \mathbf{z}\_{1:t}^{1}, \mathbf{u}\_{0:t-1}^{1}, \mathbf{x}\_{0}^{1})\, P(\mathbf{x}\_{s:t}^{2} \mid \mathbf{z}\_{s:t}^{2}, \mathbf{u}\_{s:t-1}^{2}, \mathbf{x}\_{s}^{1}, \Delta\_{s}^{21}) \end{split} \tag{1}$$

where $\mathbf{x}\_{k:t}^{i}$ is the trajectory of robot $i$ at times $k, k+1, \cdots, t$, $\mathbf{M}$ is the merged map, $\mathbf{u}\_{k-1:t-1}^{i}$ is the sequence of actions executed by robot $i$, $\mathbf{z}\_{k:t}^{i}$ is the sequence of observations from robot $i$, and $\Delta\_{s}^{21}$ is the relative pose between the two robots at time $s$. Extended Kalman filters (EKF) [4] and Rao-Blackwellized particle filters (RBPF) [5] have been widely used as estimation methods for this probabilistic formulation. At the beginning of the estimation, the uncertainty of the state is large, but it is gradually reduced over time if observation measurements are acquired consistently and data association is performed properly. In particular, whenever a loop closure [6] occurs, the uncertainty of the state can be reduced significantly.
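The uncertainty reduction under consistent observations can be illustrated with a minimal 1-D particle filter, given below as a sketch with hypothetical names. A full RBPF would additionally carry one map per particle, which is omitted here; the example only shows the predict-weight-resample cycle shrinking the particle spread.

```python
import math
import random

random.seed(0)

def step(particles, u, z, motion_noise=0.2, meas_noise=0.5):
    # Predict: propagate each particle through a noisy motion model.
    pred = [p + u + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: Gaussian likelihood of the observation z given each particle.
    w = [math.exp(-0.5 * ((z - p) / meas_noise) ** 2) for p in pred]
    total = sum(w)
    w = [wi / total for wi in w]
    # Resample: draw particles in proportion to their weights.
    return random.choices(pred, weights=w, k=len(pred))

particles = [random.uniform(-5.0, 5.0) for _ in range(500)]  # large initial uncertainty
true_x = 0.0
for _ in range(10):
    true_x += 1.0                                # robot moves 1 m per step
    particles = step(particles, u=1.0, z=true_x)

mean = sum(particles) / len(particles)
spread = max(particles) - min(particles)         # far below the initial 10 m span
print(round(mean, 2), round(spread, 2))
```

After ten consistent observations the particle cloud concentrates near the true position, mirroring how the state uncertainty in Eq. (1) contracts when data association is done properly.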
