Synchronous and Asynchronous Communication Modes for Swarm Robotic Search 319

## **3.1.3 Parallel evaluate and synchronous update**

The most obvious parallel implementation of PSO is simply to evaluate the fitness of the particles at each iteration in parallel, without changing the overall logic of the algorithm itself (Venter and Sobieszczanski-Sobieski, 2006). The synchronous property refers to all particles being sent to the parallel computing environment, with the algorithm moving from the current iteration to the next only after the fitness of all particles has been obtained (Schutte *et al.*, 2004). To better demonstrate the internal logic, we illustrate with a flow chart rather than pseudo code, as shown in Figure 3 (Koh *et al.*, 2006). In this case, load imbalance in the computing environment may significantly degrade parallel performance. The contributing factors are:

• a heterogeneous distributed computing environment, where processors with varying computational speed are combined into a parallel computing environment;

• the time spent in fitness evaluation, e.g., when a numerical simulation is used to evaluate each particle and the required simulation time depends on the particle being analyzed;

• the number of particles cannot be equally distributed among the processors in the computing environment, i.e., the swarm size is not an integer multiple of the number of processors (Koh *et al.*, 2006).

Fig. 3. Visual sketch of a version of PSO with parallel fitness evaluation and synchronous velocity and position update.

#### **3.1.4 Parallel evaluate and asynchronous update**

Asynchronous parallel implementations of PSO can enhance the computational efficiency of the algorithm (Venter and Sobieszczanski-Sobieski, 2006). The asynchronous approach does not need a synchronization point to determine new velocities and positions, as shown in Figure 4 (Koh *et al.*, 2006). The optimization can proceed to the next iteration without waiting for the completion of all function evaluations from the current iteration.
Fig. 4. Visual sketch of a version of PSO with parallel fitness evaluation and asynchronous velocity and position update.

As stated above, different versions of PSO have different algorithmic properties when implemented. The most desirable properties that we would like to transfer to our swarm robotic system are parallel and asynchronous control (Ridge *et al.*, 2005). Indeed, as a nature-inspired algorithm, the parallel and asynchronous version of PSO executes more efficiently.
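The distinction can be illustrated with a small Python sketch. Here a hypothetical `evaluate_fitness` function stands in for the expensive simulation-based evaluation; none of these names come from the original algorithm, and this is only a sketch of the two update timings, not the chapter's implementation:

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate_fitness(position):
    # Hypothetical stand-in for an expensive simulation-based evaluation.
    return sum(x * x for x in position)  # sphere function

particles = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(8)]

# Synchronous: a barrier -- no particle moves on until every fitness is known.
with ThreadPoolExecutor() as pool:
    fitnesses = list(pool.map(evaluate_fitness, particles))
best_sync = min(fitnesses)

# Asynchronous: the shared best is updated as soon as each result arrives,
# so no synchronization point is needed before the next update.
best_async = float("inf")
with ThreadPoolExecutor() as pool:
    futures = {pool.submit(evaluate_fitness, p): p for p in particles}
    for done in as_completed(futures):
        best_async = min(best_async, done.result())

assert best_sync == best_async  # same sweep, different update timing
```

In the synchronous variant the slowest evaluation delays every particle; in the asynchronous variant each result is folded into the shared best as it completes, which is the source of the efficiency gain discussed above.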

#### **3.2 Communication modes taken in swarm robotic search**

Now, let us examine the case of swarm robotic search in a closed, obstacle-free environment. According to the analysis above, the robots should be controlled in a fine-grained way: individual robots detect target signals independently and at the same time, determining the best-found position by comparing signal intensities with their respective TVCS neighbors (Xue and Zeng, 2008b). Due to heterogeneous hardware caused by the parameter distribution of the sensors, and to differences in detection and evaluation time among positions caused by signal diffusion, the robots that complete signal detection early have to wait for the synchronous update, because the update depends on the slowest robot (Schutte *et al.*, 2004). In this way, the velocities and positions of all robots are updated at the same time, after all evaluations are fully completed.

Here, the asynchronous communication mode means that, in the iteration process, each robot compares its newly detected signal with the optimal value of the swarm immediately after iterating; if its detected signal is found to be stronger, it updates the optimal value of the swarm at once. Thereby, the other robots can share the experience in a timely manner, without having to wait until some given synchronous moment, which realizes the non-synchronization of the robots in the search process.

## **3.2.1 Communication trigger**

Clearly, the key to an asynchronous implementation of the control algorithm is to separate the individual update behavior from the group update behavior, so that their different properties can be taken into account. These update behaviors include updating the individual robot's history and updating the shared information within the swarm's history. Similarly, in asynchronous particle swarm optimization, the individual update starts right after fitness evaluation, while the swarm update starts at the end of each iteration (Venter and Sobieszczanski-Sobieski, 2006). For target search with swarm robots, detection of target signals depends only on the robots' respective on-board processors rather than on a processing center; in other words, such a center does not exist in a swarm robotic system. The processors work independently and in parallel with one another. An individual robot updates its velocity, position and history as soon as it completes target signal measurement and decides on the best-found position by comparison with the best of its TVCS (Zhao *et al.*, 2005). The update of the shared information, however, should start in accordance with some special asynchronous control strategy. In fact, this is the decision on communication triggers.

Differing from the ideal particles in PSO, a robot in the real world possesses mass, which causes it to have inertia when it moves about in the search environment. Therefore, while a particle in PSO can reach a given evolution position at unlimited speed, a robot may need several sampling periods to arrive at the same position due to kinematic and dynamic constraints, because the evolution position is only an expected one (Pugh and Martinoli, 2007). These factors should be taken into consideration when we design the asynchronous interaction strategies. Based on this, some update strategies have been developed. One is the communication cycle-based control principle, where the communication cycle is counted in evolution iterations. Similar to coarse-grained parallel particle swarm optimization, we can make robot *Ri* communicate every *n* iterations to decide the best-found position within robot *Ri*'s TVCS (Huang and Fan, 2006; Wang *et al.*, 2007; Zhao *et al.*, 2005). To improve systemic efficiency, a communication cycle can be assigned to a fixed number of sampling periods (Xue *et al.*, 2009), and different robots can be allowed to have different sampling frequencies. On the other hand, the best-found fitness value and position of the TVCS should be kept in memory before the next iteration starts. Another update strategy is the evolution position-based control principle (Xue and Zeng, 2009). According to this principle, the shared information is not updated in the current iteration until the previously expected evolution position has been reached. That is to say, robots communicate when they arrive at the decisive expected (desired) evolution positions, regardless of the iteration history and of whether the next iteration is required. With no communication between two consecutive ideal evolution positions, a robot moves continuously, saving power and decreasing the time spent on communication.
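The two trigger principles above can be sketched minimally in Python. The names `should_communicate`, `reached_expected` and the tolerance `tol` are illustrative assumptions for this sketch, not part of the original control algorithms:

```python
# Cycle-based trigger: robot Ri shares best-found information with its
# TVCS neighbors only every `comm_cycle` evolution iterations.
def should_communicate(iteration, comm_cycle=3):
    # comm_cycle may differ per robot, or be tied to sampling periods.
    return iteration % comm_cycle == 0

# Position-based trigger: robot Ri communicates only once it has actually
# reached the expected evolution position (inertia may take several
# sampling periods to get there), within a tolerance `tol`.
def reached_expected(position, expected, tol=0.5):
    return all(abs(p - e) <= tol for p, e in zip(position, expected))

triggered = [i for i in range(1, 13) if should_communicate(i)]
# With comm_cycle = 3 the robot communicates at iterations 3, 6, 9, 12;
# between triggers it keeps moving without communication overhead.
```

Either predicate gates the TVCS update step of the control loop; between triggers the robot keeps moving continuously, which is what saves power and communication time.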

### **3.2.2 Synchronous case in swarm robotic search**

The difference between synchronous and asynchronous control lies in their update types and update opportunities (Zhao *et al.*, 2005). In the synchronous mode, the update time points depend on the last particle completing fitness evaluation at each step, so communication triggers do not need to be considered. By comparison, since the updates of all robots are synchronized within the same iterative procedure, they all aim at the optimal value of the current iteration step; after all the updates are completed, all robots proceed towards the goal synchronously and accomplish the search task with a common, integral cognitive level.

## **3.3 Algorithm description**

The corresponding algorithms with the different communication modes are listed in the following. The synchronous communication version follows the characteristics of the processing moments of signal detection, search-completion judging, and velocity and position updating; see Algorithm 3. Different from the synchronous communication mode, the moment at which a robot updates the shared information of its TVCS is more flexible in the asynchronous communication mode. The details of the corresponding algorithm can be found in Algorithm 4.

**Algorithm 3** Controlling robot *Ri* involved in swarm robotic search with synchronous communication mode. For convenience, the default variables are for *Ri* unless those are marked explicitly for TVCS.

```
1: initialize
2:   j ← 1
3:   velocity V(t = 0), position X(t = 0)
4:   target signals measurement I(t = 0)
5:   Ibest ← I(t = 0), Xbest ← X(t = 0)
6:   Ibest ← I(t = 0), Xbest ← X(t = 0) for TVCS
7: while target is not found out
8:   for i = 1; i <= popsize; i++
9:     calculate expected and real velocities
10:    calculate position
11:  end
12:  target signals measurement
13:  update Ibest and Xbest
14:  update Ibest and Xbest of TVCS
15:  calculate velocity
16:  move ahead one step
17:  j ← j + 1
18: end
```

**Algorithm 4** Controlling robot *Ri* involved in swarm robotic search with asynchronous communication mode. Note that symbols share the same meanings as in Algorithm 3.

```
1: initialize
2:   set counter j ← 1
3:   velocity V(t = 0), position X(t = 0)
4:   target signals measurement I(t = 0)
5:   Ibest ← I(t = 0), Xbest ← X(t = 0)
6:   Ibest ← I(t = 0), Xbest ← X(t = 0) for TVCS
7:   calculate number of neighbors neighbor_number in TVCS
8: while target is not found out
9:   for i = 1; i <= neighbor_number; i++
10:    target signals measurement
11:    if best-found is gotten
12:      update Ibest and Xbest
13:    end
14:    calculate expected Vexpec and real velocity Vreal
15:    calculate X
16:  end
17:  target signals measurement
18:  update Ibest and Xbest
19:  update Ibest and Xbest for TVCS
20:  calculate velocity
21:  move ahead one step
22:  j ← j + 1
23: end
```

## **4. Simulations**

The two control algorithms are each performed and repeated for 10 runs: both are characteristically random searches in nature, so the comparison must be made with statistics gathered from enough simulations.

## **4.1 Parameter settings**

The parameters of the working environment, the individual robot and the robot swarm affect the results directly. The important parameters and their common configurations used in the simulations are therefore given in Table 1; further information on the symbols can be found in the third column of that table.

| Symbol | Value | Meaning |
|---|---|---|
| *Space* | 500 × 500 | size of searching environment |
| *StartArea* | 160 × 160 | start area for robots at the beginning of simulations |
| *popsize* | 3, 5, 8, 10 | number of individual robots in swarm robotic system |
| *Rdetec* | 250, 125 | maximum detection radius of sensors |
| *Rcomm* | 250 | maximum communication radius of robot |
| *Vmax* | 5 | maximum moving velocity of robot |
| *P* | 1600 | signals power emitted from target |
| *T* | 70 | constant of inertia element |
| Δ*t* | 0.8 | contracted factor for a moving step of robots |

Table 1. Parameter settings taken in simulations. Note that those parameters expressed in certain dimensions are assigned a corresponding proper one respectively.

## **4.2 Performance metrics**

In order to comparatively evaluate the running performance of the control algorithms with the different communication modes, some quantitative metrics need to be designed in advance. Here, two performance figures are considered and presented. Both are based on the definition of search success. We define this term as the best position of the swarm being close enough to the target that at least one robot can identify the target with some sensory ability (Kowadlo *et al.*, 2006). It means that if one or more robots approach and reach the area where the target locates, the run is considered successful.

#### **4.2.1 Search efficiency**

Search efficiency is defined as the reciprocal of the mean steps required for one successful search. In fact, it concerns search speed, counting the time steps for completion of a successful search, which indirectly indicates the elapsed time in a single simulation run. Because the sampling cycle in the simulations is determined in advance, a simple relation between time steps and elapsed time can be established. The value equals the reciprocal of the average steps taken by all individual robots in a successful search. Clearly, the more the average time steps, the lower the search efficiency, and vice versa.

## **4.2.2 Energy consuming**

This metric is a distance-based one. It is expressed as the sum of the distance travelled by all individual robots when the swarm robotic search task is completed. Since the energy consumption of a robot is fixed per distance unit, the average energy consumption of the individual robots can measure the economic efficiency of the algorithm. Compared with energy consumption, search efficiency seems more important in the evaluation of swarm robotic search control, because we care about higher algorithmic speed.

### **4.3 Results**

Simulations with different setting configurations are conducted and repeated for 10 runs respectively, to reduce the effect of the inherent randomness of the swarm intelligence-based control algorithms. The results are shown in the following figures and tables. Running screenshots of swarm robotic systems of different sizes at the moment the simulated programs first terminate are given in Figure 5. We can find that the robots start searching from the lower left corner of the working space at the beginning of each simulation, limited to an area of 160 × 160, while the target position is initialized at the same time, limited to the upper right corner of the searching environment. The statistics from the simulations are shown in Tables 2 – 5 and Figures 6 – 9.

| comm. mode | 3-rob swarm | 5-rob swarm | 8-rob swarm | 10-rob swarm |
|---|---|---|---|---|
| synchronous mode | 98.2 ±6.11 | 94.3 ±13.23 | 93.9 ±3.81 | 93.2 ±7.16 |
| asynchronous mode | 95.4 ±5.48 | 91.6 ±5.83 | 93.2 ±7.35 | 92.3 ±6.10 |

Table 2. Average control time steps spent in completion of target search under conditions of *Rdetec* = 250, *Rcomm* = 250.

| comm. mode | 3-rob swarm | 5-rob swarm | 8-rob swarm | 10-rob swarm |
|---|---|---|---|---|
| synchronous mode | 1165.3 ±74.52 | 1863.1 ±259.72 | 2965.3 ±119.51 | 3665.9 ±254.90 |
| asynchronous mode | 979.9 ±86.76 | 1681.2 ±144.11 | 2603.9 ±220.70 | 3315.4 ±186.62 |

Table 3. Average total distance spent by all robots for target search success under conditions of *Rdetec* = 250, *Rcomm* = 250.
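Applying the search-efficiency definition of Section 4.2.1 (the reciprocal of the mean time steps of a successful search) to the synchronous-mode averages of Table 2 gives the following quick sketch:

```python
# Average time steps for the synchronous mode (Table 2, Rdetec = Rcomm = 250),
# keyed by swarm size.
mean_steps = {3: 98.2, 5: 94.3, 8: 93.9, 10: 93.2}

# Search efficiency is the reciprocal of the mean steps of a successful
# search: more steps -> lower efficiency, and vice versa.
efficiency = {size: 1.0 / steps for size, steps in mean_steps.items()}

assert efficiency[10] > efficiency[3]  # larger swarms search more efficiently
```

This is just the metric definition applied to the tabulated means; the monotone trend it exposes is the same one discussed in Section 4.4.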

#### **4.4 Discussions**

We now comparatively analyze and discuss the simulation results, trying to reveal the effects of the different communication modes on swarm robotic search.

• As for the same task and the same parameter setting configurations, time steps decrease as swarm size increases, regardless of which communication mode is taken in the control algorithm. It indicates that search efficiency improves as the swarm expands.

• As for the same parameter setting configurations, the average total distance required for a successful search varies in the same direction as swarm size, regardless of which communication mode is taken in the control algorithm. It indicates that energy consumption grows as the swarm size expands. We can think that swarm robots carry out the task of target search at a compromise between time consumption and energy consumption.

• Swarm robotic search with the asynchronous communication mode runs more efficiently than with the synchronous communication mode, which seems to indicate that the information and experience of a dominant individual can be shared in time within its society for the others' behavior decision making. The result seems to be that robots can adjust their own motion velocity and position by learning from the optimal neighbors at different time steps, and finally quicken the search process.

• ... detecting target signals either for the case of *Rdetec* = 250 or for *Rdetec* = 125. Therefore, robots only move randomly without the help of the social experience sharing.

Fig. 5. Screenshots of swarm robotic search for different swarm sizes: (a) 3-rob swarm, (b) 5-rob swarm, (c) 8-rob swarm, (d) 10-rob swarm. Note that at least one robot is close enough to the target when the search succeeds.

Fig. 6. Average time steps required to complete target search for 10 runs under conditions of *Rdetec* = 250, *Rcomm* = 250. Note that search efficiency is inversely proportional to time steps.

Fig. 7. Average total distance passed by all robots for 10 runs of search success under conditions of *Rdetec* = 250, *Rcomm* = 250.

| comm. mode | 3-rob swarm | 5-rob swarm | 8-rob swarm | 10-rob swarm |
|---|---|---|---|---|
| synchronous mode | 91.3 ±3.36 | 97.1 ±10.15 | 97.9 ±4.91 | 89.8 ±6.55 |
| asynchronous mode | 97.8 ±7.40 | 93.1 ±7.43 | 90.5 ±3.96 | 89.1 ±4.88 |

Table 4. Average control time steps spent in completion of target search under conditions of *Rdetec* = 125, *Rcomm* = 250.

| comm. mode | 3-rob swarm | 5-rob swarm | 8-rob swarm | 10-rob swarm |
|---|---|---|---|---|
| synchronous mode | 1081.0 ±40.95 | 1917.8 ±207.36 | 3117.9 ±154.86 | 3546.5 ±269.06 |
| asynchronous mode | 1017.6 ±82.28 | 1786.8 ±146.27 | 2628.1 ±244.16 | 3139.2 ±274.70 |

Table 5. Average total distance spent by all robots for target search success under conditions of *Rdetec* = 125, *Rcomm* = 250.

Fig. 8. Average time steps required to complete target search for 10 runs under conditions of *Rdetec* = 125, *Rcomm* = 250.

Fig. 9. Average total distance passed by all robots for 10 runs of search success under conditions of *Rdetec* = 125, *Rcomm* = 250.

## **5. Conclusions**

By extending the particle swarm optimization algorithm, we model a swarm robotic system and control it in a PSO-type way to carry out a target search task. Because of the relationship between swarm robotic search and PSO, some ideal characteristics of PSO can be transferred to the case of swarm robotic search. Inspired by asynchronous versions of PSO, we develop a control strategy with an asynchronous communication mode to enhance search efficiency. To reveal the effect, we compare this algorithm against the synchronous communication mode. From the simulation statistics, it can be concluded that, under the same parameter settings, the asynchronous communication mode is more efficient than the synchronous mode in search efficiency. In our future work, we will explore how the effect varies as the controlled object and task change.

## **6. References**

Beni, G. (2005) From swarm intelligence to swarm robotics, *Lecture Notes in Computer Science*, vol. 3342, pp. 1–9.

Chang, J.F., Chu, S.C., Roddick, J.F. et al. (2005) A Parallel Particle Swarm Optimization Algorithm with Communication Strategies, *Journal of Information Science and Engineering*, no. 21, pp. 809–818.

Dorigo, M. and Şahin, E. (2004) Swarm robotics–special issue editorial, *Autonomous Robots*, vol. 17, nos. 2–3, pp. 111–113.

Eberhart, R.C. and Shi, Y. (2001) Particle swarm optimization: developments, applications and resources, in *Proceedings of the 2001 IEEE Congress on Evolutionary Computation*, Piscataway, NJ, USA, vol. 1, pp. 81–86.

Henrich, D. and Honiger, T. (1997) Parallel Processing Approaches in Robotics, in *Proceedings of IEEE International Symposium on Industrial Electronics*, Guimaraes, Portugal, July 7–11.

Huang, F. and Fan, X.P. (2006) Parallel Particle Swarm Optimization Algorithm with Island Population Model, *Journal of Control and Decision*, vol. 21, no. 2, pp. 175–179, 188. (in Chinese)

Kennedy, J. and Eberhart, R. (1995) Particle swarm optimization, in *Proceedings of IEEE International Conference on Neural Networks*, vol. 4.

Koh, B., George, A.D., Haftka, R.T. et al. (2006) Parallel Asynchronous Particle Swarm Optimization, *International Journal of Numerical Methods in Engineering*, vol. 67, pp. 578–595.

Kolda, T.G. and Torczon, V.J. (2003) Understanding Asynchronous Parallel Pattern Search, *High Performance Algorithms and Software for Nonlinear Optimization*, pp. 316–335.

Kowadlo, G., Rawlinson, D., Russell, R.A. et al. (2006) Bi-modal search using complementary sensing (olfaction/vision) for odour source localization, in *Proceedings of the IEEE International Conference on Robotics and Automation*, Orlando.

Şahin, E. (2005) Swarm robotics: From sources of inspiration to domains of application, *Lecture Notes in Computer Science*, vol. 3342, pp. 10–20.

Schmickl, T. and Crailsheim, K. (2008) Trophallaxis within a robotic swarm: bio-inspired communication among robots in a swarm, *Autonomous Robots*, vol. 25, no. 1, pp. 171–188.

• As to different parameter setting configurations, such as detection radius is set *Rdetec* = 250 and *Rdetec* = 125 respectively but the communication radius remains the same, search efficiencies and energy consuming do not vary obviously. The reason maybe that at the beginning of simulations, robots are far from the potential target so as not to be capable of detecting target signals either for case of *Rdetec* = 250 or for *Rdetec* = 125. Therefore, robots only move randomly without help of the social experience sharing.

#### **5. Conclusions**

16 Will-be-set-by-IN-TECH

Synchronous mode Asynchronous mode

Mobile Robots – Control Architectures,

3−rob swarm 5−rob swarm 8−rob swarm 10−rob swarm

3−rob swarm 5−rob swarm 8−rob swarm 10−rob swarm

behavior decision making. The result seems that robots can adjust its own motion velocity and position and finally fasten the search process by learning from the optimal neighbors

Fig. 9. Average total distance passed by all robots for 10 runs search success under conditions

• As to different parameter setting configurations, such as detection radius is set *Rdetec* = 250 and *Rdetec* = 125 respectively but the communication radius remains the same, search efficiencies and energy consuming do not vary obviously. The reason maybe that at the beginning of simulations, robots are far from the potential target so as not to be capable of

Fig. 8. Average time steps required to complete target search for 10 runs under conditions of

Synchronous mode Asynchronous mode

0

0

500

1000

1500

2000

distance

of *Rdetec* = 125, *Rcomm* = 250.

at different time steps.

2500

3000

3500

4000

50

100

steps

*Rdetec* = 125, *Rcomm* = 250.

150

326

By extending the particle swarm optimization algorithm, we model swarm robotic system and control it in PSO-type way for carrying out target search task. Because of the relationship between swarm robotic search and PSO, some ideal characteristics of PSO can be transferred to case of swarm robotic search. Inspired from asynchronous versions of PSO, we develop control strategy with asynchronous communication mode for search efficiency enhancement. To reveal the effect, we compare the algorithm with synchronous communication mode. From the statistics of simulation, a conclusion can be drawn that asynchronous communication mode is more efficient than synchronous mode under conditions of the same parameter settings in search efficiency. In our future work, we will explore the effect how to vary as controlled object and task change.

#### **6. References**


Li, J.M., Wan, D.L., Chi, Z.X. et al. (2006) A Parallel Particle Swarm Optimization Algorithm Based on Fine-Grained Model with GPU-Accelerating, *Journal of Harbin Institute of Technology*, vol. 38, no. 12, pp. 2162–2166. (in Chinese)

Luo, J.H. and Zhong, Z.N. (2005) Research on the Parallel Simulation of Asynchronous Pattern of Particle Swarm Optimization, *Journal of Computer Simulation*, vol. 22, no. 6, pp. 68–70, 78. (in Chinese)

Pugh, J. and Martinoli, A. (2007) Inspiring and Modeling Multi-Robot Search with Particle Swarm Optimization, in *Proceedings of the 4th IEEE Swarm Intelligence Symposium*, Honolulu, Hawaii, USA, April 1–5.

Ridge, E., Kudenko, D., Kazakov, D. et al. (2005) Moving Nature-Inspired Algorithms to Parallel, Asynchronous and Decentralised Environments, *Frontiers in Artificial Intelligence and Applications*, vol. 135, pp. 35–49.

Schutte, J.F., Reinbolt, J.A., Fregly, B.J. et al. (2004) Parallel Global Optimization with the Particle Swarm Optimization, *International Journal for Numerical Methods in Engineering*, vol. 61, no. 13, pp. 2296–2315.

Venter, G. and Sobieszczanski-Sobieski, J. (2006) A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations, *Journal of Aerospace Computing, Information, and Communication*, no. 3, pp. 123–137.

Wang, Y.Y., Zeng, J.C. and Tan, Y. (2007) Parallel Particle Swarm Optimization Based on Parallel Model with Controller, *Journal of System Simulation*, vol. 19, no. 10, pp. 2171–2176. (in Chinese)

Xu, Y.Z. and Zeng, W.H. (2005) The Development of Parallel Evolutionary Algorithms, *Pattern Recognition and Artificial Intelligence*, vol. 21, no. 2, pp. 183–192. (in Chinese)

Xue, S.D. and Zeng, J.C. (2008a) Using Relative Localization Observations for Swarm Robots Search, in *International Conference on Modelling, Identification and Control*, Shanghai, China, June 29–July 2.

Xue, S.D. and Zeng, J.C. (2008b) Control over Swarm Robots Search with Swarm Intelligence Principles, *Journal of System Simulation*, vol. 20, no. 13, pp. 3449–3454.

Xue, S.D. and Zeng, J.C. (2008c) Swarm Robots Search under Conditions of Heterogeneous Sensory Signals Fusion, in *Proceedings of the 2008 International Conference on Genetic and Evolutionary Methods*, Las Vegas, Nevada, USA, July 14–17.

Xue, S.D. and Zeng, J.C. (2009) Controlling swarm robots for target search in parallel and asynchronously, *International Journal of Modelling, Identification and Control*, vol. 8, no. 4, pp. 353–360.

Xue, S.D., Zhang, J.H. and Zeng, J.C. (2009) Parallel asynchronous control strategy for target search with swarm robots, *International Journal of Bio-Inspired Computation*, vol. 1, no. 3, pp. 151–163.

Zeng, J.C., Jie, J. and Cui, Z.H. (2004) *Particle Swarm Optimization*, Beijing: Science Press. (in Chinese)

Zeng, J.C. and Xue, S.D. (2010) Modeling and Simulation Approaches to Swarm Robotic Systems, *Journal of System Simulation*, vol. 22, no. 6, pp. 1327–1330. (in Chinese)

Zhao, Y., Yue, J.G., Li, G.Y. et al. (2005) A Parallel Particle Swarm Optimization Algorithm Based on Multigroup for Solving Complex Functions Optimization, *Journal of Computer Engineering and Applications*, vol. 41, no. 16, pp. 58–60, 64. (in Chinese)

## **Using a Meeting Channel and Relay Nodes to Interconnect Mobile Robots**

Nassima Hadid, Alexandre Guitton and Michel Misson

*LIMOS CNRS / Clermont Université, Campus des Cézeaux, 63 173 Aubière Cedex, France*

#### **1. Introduction**

Mobile robots are increasingly used in industrial applications (generally to convey merchandise), in healthcare applications (for example, to transport patients or medicines) or in domotic applications (for instance, to sweep a floor). These robots are often equipped with several sensors and actuators in order to carry out their tasks (*e.g.*, grabbing an object of interest and bringing it to a destination without colliding with obstacles). To reduce the complexity and weight of the wiring of all these sensors and actuators, these devices can be given wireless capability. A central entity, called the central unit, is usually in charge of collecting the data from the sensors, integrating part of the data into a model, computing a behavioral response and operating the actuators accordingly. Another entity, called the coordinator, is in charge of managing the wireless entities, retrieving data from sensors and sending orders to actuators. The same device can simultaneously be the central unit and the coordinator. In the following, the wireless communications between sensors, actuators and coordinator are called intra-robot communications.

Real applications often require the collaboration of mobile robots in order to perform the required tasks. For instance, when they are used in airports to transport luggage, mobile robots can cooperate to avoid colliding with each other. In search and rescue operations, mobile robots can inform their neighbors that an area has already been explored (Ferranti & Trigoni, 2008). Robots can also follow tracks on the floor (Hogg et al., 2002) or monitor an area collaboratively (Lambrou & Panayiotou, 2009). An allegorical example is shown in Fig. 1, where the waiter robot *Dro* has to collaborate with the waitress robot *Ide* in order to serve hot coffee to customers. The communications used for these collaborations are called inter-robot communications.

The design of a network architecture that allows mobile robots to cooperate efficiently, without negatively impacting the performance of each intra-robot communication, is a challenging issue, and is the aim of this chapter.

The remainder of the chapter is organized as follows. In Section 2, we introduce a multi-channel approach where intra-robot communications are performed on a robot-specific channel, and inter-robot communications are performed on a common meeting channel. In this way, mobile robots can communicate without impacting intra-robot communications. In Section 3, we discuss the protocols that allow mobile robots to communicate directly, when they are in range of each other. These protocols are based on several mechanisms: a data storage mechanism (which allows data to be stored until the intended destination mobile comes in range), a neighbor discovery mechanism (which detects mobiles in range), an association mechanism (which allows neighbors to identify each other), and a data exchange mechanism (which transfers the data). All these mechanisms are described in detail. The concepts behind these protocols come from delay tolerant networks (DTN), as the mobility of robots is often difficult to integrate into applications, or even unpredictable in some applications. In Section 4, we discuss how to achieve indirect communications between mobile robots. We explain how road-side units can help mobile robots to communicate. These road-side units can also be required in some applications, where a message has to be disseminated in a specified geographical area. A generic addressing mechanism is useful in this case, when the ID of the destination mobiles cannot be known beforehand. We show that the overhead of implementing the generic addressing is limited. Finally, in Section 5, we conclude this chapter by summarizing the benefits of the meeting channel.

Fig. 1. Mobile robots *Dro* (on the left) and *Ide* (on the right) have to collaborate to serve hot coffee to customers.

## **2. Interconnection of mobile robots using a meeting channel**

Most applications using mobile robots require the mobile robots to communicate together in order to perform the required tasks. In the following, we consider a generic application where mobile robots follow tracks and carry items (such as robots carrying luggage in airports). The mobile robots communicate in order to avoid collisions, and to detect priority when crossing intersections. Each mobile is equipped with sensors and actuators that communicate with the central unit embedded on the mobile.

Although the communications between mobile robots can be performed on the same communication channels as the intra-mobile communications, this tends to reduce the performance of the whole system. Indeed, if the communications between mobiles and the intra-mobile communications are on the same channel, the competition for the medium is high. Reducing the bandwidth of intra-mobile communications can have a disastrous impact on the robot behavior, for instance if the coordinator is not able to inform quickly the actuators (such as brakes) to be operated.

Thus, it is important to interconnect mobile robots using a multi-channel approach.

#### **2.1 Multi-channel approaches**


Multi-channel approaches have the main advantage of reducing the contention on each communication channel, and thus of improving the performance of the system. In the literature, several multi-channel approaches have been proposed (Bahl et al., 2004; Jovanovic & Djordjevic, 2007; Nasipuri et al., 1999; Pathmasuntharam et al., 2004). There are two types of multi-channel approaches: multi-channel approaches with several radio interfaces and multi-channel approaches with a single radio interface.

#### **2.1.1 Multi-channel approaches with several radio interfaces**

HMCP (Hybrid Multi-Channel Protocol) (Kyasanur & Vaidya, 2005) divides radio interfaces into two groups: static interfaces operating on static channels, and dynamic interfaces able to change the channel they use. Nodes periodically inform their neighbors of the channels of their static interfaces. Notice that neighboring nodes do not necessarily have their static interfaces on the same set of channels. When a sender has to transmit data to a neighbor, the sender switches one of its dynamic interfaces to a channel corresponding to one of the static interfaces of the neighbor. HMCP requires periodic exchange of control packets to maintain the neighbor tables, and is not suited for mobile nodes.
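The HMCP channel-selection step described above can be sketched as follows. This is a minimal illustration of the idea, not the protocol itself: the `Node` class, its fields and the deterministic channel pick are our own simplifications.

```python
# Hypothetical sketch of HMCP-style channel selection: a sender retunes one of
# its dynamic interfaces to a channel on which the receiver's static interface
# listens. All names here are illustrative, not taken from the protocol spec.

class Node:
    def __init__(self, name, static_channels, num_dynamic=1):
        self.name = name
        self.static_channels = set(static_channels)   # channels of static interfaces
        self.dynamic_channels = [None] * num_dynamic  # dynamic interfaces, untuned
        self.neighbor_table = {}                      # neighbor name -> static channels

    def advertise(self, neighbors):
        """Periodic control packet: tell neighbors which static channels we use."""
        for n in neighbors:
            n.neighbor_table[self.name] = set(self.static_channels)

    def send_to(self, neighbor_name):
        """Switch a dynamic interface to one of the neighbor's static channels."""
        channels = self.neighbor_table.get(neighbor_name)
        if not channels:
            raise LookupError(f"no neighbor-table entry for {neighbor_name}")
        chosen = min(channels)             # deterministic pick among its channels
        self.dynamic_channels[0] = chosen  # retune the dynamic interface
        return chosen

a = Node("A", static_channels={11})
b = Node("B", static_channels={15, 20})
b.advertise([a])          # B's periodic advertisement fills A's neighbor table
print(a.send_to("B"))     # A transmits on 15, one of B's static channels
```

The sketch also makes the stated drawback visible: `send_to` fails whenever the periodic `advertise` has not yet reached the sender, which is exactly what happens with mobile nodes whose neighborhoods change faster than the advertisement period.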

The DCA (Dynamic Channel Assignment) protocol (Wu et al., 2000) requires each node to have at least two radio interfaces. One of the interfaces is used for control frames, and the others for data frames. The control interface always communicates on the same channel, while the data interfaces can change channels dynamically. Neighboring nodes collaborate to reserve available channels for transmissions. The amount of messages exchanged in DCA makes it not suitable when contacts between mobile entities last for a short duration.

The DCSS (Dynamic Channel Selection with Snooping) protocol (Seo et al., 2008) also uses two radio interfaces: the control interface is always listening on a common given channel, while the data interface changes its channel depending on the reservation. Nodes perform reservations depending on their view of the channel states. Nodes overhear control packets in order to determine the set of channels locally used. They assess the state of all the channels using a round-robin algorithm while they are neither transmitting nor receiving a frame. Thus, they periodically update their view of the channel states. The main drawback of DCSS is that nodes spend a lot of energy in assessing the channels.
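The round-robin channel assessment of DCSS can be sketched in a few lines. The `ChannelView` class and its method names are our own illustration of the bookkeeping, under the assumption that overhearing a control packet on a channel marks it as locally used.

```python
# Illustrative sketch of DCSS-style channel bookkeeping: while idle, a node
# assesses channels in round-robin order and marks those on which it overhears
# control traffic as busy. Class and method names are assumptions.

from itertools import cycle

class ChannelView:
    def __init__(self, channels):
        self.state = {c: "free" for c in channels}  # local view of channel states
        self._scan = cycle(channels)                # round-robin scan order

    def snoop_next(self, busy_channels):
        """Assess the next channel in round-robin order while idle."""
        c = next(self._scan)
        self.state[c] = "busy" if c in busy_channels else "free"
        return c

view = ChannelView([11, 15, 20])
# Suppose control packets are currently overheard on channel 15 only.
for _ in range(3):          # one full scan over the three channels
    view.snoop_next({15})
print(view.state)           # {11: 'free', 15: 'busy', 20: 'free'}
```

The energy drawback mentioned above is direct: this scan loop runs whenever the node is neither transmitting nor receiving, so the radio is listening almost continuously.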

In all these protocols, nodes are required to have two radio interfaces, which consumes a lot of energy. Indeed, radio interfaces are the main source of energy consumption in low rate wireless solutions. Moreover, the control radio interface is often active all the time. Additionally, having several radio interfaces increases the cost of the hardware. Thus, having a multi-channel approach operating with several radio interfaces is not suited in our applications.

own). In order to maximize the lifetime of the robot, it is crucial that all embedded devices

Using a Meeting Channel and Relay Nodes to Interconnect Mobile Robots 333

As the radio module of network entities consumes a significant amount of energy, most standards for low-power wireless personal area networks, such as IEEE 802.15.4 (IEEE 802.15, 2006), deactivates the radio module of nodes periodically. During this time, the nodes are sleeping and can neither receive nor transmit. They are activated at specific instants, usually

Our proposition is to allow some of the nodes to use the meeting channel for inter-mobile communications, while other nodes are sleeping. Note that most standards for low-power wireless personal area network define an inactivity time of the standards (such as IEEE 802.15.4 standard for example). The nodes that remain awaken are called energy-rich (ER) nodes, and the nodes that go to sleep are called energy-limited (EL) nodes. In this way:

Our mechanism does not require mobile robots to be synchronized so that their inactivity period occur at the same time. Such a synchronization among mobile entities would be very difficult to achieve and induce a significant amount of control traffic, especially as the number

Our mechanism uses the fact that inactivity is typically larger than the activity time. Table 1 shows the expected time ratio during which all the *n* mobiles are simultaneously inactive, as the activity ratio varies. It can be seen that when the activity ratio is low, the simultaneous inactivity ratio becomes very high. For example, for an activity ratio of 1%, 10 mobile robots are simultaneously inactive (and thus potentially on the meeting channel) during more than

Number of mobile robots *n* Activity ratio Expected simultaneous inactivity ratio 2 0.5 0.25 0.25 0.5625 0.01 0.9801 0.5 0.125 0.25 0.4219 0.01 0.9703 0.5 0.0010 0.25 0.0563 0.01 0.9044 Table 1. The ratio of time during which *n* mobile robots are simultaneously inactive is very

Allowing mobile robots to communicate during their inactivity period has several advantages. As inactivity periods are usually long, the probability that two different robots are simultaneously inactive is high. By activating a subset of network devices during the

save energy, including network devices.

**2.2.2 Compatibility with standards**

of robots in the fleet increases.

90% of the time.

during small fractions (typically less than 1%) of time.

• inter-mobile communications are transparent to EL nodes,

• ER nodes use the inactivity time, which is otherwise wasted.

high when the activity ratio of each mobile is low.

• inter-mobile communications do not degrade intra-mobile communications, • the activities on the robot-specific channel are compliant with the standard,

#### **2.1.2 Multi-channel approaches with a single radio interface**

The McMAC (Multi-channel Medium Access Control) (So & Walrand, 2007) protocol uses a channel reservation mechanism based on the periodic diffusion of beacons. The beacons allow nodes to be synchronized. The interval between successive beacons is divided into two periods: a control period, and a data period. During the control period, all the nodes communicate on the same channel in order to plan the data period. During the data period, node switch to and communicate on the channels reserved during the control period. The main drawback of McMAC in our applications is that it is difficult to maintain a synchronization between mobile entities. Also, nodes need to be identified before the control period in order to be able to reserve channels: if the delay to identify new mobile nodes is large, these new nodes might be unable to communicate during a long duration.

The TMMAC (TDMA-based Multi-channel Medium Access Control) (Zhang et al., 2007) protocol is similar to the McMAC protocol. It uses a channel allocation bitmap and a channel usage bitmap in order to reserve channels. The channel allocation bitmap is filled during the control period, and the channel usage bitmap is maintained by each node and transmitted to its one-hop neighborhood. The TMMAC protocol has the same drawbacks as the McMAC protocol.

The SMC MAC (Sensor Multi-Channel Medium Access Control) (Ramakrishnan & Vanaja Ranjan, 2009) protocol based on RTS (Request To Send) and CTS (Clear To Send) control frames. RTS and CTS frames are sent on the control channel. The RTS contains the list of available channels from the sender point of view. The CTS contains the chosen channel from the destination point of view, which is the first channel of the intersection of the available channels of the sender and the receiver. When transmitting and receiving a CTS, both the sender and the receiver switch to the chosen channel, and the other nodes mark this channel as busy. The main drawback of SMC MAC is that all the nodes in range share the same communication channels: the performance of intra-robot communications can be reduced by other intra-robot communications or by inter-robot communications.

#### **2.2 Interconnection using a meeting channel**

Our goal is to allow several mobile robots to communicate without requiring an expensive hardware. Thus, we plan to use a multi-channel approach with a single radio interface. Moreover, the performance of the intra-robot communications of a robot should not be reduced by the proximity of other robots in the neighborhood. Finally, the mechanism should be compatible with existing low-power wireless personal area networks standards.

To achieve these goals, we propose to consider two types of channels: the robot-specific channels and the meeting channel. Each robot is allocated its own robot-specific channel, in order to be able to perform intra-robot communications without suffering from severe and systematic interferences from other robots that are moving in the same area. In the case where there are more robots than available channels, it is possible that several robots share the same channel. However, it is important to allocate the same channel to robots having disjoint trajectories, in order to maintain a low level of interferences. Additionally, all the robots share the same meeting channel. This channel is used solely for communications between robots.

#### **2.2.1 Energy-efficiency**

Mobile robots are often equipped with a battery that powers the engine and the coordinator. Sensors and actuators are also powered by a battery (either by the main battery or by their own). In order to maximize the lifetime of the robot, it is crucial that all embedded devices save energy, including network devices.

As the radio module of network entities consumes a significant amount of energy, most standards for low-power wireless personal area networks, such as IEEE 802.15.4 (IEEE 802.15, 2006), deactivates the radio module of nodes periodically. During this time, the nodes are sleeping and can neither receive nor transmit. They are activated at specific instants, usually during small fractions (typically less than 1%) of time.

#### **2.1.2 Multi-channel approaches with a single radio interface**


Mobile Robots – Control Architectures,

The McMAC (Multi-channel Medium Access Control) protocol (So & Walrand, 2007) uses a channel reservation mechanism based on the periodic diffusion of beacons. The beacons allow nodes to be synchronized. The interval between successive beacons is divided into two periods: a control period and a data period. During the control period, all the nodes communicate on the same channel in order to plan the data period. During the data period, nodes switch to and communicate on the channels reserved during the control period. The main drawback of McMAC in our applications is that it is difficult to maintain synchronization between mobile entities. Also, nodes need to be identified before the control period in order to be able to reserve channels: if the delay to identify new mobile nodes is large, these new nodes might be unable to communicate for a long time.

The TMMAC (TDMA-based Multi-channel Medium Access Control) protocol (Zhang et al., 2007) is similar to the McMAC protocol. It uses a channel allocation bitmap and a channel usage bitmap in order to reserve channels. The channel allocation bitmap is filled during the control period, and the channel usage bitmap is maintained by each node and transmitted to its one-hop neighborhood. The TMMAC protocol has the same drawbacks as the McMAC protocol.

The SMC MAC (Sensor Multi-Channel Medium Access Control) protocol (Ramakrishnan & Vanaja Ranjan, 2009) is based on RTS (Request To Send) and CTS (Clear To Send) control frames. RTS and CTS frames are sent on the control channel. The RTS contains the list of available channels from the sender point of view. The CTS contains the chosen channel from the destination point of view, which is the first channel of the intersection of the available channels of the sender and the receiver. When transmitting and receiving a CTS, both the sender and the receiver switch to the chosen channel, and the other nodes mark this channel as busy. The main drawback of SMC MAC is that all the nodes in range share the same communication channels: the performance of intra-robot communications can be reduced by other intra-robot communications or by inter-robot communications.
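The CTS-side channel choice in SMC MAC can be sketched in a few lines (the function name is ours, not from the protocol specification):

```python
# Sketch of the SMC MAC channel negotiation described above: the RTS carries
# the sender's free-channel list, and the CTS answers with the first channel
# that is also free at the receiver.
def choose_channel(sender_free, receiver_free):
    """Return the first channel of the intersection, or None if disjoint."""
    for channel in sender_free:       # keep the sender's ordering
        if channel in receiver_free:
            return channel
    return None
```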


#### **2.2.2 Compatibility with standards**

Our proposition is to allow some of the nodes to use the meeting channel for inter-mobile communications, while the other nodes are sleeping. Note that most standards for low-power wireless personal area networks define an inactivity period (the IEEE 802.15.4 standard, for example). The nodes that remain awake are called energy-rich (ER) nodes, and the nodes that go to sleep are called energy-limited (EL) nodes. In this way:


Our mechanism does not require mobile robots to be synchronized so that their inactivity periods occur at the same time. Such a synchronization among mobile entities would be very difficult to achieve and would induce a significant amount of control traffic, especially as the number of robots in the fleet increases.

Our mechanism exploits the fact that the inactivity time is typically much larger than the activity time. Table 1 shows the expected fraction of time during which all *n* mobiles are simultaneously inactive, as the activity ratio varies. It can be seen that when the activity ratio is low, the simultaneous inactivity ratio becomes very high. For example, for an activity ratio of 1%, 10 mobile robots are simultaneously inactive (and thus potentially on the meeting channel) during more than 90% of the time.


Table 1. The ratio of time during which *n* mobile robots are simultaneously inactive is very high when the activity ratio of each mobile is low.
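Under the simplifying assumption that the robots' duty cycles are independent and unsynchronized, the figures of Table 1 follow from a one-line formula:

```python
# If each mobile is active a fraction a of the time, and activity periods are
# independent across mobiles, then all n mobiles are simultaneously inactive
# a fraction (1 - a)^n of the time.
def simultaneous_inactivity(activity_ratio, n):
    return (1.0 - activity_ratio) ** n
```

For an activity ratio of 1% and *n* = 10, this gives about 0.904, matching the "more than 90%" figure quoted above.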

Allowing mobile robots to communicate during their inactivity period has several advantages. As inactivity periods are usually long, the probability that two different robots are simultaneously inactive is high. By activating a subset of network devices during the inactivity period of each robot, it is possible to have these ER devices communicate with each other while the EL nodes sleep.


Using a Meeting Channel and Relay Nodes to Interconnect Mobile Robots 335


The remainder of this chapter describes how mobile robots can communicate using the meeting channel, either directly (when the destination robot comes into range of the source robot) or indirectly (when the destination robot does not come into range of the source robot, or when the destination robot address is unknown to the source robot). Notice that intra-mobile communications can be performed according to the ZigBee and IEEE 802.15.4 standards, for instance, and are not the focus of this chapter.

## **3. Direct communications between mobile robots**

To allow direct communications between mobile robots, a DTN paradigm has to be used (*Delay-Tolerant Networking Architecture*, 2006). Indeed, communications between mobile robots are opportunistic in nature: they depend on the mobility pattern of each mobile robot and on the wireless propagation conditions, both of which are considered here as unpredictable. Moreover, the number of mobile robots in the area is small compared to the area size. Thus, mobile robots meet each other infrequently.

We propose to adapt the *store, carry and forward* principle from the DTN literature to our generic application. In DTNs, a new layer called the bundle layer is introduced between the application layer and the transport layer of the OSI model. The main role of this bundle layer is to mask from applications the effects of node mobility, and mainly the sporadic nature of the network connectivity. To do so, data from the application layer is encapsulated in a composite structure, called a bundle. A bundle contains the application data, together with additional data to deal with sporadic connectivity, such as timeouts and destinations. Using a bundle layer implies the need for a transport layer, as the bundle structure is often larger than the maximum frame size in a low-power wireless personal area network.
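As an illustration of why the bundle layer sits above a transport layer, the following sketch shows a bundle exceeding the frame payload and being split. The field names and the payload budget are assumptions for illustration, not taken from the Bundle Protocol or from this chapter:

```python
# Illustrative bundle structure: application data plus metadata to survive
# sporadic connectivity, and a helper showing why a transport layer is
# needed -- the bundle usually exceeds the maximum frame payload of the
# underlying low-power network.
from dataclasses import dataclass

MAX_FRAME_PAYLOAD = 102   # assumed per-frame payload budget, in bytes

@dataclass
class Bundle:
    destination: str      # final destination, possibly unreachable for now
    timeout_s: float      # drop the bundle once this lifetime expires
    payload: bytes        # application data

def fragment(bundle: Bundle):
    """Split the payload into frame-sized chunks for the transport layer."""
    data = bundle.payload
    return [data[i:i + MAX_FRAME_PAYLOAD]
            for i in range(0, len(data), MAX_FRAME_PAYLOAD)]
```

A 250-byte bundle, for instance, needs three frames under this budget.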

According to the DTN principle, messages are stored in bundles on specific nodes called custodians. When two mobiles meet, their custodians can exchange bundles. Bundles can thus circulate through the mobility of robots or through contacts between mobiles. The detection of a contact between mobiles is performed by specific nodes called gateways.

To summarize, we can consider the following types of nodes.


Fig. 2 describes the network stack of our architecture.

Fig. 2. Network stack and communication from a source to a gateway (on the same mobile), including the bundle layer.

In this section, we first describe how the presence of ER nodes can improve the network performance of intra-robot communications. This feature is important as the mobile robots often have no neighbors in range. Second, we study the neighbor discovery mechanism and we present the trade-off between the energy-efficiency of the neighbor discovery mechanism and the duration of the neighbor detection. Third, we show how communication links can be established between robots, and how the custodians of two neighbor robots can know each other. Fourth, we discuss direct communications between mobiles.

#### **3.1 Optimized intra-robot communications**

During the robot inactivity period, all ER nodes are active on the meeting channel. The gateways are trying to detect potential neighboring robots. However, most of the time, no robots are found in the vicinity. Instead of being idle while waiting for encounters, ER nodes can help relaying the intra-robot traffic.

#### **3.1.1 Description**


• EL nodes are only active on the robot-specific channel. They sleep during the robot inactivity period, and do not contribute to communications between mobile robots.

• There is one custodian node per robot. It is an ER node with larger storage capabilities than the other nodes. During the robot activity period, it works as the other nodes. During the robot inactivity period, it switches to the meeting channel, and manages bundles.

• There is at least one gateway node per robot. Gateways are ER nodes whose role is to detect neighboring robots on the meeting channel, during the robot inactivity period.

• There can be intermediate ER nodes per robot. Intermediate nodes are ER nodes that help interconnecting the custodian and the gateways on the meeting channel, during the robot inactivity period.


During the robot inactivity period, the ER nodes form a network called the ER network. Thus, ER nodes have two addresses: one during the robot activity period, and one during the robot inactivity period. EL nodes have a single address, for the robot activity period. We assume that there exists a routing function R, such that R takes as input the destination addresses (in the robot network and/or in the ER network) and returns the next hop addresses (in the robot network and/or in the ER network). In the ZigBee standard, the function R is based on the hierarchical allocation of addresses. We also assume that there exists a neighbor address function N that takes as input the address of a next hop (in the robot network and/or in the ER network) and returns, if possible, the address of the next hop in both networks. It is not mandatory that functions R and N are always able to translate the address of a node from one network to another, but they can improve the global behavior of the mechanism.

To optimize intra-robot communications on the meeting channel, the MAC relaying scheme has to be modified in the following way. Let us assume that an ER node *n* has a frame for destination *d*. If the address of *d* in the ER network is known, this address can be used to compute the next hop *h* on the ER network, as *h* = R(*d*). *n* can send the frame to *h*. If the address of *d* in the ER network is not known, it is possible that the next hop *h*′ of *d* computed on the robot network is also a neighbor of *n* in the ER network. If this is the case, *h* = N(*h*′) is known: *n* can send the frame to *h* (in fact, both addresses *h* and *h*′ are present in the frame). Otherwise, *n* is unable to send the frame during this period. Thus, *n* proceeds to the next frame in the queue (without dropping the previous frame, as it will be sent during the next robot activity period).

To summarize, ER nodes can always relay frames to ER nodes.
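The relaying rule above can be summarized in code. R and N are modelled here as plain dictionaries, a deliberate simplification of the hierarchical routing and neighbor-address functions assumed in the text:

```python
# Sketch of the meeting-channel relaying decision for an ER node n.
def next_hop(dest, er_routing, robot_routing, neighbour_map):
    """Return the next hop in the ER network for destination dest, or None
    if the frame must wait for the next robot activity period."""
    # Case 1: the destination's ER-network address is known: h = R(d).
    if dest in er_routing:
        return er_routing[dest]
    # Case 2: translate the robot-network next hop h' into the ER network,
    # h = N(h'), when h' happens to be a neighbour on the ER network.
    h_prime = robot_routing.get(dest)
    if h_prime is not None and h_prime in neighbour_map:
        return neighbour_map[h_prime]
    # Case 3: defer the frame; it stays queued for the activity period.
    return None
```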


#### **3.1.2 Simulation results**

To quantify the performance improvement when the meeting channel is used for intra-robot communications, we performed simulations using the network simulator NS2. We used the IEEE 802.15.4 physical layer and a custom IEEE 802.15.4 MAC layer that supports our meeting channel architecture and our relaying scheme. We used the ZigBee hierarchical routing protocol for both the robot network and the ER network. The propagation model used is the ITU indoor propagation model (*Propagation data and prediction method for the planning of indoor radio communication systems and local area networks in the frequency range of 900 MHz to 100 GHz*, 2005), with a path loss exponent of 3. The transmission power of nodes is set to -25 dBm. Frames have a size of 119 bytes at the MAC layer, which corresponds to 1000 bits at the physical layer. Queue lengths are set to 50 frames.

Simulations are performed on a network of 6 nodes with four pairs of source *s* and destination *d*, as depicted on Fig. 3. Solid nodes represent EL nodes, half-striped nodes represent ER nodes. The source is always three hops away from the destination.

In order to evaluate the advantages of communications on the meeting channel, we compare the scenario where all the nodes are ER nodes with a scenario where the two central nodes are ER nodes, and the source and destination both have a 50% probability of being ER nodes. Simulation results are averaged over 100 runs.

Fig. 3. Simulations are performed on a small topology, where sources and destinations both have a 50% probability to be ER nodes.

We considered two metrics: (1) the goodput, which is the number of frames correctly received by the destination node per second, and (2) the end-to-end delay.

Fig. 4 displays the goodput as a function of the frame generation rate, for several scenarios: one where the nodes are always active on the robot-specific channel, in dotted lines, one with only EL nodes and for various duty cycles on the robot-specific channel (from 12.5% to 50%), in solid lines, and one with both EL and ER nodes (as shown in Fig. 3) for various duty cycles on the robot-specific channel (from 12.5% to 50%), in dashed lines. First, it can be noticed that the goodput increases with the frame generation rate, until it reaches a maximum. This behavior is typical of IEEE 802.15.4. As expected, the goodput with EL nodes only is strongly related to the duty cycle: it is high when the duty cycle is 100% (EL nodes are always active), and is reduced when the duty cycle goes from 50% down to 12.5%. When ER nodes are added, the goodput is significantly increased with respect to the scenarios with only EL nodes.

Fig. 5 displays the average end-to-end delay as a function of the frame generation rate, for the same three scenarios, and for various duty cycles. The average end-to-end delay increases with the frame generation rate, until it reaches a maximum. This maximum can be explained in the following way. The delay is computed based on the received frames only. Frames cannot wait for a long time in the frame queue, because of the limited size of the frame queue and because slotted CSMA/CA drops frames after a small number of attempts. When nodes are always active, the delay is very low. The delay increases when the network is saturated, which occurs at about 50 frames per second (as can be seen in Fig. 4). When there are only EL nodes, the end-to-end delay is large: indeed, EL nodes spend a large amount of time sleeping, which increases the delay. In the scenario with ER nodes, the end-to-end delay is improved compared to the scenario without ER nodes: indeed, some frames can be relayed during the inactivity period of EL nodes, which helps to reduce the delay.


Fig. 4. The presence of ER nodes significantly improves the performance of the network, for all duty cycles.


Fig. 5. The presence of ER nodes significantly reduces the end-to-end delay, for all duty cycles.




ER nodes consume more energy than EL nodes, because they are more often active. ER nodes thus offer a trade-off between energy consumption and performance.

#### **3.2 Neighbor discovery**

When a robot moves in range of another, the two robots need to discover each other before being able to communicate. The neighbor discovery mechanism (Hadid et al., 2010) occurs during the inactivity period, and is performed by gateway nodes only.

Gateway nodes send periodic gateway advertisement (GW-ADV) frames. Each GW-ADV frame contains the identification of the gateway, and the identification of the robot that carries it. When receiving a GW-ADV frame, a gateway replies with a gateway reply (GW-REP) frame. Each GW-REP frame contains the identification of the local gateway, the identification of the distant gateway, and the identification of the robot that carries it. Upon receiving a GW-ADV or GW-REP frame, both gateways update their neighbor table. An entry in this table contains the identification of the neighbor mobile, the address of the distant gateway, and a time to live. It is possible for the table to contain two different entries for the same mobile, if the mobile is reachable through two gateways.
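A minimal sketch of the neighbor table maintenance follows, assuming a dictionary keyed by (mobile, gateway) pairs; the 5-second TTL and the function names are illustrative, as the chapter gives no concrete values:

```python
# Each GW-ADV or GW-REP frame refreshes an entry per (neighbour mobile,
# distant gateway) pair, so a mobile reachable through two gateways
# legitimately gets two entries, as described in the text.
def update_neighbour_table(table, mobile_id, gateway_addr, now, ttl_s=5.0):
    """Insert or refresh an entry; the value stored is its expiry time."""
    table[(mobile_id, gateway_addr)] = now + ttl_s

def expire(table, now):
    """Drop entries whose time to live has elapsed."""
    for key in [k for k, expiry in table.items() if expiry <= now]:
        del table[key]
```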

The neighbor discovery is not instantaneous, as gateways transmit GW-ADV frames periodically, and only when they are operating on the meeting channel. The neighbor discovery delay depends on the periodicity of the GW-ADV frames, and on the robot activity period. To study the impact of these two parameters, we carried out two sets of simulations. The first set of simulations concerns the neighbor discovery delay from the moment the two mobiles come in range. The second set of simulations concerns the total contact duration, from the moment the gateways identify each other until the time the contact is broken due to robot mobility.

We used a simple mobility model<sup>1</sup>: two robots move on parallel lines, in opposite directions, with a constant speed of 2 meters per second. They start far away, at a position where they are not in range of each other. They eventually meet and discover each other, while continuing their movement. After some time, they are too far away to keep communicating, and the contact is broken. The minimum distance between robots is 3 meters, which is less than the range of a node. The placement of gateways is shown in Fig. 6, where gateways are represented as nodes connected to a black rectangle.
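Assuming a circular radio range (the chapter only states that the range exceeds 3 m), the geometry of this crossing gives the maximum possible contact window:

```python
# Back-of-the-envelope contact window for the mobility model above: two
# robots on parallel lines 3 m apart, each moving at 2 m/s in opposite
# directions (relative speed 4 m/s). The radio range passed in is an
# assumption for illustration; it is not specified in the chapter.
import math

def contact_duration_s(radio_range_m, lane_gap_m=3.0, rel_speed_mps=4.0):
    """Time during which the two robots are within radio range."""
    if radio_range_m <= lane_gap_m:
        return 0.0
    half_window_m = math.sqrt(radio_range_m ** 2 - lane_gap_m ** 2)
    return 2.0 * half_window_m / rel_speed_mps
```

With an assumed range of 10 m, for instance, the robots are within range for roughly 4.8 s, an upper bound on the usable contact duration before accounting for discovery delay.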

We used a duty cycle of about 1 second<sup>2</sup>. Each simulation lasts for 100 seconds, and the results are averaged over 500 runs.

#### **3.2.1 Neighbor discovery delay**

The first simulation quantifies the neighbor discovery delay. It is computed as the difference between the time when the two mobiles come in range and the time when they discover each other. To model the fact that mobiles are not synchronized, the activity (on the robot-specific channel) of mobile *M*<sub>1</sub> starts at *t*<sub>0</sub>, and the activity (on the robot-specific channel) of mobile *M*<sub>2</sub> starts at *t*<sub>0</sub> + Δ, where Δ is chosen randomly in the cycle. Moreover, we consider that the periodicity of GW-ADV might vary between mobiles, due to clock drifts.

<sup>1</sup> Our focus here is not the mobility model, but the quality of the neighbor discovery procedure. That is why we prefer to use a simple mobility model rather than a more complex one. A discussion of the impact of the mobility model can be found in Bai et al. (2003).

<sup>2</sup> The duty cycle is in fact equal to 983.04 ms, which is the closest value for a duty cycle of 1 second for IEEE 802.15.4 solutions.
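The 983.04 ms figure follows from the IEEE 802.15.4 superframe structure: the beacon interval is aBaseSuperframeDuration × 2<sup>BO</sup> symbols, with aBaseSuperframeDuration = 960 symbols and 16 µs per symbol in the 2.4 GHz band, so the beacon order BO = 6 gives the duty cycle closest to one second:

```python
# IEEE 802.15.4 beacon interval: aBaseSuperframeDuration * 2**BO symbols,
# at 16 us per symbol for the 2.4 GHz O-QPSK physical layer.
A_BASE_SUPERFRAME_DURATION = 960  # symbols
SYMBOL_DURATION = 16e-6           # seconds

def beacon_interval(bo):
    """Beacon interval (in seconds) for a given macBeaconOrder bo."""
    return A_BASE_SUPERFRAME_DURATION * (2 ** bo) * SYMBOL_DURATION

# BO = 6 gives about 0.98304 s, the value used as duty cycle in the simulations
closest = min(range(15), key=lambda bo: abs(beacon_interval(bo) - 1.0))
```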

Fig. 6. Two mobiles *M*<sub>1</sub> and *M*<sub>2</sub> cross each other. The minimum distance between the gateways of *M*<sub>1</sub> (*G*<sup>1</sup><sub>1</sub> and *G*<sup>2</sup><sub>1</sub>) and the gateways of *M*<sub>2</sub> (*G*<sup>1</sup><sub>2</sub> and *G*<sup>2</sup><sub>2</sub>) is 3 meters.

clock drifts. The periodicity of GW-ADV for a mobile *M*<sub>*i*</sub> is given by *T*(1 + *δ*<sub>*i*</sub>), where *T* is the basic periodicity, and *δ*<sub>*i*</sub> is chosen randomly in [−0.3; 0.3] (according to a uniform distribution). Fig. 7 shows the neighbor discovery delay as a function of the GW-ADV periodicity *T*, and as a function of the duration of the activity period on the robot-specific channel. The larger the periodicity, the longer the delay. However, the duration of the activity period has the most significant impact on the neighbor discovery delay: when the nodes spend 50% of their time on their robot-specific channel, the gateways of a mobile often miss the GW-ADV messages sent on the meeting channel by the gateways of the other mobile.
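This experiment can be mimicked with a few lines of Monte Carlo simulation. The sketch below is a deliberate simplification that models only the receiver's duty cycle (no channel losses, no mobility); the function name and the horizon cap are ours:

```python
import random

def discovery_delay(T, active_fraction, cycle=0.98304, horizon=200):
    """Monte Carlo sketch: delay until one of M1's GW-ADV frames falls in
    M2's meeting-channel window. Only the receiver's duty cycle is modeled."""
    offset = random.uniform(0.0, cycle)            # Delta: M2's cycle misalignment
    period = T * (1 + random.uniform(-0.3, 0.3))   # M1's jittered GW-ADV period
    for k in range(horizon):
        t = k * period                             # emission time of the k-th GW-ADV
        phase = (t - offset) % cycle               # position inside M2's cycle
        if phase >= active_fraction * cycle:       # M2 is on the meeting channel
            return t
    return horizon * period                        # not discovered within the horizon

# averaging over many runs reproduces the trend of Fig. 7: the longer the
# robot-specific activity period, the longer the discovery delay
random.seed(7)
mean_delay_50 = sum(discovery_delay(0.25, 0.50) for _ in range(2000)) / 2000
random.seed(7)
mean_delay_12 = sum(discovery_delay(0.25, 0.125) for _ in range(2000)) / 2000
```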

#### **3.2.2 Contact duration**


The second simulation concerns the quantification of the contact duration. It is computed as the difference between the time when the mobiles discover each other and the time when no gateway of one mobile is in contact with a gateway of the other mobile. Notice that having several gateways per mobile might be a way to (slightly) improve the contact duration.

Fig. 8 shows the contact duration as a function of the GW-ADV periodicity *T*, and as a function of the duration of the activity period on the robot-specific channel. The larger the periodicity, the shorter the duration, as it takes some time for the nodes to discover each other. Again, the duration of the activity period has the most significant impact on the contact duration.


Using a Meeting Channel and Relay Nodes to Interconnect Mobile Robots 341


Fig. 7. The neighbor discovery delay is greatly impacted by the duration of the inactivity period.

Fig. 8. The contact duration is greatly impacted by the duration of the inactivity period.

#### **3.3 Link establishment**

Once a local gateway discovers a distant gateway, the local gateway notifies its custodian of its discovery using a notification (NOTIF) message. This is done either when the local gateway received a GW-REP, or received the GW-ADV from another gateway. The NOTIF message contains the identification of the distant mobile, the address of the distant gateway and the address of the local gateway. Upon receiving the NOTIF message, the custodian updates its contact table. An entry of the contact table contains the identification of the distant mobile, the address of the distant gateway and the address of the local gateway. There might be several local and distant gateways that lead to the same mobile.

If a gateway stops receiving GW-ADV messages during a period equal to the time to live of the neighbor entry, the gateway sends a message to its custodian, in order to remove the entry from the contact table. The mobile might still be reachable, through other local or distant gateways.
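The contact-table bookkeeping described above can be sketched as follows (the data layout is assumed by us; the chapter only specifies the fields of an entry):

```python
class Custodian:
    """Contact-table bookkeeping on the custodian (data layout assumed)."""

    def __init__(self):
        # entries: (distant mobile id, distant gateway, local gateway)
        self.contact_table = set()

    def on_notif(self, distant_mobile, distant_gw, local_gw):
        # a NOTIF from a local gateway reports a discovered distant gateway;
        # several entries may lead to the same distant mobile
        self.contact_table.add((distant_mobile, distant_gw, local_gw))

    def on_neighbor_expired(self, distant_mobile, distant_gw, local_gw):
        # the local gateway saw the neighbor's time to live expire
        self.contact_table.discard((distant_mobile, distant_gw, local_gw))

    def routes_to(self, distant_mobile):
        # the mobile may remain reachable through the remaining entries
        return {(lg, dg) for (m, dg, lg) in self.contact_table if m == distant_mobile}
```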

### **3.4 Direct data exchange between mobiles**


The data exchange between a source node on a source mobile and a destination node on a destination mobile is performed according to the following steps:

• the source node sends the bundles to the custodian node of the source mobile,

• the custodian node of the source mobile sends the bundles to a gateway node of the source mobile, once a link between these two mobiles has been established,

• the gateway node of the source mobile sends the bundles to the gateway node of the destination mobile,

• the gateway node of the destination mobile sends the bundles to the destination node.
The transmission from the source node to the custodian node of the source mobile corresponds to an intra-robot communication, as both nodes are on the same mobile. This part of the communication can be achieved even if the two mobiles are not in range. The same goes for the transmission between the gateway of the destination mobile to the destination node.

Fig. 9 depicts this data exchange on a small example. Custodians are shown with a small database on the side, to indicate that they possess storage capabilities. As shown below the figure, several communication layers are involved in this process. The source node *S* generates data at the application layer, which is encapsulated within bundles at the bundle layer. The communication from *S* to *C*<sub>1</sub> is done at the bundle layer, and can require transmission through intermediate nodes, such as *X*<sub>1</sub>. Then, the bundles wait in *C*<sub>1</sub> for a notification concerning mobile *M*<sub>2</sub>. Once this happens, *C*<sub>1</sub> communicates with its gateway *G*<sub>1</sub> at the bundle layer. *G*<sub>1</sub> and *G*<sub>2</sub> are able to communicate directly. Then, *G*<sub>2</sub> sends the bundles to *D*, at the bundle layer, potentially involving intermediate nodes, such as *X*<sub>2</sub>. Finally, *D* can extract the data at the application layer.

One bundle can correspond to several packets. The transport layer is in charge of splitting large bundles into several packets.

#### **3.4.1 Description**

The direct data exchange between mobiles raises four issues: (1) the choice of the source gateway by the source custodian, (2) the choice of the distant gateway by the source gateway, (3) the algorithm used by the custodian to send the bundles, and (4) the policy to retransmit or drop bundles.

When there are several gateways that can reach the destination mobile, the source custodian has to choose to which source gateway the bundles have to be sent. The custodian can apply different policies of load balancing, depending on the application, by choosing several different source gateways to allow parallel transmissions of the bundles. Thus, several bundles at a time are sent, one per gateway that discovered the destination mobile.

A given source gateway can be in range of several destination gateways. The source gateway has to choose to which destination gateway the bundles should be sent. In this paper, we propose that source gateways choose destination gateways at random, to reduce the probability that two different source gateways repeatedly choose the same destination gateway.
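Combining the two policies above — one bundle copy per source gateway listed in the contact table, and a random pick among the reachable destination gateways — could look like this (illustrative names, not the authors' code):

```python
import random

def pick_routes(contact_entries, reachable):
    """One copy per source gateway named in the contact table; each source
    gateway picks one destination gateway at random (illustrative sketch).

    contact_entries: set of (local gateway, distant gateway) pairs
    reachable: local gateway -> list of destination gateways currently in range
    """
    routes = []
    for local_gw in sorted({lg for (lg, _dg) in contact_entries}):
        candidates = reachable.get(local_gw, [])
        if candidates:
            # random choice reduces the chance that two source gateways
            # repeatedly target the same destination gateway
            routes.append((local_gw, random.choice(candidates)))
    return routes
```

With two source gateways in the contact table, two bundle copies are sent in parallel, one per gateway.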


Fig. 9. The bundles sent by mobile *M*<sub>1</sub> to mobile *M*<sub>2</sub> first go from the source node *S* to the custodian *C*<sub>1</sub>. Once *M*<sub>1</sub> and *M*<sub>2</sub> have established a link through their gateways, the bundles go from *C*<sub>1</sub> to *G*<sub>1</sub>, from *G*<sub>1</sub> to *G*<sub>2</sub>, and finally from *G*<sub>2</sub> to *D*.


The algorithm applied by the custodian is the following. When it receives a bundle, it checks whether there is an entry in its contact table corresponding to the destination mobile or not. If there is no entry, the custodian stores the bundle. Otherwise, the custodian computes the set of source gateways that can reach the destination mobile.

Bundles are not kept permanently in custodians. Indeed, they are associated to a time to live, which is decided at the application layer. Bundles are also removed by the source custodian when they are successfully transmitted. When a local gateway sends a bundle to a distant gateway, it waits for the acknowledgment of the transmission. Once the local gateway receives the acknowledgment, it sends a ROUTE-SUCCESS message to the custodian. Upon receiving this message, the custodian removes the bundle from its memory. If the local gateway does not receive acknowledgments from the distant gateway after a number of retries, the local gateway sends a ROUTE-FAILURE message to the custodian. The custodian can decide to transmit the bundle to another source gateway. The custodian does not delete bundles after ROUTE-FAILURE messages: it simply waits for another opportunity to transmit it, or for the time to live to expire. Notice that the ROUTE-SUCCESS and ROUTE-FAILURE messages only contain a bundle ID: these messages can fit in short frames.
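The custodian-side handling described in the two paragraphs above can be summarized in a sketch (structure and method names are ours):

```python
class BundleStore:
    """Sketch of the custodian's bundle handling; names and structure are ours."""

    def __init__(self):
        self.pending = {}  # bundle id -> (payload, expiry set at the application layer)

    def on_bundle(self, bid, payload, expiry, has_contact_entry):
        # keep a copy until a ROUTE-SUCCESS confirms the transmission
        self.pending[bid] = (payload, expiry)
        if not has_contact_entry:
            return "stored"      # no contact-table entry for the destination: wait
        return "forwarded"       # handed to the set of reachable source gateways

    def on_route_success(self, bid):
        # the distant gateway acknowledged the bundle: drop our copy
        self.pending.pop(bid, None)

    def on_route_failure(self, bid):
        # keep the bundle: retry through another gateway, or let it expire
        pass

    def expire(self, now):
        # bundles are not kept permanently: enforce the time to live
        self.pending = {b: (p, e) for b, (p, e) in self.pending.items() if e > now}
```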

#### **3.4.2 Simulation results**

We evaluated the performance of this data-exchange mechanism according to three metrics: (1) the bundle reception rate, (2) the average delay from the custodian of the source to the destination, and (3) the average path length for bundles.


Simulations are performed on a scenario close to the one depicted on Fig. 9. We assume that each mobile has two gateways, that the two gateways of the source mobile are one hop away from the custodian, and the destination is one hop away from the two gateways of the destination mobile. We assume that each bundle can fit in a single frame, and thus bundles do not require fragmentation and reassembly. The frame size for a bundle is set to 119 bytes at the MAC layer, so that a bundle is 1000 bits long at the physical layer. The duty cycle is set to about one second, and gateways send their GW-ADV frame with a periodicity of *T* =0.25 seconds (when they operate on the meeting channel). This value of *T* is a trade-off between the discovery delay and the overhead of the GW-ADV frames.
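The 1000-bit figure is consistent with IEEE 802.15.4 framing if one adds the usual 6 bytes of synchronization and PHY headers to the 119-byte MAC frame (our reading of the setup):

```python
MAC_FRAME_BYTES = 119  # bundle size at the MAC layer (from the simulation setup)
SHR_PHR_BYTES = 6      # assumed IEEE 802.15.4 overhead: 4 B preamble + 1 B SFD + 1 B PHY header

# (119 + 6) bytes = 125 bytes = 1000 bits on the air per bundle
phy_bits = (MAC_FRAME_BYTES + SHR_PHR_BYTES) * 8
```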

Fig. 10 shows the bundle reception rate as a function of the bundle generation rate. The bundle reception rate increases with the bundle generation rate, until it reaches a maximum (again, this behavior is typical of IEEE 802.15.4). When the duty cycle on the robot-specific channel is low (for instance, equal to 12.5%), the ER nodes spend most of their time on the meeting channel, and the bundle reception rate is large. Conversely, when nodes spend a significant part of their activity time on the robot-specific channel (for instance, when the duty cycle is 50%), the ER nodes cannot spend most of their time on the meeting channel, and the gateways of the different robots have less time to communicate (on average, the gateways of the two robots are on the same meeting channel during only 25% of the time), which negatively impacts the bundle reception rate.

Fig. 10. The bundle reception rate (in bundles per second) increases as the duty cycle on the robot-specific channel decreases.

Fig. 11 shows the average delay to transmit a bundle from the custodian of the source mobile to the destination. Note that this delay does not take into account the time required to transmit a bundle from the source to the custodian. Indeed, most of the time, all the bundles are accumulated in the custodian rather than in the source node. As expected, the average delay increases with the bundle generation rate. When the bundle loss rate is too high, most bundles are lost, and the delay becomes stable (as it involves only bundles that are correctly received). The average path length for the bundles is computed as the average amount of transmission of a bundle. The minimum value for this metric is three in our topology, as the destination is


Fig. 11. The average delay from the source custodian to the destination node decreases when the duty cycle on the robot-specific channel decreases.

three hops away from the custodian of the source mobile. Path lengths larger than three occur in the following two cases.

• When one of the nodes on the path from the custodian to the destination has to repeat the transmission (usually due to the loss of a frame or of its acknowledgment), the path length increases accordingly.

• When the custodian *C*<sub>1</sub> sends the bundle to a local gateway *G*<sup>1</sup><sub>1</sub>, it is possible that this local gateway is not connected anymore to the distant gateway. Indeed, this local gateway can be waiting for the time to live of the neighbor to expire in order to remove the entry in its neighbor table and to notify the custodian. After having tried several times to transmit the bundle, *G*<sup>1</sup><sub>1</sub> notifies the source custodian with a ROUTE-FAILURE message. The custodian can try to send the bundle to another local gateway *G*<sup>2</sup><sub>1</sub> which is still in contact with the destination mobile. In this case, the path length is composed of four hops: (*C*<sub>1</sub>, *G*<sup>1</sup><sub>1</sub>), (*C*<sub>1</sub>, *G*<sup>2</sup><sub>1</sub>), (*G*<sup>2</sup><sub>1</sub>, *G*<sup>2</sup><sub>2</sub>) and (*G*<sup>2</sup><sub>2</sub>, *D*). Note that while bundles are long frames, the ROUTE-FAILURE message is a simple notification. Thus, it is not counted in the path length.


Note that to simplify, we assume in our simulations that each bundle is sent in a single frame. Thus, the path length increases only by one each time a data frame is transmitted.

Fig. 12 shows the average path length for the bundles. When the duty cycle on the robot-specific channel is low (for instance, when it is only 12.5%), the average path length varies slightly around 3.2 hops. As the minimum path length is 3, this means that, on average, the probability for a bundle to be retransmitted (due to collision, loss of a frame or of its acknowledgment, or due to the mobility of mobiles) is only 6%. However, this percentage increases up to 26% when the duty cycle on the robot-specific channel is high.
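The 6% and 26% figures can be recovered from the average path length if each transmission is read as failing independently with probability *p*, so that the expected number of transmissions is 3/(1 − *p*) (our interpretation of the computation):

```python
def retransmission_probability(avg_path_length, min_path_length=3):
    """Per-transmission retransmission probability p such that
    min_path_length / (1 - p) equals the observed average path length."""
    return 1 - min_path_length / avg_path_length

p_low = retransmission_probability(3.2)  # about 0.0625, i.e. the reported 6%
avg_high = 3 / (1 - 0.26)                # about 4.05 hops for the 26% case
```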

### **4. Indirect communications between mobile robots**

Direct communications occur when the following three conditions are met.

• A source robot has to communicate with a destination robot.

• The source robot knows the ID of the destination robot.

• The source robot and the destination robot meet directly.

Fig. 12. The average path length for the bundles decreases when the duty cycle on the robot-specific channel decreases.


In practice, the last two conditions are hard to fulfill. For example, when a robot has to perform an emergency brake, it has to inform the robots following it. The IDs of those robots might not be known by the source robot. Moreover, it is possible that a robot and the one following it on the same trajectory never meet: in this case, they are unable to communicate using direct communications.

Indirect communications help remove these constraints. In this section, we first consider how fixed relays can improve the overall connectivity of the network and enable indirect communications without requiring sophisticated routing protocols. Then, we consider how to identify mobile robots and nodes based on their role rather than on their ID or address.

#### **4.1 Communications through fixed-relay networks**

Fixed-relay networks can be used to improve the communication opportunities between mobiles (Groenevelt et al., 2005; Zhao et al., 2006). When these relay networks are placed along the road, or at crossings, they can be used to store bundles from robots passing by. In turn, the bundles can be sent to other mobiles traveling in range of the relays.

#### **4.1.1 Description**

16 Will-be-set-by-IN-TECH

 0 5 10 15 20 25 30 35 40 45 bundle generation rate (in bundles per second)

Fig. 11. The average delay from the source custodian to the destination node decreases when

three hops away from the custodian of the source mobile. Path lengths larger than three occur

• When one of the nodes on the path from the custodian to the destination has to repeat the transmission (usually due to the loss of a frame or of its acknowledgment), the path length

Fig. 12 shows the average path length for the bundles. When the duty cycle on the robot-specific channel is low (for instance, when it is only 12.5%), the average path length varies slightly around 3.2 hops. As the minimum path length is 3, this means that, on average, the probability for a bundle to be retransmitted (due to collision, loss of a frame or of its acknowledgment, or due to the mobility of mobiles) is only 6%. However, this percentage increases up to 26% when the duty cycle on the robot-specific channel is high.

Fig. 12. The average path length of the bundles decreases when the duty cycle on the robot-specific channel decreases.

Note that to simplify, we assume in our simulations that each bundle is sent in a single frame. Thus, the path length increases only by one each time a data frame is transmitted. The path length increases accordingly in the following two cases.

• When the custodian *C*<sub>1</sub> sends the bundle to a local gateway *G*<sub>1</sub><sup>1</sup>, it is possible that this local gateway is not connected anymore to the distant gateway. Indeed, this local gateway can be waiting for the time to live of the neighbor to expire in order to remove the entry in its neighbor table and to notify the custodian. After having tried several times to transmit the bundle, *G*<sub>1</sub><sup>1</sup> notifies the source custodian with a ROUTE-FAILURE message. The custodian can try to send the bundle to another local gateway *G*<sub>2</sub><sup>1</sup> which is still in contact with the destination mobile. In this case, the path length is composed of four hops: (*C*, *G*<sub>1</sub><sup>1</sup>), (*C*, *G*<sub>2</sub><sup>1</sup>), (*G*<sub>2</sub><sup>1</sup>, *G*<sub>2</sub><sup>2</sup>) and *D*. Note that while bundles are long frames, the ROUTE-FAILURE message is a simple notification. Thus, it is not counted in the path length.

**4. Indirect communications between mobile robots**

Direct communications occur when the following three conditions are met.

• A source robot has to communicate with a destination robot.
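The relation between the measured average path length and the retransmission probabilities quoted in the discussion of Fig. 12 (6% at a low duty cycle, 26% at a high one) can be checked with a few lines of arithmetic. This is an illustrative sketch, not code from the chapter; the function name is ours:

```python
def retransmission_probability(avg_path_length: float, min_hops: int = 3) -> float:
    """Fraction of transmissions that are retransmissions, assuming each
    retransmission lengthens the path by exactly one hop (one frame per bundle)."""
    return (avg_path_length - min_hops) / avg_path_length

# Low duty cycle (12.5%): measured average path length is about 3.2 hops.
print(round(retransmission_probability(3.2) * 100))  # -> 6 (percent)
```

Conversely, a 26% retransmission probability corresponds to an average path length of about 3 / (1 − 0.26) ≈ 4.05 hops, which is consistent with the range shown in the figure.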

Fixed-relays can be used to improve the connectivity of the network. If they are deployed strategically at the intersection of mobile trajectories, they can store bundles temporarily and retransmit them to mobiles passing by.

Each fixed-relay is composed of several nodes forming a network. There is a custodian node in each fixed-relay, and several gateways. The role of the custodian node and of the gateways is the same as for mobile robots.

The main difference between fixed-relays and mobiles is that the nodes in fixed-relays always work on the meeting channel. Thus, they are always available on this channel, and are therefore easier to contact than mobiles.
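The store-and-forward behaviour of a fixed-relay described above can be sketched as follows (hypothetical class and method names; the chapter defines no code):

```python
class FixedRelay:
    """Minimal store-and-forward sketch of a fixed-relay custodian."""

    def __init__(self):
        self.stored = []  # bundles currently held in custody

    def receive(self, bundle):
        """Take custody of a bundle handed over by a passing mobile."""
        self.stored.append(bundle)

    def contact(self, mobile_inbox):
        """A mobile comes into range: hand over every stored bundle."""
        delivered, self.stored = self.stored, []
        mobile_inbox.extend(delivered)
        return delivered


relay = FixedRelay()
relay.receive("bundle-1")
relay.receive("bundle-2")
inbox = []
relay.contact(inbox)
print(inbox)  # -> ['bundle-1', 'bundle-2']
```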


#### **4.1.2 Simulation results**

To show the benefits of fixed-relays, and to evaluate the advantage of remaining on the meeting channel on the performance of the network, we performed numerous simulations. The simulation settings are mainly the same as in Section 4. Fixed-relays have a six-node topology, which is identical to the topology of the mobiles. They are static.

Fig. 13 shows the average delay for a mobile to discover a relay node, as a function of the periodicity *T* of the GW-ADV of both the mobile and the fixed-relay. The relay discovery delay increases as the periodicity *T* increases and as the duty cycle on the robot-specific channel increases. The duty cycle on the robot-specific channel is the key parameter: its impact is the most significant. When comparing with Fig. 7, the advantage of fixed-relays on the discovery delay is obvious. In the case where the duty cycle on the robot-specific channel is 50%, and for a GW-ADV periodicity *T* of 0.5 seconds, the discovery delay of a mobile is 13.2 times larger than the discovery delay of a fixed-relay.

Fig. 13. The relay discovery delay for a mobile is very short, and depends mostly on the duty cycle on the robot-specific channel.
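A toy model reproduces the qualitative trends above: assume the relay sends a GW-ADV every *T* seconds, and each advertisement is missed independently with a probability equal to the duty cycle *p* spent on the robot-specific channel. This is our simplification, not the chapter's simulator:

```python
import random

def expected_discovery_delay(T: float, p: float) -> float:
    """Expected time to hear a GW-ADV sent every T seconds when each
    advertisement is missed independently with probability p
    (illustrative model: T/2 to the next advert, plus the missed ones)."""
    return T / 2 + T * p / (1 - p)

def simulate(T: float, p: float, trials: int = 200_000, seed: int = 1) -> float:
    """Monte Carlo check of the closed form above."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        delay = rng.uniform(0, T)   # wait for the next advertisement
        while rng.random() < p:     # missed it: wait a full period more
            delay += T
        total += delay
    return total / trials

print(expected_discovery_delay(T=0.5, p=0.5))  # -> 0.75
```

Both the closed form and the simulation grow with *T* and with the duty cycle, matching the behaviour reported for Fig. 13.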

Fig. 14 shows the contact duration between a mobile and a fixed-relay. The contact duration decreases as the GW-ADV periodicity increases, because it takes more time to discover the fixed-relay with a large periodicity. The contact duration also decreases as the duty cycle on the robot-specific channel increases, for the same reason. Compared with the contacts between two mobiles, the contacts are about twice as long, as expected since only one robot is moving in this scenario with fixed-relays.

Fig. 15 displays the bundle reception rate as a function of the bundle generation rate, for several duty cycles on the robot-specific channel. The bundle reception rate increases with the bundle generation rate, until a maximum is reached. Compared with the scenario where two mobiles meet, the results are significantly improved and show that using fixed-relays is highly beneficial. For instance, when the duty cycle is 50%, the maximum bundle reception rate is about 7 when two mobiles meet, and is about 14 when a mobile meets a fixed-relay (as the bundle reception rate is given in bundles per second, the increase of the contact duration in the scenario with


Fig. 14. The contact duration between a mobile and a fixed-relay depends mostly on the GW-ADV periodicity.

fixed-relays has little impact on this number). When the duty cycle is 12.5%, the maximum bundle reception rate is 21 when two mobiles meet, and 24 when a mobile meets a fixed-relay.

Fig. 15. The bundle reception rate increases when the duty cycle on the robot-specific channel decreases.

Fig. 16 shows the average delay as a function of the bundle generation rate, for several duty cycles on the robot-specific channel. The delay is the time difference between the first transmission of the bundle by the custodian of the source mobile, and the first reception of the bundle by the custodian of the fixed-relay. Compared with the scenario where two mobiles meet, the results are similar. The gain in delay is not significant, except for the duty cycle on the robot-specific channel, where the delay is reduced by 25%.

Fig. 17 shows the average hop count as a function of the bundle generation rate, for several duty cycles on the robot-specific channel. The y-axis is the same as in Fig. 12. It can be seen


that the average hop count varies from 3 (which is the minimum value in our scenario) to 3.2, which constitutes an important improvement.

Fig. 16. The average delay from the custodian of the source to the custodian of the fixed-relay decreases when the duty cycle on the robot-specific channel decreases.

Fig. 17. The average length of the paths followed by bundles is close to the minimum value of 3 when a mobile meets a fixed-relay.

#### **4.2 Generic addressing**

The generic addressing consists in addressing network entities by their role rather than by their ID. There are two types of generic addressing: one for the mobile robots, and one for the nodes. These two types can be used simultaneously: it is possible to address a generic node on a generic mobile.

The address space has to be divided into two parts: the part for network addresses and the part for generic addresses. In the ZigBee protocol for instance, when the distributed address


allocation mechanism is used, the whole address space is not used for nodes. Among the 2<sup>16</sup> available short addresses, only *n* can be allocated, where *n* is the maximum number of nodes in the hierarchical tree. *n* depends on the ZigBee network parameters *Rm*, *Cm* and *Lm*. It is possible to use the unused addresses to represent roles. The same is true for mobiles: it is unrealistic to consider that there are 2<sup>16</sup> mobile robots in the network. Generally, the number of roles is much lower than the number of entities (either mobiles or nodes) in the network: it is not required to allocate a large number of generic addresses.
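The maximum number *n* of tree addresses can be counted recursively from *Cm* (children per router), *Rm* (router children among them) and *Lm* (maximum depth). The sketch below assumes the standard distributed-allocation tree shape; the parameter values are illustrative, not taken from the chapter:

```python
def max_tree_nodes(Cm: int, Rm: int, Lm: int, depth: int = 0) -> int:
    """Maximum number of addressable nodes in a ZigBee distributed-allocation
    tree: a router at `depth` has up to Cm children, of which at most Rm are
    routers themselves, down to depth Lm (sketch of the standard scheme)."""
    if depth == Lm:
        return 1                  # a node at maximum depth is a leaf
    end_devices = Cm - Rm
    return 1 + end_devices + Rm * max_tree_nodes(Cm, Rm, Lm, depth + 1)


n = max_tree_nodes(Cm=6, Rm=4, Lm=3)
print(n, 2**16 - n)  # addresses used vs. addresses left over for generic roles
```

Even with fairly generous parameters, *n* stays orders of magnitude below 2<sup>16</sup>, which is why plenty of unused addresses remain available to represent roles.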

The definition of generic addresses is application-dependent, and it depends on the communication requirements. For mobiles, we identified the following requirements.

• Concerning unicast communications (that is, from one mobile to another mobile), two types of requirements can be considered.

	- **–** Role *unicast-next*: the mobile has to communicate with the next mobile. The next time the source mobile meets another mobile, this new mobile becomes the destination of the bundles.
	- **–** Role *unicast-next-geographical*: the mobile has to communicate with the next mobile entering a given area. This role makes use of the fixed-relay to transmit bundles. With this role, the source mobile sends the bundles to the first mobile or fixed-relay it meets. Once the bundles have reached a fixed-relay, it sends the bundles to the first mobile it meets.

• Concerning multicast communications (that is, from one mobile to several mobiles), three types of requirements can be considered. They are a generalization of the roles for unicast communications.

	- **–** Role *multicast-next-all*: the mobile has to communicate with the *n* next mobiles. The next time the source mobile meets another mobile, it decreases the value of *n* and sends all the bundles to this new mobile. If *n* is equal to zero, the bundles can be removed from the memory of the source mobile. Otherwise, the bundles are stored for future encounters. Note that the source mobile has to maintain a list of mobiles it has already met: if a source mobile meets the same mobile twice, the source does not send the same bundles twice.
	- **–** Role *multicast-next-chain*: the mobile has to communicate with a chain of *n* next mobiles. This requirement is important when several mobiles are following each other. When the source mobile meets the first mobile of the chain, the source sends all the bundles, and does not keep a copy of them (once they are acknowledged by the gateway of the first mobile of the chain). Then, the first mobile becomes in charge of transmitting the bundles to the *n* − 1 next mobiles. Additional care has to be taken in order to avoid loops: a mobile should not receive bundles it has previously received.
	- **–** Role *multicast-next-geographical*: the mobile has to communicate with the *n* next mobiles entering a given area. When a mobile has to transmit the bundles *k* times and meets a fixed-relay, the mobile transmits all its bundles to this fixed-relay. The fixed-relay becomes in charge of transmitting the bundles *k* − 1 times. When a mobile has to transmit the bundles *k* times and meets another mobile, the former mobile becomes in charge of transmitting them ⌊(*k* − 1)/2⌋ times, and the latter mobile becomes in charge of transmitting them ⌊*k*/2⌋ times. In this way, the bundles are disseminated rapidly in the area.
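The ⌊(*k* − 1)/2⌋ / ⌊*k*/2⌋ split used by the *multicast-next-geographical* role always accounts for exactly the *k* − 1 transmissions that remain after the one just performed, as a quick sketch confirms (the function name is ours):

```python
def split_transmissions(k):
    """Split the remaining transmission budget between the two mobiles that
    just met: the transmission that just occurred counts as one, and the
    remaining k - 1 are divided between the former and the latter mobile."""
    former = (k - 1) // 2  # floor((k - 1) / 2)
    latter = k // 2        # floor(k / 2)
    return former, latter


print(split_transmissions(5))  # -> (2, 2): 2 + 2 = 4 = k - 1 transmissions left
```

For every *k* ≥ 1, ⌊(*k* − 1)/2⌋ + ⌊*k*/2⌋ = *k* − 1, so no transmission is lost or duplicated by the split.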

For nodes, several generic roles can be identified:

• the custodian of the mobile,
• one of the gateways of the mobile,
• all the gateways of the mobile,
• one of the sensors (for each type of sensor),
• all of the sensors (for each type of sensor),
• one of the actuators (for each type of actuator),
• all of the actuators (for each type of actuator).



As can be seen, it is possible to represent several nodes through the same generic role. The duplication of messages from one generic message into several messages is performed in the node that translates the generic addresses, that is, in the custodian node.

The generic addressing mechanism requires address translations (one for the generic mobiles, and one for the generic nodes). The translation for generic nodes is always performed at the custodian of the destination mobile, as only this custodian knows the exact sensors and actuators deployed on its mobile. The translation for generic mobiles is performed by the custodian of a mobile that depends on the type of generic address.

• Role *unicast-next*. The translation is performed by the custodian of the source mobile.
• Role *unicast-next-geographical*. The translation is performed by the custodian of the source mobile, and potentially by the custodian of the fixed-relay.
• Role *multicast-next-all*. The translation is performed by the custodian of the source mobile.
• Role *multicast-next-chain*. The translation is performed by the custodian of each mobile of the chain, in turn.
• Role *multicast-next-geographical*. The translation is performed by all the custodians of the mobiles involved as intermediate mobiles, as well as by the custodians of all fixed-relays.
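The dependency of the translation point on the generic role can be written down as a simple lookup table (an illustrative transcription of the rules for the roles of Section 4.2, not an implementation from the chapter):

```python
# Where each generic-mobile role is translated into concrete addresses.
TRANSLATION_SITE = {
    "unicast-next":               ["custodian of the source mobile"],
    "unicast-next-geographical":  ["custodian of the source mobile",
                                   "custodian of the fixed-relay (potentially)"],
    "multicast-next-all":         ["custodian of the source mobile"],
    "multicast-next-chain":       ["custodian of each mobile of the chain, in turn"],
    "multicast-next-geographical": ["custodians of the intermediate mobiles",
                                    "custodians of all fixed-relays"],
}

def translation_sites(role):
    """Return the custodians responsible for translating a generic address."""
    return TRANSLATION_SITE[role]

print(translation_sites("unicast-next"))  # -> ['custodian of the source mobile']
```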


## **5. Conclusion**

Several new applications require robots to move within an area, in order to monitor the environment while performing a task. The applications are likely to require that these mobile robots communicate, to avoid obstacles or to improve the monitoring for example. In this context, we have proposed a multi-channel architecture: each robot uses a robot-specific channel for intra-robot communications, and a common channel, called the meeting channel, is used to interconnect mobile robots that are in range of each other.

The use of a meeting channel allows mobiles to perform intra-mobile communications without interference from neighboring robots, and it allows robots to communicate while some of their nodes save energy by sleeping. When there is no neighbor mobile in the vicinity, the meeting channel can be used to improve the intra-mobile communications. We conducted several simulations in order to quantify the benefits of the meeting channel, and we studied the performance of the network mainly in terms of throughput, delay and path length.

We proposed to use fixed-relays to improve the connectivity of the network, and to allow information to be stored at specific locations in the environment. Fixed-relays are always on the meeting channel: this feature allows significant performance benefits.

Finally, we proposed the use of generic addresses in order to identify mobiles and nodes based on their role, rather than based on their address. This concept is especially important when a mobile robot has to communicate with the next mobiles.





## **Distance Measurement for Indoor Robotic Collectives**

Mihai V. Micea, Andrei Stancovici and Sînziana Indreica *Politehnica University of Timisoara Romania* 

## **1. Introduction**


Location monitoring is a common problem for many mobile robotic applications covering various domains, such as industrial automation, manipulation in difficult areas, rescue operations, environment exploration and monitoring, smart environments and buildings, robotic home appliances, space exploration and probing.

A key aspect of localization is inter-robot distance measurement. In this chapter we consider the problem of autonomous, collaborative distance measurement in mobile robotic systems, under the following set of design and functional constraints:

a. indoor operation,
b. independence of fixed landmarks,
c. robustness and accuracy,
d. energy efficiency,
e. low cost and complexity.


This work significantly extends and updates the results previously published in (Micea et al., 2010). We present and discuss some of the most relevant state of the art techniques for robot distance estimation. Next, we introduce a framework for collaborative inter-robot distance measurement along with a procedure for accurate robotic alignment. The proposed alignment algorithm is based on evaluating and comparing the strength of ultrasonic signals at different angles, processing (filtering) the measured data and ensuring a good synchronization during the process. Further on, we present the CTOF (Combined Time-of-Flight) method for distance measurement, which brings significant improvements to the classical TOF technique, and we show how this new technique meets the above specified design constraints. Some of the most interesting test and evaluation results are presented and discussed. The experimental data show how the distance estimation accuracy can be increased by applying the Kalman filter algorithm on repetitive measurements. The final remarks and the reference list conclude this chapter.
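The Kalman-filter smoothing of repeated distance measurements mentioned above can be illustrated with a scalar filter for a static target (the parameter values are ours, chosen for illustration, not the chapter's):

```python
def kalman_1d(measurements, R=4.0, Q=0.0, x0=0.0, P0=1000.0):
    """Scalar Kalman filter over repeated distance measurements of a static
    target (process noise Q = 0, so the predicted state is constant).
    R is the measurement-noise variance; P0 is a large initial uncertainty
    so the first measurement dominates the prior x0."""
    x, P = x0, P0
    for z in measurements:
        P += Q                 # predict step (state assumed constant)
        K = P / (P + R)        # Kalman gain
        x += K * (z - x)       # update with measurement z
        P *= (1 - K)           # updated estimate variance
    return x

# Repeated noisy readings of a distance of roughly 2.0 m
estimate = kalman_1d([2.05, 1.98, 2.02, 1.96, 2.04])
print(estimate)
```

With `Q = 0` and a large `P0`, the estimate behaves essentially like a running average of the measurements, which is why repetitive measurements increase the distance estimation accuracy.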

## **2. Current techniques for robot distance estimation**

The problem of inter-robot distance measurement and location monitoring is considered of key importance in the field and, consequently, a large number and variety of methods have been proposed and studied in the literature. For instance, the GPS system (Ohno et al., 2004; Reina et al., 2007) and landmark-based solutions such as the Cricket Indoor Location System (Cricket Project, 2005; Priyantha, 2005) are well established in the field. On the other hand,

Distance Measurement for Indoor Robotic Collectives 355


they do not comply with the constraints specified in the previous section (i.e. independence of fixed landmarks). In this section we discuss some of the most prominent techniques which can be used for indoor robotic collectives.

Time-Difference-of-Arrival (TDOA) measures the distance between two points by using two different types of signals (usually radio and acoustic) which cover the route connecting the two points at different speeds. To illustrate this technique, consider two points, A and B, located at distance *d* from each other. At a given time instant, the transmitter at A simultaneously sends the signals S1 and S2, which cover the distance *d* at the speeds *v*S1 and *v*S2, respectively. If, for example, *v*S2 < *v*S1, then signal S2 arrives at the receiver B after S1, with a delay which depends on the distance *d*. This delay is measured at the destination point B and the value of *d* is derived from it. The Cricket Indoor Location System uses TDOA to measure the distance to the reference points. The system consists of several landmark transmission devices, their number depending on the size of the desired coverage area (at least three modules), and one or more mobile devices that play the role of receivers. In most cases, the transmission devices are attached to the upper part of the room so as to cover a large portion of it, or the entire room. The reception devices are attached to robots, located on the floor. As shown in (Priyantha, 2005), the system relies on two types of signals to calculate distances: an RF (radio) signal and an ultrasonic signal. The radio signal is roughly 10<sup>6</sup> times faster than the ultrasonic signal, and the distance is calculated by applying the TDOA principle to the difference of the two propagation times. The localization of mobile robots through this system is achieved with an accuracy of 1-3 cm. Similarly, the system presented in (Fayli & Kleeman, 2004) solves the localization problem based on four transmitters as fixed reference points and a wireless receiver.
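The delay-to-distance relation above can be sketched in a few lines; this is a minimal illustration, assuming typical propagation speeds for an RF and an ultrasonic signal in air:

```python
# Sketch of TDOA distance recovery. Two signals leave A at the same
# time; the slower one reaches B later, and the inter-arrival delay
# encodes the distance. Speeds below are illustrative assumptions.

V_RF = 3.0e8   # m/s, radio signal (approximately the speed of light)
V_US = 343.0   # m/s, ultrasound in air at room temperature

def tdoa_distance(delay_s, v_fast=V_RF, v_slow=V_US):
    """Solve delay = d/v_slow - d/v_fast for the distance d (meters)."""
    return delay_s * v_fast * v_slow / (v_fast - v_slow)

# A 10 ms RF-to-ultrasound delay corresponds to roughly 3.43 m,
# since v_fast >> v_slow makes d close to v_slow * delay.
d = tdoa_distance(0.010)
```

Because the RF reference is so much faster than sound, the formula reduces in practice to multiplying the measured delay by the speed of sound, which is how Cricket-style systems are usually described.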

Another well-known technology used in robotics to calculate distance is based on infrared (IR) sensors. There are several types of IR sensors, varying in parameters (e.g. maximum range and accuracy) and price. Compared to ultrasonic devices, IR sensors are cheaper and use light, which is much faster than an acoustic signal. They have nonlinear characteristics which depend on the surface reflectance of the objects. By measuring the intensity of light reflected by a target, the IR sensors can calculate the distance to it. This technique is presented and discussed in several works, including (Novotny & Ferrier, 1999; Ha & Kim, 2004; Mohammad, 2009). Hagisonic StarGazer (Hagisonic, 2009) is a location system for mobile robots, based on the analysis of infrared rays which are reflected by a passive landmark with a unique ID. The system works as follows:

1. The IR transmitter is located on the robot. It transmits infrared beams to the fixed landmark attached to the ceiling of the room.
2. The infrared rays are reflected from the landmark and reach the StarGazer, mounted on the robot.
3. The StarGazer contains a CMOS camera able to estimate the angle of incidence of the reflected IR waves and the distance between the robot and the landmark.
4. Based on the angle of incidence and on the distance to the landmark, the position of the robot in the room can then be obtained through geometric techniques.

The advantage of such a system is its accuracy, which, according to (Hagisonic, 2009), can reach approximately 2 cm. The system can carry out 20 measurements per second. Its disadvantages are the high price and the reduced coverage area, which ranges from 2.5 to 5 m.

Mobile Robots – Control Architectures,

354 Bio-Interfacing, Navigation, Multi Robot Motion Planning and Operator Training

A set of radio-based methods uses the power of the received signal to estimate the distance to the source. The mathematical model of the emitted signal power is given in (Fuicu et al., 2009) as:

$$\frac{P_t}{P_r} = \left(\frac{4\pi d}{\lambda}\right)^2 = \frac{\left(4\pi f d\right)^2}{c^2} \tag{1}$$

where *P*t and *P*r are the signal power at the emitter and at the receiver, respectively, *d* is the propagation distance, *λ* and *f* are the carrier wavelength and frequency, respectively, and *c* is the speed of light. The accuracy of such systems, though, is only around 2-3 m. Another similar technique, based on modeling the signal power for the ZigBee (IEEE 802.15.4) protocol (ZigBee Standards Organization, 2007), is presented in (Grossmann, 2007).
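Solving Eq. (1) for *d* gives a direct distance estimate from the measured power ratio; a minimal sketch follows, where the 2.4 GHz carrier in the example is an assumption chosen to match the ZigBee band:

```python
import math

# Invert the free-space power model of Eq. (1):
# Pt/Pr = (4*pi*f*d/c)^2  =>  d = (c / (4*pi*f)) * sqrt(Pt/Pr)

C = 3.0e8  # m/s, speed of light

def distance_from_power_ratio(p_t, p_r, f_hz):
    """Return the propagation distance d (meters) from Pt/Pr at carrier f_hz."""
    return (C / (4.0 * math.pi * f_hz)) * math.sqrt(p_t / p_r)

# A 60 dB power loss (Pt/Pr = 1e6) at 2.4 GHz maps to about 10 m.
d = distance_from_power_ratio(1e6, 1.0, 2.4e9)
```

The square root makes the estimate very sensitive to fading and multipath, which is consistent with the 2-3 m accuracy reported above.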

An indoor GPS system, presented in (Kim et al., 2006), consists of two receivers as fixed reference points and a transmitter which uses both ultrasonic and RF signals. The receivers estimate the distance to the transmitter based on the delay between the received RF signal and the ultrasound waves. The two resulting distances are then used to calculate the location of the transmitter, through geometrical formulas. A Linear Kalman Filter is also used to minimize the errors and noise occurring in the measurements of the ultrasound signal.

GPS systems usually calculate the distance between a receiver and multiple transmitters based on the difference in the time-of-flight of the received signals (the TOF method). Through the TOF method, more precise results can be obtained. However, TOF is influenced by the synchronization accuracy of the system, the environment temperature and other factors which can yield calculation errors. As a result, filtering of the measured values is frequently used. A common solution is the Linear Kalman Filter, as presented in (Kim et al., 2006; Ko et al., 2008; Welch & Bishop, 2006). Other approaches use Bayesian filters (Fox et al., 2003), which estimate the state of a probabilistic dynamic system from observations drowned in noise. Using statistical techniques, they operate in a deterministic manner and are suitable for systems with multiple sensors of different characteristics.
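A scalar version of the Linear Kalman Filter mentioned above can be sketched as follows; the noise parameters `q` and `r` and the sample readings are illustrative assumptions, not values from the cited works:

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=None, p0=1.0):
    """Scalar linear Kalman filter for repeated measurements of a
    (nearly) constant distance. q: process noise, r: measurement noise."""
    x = measurements[0] if x0 is None else x0   # state estimate
    p = p0                                      # estimate covariance
    estimates = []
    for z in measurements:
        p += q                  # predict step (static state model)
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with measurement z
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

# Noisy readings around a true distance of 2.0 m
readings = [2.3, 1.8, 2.1, 1.9, 2.05, 1.95, 2.02]
est = kalman_1d(readings)
```

With a static state model the filter converges toward the running average of the readings, progressively discounting the measurement noise.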

The Building Positioning System (Reynolds, 1999) determines the position of a mobile device by receiving radio signals from fixed devices, which are designed to transmit radio signals in a manner similar to the operation of the Cricket or GPS modules. Compared with GPS, this system uses a much lower frequency, making the radio waves propagate with relatively low attenuation. The system requires only 4 fixed transmission antennas, attached to four different corners of the building. The accuracy of such a system is about 5 cm.

Image processing methods are also widely used, many of them using passive cameras. A microcontroller drives a motor to focus a target image located at a certain distance from the sensor. Based on several parameters, like motor position or lens properties, the distance to the target object can be determined. Some systems are based, for instance, on visual information from a 360 degree camera (Tamimi et al., 2006). Such systems must be trained before being used, by capturing representative images from the environment and associating them with the corresponding locations.

## **3. Example framework for collaborative inter-robot distance measurement**

The proposed distance measurement method and inter-robot alignment algorithm have a common set of requirements for the target robotic system. Such a framework, which has been used to implement and test the proposed techniques, is the CORE-TX platform (Cioarga et al., 2006).

CORE-TX (COllaborative Robotic Environment – the Timisoara eXperiment) is designed to provide theoretical and applicative support for the study of intelligent sensor networks and robotic collectives. Its architecture is structured on three main layers (see Fig. 1):

1. Perception and Operation Layer, consisting mainly of autonomous microsystems with embedded intelligence, called WITs (Wireless Intelligent Terminals),

2. Collaborative Communication Layer, based on ad-hoc wireless data communication techniques (currently, the basic support is provided by the ZigBee protocol), and

3. Background Control and Supervision Layer, with a central node called BRAIN (Background Robotic Activity Induction Node).

Fig. 1. General architecture of the CORE-TX system

## **3.1 General architecture of the robotic elements**

The WIT elements may have perception functions (intelligent sensors), operating functions (autonomous mini-robots), or both. They have been designed using a modular approach (Fig. 2), which specifies a motherboard (the Base Processing Module) interconnected through a system bus to a set of specialized daughterboards: the Power Management Module, the Perception Module and the Communication Module. The additional Support and Operation Module transforms the WIT from a static intelligent sensor into an autonomous mini-robot.

Fig. 2. Diagram of the Wireless Intelligent Terminal

Currently, the WIT communication board uses the XBee wireless module (Digi International, 2009), which is based on the ZigBee protocol. This module provides a unique 64-bit ID. Other features are: size of 2 cm x 3 cm, operating range of up to 30 m indoors and up to 90 m outdoors, and a maximum consumption of 50 mA at a voltage of 3.3 V. Communication uses the 2.4 GHz radio frequency band with 16 channels.

## **3.2 Design of the perception module**

The schematic design of the Perception Module is shown in Fig. 3. The main processor is the ARM7-based LPC2294 (NXP Semiconductors, 2008), which runs the Hard Real-Time Operating Kernel, HARETICK (Micea et al., 2006), for predictable operation. Another important part of the module is the ATxmega128A1 coprocessor (Atmel Corporation, 2010), used for fast, periodic data acquisition and processing operations. It was also chosen for its high ADC performance and rich set of resources: 16 ADC channels, 4 DMA channels, 8 timers, 24 PWM channels, 4 SPI interfaces and a large set of I/O ports.

Fig. 3. Schematic of the WIT Perception Module


Two similar transducers are used both for transmitting and for receiving ultrasonic signals. The BPU-1640IOAH12 device (Bestar Electronics, 2006) has been selected, due to its convenient features, which include low cost, bidirectional operation, nominal frequency of 40 kHz, and maximum input voltage of 120 Vpp. Signal duplexing at the transducer level (bidirectional operation) has been implemented using SI4808DY MOSFET circuits.

To perform a fast alignment process, the two ultrasonic transducers are mounted back to back, at 180 degrees, on a rotating platform driven by a servo motor. The motor is a TowerPro SG-50 (Tower Pro, 2008) with the following specifications: weight 5 g, dimensions 21.5 × 11.7 × 25.1 mm, speed 0.1 s/60 degrees (at 4.8 V), supply voltage 4 to 6 V. The servo motor is driven by a PWM signal with a period of 20 ms and a variable duty cycle. Rotation is between 0 degrees (minimum pulse duration) and 180 degrees (maximum pulse duration).
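The angle-to-duty-cycle mapping for such a servo can be sketched as follows; note that the 1.0-2.0 ms pulse-width endpoints are typical hobby-servo values assumed here, not figures from the SG-50 datasheet:

```python
# Sketch of the angle-to-PWM mapping for the turret servo, assuming a
# 20 ms period and pulse widths of 1.0 ms (0 deg) to 2.0 ms (180 deg).
# The pulse-width endpoints are assumptions, not datasheet values.

PERIOD_MS = 20.0
PULSE_MIN_MS = 1.0   # assumed minimum pulse duration -> 0 degrees
PULSE_MAX_MS = 2.0   # assumed maximum pulse duration -> 180 degrees

def duty_cycle_for_angle(angle_deg):
    """Return the PWM duty cycle (0..1) that positions the turret at angle_deg."""
    if not 0.0 <= angle_deg <= 180.0:
        raise ValueError("angle out of range")
    pulse = PULSE_MIN_MS + (PULSE_MAX_MS - PULSE_MIN_MS) * angle_deg / 180.0
    return pulse / PERIOD_MS

# duty_cycle_for_angle(90) -> 0.075 (a 1.5 ms pulse in a 20 ms period)
```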

Choosing a design based on a turret support for the ultrasonic transducers has several advantages over other designs. Avoiding the rotation of the entire robot during the alignment process eliminates the inherent positioning errors and lowers the power consumption of the system. This solution also increases the alignment accuracy while allowing a higher process speed.

## **4. Inter-robot alignment algorithm**

To obtain correct results, the proposed techniques require that the pair of robots performing the distance measurement procedure first successfully complete the alignment algorithm. Correct alignment means the sensing devices (i.e. the ultrasonic transducers) of the robots are facing each other, as close as possible to the straight line between them (see Fig. 4). This procedure also provides key angle values of each robot's position and orientation relative to the local reference system (Fig. 5):

- *α*12 is the angle between the *orientation axis* (*Ox*1) of the first robot (W1) and the direct line between W1 and W2.
- *α*21 is the angle between the *orientation axis* (*Ox*2) of the second robot (W2) and the direct line between the two robots.
- *φ*1 is the angle defined by the *Ox*1 axis and the ultrasonic sensor axis of W1.
- *φ*2 is the angle defined by the *Ox*2 axis and the ultrasonic sensor axis of W2.

Fig. 4. Inter-robot alignment process

Thus, the ideal alignment situation is when *α*12 = *φ*1 and *α*21 = *φ*2. In this case, the alignment error is 0.

Fig. 5. Key angles on the alignment process

#### **4.1 Description of the algorithm**

The alignment procedure uses the wireless communication interfaces of the robots to enable the two corresponding peers to exchange the required commands and messages, and is based on the continuous measurement of the Sonar acoustic intensity. It is initiated and conducted by one of the robots, which acts as the master (WM), while the other robot, the slave (WS), executes the commands received from the master through the wireless link. The master will operate in the Sonar receive mode and the slave in Sonar transmit mode.

The procedure is based on the high directivity of ultrasonic waves used by the Sonar. As the two robot turrets rotate, the master calculates the average strength of the ultrasonic signal received from the slave, at each rotation step of 1 degree. If WM senses this average signal strength has increased from the previous rotation step, it continues the procedure until a decrease is encountered. Then, the two robots will change the rotation directions of their turrets to return to the previously detected maximum.
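The master-side search just described (step, compare the averaged strength, reverse on a decrease, return to the detected maximum) can be sketched as a simple hill climb; `read_strength` is a hypothetical stand-in for the averaged Sonar ADC measurement, and the sketch assumes the maximum is reached within the scan range:

```python
# Minimal sketch of the master-side alignment search: rotate the turret
# in 1-degree steps, keep going while the averaged signal strength
# increases, and step back to the detected maximum when it decreases.
# `read_strength` stands in for the averaged Sonar ADC measurement.

def align_turret(read_strength, start_deg=0, step=1, max_steps=360):
    """Return the turret angle (degrees) at the detected signal maximum."""
    angle = start_deg
    best_angle, best_val = angle, read_strength(angle)
    direction = +1
    for _ in range(max_steps):
        angle += direction * step
        val = read_strength(angle)
        if val >= best_val:
            best_angle, best_val = angle, val
        else:
            # decrease encountered: reverse and return to the maximum
            direction = -direction
            if angle != best_angle:
                angle = best_angle
                break
    return best_angle

# Synthetic strength profile peaking at 42 degrees
peak = align_turret(lambda a: -abs(a - 42))
```

The real procedure additionally averages several ADC samples per step and filters them (Section 4.1) before comparing, which makes the climb robust to measurement noise.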

Fig. 6 shows all steps of the alignment algorithm. The process starts with the reading of the ADC results. The two decision blocks labeled "done" refer to extracting a predetermined number of samples and, respectively, to re-reading the values stored in memory. Next, the measured values are processed by finding the peak amplitude of each period, applying a Kalman filter and comparing the results to each other to determine a global maximum. This global maximum is further used as the amplitude of the signal in the next steps.

The block labeled "pre\_align==0?" determines whether it is the last alignment phase of the process. If not, the algorithm determines the trend of the signal: if two consecutive increases are detected, "flag\_inc" is set and, if there has been a previous decrease, the current stage of the algorithm is finished. On the other hand, if there are two consecutive decreases, the
direction of rotation changes and, if "flag\_inc" is set, "flag\_dec" also will be set. This phase also determines and updates the highest value of the signal reached so far.

In the last phase of the alignment, the algorithm tries to trace back the position with the highest measured values. Thus, if it detects an increase of the measured signal and the current value is larger than or equal to the stored peak, the process ends and the transducers are considered aligned. If, instead, a decrease is detected, the rotation direction is changed. In all cases, if the process doesn't end, the flow of the algorithm goes back to ADC readings.

Fig. 6. Robotic alignment algorithm

#### **4.2 Performance evaluation of the alignment algorithm**

To evaluate the robotic alignment procedure, a custom simulation application has been developed using the WPF (Windows Presentation Foundation). Two cases have been considered:

1. the ultrasonic transducers are fixed on the robot case and thus the robots rotate themselves (using the wheels) during the alignment process, and


2. the current design in which the robots are equipped with turrets, rotated at 180 degrees by a servo motor.

The data collected was used to improve the alignment algorithm.

To simulate the ultrasonic signal transmission we considered an idealized representation – an isosceles triangle which represents the longitudinal cross-section of the propagation cone. This triangle will represent the "active space" of the robot, and it has an opening of 60 degrees. In this case, the power of the signal (its amplitude) varies with the distance from the bisector of the considered angle. In other words, a perfect alignment takes place when the bisectors of the receiver and transmitter concur (Fig. 7).

Fig. 7. Robotic alignment simulation tool

Because the distance between the robots is constant throughout the alignment process, its importance in the real case lies in the fact that the receiver may not "read" anything from the transmitter if it is too far away. Also, when further apart, the visibility angle broadens and more precise alignments may occur. In the application, we consider the distance a constant parameter and simulate the most usual scenarios, i.e. where the robots are not too far apart for the angle to broaden.

To be able to determine the position of the robots (of the sensors) relative to each other, we have to calculate the signal strength from both directions and then multiply the results. For the correction of the final alignment, the maximum value of the signal is determined. With the simulator, this maximum value is calculated from the initial conditions. In the real case, the best value must be searched for the signal currently measured, because the distance between robots is unknown.
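The combined signal-strength computation described above can be sketched as follows. The linear fall-off of the amplitude with angular distance from the bisector is an assumption for illustration (the chapter only states that the amplitude varies with that distance), and the function names are hypothetical:

```python
def cone_amplitude(angle_to_bisector_deg, half_opening_deg=30.0):
    """Idealized amplitude: 1.0 on the bisector of the 60-degree
    propagation cone, falling linearly to 0 at its edge (assumed
    fall-off model)."""
    off = abs(angle_to_bisector_deg)
    if off >= half_opening_deg:
        return 0.0  # outside the robot's "active space"
    return 1.0 - off / half_opening_deg

def combined_strength(tx_off_deg, rx_off_deg):
    """Signal strength calculated from both directions and then
    multiplied, as used to estimate the relative sensor positions."""
    return cone_amplitude(tx_off_deg) * cone_amplitude(rx_off_deg)

best = combined_strength(0, 0)    # bisectors concur: maximum product
worse = combined_strength(15, 0)  # transmitter 15 degrees off-axis
```

The product is maximized exactly when both bisectors concur, which matches the definition of a perfect alignment given above.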

Fig. 8 and Fig. 9 show the elapsed time for the alignment, versus the initial angles of the robots, at various ratios of the rotation speed (expressed in [rad/(s·10³)]). The thick lines mark the best two series, which minimize the alignment duration, at the corresponding rotation speed ratio.

Distance Measurement for Indoor Robotic Collectives 363


Fig. 8. Inter-robot alignment in the case of the fixed transducers (rotating robots)

Fig. 9. Inter-robot alignment in the case of the transducers mounted on the turret

For the design without turrets, most of the rotation speed ratios have cases in which the alignment time exceeds the limit imposed (60 s). They are shown on the graph as being equal to the limit. For the turret case, the sensors have to scan only half of the circle and, consequently, they find each other more quickly.

In many cases of small rotation ratios, the sensors cannot find each other, because they are "following" each other closely from behind. On the other hand, for increased values of the rotation ratios, the entire process becomes slow. Considering these facts, the simulations found an optimal rotation ratio of 1047/209 (i.e. 5/1) for the case in Fig. 8 and 1047/349 (i.e. 3/1) for the rotating turret (Fig. 9). Taking also into consideration the alignment time, its maximum value is about 27 s in the first case and 14 s in the second. These results prove that, by using the turret design, the alignment time can be reduced almost by half.


## **5. Distance measurement with the CTOF method**

CTOF, Combined Time-of-Flight, is based on the TOF technique and involves two robots. Although a little more complicated, CTOF has several advantages over the MTDOA method proposed in (Micea et al., 2010). The CTOF procedure does not require an additional (third) robot to coordinate the distance calculation, and it does not depend on the delays introduced by the wireless communication interfaces of the robots.

Fig. 10. Distance measurement with the CTOF method

Fig. 10 depicts the CTOF technique. Robot W1 initiates the procedure by sending a "START" wireless message (abbreviated "WMes" in Fig. 10) to its peer, W2. The latter acknowledges the start of its part of the procedure with the "SONAR REQ" message, while simultaneously launching its own Sonar Receive Task. As a response to the second message, W1 starts the Sonar Transmit Task and activates the timer which will count the elapsed time of the entire procedure, Δ*t*. Upon receiving the ultrasound signal, W2 activates a delay timer with a predefined value, δU, which is empirically determined to cover the total duration of the ultrasonic transmission from W1. After the δU delay, W2 sends a "SONAR START" message to W1 and starts a second timer, with a value δW empirically established to cover the maximum communication delay over the wireless link and the corresponding interfaces. When W1 receives the "SONAR START" message, it launches its Sonar Receive Task. After the δW timer expires, W2 starts its Sonar Transmit Task and sends the corresponding ultrasonic signal towards W1. Finally, when W1 receives the signal, it stops the timer to produce the Δ*t* period. As a result, the Δ*t* period contains the two predefined delays, δU and δW, and twice the propagation delay of the ultrasound signal, from W1 to W2 and backwards. Based on this ultrasound propagation delay, the distance between the two robots can be derived:

$$d = \frac{c_{\text{air}} \left(\Delta t - \delta_{U} - \delta_{W}\right)}{2} \tag{2}$$


where *c*air = 343.4 m/s is the velocity of acoustic waves in air at room temperature and at normal pressure. When considering the threshold-based detection method of the received ultrasonic bursts and the fact that the ultrasonic measurements are not perfectly linear, an additional calibration offset is needed for the distance formula in (2):

$$d = \frac{c_{\text{air}} \left(\Delta t - \delta_{U} - \delta_{W} - \delta_{UC}\right)}{2} \tag{3}$$

where δUC is the ultrasonic signal calibration offset and has an experimentally determined value (in our case studies, δUC = 290 µs).
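Equations (2) and (3) can be checked numerically with a small forward model of the Δ*t* period. In the sketch below, only *c*air = 343.4 m/s and δUC = 290 µs are taken from the text; the δU and δW delay values are illustrative assumptions, since the chapter does not list them:

```python
C_AIR = 343.4  # m/s, speed of sound used in the chapter

def ctof_elapsed_time(d_m, delta_u, delta_w):
    """Forward model of the measured period: Delta-t contains the two
    predefined delays plus twice the ultrasound propagation delay
    (W1 -> W2 and back).  Times in seconds, distance in metres."""
    return 2.0 * (d_m / C_AIR) + delta_u + delta_w

def ctof_distance(delta_t, delta_u, delta_w):
    """Equation (2): uncalibrated distance."""
    return C_AIR * (delta_t - delta_u - delta_w) / 2.0

def ctof_distance_calibrated(delta_t, delta_u, delta_w, delta_uc=290e-6):
    """Equation (3): subtract the empirical calibration offset
    delta_UC = 290 us from the measured period."""
    return C_AIR * (delta_t - delta_u - delta_w - delta_uc) / 2.0

# Round trip with assumed delays delta_U = delta_W = 10 ms:
dt = ctof_elapsed_time(1.5, 0.010, 0.010)
d2 = ctof_distance(dt, 0.010, 0.010)             # recovers 1.5 m
d3 = ctof_distance_calibrated(dt, 0.010, 0.010)  # shorter by c_air*delta_uc/2
```

The calibration offset shortens the estimate by a constant *c*air·δUC/2 ≈ 49.8 mm, which is the bias (3) removes relative to (2).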

## **6. Experimental results**

An extensive set of experiments has been conducted in the DSPLabs using the robotic system of the CORE-TX platform. The experimental setup consisted of several mobile robots, out of which two were randomly chosen to perform the alignment and the distance calculation procedures for each experiment. The robots have been placed at distances ranging from 100 mm to 3000 mm and, for each 100 mm in this range, a set of over 50 pairs of measurements has been performed. Before each measurement, the robots have been positioned in random directions with respect to each other.

Since the proposed techniques are based on Sonar and are specifically designed for indoor measurements, the experiments, evaluations and results consider normal room values for the air parameters (such as temperature, humidity, pressure, etc.). These parameters could otherwise influence the speed of ultrasonic waves used in equations (2) and (3).
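The sensitivity to air temperature can be estimated with the common first-order approximation for the speed of sound in dry air; this formula is general acoustics knowledge, not taken from the chapter:

```python
import math

def speed_of_sound(temp_c):
    """First-order approximation for dry air:
    c = 331.3 * sqrt(1 + T/273.15) m/s, with T in degrees Celsius.
    (General acoustics formula, used here only to illustrate how the
    air parameters influence equations (2) and (3).)"""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

c20 = speed_of_sound(20.0)  # close to the 343.4 m/s used in (2), (3)
```

Around room temperature the approximation agrees with the 343.4 m/s constant to within a fraction of a percent, so a few degrees of temperature drift translate into a comparable relative distance error.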

Fig. 11 shows several periods of the raw received ultrasonic signal, at the output of the LM6134 amplification circuit. Further on, the values of the signal peaks (which occur every 25 µs) are extracted and interpreted as the received Sonar signal.

Fig. 11. Amplitude of the received ultrasonic signal, after amplification
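The per-period peak extraction can be sketched as follows. For illustration, a 1 MHz sampling rate is assumed (the chapter does not state it), so each 25 µs carrier period of the 40 kHz signal spans 25 samples:

```python
import math

def period_peaks(samples, samples_per_period=25):
    """Keep one peak value per carrier period.  At 40 kHz the period
    is 25 us; an assumed 1 MHz sampling rate gives 25 samples per
    period.  The resulting peaks form the received Sonar signal."""
    return [max(samples[i:i + samples_per_period])
            for i in range(0, len(samples), samples_per_period)]

# Synthetic 40 kHz burst with a slowly rising envelope:
raw = [(0.2 + 0.01 * n) * math.sin(2 * math.pi * n / 25)
       for n in range(100)]
peaks = period_peaks(raw)  # one value per 25-sample period
```

The extracted peaks follow the envelope of the burst, which is the quantity of interest for the alignment and filtering steps that follow.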


Fig. 12. Received signal without filtering (up) and after the Kalman filtering (down)

As depicted in the upper diagram of Fig. 12, the received Sonar signal contains a relatively significant amount of noise, which can be reduced by applying a Kalman filter. As shown in the lower diagram of Fig. 12, the filter significantly reduces the random variations of the signal, while retaining its trend. Even with the filtering process, accidental variations of the signal can occur. Therefore, empirical thresholds have been specified to establish when the signal amplitude changes. Setting the threshold values is a key calibration step of the alignment procedure.
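A scalar Kalman filter of the kind applied here can be sketched as below. The random-walk signal model and the noise variances `q` and `r` are illustrative choices, not the chapter's calibrated values:

```python
def kalman_1d(measurements, q=1e-4, r=0.04):
    """Minimal scalar Kalman filter (random-walk state model) for
    smoothing the received Sonar amplitude.  q is the process noise
    variance, r the measurement noise variance; both illustrative."""
    x, p = measurements[0], 1.0   # state estimate and its variance
    out = []
    for z in measurements:
        p += q                    # predict: uncertainty grows by q
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the new reading
        p *= (1.0 - k)
        out.append(x)
    return out

noisy = [0.50, 0.58, 0.43, 0.55, 0.47, 0.52, 0.49, 0.51]
smooth = kalman_1d(noisy)
```

As in the lower diagram of Fig. 12, the filtered series varies much less than the raw readings while tracking their trend; the empirical thresholds mentioned above are then applied to this smoothed signal.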

The results for a full scan of the ultrasonic transducer, i.e. a rotation of 180 degrees, are presented in Fig. 13, both with and without applying the Kalman filter. The target robot is detected at around 93 degrees, with a maximum of the Sonar signal occurring at approximately 116 degrees.


Fig. 13. Reading of the Sonar signal for a 180 degree rotation (scan) of the transducer

Some of the most interesting experimental results for the alignment process are depicted in Fig. 14. The alignment accuracy varies even for the same distance between the robots, due to the frequent changes of the turret rotation direction and to the different time instants at which the transmitted ultrasonic signal is acquired. With few exceptions, the alignment correction angle remains lower than 10 degrees. The results presented in Fig. 14 also show the improvement of the alignment accuracy with the increase of the distance between the two robots.

Table 1 presents the distance measurement results for the proposed CTOF method. The maximum absolute error is 4.8 cm, when the robots are 3000 mm apart. The result and error analysis of the CTOF procedure show that, after the necessary calibrations, the measurement characteristics are linear and follow the real distance very closely. The method also proves to be independent of the random delays introduced by the wireless modules.


Fig. 14. Maximum variation of the correction angle for experimental alignment procedures

Table 2 presents distance measurements with the CTOF method, after applying the Kalman filter to the results. As we can see, the results are very good in terms of accuracy. The disadvantage is the measurement time, which increases for repetitive measurements. Some comparative distance evaluation results, with and without filtering the data, are depicted in detail in Fig. 15. From the experiments we conclude that the system provides optimal results for approximately 10 repetitive measurements.

Fig. 15. Repetitive CTOF distance measurements with and without Kalman filtering, for various distances between robots (100 mm – top, 1000 mm – middle, and 3000 mm – bottom)
| **Real Distance [mm]** | **CTOF Measured Min [mm]** | **CTOF Measured Average [mm]** | **CTOF Measured Max [mm]** | **Procedure Duration [µs]** |
|---|---|---|---|---|
| 100 | 92 | 96 | 99 | 20849 |
| 200 | 199 | 201 | 207 | 21461 |
| 300 | 298 | 300 | 303 | 22037 |
| 400 | 401 | 404 | 410 | 22643 |
| 500 | 504 | 508 | 515 | 23249 |
| 600 | 604 | 607 | 612 | 23825 |
| 700 | 700 | 706 | 710 | 24402 |
| 800 | 803 | 807 | 813 | 24990 |
| 900 | 906 | 911 | 916 | 25596 |
| 1000 | 1013 | 1019 | 1026 | 26225 |
| 2000 | 2024 | 2033 | 2043 | 32130 |
| 3000 | 3018 | 3031 | 3047 | 37943 |

Table 1. Distance measurement results for the CTOF method

| **Real Distance [mm]** | **Filtered Min [mm]** | **Filtered Average [mm]** | **Filtered Max [mm]** | **Procedure Duration [ms]** | **Error Reduction with Filtering [%]** |
|---|---|---|---|---|---|
| 100 | 98 | 100 | 101 | 208÷688 | 66 |
| 200 | 198 | 200 | 205 | 215÷708 | 29 |
| 300 | 298 | 300 | 301 | 220÷727 | 62 |
| 400 | 397 | 399 | 401 | 226÷747 | 50 |
| 500 | 498 | 500 | 508 | 232÷767 | 63 |
| 600 | 598 | 599 | 601 | 238÷786 | 68 |
| 700 | 697 | 699 | 703 | 244÷805 | 19 |
| 800 | 796 | 800 | 802 | 250÷825 | 56 |
| 900 | 897 | 899 | 901 | 256÷845 | 56 |
| 1000 | 997 | 1001 | 1004 | 262÷865 | 49 |
| 2000 | 1997 | 2002 | 2006 | 321÷1060 | 40 |
| 3000 | 2993 | 3000 | 3006 | 379÷1252 | 57 |

Table 2. Distance measurement results for the CTOF method with Kalman filtering


## **7. Conclusion**

This work extends the discussion on the CTOF method of inter-robot distance measurement, introduced in (Micea et al., 2010). An extended discussion has also been devoted to the prerequisite sensor (robot) alignment procedure. The custom-designed software simulation application provided the optimal ratio between the rotation speeds of the robots, or of their turrets carrying the ultrasonic transducers.

After the alignment process, the distance between two robots can be measured with the CTOF method. It has been shown that CTOF is independent of the communication propagation errors. We have also shown how the CTOF method meets the requirements of indoor, low-cost, energy-efficient robotic applications, reaching an accuracy of 4.8 cm for distances of 3 m. Furthermore, by applying the Kalman filter to repetitive distance measurements, an accuracy of 1 cm has been achieved for distances of 3 m, without the need for fixed landmarks.

The proposed distance measurement method and inter-robot alignment algorithm rely on inter-robot collaborative procedures and, therefore, these techniques are independent of fixed landmarks. Nevertheless, if the system further requires accurate localization of the mobile robots, at least the initial position of one of the robots must be known prior to the start of the system operation.

## **8. References**

Grossmann, R.; Blumenthal, J.; Golatowski, F.; Timmermann, D. (2007). Localization in Zigbee-based Sensor Networks, *Proceedings of the 1st European ZigBee Developer's Conference, EuZDC 2007*, Munchen-Dornach, Germany, 2007

Ha, Y.S.; Kim, H.H. (2004). Environmental Map Building for a Mobile Robot Using Infrared Range-Finder Sensors. *Advanced Robotics*, Vol.18, No.4, (2004), pp. 437-450

Hagisonic. (2009). *Localization System StarGazer for Intelligent Robots: User's Guide*, Hagisonic Co., Ltd., Korea, 2009, Available from http://www.hagisonic.com/

Kim, D.-E.; Hwang, K.-H.; Lee, D.-H.; Kuc, T.-Y. (2006). A Simple Ultrasonic GPS System for Indoor Mobile Robot System using Kalman Filtering, *Proceedings of the SICE-ICASE International Joint Conference*, pp. 2915-2918, Busan, Korea, 2006

Ko, S.-I.; Choi, J.-S.; Kim, B.-H. (2008). Indoor Mobile Localization System and Stabilization of Localization Performance Using Pre-Filtering. *International Journal of Control, Automation, and Systems*, Vol.6, No.2, (April 2008), pp. 204-213

Micea, M.V.; Cretu, V.I.; Groza, V. (2006). Maximum Predictability in Signal Interactions With HARETICK Kernel. *IEEE Transactions on Instrumentation and Measurement*, Vol.55, No.4, (August 2006), ISSN 0018-9456

Micea, M.V.; Stancovici, A.; Chiciudean, D.; Filote, C. (2010). Indoor Inter-Robot Distance Measurement in Collaborative Systems. *Advances in Electrical and Computer Engineering*, Vol.10, No.3, (August 2010), pp. 21-26, ISSN 1582-7445

Mohammad, T. (2009). Using Ultrasonic and Infrared Sensors for Distance Measurement. *World Academy of Science, Engineering and Technology*, Vol.51, (2009)

Novotny, P. M.; Ferrier, N. J. (1999). Using Infrared Sensor and the Phong Illumination Model to Measure Distances, *Proceedings of the 1999 IEEE International Conference on Robotics and Automation*, pp. 1644-1649, Detroit, USA, May 1999

NXP Semiconductors. (April 2008). *UM10114: LPC21xx and LPC22xx User Manual* (Rev. 03), NXP Semiconductors N. V.

Ohno, K.; Tsubouchi, T.; Shigematsu, B.; Yuta, S. (2004). Differential GPS and Odometry-Based Outdoor Navigation of a Mobile Robot. *Advanced Robotics*, Vol.18, No.6, (2004), pp. 611-635

Priyantha, N.B. (2005). *The Cricket Indoor Location System*, Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA, Jun. 2005

Reina, G.; Vargas, A.; Nagatani, K.; Yoshida, K. (2007). Adaptive Kalman Filtering for GPS-based Mobile Robot Localization, *Proceedings of IEEE International Workshop on Safety, Security and Rescue Robotics, SSRR 2007*, pp. 1-6, 2007

Reynolds, M.S. (1999). *A Phase Measurement Radio Positioning System for Indoor Use*, Master's Dissertation, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA, Feb 1999

Tamimi, H.; Andreasson, H.; Treptow, A.; Duckett, T.; Zell, A. (2006). Localization of Mobile Robots with Omnidirectional Vision Using Particle Filter and Iterative SIFT. *Robotics and Autonomous Systems*, Vol.54, 2006, pp. 758-765



Atmel Corporation. (2010). *ATxmega64A1/128A1/192A1/256A1/384A1 Preliminary Data Sheet* (Rev. M), Atmel Corporation, September 2010

Bestar Electronics. (2006). *BPU-1640IOAH12 Ultrasonic Sensor Datasheet*, Bestar Electronics Industry Co., Ltd., China, October 2006, Available from http://www.be-star.com/

Cioarga, R.D.; Micea, M.V.; Ciubotaru, B.; Chiuciudean, D.; Stanescu, D. (2006). CORE-TX: Collective Robotic Environment - the Timisoara Experiment, *Proceedings of the 3rd Romanian-Hungarian Joint Symposium on Applied Computational Intelligence, SACI 2006*, pp. 495−506, ISBN 963-7154-46-9, Timisoara, Romania, May 25-26, 2006

Cricket Project (2005). *Cricket v2 User Manual*, MIT Computer Science and Artificial Intelligence Lab, Cambridge, USA, Jan 2005

Digi International. (2009). *XBee/XBee-PRO RF Modules: Product Manual*, Digi International, Inc., USA, 2009, Available from http://www.digi.com/

Fayli, S.; Kleeman, L. (2004). A Real Time Advanced Sonar Ring with Simultaneous Firing, *Proceedings of 2004 IEEE-RSJ International Conference on Intelligent Robots and Systems*, Sendai, Japan, September - October 2, 2004

Fox, V.; Hightower, J.; Lin, L.; Schulz, D.; Borriello, G. (2003). Bayesian Filtering for Location Estimation. *IEEE Pervasive Computing*, Vol.2, No.3, (September 2003), pp. 24-33

Fuicu, S.; Marcu, M.; Stratulat, B.; Gîrban, A. (2009). Effectiveness and Accuracy of Wireless Positioning Systems. *WSEAS Transactions on Computers*, Vol.8, No.9, (September 2009), pp. 1471-1483, ISSN 1109-2750


Tower Pro. (2008). *Servomotor Information*, Specification Note, Tower Pro, January 2008, Available from http://www.towerpro.com.tw/driver/drivers/Towerpro%20servo%20spec.pdf

Welch, G.; Bishop, G. (2006). *An Introduction to the Kalman Filter*, Technical Report, TR 95-041, University of North Carolina at Chapel Hill, USA, July 24, 2006

ZigBee Standards Organization. (2007). *ZigBee Specification*, ZigBee Alliance, Inc., California, USA, Available from http://www.zigbee.org/

## **Part 5**

**Mobile Robot Operator Training**


## **Improvement of RISE Mobile Robot Operator Training Tool**

Janusz Be¸dkowski and Andrzej Masłowski *Institute of Mathematical Machines Poland*

## **1. Introduction**

This chapter presents a framework for the design of RISE (Risky Intervention and Environmental Surveillance) mobile robot operator training. The basic idea of the framework is to provide advanced software tools that improve the design of mobile robot simulations applicable to operator training. The simulator is the end product of the framework. To achieve realistic rigid body simulation and realistic rendering, the NVIDIA PhysX engine and OGRE (Object-Oriented Graphics Rendering Engine) are used. Compatibility between the framework software and CAD modeling tools such as SolidWorks was essential; therefore virtual models are exchanged using the COLLADA format. The simulator can be integrated with the control panel of the mobile robot, and it is possible to simulate the behavior of several types of robots, such as caterpillar or wheeled ones.

The improvement presented in this chapter concerns the problem of automatic environment model generation based on the observations of an autonomous mobile robot. A semantic mapping approach is used, and consequently a semantic simulation engine is implemented. The semantic simulation engine is composed of a data registration module, a semantic entities identification module, and an environment model generation module. The presented result is a new approach to mobile robot applications that can potentially improve the process of advanced mobile robot application design and the professional training of operators.

Training mobile robot operators is a very difficult task, not only because of the complexity of the mobile robot but also because of several factors related to the execution of different tasks. Studies of human-robot interaction (HRI) during real rescues are presented in Casper & Murphy (2003) and Bedkowski, Kowalski & Piszczek (2009). Several robotic systems have been developed for rescue tasks (Wei et al., 2009; Zhang et al., 2009). Research on simulation and training systems for mobile robots is presented in Xuewen et al. (2006). In robotics applied to medical purposes such as surgery, simulation and training environments are described in Schlaefer et al. (2008). Learning via the Web is a common application, and a remote experiment system for distance training is discussed in Hong et al. (2007). Virtual labs have become meaningful and very popular in distance education in engineering-oriented fields of study, since they allow interesting experiments to be performed remotely over the Internet on the equipment in the labs (Masar et al., 2004). Advanced computer techniques such as augmented reality applied to robotics training are shown in Kowalski et al. (2008) and Bravo & Alberto (2009). The simulation environment used as a testbed for a simulation-based approach for


unmanned system command and control is demonstrated in Drewes (2006). Research related to training and hypothesis testing through interactions with unmanned systems using computer-mediated gesture recognition is shown in Varcholik et al. (2008), where the presented methodology employs the Nintendo Wii Remote Controller (Wiimote) to retrieve and classify one- and two-handed gestures mapped to an unmanned system command set. Modeling and simulation for the mobile robot operator training tool was presented in Bedkowski, Piszczek, Kowalski & Maslowski (2009).

Semantic information (Asada & Shirai, 1989) extracted from 3D laser data (Nüchter et al., 2005) is a recent research topic in modern mobile robotics. In Nüchter & Hertzberg (2008) a semantic map for a mobile robot is described as a map that contains, in addition to spatial information about the environment, assignments of mapped features to entities of known classes. In Grau (1997) a model of an indoor scene is implemented as a semantic net. This approach is used in Nüchter, Surmann, Lingemann & Hertzberg (2003), where a robot extracts semantic information from 3D models built from a laser scanner. In Cantzler et al. (2002) the location of features is extracted using a probabilistic technique, RANSAC (RANdom SAmple Consensus) (Fischler & Bolles, 1980). The region growing approach of Eich et al. (2010), extended from Vaskevicius et al. (2007) by efficiently integrating k-nearest neighbor (KNN) search, is also able to process unorganized point clouds. An improvement of plane extraction from 3D data by fusing laser data and vision is shown in Andreasson et al. (2005). Automatic model refinement of 3D scenes is introduced in Nüchter, Surmann & Hertzberg (2003), where feature (plane) extraction is also done with RANSAC. Semantic map building is related to the SLAM problem (Oberlander et al., 2008; Se & Little, 2002; Davison et al., 2004). Most recent SLAM techniques use a camera (Castle et al., 2010; Williams & Reid, 2010; Andreasson, 2008), a laser measurement system (Pedraza et al., 2007; Thrun et al., 2000), or even registered 3D laser data (Magnusson, Andreasson, Nüchter & Lilienthal, 2009). Concerning the registration of 3D scans, several techniques solving this important issue are described in Magnusson et al. (2007) and Andreasson & Lilienthal (2007). The authors of Besl & McKay (1992) describe the ICP algorithm, and in Hähnel & Burgard (2002) a probabilistic matching technique is proposed.
In Magnusson, Nüchter, Lörken, Lilienthal & Hertzberg (2009) a comparison of the ICP and NDT (Normal Distributions Transform) algorithms is shown. In Rusu et al. (2008) a mapping system is shown that acquires 3D object models of man-made indoor environments such as kitchens. The system segments and geometrically reconstructs cabinets with doors, tables, drawers, and shelves, objects that are important for robots retrieving and manipulating objects in these environments.

In this chapter the application of the framework to RISE mobile robot operator training design is presented. The framework was developed to improve training development using advanced software tools. The semantic simulation engine is proposed for automatic generation of the environment model (composed of walls, doors, and stairs) and for task execution with mobile robot supervision. As a result, task execution for a training example with a RISE robot is demonstrated, and examples of automatically generated scenes are shown. It should be noted that automatic generation of the robot environment is still an open problem and needs further development.

## **2. Framework overview**

The platform is going to support specialized tools for designing physical models, spatial models, and others. The standard implemented and used in the platform accepts COLLADA and PhysX files for physical models. For spatial models, the 3ds format is considered


the best choice. Physical models can be developed using commercially available tools such as SolidWorks with a COLLADA plug-in, or Maya with the ColladaMaya plug-in. It is also possible to use a free tool under the GNU GPL license, such as Blender with a COLLADA plug-in. Similarly, for designing spatial models one can use Autodesk 3ds Max, a non-free product whose native file standard is 3ds, or its free competitor Blender, which can handle 3ds files as well. Utilization of the presented tools ensures long-term support for the designed platform. The framework consists of several software tools for the development of training components. Figure 1 shows the scheme of the expected end product of the framework - RISE robot operator training.

Fig. 1. The scheme of the expected end product of the framework - RISE robot operator training.

The training is composed of n nodes. Each node consists of a robot model, a console model, an environment model, and a task model. The framework provides software tools for model design and integration: Robot Builder, Environment Builder, Console Builder, Task Builder, and Training Builder. The main concept, and the main achievement, is the limitation of programmer effort during training design (all activities can be done using the framework's advanced GUI applications); the process of model design and integration is thereby improved, and time and potential costs are decreased. The training can be performed simultaneously for several operators, so it is possible to build a training classroom for multiple-operator training. To demonstrate this, the single training architecture has to be discussed. Figure 2 shows the software architecture of single training.
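As a toy illustration of this structure (class and file names are hypothetical, not the framework's actual format), a training node bundling the four models could look like:

```python
from dataclasses import dataclass

# Hypothetical sketch: one training node bundles the four models named above.
@dataclass
class TrainingNode:
    robot_model: str        # path to the robot model description
    console_model: str      # path to the control-console model description
    environment_model: str  # path to the environment model description
    task_model: str         # path to the task definition

@dataclass
class Training:
    nodes: list  # a training is composed of n such nodes

training = Training(nodes=[
    TrainingNode("robot.xml", "console.xml", "env.xml", "task.xml"),
])
print(len(training.nodes))  # → 1
```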

Fig. 2. Software architecture of single training.

To perform a single training, 2 PCs with the framework software have to be taken into account. The operator PC runs the task manager and task executor programs. These programs are responsible for proper task execution and for sending the result to the instructor PC, where the training process is supervised with the training manager program. The instructor can observe the operator's behavior using the training viewer software, which visualizes the simulated scene. For


multiple training, the instructor PC can use separate training manager and training viewer programs to perform single trainings on several operator PCs simultaneously.

## **2.1 Model of a robot**

The virtual model of the robot is the basic element of a simulation. Figure 3 presents the main elements of the robot model.

Fig. 3. Scheme of components of virtual model of robot.

The virtual model of the robot is composed of the following models:

a) physical (represents mechanical properties of a designed robot),

b) spatial (represents what is visualized),

c) motors/actuators (represents all driven parts of a robot),

d) sound (represents a set of sounds associated with the model of motors/actuators),

e) sensors (represents a set of sensors mounted onto a robot).

Each listed model is described in a single file; therefore, the full virtual model of a robot requires five files plus the information about their interaction. The framework offers a tool called Robot Builder for designing a robot. The window of the application is presented in Figure 4.
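For illustration, the five-file decomposition could be represented as follows (field and file names are hypothetical, not Robot Builder's actual format):

```python
from dataclasses import dataclass

# Hypothetical sketch of the five sub-models listed above; names are ours.
@dataclass
class VirtualRobotModel:
    physical: str      # a) mechanical properties of the designed robot
    spatial: str       # b) what is visualized
    actuators: str     # c) all driven parts of the robot
    sound: str         # d) sounds associated with the motors/actuators
    sensors: str       # e) sensors mounted on the robot
    interactions: str  # how the five sub-models interact

    def model_files(self):
        # the five per-model files; the interaction description comes on top
        return [self.physical, self.spatial, self.actuators,
                self.sound, self.sensors]

robot = VirtualRobotModel("robot.phys", "robot.mesh", "robot.act",
                          "robot.snd", "robot.sens", "robot.link")
print(len(robot.model_files()))  # → 5
```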

Fig. 4. Robot Builder: top left - physical model, top right - spatial model, bottom left - sensor model (cameras), bottom right - sensor model (lights).

## **2.2 Model of an environment**


The virtual model of an environment, similarly to the models of the control panel and the robot, consists of the following elements: a physical model, a spatial model, and a sound model. The scheme shown in Figure 5 presents the main elements of an environment.

Fig. 5. Composition of a virtual model of an environment.

The environment in the simulator is represented by the Virtual Model of Environment. It is accepted that this model will be designed in a similar manner to the virtual model of the robot described previously. The platform supplies a tool which can support a designer in the process of creating the environment. It will be possible to utilize components from the Components Base. This Base will consist of a set of sample models such as buildings, furniture (chairs, closets, tables, etc.), vehicles, trees, and different kinds of ground (i.e. soil, road, grass) (see Figure 6). All objects from the Base will have physical and spatial properties. Apart from the physical properties, there will be a spatial model attached and visualized. Special effects such as particle systems, fog, animation, and water can also be integrated into the scene.

Fig. 6. Environment of the robot.

## **2.3 Model of a control panel**

The virtual model of a control panel is the second basic element of a simulation. Figure 7 presents the main elements of a control panel. It is assumed that a control panel consists of joysticks, buttons, switches, and displays. The control panel communicates with a robot via a communication module or directly, depending on whether a real or a virtual panel is in use. Since the control panel can be virtual or real, the trainee can become accustomed to the real device by controlling the virtual robot using the real control panel. The described technology is presented in Figure 8.

Fig. 7. Composition of necessary elements of a control panel.

Fig. 8. Composition of necessary elements of a control panel and robot communication.

## **2.4 Task**

In order to design a Task, the following models are required: a robot model, a control panel model, and an environment model. A training designer has the ability to set the beginning position of the robot and then to define all mission events he considers important. Next, he can define time thresholds for the mission. Such an e-Task is then exported and can be used to prepare the full multilevel training. The task designing steps are shown in Figure 9. For this purpose, the framework supplies the e-Tasks Generator. The tool supplies the base of events: the task designer is able to move about the scene with the robot and environment loaded, then choose one actor or a group of actors and attach an event with specific parameters. The possible events defined for a single Task are: a) robot path to a chosen location, b) time exceeded, c) move an object to a chosen location, d) touch an object, e) damage/disable the robot, f) neutralize/destroy an object (e.g. shoot with a water gun, place explosives).
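For illustration, the six event types could be encoded as follows (the identifier names and parameters are hypothetical):

```python
from enum import Enum, auto

# Hypothetical encoding of the six event types listed above (names are ours).
class TaskEvent(Enum):
    REACH_LOCATION = auto()     # a) robot path to a chosen location
    TIME_EXCEEDED = auto()      # b) time threshold exceeded
    MOVE_OBJECT = auto()        # c) move an object to a chosen location
    TOUCH_OBJECT = auto()       # d) touch an object
    ROBOT_DISABLED = auto()     # e) damage/disable the robot
    NEUTRALIZE_OBJECT = auto()  # f) neutralize/destroy an object

# An e-Task is then a list of events with parameters attached to scene actors.
etask = [
    (TaskEvent.MOVE_OBJECT, {"actor": "hazardous_object", "target": "disposal_zone"}),
    (TaskEvent.TIME_EXCEEDED, {"limit_s": 600}),
]
print([e.name for e, _ in etask])  # → ['MOVE_OBJECT', 'TIME_EXCEEDED']
```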

Fig. 9. Task designing steps.

## **2.5 Training**

When a designer gets through the previous steps, the last thing required is to prepare the training. Using the previously defined Tasks, one can prepare a graph of the training. A sample graph is presented in Figure 10. The training starts with Mission 1 and leads through two or three subsequent missions. If condition *C*<sup>3</sup> is satisfied, Mission 2 is not required; otherwise, if condition *C*<sup>2</sup> is satisfied, the training path leads through Mission 2. Under conditions *C*<sup>1</sup>, *C*<sup>4</sup>, *C*<sup>5</sup>, and *C*<sup>8</sup> the appropriate missions are repeated. At the end of each training a summary is presented on the screen and saved to a file. The file is imported into the training manager. Figure 11 shows the training manager GUI, which informs the user about the current task report; the training report is also available for further task execution. The training manager can visualize the simulated scene for analysis purposes. Figure 12 shows the executed task taskMETRO, whose main objective was to transport a hazardous object to a defined place of neutralization.

Fig. 10. Scheme of the training graph. Each node must be one of three types: start, end, or mission; a transition to the next mission is represented by an arrow and is triggered under a specific condition denoted as *C*<sup>1</sup> ... *C*<sup>9</sup>.
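The mission graph of Fig. 10 can be sketched as a small state machine (a simplified illustration with hypothetical condition names; in this sketch a mission repeats when no outgoing condition fires):

```python
# Hypothetical sketch of the training graph: nodes are missions plus
# start/end, edges fire on conditions C1..C9 evaluated from task results.
class TrainingGraph:
    def __init__(self):
        # edges[node] = list of (condition_name, next_node)
        self.edges = {}

    def add_edge(self, src, condition, dst):
        self.edges.setdefault(src, []).append((condition, dst))

    def step(self, node, satisfied):
        # follow the first outgoing edge whose condition is satisfied
        for cond, dst in self.edges.get(node, []):
            if cond in satisfied:
                return dst
        return node  # no condition satisfied: the current mission repeats

g = TrainingGraph()
g.add_edge("start", "C0", "mission1")     # assumed unconditional start edge
g.add_edge("mission1", "C3", "mission3")  # C3 satisfied: Mission 2 is skipped
g.add_edge("mission1", "C2", "mission2")  # otherwise C2 leads through Mission 2
g.add_edge("mission2", "C6", "mission3")  # hypothetical condition name
g.add_edge("mission3", "C9", "end")

print(g.step("mission1", {"C3"}))  # → mission3
```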


Fig. 11. Training manager GUI.

## **3. Framework improvement**

The framework for robot operator training design is improved by the semantic simulation engine. The semantic simulation engine J. Bedkowski (2011b) is a project whose main idea is to improve the state of the art in the area of semantic robotics Asada & Shirai (1989) Nüchter & Hertzberg (2008) Grau (1997), with a focus on practical applications such as robot supervision and control, semantic map building, robot reasoning, and robot operator training using augmented reality techniques Kowalski et al. (2011) J. Bedkowski (2011a). It combines a semantic map with rigid body simulation to perform supervision of its entities, such as robots moving in INDOOR or OUTDOOR environments composed of floors, ceilings, walls, doors, trees, etc.

**Algorithm 1** Classic ICP

INPUT: Two point clouds A = {*ai*}, B = {*bi*}, an initial transformation *T*<sup>0</sup>

OUTPUT: The correct transformation *T*, which aligns A and B

*T* ← *T*<sup>0</sup>
**for** *iter* ← 0 to *maxIterations* **do**
    **for** *i* ← 0 to *N* **do**
        *mi* ← FindClosestPointInA(*T* · *bi*)
        **if** ‖*mi* − *T* · *bi*‖ ≤ *dmax* **then** *wi* ← 1 **else** *wi* ← 0 **end if**
    **end for**
    *T* ← argmin<sub>*T*</sub> ∑*i* *wi* ‖*T* · *bi* − *mi*‖<sup>2</sup>
**end for**

**Algorithm 2** ICP - parallel computing approach

INPUT: Two point clouds M = {*mi*}, D = {*di*}, an initial transformation *T*<sup>0</sup>

OUTPUT: The correct transformation *T*, which aligns M and D

*Mdevice* ← *M*, *Ddevice* ← *D*, *Tdevice* ← *T*<sup>0</sup>
**for** *iter* ← 0 to *maxIterations* **do**
    **for** *i* ← 0 to *N* {in parallel} **do**
        *mi* ← FindClosestPointInM(*Tdevice* · *di*) {using regular grid decomposition}
        **if** *foundClosestPointInNeighboringBuckets* **then** *wi* ← 1 **else** *wi* ← 0 **end if**
    **end for**
    *Tdevice* ← argmin<sub>*Tdevice*</sub> ∑*i* *wi* ‖*T* · *di* − *mi*‖<sup>2</sup> {calculation of *T* ← **R**, **t** with SVD}
**end for**
*M* ← *Mdevice*, *D* ← *Ddevice*, *T* ← *Tdevice*

Fig. 12. Task taskMETRO execution. The goal is to transport the hazardous object to the appropriate location.

### **3.1 Semantic simulation engine**

The semantic simulation engine is composed of data registration modules, semantic entities identification (data segmentation) modules, and a semantic simulation module. It provides tools to implement mobile robot simulation based on real data delivered by the robot and processed on-line using parallel computation. The semantic entities identification modules can classify several objects in INDOOR and OUTDOOR environments. Data can be delivered by robot observation based on modern sensors such as 3D laser measurement systems and RGB-D cameras. Real objects are associated with virtual entities of the simulated environment. The concept of semantic simulation is a new idea, and its strength lies in the integration of the semantic map with the mobile robot simulator.

### **3.2 Data registration**

The data registration module provides accurate alignment of robot observations. Aligning two range images with respect to the reference coordinate system is performed by the ICP (Iterative Closest Points) algorithm. The range images are defined as a model set *M* and a data set *D*; *Nm* and *Nd* denote the number of elements in the respective sets. The alignment of these two data sets is solved by minimization, with respect to **R** and **t**, of the following cost function:

$$E\left(\mathbf{R},\mathbf{t}\right) = \sum\_{i=1}^{N\_m} \sum\_{j=1}^{N\_d} w\_{ij} \left\| \mathbf{m}\_i - \left(\mathbf{R}\mathbf{d}\_j + \mathbf{t}\right) \right\|^2 \tag{1}$$

*wij* is assigned 1 if the *i*-th point of *M* corresponds to the *j*-th point of *D*, and *wij* = 0 otherwise. **R** is a rotation matrix and **t** is a translation vector. **m***<sub>i</sub>* and **d***<sub>j</sub>* denote the *i*-th point from the model set *M* and the *j*-th point from the data set *D*, respectively. The key concept of the standard ICP algorithm can be summarized in two steps Segal et al. (2009):
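With the correspondences (and hence the weights *wij*) fixed, the minimizing **R** and **t** of (1) have a standard closed-form SVD solution (this is the "T ← **R**, **t** with SVD" step mentioned in Algorithm 2); a sketch, using weighted centroids of the matched points:

$$\bar{\mathbf{m}} = \frac{\sum_{i,j} w_{ij}\,\mathbf{m}_i}{\sum_{i,j} w_{ij}}, \qquad \bar{\mathbf{d}} = \frac{\sum_{i,j} w_{ij}\,\mathbf{d}_j}{\sum_{i,j} w_{ij}}$$

$$\mathbf{H} = \sum_{i,j} w_{ij}\,(\mathbf{d}_j - \bar{\mathbf{d}})(\mathbf{m}_i - \bar{\mathbf{m}})^{\top} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^{\top}$$

$$\mathbf{R} = \mathbf{V}\,\mathrm{diag}\big(1,\,1,\,\det(\mathbf{V}\mathbf{U}^{\top})\big)\,\mathbf{U}^{\top}, \qquad \mathbf{t} = \bar{\mathbf{m}} - \mathbf{R}\,\bar{\mathbf{d}}$$

where the diag(1, 1, det(**VU**<sup>⊤</sup>)) factor guards against reflections.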

1) compute correspondences between the two scans (Nearest Neighbor Search).

2) compute a transformation which minimizes distance between corresponding points.

Iteratively repeating these two steps should result in convergence to the desired transformation. Because the assumption of full overlap is violated, a maximum matching threshold *dmax* must be added. This threshold accounts for the fact that some points will not have any correspondence in the second scan. In most implementations of ICP, the choice of *dmax* represents a trade-off between convergence and accuracy: a low value results in poor convergence, while a large value causes incorrect correspondences to pull the final alignment away from the correct value. Classic ICP is listed as Algorithm 1 and the ICP algorithm using CUDA parallel programming is listed as Algorithm 2. Parallel programming applied to the ICP computation results in an average time of 300 ms for 30 ICP iterations. The proposed solution is efficient because it performs the nearest neighbor search using a bucket data structure (sorted with a CUDA sorting primitive) and computes the correlation matrix using the parallel CUDA all-prefix-sum instruction.

We can consider the data registration module as a component executing 6D SLAM (Simultaneous Localization and Mapping), where 6D refers to the vector describing the robot pose in 3D (x, y, z, yaw, pitch, roll). The 6D SLAM algorithm executes three steps: a) ICP alignment of two robot observations, b) loop closing, c) map relaxation. An example result of the data registration module is shown in figure 13.
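For illustration, a 6D pose vector can be converted into the homogeneous transform consumed by ICP as follows; this is a hedged sketch, and the Rz·Ry·Rx yaw-pitch-roll convention is an assumption, since the text does not fix one:

```python
import numpy as np

def pose_to_matrix(x, y, z, yaw, pitch, roll):
    """Convert a 6D pose (x, y, z, yaw, pitch, roll) into a 4x4 homogeneous
    transform. Assumed rotation convention: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx      # rotation block
    T[:3, 3] = [x, y, z]          # translation block
    return T
```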

**Algorithm 1** Classic ICP


    INPUT:  two point clouds A = {a_i}, B = {b_i}, an initial transformation T_0
    OUTPUT: the transformation T which aligns A and B
    T ← T_0
    for iter ← 0 to maxIterations do
        for i ← 0 to N do
            m_i ← FindClosestPointInA(T · b_i)
            if ‖m_i − T · b_i‖ ≤ d_max then
                w_i ← 1
            else
                w_i ← 0
            end if
        end for
        T ← argmin_T Σ_i w_i ‖T · b_i − m_i‖²
    end for
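The classic loop can be sketched in Python as follows (a NumPy-based sketch, not the chapter's implementation: a brute-force nearest-neighbour search stands in for the grid buckets of the parallel variant, and the pose update uses the closed-form SVD solution; function names are illustrative):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares R, t mapping src onto dst (Kabsch/SVD)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(H.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def icp(A, B, d_max=0.5, max_iter=30):
    """Align B onto A; returns R, t with B @ R.T + t ≈ A."""
    dim = A.shape[1]
    R, t = np.eye(dim), np.zeros(dim)
    for _ in range(max_iter):
        Bt = B @ R.T + t
        # correspondences: closest point in A for each transformed b_i
        d2 = ((Bt[:, None, :] - A[None, :, :]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)
        # w_i = 1 if the match is within d_max, else 0 (point is dropped)
        keep = np.sqrt(d2[np.arange(len(B)), idx]) <= d_max
        if keep.sum() < 3:
            break
        R, t = best_rigid_transform(B[keep], A[idx[keep]])
    return R, t
```

As in Algorithm 1, the absolute transformation is re-estimated from the original cloud at every iteration rather than composed incrementally.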

#### **Algorithm 2** ICP - parallel computing approach

    INPUT:  two point clouds M = {m_i}, D = {d_i}, an initial transformation T_0
    OUTPUT: the transformation T which aligns M and D
    M_device ← M;  D_device ← D;  T_device ← T_0
    for iter ← 0 to maxIterations do
        for i ← 0 to N do {in parallel}
            m_i ← FindClosestPointInM(T_device · d_i)   {using regular grid decomposition}
            if foundClosestPointInNeighboringBuckets then
                w_i ← 1
            else
                w_i ← 0
            end if
        end for
        T_device ← argmin_{T_device} Σ_i w_i ‖T_device · d_i − m_i‖²   {T ← R, t computed with SVD}
    end for
    M ← M_device;  D ← D_device;  T ← T_device
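The regular grid decomposition behind the nearest-neighbour step can be sketched sequentially in Python (the CUDA sorting and all-prefix-sum machinery is omitted; the cell size and helper names are assumptions, with the cell chosen ≥ *dmax* so that any valid correspondence lies in the bucket of the query point or one of its immediate neighbours):

```python
import math
from collections import defaultdict

def build_grid(points, cell):
    """Hash each 3D point into its grid bucket; key = integer cell coordinates."""
    grid = defaultdict(list)
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        grid[key].append(p)
    return grid

def closest_in_neighboring_buckets(grid, q, cell):
    """Search only the bucket containing q and its 26 neighbours.
    Returns (point, dist), or (None, inf) if those buckets hold no point —
    the 'foundClosestPointInNeighboringBuckets' test of Algorithm 2."""
    kx, ky, kz = (int(math.floor(c / cell)) for c in q)
    best, best_d = None, float("inf")
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for p in grid.get((kx + dx, ky + dy, kz + dz), ()):
                    d = math.dist(p, q)
                    if d < best_d:
                        best, best_d = p, d
    return best, best_d
```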

Fig. 14. Semantic net defined for semantic entities identification.

Fig. 15. Left - segmentation of 3D cloud of points, right - boxes that contain measured points.

Fig. 16. Scene interpretation: left - door, walls; right - stairs.

Improvement of RISE Mobile Robot Operator Training Tool 385


Fig. 13. Example of a result of data registration module. A - odometry with gyroscope correction, B - ICP, C - loop closed, D - final 3D map. Data was acquired in Royal Military Academy building (Brussels, Belgium) using PIONEER 3AT robot equipped with 3DLSN unit (rotated LMS200).

### **3.3 Semantic entities identification**

Prerequisites are generated using image processing methods. The set of lines obtained by the Hough transform from the 3D points projected onto the 2D OXY plane is used to obtain a segmentation of the cloud of points, in which different walls have different labels. For each line segment the plane *planeorth* orthogonal to *planeOXY* is computed; the intersection of these two planes is the same line segment. All 3D points that satisfy the distance condition to *planeorth* receive the same label. In the first step all prerequisites of walls are checked separately - this is the data segmentation. To perform the scene interpretation, a semantic net is used (figure 14). The feature detection algorithm is based on a cube generation method, where each cube should contain measured 3D points after segmentation (see figure 15). In the second step of the algorithm wall candidates are chosen. From this set of candidates, based on the relationships between them, proper labels are assigned and the output model is generated (see figure 16, left). The image processing methods are also used for stairs prerequisites generation: a set of parallel lines (obtained by projecting a single 3D scan onto the OXY plane), each at the same short distance from the next, is a prerequisite of stairs. The possible labels of the nodes are L = {stair}. The relationships between the entities are R = {parallel, above, under}. Figure 16, right, shows the resulting model of stairs generated from a 3D cloud of points. In this spatial model each stair (except, obviously, the first and last) is in relation r=above&parallel with the previous one and in relation r=under&parallel with the next one.
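The distance-to-plane labeling condition can be sketched as follows (a hypothetical threshold and illustrative line parameters; the Hough transform step that supplies the line segments is omitted):

```python
import math

def label_wall_points(points, line_p0, line_p1, dist_thresh=0.05, label=1):
    """Label 3D points lying near the vertical plane (orthogonal to OXY)
    that passes through the 2D line segment (line_p0, line_p1).
    points: iterable of (x, y, z); returns a list with `label` or 0 per point."""
    (x0, y0), (x1, y1) = line_p0, line_p1
    # Plane through the segment, orthogonal to OXY: normal = (dy, -dx, 0), normalized
    dx, dy = x1 - x0, y1 - y0
    n = math.hypot(dx, dy)
    nx, ny = dy / n, -dx / n
    labels = []
    for x, y, z in points:
        # distance to the vertical plane = 2D distance to the supporting line
        d = abs((x - x0) * nx + (y - y0) * ny)
        labels.append(label if d <= dist_thresh else 0)
    return labels
```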

### **3.4 Environment model generation**

The model of the environment is automatically generated, using the semantic net, from the set of identified semantic entities. The basic engine elements for an INDOOR environment are: semantic map nodes (entities) *Lsm* = {Wall, Wall above door, Floor, Ceiling, Door, Free space for door, Stairs...} (the *Lsm* set can be extended with other objects, depending on robust and accurate 3D scene analysis), robot simulator nodes (entities) *Lrs* = {robot, rigid body object, soft body object...}, semantic map relationships between the entities *Rsm* = {parallel, orthogonal, above, under, equal height, available inside, connected via joint...}, and robot simulator relationships


between the entities *Rrs* = {connected via joint, position...}, semantic map events *Esm* and robot simulator events *Ers* = {movement, collision between two entities started, collision between two entities stopped, collision between two entities continued, broken joint...}. The robot simulator is implemented using the NVIDIA PhysX library. The entities from the semantic map correspond to actors in PhysX. *Lsm* is transformed into *Lrs* based on the spatial model generated from the registered 3D scans, and *Rsm* is transformed into *Rrs*. All entities/relations from *Rsm* have the same initial location in *Rrs*; obviously the location of each actor/entity may change during simulation, therefore an accurate object tracking system is needed. The transformation from *Esm* to *Ers* ensures that events related to entities from the semantic map correspond to the events related to the actors representing the proper entities. The following events can be observed during simulation: the robot can touch each entity (see figure 17), open/close the door and enter the empty space of the door (see figure 18), climb the stairs (see figure 19), damage itself - a broken joint between actors in the robot arm (see figure 20) - or break the joint that connects the door to the wall.
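The *Lsm* → *Lrs* transformation can be pictured as a simple lookup (the mapping table and dictionary layout below are illustrative, not the chapter's implementation; entity names are taken from the text):

```python
# Illustrative sketch: transform semantic-map entities into simulator actors.
SM_TO_RS = {
    "Wall": "rigid body object",
    "Wall above door": "rigid body object",
    "Floor": "rigid body object",
    "Ceiling": "rigid body object",
    "Stairs": "rigid body object",
    "Door": "rigid body object",   # connected via joint to its wall
}

def transform_semantic_map(sm_entities):
    """sm_entities: list of dicts {"label": Lsm name, "pose": ...}.
    Returns simulator actors; each actor keeps the entity's initial location."""
    actors = []
    for e in sm_entities:
        actors.append({
            "actor_type": SM_TO_RS.get(e["label"], "rigid body object"),
            "pose": e["pose"],            # same initial location in the simulator
            "source_label": e["label"],   # back-reference for event mapping
        })
    return actors
```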

Fig. 17. Simulated robot touches entity.

Fig. 18. Simulated robot enters empty space of the door.


Fig. 19. Simulated robot climbs the stairs.

Fig. 20. Simulated robot damages itself.

## **4. Conclusion and future work**

The chapter presents a framework for RISE mobile robot operator training design, with an improvement based on the semantic simulation engine. The framework is composed of advanced software tools that improve the performance of mobile robot simulation design applicable to operator training. The simulator is the end product of the framework. The effort needed to build the simulation software is decreased by the developed programs, which allow several models, such as the robot model and the environment model, to be built. The framework can help in building a RISE robot training classroom where several tasks are executed simultaneously, so multiple training sessions can be performed. We believe the developed software can also help in multi robot system design and development by providing advanced techniques for the simulation process; therefore the framework is applicable in several mobile robotics applications. A new concept, the semantic simulation engine, composed of data registration modules, semantic entities identification modules and a semantic simulation module, is demonstrated. The implementation of parallel computing applied to 6D SLAM, especially for data registration based on regular grid decomposition, is shown. The semantic simulation engine provides tools to implement mobile robot simulation based on real data delivered by the robot and processed on-line using parallel computation. The semantic entities identification modules can classify doors, walls, floors, ceilings and stairs in indoor environments. Semantic simulation uses NVIDIA PhysX for rigid body simulation. Future work will be related to AI techniques applied to semantic entities identification (furniture, victims, cars, etc.), localization and tracking methods.

Concerning 6D SLAM, semantic loop closing techniques will be taken into consideration as promising methods delivering conceptual reasoning.

## **5. References**

Andreasson, H. (2008). *Local Visual Feature based Localisation and Mapping by Mobile Robots*, Doctoral thesis, Orebro University, School of Science and Technology.

Andreasson, H. & Lilienthal, A. J. (2007). Vision aided 3d laser based registration, *Proceedings of the European Conference on Mobile Robots (ECMR)*, pp. 192–197.

Andreasson, H., Triebel, R. & Burgard, W. (2005). Improving plane extraction from 3d data by fusing laser data and vision, *Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pp. 2656–2661.

Asada, M. & Shirai, Y. (1989). Building a world model for a mobile robot using dynamic semantic constraints, *Proc. 11th International Joint Conference on Artificial Intelligence*, pp. 1629–1634.

Bedkowski, J., Kowalski, P. & Piszczek, J. (2009). Hmi for multi robotic inspection system for risky intervention and environmental surveillance, *Mobile Robotics: Solutions and Challenges, Proceedings of the Twelfth International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines*.

Bedkowski, J., Piszczek, J., Kowalski, P. & Maslowski, A. (2009). Modeling and simulation for the mobile robot operator training tool, *The 5th International Conference on Information and Communication Technology and Systems (ICTS), Surabaya, Indonesia*, pp. 229–236.

Besl, P. J. & Mckay, H. D. (1992). A method for registration of 3-d shapes, *Pattern Analysis and Machine Intelligence, IEEE Transactions on* 14(2): 239–256.

Bravo, J. & Alberto, C. (2009). An augmented reality interface for training robotics through the web, *Proceedings of the 40th International Symposium on Robotics, Barcelona: AER-ATP, ISBN 978-84-920933-8-0*, pp. 189–194.

Cantzler, H., Fisher, R. B. & Devy, M. (2002). Quality enhancement of reconstructed 3d models using coplanarity and constraints, *Proceedings of the 24th DAGM Symposium on Pattern Recognition*, Springer-Verlag, London, UK, pp. 34–41.

Casper, J. & Murphy, R. R. (2003). Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center, *IEEE Transactions on Systems, Man, and Cybernetics, Part B, Cybernetics* 33(3): 367–385. URL: *http://dx.doi.org/10.1109/TSMCB.2003.811794*

Castle, R. O., Klein, G. & Murray, D. W. (2010). Combining monoslam with object recognition for scene augmentation using a wearable camera, 28(11): 1548–1556.

Davison, A., Cid, Y. G. & Kita, N. (2004). Real-time 3D SLAM with wide-angle vision, *Proc. IFAC Symposium on Intelligent Autonomous Vehicles, Lisbon*.

Drewes, P. (2006). Simulation based approach for unmanned system command and control, *Conference on Recent Advances in Robotics, FCRAR Miami, Florida*, pp. 189–194.

Eich, M., Dabrowska, M. & Kirchner, F. (2010). Semantic labeling: Classification of 3d entities based on spatial feature descriptors, *IEEE International Conference on Robotics and Automation (ICRA 2010), Anchorage, Alaska, May 3*.

Fischler, M. A. & Bolles, R. (1980). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, *Proc. 1980 Image Understanding Workshop (College Park, Md., Apr. 1980), L. S. Baumann, Ed., Science Applications, McLean, Va.*, pp. 71–88.

Grau, O. (1997). A scene analysis system for the generation of 3-d models, *NRC '97: Proceedings of the International Conference on Recent Advances in 3-D Digital Imaging and Modeling*, IEEE Computer Society, Washington, DC, USA, p. 221.

Hähnel, D. & Burgard, W. (2002). Probabilistic matching for 3D scan registration, *Proc. of the VDI-Conference Robotik 2002 (Robotik)*.

Hong, S. H., Park, J. H., Kwon, K. H. & Jeon, J. W. (2007). A distance learning system for robotics, *Lecture Notes in Computer Science, Computational Science ICCS 2007* 4489/2007: 523–530.

J. Bedkowski, A. M. (2011a). Methodology of control and supervision of web connected mobile robots with cuda technology application, *Journal of Automation, Mobile Robotics and Intelligent Systems* 5(2): 3–11.

J. Bedkowski, A. M. (2011b). Semantic simulation engine for mobile robotic applications, *Pomiary, Automatyka, Robotyka 2/2011, Automation 2011, 6-8 April, Warsaw*, pp. 333–343.

Kowalski, G., Bedkowski, J., Kowalski, P. & Maslowski, A. (2011). Computer training with ground teleoperated robots for de-mining, *Using robots in hazardous environments: Landmine detection, de-mining and other applications*, pp. 397–418.

Kowalski, G., Bedkowski, J. & Maslowski, A. (2008). Virtual and augmented reality as a mobile robots inspections operators training aid, *Bulletin of the Transilvania University of Brasov, Proceedings of the international conference Robotics08, special issue vol. 15(50), series A, no. 1, vol. 2, Brasov, Romania*, pp. 571–576.

Magnusson, M., Andreasson, H., Nüchter, A. & Lilienthal, A. J. (2009). Automatic appearance-based loop detection from 3D laser data using the normal distributions transform, *Journal of Field Robotics* 26(11–12): 892–914.

Magnusson, M., Duckett, T. & Lilienthal, A. J. (2007). 3d scan registration for autonomous mining vehicles, *Journal of Field Robotics* 24(10): 803–827.

Magnusson, M., Nüchter, A., Lörken, C., Lilienthal, A. J. & Hertzberg, J. (2009). Evaluation of 3d registration reliability and speed – a comparison of icp and ndt, *Proc. IEEE Int. Conf. on Robotics and Automation*, pp. 3907–3912.

Masar, I., Bischoff, A. & Gerke, M. (2004). Remote experimentation in distance education for control engineers, *Proceedings of Virtual University, Slovakia*.

Nüchter, A. & Hertzberg, J. (2008). Towards semantic maps for mobile robots, *Robot. Auton. Syst.* 56(11): 915–926.

Nüchter, A., Surmann, H. & Hertzberg, J. (2003). Automatic model refinement for 3D reconstruction with mobile robots, *Fourth International Conference on 3-D Digital Imaging and Modeling (3DIM 03)*, p. 394.

Nüchter, A., Surmann, H., Lingemann, K. & Hertzberg, J. (2003). Semantic scene analysis of scanned 3d indoor environments, *Proceedings of the Eighth International Fall Workshop on Vision, Modeling, and Visualization (VMV 03)*.

Nüchter, A., Wulf, O., Lingemann, K., Hertzberg, J., Wagner, B. & Surmann, H. (2005). 3d mapping with semantic knowledge, *RoboCup International Symposium*, pp. 335–346.

Oberlander, J., Uhl, K., Zollner, J. M. & Dillmann, R. (2008). A region-based slam algorithm capturing metric, topological, and semantic properties, *ICRA'08*, pp. 1886–1891.

Pedraza, L., Dissanayake, G., Miro, J. V., Rodriguez-Losada, D. & Matia, F. (2007). Bs-slam: Shaping the world, *Proceedings of Robotics: Science and Systems*, Atlanta, GA, USA.

Rusu, R. B., Marton, Z. C., Blodow, N., Dolha, M. & Beetz, M. (2008). Towards 3d point cloud based object maps for household environments, *Robot. Auton. Syst.* 56(11): 927–941.

Schlaefer, A., Gill, J. & Schweikard, A. (2008). A simulation and training environment for robotic radiosurgery, *International Journal of Computer Assisted Radiology and Surgery* 3(3-4): 267–274.

Se, S. & Little, D. L. J. (2002). Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks, *The International Journal of Robotics Research* 21(8): 735–758.

Segal, A., Haehnel, D. & Thrun, S. (2009). Generalized-icp, *Proceedings of Robotics: Science and Systems*, Seattle, USA.

Thrun, S., Burgard, W. & Fox, D. (2000). A real-time algorithm for mobile robot mapping with applications to multi-robot and 3d mapping, *ICRA*, pp. 321–328.

Varcholik, P., Barber, D. & Nicholson, D. (2008). Interactions and training with unmanned systems and the nintendo wiimote, *Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC), Paper No. 8255*, pp. 189–194.

Vaskevicius, N., Birk, A., Pathak, K. & Poppinga, J. (2007). Fast detection of polygons in 3d point clouds from noise-prone range sensors, *IEEE International Workshop on Safety, Security and Rescue Robotics, SSRR*, IEEE, Rome, pp. 1–6.

Wei, B., Gao, J., Zhu, J. & Li, K. (2009). A single-hand and binocular visual system for eod robot, *Second International Conference on Intelligent Computation Technology and Automation, ICICTA '09*, Vol. 3, pp. 403–406.

Williams, B. & Reid, I. (2010). On combining visual slam and visual odometry, *Proc. International Conference on Robotics and Automation*.

Xuewen, L., Cai, M., Jianhong, L. & Tianmiao, W. (2006). Research on simulation and training system for eod robots, *IEEE International Conference on Industrial Informatics*, pp. 810–814.

Zhang, W., Yuan, J., Li, J. & Tang, Z. (2009). The optimization scheme for eod robot based on supervising control architecture, *Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics*, pp. 1421–1426.

## *Edited by Janusz Będkowski*

The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi robot systems. The design of a control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. The generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi robotic agents, and communication and distance measurement between agents are shown. Training of mobile robot operators is also a very difficult task because of several factors related to different task execution. The presented improvement is related to environment model generation based on autonomous mobile robot observations.

Photo by PhonlamaiPhoto / iStock

Mobile Robots – Control Architectures, Bio-Interfacing, Navigation, Multi Robot Motion Planning and Operator Training