**5. Experimentation results**

The robot calculates a "TC" with each obstacle around it. The value of the TC allows it to predict the nature of each obstacle (static or dynamic). The robot can then estimate the velocity of each obstacle, first to anticipate the future state of the environment and then to adjust its own speed so as to avoid collisions.
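To make the idea concrete, here is a minimal Python sketch of how a TC can be estimated from two successive range readings taken Δ = 0.4 s apart. The function names, the tolerance and the closing-speed formulation are our own illustrative assumptions, not the chapter's exact equations:

```python
DELTA_T = 0.4  # s, interval between two successive perceptions

def time_to_collision(d_prev, d_now, delta_t=DELTA_T):
    """Estimate the time to collision from two range readings."""
    closing_speed = (d_prev - d_now) / delta_t  # m/s, > 0 when approaching
    if closing_speed <= 0:
        return float("inf")  # not approaching: no collision predicted
    return d_now / closing_speed

def obstacle_nature(d_prev, d_now, robot_speed, tol=0.05, delta_t=DELTA_T):
    """Classify an obstacle as 'static' or 'dynamic'.

    If the measured closing speed matches the robot's own speed toward
    the obstacle, the obstacle itself is not moving.
    """
    closing_speed = (d_prev - d_now) / delta_t
    return "static" if abs(closing_speed - robot_speed) < tol else "dynamic"
```

The dynamic case then gives the obstacle's own velocity as the difference between the closing speed and the robot's speed, which is the quantity used later to adjust the robot's velocity.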

In order to demonstrate the efficiency of the proposed model, we proceed as follows. First, we test our method in static environments (composed of U-shape and T-shape obstacles). Having obtained valid results, we then run tests in dynamic environments (including moving robots). We use the Simbad simulation platform.

**5.1 The parameters of the robot and the controller in simulation**

In simulation, the parameters of the robot are: maximum linear velocity ϑmax = 1 m/s and maximum angular velocity θmax = 45°/s. The height of the mobile robot is 0.78 m, its diameter is 0.4 m, its weight is 50 kg and the radius of each wheel is 0.05 m. The time interval Δ between two successive perceptions is 0.4 s; this interval is used to calculate the TC.

The constants of the output membership functions are used to calculate the output of the two controllers: the constants for the first controller (angular velocity) are shown in figure 6 and those for the second controller (linear velocity) are shown in figure 10. During the simulation work, the robot's normal velocity VN is set to 0.5 m/s.
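The chapter only gives the output constants by reference to figures 6 and 10. Assuming a zero-order Sugeno-style controller (a common choice when the output membership functions are constants; the chapter does not state this explicitly), the crisp output would be the firing-strength-weighted average of those constants:

```python
def sugeno_output(rule_strengths, output_constants):
    """Zero-order Sugeno defuzzification: weighted average of the
    output constants, weighted by each rule's firing strength."""
    num = sum(w * c for w, c in zip(rule_strengths, output_constants))
    den = sum(rule_strengths)
    return num / den if den else 0.0
```

For example, two rules firing at strengths 0.2 and 0.8 with constants 0.0 and 1.0 would yield an output of 0.8; the same formula serves both the angular-velocity and the linear-velocity controller.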

In addition, we have defined a variety of basic scenarios for the experimentation stage. In the remainder of this section we detail some of them. For each scenario, the robot is initially positioned at the centre of the environment and starts to work without any preliminary information. The simulation environment is a square area composed of multiple traps, such as U-shapes and T-shapes forming the static obstacles, and moving agents (moving objects and other robots).

**5.2 Experimentation results in static environments**

Test results in static environments are presented in figure 11. Each scenario shows the robot's environment, the path covered by the robot and the detected obstacles.

**Scenario 1** The robot is initially positioned in an environment composed of multiple "dead cycles", as shown in figure 11.a. The robot calculates a DTC for each trap in order to predict the nature of the obstacle. The robot's decision depends on this information (e.g. if the DTC is equal to zero, the obstacle is static and the robot should change its direction). In region 'a' it detects that the DTCF is equal to zero, which means that the robot should immediately change its direction to find a free path towards its target. In region 'b', it changes its behaviour and turns right. In region 'c', the robot detects a new local minimum (DTCF=0, DTCR<∆T and DTCL<∆T). In this situation, the robot should find a free trajectory in order to move away from this obstacle. When it leaves this local minimum, the robot adjusts its trajectory to minimise the distance between the robot and the target. At each step, the robot reasons about the nature, the velocity and the orientation angle of the obstacles in order to choose a direction that will be free in the future (it predicts the future state using the nature and the velocity of each obstacle in order to find a suitable path with fewer constraints). In contrast, in (Wang & Liu, 2005), based on the trial-return behaviour phenomenon, the robot searches for the nearest exit. Therefore, when the nearest exit is blocked by a long wall, the nearest exit will be the opening at the right-hand side where the wall ends (see figure 11.b). This method takes a long time to find the free end of the wall. The problems with trial-return motion are high power consumption and the long time spent from the start point to the target when the obstacle is composed of complex traps such as U-shapes and T-shapes. In (Zhu & Yang, 2004), the dead cycle problem is resolved by using a state memory strategy (see figure 11.c). However, there is a situation where the robot cannot go straight toward the target: it has to keep turning around the obstacle (point "a") if the distance between the robot and the obstacle is shorter than the distance between the robot and the target.
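The decision logic of this scenario can be distilled into a hypothetical rule table. The threshold conditions (DTCF = 0; DTCR, DTCL < ΔT) come from the text, but the function and behaviour names below are our own illustrative labels, not the chapter's controller:

```python
DELTA_T = 0.4  # s, perception interval used as the DTC threshold

def choose_behaviour(dtc_front, dtc_right, dtc_left):
    """Sketch of the rule base suggested by Scenario 1 (regions a-c)."""
    if dtc_front == 0 and dtc_right < DELTA_T and dtc_left < DELTA_T:
        # region 'c': blocked on three sides -> local minimum
        return "escape_local_minimum"
    if dtc_front == 0:
        # static obstacle ahead: turn toward the freer side
        return "turn_right" if dtc_right >= dtc_left else "turn_left"
    return "go_to_target"
```

This mirrors the narrative: region 'a' triggers an immediate direction change, region 'b' a right turn, and region 'c' the local-minimum escape.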


Fig. 11. Experimental results in static environments


**Scenario 2** We test our method in an environment composed of wide and narrow corridors. As shown in figure 11.d, the robot moves along a corridor in order to reach its target. The problem is how to ensure the motion's stability without oscillation. In our architecture, there is a compromise between "reach target" and "avoid obstacles". In region 1, the robot crosses a wide corridor; it changes its direction when it detects a static obstacle. In region 2, the robot crosses a narrow corridor; in this situation DTCR<∆T and DTCL<∆T. Thus, the robot cannot change its trajectory; it can only adjust its velocity to match that of the nearest obstacle in the front area. This reasoning allows the robot to reach its target easily and without oscillation.
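A sketch of this narrow-corridor rule, under our assumption that "adjusting velocity" means matching the front obstacle's speed clamped to the robot's limits (the constants come from section 5.1; the function name is hypothetical):

```python
DELTA_T = 0.4   # s, perception interval / DTC threshold
V_MAX = 1.0     # m/s, maximum linear velocity
V_NORMAL = 0.5  # m/s, normal velocity VN

def corridor_speed(dtc_right, dtc_left, front_obstacle_speed):
    """In a narrow corridor (no free side), match the speed of the
    nearest obstacle in the front area instead of turning."""
    if dtc_right < DELTA_T and dtc_left < DELTA_T:
        return max(0.0, min(front_obstacle_speed, V_MAX))
    return V_NORMAL  # a side is free: keep the normal velocity
```

Keeping the heading fixed and varying only the speed is what suppresses the oscillation described above.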

**5.3 Experimentation results in dynamic environments**

**Scenario 1** Generally, it is difficult to deal with narrow-passage cases when there are moving actors, because the trajectory may oscillate between multiple obstacles. However, figure 12 shows the effectiveness of our method in this case. The robot navigates in an environment composed of corridors with a moving obstacle. At first, the corridor is narrow. In this setting the robot adjusts its velocity to avoid collision with the obstacle without changing its direction, because the goal is in the front area and there is no free area either on the left or on the right (DTC<∆T on the left and on the right). When the corridor becomes wide, the robot changes its direction because there is a free area on the right. Along this trajectory, the robot detects that there is a conflict space (DTCF is negative). With the concept of DTC, the robot can find the trajectory that will be free in the future. Thus, the robot chooses the most adequate direction, the one that minimises states of conflict. So, in our model the robot can go through narrow passages successfully. In contrast, (Lazkano et al., 2007) proposes a reactive navigation method based on Bayesian Networks. This method only takes the door-crossing behaviour into account. Its drawback is that it becomes slow when the environment becomes more dynamic, because it requires a long computing time.

Fig. 12. Experimental results in a dynamic environment: case of conflict situations in narrow and large corridors.

**Scenario 2** We test our method in a dynamic environment composed of multiple traps, as shown in figure 13. The robot adjusts its velocity in order to avoid collision with moving obstacles and changes its orientation if it detects a static obstacle. In region 1, it moves away from a U-shape obstacle because it detects that the DTC is equal to zero. In region 2, the robot is in a large corridor with moving obstacles; it calculates the DTC relative to every moving obstacle in order to predict conflict situations. In this setting, the DTC of every obstacle around the robot decreases, which means that the obstacles are moving toward the robot. In a first step, the robot calculates the velocity of the nearest moving obstacle in order to adjust its own velocity. In a second step, it reasons about the trajectory that it has to choose (minimising its interactions with other agents).

Fig. 13. Experimental results in a dynamic environment: case of conflict situations in a dynamic environment composed of local traps.
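The direction-selection reasoning used in these scenarios (prefer the direction whose predicted DTC is largest, and treat a negative DTC as a conflict space) can be sketched as follows; the dictionary keys and the function name are illustrative assumptions:

```python
def choose_direction(dtc_by_direction):
    """Pick the direction whose predicted DTC is largest.

    A negative DTC marks a conflict space (the area will be occupied
    in the future), so such directions are skipped.  Returns None when
    every direction is in conflict.
    """
    free = {d: t for d, t in dtc_by_direction.items() if t >= 0}
    if not free:
        return None  # all directions in conflict: only speed can be adjusted
    return max(free, key=free.get)
```

With, say, a negative DTCF and a larger DTC on the right than on the left, the rule reproduces the right turn taken in the wide part of the corridor.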

According to a large number of simulation experiments, we can conclude that:

1. The robot uses a simple model of perception, composed of ultrasonic sensors, that provides information in real time.
2. The robot can provide a suitable trajectory in dynamic environments and no local minimum is encountered.
3. The results provided by our method are satisfactory with respect to the robot's dynamic constraints.
4. The robot can easily predict the nature of the obstacles around it, and it uses this kind of information to predict conflict situations with other agents sharing the same resources.
5. The robot can also predict the velocity of each obstacle, using its simple model of perception, and adjust its linear velocity in order to avoid collision with other agents.
6. Our method can be adapted to different cases in dynamic environments.

**6. Conclusion and perspectives**

In this paper, a new method based on sensor information, combining reactivity and anticipation in order to predict conflict situations between robots, has been proposed. We have used fuzzy logic to develop a control system that enables the robot to navigate in real time and to minimise its interactions with other agents. Based on the simulation experiments, we can conclude that our method operates in different cases of dynamic environments. The robot can easily predict the nature of obstacles in order to anticipate conflict situations. It can also predict the velocity of each obstacle and adjust its linear velocity in order to avoid collision. As perspectives for this work, we intend to improve the robot's behaviour by introducing both the ability to learn and the concept of communication between robots in order to explore the environment and to share knowledge.

**7. References**

Arkin, R. (1990). Integrating behavioural, perceptual and world knowledge in reactive navigation. *Robots and Autonomous Systems*, Vol. 6, pp. 105-122
Bonarini, A., Invernizzi, G. & Labella, T. H. (2003). An architecture to coordinate fuzzy behaviours to control an autonomous robot. *Proc. Fuzzy Sets and Systems*, pp. 101-115
Canny, J., Rege, A. & Reif, J. (1990). An exact algorithm for kinodynamic planning in the plane. *ACM Symp. on Computational Geometry*, pp. 271-280
Darwiche, A. (2009). Modelling and Reasoning with Bayesian Networks. New York, NY: Cambridge University Press





