**5. Implementation in ARGoS**

We consider a road clearance scenario to illustrate the proposed distributed algorithm (Section 4), where a road may be blocked by several obstacles and a team of robots should jointly move each obstacle to one side of the road. The algorithm is implemented in ARGoS (Autonomous Robots Go Swarming) [8], a multi-robot simulator, using version 3.0.0-beta47 on an Intel® Core™ i5 processor with 4 GB of RAM, running the macOS Sierra operating system. The code run in ARGoS can be directly deployed on a real robot system.


*A Distributed Approach for Autonomous Cooperative Transportation. DOI: http://dx.doi.org/10.5772/intechopen.98270*

**Figure 5.** *Illustration of multiple task execution in ARGoS.*

An example scenario is shown in **Figure 5**: the shaded gray portion is the road (10 m × 5 m), the obstacles are simulated by green movable cylinders of radius 0.2 m with a blue light on top, and the robots are shown in blue. The overall process of removing an obstacle from the road is also shown in **Figure 5**. The robots in ARGoS use the built-in range-and-bearing (*rab*) sensor and actuator to communicate among themselves.



Messages are broadcast to all other robots by the *rab* actuator; a broadcast reaches only robots within a certain range and in line of sight. We use 3-byte messages with a range of 15 meters. A message is received by the *rab* sensor of every robot in range. Besides receiving messages, the *rab* sensor also identifies the direction and distance from which a message was sent. Since the *rab* actuator supports only broadcast, the addresses of the sender and the intended receiver must be included in every message; every robot in the simulation has a unique id of size 1 byte. Several other sensors and actuators are used to control the movement and positioning of the robots. For example, the proximity sensors are used to stay on the road and to avoid collisions with other robots, the omni-directional sensor is used to detect obstacles, the gripper actuator is used to grip an obstacle, and the turret actuator is used to turn the gripper towards the direction of the obstacle.
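The chapter does not spell out the exact byte layout of these messages. A minimal sketch of one plausible encoding, consistent with the 3-byte payload and 1-byte robot ids described above, is shown below (in Python for illustration; the actual controllers are written in Lua, and the field layout and message-type names are assumptions):

```python
# Hypothetical 3-byte rab message layout:
#   byte 0: sender id, byte 1: receiver id (0xFF = everyone), byte 2: message type.
BROADCAST = 0xFF
HELP_REQUEST, ACCEPT, CONFIRM = 1, 2, 3   # illustrative message types

def encode(sender, receiver, msg_type):
    """Pack a message into the 3-byte payload broadcast by the rab actuator."""
    assert all(0 <= b <= 255 for b in (sender, receiver, msg_type))
    return bytes([sender, receiver, msg_type])

def decode(payload):
    """Unpack a received 3-byte payload into (sender, receiver, msg_type)."""
    sender, receiver, msg_type = payload
    return sender, receiver, msg_type
```

Since every robot receives every in-range broadcast, a receiver simply discards messages whose receiver byte is neither its own id nor the broadcast address.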

**Figure 5a** shows the initial positions of the robots and obstacles. Three robots detect the three obstacles and start forming teams for them, as shown in **Figure 5b**. We assume that every obstacle requires two robots to move it. In **Figure 5c**, two initiator robots have managed to form their teams. In **Figure 5d**, the robots have reached the locations of the obstacles and are ready to move them. **Figure 5e** shows that both of these obstacles have been shifted to one side of the road. After dropping the obstacles, the robots return to the road and search for any remaining obstacles. Finally, in **Figure 5f**, the third obstacle is also detected and removed. In this way, all the obstacles are removed from the road.


*Robotics Software Design and Engineering*

**Figure 4.** *Execution trace of the algorithms for multiple initiators.*


For the implementation, we have written the required functions in Lua (a C-like scripting language). These are: (i) controlling the movement of a robot to avoid an obstacle or another robot, based on proximity sensor data; (ii) controlling speed and velocity; (iii) synchronizing the robots for task execution; (iv) controlling the movement of a robot when road boundaries are detected using the motor-ground sensors; and (v) communication among robots based on line of sight.
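As an illustration of item (i), a minimal avoidance rule of the kind described can be sketched as follows. This is shown in Python for brevity (the actual controller functions are in Lua), and the sensor model — readings as (value, angle) pairs with higher values meaning a closer object and positive angles counter-clockwise — is an assumption:

```python
import math

def wheel_speeds(proximity, base_speed=5.0):
    """proximity: list of (value, angle_rad) readings, value in [0, 1],
    higher value = closer object. Returns (left, right) wheel speeds
    for a differential-drive robot that steers away from the strongest
    front reading and goes straight when the path is clear."""
    # consider only sensors facing forward (within 90 degrees of heading)
    front = [(v, a) for v, a in proximity if abs(a) < math.pi / 2]
    if not front or max(v for v, _ in front) < 0.1:
        return base_speed, base_speed                 # path clear
    value, angle = max(front, key=lambda r: r[0])
    if angle > 0:                                     # object on the left
        return base_speed, base_speed * (1 - value)   # turn right
    return base_speed * (1 - value), base_speed       # object on the right: turn left
```

The closer the object, the sharper the turn, which is enough for a robot to skirt an obstacle or another robot while staying on the road.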

**References**

[1] Kube, C. R., & Zhang, H. (1993). Collective robotics: From social insects to robots. Adaptive Behavior, 2(2), 189-218

[2] Mataric, M. J., Nilsson, M., & Simsarin, K. T. (1995, August). Cooperative multi-robot box-pushing. In Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, Vol. 3, pp. 556-561

[3] Gerkey, B. P., & Mataric, M. J. (2002). Sold!: Auction methods for multirobot coordination. IEEE Transactions on Robotics and Automation, 18(5), 758-768

[4] Chen, J., Gauci, M., Li, W., Kolling, A., & Groß, R. (2015). Occlusion-based cooperative transport with a swarm of miniature mobile robots. IEEE Transactions on Robotics, 31(2), 307-321

[5] Kong, Y., Zhang, M., & Ye, D. (2016). An auction-based approach for group task allocation in an open network environment. The Computer Journal, 59(3), 403-422

[6] Gunn, T., & Anderson, J. (2015). Dynamic heterogeneous team formation for robotic urban search and rescue. Journal of Computer and System Sciences, 81(3), 553-567

[7] Bérard, B., Bidoit, M., Finkel, A., Laroussinie, F., Petit, A., Petrucci, L., & Schnoebelen, P. (2013). Systems and Software Verification: Model-Checking Techniques and Tools. Springer Science & Business Media

[8] Pinciroli, C., Trianni, V., O'Grady, R., Pini, G., Brutschy, A., Brambilla, M., ... Dorigo, M. (2012). ARGoS: a modular, parallel, multi-engine simulator for multi-robot systems. Swarm Intelligence, 6(4), 271-295


### **6. Summary**

Research in the field of robotics is currently progressing at a rapid rate. Intelligent robots are being used in many applications, such as search and rescue, space exploration, and automated warehouses. With the advancement of artificial intelligence, robots are becoming a good choice for such applications. Plenty of work has been carried out on single-robot systems. This chapter, however, discusses settings where multiple robots act on the same object at the same time, a problem that is harder than, and different from, the standard multi-agent problem.

Cooperative transportation is a common task in many challenging domains, e.g., rescue, Mars and space missions, and autonomous warehouses. The proposed framework is therefore essential in such domains, where multiple robots are required to execute a task jointly.

The proposed approach also supports the execution of multiple tasks simultaneously, i.e., if several robots detect different obstacles at the same time, a coalition formation process can be started for each obstacle. Each robot that detects an obstacle starts coalition formation by executing its own instance of the algorithms.
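The point above can be sketched as follows: each detecting robot keeps one independent formation instance per obstacle, so several coalitions can form concurrently without interfering. This is an illustrative Python sketch with assumed names, not the chapter's Lua code:

```python
class Robot:
    """Minimal sketch: a robot that runs one coalition-formation
    instance per obstacle it has detected."""

    def __init__(self, rid):
        self.rid = rid
        self.instances = {}   # obstacle id -> formation state

    def on_detect(self, obstacle_id):
        # start a new instance only on first detection; re-detecting
        # the same obstacle does not restart an ongoing formation
        if obstacle_id not in self.instances:
            self.instances[obstacle_id] = {
                "initiator": self.rid,
                "helpers": [],
                "state": "RECRUITING",
            }
        return self.instances[obstacle_id]
```

Keying the instances by obstacle id is what lets the two initiators of Figure 5c recruit their teams at the same time.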

#### **Author details**

Amar Nath1,2\*† and Rajdeep Niyogi2†

1 Sant Longowal Institute of Engineering and Technology, Punjab, India


† These authors contributed equally.

© 2021 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/ by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

