extracts the position information of the tomato features in the two-dimensional image and computes the difference from the expected position information. This difference is used as the input of the visual servo control algorithm, which calculates the control output in real time, that is, the velocity vector of the end effector. This velocity vector is then integrated over time to obtain the next point the end effector must reach on its way to the target. Repeating this cycle yields a trajectory that gradually approaches the target position. The eye-hand correspondence is converted into joint motions, and the end of the robot arm moves accordingly to approach the target. The implementation process is shown in **Figure 23**.
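The paragraph above describes the classical image-based visual servoing (IBVS) update. Below is a minimal numeric sketch of that law, assuming a proportional gain λ and a known interaction matrix L; both are placeholders here, since in this system ViSP computes the control internally.

```python
import numpy as np

def ibvs_step(s, s_star, L, lam=0.5, dt=1.0):
    """One visual servo iteration.

    s      : current image feature vector (e.g., tomato feature position)
    s_star : expected (desired) image feature vector
    L      : interaction matrix (image Jacobian) of the features
    lam    : proportional control gain (assumed value)
    dt     : integration interval (t = 1 s in this system)
    """
    error = s - s_star                    # difference from the expected position
    # Classical IBVS control law: v = -lambda * pinv(L) * (s - s*)
    v = -lam * np.linalg.pinv(L) @ error  # end-effector velocity vector
    step = v * dt                         # integrate velocity to get the next point
    return v, step
```

Iterating this step until the feature error falls below a threshold traces exactly the converging trajectory described above.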

**Figure 22.** *Visual servo program of eye-in-hand.*

**Figure 23.** *Eye-hand coordination process.*

### *3.5.2.2 ViSP*

ViSP [17] is an open-source visual servoing framework developed and maintained by the Lagadic team at Inria (the French national institute for research in computer science and automation). It is hardware independent, extensible, and portable. In addition, ViSP provides a complete library of basic functions that can be combined with a variety of visual feature types, and it offers a simulation environment and interfaces to various hardware. Based on ViSP, we can implement functions such as visual tracking, fiducial marker detection, two-dimensional contour tracking, pose estimation, and so on. The goal of ViSP is to give developers a tool for the rapid development of visual servoing applications. The software architecture of ViSP is shown in **Figure 24**. The framework is divided into three modules: the first provides vision models, visual servo control algorithms, and robot controller interfaces; the second provides image processing, tracking, and other machine vision algorithms; and the third is a visualization module that provides a simulation and display environment. All these features make ViSP very suitable for use as a core part of our module.

**Figure 24.** *ViSP software architecture.*

### *3.5.2.3 Eye-hand collaboration module node design*


The complete flowchart of eye-hand coordination is shown in **Figure 25**. After the node is initialized, the system starts the '*/visual\_servo*' *actionlib* service, registers the *execute()* callback, and waits to be awakened by a client. Upon receiving a service request, the visual servo loop starts. In the loop, the program requests the tomato image feature position from the vision module and computes its difference from the expected position. If the difference exceeds the threshold Δs (Δs = 2 mm), the program obtains the camera parameters, initializes the control model, and calls the ViSP library function *vpServo()* to compute the control output, namely the velocity vector. The program then integrates the velocity vector over time (*t* = 1 s), the motion module moves the robot to the output position, and the tomato image feature position is requested from the vision module again and compared with the desired position. The loop repeats until the difference between the target image feature position and the desired image feature position is less than the threshold Δs, at which point the visual servo loop ends and the execution result is returned.
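A minimal sketch of how this loop might sit inside the action server's *execute()* callback. The action name '/visual_servo' comes from the text; *VisualServoAction*, the *my_robot_msgs* package, the goal/result fields, and the vision/motion helpers are assumptions, and the control-law function stands in for ViSP's *vpServo()*.

```python
import rospy
import actionlib
import numpy as np
# Stand-in names for the node's actual action definition.
from my_robot_msgs.msg import VisualServoAction, VisualServoResult

DELTA_S = 0.002  # convergence threshold, Delta-s = 2 mm (from the text)
DT = 1.0         # velocity integration interval, t = 1 s (from the text)

def execute(goal):
    """Visual servo loop run once per actionlib goal."""
    s_star = np.asarray(goal.desired_feature)   # desired feature position (assumed field)
    s = request_feature_from_vision()           # current tomato feature position
    while np.linalg.norm(s - s_star) >= DELTA_S and not rospy.is_shutdown():
        v = compute_control_law(s, s_star)      # vpServo() does this in the real node
        move_robot(v * DT)                      # integrate velocity, command the arm
        s = request_feature_from_vision()       # re-observe and compare again
    server.set_succeeded(VisualServoResult(success=True))

def compute_control_law(s, s_star, lam=0.5):
    # Placeholder for ViSP's control computation (see the IBVS law above).
    return -lam * (s - s_star)

def request_feature_from_vision():
    raise NotImplementedError  # query the vision module in the real system

def move_robot(displacement):
    raise NotImplementedError  # hand the displacement to the motion module

rospy.init_node('visual_servo_server')
server = actionlib.SimpleActionServer('/visual_servo', VisualServoAction,
                                      execute, auto_start=False)
server.start()
rospy.spin()
```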

**Figure 26** shows the class design of the eye-hand coordination node. There are two main classes: the *VisualServoCycleNode* class is responsible for the servo loop and the interaction with other modules, while the *VisualServoControlNode* class is responsible for running the control algorithm.
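Read against **Figure 26**, the *execute()* loop sketched above would live in *VisualServoCycleNode*, with the control computation delegated to *VisualServoControlNode*. A hypothetical skeleton of that split (method names are inferred from the UML description, not taken from it):

```python
class VisualServoControlNode:
    """Runs the control algorithm (wraps ViSP's vpServo in the real node)."""
    def compute_velocity(self, feature, desired_feature):
        ...  # initialize the control model, return the commanded velocity

class VisualServoCycleNode:
    """Owns the servo loop and the interaction with vision and motion modules."""
    def __init__(self):
        self.control = VisualServoControlNode()
    def run(self, desired_feature):
        ...  # loop: observe, call self.control.compute_velocity(), move
```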


**Figure 25.** *Eye-hand collaboration node flow chart.*

**Figure 26.** *Eye-hand coordination node class design UML diagram.*

#### *3.5.3 Task planning module node*

The task planning module mainly completes the design and implementation of a layered concurrent state machine for a single picking operation, as shown in **Figure 27**.

**Figure 27.** *Task planning node flow chart.*

First, we initialize the node, the state machine, and the user's intermediate data, and then add the transitions between the states of each state machine according to the state-transition design of the task. The *transitions* keyword controls the transfer from the current state to the next state. At the same time, since each state is a *SimpleActionState*, every state implements an *actionlib* client by default, so an initialization function and a callback function *callback()* must be added for each state. A state machine visualization service, *IntrospectionServer*, is started in the node so that the state transition diagram can be viewed in *SMACH\_viewer* and the state transitions can be monitored in real time. The data details of each state are shown in **Figure 28**.

**Figure 28.** *FSM in SMACH\_viewer.*
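A condensed sketch of how such a SMACH machine is typically assembled. The state names, action names, and *my_robot_msgs* types are illustrative assumptions, not the chapter's actual definitions; the SMACH calls themselves (*SimpleActionState*, *transitions*, result callback, *IntrospectionServer*) are the ones the text names.

```python
import rospy
import smach
from smach_ros import SimpleActionState, IntrospectionServer
# Stand-in action types for illustration.
from my_robot_msgs.msg import MoveAction, PickAction

def pick_result_cb(userdata, status, result):
    # Per-state result callback: stash intermediate data for later states.
    userdata.pick_result = result
    return 'succeeded'

def main():
    rospy.init_node('task_planner')

    sm = smach.StateMachine(outcomes=['done', 'failed'])
    sm.userdata.pick_result = None          # user intermediate data

    with sm:
        # Each SimpleActionState implements an actionlib client by default.
        smach.StateMachine.add(
            'APPROACH',
            SimpleActionState('/move_to_target', MoveAction),
            transitions={'succeeded': 'PICK',      # the transitions keyword
                         'aborted': 'failed',      # controls the move to the
                         'preempted': 'failed'})   # next state
        smach.StateMachine.add(
            'PICK',
            SimpleActionState('/visual_servo', PickAction,
                              result_cb=pick_result_cb,
                              output_keys=['pick_result']),
            transitions={'succeeded': 'done',
                         'aborted': 'failed',
                         'preempted': 'failed'})

    # Visualization service so SMACH_viewer can monitor transitions live.
    sis = IntrospectionServer('task_planner_viewer', sm, '/SM_ROOT')
    sis.start()

    sm.execute()
    sis.stop()

if __name__ == '__main__':
    main()
```

For the layered concurrent design in **Figure 27**, nested *smach.Concurrence* containers can be placed inside the top-level machine so that several such states run in parallel.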

#### *3.5.4 System node diagram*

**Figure 29** shows the running node graph after all ROS nodes in the system are started. The node graph is generated with the *rqt\_graph* command. Each rectangular box represents a topic, each oval box represents a node, and the arrowed lines represent the subscription relationships between them. Visualizing the node graph makes the system architecture intuitive.

Since most of the eye-hand coordination and motion control run concurrently, the fluency of multitasking was verified in two plant factories and three greenhouses with different fruit conditions and illumination variations. The experimental results show that when the total number of targets within the visual field is no more than three, the average picking time is less than 35 s.

#### **4. Conclusion**

The contribution of this research centers on software engineering for manipulating complex robot behavior. Although service robots leverage ROS for rapid development, classical tasks such as eye-hand coordination and continuous operation in open scenarios have not been systematically addressed. In this chapter, we advocate that if complex robot behaviors can be structured, they can be modeled as finite state machines (FSMs), and a "Sense-Plan-Act" (SPA) process can be implemented with a formal software architecture. Meanwhile, we demonstrate that ViSP and SMACH in ROS are beneficial frameworks for developing a dual-arm robot that autonomously harvests fruit in a plant factory, a task that embodies the complexity of multi-task planning and scheduling in natural scenes. The experimental results show that this software engineering paradigm effectively improves the reliability and scalability of the dual-arm harvesting robot system.

**Acknowledgements**

This work was supported by the National Natural Science Foundation of China (No. 51775333) and the Scientific Research Program of Shanghai Science and Technology Commission (No. 18391901000).

**Author details**

Chengliang Liu, Liang Gong\* and Wei Zhang

School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China

\*Address all correspondence to: gongliang\_mi@sjtu.edu.cn

© 2020 The Author(s). Licensee IntechOpen. This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
