**4.2. Subtask control of arm robots**

For net-based control of the arm robot, unit actions or motions should be defined in a task coordinate system. Trajectories can be free (point to point), straight or circular. The speed of forward movement along a trajectory is specified in the main coordinate of the task coordinate system, and movements in the other coordinates are compensated based on errors. At the end of a trajectory, the motion can either stop or continue while changing direction. When a trajectory is circular, the end-effector can take one of two orientations: toward the center of the circle, or fixed. For control of the end-effector, there are commands to represent the coordinate frames, open the hand, close the hand, and grasp. The grasp command assumes that the hand has a proximity sensor so that it can autonomously grasp a workpiece from an appropriate direction. Synchronous actions by the arm and the wrist, or sequences of unit actions by the arm, the wrist and the fingers, are also specified using commands. The reference positions for arm movement, as well as the desired positions of parts known at programming time, are set by a separate teaching method; other positions, defined relative to these, are computed on-line. In this way, the final point and the trajectory of a motion can be specified in the task coordinate system using these commands. Figure 15 shows the block diagram of the trajectory tracking control in the task coordinate system.

**Figure 15.** Block diagram of 3-axis Cartesian coordinate arm control system
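The compensation scheme above can be sketched as follows: the commanded forward speed acts along the main (tangent) coordinate of the task frame, while deviations in the other coordinates are fed back. The proportional compensation law and gain here are illustrative assumptions; the chapter does not give the compensation equations.

```python
import numpy as np

def tracking_velocity(p, a, b, v_forward, k_comp):
    """Velocity command for tracking the straight segment a -> b.

    The forward speed v_forward is applied along the main (tangent)
    coordinate of the task frame; deviations in the transverse
    coordinates are compensated proportionally with gain k_comp
    (an assumed, illustrative control law)."""
    p, a, b = (np.asarray(x, dtype=float) for x in (p, a, b))
    tangent = (b - a) / np.linalg.norm(b - a)
    # Cross-track error: component of (p - a) orthogonal to the tangent.
    error = (p - a) - np.dot(p - a, tangent) * tangent
    return v_forward * tangent - k_comp * error
```

A circular trajectory would replace the fixed tangent with one recomputed from the current angle around the circle's center, but the structure of the law stays the same.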

The command system can be extended to execute actions specified on the basis of information from external sensors such as visual, proximity or slippage sensors. Figure 14 shows the hardware structure of the microcontroller-based control system. The visual sensor detects the coordinates of the center of an object and the orientation of one of its edges. The proximity sensors, composed of several LED arrays attached to the fingers, can detect the distance and orientation of the object with respect to the planes of the fingers. For the grip command, the grip action raises the grip force until the signal from the slippage sensor becomes zero. When the hand is moving down vertically, if the slippage signal rises instead, the hand is opened.
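The grip behavior described above can be sketched as a simple loop that raises the grip force until the slippage signal vanishes. The callbacks, step size and force limit are hypothetical; only the stopping condition comes from the text.

```python
def grasp(read_slippage, set_grip_force, force_step=0.1, max_force=10.0):
    """Close the hand by raising the grip force until the slippage
    signal becomes zero, as described for the grip command.

    read_slippage and set_grip_force are hypothetical hardware
    callbacks; the step size and force limit are assumed values.
    Returns the final grip force, or None if max_force is reached."""
    force = 0.0
    while force < max_force:
        force += force_step
        set_grip_force(force)
        if read_slippage() == 0:
            return force
    return None  # give up rather than crush the workpiece
```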

**Figure 16.** Block diagram of multi-axis arm control system

86 Petri Nets – Manufacturing and Computer Science


When the task has been finished, signaled by the "End of task" place, the mobile robot returns to its home position. Otherwise, the arm robot starts to get the position of a part and grasps it. The mobile robot moves to the station, and the arm robot, after completing the "Grasp" and "Movement to station" subtasks, starts the "Loading" subtask while the mobile robot waits at the specified position. After loading is complete, the mobile robot moves to the depository, and the arm robot executes the "Identification" subtask repeatedly. If the "End of task" signal is on, the mobile robot returns to its home position; if not, it moves to the station. From the "Movement to station" and "Movement to depository" places, a gate signal is sent to repeatedly execute the "Obstacle avoidance" subtask using infrared range sensors. In the coordination task, synchronization is represented as a shared transition, which is implemented using a sequence of asynchronous communications.
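The coordination mechanism can be made concrete with a minimal marked Petri net: a transition fires when every input place holds a token and its gate signal, if any, is on. This is a sketch of the mechanism, not the authors' implementation; the place and transition names below are illustrative.

```python
class PetriNet:
    """Minimal marked Petri net with external gate signals.

    transitions maps a name to (input places, output places, gate),
    where gate is None or the name of an external boolean signal."""

    def __init__(self, transitions, marking):
        self.transitions = transitions
        self.marking = dict(marking)   # place -> token count
        self.gates = {}                # gate name -> bool

    def set_gate(self, name, on):
        self.gates[name] = on

    def enabled(self, t):
        inputs, _, gate = self.transitions[t]
        has_tokens = all(self.marking.get(p, 0) > 0 for p in inputs)
        gate_open = gate is None or self.gates.get(gate, False)
        return has_tokens and gate_open

    def fire(self, t):
        if not self.enabled(t):
            return False
        inputs, outputs, _ = self.transitions[t]
        for p in inputs:
            self.marking[p] -= 1          # consume input tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1
        return True

# Illustrative net: "start_loading" is a shared transition that
# synchronizes the arm and the vehicle; "return_home" is gated by
# the "End of task" signal, like the gates described in the text.
net = PetriNet(
    {
        "start_loading": (["arm_at_station", "vehicle_waiting"], ["loading"], None),
        "return_home": (["loading"], ["at_home"], "end_of_task"),
    },
    {"arm_at_station": 1, "vehicle_waiting": 1},
)
```

The shared transition only fires once both robots have deposited their tokens, which is exactly the synchronization pattern the text implements with asynchronous messages.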


When programming a specific task, the task is broken down into subtasks through task planning. These subtasks are composed of position data and programs that are edited using the robot motion simulator. Each subtask is represented as a place. A place can also represent the internal state of the robot (operating or idle) and the state of external devices. The relations between these places are explicitly represented by interconnections of transitions, arcs and gates, edited with the robot task program editor and simulator. For places that represent subtasks, the following parameters are necessary: 1) the code of the controller that executes the subtask, such as the vehicle, arm, hand or sensor; 2) the name of the file in which the subtask, such as MOVE, GRASP, RELEASE or HOLD, is explicitly written in some programming language; and 3) the name of the file holding the set of position data used to execute the subtask. Editing and simulating the net model proceed interactively until the specifications are satisfied. At this point, it is expected that problems such as deadlock, conflict resolution, concurrency and synchronization have been well studied and analyzed. If an error is found, that is, if the net model does not satisfy the specification, it can easily be amended by re-editing the net model and simulating again.
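The three parameters attached to a subtask place could be grouped as a small record. The field and file names below are illustrative assumptions; the chapter fixes only the roles of the three parameters.

```python
from dataclasses import dataclass

@dataclass
class SubtaskPlace:
    """Parameters attached to a place that represents a subtask."""
    controller: str     # 1) code of the executing controller (vehicle, arm, hand, sensor)
    program_file: str   # 2) file in which the subtask program (MOVE, GRASP, ...) is written
    position_file: str  # 3) file with the position data used to execute the subtask

# Hypothetical instance: a grasp subtask executed by the arm controller.
grasp_place = SubtaskPlace(
    controller="arm",
    program_file="GRASP.prg",
    position_file="pallet.pos",
)
```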

### **4.3. Subtask control of mobile robots**

The decomposition of the "Movement to station" place and the associated control structure are illustrated in Figure 17. In movement control of the mobile robot using state feedback based on pose sensors, the robot's planning task is reduced to setting intermediate positions (subgoals), with respective control modes, lying on the requested path. The global path planner in the trajectory controller determines a sequence of subgoals to reach the goal. Given a set of subgoal locations, the target tracking controller plans a detailed path to the closest subgoal position only and executes this plan. In target tracking control, the distance between the robot and the specified target position and the angle between the forward direction and the target are computed from the current location, detected by the internal pose sensors (accelerometers and gyros), and the current target. The reference tangential and angular velocities of the mobile robot are then determined by a state feedback algorithm to achieve target tracking, and the reference wheel velocities are computed using inverse kinematics. The new velocity setpoints are sent to the respective wheel velocity controllers, each of which executes proportional plus integral control of its wheel velocity using a rotary encoder.
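One cycle of the target tracking control above can be sketched as follows: compute the distance and heading error to the target, apply a state-feedback law for the tangential velocity v and angular velocity w, then map (v, w) to wheel velocities by differential-drive inverse kinematics. The gains, the particular feedback law, and the wheel geometry are illustrative assumptions; the chapter does not give explicit equations.

```python
import math

def target_tracking_step(pose, target, k_v=0.5, k_w=1.5,
                         wheel_radius=0.05, axle_length=0.3):
    """One control cycle: pose is (x, y, theta), target is (x, y).

    Returns (v, w, w_left, w_right): reference tangential and angular
    body velocities and reference wheel angular velocities."""
    x, y, theta = pose
    dx, dy = target[0] - x, target[1] - y
    distance = math.hypot(dx, dy)
    angle = math.atan2(dy, dx) - theta                    # heading error
    angle = math.atan2(math.sin(angle), math.cos(angle))  # wrap to [-pi, pi]
    # Assumed state-feedback law: drive toward the target, turn to face it.
    v = k_v * distance * math.cos(angle)
    w = k_w * angle
    # Inverse kinematics of a differential-drive base.
    w_right = (v + w * axle_length / 2.0) / wheel_radius
    w_left = (v - w * axle_length / 2.0) / wheel_radius
    return v, w, w_left, w_right
```

The returned wheel velocities are what would be handed to the per-wheel PI controllers as setpoints.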

Implementation of Distributed Control Architecture for Multiple Robot Systems Using Petri Nets 89


**Figure 17.** Hierarchical decomposition of net model of mobile robot control system

If a blockage on the intended path is detected, the trajectory controller receives a failure notification from the visual sensor, modifies the subgoals and the short-term local knowledge of the robot's surroundings, and triggers the target tracking in view of this change to the local environment knowledge. The trajectory controller maintains a dynamic map, with global and local representations, that becomes more accurate as the robot moves. Upon reaching a subgoal location, the local map is updated with the perceptual information extracted from the PSD data during motion. The target tracking controller then triggers the local path planner to generate a path from the new location to the next subgoal location. When the lowest-level wheel velocity control fails to make progress, the target tracking controller attempts to find a way past the obstacle by turning the robot in place and trying again. The trajectory controller decides when, and whether, new information integrated into the local map can be copied into the global map.
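The recovery behavior above escalates in stages: retry low-level motion, turn in place and retry, and finally hand the problem up for replanning. A minimal sketch, assuming three hypothetical callbacks for those stages:

```python
def recover_from_blockage(try_move, turn_in_place, replan_subgoals, max_turns=4):
    """Escalating recovery, paraphrasing the text: when low-level motion
    fails to make progress, the target tracking controller turns the
    robot in place and retries; if that keeps failing, the trajectory
    controller replans the subgoals. All three callbacks and the retry
    limit are illustrative assumptions."""
    for _ in range(max_turns):
        if try_move():
            return "moved"       # progress resumed at the lowest level
        turn_in_place()          # reorient and try again
    replan_subgoals()            # escalate to the trajectory controller
    return "replanned"
```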

The current subgoal and current location are shared by the trajectory controller and the target tracking controller. In the coordinator program, a place is assigned to each shared variable to protect it from concurrent access. Mutually exclusive access to a shared variable is represented by a place, which corresponds to the P and V operations on a semaphore, as shown in Figure 18.

**Figure 18.** Net representation of mutually exclusive access
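The semaphore-place idea maps directly onto an ordinary binary semaphore: acquiring the token is the P operation, releasing it the V operation. A minimal sketch, with the class and method names as assumptions:

```python
import threading

class SharedVariable:
    """Shared variable guarded by a semaphore place, as in Figure 18:
    the place holds one token, P removes it, V returns it."""

    def __init__(self, value):
        self._token = threading.Semaphore(1)  # place marked with one token
        self._value = value

    def read(self):
        self._token.acquire()                 # P: take the token
        try:
            return self._value
        finally:
            self._token.release()             # V: return the token

    def write(self, value):
        self._token.acquire()                 # P
        try:
            self._value = value
        finally:
            self._token.release()             # V
```

In the coordinator, the current subgoal and current location would each get one such guarded variable, so the trajectory and target tracking controllers never access them concurrently.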

If a time-out occurs in real-time control, communication, or sensor data acquisition, an alarm signal is sent to the upper controller. When an alarm is processed, a signal is sent to stop every active controller. These signals are implemented by places, as shown in Figure 19. When the final goal is reached, the target tracking controller sends an "End" signal to the trajectory controller, which then sends end signals to the rest of the system.

**Figure 19.** Net representation of signaling between controllers
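The signal places of Figure 19 behave like one-way mailboxes: the sender deposits a token, the receiver consumes it. A minimal sketch of the alarm/stop chain described above; the class name and the wiring are illustrative assumptions.

```python
class SignalPlace:
    """A place used purely for signaling between controllers:
    send() deposits a token, poll() consumes one if present."""

    def __init__(self):
        self.tokens = 0

    def send(self):
        self.tokens += 1

    def poll(self):
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

# Hypothetical wiring: a time-out raises the alarm; when the upper
# controller processes it, a stop signal is sent to active controllers.
alarm, stop = SignalPlace(), SignalPlace()
alarm.send()       # time-out detected in some controller
if alarm.poll():   # upper controller processes the alarm
    stop.send()    # stop any active controller
```

The "End" signal from the target tracking controller to the trajectory controller, and onward to the rest of the system, would be further instances of the same place.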
