**4. The topology of the multi-robotic system for accurate positioning control**

In this section, we discuss the proposed control architecture for a multi-robot system, which enables high-accuracy movement of a tool in various manufacturing scenarios by reducing the process uncertainties. We assume that, at the start, the camera and the robot manipulators are well calibrated using one or more of the methods discussed in Sections 3.2.1 and 3.2.2, so that the initial camera and robot manipulator parameters are identified. The main remaining uncertainties are therefore the sensor noise and the dynamic modeling errors. **Figure 4** shows the overall topology of this multi-robot system.

### **Figure 4.**

*The topology of the multi-robotic system for accurate positioning control.*

The multi-robot system is composed of a visual system and a tool manipulation system (**Figure 4**). In the visual system, a camera is mounted on an elbow robot arm, while the tool is held by the end-effector of the manipulator arm. The goal of the visual system is to provide a precise estimate of the tool pose so that the tool manipulator can control the pose under the guidance of the visual system. Two fiducial markers (the green circles, or interest points) are placed on the tool so that computer vision can detect the position and the orientation of the tool. The absolute coordinates of the reference points (red circles) are known in the inertial reference frame. The reference points are placed close to the tool's target location (the target interest points) so that the reference points and the target interest points can both be captured in the camera frame when the tool approaches its target pose.

Four reference points are selected close to each other in space. In a visual servoing problem, the location in space from which an image was taken can only be uniquely determined from at least four points: three points to fix the position and one more to resolve the orientation. This is a location determination problem (LDP) solved through image recognition [46]. Therefore, we use four reference points to determine the camera pose in 3D space. Once the camera pose is fixed and known, a stereo camera, which can measure depth, yields the unique 3D location of a point from its image coordinates.
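The last point above can be made concrete with a minimal sketch. Assuming a standard pinhole model with illustrative intrinsics (focal lengths, principal point) and a known camera pose, none of which are taken from the chapter, a pixel coordinate plus a stereo depth measurement back-projects to a unique 3D point:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with known depth into the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def camera_to_world(p_cam, R_wc, t_wc):
    """Transform a point from the camera frame to the inertial (world) frame."""
    return R_wc @ p_cam + t_wc

# Illustrative calibration: identity orientation, camera 0.5 m above origin.
fx = fy = 800.0            # focal lengths in pixels (assumed values)
cx, cy = 320.0, 240.0      # principal point (assumed values)
R_wc = np.eye(3)
t_wc = np.array([0.0, 0.0, 0.5])

p_cam = backproject(400.0, 300.0, 2.0, fx, fy, cx, cy)
p_world = camera_to_world(p_cam, R_wc, t_wc)
```

With the pose known, the mapping from (pixel, depth) to a world point is one-to-one, which is why the four-point LDP step is only needed to establish the camera pose in the first place.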

### **4.1 The multi-robotic system sequential control procedure**

The movement control of the robot manipulators in the visual and the tool manipulation systems proceeds sequentially rather than simultaneously. A flowchart of this sequential control process is shown in **Figure 5**.

The first stage consists of the optimal camera pose determination and control. In this stage, the camera moves and searches for an optimal position by minimizing a proposed objective function, in this case the time duration and the energy consumption, while reducing the image noise of the reference points to within an acceptable threshold. In the second stage, the camera movement adjustment control, any uncertainties that occurred in the camera movement of the previous stage are eliminated by an eye-in-hand visual servoing controller. After this adjustment, the camera is kept static and provides precise estimates of the tool position. In the last stage, the high-accuracy tool manipulation control, the tool is guided to the target pose by an eye-to-hand visual servoing controller. Each control method and architecture is discussed in the sections below.
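The three stages above can be sketched as a simple sequential procedure. The stage functions below are placeholders (assumptions standing in for the controllers of Sections 4.2-4.4); the point is only the ordering: each stage runs to completion before the next begins.

```python
def find_optimal_camera_pose():
    """Stage 1: search for the camera pose minimizing the time/energy cost."""
    return "optimal_pose"                      # placeholder result

def adjust_camera_movement(pose):
    """Stage 2: eye-in-hand visual servoing removes residual camera error."""
    return f"camera_settled_at_{pose}"         # placeholder result

def manipulate_tool(camera_state):
    """Stage 3: eye-to-hand visual servoing guides the tool to its target."""
    return "tool_at_target"                    # placeholder result

def sequential_control():
    # Stages execute strictly one after another, as in Figure 5.
    pose = find_optimal_camera_pose()
    camera_state = adjust_camera_movement(pose)
    return manipulate_tool(camera_state)

result = sequential_control()
```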

### **4.2 The optimal camera pose determination process and its control architecture**

### **Figure 5.**

*The flow chart of the sequential control procedure.*

*Role of Uncertainty in Model Development and Control Design for a Manufacturing Process DOI: http://dx.doi.org/10.5772/intechopen.104780*

**Figure 6** shows how the optimal pose of the camera is determined from single pictures taken at different perspectives. The uncertainty in the image processing is spatially dependent. As the camera moves in space, the combined factors that affect the image processing (the lighting conditions, the temperature, etc.) change, and with these changes the uncertainty level in the estimation changes accordingly. In this work, we propose to apply image averaging [45] to reduce the uncertainty level in the pose estimation. As discussed in Section 3.2.3, averaging N images reduces the uncertainty level by a factor of √N, so the number of images must increase by a factor of 2 to reduce the uncertainty level by √2. In order to reduce the energy consumption and the time duration of this photo-taking process, it is necessary to first determine the location where the image averaging should take place before the camera actually starts taking multiple photos.
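The √N relationship can be verified numerically. The sketch below uses a synthetic constant image with Gaussian noise (the chapter's noise estimator [47] is not reproduced) and shows both the noise reduction from averaging and the resulting quadratic growth in the number of images needed for a target uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)
true_image = np.full((64, 64), 100.0)   # synthetic noise-free image
sigma = 4.0                             # per-image noise level (assumed)

def averaged_image(n):
    """Average n independently noisy copies of the true image."""
    stack = true_image + rng.normal(0.0, sigma, size=(n, 64, 64))
    return stack.mean(axis=0)

def images_needed(sigma_est, sigma_target):
    """Smallest N with sigma_est / sqrt(N) <= sigma_target."""
    return int(np.ceil((sigma_est / sigma_target) ** 2))

noise_1 = np.std(averaged_image(1) - true_image)    # ~sigma
noise_16 = np.std(averaged_image(16) - true_image)  # ~sigma / 4
N = images_needed(4.0, 1.0)                         # quartering sigma -> 16
```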

In **Figure 6**, in the first stage, the camera takes a single picture. In the second stage, we compute the image intensity matrix *I* from that photo and then estimate the noise level σ across the image with a previously developed algorithm, see [47]. In the third stage, we calculate the uncertainty level from the image noise level and generate the number of images *N* required to reduce this uncertainty to within a prescribed threshold. In the fourth stage, a moving algorithm, designed as a part of this work, commands the current camera target pose, *PC*. In the fifth stage, the camera pose controller guides the camera to the target pose using the encoders that measure the joint rotation angles, *qv*. These five stages are repeated until the moving algorithm instructs the camera to stay in its current pose. This current target pose is then the optimal pose *PC* of the camera, where the total energy consumption and the time duration are minimized. The output *qv* is the set of target joint angles of the visual manipulator system at the optimal camera pose. If for any reason, such as uncompensated uncertainties, the current pose differs from the optimal pose *PC*, the camera movement adjustment control, presented in the next section, will reduce this error. In addition, *M* is the number of pictures needed for the averaging at the optimal pose location of the camera.
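A toy version of this iterative search can be sketched as a greedy descent. The spatial noise model `sigma_at`, the step size, and the movement-cost weight below are all illustrative assumptions, not the chapter's algorithm; the loop structure mirrors Figure 6: estimate noise, compute the required image count, and move only while moving lowers the total cost.

```python
import numpy as np

def sigma_at(pose):
    """Hypothetical spatial noise model: noise grows away from pose = 2.0."""
    return 1.0 + 1.0 * abs(pose - 2.0)

def images_needed(sigma, sigma_target=1.0):
    """Images required to average the noise down to sigma_target."""
    return int(np.ceil((sigma / sigma_target) ** 2))

def step_cost(move):
    """Assumed energy/time penalty for moving the camera a given distance."""
    return 0.2 * abs(move)

def search_optimal_pose(pose, step=0.5):
    """Greedy descent: move while a neighboring pose lowers the total cost."""
    while True:
        cost_here = images_needed(sigma_at(pose))
        candidates = [pose - step, pose + step]
        costs = [images_needed(sigma_at(p)) + step_cost(step) for p in candidates]
        best = int(np.argmin(costs))
        if costs[best] >= cost_here:       # moving no longer pays off: stay
            return pose, cost_here         # optimal pose and image count M
        pose = candidates[best]

optimal_pose, M = search_optimal_pose(0.0)
```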

### **4.3 The camera movement adjustment control block diagram**

We propose a control method, with its associated block diagram, for the camera movement adjustment as shown in **Figure 7**. The role of this feedback control is to deal with the errors that occurred in the dynamics and the measurements of the previous stage.

### **Figure 7.**

*The camera movement adjustment control block diagram.*

In **Figure 4**, four reference points, whose absolute positions in space are known, are selected close to the tool target pose. Fiducial markers are placed on the reference points so that their locations can be recognized and estimated in the 2D image coordinate frame using computer vision. From the kinematic models of the robot arm and the camera, the image coordinates of the reference points can be calculated online; those coordinates are used as the targets for a cascaded control loop and are denoted *pR* in **Figure 7**. After applying the image averaging technique (Section 3.2.3), we obtain a precise estimate of the current positions of the reference points in image coordinates from computer vision, denoted *p̂R* in **Figure 7**. Any deviation between *pR* and *p̂R* can then be attributed to uncertainties, such as the joint compliances, that are not compensated by the joint control loop shown in **Figure 6**. The cascaded controller is similar to the image-based visual servoing (IBVS) scheme discussed in Section 3.1. The inner-loop control strategy in this part is also very similar to the joint control in the camera pose control in **Figure 6**, and its control design and simulation results are discussed in Section 5.
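The online target generation can be sketched with a pinhole projection: given the calibrated camera pose and the known 3D reference points, their image coordinates *pR* follow directly. The intrinsics, pose, and point layout below are illustrative values, not the chapter's calibration.

```python
import numpy as np

fx = fy = 800.0                       # assumed focal lengths in pixels
cx, cy = 320.0, 240.0                 # assumed principal point
R_cw = np.eye(3)                      # world -> camera rotation (assumed)
t_cw = np.array([0.0, 0.0, 2.0])      # camera 2 m in front of the points

def project(p_world):
    """Project one 3D world point into pixel coordinates (pinhole model)."""
    p_cam = R_cw @ p_world + t_cw
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

# Four reference points placed close together near the tool target location.
reference_points = np.array([
    [0.00, 0.00, 0.0],
    [0.05, 0.00, 0.0],
    [0.00, 0.05, 0.0],
    [0.05, 0.05, 0.0],
])
p_R = np.array([project(p) for p in reference_points])   # targets for the loop
```

The cascaded controller then drives the measured features toward these computed targets; any residual difference reflects the uncompensated uncertainties described above.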

### **4.4 The high-accuracy tool manipulation control block diagram**

We propose a control strategy with its associated control block diagram for the tool manipulation system as shown in **Figure 8**. The control algorithm in this block diagram is a combination of a feedforward and a feedback control.

The feedforward control loop is an open loop that brings the tool as close to the target position as possible in the presence of the input disturbance, *dqm*. In the inner joint control loop, the noise sources may originate from the low-fidelity, inexpensive joint encoders and from the dynamic errors in the joints, e.g., compliances. All sources of noise from the joint control loop are combined and modeled as one input disturbance, *dqm*, to the outer control loop. The outputs of the feedforward are the reference joint rotation angles, *qRm,feedforward*, which are added to the outer feedback controller outputs, *qRm,feedback*, and set as the targets for the inner joint control loop. The forward kinematics block transforms the set of current joint angles of the tool manipulator into the current pose of the tool on the end-effector using a kinematic model of the robot arm.
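A minimal numeric sketch of this combination, under assumed 1:1 "inverse kinematics" and an illustrative integral-style gain: the feedforward term supplies the open-loop joint target, while the accumulated feedback term is the only part that can cancel the lumped disturbance *dqm*.

```python
import numpy as np

target = np.array([0.40, 0.20])   # desired joint-space target (assumed)
d_qm = np.array([0.02, -0.01])    # lumped joint-loop disturbance (assumed)

q_ff = target.copy()              # feedforward: open-loop joint target
q_fb = np.zeros(2)                # feedback correction, built up over time

for _ in range(50):
    q_sum = q_ff + q_fb           # combined target into the inner joint loop
    actual = q_sum + d_qm         # joint loop tracks q_sum but is disturbed
    error = target - actual       # outer loop observes the residual error
    q_fb += 0.5 * error           # integral-style feedback update

# After convergence, q_fb has absorbed -d_qm and the residual error vanishes.
final_error = np.abs(target - (q_ff + q_fb + d_qm)).max()
```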

### **Figure 8.**

*The high-accuracy tool manipulation control block diagram.*

Movement of the tool can be adjusted with high accuracy by the feedback control loop. The feedback control loop rejects the input disturbance, *dqm*, and minimizes the error between the tool pose target, *pT*, in the image frame and the high-precision estimate of the tool pose from the camera sensor, *p̂T*. The pose in a robot system modeled in a Cartesian inertial base frame consists of six degrees of freedom, i.e., three translations and three rotations. Therefore, in order to have full control of the tool pose, the camera in the feedback control loop needs to measure the image coordinates of at least two interest points on the tool.

The feedforward and feedback controllers work simultaneously to move the tool to the target pose in the tool manipulation system. The combined targets *qRm,sum* are the inputs to the joint control loop, so that both controllers manipulate the tool pose. The benefit of designing both feedback and feedforward controls for the manipulation system is the reduced time duration. If only feedback control were used, the pose estimation generated by the visual system would require taking multiple pictures and would make the tool movement very slow. We can divide the task of the tool movement control into two stages. In the first stage, under the action of the feedforward control, the tool moves to an approximate location close to the desired destination. In the second stage, the feedback controller moves the tool to the precise target location using the tool pose estimate from the camera. In addition, the camera has a limited range of view and can only detect the tool and measure its 2D feature, *p̂T*, when the tool is not far from the target. While the tool is moving from a location outside the camera's range of view, we must estimate the feature as *p̃T* until the tool enters the range of view (this point is discussed in detail in Section 7). It should be noted that only the feedback controller has the ability to compensate for uncertainties.
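The switch between the model-based feature estimate and the camera measurement can be sketched as below. The visibility radius and the calling convention are illustrative assumptions; the chapter's actual estimator is the subject of Section 7.

```python
VIEW_RADIUS = 0.1   # assumed camera range of view around the target (m)

def feature_source(tool_pos, target_pos, predicted_feature, measured_feature):
    """Choose which feature is fed back to the controller.

    Outside the camera's range of view only the model-based prediction is
    available; inside it, the camera measurement takes over.
    """
    distance = abs(tool_pos - target_pos)
    if distance <= VIEW_RADIUS and measured_feature is not None:
        return measured_feature, "measured"    # camera measurement
    return predicted_feature, "estimated"      # model-based estimate

# Far from the target: no camera measurement exists yet.
far = feature_source(0.5, 1.0, predicted_feature=0.48, measured_feature=None)
# Near the target: the camera measurement replaces the prediction.
near = feature_source(0.95, 1.0, predicted_feature=0.93, measured_feature=0.951)
```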

This control topology is analogous to the macro-micro manipulation seen in current industry trends, where large-scale robots are used for approximate positioning while small-scale robots are utilized for precise positioning [2].

### **4.5 The high precision camera sensor model**

As shown in **Figures 7** and **8**, the high-precision camera sensor model provides high-precision estimates in the feedback loops of the tool manipulation system control and the camera movement adjustment control. The camera robot arm model, shown in both **Figures 7** and **8**, is the target generator that transforms the targets in the inertial frame into the target locations in the image frame. The mathematical model of the camera, which is utilized in the visual robot arm and in the feedback loop to generate the required position estimates, has an equivalent Hardware-In-the-Loop (HIL) model, as shown in **Figure 9**.

**Figure 9** shows the details of the high-precision camera model and its equivalent HIL model. The upper configuration is the mathematical model used in simulation to generate image coordinates and to design the outer-loop controller in the robot arm control loop. In a real application, however, the lower HIL configuration replaces this mathematical model. In the HIL model, the image processing produces a high-precision estimate of the tool pose.
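The interchangeability of the two configurations can be sketched with a shared interface: the control loop depends only on an `estimate()` method, so the HIL stand-in can replace the mathematical model without changing the loop. The classes, intrinsics, and values below are illustrative, not the chapter's implementation.

```python
class MathematicalCameraModel:
    """Simulation-time model: projects the tool pose analytically (assumed pinhole)."""
    def __init__(self, fx=800.0, cx=320.0):
        self.fx, self.cx = fx, cx

    def estimate(self, tool_pose):
        x, z = tool_pose                      # lateral offset and depth
        return self.fx * x / z + self.cx      # 1D image coordinate

class HILCameraModel:
    """Real-application stand-in: capture + image processing would go here."""
    def __init__(self, capture):
        self.capture = capture                # callable returning a feature

    def estimate(self, tool_pose):
        return self.capture()                 # hardware measures; pose unused

def control_step(sensor, tool_pose):
    """The control loop depends only on the shared estimate() interface."""
    return sensor.estimate(tool_pose)

sim = control_step(MathematicalCameraModel(), (0.1, 2.0))
hil = control_step(HILCameraModel(lambda: 360.05), (0.1, 2.0))
```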
