**7.1 Developing a combined SISO tool robot arm and camera model**

In **Figure 8**, the joint angle of the tool manipulator, including the disturbance, is transformed into the tool pose using a robot kinematics model. A camera model is then used to convert the 3D pose into 2D image coordinates, as shown in **Figure 20**. For simplicity, we can combine these two blocks into a single block, called the tool robot arm and camera sensor model.

**Figure 20.** *The tool robot arm and the camera sensor model (a combined block of tool manipulation kinematics and camera sensing).*

**Figure 21.** *A SISO camera and tool robot arm setup.*

In **Figure 21**, a SISO combined model setup is shown, based on the one-link camera robot arm model in **Figure 13**. The camera, attached to a one-link rotational robot arm, captures the image of a tool attached to another similar robot arm and estimates its angle of rotation. The tool has a length $L_t$, with an interest point selected at the tip of the tool. Both robot links have a length of $L_1$ and are separated from each other by a distance $L$. Assume $\overline{q}_v$ is the angle of the camera from previous control sequences (discussed in **Figure 5**). The inertial and camera coordinate frame setups are discussed in Section 6.1. The only difference is that the camera frame rotates relative to the inertial frame by a clockwise angle $\overline{q}_v$ about the $Z$-axis. The tool rotates about the $Z$-axis in a clockwise direction with a variable angle $\widetilde{q}_m$. The actual angle of rotation $\widetilde{q}_m$ is the sum of the input disturbance $d_{q_m}$ and the planned angle of rotation $q_m$; i.e.,

$$
\widetilde{q}_m = q_m + d_{q_m} \tag{70}
$$

Then, we can compute the final angle after rotation by adding the initial angle of the tool in the inertial frame, $q_{m_0}$:

$$
\widetilde{q}_{m_f} = \widetilde{q}_m + q_{m_0} \tag{71}
$$

The coordinates of the point of interest on the tool in the inertial frame are then computed as $\left(L - L_t \cos\left(\widetilde{q}_{m_f}\right),\; -L_t \sin\left(\widetilde{q}_{m_f}\right),\; L_1\right)$.

Following the same procedure as in Eqs. (37)–(41), we can derive the tool image coordinate $\widehat{u}_T$ along the $u$-axis as:

$$\widehat{u}_T = f_u \frac{Q\left(\widetilde{q}_{m_f}\right) + \tan\left(\overline{q}_v\right)}{1 - Q\left(\widetilde{q}_{m_f}\right) \tan\left(\overline{q}_v\right)}\tag{72}$$

where

$$Q\left(\widetilde{q}_{m_f}\right) = \frac{L_t \sin\left(\widetilde{q}_{m_f}\right)}{L - L_t \cos\left(\widetilde{q}_{m_f}\right)}\tag{73}$$

Eqs. (72) and (73) provide a function that maps the final angle of the tool onto the image coordinate $\widehat{u}_T$ with constant parameters $\overline{q}_v$, $L$, and $L_t$.
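The forward mapping of Eqs. (72) and (73) is simple to express in code. The sketch below is a minimal illustration in Python (the function and parameter names are assumptions, not from the chapter; angles are in radians):

```python
import math

def tool_image_coordinate(q_mf, q_v, L, L_t, f_u):
    """Map the tool's final rotation angle to the image coordinate
    along the u-axis, per Eqs. (72)-(73)."""
    # Eq. (73): tangent of the tool-tip bearing in the inertial frame
    Q = L_t * math.sin(q_mf) / (L - L_t * math.cos(q_mf))
    # Eq. (72): tangent-addition form scaled by the focal length
    t = math.tan(q_v)
    return f_u * (Q + t) / (1.0 - Q * t)
```

At $\widetilde{q}_{m_f} = 0$ we have $Q = 0$, so the expression reduces to $f_u \tan(\overline{q}_v)$, which is a convenient spot check.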

### **7.2 The outer-loop feedback and feedforward controller design**

The overall plant for this control system design is composed of the inner joint control loop, see Eq. (32) and **Figure 8**, and the tool robot arm and camera sensor model, as shown in **Figure 20**. We can design the outer-loop feedback controller using either the feedback linearization method or the plant linearization method, following the procedures presented in Sections 6.2 and 6.3, respectively. For brevity, we do not discuss the detailed derivations of each controller; most comparative issues, such as the overshoots and the robustness, are discussed in Section 6.4. In this section, we utilize the plant linearization method to design the outer-loop feedback controller.

*Role of Uncertainty in Model Development and Control Design for a Manufacturing Process DOI: http://dx.doi.org/10.5772/intechopen.104780*

Without providing the details, the controller designed for a second-order closed-loop system using the plant linearization method (the plant is linearized at $\widetilde{q}_{m_f} = 0°$) is given as:

$$G_{c_{out}} = \frac{1}{C_1} \frac{\left(\tau_{in} s + 1\right)^3}{\left(3\tau_{in} s + 1\right)} \frac{\omega_n^2}{s^2 + 2\zeta \omega_n s} \tag{74}$$

and

$$C_1 = f_u \left(1 + \left(\tan\left(\overline{q}_v\right)\right)^2\right) \frac{L L_t - L_t^2}{\left(L - L_t\right)^2} \tag{75}$$

Then, the second-order closed-loop transfer function $T$ of the overall cascaded control system is expressed as:

$$T = \frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{76}$$

where $f_u$ is the camera focal length; $\overline{q}_v$, $L$, and $L_t$ are the parameters defined in Section 7.1; $\tau_{in}$ defines the bandwidth of the inner joint loop; and $\omega_n$ and $\zeta$ are the natural frequency and damping ratio of the second-order system.
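As a quick sanity check on the linearization, the sketch below (illustrative Python; the parameter values are arbitrary assumptions) verifies that the gain $C_1$ of Eq. (75) matches the numerical slope of the mapping in Eqs. (72)–(73) at $\widetilde{q}_{m_f} = 0$:

```python
import math

def C1(q_v, L, L_t, f_u):
    # Eq. (75): plant gain from linearizing Eqs. (72)-(73) at q_mf = 0
    return f_u * (1.0 + math.tan(q_v) ** 2) * (L * L_t - L_t ** 2) / (L - L_t) ** 2

def u_T(q_mf, q_v, L, L_t, f_u):
    # Eqs. (72)-(73): tool angle -> image coordinate
    Q = L_t * math.sin(q_mf) / (L - L_t * math.cos(q_mf))
    t = math.tan(q_v)
    return f_u * (Q + t) / (1.0 - Q * t)

# Central-difference slope of u_T at q_mf = 0 should equal C1
q_v, L, L_t, f_u = 0.2, 2.0, 0.8, 500.0
h = 1e-6
slope = (u_T(h, q_v, L, L_t, f_u) - u_T(-h, q_v, L, L_t, f_u)) / (2.0 * h)
assert abs(slope - C1(q_v, L, L_t, f_u)) < 1e-3
```

Note that $L L_t - L_t^2 = L_t(L - L_t)$, so the gain simplifies to $f_u\left(1 + \tan^2(\overline{q}_v)\right) L_t/(L - L_t)$.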

As the camera is static in this control stage, the tool pose cannot be recognized and measured visually if it is outside the camera range of view. To tackle this problem, we can estimate the 2D feature (image coordinates of the tool points) from the same model in Eqs. (72) and (73) with the joint angle *qm* as input:

$$\widetilde{u}_T = f_u \frac{Q\left(q_m\right) + \tan\left(\overline{q}_v\right)}{1 - Q\left(q_m\right)\tan\left(\overline{q}_v\right)}\tag{77}$$

where

$$Q\left(q_m\right) = \frac{L_t \sin\left(q_m\right)}{L - L_t \cos\left(q_m\right)}\tag{78}$$

**Figure 22.** *The block diagram of the tool manipulator feedback control loop with feature estimation.*

This estimation scheme is illustrated in the block diagram of **Figure 22**. The normal feedback loop (blue lines) is preserved while the tool is inside the camera range of view, so the camera can estimate the tool 2D feature $\widehat{u}_T$. However, when the tool is outside the range of view, the 2D feature can only be approximated as $\widetilde{u}_T$ (red dashed line) by the combined model, shown in the blue dashed box. We can implement a bumpless switch to transition smoothly between these two modes of operation; the switching signal toggles whenever the tool moves into or out of the camera range of view.
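The selection between the measured and model-estimated feature can be sketched as follows (a minimal Python illustration, not the authors' implementation; a true bumpless transfer would additionally match controller states at the switch instant):

```python
import math

def estimate_feature(q_m, q_v, L, L_t, f_u):
    """Model-based 2D feature estimate from Eqs. (77)-(78)."""
    Q = L_t * math.sin(q_m) / (L - L_t * math.cos(q_m))
    t = math.tan(q_v)
    return f_u * (Q + t) / (1.0 - Q * t)

def feedback_feature(measured_u, q_m, q_v, L, L_t, f_u, in_view):
    """Return the camera measurement while the tool is in view,
    otherwise fall back to the model-based estimate."""
    if in_view and measured_u is not None:
        return measured_u  # normal feedback path (blue lines)
    return estimate_feature(q_m, q_v, L, L_t, f_u)  # estimated path (red dashed line)
```

The `in_view` flag plays the role of the switching signal described above.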

In addition, the feedforward controller, as shown in **Figure 8**, is designed with the inverse kinematics model of the tool robot arm; the details of this design are not provided here. It should be noted, as stated previously, that the combination of the feedforward and feedback controllers provides a much faster response than the feedback controller by itself.
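Although the feedforward design is not detailed in the chapter, one way to sketch the underlying inverse model is to invert Eqs. (72)–(73) numerically: given a reference image coordinate, find the joint angle that produces it. The bisection below is an illustrative assumption (not the chapter's method); it is valid only on an interval where the map is monotonic, i.e. where $\cos q_m > L_t/L$:

```python
import math

def u_of_q(q, q_v, L, L_t, f_u):
    # Forward map, Eqs. (72)-(73): joint angle -> image coordinate
    Q = L_t * math.sin(q) / (L - L_t * math.cos(q))
    t = math.tan(q_v)
    return f_u * (Q + t) / (1.0 - Q * t)

def q_for_target_u(u_ref, q_v, L, L_t, f_u, lo=-1.0, hi=1.0, tol=1e-10):
    """Invert the tool-arm/camera map by bisection: find the joint angle
    whose predicted image coordinate equals u_ref (hypothetical helper;
    assumes the map is monotonic on [lo, hi])."""
    f_lo = u_of_q(lo, q_v, L, L_t, f_u) - u_ref
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = u_of_q(mid, q_v, L, L_t, f_u) - u_ref
        if f_lo * f_mid <= 0.0:
            hi = mid                 # root lies in [lo, mid]
        else:
            lo, f_lo = mid, f_mid    # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Round-tripping a known angle through the forward map and back recovers it, which is the behavior a feedforward term needs.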
