**3.1 High-speed vision system with traditional configuration**

As analyzed in Section 2.2, the feedback information in task space for the add-on module should be high speed in order to satisfy the assumption $\dot{\tilde{\delta}} = -\dot{\hat{\delta}}$. Here, a high-speed vision system with the traditional configuration (namely, imaging and image processing conducted separately, as shown in **Figure 5(a)**) was introduced. A Photron IDP-Express R2000 high-speed camera [19] (made by Photron, Japan) was used in the eye-in-hand configuration. The camera is capable of acquiring 8-bit monochrome or 24-bit color frames with a resolution of 512 × 512 pixels at a frame rate of 2000 fps. It was connected to an image processing PC (OS, Windows 7 Professional; CPU, 24-core, 2.3 GHz Intel Xeon; Memory, 32 GB; GPU, NVIDIA Quadro K5200) (made by Dell Inc., USA), and the high-speed camera was configured with a working frame rate of 1000 fps.

*Dynamic Compensation Framework to Improve the Autonomy of Industrial Robots DOI: http://dx.doi.org/10.5772/intechopen.90169*

### **Figure 5.**

*High-speed vision system [25]. (a) Traditional system with imaging and image processing conducted separately. (b) New high-speed vision system with high-speed imaging and processing integrated in one chip.*

*Industrial Robotics - New Paradigms*

### **Figure 4.**

*Experimental system [13]. (a) Overall setup: A one-DOF (x-direction) add-on module and a commercial parallel-link robot. (b) Global VGA camera for coarse motion planning of the parallel-link robot. (c) Detected marker and reference position (center of the nearest hole). (d) Marker representing the peg's position.*

The insertion action was driven by an on–off solenoid. The insertion action was activated only if the error between the peg and the center of the hole in the *x*-direction was smaller than 0.8 pixels (corresponding to 0.112 mm) and lasted for more than 0.02 s. The insertion lasted for 0.3 s. We sought to insert the peg at the center of these holes. As can be seen from **Figure 4**, the holes formed the white parts of the otherwise black workpiece. Section 3.3.2 describes the process of detecting and obtaining the positions of these holes.

### **Table 1.**

*Spec. of actuator for the one-DOF add-on module prototype.*

Since control was limited to one dimension (along the *x*-direction in our case), the peg and the holes only needed to be aligned along the *x*-axis in the images. The peg was tracked using a marker fixed on the mechanical pencil at some distance from its tip (**Figure 3(b)**). In the captured images, the marker corresponded to a patch of roughly 9 × 9 pixels, and we employed a simple template-based search to locate it by minimizing the mean squared error. Following marker identification, we refined its location to sub-pixel accuracy by computing image moments on the center patch. After locating the peg, we searched for the hole on an image row at a fixed distance in the *y*-direction from the detected marker. This image row was effectively binary apart from the edge regions around the holes. We therefore searched for consecutive white regions (holes) and selected the one whose center was closest to the peg in the *x*-direction. The hole position was also computed with sub-pixel accuracy using image moments over the non-black region on the searched row. The processing ran within a millisecond using CUDA [20], enabling 1000 fps tracking of the positions of the peg and the hole. The high-speed camera was configured so that the peg and each hole were both visible, and the relative error in image coordinates was sent to the real-time controller (**Figure 3**) over Ethernet at a frequency of 1000 Hz. Since the high-speed camera was configured as eye-in-hand, uncertainties due to the main robot as well as the external environment were consequently perceived as variations of the hole's position and accumulated within the relative error toward the peg.
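The per-frame pipeline described above can be sketched as follows. This is a minimal CPU-side NumPy version for illustration only; the CUDA implementation, exact patch sizes, and binarization threshold used in the experiments are not reproduced here, and all function names are illustrative.

```python
import numpy as np

def find_marker(image, template):
    """Exhaustive template search minimizing the mean squared error.
    Returns the integer (row, col) of the best-matching patch corner."""
    th, tw = template.shape
    ih, iw = image.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            mse = np.mean((image[r:r + th, c:c + tw].astype(float) - template) ** 2)
            if mse < best:
                best, best_pos = mse, (r, c)
    return best_pos

def subpixel_centroid(patch):
    """Sub-pixel refinement via image moments: returns the
    intensity-weighted centroid (row, col) within the patch."""
    m = patch.astype(float)
    total = m.sum()
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    return (ys * m).sum() / total, (xs * m).sum() / total

def nearest_hole_x(row, peg_x, threshold=128):
    """Scan one image row for consecutive white runs (holes) and return the
    sub-pixel x of the run whose moment-based center is closest to the peg."""
    white = row > threshold
    runs, start = [], None
    for x, w in enumerate(white):
        if w and start is None:
            start = x
        elif not w and start is not None:
            runs.append((start, x)); start = None
    if start is not None:
        runs.append((start, len(white)))
    best_x, best_d = None, np.inf
    for s, e in runs:
        seg = row[s:e].astype(float)
        cx = s + (np.arange(e - s) * seg).sum() / seg.sum()  # first image moment
        if abs(cx - peg_x) < best_d:
            best_x, best_d = cx, abs(cx - peg_x)
    return best_x
```

The exhaustive search shown here is quadratic in the image size; in practice a GPU implementation (or a search window around the previous marker position) is what makes the 1 ms budget feasible.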

### **3.2 Add-on compensation module with one DOF**

The one-DOF add-on module with linear actuation was developed with the specifications presented in **Table 1**. In accordance with the proposed dynamic compensation framework, the module was designed to be lightweight and to have a large acceleration capability. The high-speed camera described above was configured with a field of view of approximately 70 mm within the motion range of the linear actuator; therefore, 1 pixel corresponded to approximately 0.14 mm.

### **3.4 Experimental result**

The result of one experimental trial of continuous peg-and-hole alignment for the six holes in the workpiece is shown in **Figure 6(a)**. The workpiece was placed randomly. **Figure 6(b)** shows the details of the second alignment. It can be seen that while the parallel-link robot executed coarse positioning at high speed (maximum speed: 2000 mm/s), the hole's image position from the high-speed vision in the *x*-direction did not become stable until after 2.1 s because its rotational axis exhibited significant backlash. Nevertheless, the compensation module realized

### **Figure 6.**

*Image feature profiles of peg-and-hole alignment process [13]. (a) One experimental trial of continuous peg-and-hole alignment for six holes. (b) Zoomed details of the second alignment.*

As indicated in Section 2.2, the control law of the compensation module should be developed according to Eq. (10). Eq. (10) can be satisfied with a simple proportional-derivative (PD) controller or with other methods such as the precompensation fuzzy logic control (PFLC) algorithm [21].
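Eq. (10) itself is not reproduced in this excerpt, so the sketch below shows only the generic discrete PD law mentioned above, applied to the image-space error at the 1000 Hz vision rate. The class name, the gains, and the double-integrator stand-in for the module are all illustrative assumptions, not values from the paper.

```python
class PDController:
    """Minimal discrete PD law: command = kp * e + kd * de/dt,
    where e is the x-direction misalignment (peg minus hole, in mm)."""

    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev_error = 0.0

    def update(self, error):
        # error: current misalignment from the 1000 fps vision feedback
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative  # commanded acceleration
```

For a unit-mass double-integrator model, choosing kd = 2 * sqrt(kp) (e.g., kp = 400, kd = 40) gives a critically damped response, which is a reasonable starting point before tuning on the real actuator.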

### **3.3 Coarse motion planning of industrial robot**

A four-DOF parallel-link robot capable of high-speed motion was deployed to execute the coarse global motion. A low-cost Video Graphics Array (VGA) camera (made by SONY, Japan) was mounted on the frame of the system and directed at the workspace. Using this camera, we fully automated the teaching task by detecting the rough positions of the holes in the main robot's coordinates.

### *3.3.1 Calibration issue*

With the proposed framework, an exact calibration was unnecessary; therefore, we did not worry about the intrinsic calibration of the VGA camera. Calibration simply involved computing the planar homography between the workpiece (a metal plate) in the main robot's coordinates and the image plane of the VGA camera. This was done by letting the main robot move to four points $X_i = (x_i, y_i, k_i)^T$ directly above the workpiece and marking the corresponding locations $X_i' = (x_i', y_i', k_i')^T$ in the image. Here, the points $X_i$ and $X_i'$ are homogeneous coordinates representing the 2D coordinates $(x_i/k_i, y_i/k_i)^T$ and $(x_i'/k_i', y_i'/k_i')^T$. The four points could be chosen randomly such that no three of them were collinear. Using these four point correspondences $X_i \leftrightarrow X_i'$, we computed the homography $H \in \Re^{3 \times 3}$ [22] by

$$\mathbf{X}\_i' = \mathbf{H} \cdot \mathbf{X}\_i \quad (i = 1, \dots, 4). \tag{11}$$

Since we only needed a coarse homography, the procedure of choosing four point correspondences was rough and easy to implement. Although we only considered one-dimensional compensation in this task, the same calibration procedure applies to full 3D compensation over a limited depth range, so long as the holes on the workpiece remain observable by the high-speed camera for fine compensation. Hence, it was not necessary to obtain 3D measurements using a stereo camera configuration or a similar mechanism. Since the camera was fixed in relation to the main robot's coordinate frame, the calibration procedure was implemented once and only needed to be repeated if the camera was moved or if the height of the workspace changed drastically.
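Solving Eq. (11) from four correspondences is the classical direct linear transform (DLT). The sketch below is one standard way to do it (an SVD null-space solve); the function names are illustrative, and [22] should be consulted for the full method, including normalization for noisy data.

```python
import numpy as np

def homography_from_4pts(src, dst):
    """Estimate H (3x3, up to scale) such that dst ~ H @ src in homogeneous
    coordinates, from four 2D point correspondences, via the DLT.
    src, dst: arrays of shape (4, 2) holding (x, y) points."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(A, dtype=float)
    # The null vector of A (right singular vector of the smallest
    # singular value) holds the nine entries of H.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale

def apply_homography(H, pt):
    """Map a 2D point through H and dehomogenize."""
    x, y, k = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / k, y / k])
```

With exact correspondences the 8 x 9 system has a one-dimensional null space, so four points (no three collinear) determine H uniquely up to scale, which is why the rough marking procedure above suffices.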

### *3.3.2 Hole detection and motion planning*

Usually, a model of the holes on the workpiece must be known in order to detect them and compute their locations. Here, we simplified the detection problem by exploiting the fact that the holes formed the white areas of the black workpiece. The holes were identified and their locations computed using image moments. The resulting points were transformed into the main robot's coordinate system using the homography computed in Section 3.3.1. Since the calibration mapping between the image and the robot's coordinates was rough, the detected points expressed in the robot's coordinates only lie in the neighborhood of their corresponding holes.
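Detecting the white blobs and taking their moment-based centroids can be sketched as below. A plain BFS blob labeling stands in for whatever implementation was actually used, and the function name is illustrative.

```python
import numpy as np
from collections import deque

def hole_centroids(binary):
    """Find white blobs (holes) in a binary image and return their centroids
    (x, y) from the zeroth and first image moments m00, m10, m01."""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    centroids = []
    for r in range(h):
        for c in range(w):
            if binary[r, c] and not seen[r, c]:
                # Flood-fill one blob, accumulating its moments.
                q = deque([(r, c)]); seen[r, c] = True
                m00 = m10 = m01 = 0
                while q:
                    y, x = q.popleft()
                    m00 += 1; m10 += x; m01 += y
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True; q.append((ny, nx))
                centroids.append((m10 / m00, m01 / m00))
    return centroids
```

Each centroid would then be mapped through the homography of Section 3.3.1 to obtain the rough hole position in the robot's coordinates.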


Following this, the shortest path connecting all of these points was calculated by solving a traveling salesman problem (TSP) [23]. Finally, the route (all points in order) was sent to the controller of the parallel-link robot to generate the corresponding motion, with the maximum motion speed set at 2000 mm/s.
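With only six holes, the route can even be found by exhaustive enumeration (6! = 720 orderings), so no dedicated TSP solver is strictly required. The solver used in [23] is not specified in this excerpt; the sketch below is an illustrative open-path variant that starts from the robot's current position and does not return to it.

```python
import itertools
import math

def shortest_route(points, start=(0.0, 0.0)):
    """Exact open-path TSP: visit every hole position once, beginning at
    `start`. Brute force is fine for a handful of holes."""
    def length(route):
        total, prev = 0.0, start
        for p in route:
            total += math.dist(prev, p)  # Euclidean leg length
            prev = p
        return total
    return min(itertools.permutations(points), key=length)
```

For larger hole counts a heuristic (nearest neighbor, 2-opt, or a library solver) would replace the enumeration, but the interface to the robot controller, an ordered list of waypoints, stays the same.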
