*Industrial Robotics - New Paradigms*

a field of view of approximately 70 mm within the motion range of the linear actuator, giving an approximate conversion of 1 pixel to 0.14 mm. As indicated in Section 2.2, the control law of the compensation module should be developed according to Eq. (10). Eq. (10) can be realized with simple proportional-derivative (PD) control or with other methods such as the precompensation fuzzy logic control (PFLC) algorithm [21].

**3.3 Coarse motion planning of industrial robot**

A four-DOF parallel-link robot capable of high-speed motion was deployed to execute the coarse global motion. A low-cost Video Graphics Array (VGA) camera (made by SONY, Japan) was mounted on the frame of the system and directed at the workspace. Using this camera, we fully automated the teaching task by detecting the rough positions of the holes in the main robot's coordinates.

*3.3.1 Calibration issue*

With the proposed framework, an exact calibration was unnecessary; in particular, no intrinsic calibration of the VGA camera was needed. Calibration simply involved computing the planar homography between the workpiece (a metal plate) in the main robot's coordinates and the image plane of the VGA camera. This was done by letting the main robot move to four points *X<sub>i</sub>* = (*x<sub>i</sub>*, *y<sub>i</sub>*, *k<sub>i</sub>*)*<sup>T</sup>* above the workpiece and marking the corresponding locations *X<sub>i</sub>*′ = (*x<sub>i</sub>*′, *y<sub>i</sub>*′, *k<sub>i</sub>*′)*<sup>T</sup>* in the image. Here, the points *X<sub>i</sub>* and *X<sub>i</sub>*′ are homogeneous coordinates directly representing the 2D coordinates (*x<sub>i</sub>*/*k<sub>i</sub>*, *y<sub>i</sub>*/*k<sub>i</sub>*)*<sup>T</sup>* and (*x<sub>i</sub>*′/*k<sub>i</sub>*′, *y<sub>i</sub>*′/*k<sub>i</sub>*′)*<sup>T</sup>*. The four points could be chosen randomly such that no three points were collinear. Using these four point correspondences *X<sub>i</sub>* ↔ *X<sub>i</sub>*′, we computed the homography *H* ∈ ℜ<sup>3×3</sup> [22] by

*X<sub>i</sub>*′ = *H* · *X<sub>i</sub>* (*i* = 1, …, 4). (11)

Since we only needed a coarse homography, the procedure of choosing the four point correspondences was rough and easy to implement. While we consider only one-dimensional compensation in this task, the same calibration procedure applies to full 3D compensation over a limited depth range, as long as the holes on the workpiece are observable by the high-speed camera for fine compensation. Hence, it was not necessary to obtain 3D measurements using a stereo camera configuration or a similar mechanism. Since the camera was fixed relative to the main robot's coordinate frame, the calibration procedure was implemented once and only needed to be repeated if the camera was moved or if the height of the workspace changed drastically.

*3.3.2 Hole detection and motion planning*

Usually, a model of the holes on the workpiece must be known in order to detect them and calculate their locations. Here, we simplified the detection problem by exploiting the fact that the holes formed the white areas of the black workpiece. The holes were identified and their locations computed using image moments. The resulting points were transformed into the main robot's coordinate system using the homography computed in Section 3.3.1. Since the calibration mapping between the image and the robot's coordinates was rough, each detected point expressed in the robot's coordinates can only be expected to lie in the neighborhood of its corresponding hole.

**3.4 Experimental result**

The result of one experimental trial of continuous peg-and-hole alignment for six holes in the workpiece is shown in **Figure 6(a)**. The workpiece was placed randomly. **Figure 6(b)** shows the details of the second alignment. While the parallel-link robot executed coarse positioning at high speed (maximum speed: 2000 mm/s), the hole's image position from the high-speed vision in the *x*-direction did not become stable until after 2.1 s, because the robot's rotational axis exhibited significant backlash. Nevertheless, the compensation module realized fine alignment within 0.2 s with an accuracy of 0.1 mm. In 20 trials with different positions of the workpiece, all alignments were satisfactory, as proper insertions were observed [13]. A video of the peg-and-hole alignment task can be found on the website [24].

#### **Figure 6.**

*Image feature profiles of the peg-and-hole alignment process [13]. (a) One experimental trial of continuous peg-and-hole alignment for six holes. (b) Zoomed details of the second alignment.*
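To make the calibration and hole-detection steps of Section 3.3 concrete, the sketch below estimates the four-point homography of Eq. (11) with a standard direct linear transform (DLT) and computes a blob centroid from image moments. This is a minimal illustration under our own assumptions (NumPy only; all function names are ours), not the authors' implementation:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H with dst_i ~ H @ src_i (cf. Eq. (11))
    from four point correspondences via the direct linear transform (DLT).
    src, dst: (4, 2) arrays of 2D points; no three src points collinear."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The stacked 8x9 system A h = 0: h is the right singular vector
    # belonging to the smallest singular value of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the free scale so that H[2, 2] = 1

def map_point(H, p):
    """Apply H to a 2D point and dehomogenize (divide by k')."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def centroid_from_moments(mask):
    """Blob location via image moments (cf. Section 3.3.2): the zeroth
    moment (area) and first moments of a binary mask give its centroid."""
    rows, cols = np.nonzero(mask)
    m00 = rows.size                              # zeroth moment
    return np.array([cols.sum() / m00, rows.sum() / m00])  # (x, y) in pixels
```

With four exact correspondences and no three points collinear, the eight homography parameters are determined, so `estimate_homography` recovers *H* exactly up to scale; detected hole centroids can then be pushed through `map_point` to obtain their rough positions in the other coordinate frame.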

with newly developed vision chip is introduced. The new vision chip combines high-frame-rate imaging and highly parallel signal processing with high resolution, high sensitivity, and low power consumption [26]. The 1/3.2-inch, 1.27-Mpixel, 500-fps (0.31 Mpixel at 1000 fps with 2 × 2 binning) vision chip is fabricated with 3D-stacked column-parallel analog-to-digital converters (ADCs) and 140 giga-operations-per-second (GOPS) programmable single-instruction-multiple-data (SIMD) column-parallel processing elements (PEs) for high-speed spatiotemporal image processing. The programmable PEs can implement high-speed spatiotemporal filtering and enable imaging and various kinds of image processing, such as target detection, recognition, and tracking, on one chip. By realizing the image processing on the chip, power consumption is kept to a maximum of 363 mW at 1000 fps. Compared with conventional high-speed vision systems, the new system greatly saves space and energy and is well suited to compact use in robotic applications. The high-speed vision was configured to work at 1000 fps with a resolution of 648 × 484. The overall latency of the high-speed visual feedback was measured to be within 3.0 ms [25].

*Dynamic Compensation Framework to Improve the Autonomy of Industrial Robots*

*DOI: http://dx.doi.org/10.5772/intechopen.90169*

**4.2 Add-on compensation module with two DOFs**

To work together with the parallel-link robot in realizing the two-dimensional contour tracing task, an add-on module prototype capable of fine compensation in two dimensions was developed. The add-on module had two orthogonal linear joints; the specifications of its actuators were estimated with an accelerometer and are shown in **Table 2**. The total weight of the module was about 0.27 kg. The high-speed vision was mounted on the moving table of the add-on module, and the tracing task was implemented in such a manner that the high-speed vision was guided to travel along the curve with the curve's center accurately aligned with the center (324, 242) of the high-speed vision's images.

| **Joint** | **Stroke** | **Max. velocity** | **Max. acceleration** |
|---|---|---|---|
| x | 20 mm | 600 mm/s | 63 m/s<sup>2</sup> |
| y | 20 mm | 650 mm/s | 70 m/s<sup>2</sup> |

**Table 2.**

*Spec. of actuators for the two-DOF add-on module prototype.*

**4.3 Coarse motion planning of industrial robot**

As in the peg-and-hole alignment task, the main robot's motion was planned using vision information from the globally configured VGA camera. The implementation was exactly the same as in the last task: a rough calibration followed by image processing to extract the key points (via-points) of a target contour path, taking the limited working range of the add-on module into account. Extraction of the key points of a target contour was implemented in the following manner [25]:

1. As shown in **Figure 8(a)**, the image was binarized with a proper threshold, and a start point *p*<sub>0</sub> on the target contour was determined as the contour point with the nearest distance to a user-predefined point. *p<sub>c</sub>* was initialized as *p*<sub>0</sub>.

2. A probing circle with its center at *p<sub>c</sub>* was used to detect its intersection *p<sub>d</sub>* with the target contour along a predefined extraction direction.
