### **1. Introduction**

The performance of industrial robots in realizing fast and accurate manipulation is very important for the manufacturing process, as it directly relates to productivity and quality. Meanwhile, as manufacturing shifts from an old era of mass production to a new era of high-mix, low-volume production, the autonomous capability of industrial robots becomes more and more important to the manufacturing industry. Autonomy represents the ability of a system to react to changes and uncertainties on the fly.

Currently, off-line teaching-playback using a teaching pendant, or physically positioning a robot with a teaching arm, remains the main method in applications of industrial robots. The method features a user-friendly interface developed by commercial robot manufacturers, and it is usually motion-optimized and reliable as long as task conditions do not change. As detailed in [1], the negative effects of nonlinear dynamics during high-speed motion may be pre-compensated in order to achieve accurate path tracking during the playback phase. However, it is impossible for a teaching-playback robot to adapt to significant variations in the initial pose of a working target or to unexpected fluctuations during manipulation. CAD model-based teaching methods do not enable a robot to adapt to changes on the fly either. In [2], a view-based teaching-playback method was proposed to achieve robust manipulation against changes in task conditions with the use of an artificial neural network (ANN). However, the approach has difficulty teaching jerky robot motion and cannot be applied to cases where high motion accuracy is required.

By utilizing external sensory feedback (e.g., vision), on-line control methods may help a robot adjust to environmental uncertainty. Generally, accurate models and a structured working environment are preconditions for implementation. In reality, however, accurate models are difficult to obtain. To address these issues, many adaptive approaches have been proposed (e.g., [3–7]) for the control problem in the presence of uncertainty associated with a robot's kinematic model, mechanical dynamics, or sensor-robot mapping. However, it is usually difficult to obtain satisfactory accuracy at a fast motion speed due to the complex dynamics and large mechanical inertia of a typical multi-joint industrial robot [8, 9].

In order to improve autonomy and to address on-line uncertainty attributable to the robot itself or to the external environment, we ideally need feedback control of the robot in task space with a much higher bandwidth than that of the accumulated uncertainty. Therefore, high-speed sensing and high-speed control based on high-speed feedback information should be realized. Such systems were developed decades ago, for example the 1 ms sensor-motor fusion system presented in [10]. However, considering that easy integration with a commercial robot's black-box controller (and even compatibility with Industry 4.0 [11]) is also an important issue, there is still no practical framework that effectively addresses these issues.

*Dynamic Compensation Framework to Improve the Autonomy of Industrial Robots*

*DOI: http://dx.doi.org/10.5772/intechopen.90169*

In this chapter, we present the dynamic compensation framework to improve the autonomy of industrial robots. The dynamic compensation concept [12–16] is implemented based on high-speed sensory feedback as well as a coarse-to-fine strategy inherited from the macro–micro method [17, 18]. It should be noted that the macro–micro concept was proposed several decades ago with the aim of enhancing system bandwidth for rigid manipulators and suppressing bending vibrations for flexible manipulators, which is not the scope of this study. In order to show the effectiveness of the proposed framework, several application scenarios are also presented.

### **2. Methodology**

In this section, the dynamic compensation framework under a hierarchical intelligence architecture is presented. For the issue of asymptotic convergence, an intuitive analysis based on a simplified model is then introduced. Finally, system integration with an industrial robot under the proposed framework is addressed.

### **2.1 Dynamic compensation framework**

The proposed framework for improving the autonomy of industrial robots is based on a coarse-to-fine strategy, as shown in **Figure 1**. The intelligence of a system is considered to be made up of two parts: action-level intelligence for motion control and planning-level intelligence for motion and task planning. Action-level intelligence represents the low-level layer of the intelligence architecture and refers to adaptability in motion control without sacrificing either motion speed or absolute accuracy; we consider it the foundation for implementing high-level intelligence. Real-time adaptation to both system uncertainty and environmental uncertainty enables a robot to focus on implementing high-level intelligence.


### **Figure 1.** *Proposed framework.*

*Industrial Robotics - New Paradigms*

Ideally, action-level intelligence is realized with a high-frequency update rate to address the high-frequency part of on-line uncertainties and changes, whereas planning-level intelligence may be implemented with a low-frequency update rate to tackle the low-frequency part of uncertainties and changes. For implementation, a traditional industrial robot is designated to conduct coarse global motion by focusing on planning-level intelligence. Concurrently, an add-on robotic module with high-speed actuators and high-speed sensory feedback is controlled to realize fine local motion, taking the role of implementing action-level intelligence.
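As a rough illustration of this two-rate hierarchy, the following sketch runs a slow planning loop alongside a fast compensation loop. The 10 Hz/1000 Hz rates, the first-order robot response, and the stroke limit of the add-on module are assumptions made for this sketch only, not values prescribed by the framework.

```python
# Illustrative two-rate hierarchy: a low-frequency planning loop issues a
# coarse set-point, while a high-frequency compensation loop corrects the
# residual error within a limited stroke. All numeric values are assumed.

def run_two_rate_loop(target=1.0, t_end=1.0, dt=0.001, plan_period=100):
    robot_pos = 0.0      # coarse position of the main robot
    comp_off = 0.0       # fine offset of the add-on module
    coarse_cmd = 0.0
    for k in range(int(t_end / dt)):
        if k % plan_period == 0:          # planning level: 10 Hz update
            coarse_cmd = target
        # main robot: slow first-order response toward the coarse command
        robot_pos += 2.0 * (coarse_cmd - robot_pos) * dt
        # action level: 1000 Hz correction, limited by the module's stroke
        comp_off = max(-0.1, min(0.1, target - robot_pos))
    return robot_pos, robot_pos + comp_off

robot_only, compensated_tool = run_two_rate_loop()
```

The main robot alone leaves a residual error; the add-on module removes the part of it that lies within its stroke, which is exactly the division of labor described above.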

### **2.2 Analysis on asymptotic convergence with a simplified model**

In order to grasp the basic idea of the proposed framework, an intuitive analysis of asymptotic convergence with a simplified model is provided under the assumption that the entire system is regulated in image space [13, 14].

As shown in **Figure 2**, an arbitrary industrial robot is controlled from position *C* toward position *A* (for simplicity, the target is assumed to be motionless) with visual feedback information from a global camera. A direct-driven add-on module with high-speed actuators and high-speed sensory feedback is mounted on its end effector. Specifically, we take high-speed vision sensing as the example for the add-on module. Initially, tool point *B* is assumed to overlap with tool point *C*. We refer to the image features of target *A* and those of the robot's tool, obtained from the visual feedback of the global camera, as *ξa* and *ξc*, respectively, and the error **e** for regulation is denoted as

$$\mathbf{e} = \xi\_c - \xi\_a \tag{1}$$

Noting that $\dot{\xi}\_c$ can be divided into two parts, namely the motion effects corresponding to the main robot and to the compensation module, we have

$$
\dot{\xi}\_c = \mathbf{J}\_r \dot{\theta}\_m + \mathbf{J}\_c \dot{\theta}\_c \tag{2}
$$

where $\dot{\theta}\_m$ and $\dot{\theta}\_c$ represent the joint velocity vectors of the main robot and the compensation module, respectively, and $\mathbf{J}\_r$ and $\mathbf{J}\_c$ are the Jacobians (mapping from joint space to image space) of the main robot and the compensation module, respectively. We assume that the compensation module is not activated ($\dot{\theta}\_c = 0$) and stays still to retain the overlap between *B* and *C*.

### **Figure 2.** *A simplified model for addressing the dynamic compensation concept. Tool point* C *is controlled to align with target position* A *with the help of a compensation module, although the position of* B *is not certain due to the systematic uncertainty of the main robot. Image features of the global camera are represented by ξ, and image features of the high-speed camera are represented by ϕ.*

In the ideal case, exponential convergence of the error regulation (for instance, *ξc* converging to *ξa*) can be obtained if we apply feedback control such as

$$
\dot{\theta}\_m = -\omega \mathbf{J}\_r^+ \mathbf{e} \tag{3}
$$

where *ω* is a constant positive-definite coefficient matrix and $\mathbf{J}\_r^+$ represents the pseudo-inverse of $\mathbf{J}\_r$. However, in practice, the ideal visual-motor model $\mathbf{J}\_r^+$ of an industrial robot is not available and is usually estimated with errors due to systematic uncertainties and inaccurate camera calibration. We denote the uncertain part as $\Delta \mathbf{J}\_r^+$. The error dynamics with feedback control then become

$$
\dot{\xi}\_c = -\omega \mathbf{J}\_r \left( \mathbf{J}\_r^+ + \Delta \mathbf{J}\_r^+ \right) \mathbf{e} \tag{4}
$$

and

$$
\dot{\mathbf{e}} = -\omega \mathbf{e} - \boldsymbol{\delta} \tag{5}
$$

where *δ* represents the projected uncertainty in the image space of the global camera. In this case, for instance, $\phi\_c^{k-1}$ moves to $\phi\_c^k$ rather than to $\phi\_a$ in the image space of the high-speed vision system due to the uncertainty, as shown in **Figure 2**. It should be pointed out that, in spite of the uncertain term *δ*, the system is still assumed to conduct coarse positioning toward the neighborhood of the target with the visual feedback of the global camera.
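To see concretely why the uncertain term matters, consider a scalar version of Eq. (5) with a constant *δ*; the values of *ω* and *δ* below are arbitrary illustrative choices, not quantities from the chapter.

```python
# Scalar illustration of Eq. (5): with a constant uncertain term delta,
# pure feedback e' = -w*e - delta settles at the offset e = -delta/w
# instead of converging to zero. w = 2 and delta = 0.3 are illustrative.
w, delta, dt = 2.0, 0.3, 0.001
e = 1.0
for _ in range(10_000):          # 10 s of simulated time, forward Euler
    e += (-w * e - delta) * dt
# e has settled near -delta / w = -0.15, not at zero
```

The steady-state offset −*δ*/*ω* is exactly the residual error that the compensation module is introduced to remove.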

Now, let the compensation module be activated with motion such that $\mathbf{J}\_c \dot{\theta}\_c = \dot{\phi}\_c$. Let $\hat{\delta}$ represent the compensation module's motion observed by the global camera, and let *γ* represent the conversion factor such that $\dot{\phi}\_c = \gamma \dot{\xi}\_c$. Note that *γ* can be perceived as time-invariant, since the lens systems of both cameras are fixed and the relative position between the two cameras can be assumed constant over a short time period. The closed-loop system then becomes

$$\begin{aligned} \dot{\mathbf{e}} &= -\omega \mathbf{e} + \hat{\boldsymbol{\delta}} - \boldsymbol{\delta} \\ &= -\omega \mathbf{e} - \tilde{\boldsymbol{\delta}} \end{aligned} \tag{6}$$

where $\tilde{\delta} = \delta - \hat{\delta}$. In order to obtain the update law for $\hat{\delta}$, we choose the following Lyapunov function candidate

$$V(\mathbf{e}, \tilde{\boldsymbol{\delta}}) = \mathbf{e}^T \mathbf{P} \mathbf{e} + \tilde{\boldsymbol{\delta}}^T \boldsymbol{\Gamma}^{-1} \tilde{\boldsymbol{\delta}} \tag{7}$$

where **P** and **Γ** are two symmetric positive-definite matrices. Suppose that the direct-driven compensation module is feedback-controlled by 1000 Hz high-speed vision, and that the uncertain term *δ*, which is due to the main robot with its large inertia, can be approximated as a constant unknown term during the 1 ms feedback control cycle. Therefore, $\dot{\tilde{\delta}} = -\dot{\hat{\delta}}$, and the time derivative of *V* is given by

$$\begin{split} \dot{V} &= \dot{\mathbf{e}}^T \mathbf{P} \mathbf{e} + \mathbf{e}^T \mathbf{P} \dot{\mathbf{e}} - 2 \tilde{\boldsymbol{\delta}}^T \boldsymbol{\Gamma}^{-1} \dot{\hat{\boldsymbol{\delta}}} \\ &= -2 \omega \mathbf{e}^T \mathbf{P} \mathbf{e} - 2 \tilde{\boldsymbol{\delta}}^T \boldsymbol{\Gamma}^{-1} \left( \boldsymbol{\Gamma} \mathbf{P} \mathbf{e} + \dot{\hat{\boldsymbol{\delta}}} \right) \end{split} \tag{8}$$

Apparently, $\dot{V} = -2\omega \mathbf{e}^T \mathbf{P} \mathbf{e} \leq 0$ if we choose the update law for $\hat{\delta}$ as

$$
\dot{\hat{\delta}} = -\Gamma \mathbf{P} \mathbf{e} \tag{9}
$$

As a result, the control law for the compensation module should satisfy the following condition:

$$J\_c \dot{\theta}\_c = -\gamma^{-1} \Gamma \mathbf{P} \mathbf{e} \tag{10}$$
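The adaptive scheme can also be checked numerically. The following scalar simulation of Eqs. (6) and (9) uses illustrative gains (not values from the chapter) and shows the error and the uncertainty estimate converging as claimed:

```python
# Scalar simulation of the adaptive compensation scheme:
#   e'         = -w*e + delta_hat - delta    (closed loop, Eq. 6)
#   delta_hat' = -Gamma*P*e                  (update law, Eq. 9)
# The gains and the unknown constant delta are illustrative choices; the
# point is that e -> 0 and delta_hat -> delta without knowing delta.
w, P, Gamma, dt = 2.0, 1.0, 5.0, 0.001
delta = 0.3                      # unknown constant uncertainty
e, delta_hat = 1.0, 0.0
for _ in range(10_000):          # 10 s of simulated time, forward Euler
    de = -w * e + delta_hat - delta
    dhat = -Gamma * P * e
    e += de * dt
    delta_hat += dhat * dt
# e is now near zero and delta_hat has converged toward delta
```

Replacing the constant *δ* with a slowly varying one leaves the behavior essentially unchanged as long as *δ* varies little within one control cycle, which is the 1 ms assumption made above.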

From the analysis above, we claim that asymptotic convergence is achievable using the proposed dynamic compensation in spite of systematic uncertainty in the main robot. Several issues should be noted here. First, the same conclusion can be drawn no matter how the main robot is controlled (in task space, as here, or in joint space), and the compensation capability can be further enhanced because the control frequency of most commercial industrial robots is lower than the 1000 Hz feedback control of the directly driven compensation module. Second, although dynamics are not fully incorporated in the analysis, our claim is still reasonable under the condition that the compensation actuator has a much higher bandwidth than that of the main robot. Third, although we have assumed the target to be motionless above, the same analysis applies to cases where the target is moving but its motion is negligible in the context of 1000 Hz high-speed vision sensing. Moreover, although several robust and adaptive control approaches have been proposed (e.g., [3]) for direct control of robots with uncertain kinematics and dynamics, we note the advantages of our method in the following two aspects:

1. The method here decouples the direct-driven compensation module from the main industrial robot and requires no changes to the main robot's controller.

On the contrary, traditional adaptive control methods need direct access to the inner loop of a robot's controller (mostly not open), which is usually considered difficult both technically and practically.

2. It is difficult for traditional adaptive control methods to realize high-speed and accurate adaptive regulation due to the main robot's large inertia and complex nonlinear dynamics. With the philosophy of motion decoupling, as well as the adoption of high-speed vision to sense the accumulated uncertainties, the proposed method enables a low-accuracy industrial robot to realize high-speed and accurate position regulation by incorporating a ready-to-use add-on module.

To summarize, the proposed dynamic compensation involves three important features:


1. The compensation module should be controlled accurately and sufficiently fast. Ideally, it has a much larger bandwidth than that of the main robot.

2. The sensory feedback for the compensation module should be sufficiently fast in order to satisfy the assumption $\dot{\tilde{\delta}} = -\dot{\hat{\delta}}$.

3. The error value **e** is the relative information between the robot's tool point and the target in image coordinates, which can be observed directly.

Finally, it should be noted that since the add-on module works independently of the main robot's controller, optimal control of the system is another issue to address and is beyond the scope of this chapter.

### **2.3 Integration with conventional industrial robots**

The proposed framework can be easily integrated with existing industrial robots. Usually, the inner control loops of conventional industrial robots are a black box to end users due to concerns over safety and intellectual property, and only limited functions such as trajectory planning are available to users through interfaces provided by robot makers. In other words, it is difficult to incorporate external sensory information into the robot's inner control loop for motion control. On the other hand, as shown in **Figure 3**, the control schemes for an arbitrary industrial robot and for the add-on compensation module are separated in the proposed framework. Therefore, the compatibility of the proposed method is good, as the industrial robot itself can be treated as a black box and users only need common interfaces for integration.

In the dynamic compensation framework, an industrial robot is designated for fast and coarse motion. Therefore, the effort spent on trajectory planning can be greatly reduced compared with traditional teaching-playback methods as well as adaptive control methods based on external sensory feedback. Coarse motion planning of the industrial robot can be conducted either semiautonomously or autonomously. In the former case, the teaching points covering a target trajectory can be very sparse, as long as the target trajectory can be accessed by the add-on module. In the latter case, a coarse trajectory can be planned by utilizing external sensors (e.g., a camera), and the calibration for sensor-motor mapping can be rough and easy. By contrast, for conventional methods utilizing external sensors to realize accurate motion control of the industrial robot, calibration for sensor-motor mapping can be very complex and difficult.

### **Figure 3.** *Integration with conventional industrial robots.*

As in many other traditional methods, the implementation of motion planning based on external sensory feedback involves two aspects: calibration of the mapping between the coordinates of the sensory information and those of the industrial robot, and signal processing to extract key points for motion planning from the sensory space according to a specified task. From the perspective of motion control, application tasks of industrial robots can be categorized into two kinds: set-point motion control and trajectory motion control. For conventional methods, extracting key points and then realizing point-to-point motion control with accurate sensor-motor models makes it relatively easy to assure good accuracy for tasks involving set-point motion (e.g., a peg-in-hole task). However, for tasks involving continuous trajectory control based on extracted key points and accurate sensor-motor models, good accuracy is still hard to achieve due to the complex dynamics of an industrial robot in on-line motion. On the other hand, since an industrial robot is only asked to realize coarse motion in the proposed framework, calibration can be rough and easy, and motion errors due to the complex dynamics or even mismatched kinematics are allowable, as long as the accumulated error remains within the work range of the add-on compensation module. Application tasks with both set-point motion control and trajectory motion control will be implemented with the proposed framework. First, simplified peg-and-hole alignment in one dimension is presented in Section 3; contour tracing in two dimensions is then introduced in Section 4.

### **3. Application scenario 1: peg-and-hole alignment in one dimension**

The simplified peg-and-hole alignment task was conducted to test the efficiency of the proposed method in realizing fast and accurate set-point regulation under internal and external uncertainties [13]. The experimental testbed is shown in **Figure 4**. The add-on compensation module had one degree of freedom (DOF). A workpiece (a metal plate with six randomly configured holes) was blindly placed on a desk for each experimental trial. The holes were 2 mm along the *x*-direction and were elongated in the *y*-direction to account for the fact that compensation was carried out only in the *x*-direction. A mechanical pencil with a diameter of 1.0 mm acting as the peg was attached to the linear compensation actuator, and the insertion
