When there are more simultaneously active obstacles in the neighborhood of the manipulator, the sum of the torques due to the relevant virtual forces is used,

$$\tau_o = \sum_{i=1}^{a_o} \mathbf{J}_{o,i}^{T}\,\mathbf{F}_{o,i} \tag{62}$$

$$\tau_{o,N} = \bar{\mathbf{N}}^{T}\sum_{i=1}^{a_o} \mathbf{J}_{o,i}^{T}\,\mathbf{F}_{o,i} \tag{63}$$

where $a_o$ is the number of active obstacles.

To decouple the task-space and the null-space motion we propose, as before, the controller (37). Combining (48) and (37), and considering Eqs. (8) and (9), yields

$$\mathbf{H}\ddot{\mathbf{q}} + \mathbf{h} + \mathbf{g} - \bar{\mathbf{N}}^{T}\tau_o = \mathbf{H}\Big(\bar{\mathbf{J}}\big(\ddot{\mathbf{x}}_d + \mathbf{K}_v\dot{\mathbf{e}} + \mathbf{K}_p\mathbf{e} - \dot{\mathbf{J}}\dot{\mathbf{q}}\big) + \bar{\mathbf{N}}\big(\ddot{\boldsymbol{\varphi}} + \dot{\bar{\mathbf{N}}}\dot{\boldsymbol{\varphi}} + \mathbf{K}_n\dot{\mathbf{e}}_n - \dot{\bar{\mathbf{N}}}\dot{\mathbf{q}}\big)\Big) + \mathbf{h} + \mathbf{g} \tag{64}$$

As before, the term $\bar{\mathbf{N}}^{T}\tau_o$ is also part of the controller and should be on the right-hand side of Eq. (64), but we put it on the left-hand side to emphasize its role.

One of the reasons for using the above control law is that the null-space velocity controller (9) can be used for other lower-priority tasks. Hence, to optimize $p$ we can select $\dot{\boldsymbol{\varphi}}$ in (9) as

$$\dot{\boldsymbol{\varphi}} = \mathbf{H}^{-1}k_p\nabla p \tag{65}$$

Note that when the weighted generalized inverse of $\mathbf{J}$ is used, the desired null-space velocity has to be multiplied by the inverse of the weighting matrix (in our case $\mathbf{H}^{-1}$) to assure the convergence of the optimization of $p$ (Nemec, 1997).

Next we analyse the behavior of the closed-loop system. Premultiplying Eq. (64) by $\mathbf{H}^{-1}$ yields

$$\ddot{\mathbf{q}} - \mathbf{H}^{-1}\bar{\mathbf{N}}^{T}\tau_o = \bar{\mathbf{J}}\big(\ddot{\mathbf{x}}_d + \mathbf{K}_v\dot{\mathbf{e}} + \mathbf{K}_p\mathbf{e} - \dot{\mathbf{J}}\dot{\mathbf{q}}\big) + \bar{\mathbf{N}}\big(\ddot{\boldsymbol{\varphi}} + \dot{\bar{\mathbf{N}}}\dot{\boldsymbol{\varphi}} + \mathbf{K}_n\dot{\mathbf{e}}_n - \dot{\bar{\mathbf{N}}}\dot{\mathbf{q}}\big)$$

and by using (1) the above equation can be rewritten in the form

$$\bar{\mathbf{J}}\big(\ddot{\mathbf{e}} + \mathbf{K}_v\dot{\mathbf{e}} + \mathbf{K}_p\mathbf{e}\big) + \bar{\mathbf{N}}\big(-\ddot{\mathbf{q}} + \ddot{\boldsymbol{\varphi}} + \dot{\bar{\mathbf{N}}}\dot{\boldsymbol{\varphi}} + \mathbf{K}_n\dot{\mathbf{e}}_n - \dot{\bar{\mathbf{N}}}\dot{\mathbf{q}}\big) = -\mathbf{H}^{-1}\bar{\mathbf{N}}^{T}\tau_o \tag{66}$$

The dynamics of the system in the task space and in the null space can be obtained by premultiplying Eq. (66) by $\mathbf{J}$ and $\bar{\mathbf{N}}$, respectively. Premultiplying (66) with $\mathbf{J}$ yields

$$\ddot{\mathbf{e}} + \mathbf{K}_v\dot{\mathbf{e}} + \mathbf{K}_p\mathbf{e} = \mathbf{0} \tag{67}$$

since $\mathbf{J}\bar{\mathbf{J}} = \mathbf{I}$, $\bar{\mathbf{N}}\mathbf{H}^{-1} = \mathbf{H}^{-1}\bar{\mathbf{N}}^{T}$, and $\mathbf{J}\bar{\mathbf{N}} = \mathbf{0}$. The proper choice of $\mathbf{K}_v$ and $\mathbf{K}_p$ assures the asymptotic stability of the system. Furthermore, it can be seen that the virtual forces do not affect the motion in the task space. Next, premultiplying (66) by $\bar{\mathbf{N}}$ yields

$$\bar{\mathbf{N}}\big(-\ddot{\mathbf{q}} + \ddot{\boldsymbol{\varphi}} + \dot{\bar{\mathbf{N}}}\dot{\boldsymbol{\varphi}} + \mathbf{K}_n\dot{\mathbf{e}}_n - \dot{\bar{\mathbf{N}}}\dot{\mathbf{q}}\big) = -\bar{\mathbf{N}}\mathbf{H}^{-1}\bar{\mathbf{N}}^{T}\tau_o \tag{68}$$

because $\bar{\mathbf{N}}\bar{\mathbf{J}} = \mathbf{0}$. Rearranging Eq. (68) and using the relation $\ddot{\mathbf{e}}_n = \bar{\mathbf{N}}(\ddot{\boldsymbol{\varphi}} - \ddot{\mathbf{q}}) + \dot{\bar{\mathbf{N}}}(\dot{\boldsymbol{\varphi}} - \dot{\mathbf{q}})$ we obtain

$$\bar{\mathbf{N}}\big(\ddot{\boldsymbol{\varphi}} - \ddot{\mathbf{q}} + \dot{\bar{\mathbf{N}}}(\dot{\boldsymbol{\varphi}} - \dot{\mathbf{q}}) + \mathbf{K}_n\dot{\mathbf{e}}_n\big) = -\bar{\mathbf{N}}\mathbf{H}^{-1}\bar{\mathbf{N}}^{T}\tau_o = -\mathbf{H}^{\ddagger}_{n}\tau_o \tag{69}$$

**6. Experimental results**
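The derivation above rests on the projector identities $\mathbf{J}\bar{\mathbf{J}} = \mathbf{I}$, $\mathbf{J}\bar{\mathbf{N}} = \mathbf{0}$, $\bar{\mathbf{N}}\bar{\mathbf{J}} = \mathbf{0}$ and $\bar{\mathbf{N}}\mathbf{H}^{-1} = \mathbf{H}^{-1}\bar{\mathbf{N}}^{T}$. Before describing the experiments, here is a minimal numerical check of these identities for the inertia-weighted generalized inverse; the Jacobian and the inertia-like matrix are arbitrary stand-ins, not the manipulator's:

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((2, 4))        # task Jacobian (arbitrary stand-in)
A = rng.standard_normal((4, 4))
H = A @ A.T + 4.0 * np.eye(4)          # symmetric positive-definite, inertia-like

Hinv = np.linalg.inv(H)
J_bar = Hinv @ J.T @ np.linalg.inv(J @ Hinv @ J.T)   # weighted generalized inverse
N_bar = np.eye(4) - J_bar @ J                        # null-space projector

assert np.allclose(J @ J_bar, np.eye(2))             # J Jbar = I
assert np.allclose(J @ N_bar, 0.0)                   # J Nbar = 0
assert np.allclose(N_bar @ J_bar, 0.0)               # Nbar Jbar = 0
assert np.allclose(N_bar @ Hinv, Hinv @ N_bar.T)     # Nbar H^-1 = H^-1 Nbar^T
```

The last identity is the one that makes the right-hand side of (66) vanish under premultiplication by $\mathbf{J}$, so the virtual forces cannot disturb the task-space error dynamics (67).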

The proposed algorithms were tested on the laboratory manipulator (see Fig. 13) and on two KUKA LWR robots (see Fig. 14). The laboratory manipulator was specially developed for testing different control algorithms. To be able to test algorithms for redundant systems, the manipulator has four revolute DOF acting in a plane. The link lengths of the manipulator are *l* = (0.184, 0.184, 0.184, 0.203)*m* and the link masses are *m* = (0.83, 0.44, 0.18, 0.045)*kg*. The manipulator is a part of the integrated environment for the design of control algorithms and the testing of these algorithms on a real system (Žlajpah, 2001).
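For reference, the kinematics of this planar 4R arm follow directly from the link lengths above. A minimal sketch (the configuration and function names are illustrative, not from the chapter):

```python
import numpy as np

l = np.array([0.184, 0.184, 0.184, 0.203])   # link lengths [m] from the text

def fk(q):
    """End-effector position of the planar 4R manipulator."""
    s = np.cumsum(q)                         # absolute link angles
    return np.array([np.sum(l * np.cos(s)), np.sum(l * np.sin(s))])

def jacobian(q):
    """2x4 end-effector Jacobian of the planar 4R manipulator."""
    s = np.cumsum(q)
    J = np.zeros((2, 4))
    for i in range(4):                       # column i: effect of joint i
        J[0, i] = -np.sum(l[i:] * np.sin(s[i:]))
        J[1, i] = np.sum(l[i:] * np.cos(s[i:]))
    return J
```

With a two-dimensional task and four joints the arm has two redundant degrees of freedom, which is what the avoidance schemes below exploit.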

Fig. 13. Experimental manipulator with external force sensor


Obstacle Avoidance for Redundant Manipulators as Control Problem 225


Fig. 14. Experimental setup with two KUKA LWR robots for bimanual movement imitation.

Fig. 15. Planar 4DOF manipulator tracking a path in an unstructured environment with three obstacles using velocity-based control (sampled every 1s)

### **6.1 Velocity controller**

The task in these experiments was tracking the path x*<sup>d</sup>* = [0.4 − 0.2 sin(2*π t*/8), −0.1 + 0.1 sin(2*π t*/4)]*<sup>T</sup>* and the motion of the manipulator was obstructed by three obstacles (see Fig. 15). In the current implementation the vision system uses a simple USB WebCam that can recognize the scene and output the position of all the obstacles in less than 0.04s. To avoid the obstacles, the proximity sensor distance was selected as *dm* = 0.08*m*. The rate of the velocity controller was 200Hz (the necessary joint velocities for the avoiding motion are calculated in less than 0.5ms). The experimental results are given in Fig. 15. As we can see, the obstacles were successfully avoided.
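The desired path and the proximity-triggered avoidance can be sketched as follows. Only the path x*<sup>d</sup>* and the activation distance *dm* = 0.08*m* are taken from the text; the repulsive-velocity magnitude `v0` and its gain profile are illustrative assumptions, not the chapter's exact avoidance law:

```python
import numpy as np

def x_d(t):
    """Desired end-effector path from the experiment."""
    return np.array([0.4 - 0.2 * np.sin(2 * np.pi * t / 8),
                     -0.1 + 0.1 * np.sin(2 * np.pi * t / 4)])

d_m = 0.08  # activation distance of the avoidance behavior [m]

def avoiding_velocity(d_vec, v0=0.2):
    """Repulsive velocity for a critical point on the arm.
    d_vec points from the obstacle to the critical point."""
    d = np.linalg.norm(d_vec)
    if d >= d_m:
        return np.zeros(2)                       # obstacle ignored outside d_m
    return v0 * (d_m / d - 1.0) * (d_vec / d)    # grows as the point nears the obstacle
```

In the chapter's velocity-level scheme such a component is assigned to the critical point through the internal (null-space) motion, so the end-effector keeps tracking x*<sup>d</sup>* while the body moves away from the obstacle.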

### **6.2 Obstacle avoidance as a primary task**

We applied our algorithm to two KUKA LWR robots as shown in Fig. 14. Our algorithm is used as a low-level control to prevent self-collision, i.e., a collision between the robots themselves. As mentioned previously, the desired movement of the robot is a secondary task. The task of collision avoidance is only active when a pre-defined threshold is approached. While far from the threshold, the algorithm allows direct control of the separate joints (q̇*n*). When the threshold is approached, the task of collision avoidance smoothly takes over and only allows joint control in the null-space projection of this task. Note that q̇*n* is in joint space.
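The smooth takeover described above can be sketched as a blend between the direct joint command q̇*n* and its projection into the null space of the avoidance task. The smoothstep blend and the activation distances `d_act`, `d_stop` are illustrative assumptions; the chapter only states that the takeover is smooth:

```python
import numpy as np

def blend(d, d_act, d_stop):
    """Smooth activation in [0, 1]: 0 far from the threshold, 1 at/inside it."""
    s = np.clip((d_act - d) / (d_act - d_stop), 0.0, 1.0)
    return 3 * s**2 - 2 * s**3          # smoothstep

def joint_velocity(qdot_n, J_a, xdot_a, d, d_act=0.25, d_stop=0.10):
    """As the inter-robot distance d shrinks, the avoidance task (J_a, xdot_a)
    becomes primary and the user command qdot_n is pushed into its null space."""
    n = qdot_n.size
    J_pinv = np.linalg.pinv(J_a)
    N = np.eye(n) - J_pinv @ J_a        # null-space projector of the avoidance task
    a = blend(d, d_act, d_stop)
    return a * (J_pinv @ xdot_a + N @ qdot_n) + (1 - a) * qdot_n
```

Far from the threshold the command passes through unchanged; inside it, the avoidance velocity ẋ*a* is enforced exactly and q̇*n* only acts in the remaining null space.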

The task for both arms in this experiment was to follow the human demonstrator in real time. The human motion is captured using the Microsoft Kinect sensor. Microsoft Kinect is based on a range camera developed by PrimeSense, which interprets 3D scene information from a continuously projected infrared structured light. By processing the depth image, the PrimeSense application programming interface (API) enables tracking of the user's movement in real time. Imitating the motion of the user's arm requires some basic understanding of human physiology. The posture of each arm may be described by four angles: three angles in the shoulder joint and one in the elbow. The shoulder joint enables the following motions (Hayes et al., 2001): arm flexion, arm abduction and external rotation. These angles are calculated from the data obtained with Microsoft Kinect.

Fig. 16. A sequence of still photographs shows the movement of two KUKA LWR robots as they successfully avoid each other. The desired movement for the robots is imitated in real time using the Microsoft Kinect sensor for the tracking.

A sequence for successful self-collision avoidance is shown in Fig. 16. Here, we can see that the robot angles are similar to the demonstrator's when the human's hands are away from the threshold, i.e., the robots are not close together. On the other hand, when close together, the robots properly adapt their motion to prevent a collision.
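Extracting the arm angles from the tracked skeleton points can be sketched as below. The axis conventions and the exact angle definitions are assumptions (the chapter follows Hayes et al., 2001); the external-rotation angle needs an additional hand/wrist reference direction and is omitted here:

```python
import numpy as np

def arm_posture(shoulder, elbow, wrist):
    """Arm-posture angles from three tracked 3D points (e.g., a Kinect skeleton).
    Assumed axes: x forward, y sideways, z up; arm hanging down is the zero posture."""
    u = elbow - shoulder                      # upper-arm vector
    f = wrist - elbow                         # forearm vector
    flexion = np.arctan2(u[0], -u[2])         # forward swing from arm-down
    abduction = np.arctan2(u[1], -u[2])       # sideways swing from arm-down
    cos_e = np.dot(u, f) / (np.linalg.norm(u) * np.linalg.norm(f))
    elbow_angle = np.arccos(np.clip(cos_e, -1.0, 1.0))  # 0 = straight arm
    return flexion, abduction, elbow_angle
```

These joint-space targets are then fed to the arms as the secondary (imitation) task, with the self-collision avoidance of Section 6.2 acting as the primary task.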

### **6.3 Contact forces**


First we tested the behavior of the manipulator when no sensors were used to detect the obstacles. The desired task was to track the circular path, and the motion of the manipulator was obstructed by an obstacle (the rod; see Fig. 13). To be able to monitor the contact forces, the rod was mounted on a force sensor. Note that the force information measured by this sensor was not used in the closed-loop controller.

The controller was based on the algorithm (7) with the task controller (8) and the null-space controller (9). We compared three sets of null-space controller parameters: in the first case (a) the controller assured very stiff null-space behavior, in the second (b) the controller assured medium stiffness in the null space, and in the last case (c) the manipulator was compliant in the null space. To prevent high impact forces, the bumper was covered with soft material and the manipulator joint velocities were low.

The experimental results are shown in Fig. 17. First, we can see that in case (a) the manipulator pushes the bumper more and more until, finally, the task has to be aborted due to the very large contact forces. Next, comparing the responses in cases (b) and (c), one can see that although the motion is almost equal in both cases, the forces are lower in case (c) (the more compliant null space). Summarizing, the experimental results confirmed that the contact forces can be decreased by increasing the null-space compliance.

### **6.4 Tactile sensors**

Next we tested the efficiency of the tactile sensors. In our experiments we used simple bumpers on each link as tactile sensors. The sensor can detect an object when it touches the object. In our case each switch needs a force of 2*N* to trigger it. This means that the sensor can detect an object if the contact force is greater than ∼ 2 − 4*N*, depending on the particular contact position on the bar. The main drawback of this type of sensor is that it can only detect the link and the side of the link where the contact occurs, and not the exact position of the contact.
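Turning such a bumper event (link index and side only) into an avoidance action can be sketched as below: the contact point is approximated by the link midpoint and the virtual force acts perpendicular to the link, away from the contacted side, as described in this section. The magnitude `f_o` is a placeholder; in the experiments it is shaped by Eqs. (49)–(50):

```python
import numpy as np

def contact_virtual_force(link_start, link_end, side, f_o=5.0):
    """Virtual force for a bumper contact on a planar link.
    side = +1 or -1 selects which side of the link was touched."""
    mid = 0.5 * (link_start + link_end)            # assumed contact point
    axis = link_end - link_start
    axis = axis / np.linalg.norm(axis)
    normal = side * np.array([-axis[1], axis[0]])  # in-plane unit normal
    return mid, f_o * normal
```

The returned force is then mapped to joint torques through the Jacobian of the contact point, as in Eq. (62).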

Fig. 17. Planar 4DOF manipulator tracking a circle while avoiding obstacles without any sensor for obstacle detection: (a) stiff null-space behavior, (b) compliant null-space behavior, (c) more compliant null-space behavior

The desired task in these experiments was tracking the linear path and, as before, the motion of the manipulator was obstructed by the rod. The virtual forces were calculated using Eqs. (49) – (50). As our sensor can detect only the side of the link where the contact occurs and not the exact position of the contact, we used the middle of the link as the approximation for the contact point, and the avoiding direction was perpendicular to the link.

The experimental results are shown in Fig. 18. The figures show the contact force norm ‖F*ext*‖ and the configurations of the manipulator. The results clearly show that the manipulator avoids the obstacle after the collision. The behavior of the manipulator after the contact with the obstacle depends mainly on the particular nominal virtual force *fo* and the time constants *Ti* and *Td*. Tuning these parameters results in different behaviors of the system. With experiments we have found that it is possible to tune these parameters so that the manipulator slides along the obstacle with minimal impact forces and chattering.

Fig. 18. Planar 4DOF manipulator tracking a circle while avoiding obstacles using a bumper and a virtual force controller

### **6.5 Virtual forces**

Finally, we tested the VF strategy. The desired task was tracking the linear path and the motion of the manipulator was obstructed by a different number of obstacles. To detect the obstacles a vision system was used. In the current implementation the vision system used a simple USB WebCam, which can recognize the scene and output the position of all obstacles in less than 0.04s. To avoid the obstacles, the proximity sensor distance was selected as *dm* = 0.18*m* and the nominal virtual force as *fo* = 10*N*. The rate of the torque controller was 400Hz (the necessary virtual forces for the avoiding motion were calculated in less than 0.5ms). The experimental results are given in Fig. 19. We can clearly see the difference in the motion when no obstacles are present and when one or more obstacles are present.

Fig. 19. Planar 4DOF manipulator tracking a path in an unstructured environment using a vision system to detect obstacles: (a) no obstacles, (b) one obstacle, (c) three obstacles

**7. Conclusion**

The presented approaches for on-line obstacle avoidance for redundant manipulators are either (a) based on redundancy resolution at the velocity level or (b) also considering the dynamics of the manipulator. The primary task is determined by the end-effector trajectories, and for the obstacle avoidance the internal motion of the manipulator is used. The goal is to assign each point on the body of the manipulator which is close to an obstacle a motion component in a direction that is away from the obstacle.
