**3. Visual systems for non–rigid robots**

Different video tracking schemes for non–rigid robots and actuators are possible; the selected ones are presented briefly below.

### **3.1 Conventional motion capture system**

A conventional motion capture system (a multiple–camera vision system [Aghajan & Cavallaro (2009)]) uses a set of cameras located around the robot (Fig. 3). Video tracking enables estimation of the robot state, which is necessary for control. Such a system is very simple to implement in comparison with the other tracking schemes presented here. The market availability of such systems for a large working area (known as a volume), such as a cubic region a few meters across in every direction, is important for large–scale systems. Systems for a small working area of about half a meter in every direction are also available.

A typical motion capture system uses markers to estimate the state of a human or other objects. The measurements are contactless, so significant integration or embedding into the robot surface is not necessary, and the weight of the robot is preserved. A motion capture system may be used to measure a very large number of points located on the robot surface. A single camera or a few cameras are sufficient for estimating the robot state in most cases.
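A multiple–camera system recovers the 3D position of a marker from its 2D image positions by triangulation. The following is a minimal sketch under simplifying assumptions (two calibrated cameras with known 3×4 projection matrices, noise–free detections); all names and the toy camera setup are illustrative, not part of the original system.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two views.

    P1, P2 : 3x4 camera projection matrices (intrinsics * extrinsics).
    x1, x2 : 2D image positions (x, y) of the same marker in each view.
    Returns the estimated 3D point.
    """
    # Each view contributes two linear equations in the homogeneous 3D point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Toy setup: an identity camera and a camera shifted 1 m along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])
x1 = (P1 @ np.append(X_true, 1.0))[:2] / (P1 @ np.append(X_true, 1.0))[2]
x2 = (P2 @ np.append(X_true, 1.0))[:2] / (P2 @ np.append(X_true, 1.0))[2]
print(triangulate(P1, P2, x1, x2))   # recovers approximately [0.2, 0.1, 2.0]
```

With noisy detections from more than two cameras, the same least–squares formulation extends naturally: each additional view appends two more rows to the matrix.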

Fig. 3. Motion capture of the non–rigid robot

There are also drawbacks related to vision techniques. Occlusion reduces the possibility of state estimation, and multiple cameras are necessary to reduce such effects, but elimination of occlusion is not possible in the general scenario. Occlusions may occur due to self–occlusion of the robot by its own parts, such as arms, or when the environment or manipulated objects are close to the robot.

Internal parts of the robot and their related states are hard to estimate if the cameras are placed around the robot. The state estimation is based on the outer surfaces, and estimation of the inner parts of the robot is a very difficult task. Such a situation occurs, for example, when estimating the state of the propulsion part of an underwater robot modeled on biological species such as squids. This is a specific design problem, but it should be identified at an early development stage.

Illumination of the working area influences the estimation result, so constant environmental conditions are recommended. Variable conditions, such as bright light sources, may disturb image acquisition by overexposure. Constant light conditions are especially important for retroreflective markers; light–emitting markers are more robust to variable illumination conditions. Overexposure and underexposure conditions require expensive HDR (High Dynamic Range) cameras.

High–speed cameras are available today (frame rates over 100 or even 1000 fps), but latency is also a very important factor for smooth control of the robot, so the image processing part should be integrated into the sensors (intelligent cameras are recommended). Most professional motion capture systems process the acquired image inside the camera to reduce the bandwidth between camera and computer; moreover, marker detection algorithms are processed in hardware to reduce the processing cost on the computer. High–speed cameras reduce the distance between the positions of a marker in two consecutive frames, which makes simple marker tracking and assignment algorithms applicable. The gate–based approach and nearest–neighbor assignment are examples of simple but effective algorithms. Assignment is necessary for maintaining the tracks of the markers, and it is simpler if the markers are unique, complex patterns. Position–, scale– and rotation–invariant markers may carry information about a unique marker number. Larger markers carrying such additional number information are less useful due to their size, but color–coded number information is an interesting alternative.
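The gated nearest–neighbor assignment described above can be sketched as follows. This is a minimal greedy illustration (a real system might solve the assignment globally, e.g. with the Hungarian algorithm); the gate radius and the sample coordinates are hypothetical.

```python
import numpy as np

def assign_markers(tracks, detections, gate=10.0):
    """Greedy gated nearest-neighbor assignment of detections to tracks.

    tracks     : (N, 2) last known marker positions.
    detections : (M, 2) marker positions detected in the new frame.
    gate       : maximum allowed displacement between frames (pixels);
                 with a high frame rate this gate can be small.
    Returns a dict {track_index: detection_index}.
    """
    assignment = {}
    used = set()
    for ti, t in enumerate(tracks):
        # Distances from this track to every detection in the new frame.
        d = np.linalg.norm(detections - t, axis=1)
        # Take the closest unused detection inside the gate, if any.
        for di in np.argsort(d):
            if d[di] > gate:
                break            # all remaining candidates are even farther
            if di not in used:
                assignment[ti] = int(di)
                used.add(di)
                break
    return assignment

tracks = np.array([[100.0, 50.0], [200.0, 80.0]])
detections = np.array([[202.0, 81.0], [101.0, 49.0], [400.0, 400.0]])
print(assign_markers(tracks, detections))   # {0: 1, 1: 0}
```

The detection at (400, 400) falls outside every gate and is left unassigned, which is how spurious reflections are typically rejected.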


Image processing and state estimation algorithms should be low–latency and real–time. A fixed processing time, or a variable one with a known maximal response time, is necessary. Detection, tracking, and assignment algorithms should be carefully selected.

Commercial motion capture systems are mostly of closed design, without the possibility of algorithm replacement. There are currently no free systems available.

The conventional system based on multiple cameras is not unique; a similar idea based on video estimation is possible in other configurations. Most advantages and disadvantages are preserved in these configurations, and some of them are interesting for new robot designs.

### **3.2 Robot equipped with the vision systems**

Some robots, especially mobile ones, use their own vision systems for navigation and object manipulation. The availability of an onboard vision system is important for inspection robots working in hostile environments, especially for space probes or planet exploration robots. Vision systems are also used for remote examination of the current state of the robot in case of a significant motion error. Blocked wheels, or arms and legs that have failed due to unexpected environmental conditions or internal faults, can be detected using the vision system intended for navigation or object manipulation. This is a typical procedure in space robots nowadays. Vision sensors placed on flexible arms help in such situations (Fig. 4): they enable inspection of the failure source and the search for solutions, and may save (extend the life of) a multi–million–dollar robot.

Fig. 4. Non–rigid robot equipped with the vision systems

Conventional sensors may fail, and the availability of a vision system also provides redundancy. A proper design uses cameras for the navigation and manipulation tasks and many other sensors for movement control. A secondary task of the vision system is the measurement of the state for motion control in case of failure of the primary motion control (measurement subsystem).
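The redundancy pattern above can be sketched as a simple failover between the primary measurement subsystem and the vision estimate. This is only an illustrative skeleton; the function names and the dictionary state format are hypothetical, not part of any described system.

```python
def read_state(primary, backup_vision):
    """Return the robot state and its source, falling back to vision.

    primary       : callable returning the state, or None on failure
                    (e.g. a conventional encoder subsystem).
    backup_vision : callable returning a vision-based state estimate.
    """
    state = primary()
    if state is not None:
        return state, "primary"
    # Primary measurement subsystem failed: use the vision estimate.
    return backup_vision(), "vision"

# Hypothetical example: the encoder subsystem has failed and returns None.
encoders = lambda: None
vision = lambda: {"x": 1.2, "y": 0.4}
print(read_state(encoders, vision))   # ({'x': 1.2, 'y': 0.4}, 'vision')
```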

Non–rigid robots offer the interesting possibility of a vision system using their own multiple cameras. Such a robot may change its own state, and thereby the camera positions and orientations, creating different camera configurations on demand. The concept of such a robot is similar to an amoeba, which has a large capacity for shape modification.
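When the robot reconfigures, the extrinsic parameters of an onboard camera follow by composing rigid transforms: the pose of the robot base in the world with the (deformation-dependent) pose of the camera in the base frame. A minimal sketch, with purely hypothetical poses:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(a):
    """Rotation matrix about the Z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical poses: robot base in the world, camera on a flexible arm
# expressed in the base frame. When the body deforms, only T_base_cam
# changes and the camera extrinsics follow by composition.
T_world_base = make_pose(rot_z(np.pi / 2), np.array([1.0, 0.0, 0.0]))
T_base_cam = make_pose(rot_z(-np.pi / 4), np.array([0.0, 0.5, 0.3]))
T_world_cam = T_world_base @ T_base_cam
print(np.round(T_world_cam[:3, 3], 3))   # camera position in the world frame
```

In practice the transform from base to camera must itself be estimated (the calibration problem discussed below for flexible bodies), since the deformation is not known exactly.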

The cameras are integrated into the robot's flexible body. The range of work is unlimited and not restricted to a unique area (volume). Different camera configurations may be proposed in real time and tested for optimal object manipulation or movement. The most challenging task is multiple– or single–camera calibration [Daniilidis & Eklundh (2008); Lei et al. (2005); Mazurek (2010; 2009; 2007)]. The estimation of the external parameters is especially important for such robots.

### **3.3 Video sensors on robot's surface**

This is a specific version of the previous case and an inverse motion capture configuration. The cameras are placed on the robot, and a fixed set of markers is observed by them. The robot's environment is used for the robot's state estimation (Fig. 5).

Fig. 5. Cameras placed on the robot surface

It is possible to use the cameras for navigation and manipulation purposes and also for estimating the robot's own state.

One of the most important factors in this case is power consumption. A motion capture configuration using passive markers on the robot does not require additional power on the robot. An inverse motion capture system needs a power supply for the cameras and acquisition devices. Image processing performed by an external computer is an important technique for reducing the additional electrical power required in the inverse case. The weight of the robot is also reduced if the computational part is located outside the robot.

### **3.4 Cooperative robot swarm with multiple cameras**

Another possibility arises when multiple robots (rigid or non–rigid) are equipped with cameras for navigation, manipulation, and self–measurement. Swarm members are separate robots from the physical point of view, but from the logical point of view they form a single robot if the cooperation between the members of the swarm is very close. The self–measurement task (estimation of a robot's own parameters) is very interesting, because the state of a particular member of the swarm is obtained from neighboring members. Members inside the swarm are favored due to the availability of multiple views (multiple independent observations) from neighboring members; the outer members are only partially observed. The multiple swarm members may cooperate in many ways.

**4. Vision based estimation of position and orientation**

The images acquired by the camera set give information about the 3D world using multiple 2D views. Relations between image objects, or additional knowledge about an object, may be used to estimate the positions of objects and cameras. Without additional knowledge, only relative spatial relations are obtained.

### **4.1 Features and model based approaches**

Vision techniques use feature points or model–fitting approaches. Both are important for establishing relations between the real and the virtual (computer–modeled) world. In the case of motion capture systems, markers (feature points) are placed on the robot, or a deformable model of the robot is used (model fitting).

Feature points are existing features of surrounding objects in the environment (e.g. corners, edges) or intentionally added ones (e.g. ball–shaped markers or painted chessboard patterns). Estimation of their position (for point–like features) and optionally their orientation (for edges or patterns) enables estimation of the camera position relative to the object.

The model–fitting approach is based on a 3D model of the robot. The camera measurements are related to the assignment of pixels to the background or the robot body. The aim of the fitting is to find the configuration of the model that reproduces the image (for a single–camera system) or the images (if a multiple–camera system is used). The corresponding real and virtual (rendered) images are fitted if the configurations of the real robot and its model are identical.

### **4.2 Correspondence by the calibration object**

The simplest technique for establishing relations between the virtual and the real camera is based on a calibration object. This technique uses a physical object with known physical dimensions (*M*) and a mathematical model of this object (*V*). The bridge between the real and the virtual world is the calibration object and its model (Fig. 6).

Assuming that the world coordinates (*O*, *X*, *Y*, *Z*) are defined with a fixed relation to the virtual and real calibration object, full correspondence between objects, projections, and cameras is possible (degenerate cases are not considered here). This means that all particular positions and orientations have exact values. The projections are the images of the markers from the cameras. The image acquired from the real camera is processed to estimate the marker positions with subpixel accuracy (e.g. a center–of–mass algorithm may be used). The projection of the virtual markers (*V*) onto the virtual camera projection plane is computed using computer graphics formulas with high, usually floating–point, accuracy.

During the estimation of the external parameters of the camera, the correspondence is obtained with some error. The marker projections are not identical and the camera parameters are not equal, especially in the initial steps. The error (Fig. 7) between the projections (*m*, *v*) of the markers (*M*, *V*) can be calculated; a comparison of the 2D positions on the projection planes using the *l*<sup>2</sup> value (Euclidean distance) is typically used. Iterative calculations aimed at minimizing this error are used to establish a reliable correspondence. The accumulated *l*<sup>2</sup> error is computed using the following formula:

> *l*<sup>2</sup> = ∑<sub>*i*</sub> *d*<sub>*i*</sub><sup>2</sup> = ∑<sub>*i*</sub> (*m<sub>i</sub>* − *v<sub>i</sub>*)<sup>2</sup> (1)
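The accumulated error of Equation (1) can be sketched in code. This is only an illustration of the cost value itself, with hypothetical marker coordinates; in a full calibration, an iterative optimizer would adjust the camera parameters to minimize this quantity.

```python
import numpy as np

def accumulated_l2(m, v):
    """Accumulated squared error l^2 = sum_i d_i^2 = sum_i (m_i - v_i)^2.

    m : (N, 2) marker projections measured in the real camera image.
    v : (N, 2) projections of the virtual markers rendered with the
        current estimate of the camera parameters.
    """
    d = m - v
    # Squared Euclidean distance per marker, summed over all markers.
    return float(np.sum(d * d))

# Hypothetical projections of two markers in the real and virtual views.
m = np.array([[10.0, 5.0], [20.0, 7.0]])
v = np.array([[10.5, 5.0], [20.0, 6.0]])
print(accumulated_l2(m, v))   # 0.25 + 1.0 = 1.25
```

A zero value indicates that the rendered virtual projections coincide with the measured ones, i.e. the camera parameters have been recovered up to measurement noise.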
