**Meet the editor**

Dr. Zoran Gacovski is an associate professor at FON – University, Skopje, Macedonia. His teaching subjects include intelligent systems and visual programming techniques, and his areas of research are fuzzy systems, intelligent control, mobile robots, graphical models (Petri, neural and Bayesian networks), machine learning, and human-computer interaction. He earned his Ph.D. degree at the Faculty of Electrical Engineering, Skopje. In his career he was awarded a Fulbright postdoctoral fellowship (2002) for a research stay at Rutgers University, USA. He also earned a best-paper award at the Baltic Olympiad for Automation Control (2002), a US NSF grant for research in the field of human-computer interaction at Rutgers University, USA (2003), and a DAAD grant for a research stay at the University of Bremen, Germany (2008).

Contents

**Preface IX** 

**Part 1 Robots for Educational Purposes 1** 

Chapter 1 **Autonomous Mobile Robot Emmy III 3** 
Cláudio Rodrigo Torres, Jair Minoro Abe,
Germano Lambert-Torres and João Inácio da Silva Filho

Chapter 2 **Mobile Robotics in Education and Research 27** 
Lluís Pacheco, Ningsu Luo, Inès Ferrer,
Xavier Cufí and Roger Arbusé

Chapter 3 **The KCLBOT: A Framework of the Nonholonomic Mobile Robot Platform Using Double Compass Self-Localisation 49** 
Evangelos Georgiou, Jian Dai and Michael Luck

Chapter 4 **Gaining Control Knowledge Through an Applied Mobile Robotics Course 69** 
Georgios A. Demetriou

**Part 2 Health–Care and Medical Robots 87** 

Chapter 5 **Walking Support and Power Assistance of a Wheelchair Typed Omnidirectional Mobile Robot with Admittance Control 89** 
Chi Zhu, Masashi Oda, Haoyong Yu, Hideomi Watanabe and Yuling Yan

Chapter 6 **A Control System for Robots and Wheelchairs: Its Application for People with Severe Motor Disability 105** 
Alonso A. Alonso, Ramón de la Rosa, Albano Carrera, Alfonso Bahillo, Ramón Durán and Patricia Fernández

Chapter 7 **Mobile Platform with Leg-Wheel Mechanism for Practical Use 127** 
Shuro Nakajima

Chapter 9 **Influence of the Size Factor of a Mobile Robot Moving Toward a Human on Subjective Acceptable Distance 177** 
Yutaka Hiroi and Akinori Ito

Chapter 11 **Construction of a Vertical Displacement Service Robot with Vacuum Cups 215** 
Nicolae Alexandrescu, Tudor Cătălin Apostolescu, Despina Duminică, Constantin Udrea, Georgeta Ionaşcu and Lucian Bogatu

**Part 4 Localization and Navigation 289** 

## Preface

We are all witnesses that, in technological terms, the beginning of the 21st century is dedicated to mobile communications - they are everywhere: smartphones, iPads, e-readers, and many other wireless devices. What was once fiction is today a reality - music on demand, video on demand, live video conversation via IP on a tablet. What will be the next technological achievement with such a huge impact on human living? I dare to predict that the second half of this century will be highly influenced by mobile robotics - robots will become a ubiquitous part of everyday life.

Over the past century, anthropomorphic machines have become familiar figures in popular culture through books such as Isaac Asimov's *I, Robot,* movies such as *Star Wars* and television shows such as *Star Trek.* The popularity of robots in fiction indicates that people are receptive to the idea that these machines will one day walk among us as helpers and even as companions. Nevertheless, although robots play a vital role in industries such as automobile manufacturing - where there is about one robot for every 10 workers - we have a long way to go before real robots catch up with their science-fiction counterparts.

One reason for this gap is that it has been much harder than expected to give robots the capabilities that humans take for granted - for example, the abilities to orient themselves with respect to the objects in a room, to respond to sounds and interpret speech, and to grasp objects of varying sizes, textures and fragility.

Improvements in hardware electronics and decreasing component prices have enabled robot builders to add Global Positioning System chips, video cameras, array microphones (which are better than conventional microphones at distinguishing a voice from background noise), and a host of additional sensors at a reasonable expense. The resulting enhancement of capabilities, combined with expanded processing power and storage, allows today's robots to do things such as vacuum a room or help to defuse a roadside bomb - tasks that would have been impossible for commercially produced machines just a few years ago. Confidence in the rise of robots is based on recent developments in electronics and software, as well as on observations of robots, computers and even living things over the past 30 years.


In October 2005, several fully autonomous cars successfully traversed a hazard-studded 132-mile desert course, and in 2007 several successfully drove for half a day in urban traffic conditions. In other experiments within the past few years, mobile robots mapped and navigated unfamiliar office suites, and computer vision systems located textured objects and tracked and analyzed faces in real time. Meanwhile, personal computers became much more adept at recognizing text and speech. A second generation of universal robots with 100,000 MIPS (a mouse-scale brain) will be adaptable, as the first generation is not, and will even be trainable. Besides application programs, such robots would host a suite of software "conditioning modules" that would generate positive and negative reinforcement signals in predefined circumstances. For example, doing jobs fast and keeping batteries charged will be positive; hitting or breaking something will be negative. There will be different ways to accomplish each stage of an application program, from the minutely specific (grasp the handle underhand or overhand) to the broadly general (work indoors or outdoors). As jobs are repeated, alternatives that result in positive reinforcement will be favored, and those with negative outcomes shunned.

By the end of the century, humans will meet monkey-like robots of five million MIPS, a third generation of machines that will learn very quickly from mental rehearsals in simulations that model physical, cultural and psychological factors. Physical properties will include the shape, weight, strength, texture and appearance of things, and ways to handle them. Cultural aspects will include an item's name, value, proper location and purpose. Psychological factors, applied to humans and robots alike, will include goals, beliefs, feelings and preferences.

This book consists of 18 chapters divided into four sections: Robots for Educational Purposes, Health-Care and Medical Robots, Hardware – State of the Art, and Localization and Navigation. The first section contains four chapters, covering the autonomous mobile robot Emmy III, the KCLBOT mobile nonholonomic robot, and a general overview of educational mobile robots. In the second section, the following themes are covered: walking support robots, a control system for wheelchairs, a leg-wheel mechanism as a mobile platform, a micro mobile robot for abdominal use, and the influence of robot size on subjectively acceptable distance. The third section contains chapters about an I2C bus system, vertical displacement service robots, the kinematics and dynamics model of quadruped robots, and Epi.q (hybrid) robots. Finally, in the last section, the following topics are covered: skid-steered vehicles, robotic exploration (new place recognition), omnidirectional mobile robots, ball-wheel mobile robots, and planetary wheeled mobile robots.

I hope that this book will be a small contribution towards the general idea of bringing mobile robots closer to humans.

> **Prof. Dr. Zoran Gacovski**
> Associate Professor
> FON – University, Skopje, Macedonia

**Part 1** 

**Robots for Educational Purposes** 


## **Autonomous Mobile Robot Emmy III**

Cláudio Rodrigo Torres, Jair Minoro Abe, Germano Lambert-Torres and João Inácio da Silva Filho *Universidade Metodista de São Paulo, University of São Paulo and Paulista University – UNIP, Federal University of Itajubá, Universidade Santa Cecilia – UNISANTA Brazil* 

### **1. Introduction**

In this work we present a description of the Emmy III robot architecture [1], [2], [3], [4] and also a summary of the previous projects that led to the building of the Emmy III robot [5], [6], [7], [8], [9]. These robots are part of a project applying the Paraconsistent Annotated Evidential Logic Eτ [12], which allows manipulating concepts such as fuzziness, inconsistency and paracompleteness.

The Emmy III robot is designed to reach a set point in an environment that is divided into coordinates. It may be considered as a system composed of three subsystems: the planning subsystem, the sensing subsystem and the mechanical subsystem.

The planning subsystem is responsible for generating the sequence of movements the robot must perform to reach a set point. The sensing subsystem has the objective of informing the planning subsystem of the positions of obstacles. The mechanical subsystem is the robot itself, that is, the mobile mechanical platform that carries all the devices of the other subsystems; this platform must also perform the sequence of movements generated by the planning subsystem.

Note that the planning subsystem and the sensing subsystem have already been implemented, but the mechanical subsystem has not been implemented yet.

The sensing subsystem uses the Paraconsistent Artificial Neural Networks - PANN [2], [3]. The PANN is a new type of Artificial Neural Network - ANN based on the Paraconsistent Annotated Evidential Logic Eτ. In the next section we introduce the main basic concepts of the Logic Eτ, as well as some terminology.

### **2. Paraconsistent annotated evidential logic Eτ**

Paraconsistent logics are logics that allow contradictions without trivialization. A branch of them, the Paraconsistent Annotated Evidential Logic Eτ, which is employed in this work, also deals with the concept of fuzziness. Its language consists of propositions p in the usual sense together with annotation constants (µ, λ), where µ, λ ∈ [0, 1] (the real unit interval). Thus an atomic formula of the Logic Eτ has the form p(µ, λ), which can be intuitively read: the favorable evidence for p is µ and the contrary evidence is λ. A detailed treatment of the subject is found in [12].
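To make the notation concrete, an annotated atomic formula p(µ, λ) can be modeled as a small data structure. This is only an illustrative sketch; the class and field names below are ours, not part of the Logic Eτ formalism:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotated:
    """An atomic formula p(mu, lam) of the Logic E-tau."""
    p: str      # the proposition in the usual sense
    mu: float   # favorable evidence, in [0, 1]
    lam: float  # contrary evidence, in [0, 1]

    def __post_init__(self):
        if not (0.0 <= self.mu <= 1.0 and 0.0 <= self.lam <= 1.0):
            raise ValueError("evidence degrees must lie in [0, 1]")

# "The favorable evidence that there is no obstacle ahead is 0.8,
#  and the contrary evidence is 0.3."
f = Annotated("There is no obstacle in front of the robot", 0.8, 0.3)
```

Note that µ and λ are independent: unlike a probability and its complement, they need not sum to 1, which is exactly what lets Eτ express inconsistency and paracompleteness.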



The Favorable Evidence Degree (μ) is a value between 0 and 1 that represents the favorable evidence that the sentence is true.

The Contrary Evidence Degree (λ) is a value between 0 and 1 that represents the contrary evidence that the sentence is true.

Through the Favorable and Contrary Evidence Degrees it is possible to represent the four extreme logic states, as shown in the figure 1.

Fig. 1. The extreme logic states

The four extreme logic states are:

- True (V): μ = 1 and λ = 0
- False (F): μ = 0 and λ = 1
- Inconsistent (T): μ = 1 and λ = 1
- Paracomplete (⊥): μ = 0 and λ = 0
The Para-analyzer algorithm is proposed in [6]. With this algorithm it is also possible to represent the non-extreme logic states, as the figure 2 shows.

Fig. 2. The non-extreme logic states

The eight non-extreme logic states are the quasi-true states (tending to Inconsistent or to Paracomplete), the quasi-false states (tending to Inconsistent or to Paracomplete, QF→T and QF→⊥), the quasi-inconsistent states (tending to True or to False), and the quasi-paracomplete states (tending to True or to False).
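Reading the annotations off the lattice, the four extreme states can be recognized with a small helper. This is a sketch under the usual assumption that the extreme states sit at the corners μ, λ ∈ {0, 1}; the function name is ours:

```python
def extreme_state(mu: float, lam: float) -> str:
    """Map an annotation (mu, lam) to one of the four extreme logic
    states of E-tau, or report that it is non-extreme."""
    corners = {
        (1.0, 0.0): "True (V)",
        (0.0, 1.0): "False (F)",
        (1.0, 1.0): "Inconsistent (T)",
        (0.0, 0.0): "Paracomplete (bottom)",
    }
    return corners.get((mu, lam), "non-extreme")
```

Interior points of the unit square correspond to the non-extreme states; classifying those requires the Para-analyzer thresholds, which the text introduces through the control values of section 3.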


4 Mobile Robots – Current Trends


The Uncertainty Degree is defined as Gun(μ, λ) = μ + λ - 1, and the Certainty Degree as Gce(μ, λ) = μ - λ, with 0 ≤ μ, λ ≤ 1.

The proposed sensing system is described in the next section.
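The two degrees are plain arithmetic on the annotation; a minimal sketch (function names ours):

```python
def uncertainty_degree(mu: float, lam: float) -> float:
    """Gun(mu, lam) = mu + lam - 1."""
    return mu + lam - 1.0

def certainty_degree(mu: float, lam: float) -> float:
    """Gce(mu, lam) = mu - lam."""
    return mu - lam

# A fully contradictory annotation maximizes uncertainty and
# carries no net certainty.
print(uncertainty_degree(1.0, 1.0))  # 1.0
print(certainty_degree(1.0, 1.0))    # 0.0
```

Gun is positive when the two evidences over-determine the proposition (inconsistency) and negative when they under-determine it (paracompleteness); Gce is positive when the favorable evidence dominates.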

#### **3. Paraconsistent artificial neural network**

The artificial neural network of the sensing subsystem is composed of two types of cells: the Analytic Paraconsistent Artificial Neural Cell – CNAPa, and the Passage Paraconsistent Artificial Neural Cell – CNAPpa. The cells are described as follows.

#### **3.1 Analytic paraconsistent artificial neural cell - CNAPa**

The Analytic Paraconsistent Artificial Neural Cell – CNAPa has two inputs (μRA and μRB) and two outputs (S1 and S2). There are also two configuration parameter inputs (Ftct and Ftc). The figure 3 shows the graphic representation of this cell.

Fig. 3. Graphic representation of the Analytic Paraconsistent Artificial Neural Cell

The input evidence degrees are:

- μRA, such that 0 ≤ μRA ≤ 1
- μRB, such that 0 ≤ μRB ≤ 1

There are also two control values, set through the configuration parameter inputs Ftc and Ftct.

The Analytic Paraconsistent Artificial Neural Cell – CNAPa has two outputs. The output 1 (S1) is the Resultant Evidence Degree – μE.



The Analytic Paraconsistent Artificial Neural Cell calculates the maximum value of certainty control - Vcve, the minimum value of certainty control - Vcfa, the maximum value of uncertainty control - Vcic, and the minimum value of uncertainty control - Vcpa, in the following way:


$$V_{cve} = \frac{1 + Ft_c}{2} \tag{1}$$

$$V_{cfa} = \frac{1 - Ft_c}{2} \tag{2}$$

$$V_{cic} = \frac{1 + Ft_{ct}}{2} \tag{3}$$

$$V_{cpa} = \frac{1 - Ft_{ct}}{2} \tag{4}$$

The Resultant Evidence Degree – μE, is determined as:

$$\mu_E = \frac{G_c + 1}{2} \tag{5}$$

As Gc = μ - λ, we can say that:

$$\mu_E = \frac{\mu - \lambda + 1}{2} \tag{6}$$

The Certainty Interval (φ) is the interval of Certainty Degree values that can be modified without changing the Uncertainty Degree value. It is determined as:

$$\varphi = 1 - |G_{ct}| \tag{7}$$
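Equations (1)-(7) translate directly into code. The sketch below groups them into illustrative helpers (the function names and the dictionary layout are ours):

```python
def cnapa_parameters(ftc: float, ftct: float) -> dict:
    """Control values of the analytic cell, equations (1)-(4)."""
    return {
        "Vcve": (1 + ftc) / 2,    # max. certainty control, eq. (1)
        "Vcfa": (1 - ftc) / 2,    # min. certainty control, eq. (2)
        "Vcic": (1 + ftct) / 2,   # max. uncertainty control, eq. (3)
        "Vcpa": (1 - ftct) / 2,   # min. uncertainty control, eq. (4)
    }

def resultant_evidence(mu: float, lam: float) -> float:
    """Equations (5)-(6): mu_E = (Gc + 1) / 2 with Gc = mu - lam."""
    return (mu - lam + 1) / 2

def certainty_interval(gct: float) -> float:
    """Equation (7): phi = 1 - |Gct|."""
    return 1 - abs(gct)
```

Note that μE maps the certainty degree from [-1, 1] back onto [0, 1], so a neutral annotation (μ = λ) yields μE = 0.5.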

#### **3.2 Passage Paraconsistent Artificial Neural Cell - CNAPpa**

The Passage Paraconsistent Artificial Neural Cell – CNAPpa has one input (µ), one output (S1) and one parameter control input (Ftc), through which the output value can be limited. The figure 4 shows the graphic representation of the CNAPpa.

Fig. 4. Graphic representation of the Passage Paraconsistent Artificial Neural Cell


The CNAPpa calculates the maximum value of certainty control - Vcve and the minimum value of certainty control - Vcfa by the equations (1) and (2), and it determines the Resultant Evidence Degree - μE by the equation (6). For this, λ is taken as:

$$\lambda = 1 - \mu \tag{8}$$

The output S1 assumes the same value as the input μ when the following situation is true:
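Substituting equation (8) into equation (6) shows why the CNAPpa acts as a passage cell: the resultant evidence degree reduces to the input itself. A minimal sketch (function name ours):

```python
def cnappa(mu: float) -> float:
    """Passage cell output: lam = 1 - mu (eq. 8) substituted into
    mu_E = (mu - lam + 1) / 2 (eq. 6) gives mu_E = mu."""
    lam = 1 - mu                # eq. (8)
    return (mu - lam + 1) / 2   # eq. (6)
```

In the full cell the control values of equations (1)-(2) then decide whether this value is passed through unchanged.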



### **4. Autonomous mobile robot Emmy I**

The robot Emmy I was the first application of the Paraconsistent Evidential Logic in robotics [8], [9]. The Emmy I robot project finished in 1999, and its results led to the construction of the Emmy II robot and to the Emmy III project itself.

The Emmy I has two ultrasonic sensors: one determines the favorable evidence degree and the other determines the contrary evidence degree. The Emmy I controller, named Paracontrol, allows the Emmy I to act conveniently in "special" situations, such as when the data are contradictory: one sensor may detect an obstacle in front of the robot (for example, a wall) while the other detects no obstacle (for example, it may be pointing toward an open door). In a situation like that, Emmy may stop and turn 45° toward the freer direction. Then, if there is no inconsistency in a new measurement, the robot may take another decision, for example, to go ahead.
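The behavior just described can be sketched as a toy decision rule. This is our illustrative reading, not the actual Paracontrol circuit: µ and λ come from the two ultrasonic sensors, and a strongly inconsistent (or paracomplete) reading triggers the 45° turn; the threshold value is an assumption:

```python
def emmy_decision(mu: float, lam: float, limit: float = 0.5) -> str:
    """Toy sketch of Emmy I's reaction to its two sonar readings
    for the proposition "there is no obstacle ahead"."""
    g_un = mu + lam - 1.0   # uncertainty degree Gun
    g_ce = mu - lam         # certainty degree Gce
    if abs(g_un) > limit:
        # contradictory (or paracomplete) sensors: stop and turn
        # 45 degrees toward the freer direction
        return "turn 45 degrees"
    return "go ahead" if g_ce >= 0 else "go back"
```

The real Paracontrol distinguishes 12 states rather than three actions, but the structure is the same: the decision depends on the pair (Gun, Gce) rather than on either sensor alone.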

The Emmy I robot consists of a circular aluminum mobile platform, 30 cm in diameter and 60 cm tall. Its main device is the Paracontrol controller. While moving in a non-structured environment, the Emmy robot gets information about the presence or absence of obstacles through a sonar system called Parasonic [17]. The figure 5 shows the autonomous mobile robot Emmy.

Fig. 5. The autonomous mobile robot Emmy


The Paracontrol [18] is an electronic materialization of the Para-analyzer algorithm [9], [19]. Basically, it is an electronic circuit that treats logic signals in the context of the logic Eτ. The circuit compares the logic values at its inputs and determines the logic values at its outputs. Favorable and contrary evidence degrees are represented by voltages, and operational amplifiers determine the Certainty and Uncertainty degrees. The Paracontrol comprises both analog and digital systems, and it can be externally adjusted by applying positive and negative voltages. As there are 12 logic states in the Para-analyzer algorithm, the Paracontrol can take 12 different decisions.

Parasonic is an electronic circuit that the Emmy I robot uses to detect obstacles in its path. It converts distances to obstacles into continuous voltage signals ranging from 0 to 5 volts. Parasonic is basically composed of two ultrasonic sensors, of the POLAROID 6500 type, controlled by an 8051 microcontroller. The microcontroller is programmed to synchronize the measurements of the two sensors and to convert the distances into electric voltages.

Parasonic generates the favorable evidence degree (μ) and the contrary evidence degree (λ) as continuous voltages ranging from 0 to 5 volts, which the Paracontrol receives. The figure 6 shows the basic structure of the Emmy robot.

Fig. 6. Basic structure of Emmy robot
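The distance-to-voltage conversion can be sketched as follows. The maximum range and the linear scaling are assumptions for illustration; the text only states that the output spans 0 to 5 volts:

```python
def distance_to_voltage(distance_m: float, max_range_m: float = 5.0) -> float:
    """Map a sonar distance reading to Parasonic's 0-5 V signal
    (linear scaling assumed; the real 8051 firmware law is not given)."""
    clipped = min(max(distance_m, 0.0), max_range_m)
    return 5.0 * clipped / max_range_m

def voltage_to_evidence(volts: float) -> float:
    """Normalize a 0-5 V signal to an evidence degree in [0, 1]."""
    return volts / 5.0
```

Under this reading, a far obstacle on the "favorable" sensor gives a voltage near 5 V and hence µ near 1 (strong evidence that the path is clear), matching the sensor behavior described for Emmy II below.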

In the figure 7 the main components of the Emmy robot can be seen.

Fig. 7. Main components of the Emmy robot

The description of the Emmy robot components is the following.



### **5. Autonomous mobile robot Emmy II**

The Emmy II robot is an improvement of the Emmy I robot. It is an autonomous mobile robot which is able to avoid obstacles while it is moving in any environment.

The platform used to assemble the Emmy II robot is approximately 23 cm high and has a diameter of 25 cm. The Emmy II robot's main components are a microcontroller from the 8051 family, two sonar ranging modules (sensors) and two DC motors. The figure 8 shows the Emmy II basic structure.

Fig. 8. The Emmy II basic structure

The Emmy II controller system uses six logic states instead of the 12 logic states used in the Emmy I controller. Moreover, it offers some commands that do not exist in the Emmy I robot:

1. Velocity control: the Emmy II controller allows the robot to brake, turn and accelerate "in a smooth way" according to the logic Eτ, which is not possible in the Emmy I robot.
2. Backward motion: in some situations the robot may move backward, or turn around with one wheel fixed and the other spinning backward. These types of movements do not exist in the Emmy I robot.

A simplified block representation of the Emmy II robot can be seen in the figure 9.


Fig. 9. Emmy II block representation

- Driving: relays are responsible for actuating the DC motors M1 and M2.
- Motor driver: two DC motors are responsible for moving the robot.

The figure 10 shows a picture of the Emmy II robot.

Fig. 10. The front part of the Emmy II robot

The lower part of the Emmy II robot is shown in the figure 11.

Fig. 11. The lower part of the Emmy II robot

Autonomous Mobile Robot Emmy III 13

In the inconsistency (T), μ and λ are high (i.e., belong to T region). It means that the sensor 1 is far from an obstacle and the sensor 2 is near an obstacle, so the left side is more free than the right side. Then, the behavior should be to turn left by supplying only the DC motor 2

When the Paracompleteness (⊥) is detected, μ and λ are low. It means that the sensor 1 is near an obstacle and the sensor 2 is far from an obstacle, so the right side is more free than the left side. Then, the behavior should be to turn right by supplying only the DC motor 1

In the false state (F) there are obstacles near the front of the robot. Therefore the robot

In the QF→ T state, the front of the robot is obstructed but the obstacle is not so near as in the false state and the left side is a little bit more free than the right side. So, in this case, the robot should turns left by supplying only the DC motor 1 for spinning around backward

In the QF→⊥ state, the front of the robot is obstructed but the obstacle is not so near as in the false state and the right side is a little bit freer than the left side. So, in this case, the robot should turns right by supplying only the DC motor 2 for spinning around backward and

Aiming to verify Emmy II robot functionally, it has been performed 4 tests. Basically, counting how many collisions there were while the robot moved in an environment as

for spinning around forward and keeping the DC motor 1 stopped.

for spinning around forward and keeping the DC motor 2 stopped.

should go back.

**5.1 Tests** 

and keeping the DC motor 2 stopped.

keeping the DC motor 1 stopped.

showed in figure 13 composed the tests.

Fig. 13. Environment used to perform the Emmy II tests

The sonar ranging modules are responsible for verifying whether there is any obstacle in front of the robot. The signals generated by the sonar ranging modules are sent to the microcontroller. These signals are used to determine the favorable evidence degree (μ) and the contrary evidence degree (λ) on the proposition "There is no obstacle in front of the robot". The favorable and contrary evidence degrees are used to determine the robot movements.

The Emmy II possible movements are the following:

• Robot goes ahead. DC motors 1 and 2 are supplied for spinning around forward.
• Robot goes back. DC motors 1 and 2 are supplied for spinning around backward.
• Robot turns right. Just DC motor 1 is supplied for spinning around forward.
• Robot turns left. Just DC motor 2 is supplied for spinning around forward.
• Robot turns right. Just DC motor 2 is supplied for spinning around backward.
• Robot turns left. Just DC motor 1 is supplied for spinning around backward.

The signal generated by sensor 1 is considered the favorable evidence degree and the signal generated by sensor 2 the contrary evidence degree for the proposition "There is no obstacle in front of the robot". When there is an obstacle near sensor 1, the favorable evidence degree is low, and when an obstacle is far from sensor 1, the favorable evidence degree is high. Conversely, when there is an obstacle near sensor 2, the contrary evidence degree is high, and when an obstacle is far from sensor 2, the contrary evidence degree is low. The Emmy II controller's decision on which movement the robot should perform is based on the lattice shown in figure 12.

Fig. 12. Lattice of Emmy II controller

The decision for each logic state is the following:

• V state: Robot goes ahead.
• F state: Robot goes back.
• ⊥ state: Robot turns right.
• T state: Robot turns left.
• QF→⊥ state: Robot turns right.
• QF→T state: Robot turns left.

The justification for each decision is the following:

When the logic state is true (V), it means that the front of the robot is free, so the robot can go ahead.

12 Mobile Robots – Current Trends

In the inconsistency state (T), μ and λ are both high (i.e., they belong to the T region). This means that sensor 1 is far from an obstacle and sensor 2 is near an obstacle, so the left side is freer than the right side. The behavior should then be to turn left by supplying only DC motor 2 for spinning around forward and keeping DC motor 1 stopped.

When paracompleteness (⊥) is detected, μ and λ are both low. This means that sensor 1 is near an obstacle and sensor 2 is far from an obstacle, so the right side is freer than the left side. The behavior should then be to turn right by supplying only DC motor 1 for spinning around forward and keeping DC motor 2 stopped.

In the false state (F), there are obstacles near the front of the robot; therefore, the robot should go back.

In the QF→T state, the front of the robot is obstructed, but the obstacle is not as near as in the false state, and the left side is slightly freer than the right side. In this case, the robot should turn left by supplying only DC motor 1 for spinning around backward and keeping DC motor 2 stopped.

In the QF→⊥ state, the front of the robot is obstructed, but the obstacle is not as near as in the false state, and the right side is slightly freer than the left side. In this case, the robot should turn right by supplying only DC motor 2 for spinning around backward and keeping DC motor 1 stopped.
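The decision rule described above can be sketched in code. This is an illustrative reading of the lattice, not the authors' implementation: expressing the regions through the certainty degree (μ − λ) and the contradiction degree (μ + λ − 1), and the 0.5 threshold, are assumptions made here for the sketch.

```python
# Illustrative sketch of the Emmy II lattice decision (assumed thresholds).
def classify(mu: float, lam: float) -> str:
    """Map the evidence degrees (mu, lambda) onto a lattice state."""
    dc = mu - lam           # certainty degree (assumed formulation)
    dct = mu + lam - 1.0    # contradiction degree (assumed formulation)
    t = 0.5                 # assumed region threshold
    if dc >= t:
        return "V"          # true: front is free
    if dc <= -t:
        return "F"          # false: obstacle right ahead
    if dct >= t:
        return "T"          # inconsistency: left side freer
    if dct <= -t:
        return "bottom"     # paracompleteness: right side freer
    # quasi-false states: obstructed, but not as near as in F
    return "QF->T" if dct >= 0 else "QF->bottom"

# State-to-movement mapping taken from the decision list in the text.
ACTIONS = {
    "V": "ahead: motors 1 and 2 forward",
    "F": "back: motors 1 and 2 backward",
    "T": "turn left: only motor 2 forward",
    "bottom": "turn right: only motor 1 forward",
    "QF->T": "turn left: only motor 1 backward",
    "QF->bottom": "turn right: only motor 2 backward",
}
```

Chaining `ACTIONS[classify(mu, lam)]` reproduces the six behaviors listed for the controller.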

#### **5.1 Tests**

To verify the functionality of the Emmy II robot, four tests have been performed. Each test consisted of counting how many collisions occurred while the robot moved in an environment such as the one shown in figure 13.

Fig. 13. Environment used to perform the Emmy II tests

Autonomous Mobile Robot Emmy III 15


The duration and results of each test have been the following:

Test 1: Duration: 3 minutes and 50 seconds. Result: 13 collisions.

Test 2: Duration: 3 minutes and 10 seconds. Result: 7 collisions.

Test 3: Duration: 3 minutes and 30 seconds. Result: 10 collisions.

Test 4: Duration: 2 minutes and 45 seconds. Result: 10 collisions.

The sonar ranging modules used in the Emmy II robot can't detect obstacles closer than 7.5 cm. The modules transmit sonar pulses and wait for the echo to return in order to determine the distance between the module and the obstacle; sometimes, however, the echo does not return because it is reflected in another direction. These are the main causes of the robot collisions:

Test 1: Collisions: 13.

Collisions caused by echo reflection: 4.

Collisions caused by too near obstacles: 9.

Test 2: Collisions: 7.

Collisions caused by echo reflection: 2.

Collisions caused by too near obstacles: 5.

Test 3: Collisions: 10.

Collisions caused by echo reflection: 5.

Collisions caused by too near obstacles: 5.

Test 4: Collisions: 10.

Collisions caused by echo reflection: 4.

Collisions caused by too near obstacles: 6.
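Tallying the per-test figures reported above gives the overall split between the two collision causes:

```python
# Per-test collision counts reported in the text, broken down by cause.
tests = {
    1: {"echo": 4, "near": 9},
    2: {"echo": 2, "near": 5},
    3: {"echo": 5, "near": 5},
    4: {"echo": 4, "near": 6},
}

echo = sum(t["echo"] for t in tests.values())   # echo-reflection collisions
near = sum(t["near"] for t in tests.values())   # too-near-obstacle collisions
total = echo + near

print(f"{total} collisions: {echo} by echo reflection, {near} by too near obstacles")
# -> 40 collisions: 15 by echo reflection, 25 by too near obstacles
```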

There is another collision possibility when the robot is moving backwards: as there is no sonar ranging module behind the robot, it may collide.
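The echo mechanism discussed above is plain time-of-flight ranging. A minimal sketch (the 343 m/s speed of sound is an assumed value for room temperature; the chapter gives no figure) also shows why very short ranges are out of reach:

```python
# Time-of-flight ranging as used by sonar modules: the pulse travels to
# the obstacle and back, so distance = speed_of_sound * time / 2.
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C (assumed)

def echo_distance_m(echo_time_s: float) -> float:
    """Distance to the obstacle, given the round-trip echo delay."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# The 7.5 cm minimum range quoted above corresponds to an echo delay of
# 2 * 0.075 / 343, about 0.44 ms; shorter delays fall inside the module's
# transmit/receive switchover and cannot be detected.
min_delay_s = 2 * 0.075 / SPEED_OF_SOUND
```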

#### **6. Autonomous mobile robot Emmy III**

The aim of the Emmy III autonomous mobile robot is to move from an origin to an end point, both predetermined, in a non-structured environment. The Emmy III controller considers the environment around the robot divided into cells [15], and a planning subsystem gives the sequence of cells the robot must follow to reach the end cell. These ideas have been applied in [20], [21]. The robot must avoid cells that are believed to be occupied. A sensing subsystem detects the cells which are occupied. The sensing subsystem uses Paraconsistent Annotated Logic to handle the information captured by the sensors. The Emmy III structure is composed of a sensing subsystem, a planning subsystem and a mechanical subsystem, as described in the following.

Sensing subsystem - The environment around the robot is considered as a set of cells. The sensing subsystem has to determine which cells contain obstacles. But the information captured by the sensors always has an inherent imprecision, which leads to uncertainty regarding the actual situation of the cells. In order to manipulate the inconsistent information, the sensing subsystem is based on Paraconsistent Annotated Evidential Logic Eτ, which captures the information generated by the sensors using the favorable and contrary evidence degrees.

Planning subsystem - The planning subsystem determines a path linking an initial point to an end point in a non-structured environment. For this, the environment around the robot is divided into cells and the planning subsystem gives the sequence of cells that the robot must follow to reach the end cell successfully.


Mechanical subsystem - The Emmy III mechanical part must perform the schedule determined by the planning subsystem. For this, the mechanical subsystem must know the cell occupied by the robot; therefore, position monitoring is part of this construction. For each cell that the robot reaches, the possible position error should be considered.

### **6.1 Sensing subsystem**

The objective of the sensing subsystem is to inform the other robot components about the obstacle positions. The main part of the proposed sensing subsystem is a Paraconsistent Artificial Neural Network [13], [14]. This artificial neural network is based on Paraconsistent Annotated Evidential Logic Eτ.

The sensing subsystem is a set of electronic components and software responsible for analyzing the environment around the robot and detecting the obstacle positions. After that, it must inform the other components of the robot of the positions of the obstacles.

The sensing subsystem may get information from any type of sensor.

In [15], a method of robot perception and world modeling is presented which uses a probabilistic tessellated representation of spatial information called the Occupancy Grid. This chapter proposes a similar method, but instead of a probabilistic representation, Paraconsistent Annotated Evidential Logic Eτ is used.

The proposed sensing subsystem aims to generate a Favorable Evidence Degree for each environment position. The Favorable Evidence Degree is related to the sentence: there is an obstacle in the analyzed position.

The sensing subsystem is divided into two parts. The first part is responsible for receiving the data from the sensors and sending information to the second part of the system. The second part is the Paraconsistent Artificial Neural Network itself. Figure 14 shows this idea.

Fig. 14. Representation of the sensing system

The proposed sensing subsystem is prepared to receive data from ultrasonic sensors. The robot sensors are on the mechanical subsystem, so this subsystem must treat the data generated by the sensors and send information to the first part of the sensing subsystem. The data the mechanical subsystem must send to the first part of the sensing subsystem are:

a. The distance between the sensor and the obstacle (D).
b. The angle between the horizontal axis of the environment and the direction to the front of the sensor (α). Figure 15 shows the angle α.

Fig. 15. Angle α

c. The coordinate occupied by the robot (Xa, Ya).

In the first part of the sensing subsystem there are also some configuration parameters, which are:

a. The distance between the environment coordinates (a); it is indicated in figure 16.

Fig. 16. Distance between coordinates

b. The angle of the ultrasonic sensor conical field of view (β). Figure 17 shows this.

Fig. 17. Ultrasonic sensor conical field of view (β)

c. The number of positions on the arc BC, shown in figure 17, considered by the system (n).
d. The maximum distance measured by the sensor that the system considers (Dmax).
e. The minimum distance measured by the sensor that the system considers (Dmin).
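The sensor data and configuration parameters above can be grouped as plain records. This is an illustrative sketch: only the field names come from the text; the class names and structure are assumptions.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One reading sent by the mechanical subsystem (hypothetical record)."""
    D: float       # distance between the sensor and the obstacle
    alpha: float   # angle (deg) between the horizontal axis and the sensor front
    Xa: float      # robot x coordinate
    Ya: float      # robot y coordinate

@dataclass
class SensingConfig:
    """Configuration parameters of the first part of the sensing subsystem."""
    a: float       # distance between environment coordinates (cell size)
    beta: float    # conical field-of-view angle (deg)
    n: int         # number of positions considered on the arc BC
    Dmax: float    # maximum distance the system considers
    Dmin: float    # minimum distance the system considers

# Values of the first test described in section 6.2.1:
cfg = SensingConfig(a=10, beta=30, n=10, Dmax=800, Dmin=8)
reading = SensorReading(D=200, alpha=30, Xa=0, Ya=0)
```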


The first part of the sensing system generates three Favorable Evidence Degrees: μ1, μ2 and μ3.

The Favorable Evidence Degree μ1 is related to the distance between the sensor and the obstacle: the nearer the obstacle is to the sensor, the higher the μ1 value.

The Favorable Evidence Degree μ2 is related to the coordinate position on the arc BC shown in figure 17. When the analyzed coordinate is near point A, the µ2 value must be the highest; when the analyzed coordinate is near points B or C, the µ2 value must be the lowest. The inspiration for this idea comes from [16], which says that the probability of the obstacle being near point A is high, and that this probability decreases in the regions near points B and C.

Finally, the Favorable Evidence Degree μ3 is the previous value of the coordinate's Favorable Evidence Degree.
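The chapter does not give closed formulas for these degrees; the sketch below uses assumed linear forms that merely reproduce the stated behavior (μ1 grows as the obstacle gets nearer, μ2 peaks at the arc centre A and vanishes at the ends B and C).

```python
# Assumed linear forms for the evidence degrees; the chapter does not
# specify the exact formulas, so these are illustrative only.

def mu1(D: float, Dmin: float, Dmax: float) -> float:
    """Nearer obstacle -> higher mu1, clamped to the sensor's range."""
    D = min(max(D, Dmin), Dmax)
    return (Dmax - D) / (Dmax - Dmin)

def mu2(k: int, n: int) -> float:
    """Position k on the arc BC (0 = point B, n-1 = point C): highest at
    the centre (point A), lowest at the ends."""
    center = (n - 1) / 2
    return 1.0 - abs(k - center) / center
```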

#### **6.1.1 Paraconsistent artificial neural network architecture**

Figure 18 shows the Paraconsistent Artificial Neural Network (PANN) architecture chosen for the sensing subsystem.

Fig. 18. Chosen Paraconsistent Neural Network Architecture for sensing system.

The PANN output µ is the Favorable Evidence Degree of the analyzed position. A database records the µ of each analyzed position. The robot considers each position as a cell.
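A minimal sketch of the kind of cell such a network is built from, assuming the usual paraconsistent formulation (certainty degree μ − λ, rescaled to [0, 1]); the fusion chain is illustrative and is not the chapter's exact architecture.

```python
# Sketch of an analytic paraconsistent cell (assumed formulation):
# it takes a favorable and a contrary evidence degree and outputs a
# resulting evidence degree in [0, 1].

def analytic_cell(mu: float, lam: float) -> float:
    dc = mu - lam            # certainty degree
    return (dc + 1.0) / 2.0  # resulting evidence degree

# Chaining cells fuses several evidence sources (mu1, mu2, mu3) two at a
# time; the complement 1 - x turns an evidence degree into a contrary one.
def fuse(mu1: float, mu2: float, mu3: float) -> float:
    step = analytic_cell(mu1, 1.0 - mu2)   # fuse mu1 with mu2
    return analytic_cell(step, 1.0 - mu3)  # then fuse the result with mu3
```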

#### **6.2 Results of the sensing subsystem**

The sensing subsystem has been tested by simulating its inputs and analyzing the generated database. The database stores the Favorable Evidence Degree of each analyzed environment position. The results of three tests are shown here. The information from one ultrasonic sensor was considered as the sensing system inputs.

#### **6.2.1 First test**

The configuration parameters of this test have been the following. The distance between the environment coordinates (a): 10. The angle of the ultrasonic sensor conical field of view (β): 30. The number of positions on the arc of the sensor conical field of view considered by the system (n): 10. The maximum distance measured by the sensor that the system considers (Dmax): 800. The minimum distance measured by the sensor that the system considers (Dmin): 8.

The mechanical subsystem treats the data from the sensors and generates the sensing subsystem inputs. It has been necessary to simulate the sensing subsystem inputs because the mechanical subsystem has not been implemented yet.

Thus, the simulated sensing subsystem data have been the following. The distance between the sensor and the obstacle (D): 200. The angle between the horizontal axis of the environment and the direction to the front of the sensor (α): 30. The coordinate where the robot is (Xa, Ya): (0, 0).

The first measurement of the sensor has been simulated; therefore, µ3 has been initially 0.

Figure 19 shows the representation of the coordinates in which the sensing system considered there to be obstacles. Summarizing, figure 19 is a graphical representation of the database generated by the sensing subsystem.

Fig. 19. The graphical representation of the database generated by the first test of the sensing subsystem

The analyzed coordinates and their Favorable Evidence Degrees are shown in table 1.

| Coordinate | µ |
|------------|-------|
| A (18,10) | 0.438 |
| B (17,11) | 0.413 |
| C (17,12) | 0.388 |
| D (16,13) | 0.363 |
| E (15,14) | 0.338 |
| F (15,15) | 0.313 |
| G (14,15) | 0.288 |
| H (13,16) | 0.263 |
| I (12,17) | 0.238 |
| J (11,17) | 0.213 |
| K (10,18) | 0.188 |
| L (18,10) | 0.413 |
| M (19,9) | 0.388 |
| N (19,8) | 0.363 |
| O (20,7) | 0.338 |
| P (20,6) | 0.313 |
| Q (20,5) | 0.288 |
| R (20,4) | 0.263 |
| S (20,3) | 0.238 |
| T (20,2) | 0.213 |
| U (20,0) | 0.188 |

Table 1. Results of the first test.

#### **6.2.2 Second test**

The configuration parameters of this test have been the same as in the first test. The simulated sensing subsystem data have been the following. The distance between the sensor and the obstacle (D): 400. The angle between the horizontal axis of the environment and the direction to the front of the sensor (α): 45. The coordinate where the robot is (Xa, Ya): (0, 0).

The first measurement of the sensor has been simulated; therefore, µ3 was initially 0.

Figure 20 shows the graphical representation of the database generated by the sensing subsystem.
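As a sketch of how a reading could be projected onto the cell grid (the rounding convention is an assumption made here, not the chapter's exact mapping), using the second-test inputs:

```python
import math

# Project the obstacle point given by (D, alpha) from the robot position
# (Xa, Ya) and divide by the cell size a to get a grid coordinate.
# The round-to-nearest-cell convention is an assumption.

def to_cell(D: float, alpha_deg: float, Xa: float, Ya: float, a: float):
    rad = math.radians(alpha_deg)
    x = Xa + D * math.cos(rad)
    y = Ya + D * math.sin(rad)
    return round(x / a), round(y / a)

# Second-test inputs from the text: D = 400, alpha = 45, robot at (0, 0),
# cell size a = 10.
print(to_cell(400, 45, 0, 0, 10))  # -> (28, 28)
```

The result lands next to the arc centre A (29,29) reported in Table 2; the small offset reflects the assumed rounding convention.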

Fig. 20. The graphical representation of the database generated by the second test of the sensing subsystem

The analyzed coordinates and their Favorable Evidence Degrees are shown in table 2.

| Coordinate | µ |
|------------|-------|
| A (29,29) | 0.375 |
| B (27,30) | 0.35 |
| C (26,32) | 0.325 |
| D (24,33) | 0.3 |
| E (22,34) | 0.275 |
| F (20,35) | 0.25 |
| G (19,36) | 0.225 |
| H (17,37) | 0.2 |
| I (15,38) | 0.175 |
| J (13,39) | 0.15 |
| K (11,39) | 0.125 |
| L (30,27) | 0.35 |
| M (32,26) | 0.325 |
| N (33,24) | 0.3 |
| O (34,22) | 0.275 |
| P (35,20) | 0.25 |
| Q (36,19) | 0.225 |
| R (37,17) | 0.2 |
| S (38,15) | 0.175 |
| T (39,13) | 0.15 |
| U (39,11) | 0.125 |

Table 2. Results of the second test.

#### **6.2.3 Third test**

The configuration parameters and the sensing subsystem data have been the same as in the second test; hence the analyzed coordinates have been the same as in the second test. The third test has been done just after the second; therefore, the Favorable Evidence Degrees have been different from those of the second test, because µ3 has been the Favorable Evidence Degree generated by the second test.

The analyzed coordinates and their Favorable Evidence Degrees are shown in table 3.

If the sequence of positions from K to U is considered as an arc in the three tests, it can be seen that the Favorable Evidence Degree (µ) decreases as the coordinate gets farther from the center of the arc. This means that the system is working as desired.

#### **6.3 Planning subsystem**

The planning subsystem is responsible for generating the sequence of movements the robot must perform to reach a set point. The sensing subsystem has the objective of informing the planning subsystem about the position of obstacles, and the mechanical subsystem is the robot itself, that is, the mobile mechanical platform which carries the devices of the other subsystems. This platform must also perform the sequence of movements provided by the planning subsystem.
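A cell-based planner of the kind described above can be sketched as a breadth-first search over free cells, returning the sequence of cells from the origin cell to the aimed cell. This is a minimal illustration, not the chapter's algorithm; the grid contents are assumptions.

```python
from collections import deque

def plan(free, start, goal):
    """BFS over the set `free` of (x, y) cells without obstacles;
    returns the shortest cell sequence from start to goal, or None."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:       # walk back to the origin cell
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable

# Hypothetical 4x4 environment with two occupied cells.
free = {(x, y) for x in range(4) for y in range(4)} - {(1, 1), (1, 2)}
print(plan(free, (0, 0), (2, 2)))
```

BFS guarantees the returned sequence is a shortest one, which matches the planner's job of giving the robot a cell sequence to follow while avoiding occupied cells.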


#### **6.4 Mechanical subsystem**

The Emmy III mechanical part must perform the schedule determined by the planning system. The mechanical subsystem must know its position; therefore, position monitoring is part of this construction. In the process, for each cell that the robot reaches, any possible position error should be considered. Some Emmy III prototypes are described here.

#### **6.4.1 First prototype of the autonomous mobile robot Emmy III**

The first prototype is composed of a planning subsystem and a mechanical construction. The planning system considers all cells free.

The planning subsystem asks for the initial point and the aimed point. After that, a sequence of movements is given on a screen, and a sequence of pulses is sent to the step motors which move the physical platform of the robot. So, the robot moves from the initial point to the aimed point.

The Figure 21 shows the planning system screen.

The physical construction of the first prototype of the Emmy III robot is basically composed of a circular platform approximately 286 mm in diameter and two step motors. Figure 22 shows the Emmy III first prototype. The planning subsystem runs on a notebook, and the communication between the notebook and the physical construction is made through the parallel port. A power driver is responsible for getting the pulses from the notebook and sending them to the step motors which move the robot.
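The pulse count needed to advance one cell follows from the wheel geometry. The figures below are assumptions for illustration only; the chapter gives neither the wheel diameter, the steps per revolution, nor the cell size of this prototype.

```python
import math

# Back-of-envelope sketch: pulses needed for a step motor to advance one
# cell. All three constants are assumed values, not the chapter's figures.
STEPS_PER_REV = 200        # typical 1.8-degree step motor (assumed)
WHEEL_DIAMETER_MM = 60.0   # assumed wheel diameter
CELL_SIZE_MM = 100.0       # assumed cell edge length

def pulses_per_cell() -> int:
    """Pulses to roll one cell length: cell / circumference * steps/rev."""
    wheel_circumference = math.pi * WHEEL_DIAMETER_MM
    return round(CELL_SIZE_MM / wheel_circumference * STEPS_PER_REV)
```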


Coordinate µ A (29,29) 0.565 B (27,30) 0.525 C (26,32) 0.49 D (24,33) 0.45 E (22,34) 0.415 F (20,35) 0.375 G (19,36) 0.34 H (17,37) 0.3 I (15,38) 0.265 J (13,39) 0.225 K (11,39) 0.19 L (30,27) 0.525 M (32,26) 0.49 N (33,24) 0.45 O (34,22) 0.415 P (35,20) 0.375 Q (36,19) 0.34 R (37,17) 0.3 S (38,15) 0.265 T (39,13) 0.225 U (39,11) 0.19

Table 3. Results of the third test.

**6.4 Mechanical subsystem** 

prototypes are described here.

The planning system considers all cells free.

from the initial point to the aimed point.

The Figure 21 shows the planning system screen.


Fig. 21. Planning subsystem screen.

Fig. 22. The first prototype of the Emmy III robot.
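The pulse-based drive just described (a planned move sequence translated into pulses for the two step motors of a differential platform) can be sketched as follows. The step counts and the pivot-turn convention are illustrative assumptions, not values from the chapter.

```python
# Illustrative sketch: turning a planner's move sequence into step-motor
# pulse counts for a two-wheel platform such as the Emmy III first
# prototype. STEPS_PER_CELL and STEPS_PER_TURN are hypothetical values.

STEPS_PER_CELL = 200   # assumed: full steps to cross one grid cell
STEPS_PER_TURN = 50    # assumed: full steps for a 90-degree pivot

def moves_to_pulses(moves):
    """Translate planner moves into (left_steps, right_steps) pairs.

    Positive counts spin a wheel forward, negative backward; equal
    counts drive straight, opposite counts pivot in place.
    """
    pulses = []
    for move in moves:
        if move == "forward":
            pulses.append((STEPS_PER_CELL, STEPS_PER_CELL))
        elif move == "left":    # pivot: left wheel back, right wheel forward
            pulses.append((-STEPS_PER_TURN, STEPS_PER_TURN))
        elif move == "right":   # pivot: left wheel forward, right wheel back
            pulses.append((STEPS_PER_TURN, -STEPS_PER_TURN))
        else:
            raise ValueError(f"unknown move: {move}")
    return pulses
```

For example, `moves_to_pulses(["forward", "left"])` yields `[(200, 200), (-50, 50)]`: one cell straight ahead, then a pivot to the left.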

### **6.4.2 Second prototype of the autonomous mobile robot Emmy III**

Like the first prototype, the second prototype of the autonomous mobile robot Emmy III is basically composed of a planning subsystem and a mechanical structure. The planning subsystem can run on any personal computer, and communication between the personal computer and the mechanical construction is done through a USB port. The planning system considers the environment around the robot divided into cells, so it must be informed of the cell the robot occupies and of the aimed cell. Its answer is the sequence of cells the robot must follow to go from the origin cell to the aimed cell.

The planning system considers all cells free. Figure 23 shows the screen of the planning system.
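A minimal sketch of such a cell-sequence planner, assuming a 4-connected grid in which every cell is free (as the chapter's planner does). The grid dimensions are illustrative, not taken from the chapter.

```python
from collections import deque

def plan_path(origin, goal, width=40, height=40):
    """Return a sequence of grid cells from origin to goal.

    Breadth-first search over a width x height grid of free cells,
    using 4-connected moves; BFS on a uniform grid yields a shortest
    cell sequence. Grid size is an assumed example value.
    """
    frontier = deque([origin])
    came_from = {origin: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    # Walk back from the goal to recover the cell sequence.
    path, cell = [], goal
    while cell is not None:
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]
```

Because all cells are assumed free, the returned path is simply a shortest Manhattan route; an occupancy-aware planner would additionally skip blocked neighbours.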



Fig. 23. The output of the planning system - Emmy III

Figure 24 shows the mechanical structure of the Emmy III second prototype.

Fig. 24. The mechanical structure of the Emmy III second prototype

The mechanical construction is basically composed of a steel structure, two DC motors and three wheels. Each motor has a wheel fixed on its axis, and there is one free wheel. On the steel structure there is an electronic circuit whose main device is a PIC18F4550 microcontroller, responsible for receiving the schedule from the planning system and activating the DC motors. A power driver sits between the microcontroller and the DC motors.
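On the host side, the schedule hand-off could look like the following sketch. The byte framing (a one-byte length prefix followed by (x, y) cell pairs) is purely hypothetical; the chapter does not specify the USB protocol used with the PIC18F4550.

```python
import struct

def encode_schedule(cells):
    """Pack a planned cell sequence into a byte frame for the robot.

    Hypothetical framing: a 1-byte cell count followed by one (x, y)
    pair per cell, each coordinate an unsigned byte. The firmware
    would decode the frame and activate the DC motors accordingly.
    """
    if len(cells) > 255:
        raise ValueError("schedule too long for a 1-byte count")
    frame = bytearray([len(cells)])
    for x, y in cells:
        frame += struct.pack("BB", x, y)  # two unsigned bytes per cell
    return bytes(frame)
```

A length-prefixed frame keeps the microcontroller's parser trivial: read one byte, then read exactly that many coordinate pairs.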

### **7. Conclusions**

This work discussed several autonomous mobile robots dubbed Emmy. They are based on a non-classical logic, the Paraconsistent Annotated Evidential Logic Eτ. The logic controller Paracontrol served as the basis of the control system, and the third prototype also incorporates an artificial neural network based on Logic Eτ.

The Emmy III proposal is composed of three modules: the sensing subsystem, the planning subsystem and the mechanical subsystem. The mechanical subsystem has not been implemented yet.


The aim of the sensing subsystem is to inform the planning subsystem of the positions that may contain obstacles. It considers the environment divided into coordinates.

The sensing subsystem is based on the Paraconsistent Artificial Neural Network (PANN). Its network is composed of two types of cells: the Analytic Paraconsistent Artificial Neural Cell (CNAPa) and the Passage Paraconsistent Artificial Neural Cell (CNAPpa).

The output of the sensing subsystem is the Favorable Evidence Degree for the sentence "there is an obstacle in this position". In effect, the sensing subsystem builds a database with the Favorable Evidence Degree for each analyzed coordinate.

Some tests were made with the sensing subsystem, and the results were satisfactory. The next step is the implementation of the mechanical subsystem and the integration of the three subsystems.
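In the Logic Eτ literature, an annotation combines a favorable evidence degree µ with an unfavorable evidence degree λ, giving a certainty degree Gc = µ - λ and a contradiction degree Gct = µ + λ - 1. A minimal sketch of the per-coordinate evidence database described above, with an illustrative decision threshold that is not taken from the chapter:

```python
def analyze(mu, lam):
    """Paraconsistent annotation (mu, lam) -> (certainty, contradiction).

    mu  = favorable evidence degree for "there is an obstacle here"
    lam = unfavorable evidence degree; both lie in [0, 1].
    Gc = mu - lam and Gct = mu + lam - 1, as in Logic Et.
    """
    return mu - lam, mu + lam - 1

def obstacle_database(readings, threshold=0.5):
    """Map each coordinate to its analyzed evidence.

    `readings` maps (x, y) -> (mu, lam). A coordinate is flagged as an
    obstacle when its certainty degree reaches the threshold; the 0.5
    value is an illustrative choice, not the chapter's.
    """
    db = {}
    for coord, (mu, lam) in readings.items():
        gc, gct = analyze(mu, lam)
        db[coord] = {"mu": mu, "certainty": gc,
                     "contradiction": gct, "obstacle": gc >= threshold}
    return db
```

Keeping both degrees per coordinate lets the planner distinguish "probably free" (negative certainty) from "conflicting sensor evidence" (contradiction near 1), which is the point of using a paraconsistent representation.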





## **Mobile Robotics in Education and Research**

Georgios A. Demetriou *Frederick University Cyprus*

### **1. Introduction**

Mobile robotics is a relatively new field. Mobile robots range from sophisticated space robots, to military flying robots, to the lawn mower robots in our back yards. Mobile robotics draws on many engineering and science disciplines, from mechanical, electrical and electronics engineering to computer, cognitive and social sciences (Siegwart & Nourbakhsh, 2004). A mobile robot is an autonomous or remotely operated programmable mobile machine that is capable of moving in a specific environment. Mobile robots use sensors to perceive their environment and make decisions based on the information gained from the sensors.

The autonomous nature of mobile robots gives them an important role in our society. Mobile robots are everywhere, from military to domestic applications. The first mobile robots as we know them today were developed during World War II by the Germans: the V1 and V2 flying bombs. In the 1950s W. Grey Walter developed Elmer and Elsie, two autonomous robots designed to explore their environment. Elmer and Elsie were able to move towards light using light sensors while avoiding obstacles on their way. The evolution of mobile robots continued, and in the 1970s Johns Hopkins University developed the "Beast", which used an ultrasound sensor to move around. During the same period the Stanford Cart line follower was developed by Stanford University; it was a mobile robot able to follow a white line using a simple vision system, with the processing done off-board by a large mainframe. The best-known mobile robot of the time, developed by the Stanford Research Institute, was called Shakey. Shakey was the first mobile robot to be controlled by vision: it was able to recognize an object using vision and find its way to it. Shakey, shown in Figure 1, had a camera, a rangefinder, bump sensors and a radio link.

These robots had limitations due to the lack of processing power and the size of the computers of the time, so industrial robotics still dominated the market and research. Industrial manipulators are attached to an off-board computer (controller) for their processing requirements and thus do not need an onboard computer. Unlike industrial robots, mobile robots operate in dynamic and unknown environments; they therefore require many sensors (e.g. vision, sonar, laser) and more processing power. Another important requirement of mobile robots is that their processing must be done onboard the moving robot, not off-board. The computer technology of the time was too bulky and too slow to meet these requirements, and sensor technology had to advance further before it could be used reliably on mobile robots.



In the last twenty years we have seen a revolution in computer technology. Computers became smaller, much faster and less expensive. This met the requirements of mobile robots, and as a result we saw an explosion of research and development activity in mobile robotics. Mobile robots are increasingly important in advanced applications for the home, the military, industry, space and many other areas. The mobile robot industry has grown enormously and is developing mobile robots for all imaginable applications. The vast number of applications has forced a natural subdivision of the field based on working environment: land or surface robots, aquatic/underwater robots, aerial robots and space robots. Land or surface robots are subdivided by their locomotion: legged, wheeled and tracked robots. Legged robots can be classified as two-legged (humanoid) robots and animal-like robots, which can have anywhere from four legs to as many as the application and the imagination of the developer require.

Fig. 1. Shakey the Robot in its display case at the Computer History Museum

The revolution in mobile robotics has increased the need for mobile robotics engineers in manufacturing, research, development and education. This in turn has significantly changed the nature of engineering and science education at all levels, from K-12 to graduate school. Most schools and universities have integrated, or are integrating, robotics courses into their curriculums. Mobile robotics is widely accepted as a multidisciplinary approach to combining and creating knowledge in fields such as mechanical engineering, electrical engineering, control, computer science, communications, and even psychology or biology in some cases.


The majority of robotics research focuses on mobile robotics: surface robots, humanoids, aerial robots, underwater robots and many more. The development of several less expensive mobile robotic platforms (e.g. the VEX Robotics Design System (VEX Robotics Design System, 2011), LEGO Mindstorms (LEGO Education, 2011), Engino Robotics (Engino international website – play to invent, 2011) and Fischertechnik (Fischertechnik GmbH, 2011)), of countless robotics software tools and programming languages (e.g. Microsoft Robotics Studio (Microsoft Robotics Developer Studio, 2011), RoboLab (Welcome to the RoboLab, 2011) and ROBOTC (ROBOTC.net, 2011)), and of many robotic simulators has made robotics accessible to educators, students and robot enthusiasts at all levels. This, and the fact that using mobile robots in education is an appealing way of promoting research and development in robotics, science and technology, has triggered a revolution in mobile robotics education, research and development. Educators, researchers and robot enthusiasts are now pursuing innovative robotic, electronic and advanced mobile programming projects in a variety of fields.

This chapter provides an overview of mobile robotics education and research. Rather than attempting to cover this wide subject exhaustively, it highlights some key concepts of robotics education at the K-12 and university levels. It also presents several robotic platforms that can be used in education and research, as well as various mobile robotics competitions for K-12 and university students.

### **2. Education and research**

Since the late 1980s, when robotics was first introduced into the classroom, mobile robotics has been used in education at all levels and for various subjects (Malec, 2001). Mobile robotic technology is introduced at the school level as a novelty item and teaching tool. Even though robotics is recognized as a separate educational discipline, it is usually incorporated within computer science and engineering departments. Robotics courses are becoming core curriculum courses within these departments at most universities, but currently only a small number of universities have pure robotics departments. As the demand for specialized robotics engineers grows, more universities will start to offer robotics degrees and establish robotics departments.

Most students, regardless of age and educational background or interests, consider working with robots "fun" and "interesting". Mobile robotics is used in education in many ways, but generally there are two approaches (Malec, 2001). The first, and most obvious, approach is to use robots to teach courses that are directly related to robotics. These are usually introductory courses that teach the basic concepts of mobile robotics and are divided into lectures and laboratory sessions. In the lectures, students learn concepts such as kinematics, perception, localization, map building and navigation. In the laboratory sessions, students experiment on real or simulated robots; the experiments address robotics concepts and often require students to apply material from other courses, such as control and programming. This method is primarily used at the university level and only rarely at the high-school level.

The second approach is to use mobile robots as a tool to teach other subjects in engineering, science and even unrelated fields such as biology and psychology. Since students enjoy working with robots, learning becomes more interesting, and robotics can be incorporated into various disciplines and departments. This method is primarily used at the K-12 levels to teach technology-related courses. It can also be used in the first year of university to teach courses such as programming and control. Mobile robots can even be used outside science and engineering, for example to teach biology students about legged locomotion or to support studies for psychology students on how mobile robotics is affecting our personal lives.


only of robotics itself, but of general topics in STEM as well (van Lith, 2007). Mobile robotics is a great way to get kids excited about STEM topics. Students at this level do not need to have a considerable understanding about how robots work. The approach used at this level is to experiment, learn and play. It is also highly effective in developing teamwork and selfconfidence. Even children with no interest in technology and sciences still consider robots interesting. Utilizing this interest, robots are used to teach children about robots or using robots as a tool to teach STEM topics. The study of robotics, by its very nature, captures all four legs of STEM very well while creating classroom environments where both knowledge

and skill development can flourish without having to compromise one for the other.

career choices in the STEM areas.


The majority of mobile robotics activities are offered at the university level, but over the last few years we have also seen many robotics courses, competitions and other robotics activities offered at the K-12 level of education. Many K-12 school systems are starting to teach young people using robots. At the high-school level (ages 16 to 18), robotics is often used to teach programming courses; the most commonly used robots at this level are the Lego Mindstorms NXT and the VEX Robotics Design System. The nature of mobile robotics also makes it an effective tool for teaching technology courses to children at the primary and early secondary levels (ages 6 to 15). Companies such as Engino Toy Systems and LEGO have started developing robotics packages that can be used to teach basic physics and technology concepts to young children, even from the age of 6. By introducing robotics at the K-12 level of education, students may be better prepared for university, and more students will become interested in robotics as a field of study.

The success of robotics in the education of young children has triggered many local, national and international robotics competitions (e.g. FIRST LEGO League (USFIRST.org – Welcome to FIRST, 2011) and RoboCupJunior (RoboCupJunior, 2011)), as well as many robotics workshops and summer camps for students of all levels. These activities have generated more momentum in robotics education and research, and many educational systems are already starting to offer robotics education even at the early stages of K-12. Companies such as Engino Robotics (Engino international website – play to invent, 2011) and Lego Education WeDo (Lego Education WeDo, 2011) are developing robotic platforms to be used specifically at the primary school level (ages 6-12). This momentum should be carried over to the university level, but in most cases it is not: university students must wait until their third or fourth year before they can take their first robotics course.

Most universities offer only an introductory course in robotics, and only a few offer more advanced courses; most advanced robotics courses are offered at the graduate level. Where advanced courses are offered, specific concepts such as vision, advanced navigation, map building and sensor fusion are targeted. At the undergraduate level, students usually go beyond the introductory courses only in the form of final-year projects or other projects. In many cases, students interested in robotics form robotics groups in order to exchange information, to participate in robotics competitions or simply to build a mobile robot. Offering robotics courses early at the undergraduate level would create a natural continuation of K-12 robotics education. By introducing more advanced mobile robotics courses at the undergraduate level, we will have better-prepared students for the graduate level, make more research progress in the long run, and provide a better-prepared workforce for the robotics industry as well.

#### **3. Teaching robotics at the K-12 level**

It has been shown that hands-on education provides better motivation for learning new material. Abstract knowledge is more difficult to comprehend and is sometimes not interesting enough for most students. By providing experiments and real-world situations, students become more interested and learn new topics much more easily.

There is a lack of student interest in science, technology, engineering, and math (STEM) topics, and increasing attention has been paid to developing innovative tools for improved teaching of STEM. For this reason alone, hands-on education has been imposed on today's K-12 teachers. Mobile robotics has been shown to be a superb tool for hands-on learning, not only of robotics itself, but of general topics in STEM as well (van Lith, 2007). Mobile robotics is a great way to get kids excited about STEM topics. Students at this level do not need a considerable understanding of how robots work; the approach used at this level is to experiment, learn and play. It is also highly effective in developing teamwork and self-confidence. Even children with no interest in technology and the sciences still consider robots interesting. Exploiting this interest, robots are used both as a subject in their own right and as a tool to teach STEM topics. The study of robotics, by its very nature, captures all four legs of STEM very well, while creating classroom environments where both knowledge and skill development can flourish without having to compromise one for the other.

But getting students interested in science and other subjects is only one part of the equation, as we must also prepare them for logical thinking and problem solving. At this age it is beneficial to start solving logical problems while the brain is forming, in order to develop the neural plasticity that can be employed over a lifetime of logical thinking and problem solving (Matson et al., 2003). To succeed in this, the right tools have to be selected, and choosing them is difficult when competing with the high-tech computer games, electronic gadgets and other toys that children use today. Children today need more stimulation than ever before. Young people are very good at using gadgets and electronic toys, but not many of them are interested in how these devices work or are built (Malec, 2001). We need to stimulate their interest in technology and science in order to make them try to understand the functionality of the devices they use; if they understand, they will want to develop such devices themselves. Thus it is critical to provide engaging hands-on education to all children as early as possible, in order to open their minds to technology career choices in the STEM areas.

It has been shown that no age is too young to be engaged by robots. Toddlers express deep interest in active machines and toys, and robots motivate even children with special social and cognitive needs (Chu et al., 2005). Increasingly, high schools across the world are providing elective robotics courses as well as after-school programs. Gradually, middle schools are getting involved as well, and even children in elementary schools are being exposed to robotics (Lego Education WeDo, 2011; Engino international website – play to invent, 2011). There are also many robotics competitions designed specifically for these age groups; some of the most visible are FIRST (FIRST, 2011), LEGO Mindstorms (LEGO Education, 2011), the VEX Robotics Competition (Competition - VEX Robotics, 2011) and RoboCupJunior (RoboCupJunior, 2011). These competitions increase the interest of students by adding a competitive ingredient.

Generally, there is a lack of age-appropriate robotics teaching materials for the K-12 level. Often due to a lack of financial resources, schools do not have enough equipment, or up-to-date equipment, to use robotics successfully. Also, because of the broad range of backgrounds of K-12 educators who teach robotics and the lack of appropriate lesson plans, it is critical that ready-made educational materials be provided in order to teach robotics successfully. There is a lack of available robotics textbooks at this level of education as well. For these reasons, many universities are working directly with school systems to develop robotics material, robotic platforms and appropriate lesson plans in order to help teachers overcome these problems. Universities must become more involved with K-12 schools and offer activities such as competitions, summer camps, lectures and workshops to students and teachers. K-12 teachers have to be properly educated and trained to use robots in their classrooms.

There are two general categories of mobile robots used for education: do-it-yourself (DIY) kits and prebuilt robots. Prebuilt robots are generally more expensive and are only found in university labs, industry and the military; the least expensive prebuilt robots cost about 3,000 Euro. Do-it-yourself mobile robots are less expensive and normally cost less than 1,000 Euro. There are many DIY robotic kits that are ideal for K-12 education, such as Lego Mindstorms NXT, the VEX Robotics Design System and Engino Toy Systems.

Generally, students are required to construct and then program the robots. Normally the construction is done by following specific instructions and using prepackaged robotic parts, such as LEGO or Engino Toy Systems parts. Most DIY kits are equipped with simple graphical user interfaces (GUIs), such as RoboLab (Welcome to the RoboLab, 2011), for students to program the robots. The programming capabilities of these GUIs are limited, and they are primarily used by students up to the middle school (grade 8) level. Students at the high-school level (grades 9 to 12) require more control over their robots, and they often use more advanced programming tools: high-level languages such as ROBOTC (ROBOTC.net, 2011), or more advanced visual programming languages (VPLs) such as Microsoft Robotics Studio (Microsoft Robotics Developer Studio, 2011). Many more high-level programming languages and visual programming languages are constantly being developed, and this will give students many more options for programming.

Using mobile robotic simulators is another way mobile robotics can be used in education. The goal of simulators at this level is to provide a complete learning experience without the need for actual robot hardware. This eliminates the cost of purchasing enough robots to satisfy the needs of all students, since usually one robot is needed per 3-4 students. Simulators are generally less expensive, can be used by all students simultaneously, and have no hardware costs, since they can normally run on existing school computers. In addition, the animated nature of simulation is ideal for today's children, who are keen on animation and gaming. Mobile robotic simulators allow students to virtually build mobile robots and program them to perform much the same functions as a real robot would.
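Under the hood, even a minimal simulator of this kind simply integrates the robot's motion model each time step. The sketch below is a generic illustration in Python, not code from any of the products named in this chapter; the wheel-base value is an arbitrary assumption. It advances a differential-drive robot's pose from its two wheel speeds:

```python
import math

def simulate(v_left, v_right, steps, dt=0.1, wheel_base=0.12):
    """Integrate a differential-drive robot's pose (x, y, heading)
    from constant left/right wheel speeds (m/s), over `steps` ticks."""
    x = y = theta = 0.0
    for _ in range(steps):
        v = (v_left + v_right) / 2.0             # forward speed
        omega = (v_right - v_left) / wheel_base  # turn rate
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
    return x, y, theta

# Equal wheel speeds: the robot drives straight along the x axis.
print(simulate(0.2, 0.2, steps=10))
```

A real simulator adds collision checks, sensor models and rendering, but a pose update like this is the core of letting a student's program "drive" a virtual robot.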

A very good mobile robotics simulator is RobotC Virtual Worlds by the Carnegie Mellon Robotics Academy (Computer Science Social Network, 2011). It allows students to program simulated Lego NXT and VEX robots using the ROBOTC programming language. The program offers four mobile robotic challenges: the Labyrinth Challenge, the Maze Challenge, the Race Track Challenge and the Gripper Challenge. Students can also venture onto a simulated extraterrestrial planet environment, where they can explore different areas such as the Astronaut Camp and the Container Yard.
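To give a flavor of the logic behind a maze challenge, here is the classic right-hand-rule strategy in a few lines of Python. This is a generic sketch, not actual RobotC Virtual Worlds code; the maze layout, start cell and heading are made up for illustration:

```python
# Right-hand-rule maze walker on a toy grid: keep a "hand" on the wall
# to the right. '#' = wall, ' ' = free cell, 'G' = goal.
MAZE = [
    "#####",
    "#  G#",
    "# ###",
    "#   #",
    "#####",
]
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left

def solve(start=(3, 1), heading=1, max_steps=100):
    r, c = start
    for _ in range(max_steps):
        if MAZE[r][c] == "G":
            return (r, c)
        # prefer turning right, then straight, then left, then back
        for turn in (1, 0, 3, 2):
            d = (heading + turn) % 4
            nr, nc = r + DIRS[d][0], c + DIRS[d][1]
            if MAZE[nr][nc] != "#":
                heading, r, c = d, nr, nc
                break
    return None

print(solve())  # -> (1, 3), the goal cell
```

On a simulated robot, "try right, then straight, then left" would be driven by distance-sensor readings instead of grid lookups, but the control strategy is the same.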

For robotics education to succeed, children should be exposed to robotics from the early stages of K-12 education. Elementary school children (grades 1 to 6) may play with and program simple robots; companies such as Engino Toy Systems and Lego Education WeDo are developing robotic platforms that can be used in elementary schools. In early secondary education (grades 7 to 9), children may build simple robots, program them and even participate in local or international robotics competitions. High school students (grades 10 to 12) may design more complex mobile robots, program them using high-level languages and compete in robot-design competitions. Generally, by introducing robotics to this younger age group, we will have better-prepared students for the university and graduate levels.

#### **3.1 Mobile robotic platforms for K-12 education**

Most K-12 educational mobile robots come in the form of kits. Most kits contain building blocks, cables and sensors. They normally come equipped with their own programming interface, but sometimes third-party companies develop software for programming these kits. Older students sometimes prefer using high-level languages, since these give them more control over the robots and allow them to accomplish more complex functions. The complexity of the kits depends on the age group they target. Some of the most commonly used mobile robot platforms are described here. It is impossible to separate these platforms into age-group categories, since most of them are used by all K-12 groups; some of these kits are so advanced that they are even used at the university level. There are many other educational mobile robots available, and the list is growing fast, but it is not possible to list and describe all of them here. The list here is a selection based on availability, innovation and usage.

#### **3.1.1 Lego Mindstorms NXT**

Lego Mindstorms is a line of programmable robotics/construction toys manufactured by the Lego Group (LEGO Education, 2011). It was first introduced in 1998 under the name Robotics Invention System (RIS). The next generation was released in 2006 as Lego Mindstorms NXT (Figure 2), and the newest version, released on August 5, 2009, is known as Lego Mindstorms NXT 2.0. Lego Mindstorms is primarily used in secondary education, but many universities also use the Lego Mindstorms NXT in their introductory mobile robotics courses or for projects.

Fig. 2. Lego Mindstorms NXT Controller with sensors and a sample mobile robot

Lego Mindstorms NXT is equipped with three servo motors, a light sensor, a sound sensor, an ultrasound sensor and a touch sensor. The NXT 2.0 has two touch sensors; light, sound and distance sensors; and support for four sensors without using a sensor multiplexer.

The main component in the Lego Mindstorms kit is a brick-shaped computer called the NXT Intelligent Brick. It can take input from up to four sensors and control up to three motors. The brick has a 100x64 pixel monochrome LCD display and four buttons that can be used to navigate a user interface using menus. It also has a speaker that can play sound files. It allows USB and Bluetooth connections to a computer.

Lego Mindstorms NXT comes with the Robolab (Welcome to the RoboLab, 2011) graphical user interface (GUI) programming software, developed at Tufts University using the National Instruments LabVIEW (NI LabVIEW, 2011) as an engine. With Robolab, students

Mobile Robotics in Education and Research 35

The VEX Robotics Design System (VEX Robotics Design System, 2011) is a robotic kit intended to introduce robotics to students. VEX Robotics Design System offers the VEX Classroom Lab Kits that make it easy to bring the VEX Robotics Design System into the classroom. The Classroom Lab Kits include everything you need to design, build, power and operate robots. It comes with four sensors (two bumper sensors and two light switches), four electric motors and a servo motor, and building parts such as wheels and gears. The challenge level for the students can increase by adding expansion kits for advanced sensors, drive systems and pneumatics. Additional sensors (ultrasonic, line tracking, optical shaft encoder, bumper switches, limit switches, and light sensors), wheels (small and large omni-directional wheels, small, medium, and large regulars), tank treads, motors, servos, gears (regular and advanced), chain and sprocket sets, extra transmitters and receivers, programming kit (easyC, ROBOTC, MPLab), extra metal, pneumatics, and rechargeable battery power packs, can all be purchased separately. There are two options on the controllers that can be used with the VEX robotic kits: the basic PIC Microcontroller or

Fig. 3. Fischertechnik ROBO TX Controller and sample mobile robot

the advanced and more powerful CORTEX Microcontroller.

Fig. 4. VEX Robotics Design CORTEX controller and a sample mobile robot

**3.1.3 VEX robotics design system** 

use flowchart like "blocks" to design their program. Students that need to write more advanced programs sometimes prefer to use third party firmware and/or high-level programming languages, including some of the most popular ones used by professionals in the embedded systems industry, like Java, C and C# (i.e. ROBOTC). The programs are downloaded from the computer onto the NXT Brick using the USB port or wirelessly using the Bluetooth connection. Programs can also be run on the computer and wirelessly through Bluetooth can control the NXT brick.

Some of the programming languages that are used to program NXT Brick are:


Another addition that allows more rigid Lego Mindstorms designs is TETRIX by Pitsco (TETRIX, 2011). The metal building system was designed specifically to work with the LEGO Technic building system through the use of the innovative *Hard Point Connector*. *TETRIX*, combined with custom motor controllers from Hitechnic, enables students to use the power of the Mindstorms technology and incorporate and control powerful DC and servo motors and metal gears. Students can build more versatile and robust robots designed for more sophisticated tasks, all while mastering basic wiring, multi-motor control, and much more.

#### **3.1.2 Fischertechnik ROBO TX training lab**

This kit includes the ROBO TX Controller (Figure 3), the construction parts for building mobile robots, the obstacle detector and trail searcher, two encoder motors for exact positioning, one motor XS, one infrared trail sensor and two sensing devices. It also includes the ROBO Pro software for programming. The system can be expanded with the extensive accessories provided by Fischertechnik (Fischertechnik GmbH, 2011).

The ROBO TX Controller is based on a 200Mhz 32-bit processor and is equipped with Bluetooth, eight universal inputs, 8 MB RAM (2 MB flash) and a display. Several ROBO TX Controllers can be coupled together to form more complex systems. The ROBO TX Controller can be purchased separately and can be used in custom designs as well.

This mobile robot kid is suitable for children at the secondary level or university. Everything else is included such as the ROBO interface and ROBO Pro software.

34 Mobile Robots – Current Trends

use flowchart like "blocks" to design their program. Students that need to write more advanced programs sometimes prefer to use third party firmware and/or high-level programming languages, including some of the most popular ones used by professionals in the embedded systems industry, like Java, C and C# (i.e. ROBOTC). The programs are downloaded from the computer onto the NXT Brick using the USB port or wirelessly using the Bluetooth connection. Programs can also be run on the computer and wirelessly through

Devices with Bluetooth can control the NXT brick. Some of the programming languages that are used to program the NXT brick are:

• NXT-G: the programming software that comes bundled with the NXT. This software is suitable for basic programming, such as driving motors, incorporating sensor inputs, doing calculations, and learning simplified programming structures and flow control.

• C# with Microsoft Robotics Developer Studio: uses the free tools Visual Studio Express and the Robotics Developer Studio and allows programming using the C# language.

• Next Byte Codes (NBC): a simple open-source language with an assembly-language syntax that can be used to program the NXT brick.

• Not eXactly C (NXC): a high-level open-source language, similar to C. NXC is basically NQC (Not Quite C) for the NXT. It is one of the most widely used third-party programming languages for the NXT.

• ROBOTC: developed by the Carnegie Mellon Robotics Academy. ROBOTC runs a very optimized firmware, which allows the NXT to run programs very quickly, and also compresses the files so that you can fit a larger number of programs on your NXT. Like other NXT languages, ROBOTC requires this firmware to be downloaded from the ROBOTC interface in order to run.

Another addition that allows more rigid Lego Mindstorms designs is TETRIX by Pitsco (TETRIX, 2011). The metal building system was designed specifically to work with the LEGO Technic building system through the use of the innovative *Hard Point Connector*. *TETRIX*, combined with custom motor controllers from HiTechnic, enables students to use the power of the Mindstorms technology and incorporate and control powerful DC and servo motors and metal gears. Students can build more versatile and robust robots designed for more sophisticated tasks, all while mastering basic wiring, multi-motor control, and much more.

**3.1.2 Fischertechnik ROBO TX training lab** 

This kit includes the ROBO TX Controller (Figure 3), the construction parts for building mobile robots, the obstacle detector and trail searcher, two encoder motors for exact positioning, one motor XS, one infrared trail sensor and two sensing devices. It also includes the ROBO Pro software for programming. The system can be expanded with the extensive accessories provided by Fischertechnik (Fischertechnik GmbH, 2011).

This mobile robot kit is suitable for children at the secondary level or at university. Everything else is included, such as the ROBO interface and the ROBO Pro software.

The ROBO TX Controller is based on a 200 MHz 32-bit processor and is equipped with Bluetooth, eight universal inputs, 8 MB RAM (2 MB flash) and a display. Several ROBO TX Controllers can be coupled together to form more complex systems. The ROBO TX Controller can be purchased separately and can be used in custom designs as well.

Fig. 3. Fischertechnik ROBO TX Controller and sample mobile robot

### **3.1.3 VEX robotics design system**

The VEX Robotics Design System (VEX Robotics Design System, 2011) is a robotic kit intended to introduce robotics to students. VEX offers the VEX Classroom Lab Kits, which make it easy to bring the VEX Robotics Design System into the classroom. The Classroom Lab Kits include everything you need to design, build, power and operate robots. Each kit comes with four sensors (two bumper sensors and two light switches), four electric motors and a servo motor, and building parts such as wheels and gears. The challenge level for the students can be increased by adding expansion kits for advanced sensors, drive systems and pneumatics. Additional sensors (ultrasonic, line tracking, optical shaft encoder, bumper switches, limit switches, and light sensors), wheels (small and large omni-directional wheels; small, medium, and large regular ones), tank treads, motors, servos, gears (regular and advanced), chain and sprocket sets, extra transmitters and receivers, a programming kit (easyC, ROBOTC, MPLab), extra metal, pneumatics, and rechargeable battery power packs can all be purchased separately. Two controllers can be used with the VEX robotic kits: the basic PIC Microcontroller or the advanced and more powerful CORTEX Microcontroller.

Fig. 4. VEX Robotics Design CORTEX controller and a sample mobile robot


### **3.1.4 Engino robotics**

The engino toy system (Engino international website – play to invent, 2011) was launched in 2007, initially with structural snap-fit components and later with gears, pulleys and motors for more complex models (Figure 5). In 2011 the company teamed up with Frederick University to develop robotic solutions for primary school students. The robotics module is expected to be officially launched in 2012 and will have three modes of operation, making it suitable for both primary and early secondary education (up to grade 8).

The engino controller is based on an ARM 32-bit processor and has four analogue motor outputs, six LED & buzzer outputs, two digital inputs and two analogue inputs. The kit comes equipped with basic building equipment, the controller, motors and basic sensors that include touch, sound, temperature, infrared and light. The controller comes equipped with a USB port that allows it to connect directly to a computer. It can be used to directly control the robots from the computer or to download programs for autonomous functionality.

Fig. 5. Engino Robotics sample mobile robots

The module is designed for three levels of programming: basic, intermediate and advanced. The basic level allows manual programming using the existing buttons on the module and allows recording of steps. Younger students (primary education) can be introduced to robotics by recording their steps and functions and then playing them back in sequence. It is restricted to two motor outputs and three LED/buzzer outputs. The second level of programming is done on a computer and can fully program the controller. A specialized GUI allows programming the controller using graphical blocks that represent the blocks on the actual system (i.e. motors, sensors, etc.). An innovation of the system is that students can create reusable modules of code that they can use in other programs. The first two levels of programming are suitable for children at the primary education level. The advanced level of programming allows students to use a custom-made C-like high-level language to create more complex programs. This is suitable for students at the secondary education level.

### **3.1.5 Lego WeDo**

The Lego Education WeDo platform is shown in Figure 6. It is powered by LabVIEW software and is ideal for primary school students. Students can build and program their own robots using the simple, drag-and-drop software. The WeDo is designed to teach simpler concepts to slightly younger kids than other kits. This kit does not allow mobile robot constructions, but it is worth mentioning since it targets an age group that not many manufacturers target.

By hooking the robots up to a computer via the included USB hub, the WeDo software allows students to program the robots, controlling their actions, sounds and responses. All the programming is drag-and-drop.

Fig. 6. Lego WeDo parts and a robot

### **3.2 Mobile robotics competitions for K-12 students**

Teaching mobile robotics at the K-12 level is never a complete task. Students and teachers strengthen their knowledge by participating in mobile robot competitions. The success of robotics in the education of young children has triggered many successful local, national and international robotics competitions, such as the FIRST Lego League (USFIRST.org – Welcome to FIRST, 2011), RoboCup@Home (RoboCup@Home, 2011), RoboCup (RoboCup, 2011) and RoboCupJunior (RoboCupJunior, 2011). Only a few international competitions are briefly described here, since trying to name and describe them all would be a never-ending task.

### **3.2.1 VEX robotics competition**

The VEX robotics competition (Competition - VEX Robotics, 2011) comes in two forms: the VEX Robotics Classroom Competition and the VEX Robotics Competition (Figure 7). The VEX Robotics Classroom Competition is specifically tailored to bring the magic of robotics competition into the classroom. Robotics is an engaging way to integrate all facets of STEM education into the classroom, and head-to-head competition is a natural way to capture students' attention. Amid the excitement that comes with building and competing with their robots, students will be having too much fun to realize they're learning important STEM concepts and life skills. A single teacher can easily implement all aspects of this program as part of their daily classroom activities.

The VEX Robotics Competition is the largest and fastest growing middle and high school robotics program globally, with more than 3,500 teams from 20 countries playing in over 250 tournaments worldwide. Local VEX Robotics Competition events are being held in many different cities throughout the world. In addition to just having a great time and building amazing robots, through their participation in the VEX Robotics Competition and their work within their team, students will learn many academic and life skills.

Fig. 7. VEX robotics competition

### **3.2.2 FIRST LEGO league and junior FIRST LEGO league**

The FIRST LEGO League (also known by the acronym FLL) is an international competition organized by FIRST for primary and middle school students (ages 9–14 in the USA and Canada, 9–16 elsewhere). It is an annual competition and each year a new challenge is announced that focuses on a different real-world topic related to the sciences. Each challenge within the competition then revolves around that theme. The robotics part of the competition revolves around designing and programming LEGO Robots to complete tasks. Students work out solutions to the various problems they are given and then meet for regional tournaments to share their knowledge, compare ideas, and display their robots. The Junior FIRST LEGO League is a scaled-down robotics program for children of ages 6–9.

### **3.2.3 RoboCupJunior**

RoboCupJunior started in 1998 with a demonstration held at the RoboCup international competition in Paris, France. RoboCupJunior is closely related to the RoboCup competition. RoboCup is an international robotics competition that aims to develop autonomous football (soccer) robots with the intention of promoting research and education in the field of artificial intelligence. RoboCup is described in the section *Mobile Robot Competitions for University Students*.

The programming- and engineering-influenced competition introduces the aims and goals of the RoboCupJunior project at the primary and secondary school level (typically students under 18). RoboCupJunior is an educational initiative to promote knowledge and skills in programming, hardware engineering, and the world of 3D through robotics for young minds. It aims to fuse real and virtual robotic technologies towards bridging two prominent areas of the future, namely Interactive Digital Media and Robotics. Those involved create and build robots for a variety of different challenges and compete against other teams. RoboCupJunior is divided into four challenges: the Soccer challenge, Dance challenge, Rescue challenge and CoSpace challenge (Figure 8) (van Lith, 2007).


The Soccer challenge is a competition for youth to design, program and strategize autonomous soccer-playing robots. In the Dance challenge, students create dancing robots which, dressed in costumes, move in creative harmony to music. The Rescue challenge involves students programming autonomous robots to rescue "victims" in disaster scenarios. The CoSpace challenge offers RoboCupJunior participants an opportunity to explore robotics technology, digital media, and the CoSpace concept. It provides a gateway into the world of robotics for young minds who are keen on animation and gaming, and lets junior participants explore robot programming and AI strategies through simulation-based competitions. It comprises two sub-leagues, namely the CoSpace Adventure Challenge and the CoSpace Dance Challenge.

Fig. 8. RoboCup Rescue and Soccer challenges

### **4. Robotics at the university level**

Mobile robotics is a compelling subject for engineering and computer science undergraduates, but unfortunately most universities do not offer it at the introductory level. Most computer science and engineering departments do offer an introductory course in classical (industrial) robotics, but not in mobile robotics. These courses generally concentrate on an introduction to industrial robotics and in most cases include one chapter on mobile robotics, which normally gives a general introduction to mobile robotics concepts and not much more. Mobile robotics courses are usually left as directed studies or are offered only at the graduate level.

Another issue is the fact that robotics courses are not offered in the first year of university education; they are normally offered in the third or fourth year. This creates a problem because there is no continuity with the robotics education that students received in high school. One argument might be that mobile robotics courses require extensive knowledge of electronics, engineering and control. This is true, but it does not mean that first-year university courses must be so involved. The first-year mobile robotics course can be a mobile robotics applications course. Another way to introduce mobile robots in the first year of university education is to use mobile robots in other courses, such as programming, control and engineering courses, to offer hands-on experimentation with the material learned. Students learn more when they apply what they learn in lectures to real-world applications such as robotics. More advanced mobile robotics topics can be covered with a second- or third-year course in mobile robotics.


It is only recently that we see many universities starting to offer mobile robotics at the undergraduate level. In K-12 education, students normally concentrate on mobile robotics rather than industrial robots, since mobile robots are more interesting and more fun for students. Therefore, offering introductory mobile robotics courses at the undergraduate level is a natural continuation of what students were doing in high school. At this level, robots such as the Lego Mindstorms or the VEX Robotics Design System can be used.

In order to produce successful graduate students and develop more research at the undergraduate and graduate levels in mobile robotics, more advanced courses have to be offered. There is a need for a pure mobile robotics course that covers the basic concepts of mobile robotics such as kinematics, perception, localization, map building and navigation. At universities that offer robotics degrees and have robotics departments, more specialized courses can be offered at the undergraduate level on topics such as vision, behavior coordination, robot learning, swarm robotics, humanoid robotics, etc. Beyond this, faculty may offer directed studies and projects (i.e. final-year projects or group projects) to students interested in mobile robotics. This will prepare students for industry work and graduate-level work.

Universities normally do not have the same problems as K-12 education. There are normally more resources available, either from funding agencies, university funds or research projects. Also, there is no need to have many robotic systems, since fewer students use them compared to K-12 schools. At this level, students can also use simulators such as Microsoft Robotics Developer Studio and others. There are definitely enough teaching materials, textbooks, equipment and simulators to successfully support mobile robotics courses. There are several excellent textbooks, such as Introduction to AI Robotics by Robin Murphy (Murphy, 2000), Computational Principles of Mobile Robotics by Gregory Dudek and Michael Jenkin (Dudek & Jenkin, 2011), and Introduction to Autonomous Mobile Robots by Roland Siegwart and Illah Nourbakhsh (Siegwart & Nourbakhsh, 2004).

Students must gain most of their mobile robotics education at the university level. At the graduate level, robotics courses must take the form of research, projects and theses. At this level, more research must be done.

#### **4.1 Mobile robotic platforms for the university level**

There are many mobile robots available at the university level. A few selected ones will be presented and described in this section. The robots will be described in terms of hardware, capabilities, control and applications. The peripherals that can be used for each robot will also be briefly described, if possible. An effort will be made to describe the possible applications for each of these robots.

#### **4.1.1 Lego Mindstorms NXT and Fischertechnik ROBO TX**

The processing power and programming flexibilities that these systems offer make them ideal for University introductory courses. Many universities are using these robots very successfully. These systems were described earlier in the *Mobile robotic platforms for K-12 education* section thus the explanation part will be omitted.

These robotic kits are primarily used at the introductory level to teach the basic concepts and principles of perception, localization, map building, path planning and navigation. We often find these types of mobile robots used for projects and/or to develop prototype ideas. Programming at this level is done using high-level languages such as C# with Microsoft Robotics Developer Studio, NBC, ROBOTC or other high-level languages.
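As a taste of what an introductory localization exercise on these kits looks like, the sketch below implements dead-reckoning (odometry) for a differential-drive robot. The update equations are the standard differential-drive model; the function name and the wheel-geometry numbers are illustrative, not taken from any particular kit:

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning update for a differential-drive robot.

    d_left / d_right: distance travelled by each wheel since the last
    update (e.g. encoder ticks * distance-per-tick); wheel_base is the
    spacing between the two wheels, all in the same length unit.
    """
    d_center = (d_left + d_right) / 2.0        # forward motion of the body
    d_theta = (d_right - d_left) / wheel_base  # change in heading (radians)
    # Integrate at the midpoint heading for a better small-arc estimate.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Example: both wheels move 100 mm, so the robot drives straight ahead.
pose = (0.0, 0.0, 0.0)
pose = odometry_step(*pose, d_left=100.0, d_right=100.0, wheel_base=120.0)
print(pose)  # (100.0, 0.0, 0.0)
```

On an NXT or VEX robot, `d_left` and `d_right` would come from the motors' built-in rotation sensors, and comparing the accumulated pose against the robot's true position makes drift — the motivation for proper localization — immediately visible to students.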

### **4.1.2 VEX robotics design system**

VEX robotics was described earlier in section *Mobile robotic platforms for K-12 education*. Many universities are using VEX robotics because of the rigid design, powerful processor and programming flexibility. This kit, like the Mindstorms kit, is primarily used for introductory courses, projects and prototyping.

### **4.1.3 iRobot create**


The Create robot from iRobot (iRobot: Education & Research, 2011) is intended for educational purposes. This robot is based on the iRobot Roomba vacuum cleaner. In place of the vacuum hardware of the Roomba, the Create includes a cargo bay which houses a 25-pin port that can be used for digital and analog input and output. The Create also possesses a serial port through which sensor data can be read and motor commands can be issued using the iRobot Roomba Open Interface protocol. The platform accepts virtually all accessories designed for iRobot's domestic robots and can also be programmed with the addition of a small "command module" (a microcontroller with a USB connector and four DE-9 expansion ports).

The controller of the iRobot is limited in processing power and thus many choose to utilize an external computer in controlling the Create robot. Since the built-in serial port supports the transmission of sensor data and can receive actuation commands, any embedded computer that supports serial communication can be used as the control computer. Popular choices include the gumstix (Gumstix small open source hardware, 2011) line of computers. In many cases laptop computers are used to control the Create through the serial connection.
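Because any serial-capable computer can command the Create, a short sketch helps make this concrete. The opcodes below (128 = start, 131 = safe mode, 137 = drive) come from iRobot's published Open Interface specification; the helper only builds the command packet, and actually writing it to the serial port (e.g. with pySerial) is left as a comment, since port names vary per machine.

```python
import struct

# Opcodes from the iRobot Open Interface (OI) specification.
OI_START = 128   # enter the OI (Passive mode)
OI_SAFE = 131    # Safe mode: accepts actuation commands, keeps safety features
OI_DRIVE = 137   # opcode + 16-bit velocity (mm/s) + 16-bit radius (mm)

STRAIGHT = 0x8000  # special radius value meaning "drive straight"

def drive_packet(velocity_mm_s, radius_mm=STRAIGHT):
    """Build the 5-byte DRIVE command: opcode, then velocity and radius
    packed as big-endian 16-bit values."""
    if not -500 <= velocity_mm_s <= 500:
        raise ValueError("velocity must be in [-500, 500] mm/s")
    vel = struct.pack(">h", velocity_mm_s)
    # The straight-line sentinel does not fit in a signed short, so pack it unsigned.
    rad = struct.pack(">H" if radius_mm == STRAIGHT else ">h", radius_mm)
    return bytes([OI_DRIVE]) + vel + rad

# In practice one would open the port and send START, SAFE, then drive commands:
#   import serial
#   port = serial.Serial("/dev/ttyUSB0", 57600)   # port name is machine-specific
#   port.write(bytes([OI_START, OI_SAFE]))
#   port.write(drive_packet(200))                  # forward at 200 mm/s
```

The same packets work whether the controlling computer is a Gumstix board or a laptop, which is exactly why the serial interface makes the Create so easy to extend.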

The Create is supported both in hardware and in simulation by Microsoft Robotics Developer Studio (RDS). An example of an RDS simulation that contains the Create is shown in Figure 10. The Create is the platform used for Sumo robot competitions. The iRobot Command Module for the Create is not required for RDS and is not used.

Fig. 9. iRobot Create

Mobile Robotics in Education and Research 43


Fig. 10. iRobot Create in Microsoft Robotics Developer Studio Simulation

### **4.1.4 K-Team SA mobile robots**

K-Team (K-Team Corporation, 2011) is a Swiss company that develops, manufactures and markets high-quality mobile robots for use in advanced education and research. The Khepera III and Koala II (Figure 11) are found in many university laboratories that specialize in robotics education and research. The KoreBot II is the standard controller for the Khepera III and the Koala II. The KoreBot II is a single-board controller used for custom robotics development.

The Khepera III is a miniature mobile robot that has features that can match the performances of much bigger robots. It has upgradable embedded computing power using the KoreBot system, multiple sensor arrays for both long range and short range object detection, swappable battery pack system for optimal autonomy, and exceptional differential drive odometry. Khepera III is able to move on a tabletop but it is also designed to move on rough floor surfaces and carpets.

Fig. 11. K-Team SA Koala II and Khepera III

The Khepera III architecture provides exceptional modularity. The robot base can be used with or without a KoreBot II board. Using the KoreBot II, it features a standard embedded Linux operating system for quick and easy autonomous application development. Without the KoreBot II, the robot can be remotely operated. It is easily interfaced with any personal computer.

The robot includes an array of nine infrared sensors for obstacle detection as well as five ultrasonic sensors for long range object detection. An optional front pair of ground Infrared Sensors is available for line following and table edge detection. Through the KoreBot II, the robot is also able to host standard Compact Flash extension cards, supporting WiFi, Bluetooth, extra storage space, and many others.
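As an illustration of how such a sensor ring is typically used, here is a minimal reactive obstacle-avoidance sketch. The sensor ordering, cruise speed and gain are hypothetical, not taken from the Khepera documentation; the point is only to show the differential-drive steering pattern of turning away from the more obstructed side.

```python
def avoid_obstacles(ir, cruise=5, gain=0.02):
    """Very simple reactive steering from a ring of 9 IR readings
    (higher value = closer obstacle). Returns (left, right) wheel speeds.
    Indexing is hypothetical: ir[0..3] left side, ir[4] front (ignored
    here for simplicity), ir[5..8] right side."""
    left_load = sum(ir[0:4])     # how obstructed the left side is
    right_load = sum(ir[5:9])    # how obstructed the right side is
    # Steer away from the more loaded side (differential-drive convention:
    # speeding up the left wheel turns the robot right, and vice versa).
    turn = gain * (left_load - right_load)
    return cruise + turn, cruise - turn
```

With obstacles on the left, the left wheel speeds up and the right slows down, so the robot veers right; with a clear ring it cruises straight.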

Koala is a mid-size robot designed for real-world applications. It is bigger than Khepera, more powerful, and capable of carrying larger accessories. Koala has the functionality necessary for use in practical applications.

In addition to these new features, Koala retains a shape and structure similar to Khepera, such that experiments performed on Khepera can be migrated to the Koala. The BIOS of both robots is compatible, permitting programs written for one robot to be easily adapted and recompiled for the other.

Programming of the robots ranges from standard cross-compiled C to more sophisticated tools like LabVIEW, MATLAB (MATLAB - The Language OF Technical Computing, 2011) or SysQuake (Calerga – Sysquake, 2011). In general, any programming environment capable of communicating over a serial port can also be used to program these robots. The KoreBot II GNU (The GNU Operating System, 2011) C/C++ cross-compiler provides a powerful standard tool for complex code compilation and conditional builds. It also supports all the standard C library functions and almost all of the other POSIX libraries.
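To illustrate the serial-port route, the sketch below formats commands in the comma-separated ASCII protocol used by the classic Khepera firmware (e.g. `D,left,right` to set wheel speeds). The command letters shown are from the earlier Khepera models and may differ on newer firmware, so the robot's manual remains the authority.

```python
def khepera_command(opcode, *args):
    """Format one command line in the classic Khepera ASCII serial
    protocol: an opcode letter, comma-separated integer arguments,
    and a terminating newline, encoded as ASCII bytes."""
    fields = [opcode] + [str(a) for a in args]
    return (",".join(fields) + "\n").encode("ascii")

def set_speed(left, right):
    # 'D' sets the left and right wheel speeds on classic Khepera firmware.
    return khepera_command("D", left, right)

# Example: spin in place by driving the wheels in opposite directions.
# A real session would write set_speed(5, -5) to the robot's serial port
# and read back the robot's acknowledgement line.
```

Because the protocol is plain ASCII over a serial line, the same commands can be issued interactively from a terminal emulator, which is handy in a teaching lab.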

The K-Team robots are commonly used for experiments and research in Navigation, Artificial Intelligence, Multi-Agent Systems, Control, Collective Behavior and Real-Time Programming, among others.

#### **4.1.5 Adept MobileRobots**

Adept MobileRobots (Intelligent Mobile Robotic Platforms, 2011) offers an array of mobile robots: the Seekur Jr, the GuiaBot and the PowerBot. The most frequently used and most popular research mobile robot is the Pioneer 3DX (P3DX). It is an advanced research robot that is controlled by a computer (PC), has a large range of sensors (including an optional laser range finder), and communicates via WiFi. The Pioneer's versatility, reliability and durability have made it the reference platform for robotics research. Unlike hobby and kit robots, the Pioneer is fully programmable.

The base Pioneer 3DX (Figure 13) platform arrives fully assembled with motors with 500-tick encoders, 19 cm wheels, a tough aluminum body, 8 forward-facing ultrasonic (sonar) sensors, 8 optional rear-facing sonar, 1, 2 or 3 hot-swappable batteries, and a complete software development kit. One of the innovations of the Pioneer is that it does not have an on-board controller. The robot can be controlled either by an optional internal computer (PC) or an external laptop. The base Pioneer 3DX platform can reach speeds of 1.6 meters per second and carry a payload of up to 23 kg. With the optional Laser Mapping & Navigation System and MobileEyes, the Pioneer can map buildings and constantly update its position within a few cm while traveling within mapped areas. With the appropriate optional accessories, the robot can be remotely viewed, speak, and play and hear audio.
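The figures quoted above (500-tick encoders, 19 cm wheels) are enough to sketch the dead-reckoning arithmetic behind differential-drive odometry. The wheelbase value below is an assumed placeholder, not a Pioneer specification, and the update is the standard first-order unicycle model rather than Adept's own implementation.

```python
import math

TICKS_PER_REV = 500        # encoder resolution quoted for the Pioneer 3DX
WHEEL_DIAMETER_M = 0.19    # 19 cm wheels, as quoted above
WHEELBASE_M = 0.33         # distance between wheels -- assumed, not from the text

# Distance the robot travels per encoder tick (about 1.2 mm here).
M_PER_TICK = math.pi * WHEEL_DIAMETER_M / TICKS_PER_REV

def update_pose(x, y, theta, ticks_left, ticks_right):
    """Dead-reckoning pose update for a differential-drive base
    from one pair of encoder tick counts."""
    dl = ticks_left * M_PER_TICK
    dr = ticks_right * M_PER_TICK
    dc = (dl + dr) / 2.0              # distance travelled by the robot centre
    dtheta = (dr - dl) / WHEELBASE_M  # heading change in radians
    # Midpoint integration of the unicycle model.
    x += dc * math.cos(theta + dtheta / 2.0)
    y += dc * math.sin(theta + dtheta / 2.0)
    return x, y, theta + dtheta
```

One full wheel revolution on both sides (500 ticks each) advances the pose by one wheel circumference, about 0.60 m, with no change in heading, which is a quick sanity check for any odometry code.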

Since it is controlled by a computer, it can be programmed in any programming language that the user selects. In addition to this, the P3DX is supported both in hardware and in simulation by Microsoft Robotics Developer Studio. The Pioneer 3DX is an all-purpose base, used for research and applications involving mapping, teleoperation, localization, monitoring, reconnaissance, vision, manipulation, autonomous navigation, multi-robot cooperation and other behaviors.

Fig. 12. The Seekur Jr, GuiaBot and PowerBot from Adept MobileRobots

Fig. 13. Adept MobileRobots Pioneer 3DX

### **4.2 Mobile robot competitions for university students**

Robotic competitions attempt to foster several research areas by providing a standard problem where a wide range of technologies can be integrated and examined, as well as used for integrated, project-oriented education. Additionally, mobile robotic competitions create media interest, which may even generate additional funds from external sources. Normally the work to conceive, build and program the robots is integrated into final graduation projects, extracurricular activities or post-graduation activities. Students are very motivated, because they can integrate most of the knowledge acquired during their courses. Preparing for mobile robot competitions involves interdepartmental co-operation, sometimes beyond the engineering and science departments.

Mobile Robotic competitions are a very motivating way of fostering research, development and education, mainly in Robotics, but also in Science and Technology in general. Mobile robot contests are recognized by the scientific community as a way for the development of research. The practical solutions students find for their projects on their way to a competition will teach them more skills than a course can. There is a wide variety of competitions for robots of various types. The following examples describe a few of the higher profile events.

### **4.2.1 RoboCup**

RoboCup (RoboCup, 2011) is an international robotics competition founded in 1997. RoboCup chose soccer as a central research topic, aiming at innovations that can be applied to socially significant problems and industries. The aim is to develop autonomous football (soccer) robots with the intention of promoting research and education in the field of artificial intelligence. The name *RoboCup* is a contraction of the competition's full name, "Robot Soccer World Cup", but there are many other stages of the competition as well. The ultimate goal of the RoboCup project is, by 2050, to develop a team of fully autonomous humanoid robots that can win against the human world champion team in soccer.

Fig. 14. RoboCup Humanoid and Simulation Competitions

The contest currently has three major competition domains, each with a number of leagues and subleagues: RoboCupSoccer (RoboCupSoccer, 2011), RoboCupRescue (RoboCupRescue, 2011) and RoboCup@Home (RoboCup@Home, 2011). RoboCupSoccer includes a number of sub-competitions: the Simulation League, Small Size Robot League, Middle Size Robot League, Standard Platform League and the Humanoid League. Figure 14 shows the humanoid competition and the simulation competition.

The RoboCupRescue Robot League is an international competition for urban search and rescue robots, in which robots compete to find victims in a simulated earthquake environment. RoboCupRescue includes real robot and simulation leagues. RoboCup@Home is a newer league inside the RoboCup competitions that focuses on real-world applications and human-machine interaction with autonomous robots. A set of benchmark tests is used to evaluate the robots' abilities and performance in a realistic, non-standardized home environment. The aim is to foster the development of useful robotic applications that can assist humans in everyday life. The ultimate scenario is the real world itself. To build up the required technologies gradually, a basic home environment is provided as a general scenario. In the first years it consists of a living room and a kitchen, but it should soon also involve other areas of daily life, such as a garden/park area, a shop, a street or other public places. Focus lies on the following domains, but is not limited to them: Human-Robot Interaction and Cooperation, Navigation and Mapping in dynamic environments, Computer Vision and Object Recognition under natural light conditions, Object Manipulation, Adaptive Behaviors, Behavior Integration, Ambient Intelligence, Standardization and System Integration.

### **4.2.2 Eurobot**

Eurobot (Eurobot, international robotics contest, 2011) is an international amateur robotics contest created in 1998. It is open to teams of young people, organized either in student projects or in independent clubs. Countries that present more than three teams must organize a national qualification round, from which only three teams advance to the final Eurobot competition (Figure 15). Teams may be formed by students as part of their studies or by independent clubs or non-profit organizations. A team must be made up of two or more active participants. Team members may be up to 30 years old; each team may have one supervisor, to whom this age limit does not apply. The contest aims to interest the widest possible public in robotics and to encourage the group practice of science by young people. The competition includes a conference for the students and the public. Eurobot is an opportunity to unleash technical imagination and to exchange ideas, know-how, hints and engineering knowledge around a common challenge.

Fig. 15. Eurobot competitions

#### **4.2.3 European Land-Robot trial (ELROB)**

ELROB (Elrob-Website, 2011) is a European event which demonstrates the abilities of unmanned robots. ELROB is an annual event and alternates between a military and a civilian focus each year (Figure 16). Only teams from Europe are allowed, but they may come from both commercial and academic backgrounds.

ELROB is designed to assess current technology on problems at hand, using whatever strategy achieves the goal. The scenarios are designed to simulate real-world missions, be they military or civilian. No artificial constraints, such as highly visible road markings, are added to ease the task for the robots. This forces the participating teams and systems to meet the high requirements set by real-world scenarios.

Fig. 16. ELROB military competition robots

### **5. Conclusion**


Mobile robots are becoming part of our everyday life in many forms. The mobile robot industry, other organizations and universities are developing mobile robots for all imaginable applications. Mobile robot research and development are at their highest. There is a need for specialized mobile robotics engineers and scientists.

The mobile robotics momentum that has been created over the last two decades must not stop. More courses and more activities are needed starting from the early K-12 education. This is now possible because of the development of various types of mobile robots and simulators. The existence of various mobile robotics competitions gives extra motivation to students and educators to do more work in mobile robotics.

Teaching with robots from the early stages of K-12 education will better prepare students for high school mobile robotics education and competitions. At the university level, introductory mobile robotics courses must be offered, along with more advanced courses that specialize in topics such as vision, localization and navigation. At the graduate level, students must concentrate on research and projects. This will create continuity in mobile robotics education, from K-12 all the way to graduate school, producing better prepared graduate students, more research interest, and skilled graduates for the many jobs opening up in the mobile robotics industry.

### **6. References**


Chu, K.H., Goldman, R. & Sklar, E. (2005). RoboXAP: an agent-based educational robotics simulator, *Agent-based Systems for Human Learning Workshop (ABSHL) at AAMAS-2005*, Utrecht University, The Netherlands, July 2005.

Dudek, G. & Jenkin, M. (2011). *Computational Principles of Mobile Robotics* (Second Edition), Cambridge University Press, ISBN 9780521692120, Cambridge, UK.

Malec, J. (2001). Some Thoughts on Robotics for Education, *2001 AAAI Spring Symposium on Robotics and Education*, Stanford University, USA, March 2001.

Matson, E., Pauly, R. & DeLoach, S. (2003). Robotic Simulators to Develop Logic and Critical Thinking Skills in Underserved K-6 Children, *Proceedings of the 38th ASEE Midwest Section Conference*, Rolla, Missouri, USA, September 2003.

Murphy, R. R. (2000). *Introduction to AI Robotics*, MIT Press, ISBN 0-262-13383-0, Cambridge, MA, USA.

Siegwart, R. & Nourbakhsh, I. (2004). *Introduction to Autonomous Mobile Robots*, MIT Press, ISBN 0-262-19502-X, Cambridge, MA, USA.

van Lith, P. (2007). Teaching Robotics in Primary and Secondary schools, *Proceedings, ComLab International Conference 2007*, Radovljica, Slovenia, November 30 - December 1, 2007.

Calerga – Sysquake. (n.d.). 2011, Available from: http://www.calerga.com/products/Sysquake

Competition - VEX Robotics. (n.d.). 2011, Available from: http://www.vexrobotics.com/competition

Computer Science Social Network. (n.d.). 2011, Available from: http://www.cs2n.org/rvw

Elrob-Website: Home/Objectives. (n.d.). 2011, Available from: http://www.elrob.org

Engino international website – play to invent. (n.d.). 2011, Available from: http://www.engino.com

Eurobot, international robotics contest. (n.d.). 2011, Available from: http://www.eurobot.org/eng

Fischertechnik GmbH. (n.d.). 2011, Available from: http://www.fischertechnik.de/en

Gumstix small open source hardware. (n.d.). 2011, Available from: http://www.gumstix.com

Intelligent Mobile Robotic Platforms for Service Robots, Research and Rapid Prototyping. (n.d.). 2011, Available from: http://www.mobilerobots.com/Mobile_Robots.aspx

iRobot: Education & Research. (n.d.). 2011, Available from: http://www.irobot.com/create

K-Team Corporation - Mobile Robotics. (n.d.). 2011, Available from: http://www.k-team.com

LEGO Education. (n.d.). 2011, Available from: http://www.legoeducation.com

Lego Education WeDo. (n.d.). 2011, Available from: http://www.legoeducation.us/eng/product/lego_education_wedo_robotics_construction_set/2096

MATLAB - The Language OF Technical Computing. (n.d.). 2011, Available from: http://www.mathworks.com/products/matlab

Microsoft Robotics Developer Studio. (n.d.). 2011, Available from: http://www.microsoft.com/robotics

NI LabVIEW – Improving the Productivity of Engineers and Scientists. (n.d.). 2011, Available from: http://sine.ni.com/np/app/flex/p/ap/global/lang/en/pg/1/docid/nav-77

RoboCup. (n.d.). 2011, Available from: http://www.robocup.org

RoboCup@Home. (n.d.). 2011, Available from: http://www.ai.rug.nl/robocupathome

RoboCupJunior. (n.d.). 2011, Available from: http://www.robocup.org/robocup-junior

RoboCupRescue. (n.d.). 2011, Available from: http://www.robocuprescue.org

RoboCupSoccer. (n.d.). 2011, Available from: http://www.robocup.org/robocup-soccer

ROBOTC.net: Home of the Best Programming Language for Educational Robotics. (n.d.). 2011, Available from: http://www.robotc.net

TETRIX Robotics. (n.d.). 2011, Available from: http://www.tetrixrobotics.com

The GNU Operating System. (n.d.). 2011, Available from: http://www.gnu.org

USFIRST.org – Welcome to FIRST. (n.d.). 2011, Available from: http://www.usfirst.org

VEX Robotics Design System. (n.d.). 2011, Available from: http://www.vexrobotics.com

Welcome to the RoboLab. (n.d.). 2011, Available from: http://www.robolabonline.com/home

**3**

## **The KCLBOT: A Framework of the Nonholonomic Mobile Robot Platform Using Double Compass Self-Localisation**

Evangelos Georgiou, Jian Dai and Michael Luck
*King's College London, United Kingdom*

### **1. Introduction**

The key to effective autonomous mobile robot navigation is accurate self-localization. Without self-localization, or with inaccurate self-localization, any nonholonomic autonomous mobile robot is blind in a navigation environment. The KCLBOT [1] is a nonholonomic two-wheeled mobile robot built around the specifications for the 'Micromouse Robot' and 'RoboCup' competitions, which determine the mobile robot's form factor and size. The mobile robot carries a complex electronic system to support on-line path planning, self-localization, and even simultaneous localization and mapping (SLAM), made possible by an onboard sensor array. The mobile robot is loaded with eight Robot-Electronics SRF05 [2] ultrasonic rangers, and its drive system is supported by a Nubotics WC-132 [3] WheelCommander Motion Controller and two WW-01 [4] WheelWatcher Encoders. The motors are modified continuous-rotation servo motors, as required by the WheelCommander Motion Controller. The rotation of the mobile robot is measured by two Robot-Electronics CMPS03 [5] Compass Modules; these two modules make up the double compass configuration, which supports the self-localization theory presented in this paper. Each individual module provides the bearing of the mobile robot relative to the earth's magnetic field. The central processing for the mobile robot is managed by a Savage Innovations OOPic-R microcontroller, which has advanced communication modules to enable data exchange between the sensors and the motion controller. Communication is managed via a serial bus and an I²C bus. The electronics are all mounted on two aluminium bases, which make up the square structure of the mobile robot. To support the hardware requirements of the novel localisation methodology, a 200 MHz 32-bit ARM9 processor on a GHI ChipworkX module [6] is employed.
The software architecture is based on the Microsoft .NET Micro Framework 4.1 using C# and the Windows Presentation Foundation (WPF).

The combination of hardware electronics and drive mechanics makes the KCLBOT, as represented in Fig. 1, a suitable platform for autonomous self-localization.

Many different systems have been considered for self-localization, from visual odometry [7] to GPS-based methods. While all of these have benefits and detriments, the solution

proposed in this paper endeavours to offer significantly more benefits than detriments. Over the years, several solutions to self-localization have been presented. The most common application uses the vehicle's shaft encoders to estimate the distance travelled and deduce the position. Other applications use external reference entities to compute the vehicle's position, such as global positioning systems (GPS) or marker beacons. All of these applications come with their respective weaknesses: the shaft encoder assumes no slippage and is subject to accumulative drift; GPS does not work indoors and has a large margin of error; and the beacon method is subject to the loss of multi-path component delivery, with accuracy affected by shadowing, fast/slow fading, and the Doppler effect. More accurate applications have been presented using visual odometry, but such applications require off-line processing or high computation time for real-time use. Frederico et al. [8] present an interesting self-localization concept for the Bulldozer IV robot using shaft encoders, an analog compass, and an optical position sensor from a PC mouse; this configuration depends on a flat surface for visual odometry to be effective, and any deviation from the surface causes inaccuracy. Hofmeister et al. [9] present a good idea for self-localization using visual odometry with a compass to cope with low-resolution visual images. While this is a very good approach to self-localization, the vehicle depends on a camera and the computational ability to process images quickly. Haverinen et al. [10] propose an excellent basis for self-localization utilizing ambient magnetic fields in indoor environments, using the Monte Carlo Localization technique.

Fig. 1. The KCLBOT: A Nonholonomic Manoeuvrable Mobile Robot


It is not ideal to have multiple solutions for the position and orientation of the mobile robot, and the computational requirement affects whether a solution is available in real time. Heavy computation also drains the battery of the mobile robot, which is a critical aspect. In the technique utilizing two compasses, an analytical calculation model is presented instead of a numerical solution, which requires minimal computation time compared with the numerical model. The configuration of the mobile robot employs two quadrature shaft encoders and two digital magnetic compasses to compute the vehicle's position and angular orientation on a two-dimensional Cartesian plane. The accuracy of the analytical model is benchmarked against visual odometry telemetry. However, this model still suffers from accumulative drift because of the use of quadrature shaft encoders, and the digital compasses encounter the same problem because their resolution is limited. The ideal solution would neither accumulate drift error nor depend on the previous configuration values of position and orientation; such a solution is only available via visual odometry.

### **2. A manoeuvrable nonholonomic mobile robot**

In this section, the experimental vehicle is evaluated by defining its constraints and modelling its kinematic and dynamic behaviour. The constraints are based on the holonomic and nonholonomic behaviour of the rotating wheels and the vehicle's pose in a two-dimensional Cartesian plane. The equations of motion for the mobile robot are deduced using the Lagrange–d'Alembert principle, with Lagrangian multipliers for optimization. The behaviour of the dual motor configuration is deduced using a combination of Newton's law with Kirchhoff's law.

### **2.1 Configuration constraints and singularities**

In the manoeuvrable classification of mobile robots [11], the vehicle is defined as being constrained to move along its fixed heading angle. For the vehicle to change its manoeuvre configuration, it needs to rotate about itself.

Fig. 2. A typical two-wheel mobile robot constrained under the manoeuvrable classification.

As the vehicle traverses the two-dimensional plane, both the left and right wheels follow paths that move around the instantaneous centre of curvature at the same angular rate, defined as $\omega$, and thus the angular velocities of the left and right wheel rotation can be deduced as follows:

$$\dot{\theta}_L = \omega\left(r_{icc} - \frac{L}{2}\right) \tag{1}$$

$$\dot{\theta}_R = \omega\left(r_{icc} + \frac{L}{2}\right) \tag{2}$$


where $L$ is the distance between the centres of the two rotating wheels, and the parameter $r_{icc}$ is the distance between the mid-point of the rotating wheels and the instantaneous centre of curvature. Using the velocity equations (1) and (2) of the rotating left and right wheels, $\dot{\theta}_L$ and $\dot{\theta}_R$ respectively, the instantaneous centre of curvature $r_{icc}$ and the angular rate $\omega$ can be derived as follows:

$$r_{icc} = \frac{L(\dot{\theta}_R + \dot{\theta}_L)}{2(\dot{\theta}_R - \dot{\theta}_L)} \tag{3}$$

$$
\omega = \frac{(\dot{\theta}\_R - \dot{\theta}\_L)}{L} \tag{4}
$$
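Equations (1) through (4) can be checked with a short numerical sketch. The Python snippet below is purely illustrative (the function names and values are ours, and the actual KCLBOT firmware runs in C# on the .NET Micro Framework); it recovers $r_{icc}$ and $\omega$ from the wheel rates per equations (3) and (4), and inverts the mapping per equations (1) and (2):

```python
import math

def icc_radius_and_omega(theta_dot_r, theta_dot_l, L):
    """Equations (3) and (4): recover the instantaneous-centre-of-curvature
    distance r_icc and the angular rate omega from the two wheel rates.
    L is the distance between the wheel centres."""
    if math.isclose(theta_dot_r, theta_dot_l):
        # Singularity: equal wheel rates -> straight-line motion, r_icc -> infinity.
        return math.inf, 0.0
    omega = (theta_dot_r - theta_dot_l) / L
    r_icc = L * (theta_dot_r + theta_dot_l) / (2.0 * (theta_dot_r - theta_dot_l))
    return r_icc, omega

def wheel_speeds(omega, r_icc, L):
    """Equations (1) and (2): the inverse mapping, from (omega, r_icc)
    back to the (left, right) wheel angular rates."""
    return omega * (r_icc - L / 2.0), omega * (r_icc + L / 2.0)
```

A quick round trip with $L = 0.2$: wheel rates $(\dot{\theta}_L, \dot{\theta}_R) = (0.4, 0.6)$ give back $\omega = 1.0$ and $r_{icc} = 0.5$, matching the forward mapping.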

Using equations (3) and (4), two singularities can be identified. When $\dot{\theta}_R = \dot{\theta}_L$, the radius of the instantaneous centre of curvature, $r_{icc}$, tends towards infinity; this is the condition when the mobile robot is moving in a straight line. When $\dot{\theta}_R = -\dot{\theta}_L$, the mobile robot is rotating about its own centre and the radius of the instantaneous centre of curvature, $r_{icc}$, is null. When the wheels on the mobile robot rotate, the quadrature shaft encoder returns a counter tick value; the rotation direction of the wheel is given by the positive or negative value returned by the encoder. Using the number of tick counts returned, the distance travelled by the rotating left and right wheels can be deduced in the following way:

$$d\_L = \frac{L\_{ticks} \pi D}{L\_{res}}\tag{5}$$

$$d\_R = \frac{\mathcal{R}\_{ticks} \pi D}{\mathcal{R}\_{res}} \tag{6}$$

where Lticks and Rticks depicts the number of encoder pulses counted by left and right wheel encoders, respectively, since the last sampling, and D is defined as the diameter of the wheels. With resolution of the left and right shaft encoders Lres and Rres , respectively, it is possible to determine the distance travelled by the left and right rotating wheel, dL and dR . This calculation is shown in equations (5) and (6).
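Equations (5) and (6) share the same form, so a single helper covers both wheels. The sketch below is illustrative (the function name and the sample resolution and wheel diameter are our assumptions, not the WW-01 encoder's actual figures):

```python
import math

def wheel_distance(ticks, resolution, wheel_diameter):
    """Equations (5) and (6): distance rolled by one wheel since the last
    sample, from the signed quadrature-encoder tick count. `resolution` is
    the number of ticks per full wheel revolution; a negative tick count
    indicates reverse rotation, yielding a negative distance."""
    return ticks * math.pi * wheel_diameter / resolution
```

For a hypothetical 128-tick encoder on a 0.06 m wheel, one full revolution (128 ticks) yields a distance of $\pi \times 0.06 \approx 0.188$ m.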

In the field of robotics, holonomicity [12] refers to the relationship between the controllable and total degrees of freedom of the mobile robot, as presented by the mobile robot configuration in Fig. 3. If the controllable degrees of freedom are equal to the total degrees of freedom, the mobile robot is defined as holonomic; if the controllable degrees of freedom are fewer than the total degrees of freedom, it is nonholonomic. The manoeuvrable mobile robot has three degrees of freedom: its position in two axes and its orientation relative to a fixed heading angle. Its single holonomic constraint is based on the mobile robot's translation and rotation in the direction of the axis of symmetry and is represented as follows:

$$
\dot{y}\_c \cos(\phi) - \dot{\mathbf{x}}\_c \sin(\phi) - d\dot{\phi} = 0 \tag{7}
$$

where $x_c$ and $y_c$ are the Cartesian coordinates of the mobile robot's centre of mass, which is defined as $P_c$, and $\phi$ describes the heading angle of the mobile robot, which is


referenced from the global x-axis. To conclude, Equation (7) presents the pose of the mobile robot. The mobile robot has two controllable degrees of freedom, which control the rotational velocity of the left and right wheel and, adversely – with changes in rotation – the heading angle of the mobile robot is affected; these constraints are stated as follows:

$$\dot{y}_c\sin(\phi) + \dot{x}_c\cos(\phi) + L\dot{\phi} = r\dot{\theta}_r \tag{8}$$

$$\dot{y}_c\sin(\phi) + \dot{x}_c\cos(\phi) - L\dot{\phi} = r\dot{\theta}_l \tag{9}$$

Fig. 3. A Manoeuvrable Nonholonomic mobile robot

Fig. 4. The Mobile Robot Drive Configuration


where $\theta_r$ and $\theta_l$ are the angular displacements of the right and left mobile robot wheels, respectively, and where $r$ describes the radius of the mobile robot's driving wheels. As such, the two-wheeled manoeuvrable mobile robot is a nonholonomic system. To conclude, Equations (8) and (9) describe the angular velocity of the mobile robot's left and right wheel.
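Constraints (8) and (9), solved for the wheel rates, map any body-frame motion onto the two wheels. The sketch below (illustrative Python; function name and numeric values are our assumptions) shows the two limiting cases: straight-line motion gives equal wheel rates, while pure rotation gives equal and opposite ones:

```python
import math

def wheel_rates_from_pose_rates(x_dot, y_dot, phi, phi_dot, L, r):
    """Nonholonomic constraints (8) and (9) solved for the wheel angular
    velocities: r * theta_dot_{r,l} = y_dot*sin(phi) + x_dot*cos(phi) +/- L*phi_dot."""
    forward = y_dot * math.sin(phi) + x_dot * math.cos(phi)
    theta_dot_r = (forward + L * phi_dot) / r
    theta_dot_l = (forward - L * phi_dot) / r
    return theta_dot_r, theta_dot_l
```

With $\phi = 0$, driving straight at 0.5 m/s yields identical wheel rates, and spinning in place ($\dot{x}_c = \dot{y}_c = 0$) yields opposite-signed rates, consistent with the $r_{icc}$ singularities of Section 2.1.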


| Symbol | Description of Structured Constant |
|--------|-----------------------------------|
| $P_c$ | The centre of mass of the mobile robot |
| $P_o$ | The intersection of the axis of symmetry with the mobile robot's driving wheel axis |
| $d$ | The distance between $P_o$ and $P_c$ |
| $m_c$ | The mass of the mobile robot without the driving wheels and the rotating servo motors |
| $m_w$ | The mass of each of the mobile robot's wheels and rotating motors |
| $I_c$ | The moment of inertia of the mobile robot without the driving wheels and the rotating servo motors, about a vertical axis through $P_o$ |
| $I_w$ | The moment of inertia of each of the wheels and rotating servo motors, about the wheel's axis |
| $I_m$ | The moment of inertia of each of the wheels and rotating servo motors, about the diameter of the wheels |

Table 1. The Mobile Robot's Constants

Based on the mobile robot drive configuration presented in Fig. 4, Table 1 describes the structured constants required to characterize the physical movement of the mobile robot.

### **2.2 Kinematics and dynamics modelling**

Using the diagrammatic model expressed by Figures 2 and 3, and the structured constants listed in Table 1, the nonholonomic equations of motion with Lagrangian multipliers are derived using the Lagrange–d'Alembert principle [13] and are specified as follows:

$$M(q)\ddot{q} + V(q, \dot{q}) + G(q) = E(q)u + B^T(q)\lambda\_n \tag{10}$$

where $M(q)$ describes an $n \times n$ dimensional inertia matrix, and where $M(q)\ddot{q}$ is represented as follows:

$$
\begin{bmatrix}
(m_c + 2m_w) & 0 & -m_c d\sin(\phi) & 0 & 0 \\
0 & (m_c + 2m_w) & m_c d\cos(\phi) & 0 & 0 \\
-m_c d\sin(\phi) & m_c d\cos(\phi) & I_z & 0 & 0 \\
0 & 0 & 0 & I_w & 0 \\
0 & 0 & 0 & 0 & I_w
\end{bmatrix}
\begin{bmatrix}
\ddot{x}_c \\
\ddot{y}_c \\
\ddot{\phi} \\
\ddot{\theta}_r \\
\ddot{\theta}_l
\end{bmatrix}
\tag{11}
$$



Here, $I_z = I_c + 2m_w(d^2 + L^2) + 2I_m$, and $V(q,\dot{q})$ describes an $n$-dimensional velocity-dependent force vector, represented as follows:

$$\begin{bmatrix} -m\_c d\dot{\phi}^2 \cos(\phi) \\ -m\_c d\dot{\phi}^2 \sin(\phi) \\ 0 \\ 0 \\ 0 \end{bmatrix} \tag{12}$$

$G(q)$ describes the gravitational force vector, which is null and is not taken into consideration; $u$ describes a vector of $r$ dimensions of actuator forces/torques; $E(q)$ describes an $n \times r$ dimensional matrix used to map the actuator space into the generalized coordinate space; and $E(q)u$ is specified as follows:

$$
\begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} \tau_r \\ \tau_l \end{bmatrix}
\tag{13}
$$

The expression is in terms of the coordinates $(q,\dot{q})$, where $q \in \mathbb{R}^n$ is the position vector and $\dot{q} \in \mathbb{R}^n$ is the velocity vector. With $q$ defined as $[x_c, y_c, \phi, \theta_r, \theta_l]^T$, the constraint equation can be written as $A(q)\dot{q} = 0$, where the constraint matrix $A(q)$ is expressed as follows:

$$A(q)\dot{q} = \begin{bmatrix} -\sin(\phi) & \cos(\phi) & -d & 0 & 0 \\ -\cos(\phi) & -\sin(\phi) & -b & r & 0 \\ -\cos(\phi) & -\sin(\phi) & b & 0 & r \end{bmatrix} \begin{vmatrix} \dot{x}\_c \\ \dot{y}\_c \\ \dot{\phi} \\ \dot{\theta}\_r \\ \dot{\theta}\_l \end{vmatrix} \tag{14}$$

Finally, $B^T(q) = A^T(q)$, and $\lambda_n$ describes an $m$-dimensional vector of Lagrangian multipliers; $B^T(q)\lambda_n$ can be described as follows:

$$
\begin{bmatrix}
-\sin(\phi) & -\cos(\phi) & -\cos(\phi) \\
\cos(\phi) & -\sin(\phi) & -\sin(\phi) \\
-d & -b & b \\
0 & r & 0 \\
0 & 0 & r
\end{bmatrix}
\begin{bmatrix}
\lambda_1 \\
\lambda_2 \\
\lambda_3
\end{bmatrix}
\tag{15}
$$

The purpose of using the Lagrangian multipliers is to optimize the behaviour of the nonholonomic manoeuvrable mobile robot, by providing a strategy for finding the maximum or minimum of the equations' behaviour, subject to the defined constraints.


Equation (10) describes the Lagrangian representation of the KCLBOT, in a state-space model, and Equations (11), (12), (13), (14), and (15) decompose the state-space model.
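The state-space pieces in equations (11), (12), and (14) can be assembled numerically. The sketch below is illustrative Python with placeholder constants (our assumed values, not the KCLBOT's measured parameters); it builds $M(q)$, $V(q,\dot{q})$, and $A(q)$, and checks that straight-line rolling satisfies the constraint equation $A(q)\dot{q} = 0$:

```python
import math

# Illustrative constants (hypothetical values, not the KCLBOT's measured ones).
m_c, m_w = 1.2, 0.05            # chassis and wheel masses [kg]
I_c, I_w, I_m = 0.01, 1e-4, 1e-4
d, L, b, r = 0.02, 0.1, 0.05, 0.03   # b is assumed here to be half the axle length
I_z = I_c + 2 * m_w * (d**2 + L**2) + 2 * I_m

def mass_matrix(phi):
    """M(q) from equation (11)."""
    s, c = math.sin(phi), math.cos(phi)
    return [
        [m_c + 2 * m_w, 0.0, -m_c * d * s, 0.0, 0.0],
        [0.0, m_c + 2 * m_w, m_c * d * c, 0.0, 0.0],
        [-m_c * d * s, m_c * d * c, I_z, 0.0, 0.0],
        [0.0, 0.0, 0.0, I_w, 0.0],
        [0.0, 0.0, 0.0, 0.0, I_w],
    ]

def velocity_vector(phi, phi_dot):
    """V(q, q_dot) from equation (12)."""
    return [-m_c * d * phi_dot**2 * math.cos(phi),
            -m_c * d * phi_dot**2 * math.sin(phi),
            0.0, 0.0, 0.0]

def constraint_matrix(phi):
    """A(q) from equation (14)."""
    s, c = math.sin(phi), math.cos(phi)
    return [
        [-s, c, -d, 0.0, 0.0],
        [-c, -s, -b, r, 0.0],
        [-c, -s, b, 0.0, r],
    ]
```

For straight-line motion at speed $v$ with $\phi = 0$ and wheel rates $v/r$, every row of $A(q)\dot{q}$ vanishes, and $M(q)$ is symmetric for any heading, as an inertia matrix must be.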

#### **2.3 The dual-drive configuration**

The manoeuvrable mobile robot is configured with two independent direct-current (DC) servo motors, set up parallel to each other, with the ability for continuous rotation. This configuration allows the mobile robot to operate as a manoeuvrable vehicle, as illustrated in Figure 5.

Fig. 5. Dual Servo Motor Drive Configuration

It is assumed that both the left and right servo motors are identical. The torque, $\tau_{l,r}$, of each motor is related to the armature current, $i_{l,r}$, by the constant factor $K_t$, described as $\tau_{l,r} = K_t i_{l,r}$. The input voltage source, $V_{l,r}$, drives the servo motors; $R_{l,r}$ is the internal resistance of the motor, $L_{l,r}$ is the internal inductance of the motor, and $e_{l,r}$ describes the back electromotive force (EMF) of the left and right electric servo motors. It is known that $e_{l,r} = K_e\dot{\theta}_{l,r}$, where $K = K_e = K_t$ describes the electromotive force constant.


| Symbol | Description |
|--------|-------------|
| $I_{wl,r}$ | Moment of inertia of the rotor |
| $b_{l,r}$ | Damping ratio of the mechanical system |
| $K$ | Electromotive force constant |
| $R_{l,r}$ | Electric resistance |
| $L_{l,r}$ | Electric inductance |
| $V_{l,r}$ | Input voltage source |

Table 2. Dual Servo Motor Configuration Value Definitions


The value definitions listed in Table 2 complement Fig. 5's representation of the mobile robot drive configuration.

Using Newton's laws of motion and Kirchhoff's circuit laws [14], the motion of the motors can be related to the electrical behaviour of the circuit.

$$I_{wl,r}\ddot{\theta}_{l,r} + b_{l,r}\dot{\theta}_{l,r} = K i_{l,r} \tag{16}$$

$$L_{l,r}\frac{di_{l,r}}{dt} + R_{l,r} i_{l,r} = V_{l,r} - K\dot{\theta}_{l,r} \tag{17}$$

Equation (16) specifies the Newtonian derivation of the motion of both motors, and equation (17) specifies how the circuit behaves under Kirchhoff's laws. Having derived equations (16) and (17), the next step is to relate the electrical circuit behaviour to the mechanical behaviour of the rotating motors; this is achieved by applying the Laplace transform and expressing equations (16) and (17) in terms of *s* as follows:

$$s\left(I_{wl,r}s + b_{l,r}\right)\Theta_{l,r}(s) = K I_{l,r}(s) \tag{18}$$

$$\left(L_{l,r}s + R_{l,r}\right)I_{l,r}(s) = V_{l,r}(s) - Ks\Theta_{l,r}(s) \tag{19}$$

Using equations (18) and (19), the open-loop transfer function of this configuration can be derived by eliminating $I_{l,r}(s)$ and relating the equation of motion to the circuit behaviour as follows:

$$\frac{s\Theta_{l,r}(s)}{V_{l,r}(s)} = \frac{K}{(I_{wl,r}s + b_{l,r})(L_{l,r}s + R_{l,r}) + K^2} \tag{20}$$

Here, equation (20) relates the rotational speed of the motors, as the system's output, to the voltage applied to the motors, as the system's input.
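As a sanity check on the model, the coupled first-order equations (16) and (17) can be integrated numerically and compared against the steady-state gain implied by the transfer function (20). This is a minimal sketch; the parameter values below are illustrative placeholders, not measured KCLBOT motor constants.

```python
import math

def simulate_motor(V, I_w=0.01, b=0.1, K=0.01, R=1.0, L=0.5, dt=1e-4, t_end=3.0):
    """Euler integration of the DC servo motor model, equations (16)-(17).

    I_w * theta_dd + b * theta_d = K * i      (mechanical, eq. 16)
    L * di/dt + R * i = V - K * theta_d       (electrical, eq. 17)

    All parameter values are illustrative, not the KCLBOT's.
    Returns the motor's angular velocity at t_end.
    """
    theta_d, i = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        di = (V - R * i - K * theta_d) / L
        dtheta_d = (K * i - b * theta_d) / I_w
        i += di * dt
        theta_d += dtheta_d * dt
    return theta_d

# Steady state of eq. (20) as s -> 0: theta_d / V = K / (b*R + K^2)
V = 6.0
w = simulate_motor(V)
w_ss = V * 0.01 / (0.1 * 1.0 + 0.01 ** 2)
```

After a few mechanical and electrical time constants the simulated speed settles onto the steady-state gain predicted by equation (20), confirming the two descriptions agree.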

In summary, the constraints of the experimental vehicle have been derived in equations (7), (8), and (9), in both holonomic and nonholonomic forms. Using the system constraints, the behaviour of the nonholonomic manoeuvrable mobile robot is formed using the Lagrange-d'Alembert principle with Lagrangian multipliers to optimize the performance of the system, as specified in equations (10), (11), (12), (13), (14), and (15). Finally, using Newton's laws of motion and Kirchhoff's circuit laws, a model relating the system's motion to the behaviour of the electrical circuit is derived using Laplace transforms.

#### **3. Manoeuvrable mobile robot self-localization**

Self-localization admits many different approaches to solving for an object's position; the reason so many approaches persist is that none has yet provided an absolutely precise position estimate. In the introduction, three major approaches were discussed: shaft encoder telemetry, visual odometry, and the global positioning system, all of which have inherent problems. This paper therefore proposes an alternative approach: a hybrid model using the vehicle's dual shaft encoders and a double compass configuration.

The KCLBOT: A Framework of the Nonholonomic Mobile

Robot Platform Using Double Compass Self-Localisation 59

#### **3.1 Implementing a dual-shaft encoder configuration**

By using the quadrature shaft encoders that accumulate the distance travelled by the wheels, a form of position can be deduced by deriving the mobile robot's $x, y$ Cartesian position and the manoeuvrable vehicle's orientation $\phi$ with respect to time. The derivation starts by defining $s(t)$ and $\phi(t)$ as functions of time, which represent the velocity and orientation of the mobile robot, respectively. The velocity and orientation are derived by differentiating the position form as follows:

$$\frac{d\mathbf{x}}{dt} = \mathbf{s}(t). \cos(\phi(t))\tag{21}$$

$$\frac{dy}{dt} = \mathbf{s}(t). \sin(\phi(t))\tag{22}$$

The change in orientation with respect to time is the angular velocity $\alpha$, which was defined in equation (4) and can be specified as follows:

$$\frac{d\phi}{dt} = \alpha = \frac{\dot{\theta}\_r - \dot{\theta}\_l}{b} \tag{23}$$

When equation (23) is integrated, the mobile robot's angle of orientation $\phi(t)$ with respect to time is obtained. The mobile robot's initial angle of orientation $\phi(0)$ is written as $\phi_0$ and is represented as follows:

$$\phi(\mathbf{t}) = \frac{(\dot{\theta}\_r - \dot{\theta}\_l)\mathbf{t}}{\mathbf{b}} + \phi\_0 \tag{24}$$

The velocity of the mobile robot is equal to the average speed of the two wheels and this can be incorporated into equations (21) and (22), as follows:

$$\frac{d\mathbf{x}}{dt} = \frac{\dot{\theta}\_r + \dot{\theta}\_l}{2} \cos(\phi(t)) \tag{25}$$

$$\frac{dy}{dt} = \frac{\dot{\theta}\_r + \dot{\theta}\_l}{2}. \sin(\phi(t)) \tag{26}$$

The next step is to integrate equations (25) and (26), incorporating the initial position of the mobile robot, as follows:

$$\mathbf{x}(\mathbf{t}) = \mathbf{x}\_0 + \frac{\mathbf{L}(\dot{\theta}\_r + \dot{\theta}\_l)}{2(\dot{\theta}\_r - \dot{\theta}\_l)} \left( \sin \left( \frac{(\dot{\theta}\_r - \dot{\theta}\_l)\mathbf{t}}{\mathbf{b}} + \phi\_0 \right) - \sin(\phi\_0) \right) \tag{27}$$

$$y(t) = y_0 + \frac{L(\dot{\theta}_r + \dot{\theta}_l)}{2(\dot{\theta}_r - \dot{\theta}_l)} \left( \cos(\phi_0) - \cos\left(\frac{(\dot{\theta}_r - \dot{\theta}_l)t}{b} + \phi_0\right) \right) \tag{28}$$

Equations (27) and (28) specify the mobile robot's position, where $x_0 = x(0)$ and $y_0 = y(0)$ are the mobile robot's initial positions.

The next step is to represent equations (24), (27) and (28) in terms of the distances that the left and right wheels have traversed, defined as $d_R$ and $d_L$. This can be achieved by substituting $d_R$ and $d_L$ for $\dot{\theta}_r t$ and $\dot{\theta}_l t$, respectively, in equations (24), (27) and (28), which also removes the explicit time variable *t* and yields the following:

$$\phi = \frac{d_R - d_L}{b} + \phi_0 \tag{29}$$

$$x(t) = x_0 + \frac{L(d_R + d_L)}{2(d_R - d_L)} \left( \sin\left(\frac{d_R - d_L}{b} + \phi_0\right) - \sin(\phi_0) \right) \tag{30}$$

$$y(t) = y_0 + \frac{L(d_R + d_L)}{2(d_R - d_L)} \left( \cos(\phi_0) - \cos\left(\frac{d_R - d_L}{b} + \phi_0\right) \right) \tag{31}$$

By implementing equations (29), (30), and (31), we obtain the relative position of a manoeuvrable mobile robot. This offers a possible solution to the self-localization problem but is subject to cumulative drift in position and orientation, with no method of re-alignment. The accuracy of the method also depends on the sampling rate of the data accumulation: if small position or orientation changes are not recorded, the position and orientation will be erroneous.
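A minimal dead-reckoning update based on equations (29), (30), and (31) can be sketched as follows. The wheelbase value is an illustrative assumption, the $L$ factor is taken equal to the wheelbase $b$, the straight-line case $d_R \approx d_L$ (where the closed form is singular) is handled separately, and the sign of the cosine difference follows from integrating equation (26).

```python
import math

def update_pose(x0, y0, phi0, d_R, d_L, b=0.15, eps=1e-9):
    """Dead-reckoning pose update from wheel travel distances d_R, d_L,
    following equations (29)-(31).

    b is the wheelbase; 0.15 m is an illustrative value, not the
    KCLBOT's. When d_R ~= d_L the robot moves straight and the closed
    form is singular, so that case is handled separately.
    """
    if abs(d_R - d_L) < eps:
        d = (d_R + d_L) / 2.0
        return x0 + d * math.cos(phi0), y0 + d * math.sin(phi0), phi0
    phi = (d_R - d_L) / b + phi0                  # eq. (29)
    r = b * (d_R + d_L) / (2.0 * (d_R - d_L))    # turn radius of the midpoint
    x = x0 + r * (math.sin(phi) - math.sin(phi0))  # eq. (30)
    y = y0 + r * (math.cos(phi0) - math.cos(phi))  # eq. (31)
    return x, y, phi

# Quarter-circle check: wheel distances chosen so the midpoint turns 90
# degrees on a radius-1.0 m arc; the pose should land at (1.0, 1.0),
# facing the +y direction.
b = 0.15
R_turn = 1.0
d_R = (R_turn + b / 2) * math.pi / 2
d_L = (R_turn - b / 2) * math.pi / 2
x, y, phi = update_pose(0.0, 0.0, 0.0, d_R, d_L, b=b)
```

The quarter-circle case exercises the closed form; repeated calls with small per-sample distances reproduce the cumulative-drift behaviour described in the text.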

#### **3.2 Numerical approach with a single compass configuration**

Having derived a self-localization model using only the telemetry from the quadrature shaft encoders, the next step in evolving the model is to add a digital compass to input the manoeuvrable mobile robot's orientation. The ideal position for the digital compass is the midpoint between the vehicle's centre of mass and the intersection of the axis of symmetry with the mobile robot's driving wheel axis. In this case, the vehicle is configured such that there is no deviation between the two points, $P_o$ and $P_c$.

When the manoeuvrable mobile robot starts a forward or backward rotation configuration, it induces two independent instantaneous centres of curvature for the left and right wheels. The maximum arcs of the curvature lines are depicted in Fig. 9, showing the steady state changes to the configuration as it rotates.

Fig. 6. Mobile Robot Manoeuvre Configuration


Using a steady state manoeuvre, it is assumed that the actual distance travelled in a rotation manoeuvre does not equal the distance used to model the calculation for position and orientation. This assumption is depicted in Fig. 6, which clearly shows the actual and assumed distance travelled. The difference has no consequence for the calculation model, as it is below the resolution of the quadrature shaft encoders.

Fig. 7. Six-bar Linkage Manoeuvre Model with a Single Compass Configuration

Using the vector loop technique [15] to analyse the kinematic position of the linkage model in Fig. 7., the vector loop equation is written as follows:

$$I\_{N\_LO\_L} + I\_{N\_RN\_L} + I\_{O\_RN\_R} + I\_{O\_LO\_R} = 0\tag{32}$$

Using the complex notation, equation (32) is written as follows:

$$d_L e^{j\gamma_L} + L e^{j\gamma_c} + d_r e^{j\gamma_R} + L e^{j\gamma_o} = 0 \tag{33}$$

Having derived the complex notation of the vector loop equation in equation (33), the next step is to substitute the Euler equations to the complex notations as follows:

$$\begin{aligned} d_L(\cos(\gamma_L) + j\sin(\gamma_L)) + L(\cos(\gamma_c) + j\sin(\gamma_c)) + \\ d_r(\cos(\gamma_R) + j\sin(\gamma_R)) + L(\cos(\gamma_o) + j\sin(\gamma_o)) = 0 \end{aligned} \tag{34}$$

Equation (34) is separated into its corresponding real and imaginary parts, considering $\gamma_o = 180°$, as follows:

$$d_L \cos(\gamma_L) + L \cos(\gamma_c) + d_r \cos(\gamma_R) - L = 0 \tag{35}$$

$$d\_L \sin(\gamma\_L) + L \sin(\gamma\_c) + d\_r \sin(\gamma\_R) = 0\tag{36}$$

where $d_L$, $d_R$, and $L$ are all known constants, and $\gamma_c$ specifies the new angle of orientation of the mobile robot. Solving equations (35) and (36) simultaneously for the two unknown angles $\gamma_L$ and $\gamma_R$ returns multiple independent values. This approach requires a large amount of computation to first find these angles and then deduce the relative position.
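The multiplicity of simultaneous solutions can be illustrated with a brute-force scan over candidate angle pairs satisfying equations (35) and (36). The wheel distances, link length, and central orientation below are illustrative values, not KCLBOT measurements.

```python
import math

def loop_residuals(g_L, g_R, d_L, d_r, L, g_c):
    """Real and imaginary parts of the vector loop, equations (35) and (36)."""
    f_re = d_L * math.cos(g_L) + L * math.cos(g_c) + d_r * math.cos(g_R) - L
    f_im = d_L * math.sin(g_L) + L * math.sin(g_c) + d_r * math.sin(g_R)
    return f_re, f_im

# Illustrative constants: wheel travel distances, link length L, and a
# measured central orientation gamma_c (all assumed for this sketch).
d_L, d_r, L, g_c = 0.02, 0.03, 0.15, math.radians(10)

# Scan 1-degree candidate pairs; every near-zero residual pair is a
# candidate simultaneous solution of (35)-(36).
solutions = []
for i in range(360):
    for j in range(360):
        f_re, f_im = loop_residuals(math.radians(i), math.radians(j),
                                    d_L, d_r, L, g_c)
        if abs(f_re) < 1e-3 and abs(f_im) < 1e-3:
            solutions.append((i, j))
```

The scan returns clusters of candidate pairs around more than one geometric solution, which is exactly the ambiguity the text describes and the reason the single compass model needs substantial computation.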


### **3.3 Double compass configuration methodology**

Having described the issues associated with using a single compass to solve for the position above, it is clear that it would be preferable to have a model that eliminated having simultaneous solutions and led to a single solution. This would ideally mean that the angles γL and γR are known constants, and to achieve this condition requires additional telemetry from the vehicle. A model with dual compasses is proposed to resolve the angles γL and γ<sup>R</sup> , as shown in Fig. 8.

Fig. 8. Double Compass Manoeuvre Configuration

By introducing two compasses, which are placed directly above the rotating wheels, when a configuration change takes place, the difference is measured by αL and α<sup>R</sup> , which represent the change in orientation of the left and right mobile robot wheels, respectively. Using the same approach as a single compass configuration, the double compass manoeuvre configuration is modelled using a six-bar mechanism as shown in Fig. 9.

Fig. 9. Six-bar Linkage Manoeuvre Model with a Double Compass Configuration


Using an identical approach to the single compass configuration, the vector loop equations remain the same, equations (35) and (36), with the difference being the ability to define the angles $\gamma_L$ and $\gamma_R$. For the manoeuvre model presented in Fig. 9, the configuration can be calculated as follows:

$$
\gamma\_\mathcal{L} = \beta\_\mathcal{L} + \alpha\_\mathcal{L} \tag{37}
$$

$$
\gamma\_{\mathbb{R}} = \beta\_{\mathbb{R}} + \alpha\_{\mathbb{R}} \tag{38}
$$

where $\beta_L$ and $\beta_R$ are the trigonometric angles used for calculating $\gamma_L$ and $\gamma_R$ for each pose of the mobile robot, based on the different configuration states of $d_L$ and $d_R$. For the configuration state described by Fig. 13, $\beta_L = 90°$ and $\beta_R = 270°$. Having constant values for the angles $\gamma_L$ and $\gamma_R$ from equations (37) and (38) allows either equation (35) or (36) to be used to derive the remaining angle $\gamma_c$, which is specified as follows:

$$\gamma\_c = \cos^{-1}\left(\frac{-d\_L\cos(\gamma\_L) - d\_r\cos(\gamma\_R) + L}{L}\right) \tag{39}$$

$$\gamma\_c = \sin^{-1}\left(\frac{-d\_L \sin(\gamma\_L) - d\_r \sin(\gamma\_R)}{L}\right) \tag{40}$$

where equations (39) and (40) give the single compass solution for the orientation of the centrally positioned digital compass, indicated by $\gamma_c$.

Using the simultaneous solutions from comparing equations (35) and (36), and evaluating their difference to either equation (39) or (40), it is possible to derive the accuracy of the measurement. The simultaneous equation solution will be the best fit model for the hybrid model and any ambiguity can be considered resolution error or a factor of wheel slippage.
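A sketch of the analytic double-compass computation, equations (37) through (40), is given below, assuming the $\beta$ offsets of 90° and 270° stated in the text; all numeric constants are illustrative, not measured values.

```python
import math

def central_orientation(d_L, d_r, alpha_L, alpha_R, L,
                        beta_L=math.pi / 2, beta_R=3 * math.pi / 2):
    """Analytic double-compass solution, equations (37)-(40).

    alpha_L, alpha_R are the orientation changes reported by the two
    wheel compasses; beta_L, beta_R are the pose-dependent trigonometric
    offsets (90 and 270 degrees for the configuration in the text).
    Returns gamma_c from the real part of the vector loop, eq. (39).
    """
    gamma_L = beta_L + alpha_L                  # eq. (37)
    gamma_R = beta_R + alpha_R                  # eq. (38)
    # eq. (39): real part of the vector loop solved for gamma_c; the
    # argument is clamped to guard against floating-point round-off.
    c = (-d_L * math.cos(gamma_L) - d_r * math.cos(gamma_R) + L) / L
    return math.acos(max(-1.0, min(1.0, c)))

# Illustrative call with assumed wheel distances and compass deltas.
gc = central_orientation(0.02, 0.03, 0.1, 0.1, 0.15)
```

Because $\gamma_L$ and $\gamma_R$ are fixed by the compass telemetry, a single closed-form evaluation replaces the simultaneous search required by the single compass model.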

#### **4. Mobile robot localization using visual odometry**

#### **4.1 Implementing visual odometry with an overhead camera**

The system uses a standard camera at a resolution of 320 by 240 pixels to capture images, which are then used to compute the position and orientation of the manoeuvring mobile robot.

Fig. 10. Localization Configuration Setup


The camera is positioned above the manoeuvrable mobile robot at a fixed height. The camera is connected to an ordinary desktop computer that processes the images captured from the camera. The captured images are then processed to find the two markers on the mobile robot. The markers are positioned directly above the mobile robot's rolling wheels. The processing software then scans the current image identifying the marker positions. Having two markers allows the software to deduce the orientation of the manoeuvring mobile robot. When the software has deduced the position and orientation of the mobile robot, this telemetry is communicated to the mobile robot via a wireless Bluetooth signal.

### **4.2 A skip-list inspired searching algorithm**

By using a systematic searching algorithm, the computation time is dependent on the location of the markers. The closer the markers are to the origin of the search (e.g. coordinates (0 pixels, 0 pixels)), the faster the search for the markers will be performed. The systematic search algorithm is represented as follows:

```
For x-pixel = 0 To Image width
  For y-pixel = 0 To Image height
    If current pixel (x-pixel, y-pixel) = Marker 1 Then
      Marker 1 x position = x-pixel
      Marker 1 y position = y-pixel
      Flag = Flag + 1
    End if
    If current pixel (x-pixel, y-pixel) = Marker 2 Then
      Marker 2 x position = x-pixel
      Marker 2 y position = y-pixel
      Flag = Flag + 1
    End if
    If Flag > 1 Then
      x-pixel = Image width
      y-pixel = Image height
    End if
  Next
Next
```
The search algorithm above shows how every vertical and horizontal pixel is scanned to identify the two defined markers on the mobile robot. When a marker is found, a flag is raised, and when the search has two raised flags, the search will end.
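The same scan can be written in Python. The pixel-equality match against marker values is an assumption made for illustration, since the chapter does not specify how marker pixels are detected.

```python
def find_markers(image, marker1, marker2):
    """Raster-scan marker search sketched from the pseudocode above.

    image is a 2-D list of pixel values indexed [y][x]; marker1 and
    marker2 are the pixel values of the two markers (an assumption,
    as the detection criterion is not given in the text). Returns the
    (x, y) positions of both markers, stopping the scan as soon as
    both are found.
    """
    pos1 = pos2 = None
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            if pixel == marker1 and pos1 is None:
                pos1 = (x, y)
            elif pixel == marker2 and pos2 is None:
                pos2 = (x, y)
            if pos1 and pos2:      # both flags raised: end the search
                return pos1, pos2
    return pos1, pos2

# A 320x240 frame with two marker pixels placed above the wheels.
img = [[0] * 320 for _ in range(240)]
img[40][100] = 1
img[120][250] = 2
p1, p2 = find_markers(img, 1, 2)
```

As in the pseudocode, markers near the scan origin are found sooner, so the computation time depends on where the robot sits in the frame.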

### **5. Results & analysis**

### **5.1 Statistical results**

To validate the double compass approach to self-localisation, an overhead camera visual odometry tracking system was set up, as presented in the previous section. The visual tracking results serve as the reference against which the double compass tracking results are compared.

Fig. 11. Experimental data from linear path following

Fig. 12. Experimental data from sinusoidal path following

Using the configuration presented in Fig. 10, the mobile robot with a double compass configuration was set up to follow a linear path and a sinusoidal path. The experimental results, showing the visual tracking results and the double compass estimation results, are presented in Fig. 11 and Fig. 12.

The results presented in Table 3 show the statistical analyses of 362 samples recorded from the linear and sinusoidal manoeuvre experiments. Both left and right marker mean values are relatively low; because the data might be skewed, the median is also presented, since it compensates for skew. The 95% confidence intervals, which extend roughly two standard errors either side of the mean, likewise indicate a low error rate.

| Method | Right Marker | Left Marker |
|---|---|---|
| Mean | 12.298302 | 11.254462 |
| 95% Confidence Interval for Mean (LB) | 11.605941 | 10.616642 |
| 95% Confidence Interval for Mean (UB) | 12.990663 | 11.892282 |
| Median | 12.806248 | 11.401754 |
| Variance | 22.284 | 18.911 |
| Standard Deviation | 4.7205636 | 4.3486990 |
| Standard Error Mean | 0.3508767 | 0.3232362 |
| Minimum | 1.0000 | 1.0000 |
| Maximum | 20.6155 | 19.3132 |
| Range | 19.6155 | 18.3132 |
| Interquartile Range | 7.0312 | 6.2556 |
| Skewness | -0.263 | -0.118 |

Table 3. Statistical analysis of experimental data
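Summary statistics of the kind reported in Table 3 can be reproduced for any error sample with the Python standard library. The data below are synthetic, drawn from the normal distribution fitted in Section 5.2, since the raw 362 experimental samples are not published with the chapter.

```python
import math
import random
import statistics

# Synthetic marker-error sample: 362 draws from the normal distribution
# fitted to the experimental error in Section 5.2 (mean 11.78, sd 4.56).
# These are NOT the chapter's measured samples.
random.seed(42)
errors = [random.gauss(11.78, 4.56) for _ in range(362)]

mean = statistics.mean(errors)
median = statistics.median(errors)
stdev = statistics.stdev(errors)
sem = stdev / math.sqrt(len(errors))               # standard error of the mean
ci_lb, ci_ub = mean - 1.96 * sem, mean + 1.96 * sem  # 95% CI for the mean
```

With the real samples substituted for the synthetic draw, these lines yield the mean, median, standard deviation, standard error, and confidence bounds of Table 3 directly.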

The histograms of the error distribution, shown in Fig. 13, present the distribution of error, which is the difference between the visual tracking position and double compass position.

The Q-Q plot depicted in Fig. 14. presents the performance of the observed values against the expected values. This demonstrates that the errors are approximately normally distributed centrally, with anomalies at both tail ends.

The boxplot presented in Fig. 15 visually shows the distance from 0 to the mean, which is 12.3mm and 11.3mm for the right and left markers respectively. It also presents the range, which is 19.6mm and 18.3mm for the right and left respectively, and the minimum (1mm) and maximum (20.6mm and 19.3mm) values.

Fig. 13. Histogram of the left and right marker error


Fig. 14. Normal Q-Q Plot of Error for the Right and Left Markers

Fig. 15. Boxplot of Errors based on the Experimental Methods

#### **5.2 Analysis using Kolmogorov-Smirnov test**

Using the statistical analysis presented in the previous section, a non-parametric test is required to validate the practical effectiveness of the double compass methodology. The ideal analysis test for a non-parametric, independent, one-sample set of data is the Kolmogorov-Smirnov test [16] for significance. The empirical distribution function $F_n$ for $n$ independent and identically distributed observations $X_i$ is defined as follows:

$$F_n(x) = \frac{1}{n} \sum_{i=1}^{n} I_{X_i \le x} \tag{41}$$

where $I_{X_i \le x}$ denotes the indicator function, which is equal to 1 if $X_i \le x$ and equal to 0 otherwise. The Kolmogorov-Smirnov statistic [16] for a cumulative distribution function $F(x)$ is as follows:

$$D_n = \sup_x \left| F_n(x) - F(x) \right| \tag{42}$$

where $\sup_x$ denotes the supremum of the set of distances. For this test to be effective at rejecting a null hypothesis, a relatively large number of data points is required.
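As a concrete illustration of (41) and (42), the statistic $D_n$ can be evaluated directly at the jump points of the empirical distribution function. The sketch below is purely illustrative (it is not the chapter's implementation); the function names are our own:

```python
import numpy as np

def ks_statistic(sample, cdf):
    """D_n = sup_x |F_n(x) - F(x)| (eq. 42), evaluated at the jump
    points of the empirical distribution function F_n (eq. 41)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    f = cdf(x)                                      # hypothesized CDF at the order statistics
    d_plus = np.max(np.arange(1, n + 1) / n - f)    # F_n just after each jump
    d_minus = np.max(f - np.arange(0, n) / n)       # F_n just before each jump
    return max(d_plus, d_minus)
```

For example, a large sample drawn from the uniform distribution, tested against the uniform CDF, yields a small $D_n$, consistent with the sample originating from the hypothesized distribution.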

Under the null hypothesis that the sample originates from the hypothesized distribution $F(x)$, the statistic converges as follows:

$$\sqrt{n}\,D_n \xrightarrow{n \to \infty} \sup_t |\mathcal{B}(F(t))| \tag{43}$$

where $\mathcal{B}(t)$ denotes the Brownian bridge [17]. If $F$ is continuous, then under the null hypothesis $\sqrt{n}\,D_n$ converges to the Kolmogorov distribution, which does not depend on $F$. The test is constructed using the critical values of the Kolmogorov distribution: the null hypothesis is rejected at level $\alpha$ if $\sqrt{n}\,D_n > K_\alpha$, where $K_\alpha$ is obtained from the following:

$$\Pr(K \le K_\alpha) = 1 - \alpha \tag{44}$$

It should also be noted that the asymptotic power of this analysis test is 1.

For the experimental data presented in this paper, the null hypothesis is that the distribution of error is normal with a mean of 11.78 and a standard deviation of 4.56. At a significance level of 0.05, the one-sample Kolmogorov-Smirnov test (44) returns a significance of 0.663. The strength of the returned significance value allows us to retain the null hypothesis and conclude that the distribution of error is normal with a mean of 11.78 and a standard deviation of 4.56.
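The test reported above can be reproduced end to end. The sketch below is illustrative only: the marker-error data are not listed in the chapter, so a synthetic sample is drawn for demonstration. It computes $D_n$ against the hypothesized $N(11.78,\,4.56^2)$ CDF and converts $\sqrt{n}\,D_n$ to a significance value with the Kolmogorov distribution of (43)-(44):

```python
import math
import numpy as np

def kolmogorov_sf(t, terms=100):
    """P(K > t) for the Kolmogorov distribution:
    2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 t^2)."""
    if t <= 0:
        return 1.0
    return 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * t * t)
                     for k in range(1, terms + 1))

def ks_test_normal(sample, mu, sigma):
    """One-sample KS test against N(mu, sigma^2); returns (D_n, significance)."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # Normal CDF via the error function
    f = np.array([0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0)))) for v in x])
    d_n = max(np.max(np.arange(1, n + 1) / n - f),
              np.max(f - np.arange(0, n) / n))
    return d_n, kolmogorov_sf(math.sqrt(n) * d_n)

# Synthetic stand-in for the marker-error data (hypothetical, for illustration only)
rng = np.random.default_rng(1)
errors = rng.normal(11.78, 4.56, size=200)
d_n, p = ks_test_normal(errors, 11.78, 4.56)
# A significance above 0.05 retains the null hypothesis of normality, as in the chapter.
```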

### **6. Conclusions**


The most fundamental requirement for implementing a successful maneuverable non-holonomic mobile robot is accurate self-localization telemetry. The biggest concern with any self-localization technique is the amount of computation it requires. Ideally, an analytical solution for position offers the benefit of a single solution, whereas a numeric solution has many possibilities and requires more time and computation to derive.

In this paper, three different solutions have been presented for the position of the mobile robot. The first solution presented a method where only the quadrature shaft encoder telemetry is used to solve for position. However, this accumulative method leads to drifting of the position and orientation result, caused by slippage of the wheels or even by the limited resolution of the shaft encoders. The second solution presented a hybrid model using both the quadrature shaft encoders and a single, centrally placed digital compass: the maneuver configuration of the mobile robot was modeled as a six-bar linkage mechanism, and a numeric solution was derived by analyzing the mechanism with the vector-loop equation method. The final solution presented a model that implements a dual digital compass configuration to allow an analytical solution to the proposed six-bar linkage mechanism. An analytical model was derived to resolve the numeric nature of the single-compass configuration, allowing the swift resolution of position and orientation.
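The drift inherent in the first (encoder-only) solution can be made concrete with a minimal dead-reckoning update. The function below is a generic differential-drive odometry sketch (not the chapter's actual implementation); any per-step error in the wheel displacements accumulates in the pose:

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """One dead-reckoning update for a differential-drive robot.
    d_left / d_right: wheel displacements derived from the quadrature
    encoders; slip or quantization error here accumulates in (x, y, theta)."""
    d = 0.5 * (d_left + d_right)               # displacement of the midpoint
    dtheta = (d_right - d_left) / wheel_base   # change in heading
    x += d * math.cos(theta + 0.5 * dtheta)    # midpoint integration
    y += d * math.sin(theta + 0.5 * dtheta)
    theta += dtheta
    return x, y, theta
```

Equal wheel displacements produce straight-line motion; opposite displacements produce pure rotation, which is why a small systematic encoder error steadily corrupts the heading and, through it, the position.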

The benefit of the dual compass configuration is that it offers an analytical solution to the hybrid model that utilizes quadrature shaft encoders. This paper thus presents a novel approach for situations where visual odometry is not possible.

#### **7. References**

[1] Georgiou, E., *The KCLBOT Mobile Robot*. 2010; Available from: www.kclbot.com.

[2] *SRF05 Ultra-Sonic Ranger*. Robot Electronics; Available from: http://www.robot-electronics.co.uk/htm/srf05tech.htm

[3] *WC-132 WheelCommander Motion Controller*. Nu-Botics; Available from: http://www.nubotics.com/products/wc132/index.html

[4] *WW-01 WheelWatcher Encoder*. Nu-Botics; Available from: http://www.nubotics.com/products/ww01/index.html

[5] *CMPS03 Compass Module*. Robot Electronics; Available from: http://www.robot-electronics.co.uk/htm/cmps3tech.htm

[6] *GHI Chipworkx Module*. Available from: http://www.ghielectronics.com/catalog/product/123.

[7] Nister, D., Naroditsky, O., & Bergen, J., *Visual Odometry*, in *IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)*. 2004.

[8] Frederico, C., Adelardo, M., & Pablo, A., *Dynamic Stabilization of a Two-Wheeled Differentially Driven Nonholonomic Mobile Robot*, in *Simpósio Brasileiro de Automação Inteligente*. 2003. p. 620-624.

[9] Hofmeister, M., Liebsch, M., & Zell, A., *Visual Self-Localization for Small Mobile Robots with Weighted Gradient Orientation Histograms*, in *40th International Symposium on Robotics (ISR)*. 2009. p. 87-91.

[10] Haverinen, J., & Kemppainen, A., *Global indoor self-localization based on the ambient magnetic field*. Robotics and Autonomous Systems, 2009: p. 1028-1035.

[11] Georgiou, E., Chhaniyara, S., Al-milli, S., Dai, J., & Althoefer, K., *Experimental Study on Track-Terrain Interaction Dynamics in an Integrated Environment*, in *Proceedings of the Eleventh International Conference on Climbing and Walking Robots and the Support Technologies for Mobile Machines (CLAWAR)*. 2008: Coimbra, Portugal.

[12] Kolmanovsky, I., & McClamroch, N., *Developments in nonholonomic control problems*. IEEE Control Systems Magazine, 1995: p. 20-36.

[13] Lew, A., Marsden, J., Ortiz, M., & West, M., *An Overview of Variational Integrators*, in *Finite Element Methods: 1970's and Beyond. Theory and engineering applications of computational methods*, *International Center for Numerical Methods in Engineering (CIMNE)*. 2004. p. 1-18.

[14] Alexander, J., & Maddocks, J., *On the kinematics of wheeled mobile robots*. The International Journal of Robotics Research, 1989. 8: p. 15-27.

[15] Acharyya, S., & Mandal, M., *Performance of EAs for four-bar linkage synthesis*. Mechanism and Machine Theory, 2009. 44(9): p. 1784-1794.

[16] Zhang, G., Wang, X., Liang, Y., & Li, J., *Fast and Robust Spectrum Sensing via Kolmogorov-Smirnov Test*. IEEE Transactions on Communications, 2010. 58(12): p. 3410-3416.

[17] Hu, L., & Zhu, H., *Bounded Brownian bridge model for UWB indoor multipath channel*. IEEE International Symposium on Microwave, Antenna, Propagation and EMC Technologies for Wireless Communications, 2005: p. 1411-1414.

## **Gaining Control Knowledge Through an Applied Mobile Robotics Course**

Lluís Pacheco, Ningsu Luo, Inès Ferrer, Xavier Cufí and Roger Arbusé
*University of Girona, Spain*

### **1. Introduction**

In this chapter, the use of an open mobile robot platform as an innovative educational tool to promote and integrate control science knowledge is presented. Including applied interdisciplinary concepts is an important objective in engineering education. Future work in Electrical and Computer Engineering education points towards gaining the ability to understand the technical details of a wide variety of disciplines (Antsaklis et al., 1999, Murray et al., 2003). Moreover, experimental developments have helped to breathe life into theoretical concepts found in textbooks and have thereby greatly changed the educational experience of students (Hristu-Varsakelis and Brockett, 2002). Students react positively to realism, since the models used for such experiments are in general accurately described by relatively simple differential equations. Within this framework, the challenge of using mobile robots becomes evident. Thus, realistic platforms that incorporate engineering standards and realistic constraints increase student skills and experience through engineering practices.

In this sense, mobile robot platforms can be used as educational tools to promote and integrate different curriculum subjects. The control systems and computer science communities share a mutual interest in robotics. Therefore, in the educational community, the Robotics Education Lab becomes a resource for supporting courses with an academic curriculum in a broad range of subjects. Some important institutions have developed various teaching activities by introducing mobile robots as a necessary academic tool, in a broad sense, in order to extend the students' previously acquired knowledge (Carnegie Mellon University, 2011, Qu and Wu, 2006). In this context, many universities take advantage of mobile robot competitions in engineering education. This allows real-world projects to be tackled and fundamental concepts to be reinforced by increasing motivation and retention. Thus, for example, the FIRST (For Inspiration and Recognition of Science and Technology) mobile robot contest attracts young people to careers in engineering, technology and science. Robotics competition encourages students to apply knowledge gained throughout their engineering degree; it also offers all students a chance to serve as members of interdisciplinary engineering teams, and introduces both freshmen and sophomores to engineering concepts. Moreover, the university curriculum is reinforced by the knowledge gained through applied experiences that embrace a wide spectrum of subjects (Wilczynski and Flowers, 2006). The educational and research objectives can also be achieved through the use of configurable, small, low-cost kits such as LEGO mobile robot kits (Valera, 2007).

In this chapter, the program and acquired experience of an optional course named "Applied Mobile Robotics" is outlined. The main aim of this educational course is to integrate different subjects such as electronics, programming, architecture, perception systems, communications, control and trajectory planning by using the educational open mobile robot platform PRIM (Pacheco et al., 2006). The course is addressed to students of Computer Engineering around the time of their final academic year. As a practical approach, the majority of the educational activities of the course are developed in our university labs. Section 1 introduces the teaching framework from a global point of view. Section 2 indicates which community of students is suitable for this teaching activity within the context of our university framework. The robotic platform used and the course program are presented from a wide scope within the multidisciplinary context. Section 3 presents in detail the program related to the control educational content of the course. Particular attention is paid to describing the different experiments designed to fulfill the process of gaining knowledge. Section 4 briefly introduces the student opinions and future work.

### **2. Educational structure**

The Spanish educational framework arises from the present European convergence program (European Commission, 2008). In this way, the main challenge for Spain in 2010 was to design and adapt the university degree studies. The objective was to improve the quality of education processes by implementing new learning methodologies (Amante et al., 2007). In this work, knowledge integration is proposed by using an open mobile robot. The course is addressed to Computer Engineering students, who can take it to consolidate their curriculum studies. It is an optional subject within the degree of Computer Engineering at the Polytechnic School of the University of Girona. The course "*Applied Mobile Robotics*" started as a special subject in the summer of 2005 (Pacheco et al., 2009). It was proposed by a group of faculty members belonging to the teaching areas of Control and Computer Engineering and sharing a common background involving research on mobile robots. Hence, its purpose is to consolidate, integrate and complement different teaching aspects such as electronics, programming, control and perception systems from a practical point of view by using an available robotic platform.

This section outlines the Computer Engineering curriculum and the course schedule. Special attention is paid to the relationships between the different curriculum subjects and the "*Applied Mobile Robotics*" course. The mobile robot platform used is also described.

#### **2.1 Course schedule and student curriculum**

Students who want to pursue the course have previously acquired basic skills in some fundamental areas such as electronics, programming, control and perception systems. In this way, an open mobile robot is used for linking multiple curriculum subjects from a practical point of view. Significant subjects of the actual university degree curriculum are depicted in Fig. 1.

Fig. 1. Computer Engineering degree curriculum.

The subjects of Fig. 1 with a blue background are related to the "*Applied Mobile Robotics*" course. The main related subjects cover the following educational parts:

• *Programming* covers aspects such as: programming methodology and technology (first year); algorithms, data structure and programming projects (second year); user interfaces, multimedia, fundamentals of computing, programming languages and artificial intelligence (third year).

• *Computer Science* includes issues such as: computer structure and technology (first year); digital systems and microcomputers (second year); architecture of computers, industrial computer science and robotics (third year).

It is noted that topics such as computer vision and control, which are also highly related to the proposed course, are included among the optional subjects. Furthermore, some students are attracted to carry out their "*final degree project*" by continuing and developing issues related to practical aspects of the mobile robotics course. The students who pass the course successfully are awarded 5 credits on their degree, which corresponds to 50 hours of teaching activities. The course consists of theoretical and laboratory sessions related to the following topics: PSE (power system electronics), DPED (design & programming of electronic devices), MCS (modeling and control systems), TPC (trajectory planning and control) and CVS (computer vision systems). The theoretical parts are presented during 11 classroom hours, which are scheduled as follows:

1. Course and platform presentation. Introduction of power drivers (PSE).
2. Hardware description. Digital systems and PLD (programmable logic devices) (Wakerly, 1992). Theoretical introduction of DPED practices.
3. Hardware description. Digital systems and microprocessors (Barnett, 1994). Theoretical introduction of DPED practices.
4. Introducing the MCS block. System identification (Ljung, 1989). Discrete and continuous systems (Kuo, 1996).
5. Developing the MCS block. System reduction, observers and filters (Aström and Wittenmark, 1988).
6. Introducing the TPC block. Configuration space and trajectory following (Schilling, 1990). Fuzzy rules.
7. TPC & MCS integration. Trajectory following. Design of PID controllers.
8. TPC & MCS integration. Trajectory following. Introducing MPC (model predictive control) (Maciejowski, 2002).
9. Introducing the CVS block (Horn, 1998).
10. Practicing within the CVS block, color and filters.
11. Final classroom, overall demos.
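PID speed control recurs throughout the course. A minimal discrete PID of the kind students might implement is sketched below; it is an illustrative sketch only, and the gains, sample time and first-order wheel-speed model are assumptions, not values from the course:

```python
class DiscretePID:
    """Minimal discrete PID controller (illustrative, not the course code)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt          # rectangular integration
        deriv = (err - self.prev_err) / self.dt # backward-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Closed loop with a crude first-order wheel-speed model: dv/dt = (u - v) / tau
pid = DiscretePID(kp=2.0, ki=1.0, kd=0.0, dt=0.025)  # 25 ms sample time
v, tau, dt = 0.0, 0.3, 0.025
for _ in range(800):                                  # 20 s of simulated time
    u = pid.step(0.5, v)                              # 0.5 m/s speed setpoint
    v += (u - v) / tau * dt                           # Euler step of the plant
```

The integral term removes the steady-state error that a pure proportional controller would leave against the first-order plant; the simulated wheel speed settles at the setpoint.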


Gaining Control Knowledge Through an Applied Mobile Robotics Course 73

• The dc motor power drivers based on a MOSFET bridge that controls the energy

• A set of PCB (printed circuits boards) based on PLD act as an interface between the embedded PC system, the encoders, and the dc motors. The interface between the PLD

• A μc processor board controls the sonar sensors. Communication between this board and the embedded PC is made through a serial port. This board is also in charge of a radio control module that enables the tele-operation of the robot. The embedded PC is

the core of the basic system, and it is where the high level decisions are made. The PLD boards generate 23 khz PWM (pulse width modulation) signals for each motor and the consequent timing protection during the command changes. This protection system provides a delay during power connection, and at the moment of change of the rotation motor direction. A hardware ramp is also implemented in order to facilitate a better transition between command changes. The value of the speed is encoded in a byte, it can be generated from 0 to 127 advancing or reversing speed commands that are sent to the PLD boards through the parallel port. The PLD boards also measure the pulses provided by the encoders, during an adjustable period of time, giving the PC the speed of each wheel at every 25ms. The absolute position of each encoder is also measured by two absolute counters used in order to measure the position and orientation of the robot by the odometer system. The shaft encoders provide 500 counts/rev since encoders are placed at the motor axes; it means that the encoders provide 43,000 counts for each turn of the wheel due to the gear reduction. Moreover, the μc has control of the sonar sensors, so for each sensor a distance measure is obtained. The ultrasound sensor range is comprised of between 3cm and 5m. The data provided by these boards are gathered through the serial port in the central computer based on a VIA C3 EGBA 733/800 MHz CPU running under LINUX Debian OS. The rate of communication with these boards is 9600 b/s. Fig. 2.b shows the

The meaningful hardware consists of the following electronic boards:

boards and the PC is carried out by the parallel port.

supplied to the actuators.

electronic and sensorial system blocks.

Fig. 2. (a) The open mobile educational platform PRIM.

(b) Sensorial and electronic system blocks.


Laboratory parts are developed during 39 hours that are scheduled as follows:


### **2.2 The educational open mobile platform**

The robot structure, shown in Fig. 2.a, is made from aluminium. It consists of 4 levels where the different parts are placed. On the first level there are two differential driven wheels, controlled by two DC motors, and a third omni-directional wheel that gives a third contact point with the floor. On the second level there is an embedded PC computer, and on the third level specific hardware and the sonar sensors are placed. On the fourth level the machine vision system is placed. The system can be powered by 12V dc batteries or by an external power source through a 220V ac, and a switch selects either mode of operation. The 12V provided by the batteries or the external source of 220V are transformed into the wide range of dc voltages needed by the system. Moreover, the use of a 220V power supply allows an unlimited use of the platform during the teaching activities. The robot has the following sensorial system:


72 Mobile Robots – Current Trends

8. TPC & MCS integration. Trajectory following. Introducing MPC (model predictive

… control) (Maciejowski, 2002).
9. Introducing the CVS block (Horn, 1998).
10. Practicing within the CVS block: color and filters.
11. Final classroom: overall demos.

Laboratory parts are developed during 39 hours, scheduled as follows:

1. DPED practices with PLD: 4-bit magnitude comparison, PWM generator, design of counters (encoder speed and position).
2. DPED practices with microcomputers: serial port, I/O ports, A/D converters, PWM signal.
3. MCS practices with the robot PRIM using MATLAB: excitation of the input system with PRBS (pseudo-random binary signals); the system model using the MATLAB toolbox IDENT; discrete and continuous models.
4. MCS practices in the SIMULINK environment: reducing the system model and PID control design; testing of the controllers within SIMULINK.
5. MCS practices using the SISO tools of MATLAB: reduced SISO systems and PID speed control.
6. MCS practices using the PRIM platform: on-robot testing of the obtained set of speed controllers.
7. DPED practices using a PC and the robotic platform: introduction of high-level languages; communication with the robot through sockets that expose the on-robot available functions (actuators and sensors).
8. TPC practices using the robotic platform: heuristic navigation using sonar sensors.
9. TPC practices using the robotic platform: the odometer system; trajectory following using PID and discontinuous control laws.
10. Time gap for finishing the previously proposed TPC practices.
11. MCS and TPC practices: introduction of MPC software simulation tools; trajectory following with different prediction horizons and trajectories.
12. CVS practices: introducing the software tools, binary images and segmentation.
13. CVS practices: color, filters and tracking of an object.

#### **2.2 The educational open mobile platform**

The robot structure, shown in Fig. 2.a, is made from aluminium and consists of four levels on which the different parts are placed. On the first level there are two differentially driven wheels, controlled by two DC motors, and a third omni-directional wheel that provides a third contact point with the floor. On the second level there is an embedded PC, and on the third level the specific hardware and the sonar sensors are placed. The machine vision system occupies the fourth level. The system can be powered by 12 V DC batteries or by an external 220 V AC power source, and a switch selects either mode of operation. The 12 V provided by the batteries or the external 220 V source is transformed into the wide range of DC voltages needed by the system. Moreover, the 220 V power supply allows unlimited use of the platform during the teaching activities. The robot has the following sensorial system:

• Two encoders connected to the rotation axis of each DC motor.
• An array of eight ultrasound (sonar) sensors.
• A machine vision system consisting of a monocular camera.

The meaningful hardware consists of the following electronic boards:


The PLD boards generate 23 kHz PWM (pulse-width modulation) signals for each motor, together with the necessary timing protection during command changes. This protection introduces a delay at power connection and at each change of motor rotation direction. A hardware ramp is also implemented to smooth the transition between command changes. The speed value is encoded in one byte: advancing or reversing speed commands from 0 to 127 are sent to the PLD boards through the parallel port. The PLD boards also count the pulses provided by the encoders during an adjustable period of time, giving the PC the speed of each wheel every 25 ms. The absolute position of each encoder is measured by two absolute counters, which the odometer system uses to compute the position and orientation of the robot. The shaft encoders provide 500 counts/rev at the motor axes, which amounts to 43,000 counts per wheel turn once the gear reduction is taken into account. Moreover, the μc controls the sonar sensors, so a distance measure is obtained for each sensor; the ultrasound sensor range lies between 3 cm and 5 m. The data provided by these boards are gathered through the serial port by the central computer, based on a VIA C3 EGBA 733/800 MHz CPU running LINUX Debian OS; the communication rate with these boards is 9600 b/s. Fig. 2.b shows the electronic and sensorial system blocks.
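The measurement chain described above can be illustrated with a minimal Python sketch. The 43,000 counts per wheel turn and the 25 ms reporting window are taken from the text; the wheel diameter is a made-up placeholder, since it is not given here:

```python
import math

COUNTS_PER_WHEEL_TURN = 43_000   # 500 counts/rev at the motor axis times the gear reduction
SAMPLE_PERIOD_S = 0.025          # the PLD boards report wheel speed every 25 ms
WHEEL_DIAMETER_M = 0.15          # placeholder value; not given in the text

def wheel_speed(counts_in_window: int) -> float:
    """Linear wheel speed (m/s) from encoder counts accumulated in one 25 ms window."""
    turns = counts_in_window / COUNTS_PER_WHEEL_TURN
    distance = turns * math.pi * WHEEL_DIAMETER_M
    return distance / SAMPLE_PERIOD_S
```

For example, 430 counts in one window correspond to 0.01 wheel turns, i.e. roughly 0.19 m/s with the placeholder diameter.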

Fig. 2. (a) The open mobile educational platform PRIM. (b) Sensorial and electronic system blocks.

Gaining Control Knowledge Through an Applied Mobile Robotics Course 75


The flexibility of the system allows different hardware configurations as a function of the desired application, and consequently different programs can be run on the μc or PLD boards. The open-platform philosophy is reinforced by the use of the same kind of μc and PLD boards that are used as teaching tools at our school. Furthermore, the teaching activities are reinforced through the integration of different subjects. The system's flexibility is increased by the possibility of connecting it to other computer systems through a LAN. Hence, in this research a network server has been implemented that allows the different lab groups to connect remotely to the mobile robot during the practices. The use of a PC as a high-level device is approached from two points of view: high-level PC programming, and the TCP/IP networking knowledge that allows on-robot LINUX functions to be run over the laboratory LAN. Thus, the lab personal computers become a flexible and easy tool for communicating with the on-robot real-time hardware devices through WiFi connections.
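As an illustration of how a lab PC might reach the on-robot functions through sockets, the following minimal Python sketch sends one speed command. The host, port and ASCII message format are invented here; the actual function names and protocol are defined by the on-robot server the students are given:

```python
import socket

def send_speed_command(host: str, port: int, left: int, right: int) -> str:
    """Send one speed command (one byte per wheel, 0..127 as in the PLD interface).

    The "SPEED l r" text protocol is hypothetical; it only illustrates the idea of
    calling on-robot functions through a socket over the laboratory LAN or WiFi.
    """
    if not (0 <= left <= 127 and 0 <= right <= 127):
        raise ValueError("speed commands are encoded in 0..127")
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(f"SPEED {left} {right}\n".encode())
        return sock.recv(64).decode().strip()  # e.g. an acknowledgement line
```

A student program would call `send_speed_command` in a loop, interleaving it with requests for odometer data.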

### **3. Gaining control knowledge**

This section presents the MCS and TPC learning activities that introduce basic control theory. The control knowledge is mainly acquired through the following lab practices:

• Experimental identification; the model of the system.
• Designing and testing speed PID controllers.
• Trajectory following, controlling the robot position.
• Trajectory following using MPC techniques, as another suitable advanced control method.

It is noted that the compulsory Computer Engineering curriculum lacks control science theory, while students are clearly experienced in programming. In this context, control science is introduced from a general point of view. Moreover, students have a self-guided lab practices manual that includes the important theoretical aspects.

#### **3.1 Experimental model and system identification**

The model is obtained as a set of linear transfer functions that subsume the nonlinearities of the whole system. The parametric identification process is based on black-box models (Norton, 1986; Ljung, 1989). The nonholonomic system dealt with in this work is initially considered a MIMO (multiple input, multiple output) system, as shown in Fig. 3, due to the dynamic coupling between the two DC motors. This MIMO system is composed of a set of coupled SISO (single input, single output) subsystems.

Fig. 3. The MIMO system structure

74 Mobile Robots – Current Trends

The first control lab practical begins with experimental parametric model identification. The model is obtained by sending different PRBS excitations to the robot for low, medium and high speeds (see Fig. 4). Students select the adequate speed model and load the experimental data into the MATLAB environment. Each data group has five arrays containing, respectively, time, right-motor reference velocity, right-motor measured velocity, left-motor reference velocity and left-motor measured velocity. Students then obtain the time response of the inputs and outputs. The system identification is carried out with the MATLAB toolbox *"ident"*, shown in Fig. 5, selecting a second-order ARX model and importing the workspace data to be identified.
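The two ingredients of this practice, PRBS excitation and ARX least-squares fitting, can be sketched without MATLAB. The following self-contained Python example (the system coefficients are invented for the demonstration; the real ones come from the robot data) generates a PRBS with a 7-bit LFSR, simulates a known second-order ARX system, and recovers its parameters by ordinary least squares, which is essentially what the *ident* toolbox does:

```python
def prbs(n: int, seed: int = 0b1010101) -> list:
    """Pseudo-random binary sequence (+/-1) from a 7-bit LFSR with taps 7 and 6."""
    state, out = seed, []
    for _ in range(n):
        bit = ((state >> 6) ^ (state >> 5)) & 1
        state = ((state << 1) | bit) & 0x7F
        out.append(1 if state & 1 else -1)
    return out

def fit_arx2(u, y):
    """Least-squares fit of y(t) = a1*y(t-1) + a2*y(t-2) + b1*u(t-1) + b2*u(t-2)."""
    rows = [[y[t-1], y[t-2], u[t-1], u[t-2]] for t in range(2, len(y))]
    rhs = [y[t] for t in range(2, len(y))]
    n = 4
    # normal equations A^T A x = A^T b, solved by Gaussian elimination with pivoting
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(ata[r][c]))
        ata[c], ata[p] = ata[p], ata[c]
        atb[c], atb[p] = atb[p], atb[c]
        for r in range(c + 1, n):
            f = ata[r][c] / ata[c][c]
            for k in range(c, n):
                ata[r][k] -= f * ata[c][k]
            atb[r] -= f * atb[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (atb[r] - sum(ata[r][k] * x[k] for k in range(r + 1, n))) / ata[r][r]
    return x

# Identify a known (invented) ARX(2) system excited by the PRBS:
u = prbs(300)
a1, a2, b1, b2 = 1.2, -0.35, 0.5, 0.3
y = [0.0, 0.0]
for t in range(2, len(u)):
    y.append(a1 * y[t-1] + a2 * y[t-2] + b1 * u[t-1] + b2 * u[t-2])
est = fit_arx2(u, y)  # recovers approximately (1.2, -0.35, 0.5, 0.3)
```

With noise-free data the parameters are recovered almost exactly; with real robot data the fit is only approximate, which is why students compare several candidate models in *ident*.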

Fig. 4. Speed system outputs corresponding to the PRBS input signals

Fig. 5. System identification toolbox of MATLAB.


Trend suppression and data filtering are suggested before the fitting. The complete MIMO discrete-time model obtained will be used by the students for validating the effectiveness of the control. Table 1 shows a possible set of continuous transfer functions obtained for the three different linear speed models used.
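The suggested trend suppression amounts to removing the best-fit straight line from each data record before identification. A minimal sketch:

```python
def detrend(samples: list) -> list:
    """Remove the least-squares straight line from a data record
    (the 'trend suppression' suggested before identification)."""
    n = len(samples)
    xs = range(n)
    mx = (n - 1) / 2
    my = sum(samples) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, samples))
    slope = sxy / sxx
    return [y - (my + slope * (x - mx)) for x, y in zip(xs, samples)]
```

Applied to a record that is itself a straight line, the result is identically zero; applied to robot data it leaves only the dynamics around the operating point.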


| Linear transfer function | High velocities | Medium velocities | Low velocities |
|---|---|---|---|
| $G_{DD}$ | $\frac{0.20s^2+3.10s+8.44}{s^2+6.17s+9.14}$ | $\frac{0.16s^2+2.26s+5.42}{s^2+5.21s+6.57}$ | $\frac{0.20s^2+3.15s+9.42}{s^2+6.55s+9.88}$ |
| $G_{ED}$ | $\frac{-0.02s^2-0.31s-0.03}{s^2+6.17s+9.14}$ | $\frac{-0.02s^2-0.20s-0.41}{s^2+5.21s+6.57}$ | $\frac{-0.04s^2-0.60s-0.32}{s^2+6.55s+9.88}$ |
| $G_{DE}$ | $\frac{-0.01s^2-0.13s-0.20}{s^2+6.17s+9.14}$ | $\frac{-0.01s^2-0.08s-0.17}{s^2+5.21s+6.57}$ | $\frac{-0.01s^2-0.08s-0.36}{s^2+6.55s+9.88}$ |
| $G_{EE}$ | $\frac{0.29s^2+4.11s+8.40}{s^2+6.17s+9.14}$ | $\frac{0.25s^2+3.50s+6.31}{s^2+5.21s+6.57}$ | $\frac{0.31s^2+4.47s+8.97}{s^2+6.55s+9.88}$ |

Table 1. The second-order WMR models

The second practice consists of finding a simplified robot model. The continuous-time model is introduced into the SIMULINK environment, as shown in Fig. 6. Students study the importance of the coupling terms through the analysis of the stationary gains, while the order reduction is done by searching for dominant poles. The frequency response and Bode analysis can reveal the existence of a dominant pole, which allows the model order of the system to be reduced. Students learn from the obtained results that the MIMO model can be approximated by two SISO models.

Fig. 6. MIMO robot model in SIMULINK environment.
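The dominant-pole search can be sketched numerically. Using one illustrative second-order model of the kind shown in Table 1 (the coefficients here are only an example), the poles of the denominator and the static gain directly suggest the first-order approximation:

```python
import math

# Illustrative second-order model G(s) = (0.20 s^2 + 3.10 s + 8.44) / (s^2 + 6.17 s + 9.14)
num = (0.20, 3.10, 8.44)
den = (1.0, 6.17, 9.14)

a1, a0 = den[1], den[2]
disc = math.sqrt(a1 * a1 - 4 * a0)      # both poles are real here
poles = ((-a1 + disc) / 2, (-a1 - disc) / 2)

static_gain = num[2] / den[2]           # G(0), preserved by the reduction
dominant = max(poles)                   # slowest pole (closest to the origin)
tau = -1.0 / dominant                   # time constant of the reduced model

# Reduced model: G(s) ~= static_gain / (tau*s + 1)
```

For these numbers the poles are roughly -2.5 and -3.7, the static gain is about 0.92 and the reduced time constant about 0.4 s, consistent with the first-order models of Table 2.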

Finally, the reduced SISO systems should be validated by checking the static gain and simulating the reduced and complete robot models. Table 2 shows an example of the first-order transfer functions obtained. Students can validate the results by sending open-loop speed commands to the robot.


| Linear transfer function | High velocities | Medium velocities | Low velocities |
|---|---|---|---|
| $G_{DD}$ | $\frac{0.92}{0.41s+1}$ | $\frac{0.82}{0.46s+1}$ | $\frac{0.95}{0.42s+1}$ |
| $G_{EE}$ | $\frac{0.92}{0.27s+1}$ | $\frac{0.96}{0.33s+1}$ | $\frac{0.91}{0.24s+1}$ |

Table 2. The reduced WMR model
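The simulation part of the validation can be sketched with a plain Euler integration, comparing a second-order model against its first-order reduction; the coefficients below are illustrative, in the style of the tables above:

```python
def step_second_order(num, den, t_end=3.0, dt=1e-3):
    """Unit-step response of (b2 s^2 + b1 s + b0)/(s^2 + a1 s + a0),
    integrated in controllable canonical state-space form."""
    b2, b1, b0 = num
    _, a1, a0 = den
    x1 = x2 = 0.0
    for _ in range(int(t_end / dt)):
        dx1, dx2 = x2, -a0 * x1 - a1 * x2 + 1.0     # u(t) = 1
        x1 += dt * dx1
        x2 += dt * dx2
    return (b0 - a0 * b2) * x1 + (b1 - a1 * b2) * x2 + b2  # y at t_end

def step_first_order(gain, tau, t_end=3.0, dt=1e-3):
    """Unit-step response of gain/(tau*s + 1) at t_end."""
    y = 0.0
    for _ in range(int(t_end / dt)):
        y += dt * (gain - y) / tau
    return y

full = step_second_order((0.20, 3.10, 8.44), (1.0, 6.17, 9.14))
reduced = step_first_order(0.92, 0.41)
# Both settle near the static gain of about 0.92, which is what students check.
```

A visible mismatch between the two responses (or against the robot's measured response to an open-loop speed command) tells the students the reduction was too aggressive.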


### **3.2 Designing and testing the speed controllers**

The third practice consists of obtaining a controller based on the simplified SISO models. Students design the controller using pole-placement techniques and the MATLAB design facilities. For instance, students can use the MATLAB "sisotool", shown in Fig. 7, first importing the transfer functions corresponding to both robot wheels. Then they plot the open-loop frequency response without a compensator. Afterwards, students learn to add a PI controller to the system in order to achieve the desired response. The control performance is verified by using the complete MIMO model of the robot in the presence of several perturbations.


Fig. 7. "sisotool" design window.
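What the PI design achieves can be checked with a small closed-loop simulation. The sketch below closes a PI loop around an illustrative first-order wheel-speed model (the plant and controller gains are example values, not the ones students obtain in "sisotool"):

```python
def closed_loop_step(kp, ki, gain=0.92, tau=0.41, t_end=5.0, dt=1e-3):
    """Unit-step response of a PI loop around G(s) = gain/(tau*s + 1),
    integrated with the Euler method. Plant numbers are illustrative."""
    y = integ = 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # speed reference error
        integ += e * dt
        u = kp * e + ki * integ          # PI control action
        y += dt * (gain * u - y) / tau   # first-order plant
    return y

final = closed_loop_step(kp=1.0, ki=2.0)
# The integral action drives the steady-state speed error to zero.
```

The same experiment without the integral term (`ki=0`) leaves a visible steady-state error, which motivates the PI structure chosen in the practice.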

On-robot speed control is the objective of the last MCS practice. Students first transform the controller from continuous to discrete time. Then they use the robot remote network

Gaining Control Knowledge Through an Applied Mobile Robotics Course 79

minimize the distance to the path by performing a turning action. Otherwise the desired point distance should be minimized (Pacheco and Luo, 2006). Orientation and desired point distances can be considered as important data for achieving proper path-tracking. Their deviation errors are used in the control law as necessary information for going forward or turning. Students have to compute Cartesian distances between the robot and the trajectory

The proposed control law is based on sending the same speed to both wheels when the robot distance to the trajectory is less than a heuristic threshold and different speed to each wheel in order to perform a turning action when such distance is greater. As can be observed in Fig. 9, the turning action must be different as a consequence of the robot position, up or down the trajectory, or the trajectory sense. Fig. 10 shows a trajectory

Fig. 10. Example of trajectory following experiences, using discontinuous control laws and

to be followed. Fig. 9 depicts these concepts for two cases.

Fig. 9. (a) Straight line trajectory from (Px1, Py1) to (Px2, Py2). (b) Straight line trajectory from (Px2, Py2) to (Px1, Py1).

following experience with two straight lines that have a 90º angle.

PID speed controllers.

server to send the controller parameters to the robot in order to test the speed control performance.
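The continuous-to-discrete transformation of the PI controller can be done, for instance, with the Tustin (bilinear) rule; the sketch below derives the coefficients of the resulting difference equation (the gains and sample time are example values):

```python
def pi_tustin(kp: float, ki: float, ts: float):
    """Discretize C(s) = kp + ki/s with the Tustin rule.

    Gives u(k) = u(k-1) + q0*e(k) + q1*e(k-1), since
    C(z) = [ (kp + ki*ts/2) z + (-kp + ki*ts/2) ] / (z - 1).
    """
    q0 = kp + ki * ts / 2.0
    q1 = -kp + ki * ts / 2.0
    return q0, q1

# 25 ms sample time as in the wheel-speed measurement; gains are illustrative.
q0, q1 = pi_tustin(kp=1.0, ki=2.0, ts=0.025)
```

The pair (q0, q1) is exactly the kind of parameter set the students then send to the robot through the network server.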

#### **3.3 Trajectory following using PID speed controllers**

Once the PID speed controllers are obtained, the TPC block practices are used to build trajectory-tracking controllers. The students' goal is to follow trajectories by using high-level languages and the existing communication functions that allow sending speed commands to the robot and receiving the odometer data. The coordinates (*x, y, θ*) can be expressed as:

$$\begin{aligned} \mathbf{x}(k+1) &= \mathbf{x}(k) + dS \cos \left(\theta(k) + d\theta\right) \\ \mathbf{y}(k+1) &= \mathbf{y}(k) + dS \sin \left(\theta(k) + d\theta\right) \\ \theta(k+1) &= \theta(k) + d\theta \end{aligned} \tag{1}$$

Fig. 8.a shows two consecutive robot coordinates. Fig. 8.b describes the positioning of the robot as a function of the radii of the left and right wheels (*Re, Rd*) and their angular incremental displacements (*dθe, dθd*), with *E* being the distance between the two wheels and *dS* the incremental displacement of the robot. The position and angular incremental displacements are expressed as:

$$dS = \frac{R_d\,d\theta_d + R_e\,d\theta_e}{2} \qquad d\theta = \frac{R_d\,d\theta_d - R_e\,d\theta_e}{E} \tag{2}$$

The incremental position of the robot can be obtained by the odometer system through the available encoder information from (1) and (2).

Fig. 8. (a) Robot coordinates at two consecutive instants of time. (b) Robot position as a function of the angular displacement of each wheel.
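Equations (1) and (2) translate directly into one odometry update per sampling period. In the sketch below the wheel radius and wheelbase are placeholder values (they are not given in the text), and both wheels are assumed to have the same radius R:

```python
import math

E = 0.56   # distance between the wheels (m); placeholder value
R = 0.075  # wheel radius (m); placeholder value, same for both wheels

def odometry_step(x, y, theta, dtheta_d, dtheta_e):
    """One odometry update from the incremental wheel rotations, per (1)-(2)."""
    ds = (R * dtheta_d + R * dtheta_e) / 2.0     # incremental displacement, eq. (2)
    dth = (R * dtheta_d - R * dtheta_e) / E      # incremental rotation, eq. (2)
    theta_new = theta + dth                      # eq. (1)
    x_new = x + ds * math.cos(theta + dth)
    y_new = y + ds * math.sin(theta + dth)
    return x_new, y_new, theta_new
```

Running this update at every encoder report integrates the robot pose; equal wheel rotations move the robot straight ahead, unequal ones rotate it about its center.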

When students are able to obtain the robot's coordinates, the path-planning approach consists of following a sequence of straight lines. Thus, the trajectory tracking can be performed through either straight lines or turning actions (Reeds and Shepp, 1990). Trajectory tracking with discontinuous control laws can be implemented using heuristic concepts related to experimental dynamic knowledge of the system. Therefore, when the path-distance error of the robot is greater than a heuristic threshold, the control law can


minimize the distance to the path by performing a turning action; otherwise the desired-point distance should be minimized (Pacheco and Luo, 2006). The orientation and desired-point distances can be considered important data for achieving proper path tracking. Their deviation errors are used in the control law as the information needed for going forward or turning. Students have to compute the Cartesian distances between the robot and the trajectory to be followed. Fig. 9 depicts these concepts for two cases.

Fig. 9. (a) Straight line trajectory from (Px1, Py1) to (Px2, Py2). (b) Straight line trajectory from (Px2, Py2) to (Px1, Py1).

The proposed control law sends the same speed to both wheels when the robot's distance to the trajectory is less than a heuristic threshold, and a different speed to each wheel, performing a turning action, when that distance is greater. As can be observed in Fig. 9, the turning action must differ depending on the robot position above or below the trajectory and on the trajectory sense. Fig. 10 shows a trajectory-following experience with two straight lines forming a 90º angle.

Fig. 10. Example of trajectory following experiences, using discontinuous control laws and PID speed controllers.
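The discontinuous law described above can be sketched as follows; the threshold and the speed values (in the 0..127 command range) are illustrative, and the signed distance to the line captures the above/below distinction of Fig. 9:

```python
import math

def control_action(robot, p1, p2, threshold=0.05, v=60, dv=20):
    """Heuristic discontinuous law: equal wheel speeds while the distance to the
    straight line p1->p2 is below a threshold, a turning action otherwise.
    Threshold and speed values are illustrative, not the ones used on PRIM."""
    (x, y), (x1, y1), (x2, y2) = robot, p1, p2
    # signed distance from the robot to the line through p1 and p2
    length = math.hypot(x2 - x1, y2 - y1)
    signed = ((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)) / length
    if abs(signed) < threshold:
        return v, v              # close enough: go straight
    if signed > 0:               # robot on one side of the line
        return v + dv, v - dv    # turning action toward the line
    return v - dv, v + dv        # opposite side: turn the other way
```

The sign of the distance flips when the trajectory sense is reversed, which is exactly why the two cases of Fig. 9 need different turning actions.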

#### **3.4 Trajectory following using MPC techniques**

This subsection presents the LMPC (local model predictive control) techniques based on the dynamic models obtained in the previous subsection. The LMPC strategies are simulated in the laboratory using the methods and software developed by the members of our teaching staff participating in the course (Pacheco and Luo, 2011).

#### **3.4.1 The LMPC formulation**

The main objective of highly precise motion tracking is to minimize the error between the robot and the desired path. Global path planning becomes unfeasible, since the sensorial system of some robots is only local. Hence, LMPC is proposed in order to use the available local perception data in the navigation strategies. Concretely, LMPC is based on minimizing a cost function related to the objectives for generating the optimal WMR inputs. Define the cost function as follows:

$$J(n,m)=\min_{U(k),\ldots,U(k+m-1)}\left\{\begin{aligned}&\left[X(k+n|k)-X_{ld}\right]^{T}P\left[X(k+n|k)-X_{ld}\right]\\&+\sum_{i=1}^{n-1}\left[X(k+i|k)-\overline{X_{ld}X_{l0}}\right]^{T}Q\left[X(k+i|k)-\overline{X_{ld}X_{l0}}\right]\\&+\sum_{i=1}^{n-1}\left[\theta(k+i|k)-\theta_{ld}\right]^{T}R\left[\theta(k+i|k)-\theta_{ld}\right]\\&+\sum_{i=0}^{m-1}U^{T}(k+i|k)\,S\,U(k+i|k)\end{aligned}\right\}\tag{3}$$

The first term of (3) refers to the attainment of the local desired coordinates *Xld = (xd, yd)*, where (*xd, yd*) denote the desired Cartesian coordinates; *X(k+n|k)* represents the terminal value of the predicted output after the prediction horizon *n*. The second term can be considered an orientation term: it is related to the distance between the predicted robot positions and the trajectory segment given by the straight line between the initial robot Cartesian coordinates *Xl0 = (xl0, yl0)*, from where the last perception was done, and the desired local position *Xld* to be achieved within the perceived field of view. The orientation of this line is denoted by *θld* and is the desired orientation towards the local objective. *X(k+i|k)* and *θ(k+i|k)* (*i = 1, …, n−1*) represent the predicted Cartesian and orientation values within the prediction horizon. The third term is the predicted orientation error. The last term is related to the power signals assigned to each DC motor, denoted *U*. The parameters *P, Q, R* and *S* are weighting parameters that express the importance of each term, and the control horizon is set by the parameter *m*. The system constraints are also considered:

$$\begin{cases}G_{0}<\left|U(k)\right|\leq G_{1},&\alpha\in(0,1]\\\left|X(k+n|k)-X_{ld}\right|\leq\alpha\left|X(k)-X_{ld}\right|&\\\text{or}\ \left|\theta(k+n|k)-\theta_{ld}\right|\leq\alpha\left|\theta(k)-\theta_{ld}\right|&\end{cases}\tag{4}$$

where *X(k)* and *θ(k)* denote the current WMR coordinates and orientation, and *X(k+n|k)* and *θ(k+n|k)* denote the final predicted coordinates and orientation, respectively. The limitation of the input signal is taken into account in the first constraint, where *G0* and *G1* respectively denote the dead zone and the saturation of the DC motors. The second and third constraints are contractive (Wang, 2007): they enforce the convergence of the coordinates or of the orientation to the objective, and should be accomplished at each control step.

#### **3.4.2 The algorithms and simulated results**

Using the basic ideas introduced in the previous subsection, the LMPC algorithm has the following steps:

1. Read the current position;
2. Minimize the cost function and obtain a series of optimal input signals;
3. Choose the first obtained input signal as the command signal;
4. Go back to step 1 in the next sampling period.

In order to reduce the set of possibilities when the optimal solution is searched for, some constraints over the DC motor inputs are taken into account:

• The signal increment is kept fixed within the prediction horizon.
• The input signals remain constant during the remaining interval of time.

These considerations reduce the computation time and smooth the behavior of the robot during the prediction horizon (Maciejowski, 2002). Thus, the set of available inputs is reduced to one value, as shown in Fig. 11.

Fig. 11. LMPC strategy with fixed increment of the input during the control horizon and constant value for the remaining time
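One LMPC iteration with the reduced input set can be sketched in a few lines. The prediction model below is a simple unicycle that stands in for the identified robot model, and only the terminal-position term of (3) is evaluated (P = 1, Q = R = S = 0); the candidate wheel-speed pairs are illustrative:

```python
import math

def simulate(pose, v_l, v_r, n, dt=0.1, E=0.56):
    """Predict n steps of a simple unicycle model (a stand-in for the
    identified model; E is a placeholder wheelbase)."""
    x, y, th = pose
    traj = []
    for _ in range(n):
        ds = (v_r + v_l) / 2.0
        dth = (v_r - v_l) / E
        th += dth * dt
        x += ds * dt * math.cos(th)
        y += ds * dt * math.sin(th)
        traj.append((x, y, th))
    return traj

def lmpc_step(pose, goal, candidates, n=8):
    """One LMPC iteration in the spirit of (3): try each candidate input pair
    (the reduced input set of Fig. 11), predict n steps, and keep the input
    whose terminal position is closest to the local objective."""
    best_u, best_cost = None, float("inf")
    for u in candidates:
        xf, yf, _ = simulate(pose, u[0], u[1], n)[-1]
        cost = (xf - goal[0]) ** 2 + (yf - goal[1]) ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

u = lmpc_step(pose=(0.0, 0.0, 0.0), goal=(1.0, 0.0),
              candidates=[(0.5, 0.5), (0.2, 0.8), (0.8, 0.2)])
```

Only the first chosen input is applied before the optimization is repeated, which is step 4 of the algorithm above; the full simulation software also evaluates the Q, R and S terms and the contractive constraints (4).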

Gaining Control Knowledge Through an Applied Mobile Robotics Course 83

Fig. 13. Simulation results obtained (*m*=4, *n*=8, *P*=1, *Q*=1, *R*=1 and *S*=0).

Fig. 14. Simulation results obtained (*m*=4, *n*=8, *P*=1, *Q*=2, R=1 and *S*=0).

The teaching experience gained from this course has shown the usefulness of the mobile robot as an important experimental platform for computer engineering education. The

**4. Conclusions** 

The LMPC simulation software provides to students a set of files and facilities to draw the results, and consequently the students can represent the files containing the cost function values, for different available set of inputs, corresponding to different moments of the simulation. Simulation environment allows testing the control law with different set up. Therefore, different control and prediction horizons can be tested. The control law parameters of equation (3) can be analyzed and modified according to the different horizons of prediction. Fig. 12 depicts the results obtained, *m*=3 and *n*=5, when a trajectory composed of 3 straight lines is followed. The lines are defined by the coordinates (0, 0), (0, 100), (50, 200) and (250, 200) where (*x*, *y*) are given in cm. It is noted that the sample time is 0.1s, so *n*=5 means 0.5s.

Fig. 12. Simulation results obtained (*m*=3, *n*=5, *P*=1, *Q*=1, *R*=1 and *S*=0).

Fig. 13 shows the simulation results when the control horizon is set to 4 and the prediction horizon to 8.

The results in Fig. 13 show that the robot stops near the (50, 200) coordinates due to the contractive constraints, and consequently the final coordinates (250, 200) cannot be reached. The reason is that each new straight-line trajectory is commanded when the final coordinate of the current trajectory is reached within a tolerance threshold, but in this case the coordinates (50, 200) are not reached even considering the tolerance threshold. The problem can be solved by modifying the parameter weights of the control law, as shown in Fig. 14.
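The waypoint-switching rule just described can be sketched as follows; the tolerance value and all names are illustrative assumptions, not the ones used in the course software:

```python
import math

def next_waypoint_index(pose, waypoints, idx, tol_cm=5.0):
    """Advance to the next straight-line segment once the robot is
    within a tolerance threshold of the current segment's endpoint
    (coordinates in cm, as in the simulations above)."""
    x, y = pose
    gx, gy = waypoints[idx]
    if math.hypot(gx - x, gy - y) <= tol_cm and idx < len(waypoints) - 1:
        return idx + 1      # command the next straight-line trajectory
    return idx              # keep tracking the current endpoint
```

If the robot never enters the tolerance circle of an intermediate endpoint, the index is never advanced, which is exactly the stall seen in Fig. 13.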

82 Mobile Robots – Current Trends


Fig. 13. Simulation results obtained (*m*=4, *n*=8, *P*=1, *Q*=1, *R*=1 and *S*=0).

Fig. 14. Simulation results obtained (*m*=4, *n*=8, *P*=1, *Q*=2, *R*=1 and *S*=0).

#### **4. Conclusions**

The teaching experience gained from this course has shown the usefulness of the mobile robot as an important experimental platform for computer engineering education.


The course framework allows different levels of knowledge learning according to the students' skills. Various control techniques have been presented to the students as a main means of improving their knowledge. They can acquire a higher level of skills by integrating different subjects related to electronics, control systems, computer science, communication, etc. Moreover, the students have the opportunity to become familiar with basic control techniques as well as with advanced methods such as MPC. It should be mentioned that the developed lab practices, related to their theoretical counterparts, have increased the motivation of the students and have achieved the multidisciplinary knowledge consolidation targeted by the control education objectives. In this context, according to the students' feedback, the average opinion of the 30 students of the course is 4 out of 5 across the different survey questions that the university provides at the end of the course. They underline the interesting and attractive educational and practical aspects as the relevant characteristics of the course. One of the suggestions is to increase the number of hours in order to cover the theoretical analysis, lab practices and algorithm implementation in more depth. Furthermore, some students decide to do their compulsory final degree project by improving the robot performance in some practical aspect; e.g., currently the design of factorial experiments for tuning the MPC cost function. In the future, the teaching staff from different research areas will do their utmost to promote the development and consolidation of the course so that the quality of teaching can be further improved. In this process, the students' opinions are important enough to encourage this work.

Therefore, the path to renewed progress in robotics lies in the integration of several fields of knowledge, such as computing, communications, and control sciences, so as to perform higher-level reasoning and use decision tools with a strong theoretical base.

#### **5. Acknowledgment**

This work has been partially funded by the Commission of Science and Technology of Spain (CICYT) through the coordinated projects DPI2007-66796-C03-02 and DPI2008-06699-C02-01. The authors are grateful for the support provided by Javi Cobos in implementing a useful on-robot network server as well as the different functions that allow the dynamic interactions in the robot.

#### **6. References**


Amante, B., Romero, C., Piñuela, J.A. (2007). Aceptación de la metodología del aprendizaje colaborativo en diferentes ciclos de las carreras técnicas. *Cuadernos de innovación educativa de las Enseñanzas Técnicas Universitarias*, Vol. 1, No. 1, 2007, 65-74.

Antsaklis, P., Basar, T., Decarlo, R., McClamroch, N. H., Spong, M., and Yurkovich, S. (1999). Report on the NSF/CSS workshop on new directions in control engineering education. *IEEE Control Systems Magazine*, Vol. 19, No. 5, (Oct. 1999), 53-58.

Aström, K.J. and Wittenmark, B. (1988). Computer Controlled Systems: Theory and Design. *Prentice-Hall International Ed.*, New Jersey, 1988.

Barnett, R. H. (1994). The 8051 Family of Microcontrollers. *Prentice Hall International Ed.*, New Jersey, 1994.

Carnegie Mellon University. *The Robotics Institute: Robotics Education in the School of Computer Science*. In URL: http://www.ri.cmu.edu, 2011.

European Commission. *Higher Education in Europe, inside the policy areas of the European Commission*. In URL: http://ec.europa.eu/education/policies/higher/higher\_en.html, 2008.

Horn, B. K. P. (1998). Robot Vision. *MIT Press*, Ed. McGraw-Hill, ISBN 0-262-08159-8.

Hristu-Varsakelis, D., and Brockett, R. W. (2002). Experimenting with Hybrid Control. *IEEE Control Systems Magazine*, Vol. 22, No. 1, (Feb. 2002), 82-95.

Kuo, C. (1996). Automatic Control Systems. *Prentice-Hall International Editors*, New Jersey, 1996.

Ljung, L. (1989). System Identification: Theory for the User. *Prentice-Hall Inter. Ed.*, New Jersey, 1989.

Maciejowski, J.M. (2002). Predictive Control with Constraints. *Ed. Prentice Hall*, ISBN 0-201-39823-0, Essex (England).

Murray, R. M., Aström, K. J., Boyd, S. P., Brockett, R. W., and Stein, G. (2003). Future Directions in Control in an Information-Rich World. *IEEE Control Systems Magazine*, Vol. 23, No. 2, (April 2003), 20-33.

Norton, J. (1986). An Introduction to Identification. *Academic Press*, ISBN 0125217307, London and New York, 1986.

Pacheco, L., Batlle, J., Cufí, X., Arbusé, R. (2006). PRIM an Open Mobile Platform. Motivation, Present and Future Trends. *Inter. Conf. on Automation, Quality and Testing, Robotics*, Vol. 2, 231-236, 2006.

Pacheco, L. and Luo, N. (2006). Mobile robot experimental modelling and control strategies using sensor fusion. *Control Engineering and Applied Informatics*, Vol. 8, No. 3, (2006), 47-55.

Pacheco, L., Luo, N., Arbusé, R., Ferrer, I., and Cufí, X. (2009). Interdisciplinary Knowledge Integration Through an Applied Mobile Robotics Course. *International Journal of Engineering Education*, Vol. 25, No. 4, (July 2009), 830-840.

Pacheco, L., Luo, N. (2011). Mobile robot local trajectory tracking with dynamic model predictive control techniques. *International Journal of Innovative Computing, Information and Control*, Vol. 7, No. 6, (June 2011), ISSN 1349-4198, 3457-3483.

Qu, Z. and Wu, X.H. (2006). A new curriculum on planning and cooperative control of autonomous mobile robots. *International Journal of Engineering Education*, Vol. 22, No. 4, (July 2006), 804-814.

Reeds, J. A. and Shepp, L. A. (1990). Optimal paths for a car that goes forwards and backwards. *Pacific Journal of Mathematics*, Vol. 145, 1990.

Schilling, R. J. (1990). Fundamentals of Robotics. *Prentice-Hall International Ed.*, New Jersey, 1990.

Valera, A., Weiss, M., Valles, M., Diez, J. L. (2007). Control of mobile robots using mobile technologies. *International Journal of Engineering Education*, Vol. 23, No. 3, (March 2007), 491-498.

Wakerly, J.F. (1992). Digital Design Principles and Practices. *Prentice-Hall International Ed.*, New Jersey, 1992.

Wan, J. (2007). Computationally reliable approaches of contractive MPC for discrete-time systems. *PhD Thesis, University of Girona*.

Wilczynski, V. and Flowers, W. (2006). FIRST Robotics Competition: University curriculum applications of mobile robots. *International Journal of Engineering Education*, Vol. 22, No. 4, (July 2006), 792-803.

**Part 2** 

**Health-Care and Medical Robots** 





**5**

## **Walking Support and Power Assistance of a Wheelchair Typed Omnidirectional Mobile Robot with Admittance Control**

Chi Zhu1, Masashi Oda1, Haoyong Yu2, Hideomi Watanabe3 and Yuling Yan<sup>4</sup> <sup>1</sup>*Department of Systems Life Engineering, Maebashi Institute of Technology, Kamisadori 460-1, Maebashi, Gunma, 371-0816.* <sup>2</sup>*Bioengineering Division, Faculty of Engineering, National University of Singapore* <sup>3</sup>*Department of Health Sciences, Faculty of Medicine, Gunma University* <sup>4</sup>*School of Engineering, Santa Clara University, CA* 1,3*Japan* <sup>2</sup>*Singapore* <sup>4</sup>*USA*

#### **1. Introduction**

Walking ability is essential for the elderly's and the disabled's self-supported daily life. Currently, the development of various robotic aids or devices for helping the elderly walk has attracted many researchers' attention. Guidecane ((Ulrich & Borenstein, 2001)), PAM-AID ((Lacey & Dawson-Howe, 1998)), Care-O-bot ((Graf et al., 2004)), PAMM ((S. Dubowsky, 2000)), etc., have been developed for intelligent walking support and other tasks for the elderly or the disabled. Mainly for the elderly's walking support, a robotic power-assisted walking support system has been developed (Nemoto & M. Fujie, 1998). The system supports the elderly who need help in standing up from the bed, walking around, and sitting down for rehabilitation.

Meanwhile, a number of intelligent wheelchair-type aids are being designed for people who cannot walk and have extremely poor dexterity ((Yanco, 1998)-(Dicianno et al., 2007)). These devices are well suited for people who have little or no mobility, but they are not appropriate for the elderly with significant cognitive problems. Consequently, care facilities are generally reluctant to permit elderly residents to use powered wheelchairs. Hence, the better solution is for the facility staff (the caregiver) to push the wheelchair. Some technologies for the caregiver's power assistance have been proposed ((Kakimoto et al., 1997)-(Miyata et al., 2008)).

Further, since omnidirectional mobility is necessary to truly follow the user's (here, the elderly/disabled person's or the caregiver's) intended walking speed and direction, omnidirectional mobility is investigated in order to obtain a smooth motion and a high payload capability, not with special tires but with conventional rubber or pneumatic tires. An omnidirectional and holonomic vehicle with two offset steered driving wheels and two free casters has been proposed (Wada & Mori, 1996). An omnidirectional mobile robot platform using active dual-wheel caster mechanisms has been developed (Han et al., 2000), in which the kinematic model of the active dual-wheel caster mechanism is derived and a holonomic and omnidirectional motion of the mobile robot using two such assembly mechanisms is realized. An omnidirectional platform and vehicle design using active split offset casters (ASOC) has also been developed (Yu et al., 2004). Its structure is very similar to that of (Han et al., 2000), but particular attention is paid to the system performance on uneven floors.

However, up to now, a wheelchair-type robot that can be used for the elderly's walking support, for the disabled's walking rehabilitation, or for the caregiver's power assistance has not been reported. In this study, we develop such a wheelchair-type omnidirectional robot that not only supports the elderly's or the disabled's walking, but also lessens the caregivers' burden, i.e., gives the caregivers power assistance.

#### **2. Overview of the developed robot**

The newly developed omnidirectional mobile robot for the elderly's walking support, the disabled's rehabilitation, and the caregiver's power assistance is shown in Fig. 1. Its structure is illustrated in Fig. 2.

Fig. 1. The omnidirectional mobile robot under development for the elderly's walking support, the disabled's walking rehabilitation, and the caregiver's power assistance

Fig. 2. The structure of the robot: handle bar, 6-axis force/torque sensor, seat, RFID antenna, passive RFID tags, 2D laser range/area sensor, active casters (dual-wheel driving units), free casters, and the battery/control/driving unit


#### **2.1 The robot objective and functions**

Different from other intelligent aids, the robot has a handle and a seat. The user holds the handle with his/her hands during walking. Usually, the elderly's walking ability (the attainable walking distance) can be greatly enhanced with the help of intelligent walking support devices. However, because of muscle weakness, the knees and waist of an elderly person become painful after walking some distance; at that time, he or she hopes to have a seat for a rest. To meet this very real requirement, we embed a seat in the robot so that the elderly or the disabled can sit down and rest when necessary. On the other hand, when the elderly or disabled person is sitting in the seat of a wheelchair-type aid and wants to go somewhere, the caregiver has to push the wheelchair. Hence, power assistance for the caregivers is also necessary.

Consequently, the main objectives of the robot are the walking support of the elderly/disabled and the power assistance of the caregiver. To meet these two main objectives, the functions the robot should have are:

1. omnidirectional mobility;
2. little load feeling when the user (the elderly/disabled/caregiver) holds the handle during walking or pushing the robot;
3. guidance to destinations and obstacle avoidance via sensors and pre-determined maps;
4. adaptation to the different motion abilities of the different users.


#### **2.2 Robot structure**

The robot, shown in Figs. 1 and 2, is intended to be used either indoors or outdoors. It includes two active dual-wheel modules and two free casters that provide the robot with omnidirectional mobility. This will be discussed in detail in section 3.

A 6-axis force/torque sensor, mounted on the shaft of the handle, detects the forces and torques the user applies to the handle. With these force/torque signals, the embedded computer estimates the user's intended walking velocity and direction, and the robot is controlled to comply with the user's intention so that the user feels little load during walking support and power assistance. The details will be explained in section 4.
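Although the control law itself is detailed in section 4 (outside this excerpt), the general shape of such a force-to-velocity (admittance) mapping can be sketched as follows; the virtual mass and damping values and all names are illustrative assumptions, not the robot's actual parameters:

```python
def admittance_update(v, f, m=10.0, d=20.0, dt=0.01):
    """One step of a basic admittance law  M*dv/dt + D*v = F:
    the handle force f (N) is turned into a velocity command v (m/s),
    so the robot yields to the user's push. M, D, dt are illustrative."""
    dv = (f - d * v) / m        # acceleration from virtual dynamics
    return v + dv * dt          # forward-Euler integration

# Steady pushing with a constant 10 N force converges toward v = F/D = 0.5 m/s
v = 0.0
for _ in range(5000):
    v = admittance_update(v, f=10.0)
```

A larger virtual damping D makes the robot feel heavier (lower steady-state speed for the same force), while a larger virtual mass M makes it respond more slowly.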

The 2D laser range/area finder is employed to detect obstacles. The RFID system, consisting of the RFID antenna under the seat and passive RFID tags laid under or near places such as the door, corner, and elevator, is used to identify the robot position and orientation. They are used for destination guidance and obstacle avoidance; the problems of such obstacle avoidance and navigation are not discussed in this chapter.

Fig. 3. The control system of the robot: the board computer connects through the PCI bus to a multi-functional interface board (A/D and D/A), which reads the force/torque sensor and the 2D laser range/area finder and sends current commands to the motor drivers driving the motors with encoders

The control system of the robot, shown in Fig. 3, consists of a board computer running the RTLinux operating system, a multi-functional interface board, and two motor drivers, each of which can drive two motors.

#### **3. Omnidirectional mobility**

The wheels of the robot consist of two active dual-wheel modules and two free casters. The two active dual-wheel modules give the robot omnidirectional mobility, and the two free casters keep the robot balanced. First we analyze the kinematics of one dual-wheel module, then investigate the omnidirectional mobility of the robot realized by the two dual-wheel modules.

#### **3.1 Kinematics of one dual-wheel module**

As shown in Fig. 4, an active dual-wheel module consists of two independently driven coaxial wheels, which are separated by a distance 2*d* and connected via an offset link *s* to the robot at the joint *Oi*. The coordinate systems, variables, and parameters are defined as follows.

*OwXwYw*: a world coordinate system.

*OiXiYi*: a moving coordinate system attached to the wheel module *i* at the offset link joint *Oi*.

*xi*, *yi*: coordinates of the origin *Oi* of the frame *OiXiYi* with respect to the ground (the frame *OwXwYw*); their time derivatives give the velocities below.

*x*˙*i*, *y*˙*i*: velocities of the joint *Oi* respectively in the *Xi*,*Yi* axes.

*x*˙*wi*, *y*˙*wi*: velocities of the joint *Oi* respectively in the *Xw*, *Yw* axes.

*φi*: orientation of the module *i* with respect to the world frame.

ui, u*wi*: velocity vectors of the joint *Oi* respectively with respect to the frame *OiXiYi* and ground *OwXwYw*, where, u<sup>i</sup> = [*x*˙*i*, *y*˙*i*] *<sup>T</sup>* and u*wi* = [*x*˙*wi*, *y*˙*wi*] *T*.

*ωli*, *ωri*: respectively the left and right wheel's angular velocities of the module *i*.


Fig. 4. Model of one active dual-wheel module

ωi: angular velocity vector of the module *i*. ω<sup>i</sup> = [*ωli*, *ωri*] *T*. *r*: radius of the wheels.

The linear velocities of the joint *Oi* of the module *i* can be expressed as:

$$\boldsymbol{u}\_{i} = \begin{bmatrix} \dot{\boldsymbol{x}}\_{i} \\ \dot{\boldsymbol{y}}\_{i} \end{bmatrix} = \frac{r}{2} \begin{bmatrix} 1 & 1 \\ \frac{s}{d} & -\frac{s}{d} \end{bmatrix} \cdot \begin{bmatrix} \omega\_{ri} \\ \omega\_{li} \end{bmatrix} = \boldsymbol{J}\_{i} \cdot \boldsymbol{\omega}\_{i} \tag{1}$$

where J<sup>i</sup> is the Jacobian matrix of the module in the moving frame *OiXiYi*.

On the other hand, the velocity of the joint *Oi* expressed in the world system is given by

$$u\_{wi} = \begin{bmatrix} \dot{x}\_{wi} \\ \dot{y}\_{wi} \end{bmatrix} = \begin{bmatrix} \cos\phi\_i & -\sin\phi\_i \\ \sin\phi\_i & \cos\phi\_i \end{bmatrix} \cdot u\_i = R\_i \cdot u\_i \tag{2}$$

Therefore, the kinematics of the module between the velocity of the joint *Oi* and the two wheel rotation velocities is expressed as:

$$u\_{wi} = R\_i \cdot J\_i \cdot \omega\_i = J\_{wi} \cdot \omega\_i \tag{3}$$

where J*wi* is the Jacobian matrix of the module in the world frame *OwXwYw*, and it is given as:

$$J\_{wi} = \frac{r}{2} \begin{bmatrix} \cos\phi\_i - \frac{s}{d}\sin\phi\_i & \cos\phi\_i + \frac{s}{d}\sin\phi\_i \\ \sin\phi\_i + \frac{s}{d}\cos\phi\_i & \sin\phi\_i - \frac{s}{d}\cos\phi\_i \end{bmatrix} \tag{4}$$

The above expressions indicate that the velocity of the module is determined by the parameters *s*, *d*, *r*, and *ωri*, *ωli*. But since *s*, *d*, *r* are structurally determined for a robot, the only controllable values are angular velocities *ωri*, *ωli* of the two driving wheels. The determinant of J*wi* is :
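As a numerical sketch of eqs. (3)-(5), the module Jacobian can be evaluated directly; the parameter values below (wheel radius *r*, offset *s*, half wheel separation *d*) are illustrative assumptions, not values from this chapter.

```python
import numpy as np

def module_jacobian_world(phi_i, r=0.1, s=0.05, d=0.2):
    """Jacobian J_wi of one dual-wheel module in the world frame, eq. (4).
    r: wheel radius, s: offset link length, d: half wheel separation
    (the numeric defaults are illustrative, not taken from the chapter)."""
    c, sn = np.cos(phi_i), np.sin(phi_i)
    return (r / 2.0) * np.array([[c - (s / d) * sn, c + (s / d) * sn],
                                 [sn + (s / d) * c, sn - (s / d) * c]])

J = module_jacobian_world(phi_i=0.3)

# Joint velocity u_wi for wheel speeds [omega_ri, omega_li], eq. (3)
u_wi = J @ np.array([2.0, 1.5])

# Eq. (5): det J_wi = -r^2 s / (2 d), independent of phi_i,
# so the module is never singular as long as s != 0
det_J = np.linalg.det(J)
```

Evaluating `det_J` for several orientations confirms that the determinant stays constant at −*r*²*s*/(2*d*), which is the no-singularity property stated below eq. (5).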


Fig. 5. Model of two active dual-wheel modules

$$\det \mathbf{J}\_{\rm{w}l} = -\frac{\mathbf{r}^{2}\mathbf{s}}{2\mathbf{d}}\tag{5}$$

This means that the module has no singularity as long as the distance *s* is not zero. This fact implies that arbitrary and unique velocities at the joint point *Oi* can be achieved by appropriately controlling the rotational velocities of the two wheels. Note that one such module has only 2 DOF; in other words, omnidirectional mobility (3 DOF) cannot be achieved with a single module. Consequently, at least two modules are necessary to realize omnidirectional mobility. In this study, two modules are used to achieve the omnidirectional mobility.

#### **3.2 Omnidirectional mobility with two modules**

As shown in Fig. 5, two active dual-wheel modules constitute the basic part of the mobile robot and enable it to have omnidirectional mobility.

In Fig. 5, point *O* is the center of the robot platform. Its coordinates and velocities in the world frame *OwXwYw* are respectively (*xw*, *yw*) and (*vxw*, *vyw*). The frame *OXRYR* is attached to the robot. *φ* is the orientation of the robot. The vector V = [*vxw*, *vyw*, *φ*˙] *<sup>T</sup>* is the velocity vector of the robot. The distance between the two joint points of two modules is *L*. ω = [*ωr*1, *ωl*1, *ωr*2, *ωl*2] *<sup>T</sup>* is the angular velocity vector of the four driving wheels of the robot.

To obtain the rotational velocity *φ*˙ of the robot, it is convenient to transform *x*˙*wi*, *y*˙*wi* in the world frame to the robot frame *OXRYR*. The relationship is given by

$$
\begin{bmatrix} \dot{x}\_{Ri} \\ \dot{y}\_{Ri} \end{bmatrix} = \begin{bmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{bmatrix} \cdot \begin{bmatrix} \dot{x}\_{wi} \\ \dot{y}\_{wi} \end{bmatrix} \tag{6}
$$

Therefore, *φ*˙ of the robot can be expressed as

$$
\dot{\phi} = \frac{\dot{\mathbf{x}}\_{R2} - \dot{\mathbf{x}}\_{R1}}{L} \tag{7}
$$

$$=\frac{1}{L}\left[ (\dot{\mathbf{x}}\_{w2} - \dot{\mathbf{x}}\_{w1})\cos\phi + (\dot{y}\_{w2} - \dot{y}\_{w1})\sin\phi \right] \tag{8}$$

Hence, the velocity vector V can be expressed as:

$$\mathbf{V} = \begin{bmatrix} v\_{xw} \\ v\_{yw} \\ \dot{\Phi} \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ -\frac{\cos\phi}{L} & -\frac{\sin\phi}{L} & \frac{\cos\phi}{L} & \frac{\sin\phi}{L} \end{bmatrix} \cdot \begin{bmatrix} \dot{\mathbf{x}}\_{w1} \\ \dot{\mathbf{y}}\_{w1} \\ \dot{\mathbf{x}}\_{w2} \\ \dot{\mathbf{y}}\_{w2} \end{bmatrix} \tag{9}$$

From (3), the velocity vector [*x*˙*w*1, *y*˙*w*1, *x*˙*w*2, *y*˙*w*2] *<sup>T</sup>* in the above expression is given by

$$
\begin{bmatrix} \dot{x}\_{w1} \\ \dot{y}\_{w1} \\ \dot{x}\_{w2} \\ \dot{y}\_{w2} \end{bmatrix} = \frac{r}{2} \begin{bmatrix} \mathbf{A}\_1 & \mathbf{0}\_{2\times 2} \\ \mathbf{0}\_{2\times 2} & \mathbf{A}\_2 \end{bmatrix} \cdot \begin{bmatrix} \omega\_{r1} \\ \omega\_{l1} \\ \omega\_{r2} \\ \omega\_{l2} \end{bmatrix} = \mathbf{A} \cdot \begin{bmatrix} \omega\_{r1} \\ \omega\_{l1} \\ \omega\_{r2} \\ \omega\_{l2} \end{bmatrix} \tag{10}
$$

where


$$\mathbf{A}\_{i} = \begin{bmatrix} \cos\phi\_{i} - \frac{s}{d}\sin\phi\_{i} & \cos\phi\_{i} + \frac{s}{d}\sin\phi\_{i} \\ \sin\phi\_{i} + \frac{s}{d}\cos\phi\_{i} & \sin\phi\_{i} - \frac{s}{d}\cos\phi\_{i} \end{bmatrix} \tag{11}$$

Furthermore, the robot possessing omnidirectional mobility realized by two active dual-wheel modules has three degrees of freedom with a total of four actuators, so there should be a constraint to resolve this redundancy. The physically constant distance between the two joint points *O*<sup>1</sup> and *O*<sup>2</sup> is used to eliminate the redundancy. This leads to the following velocity constraint:

$$
\dot{y}\_{R1} = \dot{y}\_{R2} \tag{12}
$$

From (6), this means

$$0 = \dot{\mathfrak{x}}\_{w1} \sin \phi - \dot{\mathfrak{y}}\_{w1} \cos \phi - \dot{\mathfrak{x}}\_{w2} \sin \phi + \dot{\mathfrak{y}}\_{w2} \cos \phi \tag{13}$$

By combining Eqs. (9), (10) and (13), we can get the following homogeneous forward kinematics of the robot:

$$
\begin{bmatrix} V \\ 0 \end{bmatrix} = \begin{bmatrix} v\_{xw} \\ v\_{yw} \\ \dot{\phi} \\ 0 \end{bmatrix} = \frac{1}{2} \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ -\frac{\cos\phi}{L} & -\frac{\sin\phi}{L} & \frac{\cos\phi}{L} & \frac{\sin\phi}{L} \\ \sin\phi & -\cos\phi & -\sin\phi & \cos\phi \end{bmatrix} \cdot \begin{bmatrix} \dot{x}\_{w1} \\ \dot{y}\_{w1} \\ \dot{x}\_{w2} \\ \dot{y}\_{w2} \end{bmatrix}
$$

$$
= \mathbf{B} \cdot \begin{bmatrix} \dot{x}\_{w1} \\ \dot{y}\_{w1} \\ \dot{x}\_{w2} \\ \dot{y}\_{w2} \end{bmatrix} = \mathbf{B} \cdot \mathbf{A} \cdot \begin{bmatrix} \omega\_{r1} \\ \omega\_{l1} \\ \omega\_{r2} \\ \omega\_{l2} \end{bmatrix} = \mathbf{B} \cdot \mathbf{A} \cdot \boldsymbol{\omega} \tag{14}
$$
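The forward kinematics (14) can be sketched numerically as follows; the geometric parameters (*r*, *s*, *d*, *L*) are illustrative assumptions, not values from this chapter.

```python
import numpy as np

def A_i(phi_i, s=0.05, d=0.2):
    """Per-module matrix A_i of eq. (11)."""
    c, sn = np.cos(phi_i), np.sin(phi_i)
    return np.array([[c - (s / d) * sn, c + (s / d) * sn],
                     [sn + (s / d) * c, sn - (s / d) * c]])

def forward_kinematics(omega, phi1, phi2, phi, r=0.1, L=0.6):
    """[V; 0] = B . A . omega, eq. (14), with omega = [w_r1, w_l1, w_r2, w_l2]."""
    A = (r / 2.0) * np.block([[A_i(phi1), np.zeros((2, 2))],
                              [np.zeros((2, 2)), A_i(phi2)]])
    c, sn = np.cos(phi), np.sin(phi)
    B = 0.5 * np.array([[1, 0, 1, 0],
                        [0, 1, 0, 1],
                        [-c / L, -sn / L, c / L, sn / L],
                        [sn, -c, -sn, c]])
    v = B @ A @ np.asarray(omega, dtype=float)
    # (v_xw, v_yw, phi_dot) plus the residual of the constraint (13),
    # which must be zero for wheel speeds consistent with the rigid body
    return v[:3], v[3]

# All four wheels at equal speed with aligned modules: pure translation
V, residual = forward_kinematics([1.0, 1.0, 1.0, 1.0], 0.0, 0.0, 0.0)
```

With equal wheel speeds and all orientations zero, the sketch returns a pure forward velocity and a zero constraint residual, matching the redundancy argument around eqs. (12)-(13).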


Hence, for the inverse kinematics of the robot, i.e., given the desired velocity vector V<sup>d</sup> of the robot, the necessary driving velocity vector ω<sup>d</sup> of the four wheels is given by

$$
\boldsymbol{\omega}\_{d} = \begin{bmatrix} \omega\_{r1} \\ \omega\_{l1} \\ \omega\_{r2} \\ \omega\_{l2} \end{bmatrix} = \mathbf{A}^{-1} \cdot \mathbf{B}^{-1} \cdot \begin{bmatrix} v\_{xw} \\ v\_{yw} \\ \dot{\phi} \\ 0 \end{bmatrix} = \mathbf{A}^{-1} \cdot \mathbf{B}^{-1} \cdot \begin{bmatrix} \mathbf{V}\_{d} \\ 0 \end{bmatrix} \tag{15}
$$

where,

$$\mathbf{A}^{-1} = \frac{1}{r} \begin{bmatrix} \mathbf{A}\_1^{-1} & \mathbf{0} \\ \mathbf{0} & \mathbf{A}\_2^{-1} \end{bmatrix} \tag{16}$$

$$\mathbf{A}\_{i}^{-1} = \begin{bmatrix} \cos\phi\_{i} - \frac{d}{s}\sin\phi\_{i} & \sin\phi\_{i} + \frac{d}{s}\cos\phi\_{i} \\ \cos\phi\_{i} + \frac{d}{s}\sin\phi\_{i} & \sin\phi\_{i} - \frac{d}{s}\cos\phi\_{i} \end{bmatrix} \tag{17}$$

$$\mathbf{B}^{-1} = \begin{bmatrix} 1 & 0 & -L\cos\phi & \sin\phi \\ 0 & 1 & -L\sin\phi & -\cos\phi \\ 1 & 0 & L\cos\phi & -\sin\phi \\ 0 & 1 & L\sin\phi & \cos\phi \end{bmatrix} \tag{18}$$

Now, since the fourth element of the extended velocity vector is zero, the corresponding fourth column of B−<sup>1</sup> can be dropped, rewriting B−<sup>1</sup> as:

$$\mathbf{B}^{\*} = \begin{bmatrix} 1 & 0 & -L\cos\phi \\ 0 & 1 & -L\sin\phi \\ 1 & 0 & L\cos\phi \\ 0 & 1 & L\sin\phi \end{bmatrix} \tag{19}$$

We can get the following inverse kinematic equation of the robot:

$$
\boldsymbol{\omega}\_{d} = \begin{bmatrix} \omega\_{r1} \\ \omega\_{l1} \\ \omega\_{r2} \\ \omega\_{l2} \end{bmatrix} = \mathbf{A}^{-1} \cdot \mathbf{B}^{\*} \cdot \begin{bmatrix} v\_{xw} \\ v\_{yw} \\ \dot{\phi} \end{bmatrix} = \mathbf{A}^{-1} \cdot \mathbf{B}^{\*} \cdot \mathbf{V}\_{d} \tag{20}
$$

As discussed in Section 4, V<sup>d</sup>, the desired velocity of the robot, is obtained from the introduced admittance controller, and ω<sup>d</sup> is used as each wheel's speed command for velocity control.
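A numerical sketch of the inverse kinematics (20) follows, under the same illustrative parameters as before; here the block-diagonal A is inverted numerically with `np.linalg.solve` rather than via the closed form of eqs. (16)-(17).

```python
import numpy as np

def inverse_kinematics(v_d, phi1, phi2, phi, r=0.1, s=0.05, d=0.2, L=0.6):
    """Wheel speed commands omega_d = A^-1 . B* . V_d, eq. (20).
    v_d = [v_xw, v_yw, phi_dot] is the desired robot velocity.
    All geometric parameters are illustrative assumptions."""
    def A_i(p):
        c, sn = np.cos(p), np.sin(p)
        return np.array([[c - (s / d) * sn, c + (s / d) * sn],
                         [sn + (s / d) * c, sn - (s / d) * c]])
    A = (r / 2.0) * np.block([[A_i(phi1), np.zeros((2, 2))],
                              [np.zeros((2, 2)), A_i(phi2)]])
    c, sn = np.cos(phi), np.sin(phi)
    B_star = np.array([[1.0, 0.0, -L * c],
                       [0.0, 1.0, -L * sn],
                       [1.0, 0.0,  L * c],
                       [0.0, 1.0,  L * sn]])
    # Solve A . omega_d = B* . v_d instead of forming A^-1 explicitly
    return np.linalg.solve(A, B_star @ np.asarray(v_d, dtype=float))

# A pure forward velocity maps back to four equal wheel speeds
omega_d = inverse_kinematics([0.1, 0.0, 0.0], 0.0, 0.0, 0.0)
```

Feeding the resulting wheel speeds back through the forward kinematics of eq. (14) reproduces the commanded velocity, which is a convenient round-trip check when implementing the controller.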

#### **4. Admittance based interaction control for power assistance**

The admittance of a mechanical system is defined as (Newman, 1992)

$$G = \frac{V}{F} \tag{21}$$


where V is the velocity and F is the contact or applied force, both at the point of interaction. A large admittance corresponds to a rapid motion induced by applied forces, while a small admittance represents a slow reaction to applied forces.

Here, an admittance controller is introduced to provide power assistance, both when the elderly or disabled user holds the handle to walk and when the caregiver holds the handle to push the robot while the elderly or disabled person sits in the seat. Since the user's (the elderly, the disabled, or the caregiver's) walking speed does not change much, a large admittance implies that only relatively small forces need to be exerted on the robot. This is the basic principle of power assistance. Note that similar approaches have already been used for walking support of the elderly (Nemoto & Fujie (1998); Yu et al. (2003)); since our robot can be used both by the elderly/disabled and by the caregiver, the approach is extended here to power assistance for the caregiver.

In this study, the admittance of the human-robot system is defined as a transfer function with the user's applied forces and torques, F (*s*), as input and the robot's velocities, V (*s*), as output. The time response Vd(*t*) of the admittance model is used as the desired velocity of the robot. Then, the desired driving speed ω<sup>d</sup> of each wheel is calculated from Vd(*t*) by the inverse kinematics equation (20), and ω<sup>d</sup> is used as each wheel's speed command for velocity control. The admittance-based control process is shown in Fig. 6, in which a digital LF (low-pass filter) cuts off the high-frequency noise in the force/torque signals from the 6-axis force/torque sensor.

In the forward direction (*XR* direction in Fig.5), the admittance can be expressed as

$$G\_{\mathbf{x}}(\mathbf{s}) = \frac{V\_{\mathbf{x}}(\mathbf{s})}{F\_{\mathbf{x}}(\mathbf{s})} = \frac{1}{M\_{\mathbf{x}}\mathbf{s} + D\_{\mathbf{x}}} \tag{22}$$

where *Mx* and *Dx* are respectively the virtual mass and the virtual damping of the system in the forward (*XR*) direction.

Fig. 6. Admittance based control

The time response *Vxd*(*t*) for a step input *Fx* of the above transfer function is:

$$V\_{\rm xd}(t) = \frac{F\_{\rm x}}{D\_{\rm x}} (1 - e^{-t/\tau\_{\rm x}}) \tag{23}$$

where *τ<sup>x</sup>* is the time constant defined by *τ<sup>x</sup>* = *Mx*/*Dx*. The steady-state velocity of the system is *Vxs* = *Fx*/*Dx*. This means that the force exerted on the robot by the user determines the velocity of the system (user and machine). In other words, when the user's steady forward walking velocity is *Vxs* (this velocity usually does not change much for a given user), the necessary pushing force *Fxs*, that is, the burden the user feels from the robot, should be

$$F\_{\rm xs} = V\_{\rm xs} \cdot D\_{\rm x} \tag{24}$$
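The first-order admittance model of eqs. (22)-(24) can be sketched directly; the *Mx* and *Dx* defaults below are the values used later in the forward-assistance experiment of Section 5.2, and the 12.6[N] force is simply 0.9[m/s] × 14[N·s/m] from eq. (24).

```python
import math

def admittance_step_velocity(F_x, t, M_x=15.4, D_x=14.0):
    """Step response of the admittance G_x(s) = 1 / (M_x s + D_x), eq. (23):
    V_xd(t) = (F_x / D_x) * (1 - exp(-t / tau_x)), with tau_x = M_x / D_x."""
    tau_x = M_x / D_x
    return (F_x / D_x) * (1.0 - math.exp(-t / tau_x))

# Steady state, eq. (24): F_xs = V_xs * D_x = 0.9 * 14 = 12.6 N
# sustains the user's 0.9 m/s walking speed
v_steady = admittance_step_velocity(F_x=12.6, t=60.0)
```

Decreasing *Mx* shortens the time constant and makes the robot respond faster to the applied force, which is exactly the trade-off examined in the experiments below.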


Fig. 7. A user pushes the robot while a person sits in the seat.

Thus, by adjusting the virtual damping coefficient *Dx*, the user will feel a different burden. Furthermore, by altering the virtual mass coefficient *Mx* (and therefore changing *τx*), we can obtain different dynamic responses of the human-machine system. Our experiments will verify this.

#### **5. Experiments and remarks**

The omnidirectional mobility of the robot is experimentally implemented in (Zhu et al., 2010). Here, the developed admittance-based interaction control is tested. The sampling and control frequency is 1[kHz]. As shown in Fig. 7, a user holds the robot handle to move the robot forward or laterally, or to turn on the spot, while a 65[kg] person sits in the seat.

Fig. 8. Forward experimental result without assistance

#### **5.1 Forward experiment without power assistance**


First, we turn off the motor drivers and do the experiment without power assistance. In this case, the motors are in a free state and the robot is just like a traditional manually operated wheelchair. From the result shown in Fig. 8, we can find:

1. to start to move the robot, a big pushing force as large as 70[N] is needed;
2. in steady state, the pushing force is about 24[N];
3. to stop the robot, a negative (pulling) force is needed;
4. the user's walking speed is about 0.9[m/s].


### **5.2 Forward experiments with power assistance as *Dx* = 14 [N·s/m]**

In this case, we try to halve the user's burden with power assistance to about 12[N]. Since the user's walking speed is about 0.9[m/s], according to eq. (24) we set *Dx* = *Fxs*/*Vxs* = 14[N·s/m].
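The choice of damping follows directly from eq. (24) and the no-assistance measurements of Section 5.1; a small sketch of that arithmetic (all values taken from the text):

```python
# Measured without assistance (Section 5.1): ~24 N steady push at ~0.9 m/s
F_measured, V_user = 24.0, 0.9

# Target: halve the burden, then pick the damping from eq. (24), D_x = F_xs / V_xs
F_target = F_measured / 2.0   # about 12 N
D_x = F_target / V_user       # about 13.3, rounded to the 14 N*s/m used in the text
```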

Figs. 9 and 10 respectively show the experimental results with *τ<sup>x</sup>* = 1.1[s] (*Mx* = *τ<sup>x</sup>* · *Dx* = 15.4[kg]) and *τ<sup>x</sup>* = 0.5[s] (*Mx* = 7[kg]). Both results show that the pushing force is reduced from about 24[N] to about 12[N] when the steady walking speed is about 0.9[m/s], as we planned, and almost no pulling (negative) force is needed to stop the robot, which can be directly interpreted by eq. (23) or (24). The purpose of power assistance for the caregiver is achieved. In addition, Fig. 9 indicates that a slow and big pushing force occurs at the start of moving the robot, while there is no such phenomenon in Fig. 10. This can be explained as follows. The virtual mass coefficient *Mx* is set to 15.4[kg] in Fig. 9 and to 7.0[kg] in Fig. 10. Since mass is a measure of inertia, the larger the inertia of an object, the more slowly it starts to move. However, a too small *Mx* will make the system too sensitive to the exerted force, which would make the user afraid to use the robot.

Fig. 9. Forward Experimental result with assistance (as *Dx* = 14 and *τ<sup>x</sup>* = 1.1)

### **5.3 Forward experiments with power assistance as *Dx* = 9, 19 [N·s/m]**

The result of the power assistance experiment with *Dx* = 9[N·s/m] and *τ<sup>x</sup>* = 0.5[s] (*Mx* = 4.5[kg]) is shown in Fig. 11. The result shows that the pushing force is about 7[N] when the walking speed is about 0.9[m/s]. The experimental result with *Dx* = 19[N·s/m] and *τ<sup>x</sup>* = 0.5[s] (*Mx* = 9.5[kg]) is shown in Fig. 12. The pushing force is about 17[N] when the walking speed is about 0.9[m/s]. As above, these results demonstrate that the pushing force (the user's burden) is well controlled by the admittance-based controller. In Fig. 10 there occurs a comparatively big force for starting to move the robot; as aforementioned, this is caused by a relatively big mass coefficient *Mx*. How to optimally select the two parameters is one of our future tasks.

Fig. 10. Forward experimental result with assistance (as *Dx* = 14 and *τ<sup>x</sup>* = 0.5)

Fig. 11. Forward experimental result with assistance (as *Dx* = 9 and *τ<sup>x</sup>* = 0.5)

Fig. 12. Forward experimental result with assistance (as *Dx* = 19 and *τ<sup>x</sup>* = 0.5)

#### **5.4 Lateral experiments without power assistance**


We turn off the motor drivers and do the experiment without power assistance. In this case, the motors are in a free state. From the result shown in Fig. 13, we can find:

1. to start to move the robot, a big pushing force as large as 40[N] is needed;
2. in steady state, the pushing force is about 22[N];
3. to stop the robot, a negative (pulling) force is needed;
4. the user's walking speed is about 0.6[m/s].


Fig. 13. Lateral experimental result without assistance

### **5.5 Lateral experiments with power assistance as *Dy* = 18 [N·s/m]**

In this case, we try to halve the user's burden with power assistance to about 11[N]. Since the user's walking speed is about 0.6[m/s], according to eq. (24) we set *Dy* = *Fys*/*Vys* = 18[N·s/m].

Fig. 14 shows the experimental result with *τ<sup>y</sup>* = 0.5[s] (*My* = 9[kg]). The result shows that the pushing force is reduced from about 22[N] to about 11[N] when the steady walking speed is about 0.6[m/s], as we planned, and only a very small pulling (negative) force appeared to stop the robot. The purpose of power assistance for the caregiver is achieved.

Fig. 14. Lateral experimental result with assistance (as *Dy* = 14 and *τ<sup>x</sup>* = 0.5)
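The parameter choice above follows directly from the steady state of the admittance law. As a minimal sketch, assuming the standard first-order admittance model *M*·*v̇* + *D*·*v* = *F* (so the steady speed is *F*/*D* and the time constant is *τ* = *M*/*D*, consistent with *Dy* = 18, *τy* = 0.5, *My* = 9 here), the parameter selection of eq.(24) and the resulting response can be simulated as follows; the function names are illustrative, not the chapter's:

```python
# Sketch of admittance-based assistance (assumed model: M*dv/dt + D*v = F).
# Parameter choice follows eq.(24): D = F_target / V_steady.

def select_damping(f_target, v_steady):
    """Damping that yields speed v_steady under a steady push f_target."""
    return f_target / v_steady

def simulate(force, d, tau, dt=0.001, t_end=5.0):
    """Euler integration of M*dv/dt = F - D*v, with M = D*tau."""
    m = d * tau
    v = 0.0
    for _ in range(int(t_end / dt)):
        v += dt * (force - d * v) / m
    return v

# Lateral case from the text: halve a ~22 N burden to ~11 N at ~0.6 m/s.
d_y = select_damping(11.0, 0.6)         # ≈ 18.3, rounded to 18 in the chapter
v_ss = simulate(force=11.0, d=18.0, tau=0.5)
print(round(d_y, 1), round(v_ss, 2))    # steady speed ≈ 11/18 ≈ 0.61 m/s
```

A larger *τ* (hence larger *M*) smooths the response but, as noted for the forward experiments, demands a bigger starting force from the user.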


#### **5.6 Turning-on-the-spot experiment without power assistance**

We switch off the motor drivers and do the experiment without power assistance; in this case, the motors are in a free state. From the result shown in Fig. 15, we can find:

1. to start to move the robot, a big pushing torque as large as 19[Nm] is needed;
2. in steady state, the pushing torque is about 10[Nm];
3. to stop the robot, a negative (pulling) torque is needed;
4. the user's turning speed is about 0.8[rad/s].

Fig. 15. Turning-on-the-spot experimental result without assistance

#### **5.7 Turning-on-the-spot experiments with power assistance as** *Dz* = 6**[Nm**·**s/rad]**

In this case, we also try to halve the user's burden with power assistance, to about 5[Nm]. Since the user's turning speed is about 0.8[rad/s], according to eq.(24) we set *Dz* = *Fzs*/*Vzs* = 6[Nm·s/rad].

Fig. 16 shows the experimental results with *τz* = 0.5[s] (*Iz* = 9[kg·m²]). The results show that the pushing torque is reduced from about 10[Nm] to about 5[Nm] when the steady turning speed is about 0.8[rad/s], as we planned, and no pulling (negative) torque is needed to stop the robot, which can be directly interpreted from eq.(23). The purpose of power assistance for the caregiver is achieved.

Fig. 16. Turning-on-the-spot experimental result with assistance (as *Dz* = 6 and *τz* = 0.5)

#### **6. Conclusions**

In this research, we developed a new type of omnidirectional mobile robot that not only fulfills walking support or walking rehabilitation as the elderly or disabled person walks while holding the robot handle, but also realizes power assistance for a caregiver when he/she pushes the robot while the elderly or disabled person is sitting in the seat. The omnidirectional mobility of the robot is analyzed, and an admittance-based human-machine interaction controller is introduced for power assistance. The experimental results show that the pushing force is reduced and well controlled as we planned, and the purpose of walking support and power assistance is achieved.

## **A Control System for Robots and Wheelchairs: Its Application for People with Severe Motor Disability**

Alonso A. Alonso, Ramón de la Rosa, Albano Carrera, Alfonso Bahillo, Ramón Durán and Patricia Fernández
*University of Valladolid, Spain*

### **1. Introduction**

There is a large number of people with disabilities that involve severe reduction of mobility, such as tetraplegia, brain stroke or vascular brain damage. These people usually have great impairments which prevent them from performing their normal daily activities. For our society, this fact means a great effort in care and specialized attention that, in most cases, involves caregiver assistance dedicated almost exclusively to these patients.

Although only a small part of the disabled population falls within these types of injuries, trying to solve or minimise the problems associated with severe motor disabilities is an especially difficult challenge for research and development in Rehabilitation Technologies (RT). The Laboratory of Electronics and Bioengineering (LEB) of the University of Valladolid, to which the authors belong, has been working since 1995 on the development of practical systems in this field.

Although, currently, it is impossible to find an effective medical solution for these disabilities, the improvement in patients' personal autonomy should be achieved through technical means. This approach focuses, primarily, on two actions based upon:

• The environment. This action consists in modifying the environment to facilitate mobility, interaction with the ambient and communication.

• The patient. In this case, the action consists in developing human-machine interfaces adapted to each type of disability and desired functionality. Additionally, it is necessary to implement the corresponding actuators or systems to be controlled, including prosthetics and orthotics.

As regards the adaptation of the environment, the implemented solutions have relied, primarily, on laws that promote accessibility, applied to architecture and urban planning. Thus, significant improvements were achieved to facilitate, mainly, the mobility of motor disabled people who use wheelchairs. In this regard, the buildings, the infrastructure planning and the public transport services have been especially taken into account. In order to achieve a suitable environment, authorities in some countries subsidise reforms in private homes where disabled people live (Spain, 2006). Note that this first action, based on the transformation of the environment, also benefits a wider scope of population, such as the elderly or other people with milder motor disabilities.

In the case of adapting the environment, the action is mainly reflected in changes made on the urban and architectural physical space and, in any case, on the special furniture required and home automation elements that facilitate automated control of a home. In contrast, the aid techniques based on patients are more diverse as they rely on a multitude of particular applications. In this way, if functionality is the topic to be considered, four main areas of work in developing patient-centric devices can be distinguished:

• Mobility
• Communication
• Interaction with external technical elements
• Telecare and telemonitoring

In all electromechanical and electronic developments, the four work areas need a suitable interface between the patient and the technical means of control. The development of a suitable and customizable adapter is much more difficult in the case of severe motor disabilities, where there is no possibility of utilising the patient's limbs to control simple or standard devices.

The growing interest in all these work fields is established by the appearance of numerous different works and systems designed to make life easier for the user. A particularly interesting example of application is the control of an interface with a computer, since on that computer just about any control application can be developed. This fact makes it clear that the development of a human-machine interface is the core of the problem to be solved. In order for disabled people to function, one of the most studied tasks has been to facilitate the control of personal computers, (Gareth et al, 2000), (Kim, 2002), (Betke et al, 2002), (De Santis & Iacoviello, 2009) and (Barea et al, 2011). In these works different systems are developed to adapt the control of the mouse cursor using, firstly, an infrared pointer system and, secondly, face tracking. Other works show how people with disabilities in the upper limbs can control peripherals (Azkoitia, 2007).

Based on the experience of the research group and the systems found in the related scientific literature, it has been determined that the human-machine interfaces developed must meet the following conditions (Maxwell, 1995):

• Adaptation to the user. The interface should be used effectively, taking advantage of any remaining capability of the disabled person, which depends on the type and degree of disability.

• Correct choice of the remaining capability to be used. It is important to identify which of the various remaining capabilities of the patient is the most adequate to act with the interface.

• Reliability of the interface. The error level in the interpretation of the commands generated by the patients should be minimised. This task is especially important for interfaces to control personal mobility systems, where potential failures may pose a real physical danger to the user.

• Capacity. The interfaces must be capable of generating a sufficient number of different commands to control the external system.

• Speed and computational cost. The human-machine interfaces should be able to work in real time or with minimum delay.

• Comfort. This item implies easy usability, ergonomics and proper interaction with other activities that the patient carries out at the same time.

• Cost. The interfaces to control the system must have a reasonable price for the potential users.

As much as the work is focused on creating interfaces to control wheelchairs or other types of mobile systems, such as robots, there are several papers that review the different existing systems. Their general conclusion is that improvements should be made on the control interfaces and monitoring technologies for automatic navigation (Fehr et al, 2000). The works found in the scientific literature show different approaches to the development of control interfaces in wheelchairs or surveillance robots for severely disabled people. The solutions proposed in the literature can be grouped into the following strategies:

• Systems based on voice recognition, (Mazo, 1995). The operating principle of such interface systems is reduced to the interpretation of a set of predefined voice commands.

• Motion detection systems of the head or the eye, using cameras, (Úbeda et al, 2009) and (Perez et al, 2009). This type of detection system is based on monitoring the user's face or a part of it. Thus, the movements made by the user are translated into control commands for the device of interest.

• Interfaces based on electromyography (EMG) and electrooculography (EOG) records generated by muscles during voluntary action, (Barea et al, 2002), (Frizera et al, 2006), (Ferreira et al, 2007) and (Freire Bastos et al, 2009). These techniques are based on the acquisition and analysis of electric logs generated by the muscles, which allow us to know the level of muscle activity and the duration thereof. The term EMG is utilised when unspecified muscle signals are used, while EOG indicates that EMG signals around the eye are used. Thus, combining the duration and intensity of the voluntary muscle contractions, various commands can be generated.

• Electroencephalography (EEG)-based interfaces – Brain Computer Interfaces (BCI), (Millán et al, 2004), (Ferreira et al, 2007), (Escolano et al, 2009), (Freire Bastos et al, 2009) and (Iturrate et al, 2009). This means using the signals of brain activity to control the system after processing and coding possible commands. Currently, there are two types of techniques in this category, the invasive and the noninvasive (Millán & Carmena, 2010). The invasive strategy implies that the signals are collected inside the brain or, generally, on its surface. The noninvasive approach uses the signals recorded on the surface of the scalp.

• Systems based on the detection of residual muscle movements and not based on the registration of bioelectric signals, (Huo et al, 2008) and (Alonso et al, 2009). This is the kind of system developed in the research group of the authors and it will be explained in detail in this chapter. It is based on coding small residual muscle movements that the patient can perform and on a suitable sensor to detect such actions.

• Interfaces based on the detection of head position using inertial sensors, (Azkoitia, 2007) and (Freire Bastos et al, 2009). In this case, the objective is to detect the position and the movements of the head by placing an external element on it, so that the movement is collected. With this simple mechanism, a light processing system can be used to manage the signals and encode the desired order according to the gesture made in each case.

• Systems based on control by sniffing, (Plotkin et al, 2010). The guidance system is based on the detection of inhalations and exhalations of the user and further processing to code orders according to a preset sequence of inhalations and exhalations.


A Control System for Robots and Wheelchairs:

of users.

their specific needs.

**3. Implementation** 

described.

**3.1 Adapted interfaces** 

group of the authors.

by the research community in this field.

Its Application for People with Severe Motor Disability 109

• Review of related scientific literature to learn about the different approaches carried out

• Study the feasibility of assisted systems for the disabled taking into account the views

• Presentation and operating principle of adapted interfaces developed in the research

• Presentation of the structure and principle of operation of two systems that use the adapted interfaces implemented. The first succeeds in establishing a surveillance and telecare system, using a mobile robot equipped with a webcam. The second facilitates

• Presentation of the results and experience in the use of the entire system and interfaces. Additionally, there is a discussion about the advantages of the interfaces based on residual voluntary actions of severe motor disabilities and compare its advantages over

The comparison that will be made with similar systems will focus, primarily, on the study and adaptation to the parameters defined in the introduction. In addition, it should be noted that the adequacy of systems will be studied in relation to its use in patients with severe motor disabilities, excluding other types of disabilities like sensorial or cognitive ones and

This section describes the working principle of a robot guidance system based on a user interface customized according to severe motor disabled patients. This system has been implemented in two different applications: the first one, a surveillance and telecare system using a mobile robot equipped with a webcam, and the second one, a system for patient

The system diagram shown in Figure 1 describes the structure of the application developed for surveillance and telecare. This system includes a customizable interface, a processing module, a transmission channel, a reception module and, finally, the device to be controlled: a robot, an electric wheelchair or another convenient domotic system. A Scribbler robot type

Throughout this section, each of the parts of the system presented in Figure 1 will be explained in detail. Additionally, other items and adaptations such as an electric wheelchair with a guide system based on an adapted interface joystick control will be

The system presented is designed to collect patient information by detecting voluntary winks. Thus, several different adapted interfaces that are explained in detail in this section have been built in. Most of them required an additional support consisting of a pair of glasses and mechanical items for attachment and adjustment of the different sensors. Figure

other systems found in the scientific literature for the same kind of disability. The developed devices are described with sufficient level detail. Thus, a specialized reader

patient mobility through the control of a modified electric wheelchair.

can incorporate different tools and procedures to their own developments.

mobility through control of a modified electric wheelchair.

that represents the device to be controlled is included.

2 shows three different types of the built interfaces.

• Autonomous navigation, (Alonso, 1999), (Levine et al, 1999), (Minguez, et al, 2007), (Angulo et al, 2007) and (Zeng et al, 2008). The autonomous navigation systems perform an automatic guide to the previously chosen destination. Thus, the user only interferes with the system for choosing the final point; it is the system that automatically takes the user to the desired position. The interface chosen, any of the above points, will be used only at the beginning of the operation or if it intends to stop the automatic guide option.

Additionally, hybrid systems that combine self-guided techniques with directed guidance have been developed. Thus, the device intelligently chooses the best possible option for the next move, after the environment analysis, (Hoppenot & Colle, 2002) and (Perrín et al, 2010). In other cases, the system knows different paths and it performs an automatic guidance after the order of destination, (Rebsamen et al, 2006).

For the development of assistance RT elements is of paramount importance, to have the opinion of the recipients of these developments, i.e., persons with disabilities. Therefore, the research group conducted a survey, among the population with severe motor disabilities, concerning the basic features that systems designed to enhance their autonomy and capabilities should have. The results of this study have been taken into account in the LEB works.

The system presented in this chapter is based on sensors capable of detecting winks performed voluntarily by the user. The devices with these interfaces are classified in the fifth group of the types presented above. To carry out this detection, several user interfaces, based mostly in a pair of glasses support have been developed to detect the movement of the skin near the eye. In this area, when there is a wink the orbicularis oculi muscle contractions and extensions produce a movement. The hardware needed for the treatment of signals received is simple, robust, inexpensive, compatible with commercial wheelchairs and implemented by microcontrollers.

This chapter is divided into different sections. Section 2 presents the objectives that the group aim to achieve by the use of the system. Then, section 3 presents and explains the architecture of each of the component parts and the different interfaces developed. Section 4 makes a comparative assessment of the advantages and disadvantages of the proposed solution and those contained in the related scientific literature, after the presentation of the conclusions reached from the interview survey. In addition, an explanation about the protocols and test which has been put into practice with the developed systems is included. Finally, Section 5 details the conclusions reached after the completion of all the work and subsequent analysis.

### **2. Objectives**

This chapter presents the basic theoretical aspects of the interfaces and systems adapted to severe motor disabilities, including an extensive reference to the state of the art in this field. In addition, an interface development, especially suitable for this kind of disabilities, and two particular systems which use this interface are presented. These two systems have the aim, on the one hand, to manipulate a robot that acts as a surveillance or telecare device and, on the other hand, to guide an electric wheelchair. Therefore, the following objectives can be defined:

• Presentation of the parameters that an adapted interface to the severe motor disability must achieve.

108 Mobile Robots – Current Trends

• Autonomous navigation, (Alonso, 1999), (Levine et al, 1999), (Minguez, et al, 2007), (Angulo et al, 2007) and (Zeng et al, 2008). The autonomous navigation systems perform an automatic guide to the previously chosen destination. Thus, the user only interferes with the system for choosing the final point; it is the system that automatically takes the user to the desired position. The interface chosen, any of the above points, will be used only at the beginning of the operation or if it intends to stop

Additionally, hybrid systems that combine self-guided techniques with directed guidance have been developed. Thus, the device intelligently chooses the best possible option for the next move after the environment analysis, (Hoppenot & Colle, 2002) and (Perrín et al, 2010). In other cases, the system knows different paths and performs an automatic guidance after the order of destination, (Rebsamen et al, 2006), once the user activates the automatic guide option.

For the development of assistance RT elements, it is of paramount importance to have the opinion of the recipients of these developments, i.e., persons with disabilities. Therefore, the research group conducted a survey, among the population with severe motor disabilities, concerning the basic features that systems designed to enhance their autonomy and capabilities should have. The results of this study have been taken into account in the LEB works.

The system presented in this chapter is based on sensors capable of detecting winks performed voluntarily by the user. The devices with these interfaces are classified in the fifth group of the types presented above. To carry out this detection, several user interfaces, mostly based on a glasses-type support, have been developed to detect the movement of the skin near the eye. In this area, the contractions and extensions of the *orbicularis oculi* muscle produce a movement when there is a wink. The hardware needed for the treatment of the received signals is simple, robust, inexpensive, compatible with commercial wheelchairs and implemented by microcontrollers.

This chapter is divided into different sections. Section 2 presents the objectives that the group aims to achieve by the use of the system. Then, Section 3 presents and explains the architecture of each of the component parts and the different interfaces developed. Section 4 makes a comparative assessment of the advantages and disadvantages of the proposed solution and those contained in the related scientific literature, after the presentation of the conclusions reached from the interview survey. In addition, an explanation about the protocols and tests that have been put into practice with the developed systems is included. Finally, Section 5 details the conclusions reached after the completion of all the work and subsequent analysis.

### **2. Objectives**

This chapter presents the basic theoretical aspects of the interfaces and systems adapted to severe motor disabilities, including an extensive reference to the state of the art in this field. In addition, an interface development especially suitable for this kind of disabilities, and two particular systems which use this interface, are presented. These two systems aim, on the one hand, to manipulate a robot that acts as a surveillance or telecare device and, on the other hand, to guide an electric wheelchair. Therefore, the following objectives can be defined:

• Presentation of the parameters that an interface adapted to severe motor disability must achieve.


The developed devices are described with a sufficient level of detail. Thus, a specialized reader can incorporate different tools and procedures into their own developments.

The comparison that will be made with similar systems will focus primarily on the study of, and adaptation to, the parameters defined in the introduction. In addition, it should be noted that the adequacy of the systems will be studied in relation to their use by patients with severe motor disabilities, excluding other types of disabilities, such as sensorial or cognitive ones, and their specific needs.

### **3. Implementation**

This section describes the working principle of a robot guidance system based on a user interface customized for patients with severe motor disabilities. This system has been implemented in two different applications: the first one, a surveillance and telecare system using a mobile robot equipped with a webcam, and the second one, a system for patient mobility through the control of a modified electric wheelchair.

The system diagram shown in Figure 1 describes the structure of the application developed for surveillance and telecare. This system includes a customizable interface, a processing module, a transmission channel, a reception module and, finally, the device to be controlled: a robot, an electric wheelchair or another convenient domotic system. A Scribbler-type robot represents the device to be controlled.

Throughout this section, each of the parts of the system presented in Figure 1 will be explained in detail. Additionally, other items and adaptations such as an electric wheelchair with a guide system based on an adapted interface joystick control will be described.

### **3.1 Adapted interfaces**

The system presented is designed to collect patient information by detecting voluntary winks. Thus, several different adapted interfaces, which are explained in detail in this section, have been built. Most of them require an additional support consisting of a pair of glasses and mechanical items for the attachment and adjustment of the different sensors. Figure 2 shows three different types of the built interfaces.

A Control System for Robots and Wheelchairs: Its Application for People with Severe Motor Disability

Fig. 1. Diagram of the developed system for telecare and surveillance using a Scribbler robot.

The sensors included in these adapted interfaces and their processing systems are detailed below:

• Optical sensors based on variations of light reflection on the mark that is placed on the skin above the *orbicularis oculi*. In this case, the integrated circuit CNY-70, (Vishay, 2008), has been employed. These devices consist of an infrared LED and a phototransistor whose range of vision is parallel and receives the signal reflected from the facing surface. The use of these sensors requires an additional black and white sticker on the user's skin (see Figure 3). If the sticker is properly placed on the surface of the *orbicularis oculi* muscle, the sensor mounted on the glasses will detect the wink because the reflected light changes. This change is registered due to the color change that occurs with the movement of the sticker. Two sets of preprocessing systems for the acquired signals have been built following this screening procedure: the first one based on the PIC16F84A microcontroller from Microchip Technology Inc., (Microchip, 2001), and the second based on the Arduino hardware platform that incorporates ATMega microcontrollers from Atmel Corporation, (Arduino, 2009) and (Atmel, 2009).

The integrated circuit that has been utilised is very simple and only requires two resistors to bias the diode and phototransistor incorporated. The signal generated by the phototransistor is the information signal, and it discriminates a high or low level with the microcontroller.
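As a sketch of this high/low discrimination, the following Python fragment shows how a microcontroller loop could debounce the phototransistor reading; the ADC range, threshold and sample counts are illustrative assumptions, not values from the chapter.

```python
def detect_winks(samples, threshold=512, min_low=3):
    """Return indices where a wink starts.

    samples: ADC readings from the phototransistor (0-1023, assumed range).
    A wink is registered when the reflected-light reading stays on the
    'dark' side of the threshold for at least `min_low` consecutive
    samples (a crude software debounce against flicker).
    """
    winks = []
    run = 0
    for i, s in enumerate(samples):
        if s < threshold:        # sticker's dark zone is under the sensor
            run += 1
            if run == min_low:   # debounced: report the wink onset index
                winks.append(i - min_low + 1)
        else:
            run = 0
    return winks
```

A single noisy dip shorter than `min_low` samples is ignored, which is the whole point of the debounce.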

Mobile Robots – Current Trends

Fig. 2. Three different adapted interfaces: a) the highlighted rectangles indicate an optical sensor based on two CNY-70 integrated circuits and their mechanization on glasses; b) the rectangles indicate the circuits fixed on the patient's glasses, built using the hardware sensors used in optical mice; c) the highlighted part indicates a customizable interface that uses vibration sensors to detect the winks.

• Optical sensors to detect movements of the *orbicularis oculi*, based on the optical mouse system. Following the same philosophy as the above sensors, these devices are placed on the arm of the glasses and detect the movement on the side of the eye. The advantage of this sensor is that it does not need any special attachments; simply placing it in the right place will make it work properly.

Fig. 3. Adapted interface based on the integrated circuit CNY-70 while it is being utilised by a user. Detail of the sticker on the skin for proper operation.


The sensors only need some parts of the mouse: the integrated optical camera and its peripherals, LED lighting and the light beam guidance prism (see Figure 4).

The system is controlled by the Arduino hardware platform. The interconnection between the optical devices and the platform is achieved through the same signals that a computer processes to detect the movement of the mouse. Thus, it is easy to set different thresholds in order to discriminate motion errors when capturing the information from the user.
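The displacement-threshold idea can be sketched as follows; the `move_threshold` value and the (dx, dy) tuple representation of the mouse sensor counts are assumptions for illustration.

```python
def wink_from_motion(deltas, move_threshold=15.0):
    """Decide whether accumulated skin motion amounts to a wink.

    deltas: sequence of (dx, dy) displacement counts read from the
    optical mouse sensor. Small accumulated totals are treated as
    noise or involuntary tremor and rejected.
    """
    total = sum((dx * dx + dy * dy) ** 0.5 for dx, dy in deltas)
    return total >= move_threshold
```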

Fig. 4. Detail of the necessary components of the optical mouse for the adapted interface. The top of the image is the top view: LED, prism and integrated camera. The bottom of the image is the bottom view: prism and lens.

• Vibration sensors based on the piezoelectric effect. These sensors have been built in conjunction with a signal conditioner which adapts the signal to a PIC microcontroller, which selects the appropriate orders and discriminates the signals produced by involuntary movements. Again, the philosophy of these sensors is identical to the two mentioned above; it will detect the skin movement near the eye and possible sensor state changes when the eyes wink.

In contrast with the previous circuits, in this case, the use of a circuit to condition the sensor generated signals is necessary. For this purpose, a simple operational amplifier inverter stage has been developed. This stage is based on the TL082 chip from Texas Instruments, Inc., (Texas, 1999). In addition to this amplifier device, a new stage for information signal pulse discrimination and bounce suppression has been developed. A NE555 integrated circuit from the same company, (Texas, 2002), has been utilised to give shape to this part of the system (see Figure 5).
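The pulse discrimination and bounce suppression done by the NE555 stage can be mimicked in software as a dead-time filter; the 200 ms dead time below is an assumed value, not taken from the circuit.

```python
def discriminate_pulses(event_times_ms, dead_time_ms=200):
    """Collapse bouncing piezo spikes into single events.

    Mimics a monostable stage: once a pulse is accepted, any further
    spikes arriving within `dead_time_ms` are ignored.
    """
    accepted = []
    last = None
    for t in sorted(event_times_ms):
        if last is None or t - last >= dead_time_ms:
            accepted.append(t)
            last = t
    return accepted
```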

• Electromyogram electrodes for the acquisition of contraction signals of the *orbicularis oculi*. These sensors are based on the acquisition of the bioelectrical signals of each muscle. In this way, it would not be necessary to use the glasses-based interface for signal acquisition: simply by attaching surface electrodes to the skin and using an acquisition and processing system, the signal can be obtained and voluntary winks can be discriminated from other involuntary actions.

This research group has a high degree of experience with this kind of signals and also has its own neuromuscular training platform, UVa-NTS (de la Rosa et al, 2010). This system utilises an amplification and conditioning module (see Figure 6). It consists of different stages: a stage for electrostatic discharge protection, an instrumentation amplifier, a high-pass filter with programmable gain and a low-pass filter.
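A rough software analogue of such a conditioning chain (DC-blocking high-pass, rectification, low-pass smoothing) might look like the sketch below; the filter coefficients are illustrative and are not the UVa-NTS parameters.

```python
def emg_envelope(signal, hp_alpha=0.95, lp_alpha=0.1):
    """Crude one-pole software analogue of the EMG conditioning chain.

    Stage 1: single-pole high-pass removes the DC offset.
    Stage 2: rectification plus single-pole low-pass extracts a
    smooth activity envelope that a threshold stage could use to
    flag muscle contractions.
    """
    env = []
    prev_x = prev_hp = lp = 0.0
    for x in signal:
        hp = hp_alpha * (prev_hp + x - prev_x)   # high-pass (DC block)
        prev_x, prev_hp = x, hp
        lp = lp + lp_alpha * (abs(hp) - lp)      # rectify + low-pass
        env.append(lp)
    return env
```

On a constant input the envelope rises transiently and then decays back toward zero, since a constant carries no muscle-activity content after DC blocking.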


Fig. 5. Circuit developed for the adaptation of the vibration sensors output signal.

Fig. 6. Block diagram of the electrodes EMG signals acquisition and conditioning module.

In addition to these systems adapted to control devices by voluntary winks for people with major problems of mobility, a less suitable system has been built. It is an intermediate interface between the adapted one and the hardware processing system:

• Special keypad for disabled people. This keypad consists of four buttons that allow the user to define the same orders as those used in the adapted interface. The problem with this interface is a certain difficulty for the disabled person to control a robot or a wheelchair, as performing simultaneous button presses would be necessary, and the user should have greater flexibility and nimbleness with their hands and fingers.

### **3.2 Processing module**

The main task of the implemented processing module is to identify and process the information received by different sensors of the adapted interfaces. This process is straightforward and easily implemented by microcontrollers or hardware platforms such as Arduino.

The processing module is an essential part for the proper functioning of the system and it will be responsible for the discrimination between voluntary or involuntary signal gestures. In addition, the processing system carries out the detection of possible complex orders as the combination of different voluntary signals. The flowchart presented in Figure 7 gives an idea of the philosophy behind the construction of the processing module and the orders discrimination.

Fig. 7. Flowchart implemented in the processing modules with microcontrollers or hardware platforms.

In a first step, a loop detecting a possible wink is performed. This initial wink can come from either or both eyes. Once the wink is characterized, the orders are discriminated. On the one hand, if the two eyes wink jointly, the robot stop order is activated. Conversely, if the wink comes from one of the eyes, after a moment's delay, a new similar detection loop is accessed to detect a possible new wink, which discriminates between the simple and the complex order. The complex commands consist of two consecutive winks made by the two eyes, regardless of the order; in this case, the activated order is 'robot turn right or left and stop'. The simple order consists of a single wink of an eye and it is used to command the movement forward and/or backward of the mobile device.
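The order discrimination of the flowchart can be sketched as a small classifier; the mapping of a single left or right wink to forward or backward is an assumption, since the chapter does not state which eye maps to which direction.

```python
def classify_order(first, second=None):
    """Classify a wink sequence into a robot order.

    'L' / 'R' = left / right eye wink, 'B' = both eyes jointly.
    `second` is the wink detected (if any) in the second detection
    loop entered after a moment's delay.
    """
    if first == 'B':
        return 'stop'                      # both eyes: stop order
    if second in ('L', 'R') and second != first:
        # complex order: one wink per eye, in either order
        return 'turn'
    if second is None:
        # simple order; which eye means which direction is assumed
        return 'forward' if first == 'L' else 'backward'
    return 'unknown'
```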

Following the implementation of this algorithm in the processing module, tests have been carried out with the different adapted interfaces to verify their reliability and versatility. Switching between interfaces is automatic because the same connector is used. In the specific case of the optical mouse-based interface, adapting the code to extract the information from the generated sensor signals is necessary.

The received orders are interpreted and forwarded in the appropriate format to a communications device: laptop, notebook or PDA. This device transmits the necessary commands wirelessly to the device to be controlled. In this case, the processing module generates an audio signal of a certain frequency for each of the movements.
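The one-frequency-per-movement encoding can be sketched as follows; the chapter does not list the actual frequencies, so the table below is hypothetical.

```python
import math

# Hypothetical command-to-frequency table; the chapter only states
# that each movement gets an audio tone of a certain frequency.
COMMAND_FREQ_HZ = {'stop': 400, 'forward': 600, 'backward': 800,
                   'turn_left': 1000, 'turn_right': 1200}

def command_tone(command, duration_s=0.1, rate=8000):
    """Generate the audio samples that encode one movement order."""
    f = COMMAND_FREQ_HZ[command]
    n = int(duration_s * rate)
    return [math.sin(2 * math.pi * f * i / rate) for i in range(n)]
```

The resulting sample list would be played out through the audio channel toward the receiver.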

#### **3.3 Transmission channel and webcams for telecare and surveillance applications**

Wi-Fi is the transmission channel and it is implemented using a router as an access point, allowing both *ad-hoc* access (computer – webcam) and remote Internet access. In this way, the transmission channel will broadcast a two-way data flow, using the video channel to receive images from the webcam and the audio channel to send commands.

Communication is simple for the adapted interface system, through the direct connection with the Wi-Fi device. Thus, the audio orders generated by the processing module are directly received, through the communication channel, by the mobile robot with a webcam that has a wireless interface. The control orders are received by the device to be controlled through the Internet and the wireless webcam interface. In this case, a router is used as the access point.

On the other hand, if the mobile device needs to be controlled remotely, a simple graphical interface based on a web page, programmed in HTML and JavaScript, has been developed. This graphical interface allows sending the same commands as the adapted one. Operating it is very easy because the screen has five buttons (stop, move forward and backward, turn left and right) that, when pressed with the mouse cursor, send the audio signal for the execution of the chosen movement.

Hence, with this use of wireless communications and the Internet, both surveillance tasks for disabled people at home and telecare can be conducted. The first application allows the disabled person, commanding the robot with the adapted interface, to observe what occurs in different rooms of the home that he/she cannot reach. Moreover, the second application, remote assistance, allows a caregiver who cannot be physically with the patient to have control and some information on the status of his/her patient, using the robot and the camera to display the environment of the disabled person.

### **3.4 Reception module and robot**

The reception module is responsible for detecting the received order by decoding the information signal sent by the processing system through the communications channel. The received signal is demodulated to obtain the control command by a system based on a phase-locked loop (PLL). This detection system uses the 74HC4046 integrated circuit from the Philips Semiconductor Company, (Philips, 1997), and the LM3914 from National Semiconductor Corp., (National, 1995). The first one, the 74HC4046, is a PLL with an integrated voltage-controlled oscillator (VCO), while the second is a driver that activates a greater or smaller number of outputs according to an analog input voltage.

Thus, the control input of the LM3914 device is taken from the PLL VCO input; the signal amplitude is directly proportional to the frequency detected at the PLL input, which has been generated by the processing module. The control signal is converted to eight digital signals, where the number of high-voltage signals depends on the input amplitude. For the conversion of these digital signals to the three bits used to encode the commands of the robot, an SN74HC148 encoder integrated circuit from Texas Instruments, Inc., (Texas, 2004), has been used. Figure 8 shows a block diagram of this module and the robot.
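The decoding chain can be sketched in software as below; the frequency range is an illustrative assumption, and the code only models the behaviour (frequency to proportional voltage, voltage to a number of active lines, priority encoding to 3 bits), not the actual chips.

```python
def decode_command(freq_hz, f_min=400.0, f_max=1200.0):
    """Software sketch of the reception chain.

    The PLL's VCO control voltage is proportional to the received
    frequency; the LM3914-style driver raises a number of output
    lines proportional to that voltage; a 74HC148-style priority
    encoder converts the highest active line into a 3-bit code.
    """
    # Normalised control voltage in [0, 1] (assumed frequency span).
    v = max(0.0, min(1.0, (freq_hz - f_min) / (f_max - f_min)))
    active = round(v * 8)            # 0..8 lines high (eight digital signals)
    if active == 0:
        return 0b000
    return (active - 1) & 0b111      # index of the highest active line
```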

Fig. 8. Block diagram of reception module and robot.

A Control System for Robots and Wheelchairs:

power module.

wheelchair.

operation is excellent.

specifically designed.

guidance system.

Its Application for People with Severe Motor Disability 117

The first prototype built for this purpose was intended to replace the control signals generated by the original joystick. Therefore, it could only be used in those wheelchairs which incorporate an identical system: SHARK system with DK-REMA joystick and DK-PMA power module from the Dynamic Controls Company (Dynamic Controls, 2004, 2006). The joystick works easily; it incorporates four fixed coils and a mobile one that moves together with the joystick and induces specific signals in the rest of the coils. The detection of this signals generates the corresponding order for the movement of the wheelchair. With these data, as a result of a reverse engineering process developed, different signals for each type of movement have been generated by the P87LPC769 microcontroller, (Philips, 2002), (Figure 10). This figure shows three signals for each case: the clear line is the reference level, the upper and lower are the order signals and the middle one is the sync signal from the

Fig. 10. Signals generated by the microcontroller for the five basic movements of the

These signals are faded directly into the circuit of the joystick, producing the desired movement in each case. The reverse engineering process concluded that the circuitry needed sine signals, but the system worked correctly with square ones because the embedded coils performed filtering functions, and finally, signals similar to sinusoidal ones were obtained. The wheelchair responds appropriately with the synthesized signals and its overall

In this design, the problem encountered is that it is very delicate to introduce external changes in the internal joystick circuit. These changes can cause a considerable decrease of the device's lifetime. Moreover, the main problem lies in the specificity of the proposed solution, that is, each control system for each wheelchair would require a different model

To solve these two problems encountered when modifying the electronics, a mechanical system based on two 180º rotation servo motors was developed, (Figure 11). This device, controlled by the Arduino platform, is able to move the joystick easily, in every possible direction and ensures universal usability for all wheelchairs that incorporate this kind of

Figure 11 shows the prototype built using two 180º servo motors. The joystick is positioned at the bottom of the device. The right image of Figure 11 shows, the top of the prototype, the control Arduino platform, along with a commands LED display and interfaces for

The encoded robot orders, that consist of 3 bits, are summarized in Table 1. The five basic movements -move forward and backward, turn to one side or the other and stop- have been codified through five orders.


Table 1. List of robot control commands and their encoding.

Through this combination of commands, three free combinations can be used for more complex robot movements, such as forward while turning slightly, or habilitate device standby option while controlling another system with the same adapted interface and similar orders.

Apart from other commercial systems such as the Scribbler robot, a robot with a custom design has been used. In the implemented robot, a video surveillance camera is incorporated (see Figure 9). It allows the applications described above, to control all the rooms at home by the disabled person and the caregiver`s remote tasks.

#### **3.5 Adaptation to a commercial wheelchair**

Once all the system was built and the optimal control of the robot was achieved using the developed interfaces, the equipment was adapted for use in guiding commercial wheelchairs. For this adaptation, a simpler design of the whole system has been implemented. In this design, the processing module is connected directly to the actuator system, the wheelchair. In any case, the amendments to include in the presented system are very easy to carried out.

Fig. 9. Custom design robot implemented with the surveillance system.

116 Mobile Robots – Current Trends

The encoded robot orders, which consist of 3 bits, are summarized in Table 1. The five basic movements (move forward and backward, turn to one side or the other, and stop) have been codified through five orders.

Through this combination of commands, three free codes remain available for more complex robot movements, such as moving forward while turning slightly, or for enabling a device standby option while controlling another system with the same adapted interface and similar orders.

| Order | Codification |
|---------------|--------------|
| Stop | 000 |
| Move forward | 101 |
| Move backward | 010 |
| Turn right | 001 |
| Turn left | 100 |

Table 1. List of robot control commands and their encoding.
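The 3-bit codification of Table 1 can be sketched as a lookup table. This is an illustrative sketch in Python, not the system's actual microcontroller firmware; the three unused codes are reported as free:

```python
# The five orders from Table 1 as 3-bit codes; the three remaining codes
# (011, 110, 111) are left free for compound movements or a standby option.
ORDER_CODES = {
    "stop": 0b000,
    "forward": 0b101,
    "backward": 0b010,
    "turn_right": 0b001,
    "turn_left": 0b100,
}

CODE_ORDERS = {code: name for name, code in ORDER_CODES.items()}

def decode(code: int) -> str:
    """Map a received 3-bit code to an order; unused codes are 'free'."""
    if not 0 <= code <= 0b111:
        raise ValueError("orders are 3 bits wide")
    return CODE_ORDERS.get(code, "free")
```

Decoding stays a constant-time table lookup, which is why a 3-bit scheme suits a small microcontroller.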

The first prototype built for this purpose was intended to replace the control signals generated by the original joystick. Therefore, it could only be used in wheelchairs that incorporate an identical system: the SHARK system with the DK-REMA joystick and the DK-PMA power module from the Dynamic Controls Company (Dynamic Controls, 2004, 2006). The joystick works simply: it incorporates four fixed coils and a mobile one that moves together with the joystick and induces specific signals in the other coils. The detection of these signals generates the corresponding order for the movement of the wheelchair. With these data, obtained through a reverse engineering process, different signals for each type of movement have been generated by the P87LPC769 microcontroller (Philips, 2002), as shown in Figure 10. This figure shows three signals for each case: the clear line is the reference level, the upper and lower lines are the order signals, and the middle one is the sync signal from the power module.

Fig. 10. Signals generated by the microcontroller for the five basic movements of the wheelchair.

These signals are fed directly into the circuit of the joystick, producing the desired movement in each case. The reverse engineering process concluded that the circuitry needed sine signals, but the system worked correctly with square ones because the embedded coils performed a filtering function, so signals similar to sinusoidal ones were finally obtained. The wheelchair responds appropriately to the synthesized signals and its overall operation is excellent.
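The filtering argument can be checked numerically: a low-pass filter attenuates a square wave's harmonics more strongly than its fundamental, leaving a roughly sinusoidal signal. In this minimal sketch, a one-pole IIR filter stands in for the embedded coils; the filter coefficient is an assumed value, not a measurement from the actual joystick:

```python
import math

def square_wave(n, periods):
    """Unit-amplitude square wave with the given number of periods."""
    return [1.0 if math.sin(2 * math.pi * periods * i / n) >= 0 else -1.0
            for i in range(n)]

def low_pass(signal, alpha):
    """One-pole IIR low-pass: y[i] = y[i-1] + alpha * (x[i] - y[i-1])."""
    out, y = [], 0.0
    for x in signal:
        y += alpha * (x - y)
        out.append(y)
    return out

def harmonic_amplitude(signal, harmonic, periods):
    """Amplitude of one harmonic, by projection onto sine and cosine."""
    n = len(signal)
    w = 2 * math.pi * harmonic * periods / n
    s = sum(x * math.sin(w * i) for i, x in enumerate(signal)) * 2 / n
    c = sum(x * math.cos(w * i) for i, x in enumerate(signal)) * 2 / n
    return math.hypot(s, c)

n, periods = 2000, 4
sq = square_wave(n, periods)
filt = low_pass(sq, alpha=0.05)

# An ideal square wave has a third harmonic at 1/3 of the fundamental;
# after filtering, that relative harmonic content drops.
raw_ratio = harmonic_amplitude(sq, 3, periods) / harmonic_amplitude(sq, 1, periods)
filt_ratio = harmonic_amplitude(filt, 3, periods) / harmonic_amplitude(filt, 1, periods)
```

The same reasoning explains why driving the coils with square waves from a microcontroller pin was sufficient in practice.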

A problem with this design is that introducing external changes into the internal joystick circuit is very delicate; such changes can considerably decrease the device's lifetime. Moreover, the main drawback lies in the specificity of the proposed solution: each control system for each wheelchair model would require a specifically designed variant.

To solve these two problems encountered when modifying the electronics, a mechanical system based on two 180º rotation servo motors was developed (Figure 11). This device, controlled by the Arduino platform, is able to move the joystick easily in every possible direction, and it ensures universal usability for all wheelchairs that incorporate this kind of guidance system.
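The two-servo actuation just described (one servo orienting a central platform, a second tilting the joystick by a rod) can be sketched as follows. The angle conventions and scaling are illustrative assumptions, not the prototype's actual calibration:

```python
# Hypothetical model of the two-servo joystick actuator: a 180-degree
# platform servo sets the direction, and a tilt servo sets displacement
# (speed) via a rod. Because the tilt servo can push the rod to either
# side of the platform, the pair covers the full 360-degree circle.

def servo_commands(direction_deg: float, speed: float):
    """Return (platform_servo_deg, tilt_servo_deg) for a desired movement.

    direction_deg: desired joystick direction over the full 0-360 circle.
    speed: 0.0 (rest) to 1.0 (full joystick displacement).
    """
    direction_deg %= 360.0
    if direction_deg < 180.0:
        platform = direction_deg            # tilt toward the "positive" side
        tilt = 90.0 + 90.0 * speed
    else:
        platform = direction_deg - 180.0    # same platform angle, opposite tilt
        tilt = 90.0 - 90.0 * speed
    return platform, tilt

def rest_position() -> float:
    """Center the tilt servo: stops the chair regardless of platform angle."""
    return 90.0
```

Centering the tilt servo alone stops the chair, which mirrors the prototype's ability to reach the rest position quickly, independently of the direction previously set.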

Figure 11 shows the prototype built using two 180º servo motors. The joystick is positioned at the bottom of the device. The right image of Figure 11 shows, at the top of the prototype, the control Arduino platform, along with a command LED display and connection interfaces: USB to a computer (optional), a supply jack, and an RJ11 for the adapted interface. This image also shows the servo motor responsible for setting the direction in which the joystick will be moved. This servo motor controls a central platform where the second motor is located. The left side of Figure 11 shows the interior view of the joystick control system, where the second servo motor produces the final movement and speed through a rod, once the direction is set. It also allows the system to go to the rest position, i.e., to stop the chair quickly, independently of the direction previously set. With this system, all 360º of possible joystick positions can be easily reached.

Fig. 11. Wheelchair joystick control prototype based on two servo motors, along with an adapted interface developed.

### **4. Discussion and results**

First, there will be a brief analysis of the results obtained from surveys of a sample of the disabled community. The respondents were divided into two groups: face-to-face interviews and email consultations. The personal interviews were conducted in the Spinal Cord Injury Association, ASPAYM (Valladolid), and the Agency of Attention and Resources to the Handicapped, CEAPAT (Salamanca). The email consultation was carried out through two virtual network organizations: DISTEC and ListaMedular. Both organizations have been dedicated to the spread of technical assistance devices all over the world and have a large number of users.

The statistical study of 40 disabled persons can be summarized to yield some conclusions of interest for guided assistance system designers. The respondents clearly associated these systems with patients with severe motor disabilities. Over 80% of them preferred guidance systems that are not fully automatic, so that users can have some sort of continuous interaction with them. Finally, over 90% of the sample population declared themselves willing to pay the cost of such systems.

Next, the experience gained from the use of the developments dealt with in this chapter will be presented. The LEB has been collaborating for several years with the National Paraplegics Hospital of Toledo (HNPT), putting the systems into practice and testing them. The HNPT is a reference center in Spain in this field. This center is, probably, the most appropriate to test the validity of the developments in Rehabilitation Technologies (RT).

Regarding testing in the laboratory, some basic requirements to be met by adapted interfaces, as presented in the introduction of the chapter, are also rated here. These laboratory studies allow a comparison with other similar systems found in the literature. In relation to the interfaces found for handling external robotic systems, a comparative analysis of their advantages and disadvantages given the above basic requirements will be provided.

• System based on voice recognition (SVR). This type of system has some serious drawbacks: first, the complexity of the pattern recognition system that is necessary; second, the difficulty of speaking with other people while performing the guidance; finally, the possibility of interference from environmental conversation in the control system. The main advantage is the ease of deployment of large numbers of orders.

• Motion detection systems of the head or the eye, using cameras (MDC). They have common problems such as the need to install a PC with a camera, the importance of controlling the lighting of the scene, and the interference caused by the use of the system during the patient's normal activity (speaking, freedom to look where interested, etc.).

• Interfaces based on EMG and EOG records generated in muscles during voluntary action (IEMGEOG). The problems in these cases are the need for electrodes, the need to ensure that they maintain good contact, the need for a PC, and the greater complexity of the system. A major advantage is that they can provide an analog control output, and after proper training the versatility of the device guidance can be improved.

• EEG-based interfaces – BCI (BCI). They are very uncomfortable to use and difficult to set up each time they are used. They are slower and less reliable than other methods and, of course, need a PC. Furthermore, the interference is too great for any other activity that the patient is carrying out. The advantage of BCI is the potential to achieve a high level of control in the long term, once the serious technical and biological constraints that currently exist can be overcome.

• Systems based on the detection of residual muscle movements and not based on the registration of bioelectric signals (IRMNBS). These are systems with high reliability because they depend on voluntary actions easily detectable with simple sensors. Moreover, the complexity required is low, limited to recording the sensors' output. On the other hand, if both the action and the muscle that performs it are chosen properly, the system allows the user great freedom to carry out daily activities. In this case, sensors like tongue-tracking ones should be excluded, because the ability to talk while performing the guidance is affected.

• Interfaces based on the detection of head position using inertial sensors (IHPIS). They are more robust than the previous ones and possibly more suitable for marketing. A PC is not necessary, but they require head movements that interfere with the normal activity of the patient.

• System based on control by sniffing (SNIF). Such systems may interfere with the ability to speak while being used. Furthermore, in a situation in which the user breathes with difficulty, due to fatigue or illness, their performance would be affected.

• Autonomous navigation aids for wheelchair users (ANW). They are a good supplement to the above guidance interfaces. These systems are complex and often require the installation of some kind of hardware in the environment and in the wheelchair itself. They also need an adapted interface to select the destination. These systems are justified only for extreme cases of immobility, when the subject's reaction rate is incompatible with the dynamics of guidance. In these cases, the only solution is to choose a destination in advance and let the system do the rest.


A summarized comparative analysis of all the adapted guidance systems found in the related scientific literature is presented in Table 2. The parameters presented in the chapter introduction have been evaluated using six progressive degrees, from very inappropriate (– – –) to very suitable (+ + +).

AU CCRC RI C SCC CF CT
SVR + + + + + + + + – – – +
MDC + + – – – – – – – – –
IEMGEOG + + + + + – – –
BCI + – – – – – – – – – – – –
IRMNBS + + + + + + + + + + +
IHPIS + + + + – + + – +
SNIF ++ ++ + – + + – – +
ANW + + + – – – – – + – – –

Table 2. Comparative analysis between the adapted system interfaces found in the literature and the parameters presented in the introduction. Abbreviations for each of the systems and the parameters have been used. Thus, for columns: AU (adaptation to the user), CCRC (correct choice of the remaining capability to be used), RI (reliability of the interface), C (capacity), SCC (speed and computational cost), CF (comfort) and CT (cost); for rows: SVR (system based on voice recognition), MDC (motion detection systems of the head or the eye, using cameras), IEMGEOG (interfaces based on EMG and EOG records generated by muscles during voluntary action), BCI (EEG-based interfaces), IRMNBS (systems based on the detection of residual muscle movements and not based on the registration of bioelectric signals), IHPIS (interfaces based on the detection of head position using inertial sensors), SNIF (system based on control by sniffing) and ANW (autonomous navigation aids for wheelchair users).

Taking into account the previous discussion and the type of disability to which the developments are directed, the LEB chose the most appropriate option to implement the adapted interfaces. The system presented in this chapter is based on sensors capable of detecting winks performed voluntarily by the user (a particular IRMNBS system). The hardware needed to process the received signals is simple, robust, inexpensive, compatible with commercial wheelchairs and implemented with microcontrollers. The main advantages of this system are listed below:

• Minimal interference with the patient's daily activities. Normal involuntary blinks are not detected, as intended. The winks that activate the system need not be so intense as to require closing the eyes, so the user never loses sight because of the interface.

• Light software implemented on microcontrollers.

• Low cost and easy adaptation to different wheelchair models.

• Simplicity of the hardware.

• More reliable than other systems.

The results obtained from practical trials of the devices have met the expectations of reliability, speed and convenience for the user. The tests performed with the robotic system monitoring and telecare have been successful in the controlled environment of the LEB (Figure 12). Additionally, trials were prepared in private homes equipped with an Internet connection and a Wi-Fi router.

Fig. 12. Control subjects putting into practice the surveillance and telecare system trials.


Regarding the direct control of the electric wheelchair with the adapted interface developed, data were collected from control subjects (Figure 13), pending further trials in the HNPT research center. In this case, it is not critical that the subjects who test the system be disabled, since the residual capability used to control the adapted interface, i.e., the eye winks, is similar in both populations.

The trials consisted of training and two definitive tests that indicate the progression of learning. Tests are based on a predetermined circuit, Figure 14, and the complete protocol is summarized as follows:

• Previous 5 minutes of free training.

• 5 minutes of training after a break.

• First test, scoring errors and the time to complete the circuit.

• Second test, with new errors and time registration.

Fig. 13. Control subject testing the adapted wheelchair control system.

The conditions for conducting the trial are based on a circuit optimized for an estimated duration of one minute. For safety reasons, for users without prior training, the circuit did not contain physical barriers. The recorded values allow an assessment of the usability of the system, user satisfaction and speed of learning.
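One simple way to turn the recorded values into a learning indicator is the relative improvement in errors and completion time between the two tests. This metric is an illustrative assumption, not the chapter's actual evaluation formula:

```python
# Relative improvement between the first and second scored test runs,
# expressed as a percentage for each recorded value (errors, time).

def improvement(first, second):
    """first/second: dicts with 'errors' and 'time_s' keys -> % improvement."""
    return {
        key: 100.0 * (first[key] - second[key]) / first[key]
        for key in ("errors", "time_s")
    }
```

For example, going from 4 errors in 90 s to 2 errors in 72 s is a 50% error reduction and a 20% time reduction.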

Fig. 14. Designed test circuit for both surveillance and telecare system and adapted wheelchair mobility.

Both systems showed an annoying delay in the response to the action ordered by the user through the interface. This effect is caused by the way the interpretation of the control commands has been programmed on the microcontroller. The delay was introduced to help beginners execute the commands performed by a successive wink of each eye. However, this is not a serious problem, since it is simple to modify the code to speed up the response. In this case, the functioning mode would be customized for users who have been properly trained.
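The configurable delay can be pictured as a buffering window in which wink events are collected before a command is issued; shortening the window is the "speed up the response" customization mentioned above. The event names, command mapping and window value below are illustrative assumptions, not the actual firmware logic:

```python
# Sketch of a wink-command interpreter: events arriving within a time
# window are grouped, so a successive wink of each eye can form a single
# compound command. A shorter window gives trained users a faster response.

COMMANDS = {
    ("left",): "turn_left",
    ("right",): "turn_right",
    ("left", "right"): "forward",   # successive wink of each eye
}

def interpret(events, window_ms=800):
    """events: list of (timestamp_ms, eye) wink events -> list of commands."""
    commands, buffer, start = [], [], None
    for t, eye in events:
        if start is not None and t - start > window_ms:
            # window elapsed: issue the buffered command, start a new group
            commands.append(COMMANDS.get(tuple(buffer), "ignored"))
            buffer, start = [], None
        if start is None:
            start = t
        buffer.append(eye)
    if buffer:
        commands.append(COMMANDS.get(tuple(buffer), "ignored"))
    return commands
```

With the default window, a left wink followed 300 ms later by a right wink reads as a single "forward" command; with a 200 ms window, the same sequence yields two separate turn commands.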

### **5. Conclusions**


The design and development of human-machine interfaces for people with severe motor disabilities are activities of great interest for improving the quality of life and independence of such persons. The use of interfaces adapted to their residual abilities helps disabled people train their remaining capabilities and also provides medical specialists with a tool for assessing the patients.

In this chapter, a brief review of techniques and interfaces developed by different researchers for these patients has been presented. The interfaces are generally used to manage domotic and robotic elements with different functions, such as mobility, environment control and communication. In the introduction, some basic requirements that interfaces for people with very low mobility should meet have been defined. Such requirements can be used to assess the suitability of a given interface or to compare different human-machine interfaces. Control interfaces based on slight voluntary actions, such as winks, deserve a high score because they meet most of the conditions.

Once the human-machine interface was developed, it was applied to the control of two different robotic systems, with good results in user satisfaction. Both systems have been described in sufficient detail to be implemented by readers with appropriate technical training. The first of these systems (environment surveillance and telecare of the patient) can be easily modified to use its two-way audio channel, and thus can also perform the communication function. The second system (a mobility system) is based on the control of a conventional electric wheelchair, for which only an inexpensive adaptation between the joystick and the wink interface is needed. This circumstance significantly increases the accessibility of the system for patients. Both systems can be programmed for a faster response as users learn to manage the interface.

### **6. Acknowledgments**

This work has been funded by the *research Excellence Group GR72* project of the Regional Administration, *Junta de Castilla y León*, Valladolid, Spain.

### **7. References**


Alonso, A. (1999). Diseño de un sistema de silla de ruedas autoguiada en entornos controlados. *International Simposium on Biomechanic Methods (SIBVA '99)*, Valladolid (Spain), November, 1999.

Alonso, A., de la Rosa, R., del Val, L., Jimenez, M.I. & Franco, S. (2009). A Robot Controlled by Blinking for Ambient Assisted Living, In: *Distributed Computing, Artificial Intelligence, Bioinformatics, Soft Computing, and Ambient Assisted Living. Lecture Notes in Computer Science*, Omatu, S., Rocha, M.P., Bravo, J., Fernández, F., Corchado, E., Bustillo, A. & Corchado, J.M., pp. (839-842), Springer-Verlag, 3-642-02480-7, Berlin (Germany).

Angulo, C., Minguez, J., Díaz, M. & Cabestany, J. (2007). Ongoing Research on Adaptive Smart Assistive Systems for Disabled People in Autonomous Movement. *Proceedings of II International Congress on Domotics, Robotics and Remote-Assistance for All - DRT4all2007*, 84-8473-258-4, Madrid (Spain), April, 2007.

Arduino. (2009). Arduino Duemilanove. In: *ArduinoBoardDuemilanove*, 29 June 2011, Available from: http://www.arduino.cc/en/Main/ArduinoBoardDuemilanove.

Atmel Corporation. (2009). 8-bit AVR Microcontroller with 4/8/16/32 Kbytes in-system programmable flash: ATmega48PA, ATmega88PA, ATmega168PA, ATmega328P. *Datasheet*. San Jose (CA, USA), October 2009.

Azkoitia, J.M., Eizmendi, G., Manterota, I., Zabaleta, H. & Pérez, M. (2007). Non-Invasive, Wireless and Universal Interface for the Control of Peripheral Devices by Means of Head Movements. *Proceedings of II International Congress on Domotics, Robotics and Remote-Assistance for All - DRT4all2007*, 84-8473-258-4, Madrid (Spain), April, 2007.

Barea, R., Boquete, L., Mazo, M. & López, E. (2002). System for assisted mobility using eye movements based on electrooculography. *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, Vol. 10, No 4, December 2002, pp. (209-218), 1534-4320.

Barea, R., Boquete, L., Rodriguez-Ascariz, J.M., Ortega, S. & López, E. (2011). Sensory System for Implementing a Human-Computer Interface Based on Electrooculography. *Sensors*, Vol. 11, No 1, January 2011, pp. (310-328), 1424-8220.

Betke, M., Gips, P. & Fleming, P. (2002). The Camera Mouse: Visual Tracking of Body Features to Provide Computer Access for People With Severe Disabilities. *IEEE Transactions on Neural Systems and Rehabilitation Engineering*, Vol. 10, No 1, March 2002, pp. (1-10), 1534-4320.

De la Rosa, R., Alonso, A., Carrera, A., Durán, R. & Fernández, P. (2010). Man-Machine Interface System for Neuromuscular Training and Evaluation Based on EMG and MMG Signals. *Sensors*, Vol. 10, No 12, December 2010, pp. (11100-11125), 1424-8220.

De Santis, A. & Iacoviello, D. (2009). Robust real time eye tracking for computer interface for disabled people. *Computer Methods and Programs in Biomedicine*, Vol. 96, No 1, October 2009, pp. (1-11), 0169-2607.

Dynamic Controls. (2004). SHARK DK-REMA Remotes. *Installation Manual*. Christchurch (New Zealand), June, 2004.

Dynamic Controls. (2006). DK-PMA SHARK Power Module. *Installation Manual*. Christchurch (New Zealand), June, 2006.

Escolano, C., Antelis, J. & Minguez, J. (2009). Human brain-teleoperated robot between remote places. *Proceedings of the 2009 IEEE International Conference on Robotics and Automation*, 1-4244-2789-5, Kobe (Japan), May, 2009.

Fehr, L., Langbein, W.E. & Skaar, S.B. (2000). Adequacy of power wheelchair control interfaces for persons with severe disabilities: a clinical survey. *Journal of Rehabilitation Research and Development*, Vol. 37, No 3, May-June 2000, pp. (353-360), 0898-2732.

Ferreira, A., Cardoso Celeste, W., Freire Bastos-Filho, T., Sarcinelli-Filho, M., Auat Cheein, F. & Carelli, R. (2007). Development of interfaces for impaired people based on EMG and EEG. *Proceedings of II International Congress on Domotics, Robotics and Remote-Assistance for All - DRT4all2007*, 84-8473-258-4, Madrid (Spain), April, 2007.

Freire Bastos, T., Ferreira, A., Cardoso Celeste, W., Cruz Calieri, D., Sarcinelli Filho, M. & de la Cruz, C. (2009). Silla de ruedas robótica multiaccionada inteligente con capacidad de comunicación. *Proceedings of III International Congress on Domotics, Robotics and Remote-Assistance for All - DRT4all2009*, 978-84-88934-39-0, Barcelona (Spain), May, 2009.

Frizera, A., Cardoso, W., Ruiz, V., Freire Bastos, T. & Sarcinelli, M. (2006). Human-machine interface based on electrobiological signals for mobile vehicles. *Proceedings of the 2006 IEEE International Symposium on Industrial Electronics*, 1-4244-0496-7, Montreal (Quebec, Canada), July, 2006.

Gareth Evans, D., Drew, R. & Blenkhorn, P. (2000). Controlling mouse pointer position using an infrared head-operated joystick. *IEEE Transactions on Rehabilitation Engineering*, Vol. 8, No 1, March 2000, pp. (107-116), 1063-6528.

Hoppenot, P. & Colle, E. (2002). Mobile robot command by man-machine co-operation – Application to disabled and elderly people assistance. *Journal of Intelligent and Robotic Systems*, Vol. 34, No 3, July 2002, pp. (235-252), 0921-0296.

Huo, X., Wang, J. & Ghovanloo, M. (2008). Tracking tongue drive system as a new interface to control powered wheelchairs. *Proceedings of the Rehabilitation Engineering and Assistive Technology Society of North America (RESNA) Conference*, Washington DC (USA), June, 2008.

Iturrate, I., Escolano, C., Antelis, J. & Minguez, J. (2009). Dispositivos robóticos de rehabilitación basados en interfaces cerebro-ordenador: silla de ruedas y robot para teleoperación. *Proceedings of III International Congress on Domotics, Robotics and Remote-Assistance for All - DRT4all2009*, 978-84-88934-39-0, Barcelona (Spain), May, 2009.

Kim, Y.W. (2002). Development of Headset-Type Computer Mouse Using Gyro Sensors for the Handicapped. *Electronics Letters*, Vol. 38, No 22, October 2002, pp. (1313-1314), 0013-5194.

Levine, S.P., Bell, D.A., Jaros, L.A., Simpson, R.C., Koren, Y. & Borenstein, J. (1999). The NavChair Assistive Wheelchair Navigation System. *IEEE Transactions on Rehabilitation Engineering*, Vol. 7, No 4, December 1999, pp. (443-451), 1063-6528.

Maxwell, K.J. (1995). Human-Computer Interface Design Issues, In: *The Biomedical Engineering Handbook*, Bronzino, J.D., pp. (2263-2277), CRC Press - IEEE Press, 0-8493-8346-3, Salem (MA, USA).

Mazo, M., Rodríguez, F.J., Lázaro, J.L., Ureña, J., García, J.C., Santiso, E., Revenga, P. & García, J.J. (1995). Wheelchair for physically disabled people with voice, ultrasonic and infrared sensor control. *Autonomous Robots*, Vol. 2, No 3, September 1995, pp. (203-224), 0929-5593.

Microchip Technology Incorporated. (2001). PIC16F84A: 18-pin enhanced FLASH/EEPROM, 8-bit microcontroller. *Datasheet*. Chandler (AZ, USA), January 2001.

Millán, J. del R., Renkens, F., Mouriño, J. & Gerstner, W. (2004). Noninvasive brain-actuated control of a mobile robot by human EEG. *IEEE Transactions on Biomedical Engineering*, Vol. 51, No 6, June 2004, pp. (1026-1033), 0018-9294.

Millán, J. del R. & Carmena, J.M. (2010). Invasive or noninvasive: understanding brain-machine interface technology. *IEEE Engineering in Medicine and Biology Magazine*, Vol. 29, No 1, January 2010, pp. (16-22), 0739-5175.


**0**

**7**

*Japan*

**for Practical Use**

Shuro Nakajima

**Mobile Platform with Leg-Wheel Mechanism**

Robots that can move over rough terrain with active body leveling are now in strong demand. Such a robot can have a variety of uses, such as carrying packages, assisting people who have

Many robots capable of moving over rough terrain exist as research tools; however, few are suitable for practical use. These robots can be generally classified into the three categories. 1) Legged robots: These have excellent mobility with high stability. The mobility of legged robots has been extensively studied: for example, ASV (Song and Waldron 1989), the TITAN series (Hirose et al. 1985), DANTE II (Bares and Wettergreen 1997), the hexapod robot (Delcomyn and Nelson 2000), Tekken2 (Kimura et al. 2007), and Hyperion3 (Yoneda 2007). 2) Wheeled robots: These are most commonly selected for traversing continuous surfaces that include rough terrain. Because of their stability, maneuverability, and simple controls, wheels are the most frequently used mechanism for exploration rovers. Examples of wheeled mobile robots are Micro5 (Kubota et al. 2003), Rocky7 (Volpe et al. 1997), Shrimp (Siegwart et al. 2002), CRAB (Thueer et al. 2006), and Zaurus (Sato et al. 2007). These have passive linkage mechanisms. SpaceCat (Lauria et al. 1998) and Nanokhod (Winnendael et al. 1999) have active linkage mechanisms. The high-grip stair climber (Yoneda et al. 2009) is a crawler-type

3) Leg-wheel robots: These attempt to combine the advantages of both legs and wheels in various configurations. Work Partner (Halme et al. 2003), Roller Walker (Endo and Hirose 2000), Zero Carrier (Yuan and Hirose 2004), Hylos (Grand et al. 2004), and PAW (J.A. Smith et al. 2006) are equipped with wheels placed at the ends of their legs; the Chariot series (Nakajima and Nakano 2008a,b, 2009a-c,Fig. 1), RoboTrac (Six and Kecskem'ethy 1999), and a wheel chair robot(Morales et al.2006) have separate wheels and legs; Whegs (Quinn et al. 2003; Daltorio et al. 2009) and Epi.q-1(Quaglia et al.2010) have four wheels composed of rotating legs or wheels; and Wheeleg (Lacagnina et al. 2003) has two front legs and two rear wheels. Although a legged robot is highly mobile on rough terrain, the mechanism is complex and more energy is required for walking. On the other hand, while wheeled robots are usually the best solution for continuous terrain, most cannot travel over discontinuous terrain. Generally speaking, a hybrid mechanism like Fig. 1 provides the strengths of both wheels and legs, although such mechanisms tend to be complex. Chariot 3 is equipped with four legs of three degrees of freedom and two independent wheels. On the other hand, Whegs is not complex,

difficulty in walking, and safety monitoring outdoors.

**1. Introduction**

robot.

*The Department of Advanced Robotics, Chiba Institute of Technology*


## **Mobile Platform with Leg-Wheel Mechanism for Practical Use**

Shuro Nakajima
*The Department of Advanced Robotics, Chiba Institute of Technology, Japan*

### **1. Introduction**



Robots that can move over rough terrain with active body leveling are now in strong demand. Such a robot can have a variety of uses, such as carrying packages, assisting people who have difficulty in walking, and safety monitoring outdoors.

Many robots capable of moving over rough terrain exist as research tools; however, few are suitable for practical use. These robots can be generally classified into three categories.

1) Legged robots: These have excellent mobility with high stability. The mobility of legged robots has been extensively studied: for example, ASV (Song and Waldron 1989), the TITAN series (Hirose et al. 1985), DANTE II (Bares and Wettergreen 1997), the hexapod robot (Delcomyn and Nelson 2000), Tekken2 (Kimura et al. 2007), and Hyperion3 (Yoneda 2007).

2) Wheeled robots: These are most commonly selected for traversing continuous surfaces that include rough terrain. Because of their stability, maneuverability, and simple controls, wheels are the most frequently used mechanism for exploration rovers. Examples of wheeled mobile robots are Micro5 (Kubota et al. 2003), Rocky7 (Volpe et al. 1997), Shrimp (Siegwart et al. 2002), CRAB (Thueer et al. 2006), and Zaurus (Sato et al. 2007). These have passive linkage mechanisms. SpaceCat (Lauria et al. 1998) and Nanokhod (Winnendael et al. 1999) have active linkage mechanisms. The high-grip stair climber (Yoneda et al. 2009) is a crawler-type robot.

3) Leg-wheel robots: These attempt to combine the advantages of both legs and wheels in various configurations. Work Partner (Halme et al. 2003), Roller Walker (Endo and Hirose 2000), Zero Carrier (Yuan and Hirose 2004), Hylos (Grand et al. 2004), and PAW (J.A. Smith et al. 2006) are equipped with wheels placed at the ends of their legs; the Chariot series (Nakajima and Nakano 2008a, b, 2009a–c, Fig. 1), RoboTrac (Six and Kecskeméthy 1999), and a wheelchair robot (Morales et al. 2006) have separate wheels and legs; Whegs (Quinn et al. 2003; Daltorio et al. 2009) and Epi.q-1 (Quaglia et al. 2010) have four wheels composed of rotating legs or wheels; and Wheeleg (Lacagnina et al. 2003) has two front legs and two rear wheels.

Fig. 1. A leg-wheel robot. (a) Chariot 3. (b) Chari-Bee, a demonstration robot at Aichi EXPO 2005.

Although a legged robot is highly mobile on rough terrain, its mechanism is complex and more energy is required for walking. On the other hand, while wheeled robots are usually the best solution for continuous terrain, most cannot travel over discontinuous terrain. Generally speaking, a hybrid mechanism like that shown in Fig. 1 provides the strengths of both wheels and legs, although such mechanisms tend to be complex. Chariot 3 is equipped with four legs of three degrees of freedom each and two independent wheels. On the other hand, Whegs is not complex, but the posture of its body cannot be easily controlled. PAW has both wheel and leg modes with a simple mechanism and can control its posture in wheel mode by adjusting each leg tip position; however, PAW cannot get over a step statically while maintaining a horizontal posture. The evaluation point for the mechanism in this chapter is maintaining the horizontal posture of the body on rough terrain statically, because a person or object should be carried stably on the robot.

Fig. 2. RT-Mover series. (a) A middle-size type. (b) A personal mobility vehicle type.

The proposed robots, the RT-Mover series (Nakajima 2011a, b) (Fig. 2), have a simple mechanism and adequate mobility with a stable posture in the following target environments: 1. an indoor environment including a step and an uneven ground surface; 2. an artificial outdoor environment with an uneven ground surface and a bump; and 3. natural terrain such as a path in a forest. RT-Mover's mechanism differs from that of conventional mobile robots: a wheel is mounted at the tip of each leg, and the robot has only four active wheels and only five other active shafts. With an emphasis on minimizing the number of drive shafts, the mechanism is designed around the four-wheeled mobile body widely used in practical locomotive machinery. RT-Mover can move like a wheeled robot and also walk over a step like a legged robot, despite the simplicity of the mechanism.
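To keep the actuator budget concrete, the sketch below (identifiers are mine, not the chapter's) tallies the axes just described: four driven wheels plus the five other active shafts — front and rear steering, front and rear roll adjustment, and pitch adjustment — as listed in the specifications table.

```python
# Illustrative tally (names are hypothetical) of RT-Mover's actuated axes:
# 4 wheel drives + 5 posture/steering shafts = 9 active axes in total.
ACTIVE_SHAFTS = {
    "wheel": ["wheel_fl", "wheel_fr", "wheel_rl", "wheel_rr"],
    "steering": ["theta_sf", "theta_sr"],          # front and rear steering
    "roll_adjustment": ["theta_rf", "theta_rr"],   # front and rear roll
    "pitch_adjustment": ["theta_p"],               # single pitch shaft
}

non_wheel = sum(len(v) for k, v in ACTIVE_SHAFTS.items() if k != "wheel")
total = sum(len(v) for v in ACTIVE_SHAFTS.values())
print(non_wheel, total)   # five active shafts besides the four active wheels
```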

The robot can move on discontinuous, rough terrain while maintaining its platform in a horizontal position. Therefore, it can be used as a stable carrier for an object (Fig. 2(a)) or a person (Fig. 2(b)).

In this chapter, the mechanical design concept for RT-Mover is discussed, and strategies for moving on rough terrain are proposed. The kinematics, stability, and control of the robot are also described in detail. The performance of the proposed locomotion is evaluated through simulations and experiments.

### **2. RT-Mover**

[Model figure: front, side, and top views of RT-Mover, showing the platform, wheels, steering angles *θsf*, *θsr*, and roll angles *θrf*, *θrr*, with parameters L_A = 0.3 [m], L_B = 0.6 [m], L_W = 0.21 [m], R_w = 0.1 [m], h_g = 0.1 [m].]

#### **2.1 Mechanical concept**

The target of this chapter is a practical mobile robot that carries objects or people or is used as a mobile bogie for a service robot. It is necessary to keep objects, the upper half of the onboard parts, and the boarding seat (hereinafter, the platform) horizontal in order to carry people and objects. We also aim to develop a practical robot with high cost performance by reducing the number of drive shafts to the extent possible and employing a simple mechanism.

Table 1 shows the practical use status of robots with various locomotion mechanisms. Wheeled robots have been used for practical applications. On the other hand, robots with complex mechanisms are not currently suitable for practical use because they are difficult to control, operate, and maintain, and they are expensive because of their many actuators.


| Type | Status |
|---|---|
| Wheel | Some examples of practical use (e.g., cleaning robots) |
| Crawler | Few examples of practical use (e.g., in leisure and construction fields) |
| Leg | No examples of practical use to date |
| Hybrid mechanism | No examples of practical use to date |

Table 1. Practical use status of mobile robots with various locomotion mechanisms

A mobile robot used for general purposes should have high speed on a paved road and good ability of locomotion over rough terrain. Although there is no mechanism superior to a wheel for high speed and energy efficiency, a wheeled robot can move only on continuous rough terrain. To move on discontinuous terrain, a leg mechanism is better than a wheel mechanism. Therefore, to perform all the essential functions of mobile robots in rough terrain, both wheel and leg mechanisms are needed.

Table 2 shows the strengths and limitations of leg-wheel robots. Moreover, the target application environment, such as an urban environment, is basically a paved surface upon which transportation by wheel is possible, with several steps that necessitate a leg function in order to reach the destination.


**Strengths**
- High speed movement is possible because of the use of a wheel mechanism.
- Ability of locomotion on rough terrain is good because of the use of a leg mechanism.
- Ability of locomotion can be enhanced by using leg and wheel mechanisms cooperatively.

**Limitations**
- The number of actuators (required for legs) increases, so the cost also rises.
- Reliability and maintainability worsen because of the complexity of the leg mechanism.
- There is a danger of collision between the legs of the robot and a person near the robot if the leg's motion space is wide.

Table 2. Strengths and limitations of leg-wheel robots

Therefore, it is advantageous to reduce the complexity of the leg mechanism to a minimum and to limit each leg's motion space.



#### **2.2 Mechanical design**

We take a four-wheeled mobile body often used in practice as the starting point in considering the mechanism of the proposed robot, and from there we develop the proposed mechanism. When seeking a high degree of ability of locomotion on rough terrain in the frequently used wheel mode, it is clear that each of the four wheels should generate good driving force. In addition, when driving on rough terrain, each wheel of a four-wheeled mobile body must be driven independently, since each wheel travels a different route. Accordingly, this discussion is based on a four-wheeled mobile body that drives each wheel independently.

Cost, reliability, and maintainability are important for practical mobile bodies. These factors can be evaluated from the number of drive shafts to a certain extent. In other words, using fewer drive shafts tends to lower the cost and simplify the mechanism, which in turn leads to increased reliability and maintainability. The above is evident if a comparison is made between practical transport machinery, such as automobiles and trains, and the mobile robot currently being developed.

Since the objective for the robot developed in this chapter is to add the minimum leg functions necessary, a mechanism that minimizes the number of added drive shafts is designed. Specifically, after listing possible mechanisms for each function, the mechanism with the minimum number of drive shafts is chosen for the proposed robot with consideration of the possible combinations.

#### **2.2.1 Steering function**

Practical mechanisms to achieve a steering function for a four-wheeled mobile body are shown in Fig. 3(a). Needless to say, there are other mechanisms to achieve steering, but we aim to realize a practical mobile robot; accordingly, the following discussion targets highly practical, representative mechanisms. The mechanism of 1-1 is the Ackermann steering system used in automobiles. That of 1-3 rotates the center of a shaft to steer. It is possible to attach a steering mechanism to the rear wheels of both 1-1 and 1-3; however, such cases are essentially the same as 1-1 and 1-3 and are omitted from the following discussion. Mechanisms 1-2 and 1-4 are 4-Wheel Steering (4WS) systems in which all four wheels can be directed.
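To make these steering geometries concrete, the following is a hypothetical calculation, not taken from the chapter: it checks the Ackermann condition behind mechanism 1-1 and the antiphase turning radius of a 4WS layout such as 1-2/1-4. The 0.6 [m] tread and the 30 [deg] steering limit come from the robot's specifications; the wheelbase and turn radius are assumptions.

```python
import math

# Ackermann condition (mechanism 1-1): for a shared turn centre the inner
# front wheel steers more sharply than the outer one, satisfying
# cot(outer) - cot(inner) = track / wheelbase.
def ackermann_angles(turn_radius, wheelbase, track):
    inner = math.atan(wheelbase / (turn_radius - track / 2.0))
    outer = math.atan(wheelbase / (turn_radius + track / 2.0))
    return inner, outer

inner, outer = ackermann_angles(turn_radius=1.0, wheelbase=0.6, track=0.6)

# Antiphase 4WS (mechanisms 1-2/1-4): steering front and rear axles in
# opposite directions roughly halves the effective wheelbase, so
# R = (wheelbase / 2) / tan(steer_angle). With an assumed 0.6 [m] wheelbase
# and the 30 [deg] limit this gives about 0.52 [m], consistent with the
# minimum rotation radius quoted for RT-Mover.
R = (0.6 / 2.0) / math.tan(math.radians(30.0))
print(round(R, 2))
```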

#### **2.2.2 Suspension function in wheel mode**

The wheels are required to have active vertical travel according to terrain in order to keep the platform horizontal when moving on gently varying irregular terrain. Systems to fulfill this requirement are shown in Fig. 3(b). The mechanism of 2-1 provides an up-down function to each wheel. The necessary number of drive shafts is four, and the platform can be kept horizontal on rough terrain by the vertical movement of each wheel according to the ground surface.

The mechanism of 2-2 is a bogie mechanism for the left and right wheels that uses a rotating shaft as a drive shaft. Horizontality of the platform in the roll direction can be maintained by active control of the rotating shafts of the front and rear bogie mechanisms in response to the ground surface. Horizontality in the pitch direction is maintained by attaching an independent pitch control shaft. The number of necessary drive shafts is three.

The mechanism of 2-3 is a bogie mechanism for the front and rear wheels.

#### **2.2.3 Suspension mechanism in leg mode**


The following factors are necessary for traveling in leg mode over steps and other such obstacles, which cannot be traversed in wheel mode, while maintaining the platform in a horizontal plane: 1. lifting, moving forward, and landing for each leg; 2. supporting the body with three other legs.

The up-and-down function of each leg is achieved by a wheel mode suspension function. The mechanisms to move each wheel back and forth are shown in Fig. 3(c). The mechanism of 3-1 is a method to affix a back-and-forth drive function to each wheel, and the number of required drive shafts is four. That of 3-2 is the same as steering mechanism 1-4, with the wheels moving back-and-forth through rotation of the center shaft. Both 3-1 and 3-2 can locate the center of gravity of the body in the support polygon and thus support the body by shifting to the front and rear of the landing points of the three legs.
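The support condition just described — the body's center of gravity must stay within the polygon of the three grounded wheels while one leg moves — can be sketched as a simple planar test. The coordinates and function names below are hypothetical:

```python
# Hypothetical static-stability check for leg mode: the projected centre of
# gravity (CoG) must lie inside the support polygon formed by the wheels
# that remain on the ground.

def cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def inside_support_polygon(cog, feet):
    # feet: ground-contact points (x, y) in counter-clockwise order
    n = len(feet)
    return all(cross(feet[i], feet[(i + 1) % n], cog) >= 0 for i in range(n))

# Three supporting wheels while the front-right leg is lifted
# (positions in metres are made up for illustration)
feet = [(-0.3, -0.3), (0.3, -0.3), (-0.3, 0.3)]
print(inside_support_polygon((-0.1, -0.1), feet))  # CoG shifted rearwards: stable
print(inside_support_polygon((0.2, 0.2), feet))    # CoG toward lifted leg: unstable
```

Shifting the body to the front or rear of the landing points, as mechanisms 3-1 and 3-2 allow, is exactly what moves the center of gravity into this polygon before a leg is lifted.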

#### **2.2.4 Mechanism of proposed robot**

The realistically possible combinations of each aforementioned mechanism are examined, and the combinations with the minimum number of drive shafts are determined.

The combinations of each mechanism are listed in Table 3. Impossible or unavailable combinations are indicated by "-".

As mechanism 3-2 is the same as that of 1-4, it is also a steering mechanism. Therefore, 3-2 cannot be combined with 1-1, 1-2, and 1-3 because this would result in unnecessarily duplicating steering functions in equipping the drive shaft. The mechanism by which to move the wheels back and forth is duplicated in the combination of 1-4 and 3-1, so they are not combined. Also, the simultaneous employment of 1-3 and 2-3, or 1-4 and 2-3, results in a physical contradiction because the distance between front and rear wheels is altered, and these are fixed by the bogie mechanism.

Fig. 3. Mechanism for each function. (a) Steering mechanism. (b) Suspension mechanism for wheel mode. (c) Suspension mechanism for leg mode (top view).
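The selection procedure of Section 2.2 can be sketched as a small search over the candidate mechanisms. Only some shaft counts are stated in the text (four for 2-1, three for 2-2, four for 3-1); the remaining counts are my assumptions for illustration, while the exclusions and the 1-3/3-1 special case follow the text.

```python
from itertools import product

# Sketch of the drive-shaft minimisation behind Table 3. Counts marked
# "assumed" are illustrative; 2-1 (four), 2-2 (three), and 3-1 (four) are
# stated in the text.
STEERING   = {"1-1": 1, "1-2": 2, "1-3": 1, "1-4": 2}   # assumed
WHEEL_SUSP = {"2-1": 4, "2-2": 3, "2-3": 3}              # 2-3 assumed
LEG_SUSP   = {"3-1": 4, "3-2": 0}    # 3-2 reuses the shafts of steering 1-4

# Exclusions from the text: 3-2 duplicates the steering of 1-1/1-2/1-3;
# 1-4 with 3-1 duplicates the back-and-forth function; 1-3 or 1-4 with the
# front-rear bogie 2-3 contradicts its fixed wheelbase.
FORBIDDEN = {("1-1", "3-2"), ("1-2", "3-2"), ("1-3", "3-2"),
             ("1-4", "3-1"), ("1-3", "2-3"), ("1-4", "2-3")}

def shaft_count(s, w, l):
    legs = LEG_SUSP[l]
    if s == "1-3" and l == "3-1":
        legs = 2   # front wheels already move back and forth via 1-3 steering
    return STEERING[s] + WHEEL_SUSP[w] + legs

combos = sorted(
    (shaft_count(s, w, l), s, w, l)
    for s, w, l in product(STEERING, WHEEL_SUSP, LEG_SUSP)
    if (s, w) not in FORBIDDEN and (s, l) not in FORBIDDEN
)
print(combos[0])   # cheapest combination under these assumed counts
```

Under these assumed counts the cheapest valid combination is 4WS steering (1-4) with the left-right bogie suspension (2-2) and steering-based leg motion (3-2), at five shafts — consistent with RT-Mover's five non-wheel active shafts — while the 1-3/2-1/3-1 combination evaluates to seven, matching the remark made about Table 3.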


| Steering | Suspension (wheel mode) | Suspension (leg mode) | Number of drive shafts |
|:---:|:---:|:---:|:---:|
| 1-1 | 2-1 | 3-1 / 3-2 | 9 / - |
| 1-1 | 2-2 | 3-1 / 3-2 | 8 / - |
| 1-1 | 2-3 | 3-1 / 3-2 | 8 / - |
| 1-2 | 2-1 | 3-1 / 3-2 | 10 / - |
| 1-2 | 2-2 | 3-1 / 3-2 | 9 / - |
| 1-2 | 2-3 | 3-1 / 3-2 | 9 / - |
| 1-3 | 2-1 | 3-1 / 3-2 | 7 / - |
| 1-3 | 2-2 | 3-1 / 3-2 | 6 / - |
| 1-3 | 2-3 | 3-1 / 3-2 | - / - |
| 1-4 | 2-1 | 3-1 / 3-2 | - / 6 |
| 1-4 | 2-2 | 3-1 / 3-2 | - / 5 |
| 1-4 | 2-3 | 3-1 / 3-2 | - / - |

Table 3. Combinations of each mechanism

The combination of 1-3, 2-1, and 3-1 results in seven drive shafts because the front wheels can already be moved back and forth by the steering function of 1-3, so only the rear wheels need to be moved by mechanism 3-1.

As shown in Table 3, the combination of 1-4, 2-2, and 3-2 gives five drive shafts, which is the minimum. Taken together, the mechanism of the proposed robot consists of four drive wheels and five drive shafts, as shown in Fig. 4. The mechanism that has an intersection of the roll-adjustment axis and the steering axis at the center of the steering arm is called the leg-like axle.

Fig. 4. Assembly drawing of RT-Mover (front, side, and plan views, with the pitch- and roll-adjustment axes labelled). Key dimensions: *LA* = 300[mm], *LB* = 600[mm], *Lw* = 210[mm], *Rw* = 100[mm]; overall length 800[mm], tread 600[mm], height 460[mm], height to bottom 160[mm].

#### **2.2.5 Consideration of major dimensions**

The size of the robot (Fig. 2(a)) is suitable for transporting small objects or serving as a mobile bogie of a service robot. The target is to traverse steps about 0.07[m] high in wheel mode and 0.15[m] high in leg mode, on the assumption that a robot of this size would be used in indoor offices, factories, and public facilities. Although it depends on the coefficient of friction with the floor surface, the wheel mechanism allows the robot to traverse a floor surface with irregularities up to about 2/3 of the wheel radius in height. Accordingly, the wheel radius of the robot being developed is set to 0.1[m]. The specifications of the robot are listed in Table 4.

| Item | Specification |
|---|---|
| Dimensions | Length 0.8[m]; Width 0.63[m] (Tread 0.6[m]); Height 0.46[m]; Height to bottom 0.16[m] |
| Weight | 28[kg] (including the platform at 6.5[kg] and batteries at 5.4[kg]) |
| Wheel | Radius 0.1[m]; Width 0.03[m] |
| Motor (DC servo) | 23[W] (front and rear steering *θsf*, *θsr*: ×2); 40[W] (front and rear roll *θrf*, *θrr*: ×2; pitch *θp*: ×2 (double motor); each wheel: ×4) |
| Gear ratio | 100 (steering *θsf*, *θsr*, and roll *θrf*, *θrr*); 250 (pitch *θp*); 50 (each wheel) |
| Sensor | Encoder (each motor); Current sensor (each motor); Posture angle sensor (roll and pitch of platform) |
| Angle limit | ±30[deg] (steering *θsf*, *θsr*, roll *θrf*, *θrr*, and pitch *θp*) |
| Max speed | 0.63[m/s] |
| Minimum rotation radius | 0.52[m] |
| Power supply | 24[V] lead accumulator |

Table 4. Main specifications

**Steering mechanism.** The length of the steering arm (tread) is 0.6[m], and the maximum angle of the steering, roll-adjustment, and pitch-adjustment axes is ±30[deg]. When the roll-adjustment axis is rotated through 30[deg] so that the wheel on one side is in contact with the ground, the other wheel, attached to the 0.6[m] steering arm, can rise 0.3[m]. This movement range is therefore sufficient for this initial step. Likewise, moving 0.3[m] in the front and rear directions is possible by moving the steering from 0[deg] to 30[deg], so holes of up to 0.3[m] can be crossed (Fig. 7(c)). The radius of rotation is 0.52[m] if the front and rear steering angles are turned a full 30[deg] in antiphase. With regard to locomotion on a slope, back-and-forth movement and traversal of a slope of up to 30[deg] are possible.

#### **3. Control method in wheel mode**

The movement method in wheel mode is shown in Fig. 5. RT-Mover can move on continuous rough terrain while maintaining the platform in a horizontal plane by applying eq. (1) to the pitch-adjustment shaft *θp* and to the front and rear roll-adjustment shafts *θrf*, *θrr*.

Fig. 5. Wheel mode locomotion. (a) Ascending a slope: the pitch-adjustment shaft is controlled so that the platform is maintained horizontally. (b) Crossing a slope: the front and rear roll-adjustment shafts are controlled so that the platform is maintained horizontally. (c) Movement over randomly placed obstacles: the front and rear leg-like axles are controlled so that the robot moves on rough terrain while keeping the platform horizontal.

$$T_d = K(\theta_d - \theta) + D(\dot{\theta}_d - \dot{\theta}) = -K\theta - D\dot{\theta}, \tag{1}$$

where *Td* is the target torque, *θ* is the posture angle of the platform, *θd* is the target posture angle of the platform (= 0), *K* is the angle gain, and *D* is the angular velocity gain.
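Eq. (1) is an ordinary proportional-derivative (PD) regulator driving the platform's posture angle to the target *θd* = 0. A minimal sketch (the function name is ours; the gain values are those listed later for the simulation conditions):

```python
def posture_torque(theta, theta_dot, K, D, theta_d=0.0, theta_d_dot=0.0):
    """PD posture control of eq. (1): T_d = K(theta_d - theta) + D(theta_d_dot - theta_dot).

    With the target posture theta_d = 0 this reduces to T_d = -K*theta - D*theta_dot,
    i.e. a restoring torque opposing any tilt of the platform.
    """
    return K * (theta_d - theta) + D * (theta_d_dot - theta_dot)

# With the pitch gains used in the simulation (K_p = 800 N*m, D_p = 15 N*m*s),
# a platform pitched +0.05 rad and still tilting at +0.1 rad/s receives a
# negative (restoring) torque: 800*(-0.05) + 15*(-0.1) = -41.5 N*m.
T = posture_torque(0.05, 0.1, K=800.0, D=15.0)
```

The same law is reused unchanged for the pitch shaft and both roll shafts; only the gains differ.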


#### **3.1 Assessment of ability of locomotion in wheel mode**

The situations of (a) and (b) shown in Fig. 5 partially appear in (c). Therefore, Fig. 5(c) alone is evaluated by simulation. The conditions employed in the simulation are as follows. Each gain value was obtained experimentally. For this initial step of the study, the velocity is set such that movement in the static state is possible. Since high speeds are a characteristic of wheel-driving systems, traveling states with dynamic behavior will be studied in the near future.

1. *Kp* = 800[N·m], *Dp* = 15[N·m·s], *Krf* = *Krr* = 250[N·m], *Drf* = *Drr* = 10[N·m·s].
2. The speeds of all the wheels are maintained at a constant 0.2[m/s].
3. The front and rear steering angles are maintained at 0.
4. The wheels and steering are controlled using a proportional-derivative (PD) controller.
5. The coefficient of friction between wheel and road is 0.7, and there is no friction on the shafts of the robot.
6. Open Dynamics Engine (ODE) is used for the simulation.

Figure 6 shows a simulation of moving from point A to B in Fig. 6(a) over randomly placed obstacles. In (b) and (c) we see that each adjustment shaft is controlled appropriately and that the platform's posture angle remains horizontal to within ±0.8[deg]. This shows that in wheel mode the platform can move over rough terrain with obstacles about 2/3 the size of the wheel radius, as shown in the figure.

Fig. 6. Simulation of moving over randomly placed obstacles. (a) Shape of the road and a scene from the simulation. (b) Platform's pitch and the angle of the pitch-adjustment shaft. (c) Platform's roll and the angles of the front and rear roll-adjustment shafts for the movement from point A to B in (a).

When a wheel hits an obstacle, the steering shaft is turned by the reaction force from the obstacle. If the robot is required to move exactly straight, the corresponding wheel speed must be adjusted according to both the rotation angle of the steering shaft and that of the roll-adjustment shaft. This is a subject for future study.

#### **4. Gait strategy and control method in leg mode**

The leg mode locomotion method is shown in Fig. 7. As an initial step, evaluations are performed by simulations and experiments taking Fig. 7(a) and (b) as examples.

Fig. 7. Leg mode locomotion. (a) Ascending a step: the robot raises each front wheel using the front leg-like axle, and each rear wheel in the same way. (b) Descending a step: the robot lowers each front wheel using the front leg-like axle, and each rear wheel using the rear leg-like axle in the same way. (c) Stepping over a hole or an obstacle: each front wheel steps over the hole; after that, each rear wheel steps over it in the same way.

In the future, a method for integrating external sensor information with the robot system will be studied because, for example, such information is necessary to recognize a downward step before descending in Fig. 7(b). At the current stage, road shapes are known in advance.

#### **4.1 Step-up gait**

Using the control method of eq. (1), RT-Mover can move over rough terrain where its wheels can remain in continuous contact with the ground. However, with steps higher than the wheel radius or gaps larger than the wheel diameter, the ground contact points of the wheels need to be altered by lifting the wheels. The case of lifting a wheel onto a step that the robot cannot climb in wheel mode is shown in Fig. 8. Assuming that static stability is maintained, a wheel is lifted like a leg while the body is constantly supported on at least three wheels.

Fig. 8. Leg motion of front-left wheel. (a) The front-left wheel is lifted. (b) The front-left wheel is swung. (c) The front-left wheel is landed.

Figure 9 shows the flow of the step-up gait. Before and after an upward step, the robot runs in wheel mode (Fig. 9(a) and (l)). When a front wheel reaches the step, the rear steering is rotated so that the margin of static stability during leg motion increases (Fig. 9(b)). Since RT-Mover cannot adjust the position of its center of gravity, having only a small number of degrees of freedom, the positions of the supporting wheels are adjusted by rotating a steering shaft in order to maintain static stability. Since the leg-side steering shaft is used for moving the lifted wheel forward, static stability is increased by rotating the support-side steering shaft to the limit (-30[deg]). In order to lift a leg, the front roll-adjustment shaft is switched from posture control (eq. (1)) to angle control, and the leg is lifted to the desired height (Fig. 8(a)). Meanwhile, to prevent the platform from inclining, the rear roll-adjustment shaft and pitch-adjustment shaft continue to use posture control. After lifting, the angle of the front roll-adjustment shaft is kept constant, and the lifted wheel is moved forward onto the step (Fig. 8(b) and Fig. 9(c)). Then the lifted wheel is lowered, and when landing is detected, the leg motion of the front-left wheel ends (Fig. 8(c)). As can be seen in Fig. 18(a), the sign of the roll angle of the platform changes from negative to positive at (A) when the wheel lands, so this timing can be used for detection. Next, the front-right wheel becomes the lifted leg (Fig. 9(d) and (e)).

After the front wheels have gone up the step, the robot changes its yaw angle relative to the step to ensure static stability when the rear wheels go up (this is considered in detail in a later section). The robot moves forward keeping the rear steering angle at 30[deg] until its yaw angle reaches the desired value (Fig. 9(f)). After that, it moves forward in wheel mode while maintaining the desired yaw angle (Fig. 9(g) and (h)). When a rear wheel reaches the step, the rear wheels are lifted onto the step in the same way as the front wheels (Fig. 9(i) and (j)). The rear roll-adjustment shaft is controlled using angle control, and the front one by posture control. Finally, the robot changes its yaw angle back to 0 (Fig. 9(k)).

Since the left-right order does not affect this movement, each wheel is lifted in turn in the order front-left, front-right, rear-left, rear-right.

Fig. 9. Flow of processes in the step-up gait
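The "margin of static stability" that the support-side steering rotation increases can be quantified as the smallest distance from the projected center of gravity to an edge of the support polygon (positive when the projection lies inside). The chapter does not give this formula explicitly; a sketch under the common definition, assuming a convex support polygon with vertices in counter-clockwise order:

```python
import math

def stability_margin(cog, support_points):
    """Smallest signed distance from the CoG ground projection to the edges of a
    convex support polygon (vertices in CCW order). Positive means statically
    stable; a larger value gives more margin during the swing-leg motion."""
    margin = float("inf")
    n = len(support_points)
    for i in range(n):
        x1, y1 = support_points[i]
        x2, y2 = support_points[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        # signed distance of cog from the edge line; positive on the interior (left) side
        d = (ex * (cog[1] - y1) - ey * (cog[0] - x1)) / math.hypot(ex, ey)
        margin = min(margin, d)
    return margin

# While one wheel is lifted, the three supporting wheels form a triangle
# (illustrative coordinates); the margin shrinks as the CoG nears an edge.
m_center = stability_margin((0.2, 0.3), [(0.0, 0.0), (0.6, 0.0), (0.3, 0.8)])
m_edge = stability_margin((0.2, 0.05), [(0.0, 0.0), (0.6, 0.0), (0.3, 0.8)])
```

Rotating the support-side steering shaft moves one contact point outward, enlarging this polygon and hence the margin before the leg is lifted.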

#### **4.2 Step-down gait**

Figure 10 shows the flow of the step-down gait. When all wheels support the body, the robot is controlled under wheel mode.

When a front wheel encounters a downward step, the robot changes the yaw angle of its body relative to the step to ensure static stability. So, after the robot reaches the step for the first time (Fig. 10(a)), it moves backward keeping the rear steering angle at -30[deg] until

the yaw angle of its body relative to the step acquires the desired value (Fig. 10(b)). Then, it moves forward to the step maintaining this yaw angle (Fig. 10(c)). When a front wheel reaches the step, the rear steering is rotated so that the margin of static stability during leg motion increases (Fig. 10(d)). First, the front-left leg is lifted (Fig. 10(d)), then the front-right leg (Fig. 10(e) and (f)). After both front wheels have completed the leg motion, the robot changes its yaw angle back to 0[deg] relative to the step (Fig. 10(g)). The yaw angle at this time is not important, because the static stability is sufficient during the rear wheels' leg motion. After coming to the step (Fig. 10(h)), the rear wheels are let down in the same way as the front wheels (Fig. 10(i) and (j)). Finally, the robot again changes the yaw angle of its body to 0 (Fig. 10(k)). The roll-adjustment shaft on the leg side is controlled using angle control, and that on the support side uses posture control.

Fig. 10. Flow of processes in the step-down gait
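Both gaits use the same mode-switching rule: the roll-adjustment shaft on the lifted wheel's side uses angle control, while the other roll shaft and the pitch shaft keep the posture control of eq. (1). A sketch of that rule with our own naming (the string labels are ours, not the authors'):

```python
def shaft_control_modes(lifted_wheel):
    """Control-mode assignment while one wheel performs leg motion.

    Encodes the rule stated in the text: the roll-adjustment shaft on the
    leg side is under angle control; the support-side roll shaft and the
    pitch-adjustment shaft remain under posture control.
    """
    front_lifted = lifted_wheel in ("front-left", "front-right")
    return {
        "front_roll": "angle" if front_lifted else "posture",
        "rear_roll": "posture" if front_lifted else "angle",
        "pitch": "posture",
    }

modes = shaft_control_modes("front-left")
```

When all four wheels support the body again, every shaft reverts to posture control (wheel mode).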

#### **5. Inverse kinematics**

Fig. 11. Frame model for analysis

In this section, the target angle of each joint shaft to achieve the configured leg tip trajectory and the target angle of each wheel when in wheel mode are obtained. In other words, in leg


mode, for example, the inverse kinematics to achieve the trajectory by lifting the transfer leg vertically, swinging it forward, and setting it down vertically to land is described.

A "projection frame" is introduced (Fig. 11), which comprises projecting line segments connecting the wheel supporting points (front arm *Pw*1*Pw*2, and rear arm *Pw*3*Pw*4) and a line segment connecting the centers of the arms (body *PPPQ*) to a horizontal plane. Here, the inverse kinematics are discussed using this projection frame. We use a right-handed coordinate system with the center of the projection frame as the origin. The direction of travel is defined as Y and the vertical axis as Z. Then, the following matrix <sup>0</sup>*Twf l* maps to coordinates with the front-left leg at the origin of the body-centered coordinate system:

$$
{}^0T_{wfl} =
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & C\theta_{p_B} & -S\theta_{p_B} & 0 \\ 0 & S\theta_{p_B} & C\theta_{p_B} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & \frac{L_B}{2} \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} C\theta_{sf} & -S\theta_{sf} & 0 & 0 \\ S\theta_{sf} & C\theta_{sf} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} C\theta_{rf} & 0 & S\theta_{rf} & 0 \\ 0 & 1 & 0 & 0 \\ -S\theta_{rf} & 0 & C\theta_{rf} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \cdot
$$

$$
\begin{pmatrix} 1 & 0 & 0 & -L_A \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -L_r \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & C\theta_w & -S\theta_w & 0 \\ 0 & S\theta_w & C\theta_w & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -R_w \\ 0 & 0 & 0 & 1 \end{pmatrix}
\tag{2}
$$
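Eq. (2) is a chain of elementary homogeneous transforms: the body pitch rotation, a translation of half the body length *LB*/2, the steering rotation *θsf*, the roll rotation *θrf*, the arm offset *LA*, the leg offset *Lr*, the wheel rotation *θw*, and the wheel radius *Rw*. A sketch composing the chain in pure Python (*LA*, *LB*, *Rw* follow the dimensions given earlier; the value of *Lr* is an illustrative placeholder):

```python
import math

def mat_mul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def trans(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def T_wfl(theta_pB, theta_sf, theta_rf, theta_w, LA=0.3, LB=0.6, Lr=0.15, Rw=0.1):
    """Transform chain of eq. (2), body frame -> front-left wheel contact.
    Lr is a placeholder value, not taken from the chapter."""
    T = rot_x(theta_pB)
    for M in (trans(0, LB / 2, 0), rot_z(theta_sf), rot_y(theta_rf),
              trans(-LA, 0, 0), trans(0, 0, -Lr), rot_x(theta_w), trans(0, 0, -Rw)):
        T = mat_mul(T, M)
    return T

# With all joint angles zero the translations simply add up, so the contact
# point sits at (-LA, LB/2, -(Lr + Rw)) in the body frame.
T0 = T_wfl(0.0, 0.0, 0.0, 0.0)
```

Reading the last column of the product gives the wheel-contact position used to derive *Af*(*t*) in the next subsection.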

#### **5.1 Lifting and landing phases**

When lifting or landing the front-right wheel, an angular velocity value will be determined for the front roll-adjustment shaft. In order to avoid contacting a lateral surface of the step, the lifted wheel is moved up and down without moving back or forth. The posture control given by eq. (1) is applied to the pitch adjustment and rear roll-adjustment shafts, and the rotation of the supporting front-left wheel is stopped. In order to widen the supporting polygon, the rear steering shaft is rotated to its steering limit. The control parameters of the front steering shaft, the rear-left wheel, and the rear-right wheel are determined by the value set for the front roll-adjustment shaft.

The derivation of these three control parameters is described in an absolute coordinate system with its origin at the supporting position of the front-left wheel *Pw*1(*t*), as shown in Fig. 12(a). In Fig. 12(a), the positions of the front-right wheel *Pw*2(*t*) and *Pw*2(*t* + Δ*t*), when the front roll-adjustment shaft is rotated for a small amount of time Δ*t*, are calculated. The angular velocity of the front steering shaft $\dot{\theta}_{sf}(t)$ and the velocities of the rear-left and rear-right wheels, **Vw3**(*t*) and **Vw4**(*t*), are also derived. Since the wheel is moved up and down without moving in the Y direction, the Y coordinate of *PP* is constant.

The distance between *Pw*1(*t*) and *PP*(*t*) is *Af*(*t*); since this is half the distance between *Pw*1(*t*) and *Pw*2(*t*), it may be derived from eq. (2). According to eq. (2), *Af*(*t*) depends on the front steering angle *θsf*(*t*), the front roll angle *θrf*(*t*), and the body pitch angle *θpB*(*t*). The value of *Af* after a small incremental movement of *θrf*(*t*) is *Af*(*t* + Δ*t*). Because an analytic solution is difficult, *θsf* and *θpB* are approximated as constant over the interval Δ*t*. Since the lifted wheel moves along a fixed vertical path, the Y coordinate of *Pw*2 is fixed, and the positions are given below:

$$(P_{w2x}(t), P_{w2y}(t)) = \left(2A_f(t)\cos(\theta_{leg}(t) + \theta_B(t)),\ 2A_f(t)\sin(\theta_{leg}(t) + \theta_B(t))\right), \tag{3}$$

$$(P_{w2x}(t+\Delta t), P_{w2y}(t+\Delta t)) = \left(\sqrt{4A_f(t+\Delta t)^2 - P_{w2y}(t)^2},\ P_{w2y}(t)\right), \tag{4}$$

The velocity of *PP* and the small angle Δ*θo*(*t*) are then given by

$$\mathbf{V}_{P_P}(t) = (V_{P_Px}(t), V_{P_Py}(t)) = \left(\frac{P_{w2x}(t+\Delta t) - P_{w2x}(t)}{2\Delta t},\ 0\right), \tag{5}$$

$$\Delta\theta_o(t) = \tan^{-1}\frac{P_{w2y}(t+\Delta t)}{P_{w2x}(t+\Delta t)} - \tan^{-1}\frac{P_{w2y}(t)}{P_{w2x}(t)}. \tag{6}$$

Δ*θo*(*t*) is the sum of the changes in the projected front steering angle *θleg*(*t*) and the body yaw angle *θB*(*t*):

$$\Delta\theta_o(t) = \Delta\theta_{leg}(t) + \Delta\theta_B(t). \tag{7}$$

From these variables, the angular velocity of the front steering $\dot{\theta}_{sf}(t)$ is determined by calculating $\dot{\theta}_{leg}(t)$ and $\dot{\theta}_B(t)$ and using the relationship between $\dot{\theta}_{leg}(t)$ and $\dot{\theta}_{sf}(t)$, which is determined topologically from the relation

$$\theta_{leg}(t) = \theta_{sf}(t)\cos\theta_{p_B}(t) + \theta_{rf}(t)\sin\theta_{p_B}(t), \tag{8}$$

$$\therefore\ \dot{\theta}_{sf}(t) = \frac{\dot{\theta}_{leg}(t) - \dot{\theta}_{rf}(t)\sin\theta_{p_B}(t) + \dot{\theta}_{p_B}(t)\left(\theta_{sf}(t)\sin\theta_{p_B}(t) - \theta_{rf}(t)\cos\theta_{p_B}(t)\right)}{\cos\theta_{p_B}(t)}, \tag{9}$$

where *θpB* is obtained from attitude sensor information on the platform and the pitch adjustment angle. *θleg*(*t*) and *θB*(*t*) are obtained from eqs. (6), (7), and (10) when their initial values are given.

Fig. 12. Calculation model. (a) For the trajectory of a leg tip when raising and lowering a wheel. (b) For *Vw*3 and *Vw*4. (c) For swing phase. (d) For wheel mode.
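The finite-difference step in eqs. (3)–(6) can be sketched numerically: given the projected half-arm length *Af* at *t* and *t* + Δ*t*, eq. (3) places the lifted wheel, eq. (4) advances its X coordinate while holding Y fixed, eq. (5) gives the velocity of the arm center *PP*, and eq. (6) the resulting angle change Δ*θo*. The input values below are illustrative only, not taken from the chapter:

```python
import math

def lifted_wheel_update(Af_t, Af_next, theta_leg, theta_B, dt):
    """One finite-difference step of eqs. (3)-(6)."""
    ang = theta_leg + theta_B
    # eq. (3): current position of the lifted wheel Pw2 relative to Pw1
    x_t = 2 * Af_t * math.cos(ang)
    y_t = 2 * Af_t * math.sin(ang)
    # eq. (4): after dt the Y coordinate is unchanged; X follows from the new arm length
    x_next = math.sqrt(4 * Af_next ** 2 - y_t ** 2)
    # eq. (5): velocity of the arm center P_P (half the wheel displacement per dt)
    v_pp = ((x_next - x_t) / (2 * dt), 0.0)
    # eq. (6): change of the projected steering-plus-yaw angle
    d_theta_o = math.atan2(y_t, x_next) - math.atan2(y_t, x_t)
    return (x_next, y_t), v_pp, d_theta_o

# Illustrative values: the projected arm shortens slightly as the wheel lifts.
p_next, v_pp, d_theta = lifted_wheel_update(0.30, 0.29, 0.3, 0.1, 0.01)
```

As expected, a shrinking *Af* pulls *Pw*2 inward (negative *VPP* in X) while Δ*θo* grows, which eq. (7) then splits between the steering and yaw corrections.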

12 Mobile Robot / Book 3

In leg mode, for example, the inverse kinematics that achieve the trajectory of lifting the transfer leg vertically, swinging it forward, and setting it down vertically to land are described.

A "projection frame" is introduced (Fig. 11), which comprises the projections onto a horizontal plane of the line segments connecting the wheel supporting points (front arm *Pw*1*Pw*2 and rear arm *Pw*3*Pw*4) and of the line segment connecting the centers of the arms (body *PPPQ*). The inverse kinematics are discussed using this projection frame. We use a right-handed coordinate system with the center of the projection frame as the origin, the direction of travel defined as Y, and the vertical axis as Z. Then the following matrix <sup>0</sup>*Twfl* maps to coordinates with the front-left leg at the origin of the body-centered coordinate system:

$$^{0}T_{wfl} =
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & C\theta_{p_B} & -S\theta_{p_B} & 0 \\ 0 & S\theta_{p_B} & C\theta_{p_B} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} C\theta_{sf} & -S\theta_{sf} & 0 & 0 \\ S\theta_{sf} & C\theta_{sf} & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} C\theta_{rf} & 0 & S\theta_{rf} & 0 \\ 0 & 1 & 0 & 0 \\ -S\theta_{rf} & 0 & C\theta_{rf} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & C\theta_{w} & -S\theta_{w} & 0 \\ 0 & S\theta_{w} & C\theta_{w} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -R_{w} \\ 0 & 0 & 0 & 1 \end{pmatrix},
\tag{2}$$

where *Cθ* and *Sθ* abbreviate cos *θ* and sin *θ*, and *Rw* is the wheel radius.
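The chain of homogeneous transforms making up eq. (2) (body pitch, front steering, front roll, wheel rotation, then the offset by the wheel radius *Rw*) can be composed numerically. A minimal sketch; the factor order and the function names are my assumptions, not the author's:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

def trans_z(d):
    T = np.eye(4)
    T[2, 3] = d
    return T

def T_wfl(theta_pB, theta_sf, theta_rf, theta_w, R_w):
    # body pitch . front steering . front roll . wheel rotation . drop by wheel radius
    return rot_x(theta_pB) @ rot_z(theta_sf) @ rot_y(theta_rf) @ rot_x(theta_w) @ trans_z(-R_w)
```

With all angles zero, the map just lowers the origin by the wheel radius, which is a quick sanity check on the composition.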

#### **5.1 Lifting and landing phases**

When lifting or landing the front-right wheel, an angular velocity value is set for the front roll-adjustment shaft. To avoid contact with a lateral surface of the step, the lifted wheel is moved up and down without moving back or forth. The posture control given by eq. (1) is applied to the pitch-adjustment and rear roll-adjustment shafts, and the rotation of the supporting front-left wheel is stopped. To widen the supporting polygon, the rear steering shaft is rotated to its steering limit. The control parameters of the front steering shaft, the rear-left wheel, and the rear-right wheel are determined by the value set for the front roll-adjustment shaft.

The derivation of these three control parameters is described in an absolute coordinate system with its origin at the supporting position of the front-left wheel *Pw*1(*t*), as shown in Fig. 12(a). In Fig. 12(a), the positions of the front-right wheel *Pw*2(*t*) and *Pw*2(*t* + Δ*t*), when the front roll-adjustment shaft is rotated for a small time Δ*t*, are calculated. The angular velocity of the front steering shaft, *θ̇sf*(*t*), and the velocities of the rear-left and rear-right wheels, **Vw3**(*t*) and **Vw4**(*t*), are also derived. Since the wheel is moved up and down without moving in the Y direction, the Y coordinate of *PP* is constant.

Fig. 12. Calculation model. (a) For the trajectory of a leg tip when raising and lowering a wheel. (b) For *Vw*3 and *Vw*4. (c) For the swing phase. (d) For wheel mode.

The distance between *Pw*1(*t*) and *PP*(*t*) is *Af*(*t*); since this is half the distance between *Pw*1(*t*) and *Pw*2(*t*), it can be derived from eq. (2). According to eq. (2), *Af*(*t*) depends on the front steering angle *θsf*(*t*), the front roll-adjustment angle *θrf*(*t*), and the body pitch angle *θpB*(*t*). The value of *Af* after a small incremental movement of *θrf*(*t*) is *Af*(*t* + Δ*t*). Because an analytic solution is difficult, *θsf* and *θpB* are approximated as constant over the interval Δ*t*. Since the lifted wheel moves along a fixed vertical path, the Y coordinate of *Pw*2 is fixed, and

$$(P_{w2x}(t), P_{w2y}(t)) = \left(2A_f(t)\cos(\theta_{leg}(t) + \theta_B(t)),\ 2A_f(t)\sin(\theta_{leg}(t) + \theta_B(t))\right),\tag{3}$$

$$(P_{w2x}(t+\Delta t), P_{w2y}(t+\Delta t)) = \left(\sqrt{4A_f(t+\Delta t)^2 - P_{w2y}(t)^2},\ P_{w2y}(t)\right).\tag{4}$$

*θleg*(*t*) and *θB*(*t*) are obtained from eqs. (6), (7), and (10) when their initial values are given. The velocity of *PP* and the small angle Δ*θo*(*t*) are given by

$$\mathbf{V\_{P\_P}}(t) = (V\_{P\_{px}}(t), V\_{P\_{py}}(t)) = (\frac{P\_{w2x}(t + \Delta t) - P\_{w2x}(t)}{2\Delta t}, 0), \tag{5}$$

$$
\Delta\theta\_o(t) = -\tan^{-1}\frac{P\_{w2y}(t)}{P\_{w2x}(t)} - \tan^{-1}\frac{P\_{w2y}(t+\Delta t)}{P\_{w2x}(t+\Delta t)}.\tag{6}
$$

Δ*θo*(*t*) is the sum of the changes in the projected front steering angle *θleg*(*t*) and the body yaw angle *θB*(*t*):

$$
\Delta\theta_{o}(t) = \Delta\theta_{leg}(t) + \Delta\theta_{B}(t). \tag{7}
$$
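Eqs. (3) to (6) form a small per-step update for the lifted wheel: hold its Y coordinate, update *Af*, then read off the midpoint velocity and the small rotation Δ*θo*. A minimal Python sketch; the function name and the input values in the usage note are mine, and the sign convention of eq. (6) is reproduced as printed:

```python
import math

def lifting_phase_update(A_f_t, A_f_next, theta_leg, theta_B, dt):
    """One small-time-step update following eqs. (3)-(6) for the lifted wheel."""
    ang = theta_leg + theta_B
    # eq. (3): position of P_w2 at time t
    x_t = 2.0 * A_f_t * math.cos(ang)
    y_t = 2.0 * A_f_t * math.sin(ang)
    # eq. (4): the Y coordinate stays fixed while the wheel moves vertically
    x_next = math.sqrt(4.0 * A_f_next**2 - y_t**2)
    # eq. (5): P_P is the midpoint of P_w1 P_w2, hence the factor 1/2
    v_ppx = (x_next - x_t) / (2.0 * dt)
    # eq. (6), with the sign convention as printed in the text
    d_theta_o = -math.atan2(y_t, x_t) - math.atan2(y_t, x_next)
    return (x_t, y_t), (x_next, y_t), (v_ppx, 0.0), d_theta_o
```

For a shrinking *Af* (wheel rising onto a step), the X coordinate of *Pw*2 decreases and *PP* is pulled toward the supporting wheel, so the X velocity of *PP* comes out negative.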

From these variables, the angular velocity of the projected front steering shaft *θ̇leg*(*t*) and the angular velocity of the front steering shaft *θ̇sf*(*t*) are determined by calculating *θ̇B*(*t*) and using the relationship between *θ̇leg*(*t*) and *θ̇sf*(*t*), which follows from the geometric relations below.

$$
\theta_{leg}(t) = \theta_{sf}(t)\cos\theta_{p_B}(t) + \theta_{rf}(t)\sin\theta_{p_B}(t), \tag{8}
$$

$$\dot{\theta}_{sf}(t) = \frac{\dot{\theta}_{leg}(t) - \dot{\theta}_{rf}(t)\sin\theta_{p_B}(t) + \dot{\theta}_{p_B}(t)\left(\theta_{sf}(t)\sin\theta_{p_B}(t) - \theta_{rf}(t)\cos\theta_{p_B}(t)\right)}{\cos\theta_{p_B}(t)},\tag{9}$$

where *θpB* is obtained from attitude sensor information on the platform and the pitch adjustment angle.
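Eq. (9) is the time derivative of eq. (8) solved for *θ̇sf*, which can be checked numerically. The linear test trajectories below are arbitrary choices of mine, not values from the chapter:

```python
import math

# Hypothetical smooth test trajectories (arbitrary; only their slopes matter here)
sf = lambda t: 0.10 + 0.20 * t   # theta_sf,  d/dt = 0.20
rf = lambda t: 0.05 + 0.10 * t   # theta_rf,  d/dt = 0.10
pB = lambda t: 0.20 + 0.05 * t   # theta_pB,  d/dt = 0.05

def theta_leg(t):
    # eq. (8)
    return sf(t) * math.cos(pB(t)) + rf(t) * math.sin(pB(t))

def theta_sf_dot(t, leg_dot, rf_dot=0.10, pB_dot=0.05):
    # eq. (9)
    c, s = math.cos(pB(t)), math.sin(pB(t))
    return (leg_dot - rf_dot * s + pB_dot * (sf(t) * s - rf(t) * c)) / c

# Finite-difference theta_leg_dot at t = 1, fed through eq. (9),
# should recover the analytic slope 0.20 of sf(t).
h = 1e-6
leg_dot = (theta_leg(1 + h) - theta_leg(1 - h)) / (2 * h)
recovered = theta_sf_dot(1.0, leg_dot)
```

The recovered value agrees with the analytic slope to roughly machine precision, confirming that eq. (9) is consistent with eq. (8).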



The angular velocity of the body rotation *θ̇B* is

$$\dot{\theta}\_B(t) = \frac{V\_{P\_{Q^\times}}(t) - V\_{P\_{P^\times}}(t)}{B(t)},\tag{10}$$

where *B* is the length of the projection body and *VPQx* is the x element of the velocity of *PQ* (Fig. 12(b)). *B*(*t*) is the length between <sup>0</sup>*PP* and <sup>0</sup>*PQ*, where <sup>0</sup>*PP* and <sup>0</sup>*PQ* are the positions of *PP* and *PQ* in the body-centered coordinate system. The velocity of *PQ*, **VPQ** , is given by

$$\mathbf{V\_{P\_Q}}(t) = \left(^{\mathbf{0}}\mathbf{P\_Q}(t) - \,^{\mathbf{0}}\mathbf{P\_Q}(t - \Delta t) - \Delta \mathbf{O\_0}\right) / \Delta t,\tag{11}$$

where Δ**Oo** is the movement of the origin of the body-centered coordinate system relative to the absolute coordinate system:

$$
\Delta \mathbf{O\_0} = {}^0 \mathbf{P\_{W1}}(t) - {}^0 \mathbf{P\_{W1}}(t - \Delta t). \tag{12}
$$

The angular velocity of the front steering shaft *θ̇sf*, which is one of the three control parameters, is determined by eqs. (6), (7), (9), and (10).

#### **5.1.1 How to derive the velocities of the rear-left and rear-right wheels**

Here, we derive the velocities of the rear-left and rear-right wheels, **Vw3**(*t*) and **Vw4**(*t*). The velocity generated at point *PP* when the rear-right wheel is stopped (**Vw4** = 0) and the rear-left wheel moves at **Vw3** is **VPPw3**, shown in Fig. 12(b). Defining **VPPw4** similarly, the velocity of *PP*(*t*) is

$$\mathbf{V}\_{\mathbf{P}\_{\mathbf{P}}}(t) = \mathbf{V}\_{\mathbf{P}\_{\mathbf{P}\mathbf{w}3}}(t) + \mathbf{V}\_{\mathbf{P}\_{\mathbf{P}\mathbf{w}4}}(t). \tag{13}$$

The relationships between **Vw3** and **VPPw3** , and between **Vw4** and **VPPw4** are

$$\mathbf{V}_{w3}(t) = \frac{2A_r(t)}{LR(t)}\mathbf{V}_{P_Pw3}(t), \tag{14}$$

$$\mathbf{V}_{w4}(t) = \frac{2A_r(t)}{LL(t)}\mathbf{V}_{P_Pw4}(t), \tag{15}$$

where *LR*(*t*) and *LL*(*t*) are obtained from *B*(*t*), *θsup*(*t*), and the distance *Ar*(*t*) between *Pw*3(*t*) and *PQ*(*t*) in Fig. 12(b).

The velocities of the rear-left wheel and the rear-right wheel are determined by eqs. (5), (13), (14), and (15).

#### **5.2 Swing phase**

Figure 12(c) shows a model of the swing phase, where the origin of the absolute coordinate system is the front-left wheel and the lifted wheel is the front-right wheel. The trajectory is set such that point *PP* draws a circular path around the front-left wheel. The angular velocity of the front steering shaft and the velocities of the rear wheels are determined so that they produce **VPP**. Setting a command value for *θ̇o*, we obtain

$$|\mathbf{V}_{P_P}(t)| = A_f(t)\,|\dot{\theta}_o|, \tag{16}$$

$$\mathbf{V}_{P_P}(t) = \left(-|\mathbf{V}_{P_P}(t)|\sin(\theta_{leg}(t) + \theta_B(t)),\ |\mathbf{V}_{P_P}(t)|\cos(\theta_{leg}(t) + \theta_B(t))\right).\tag{17}$$

With the velocity of point *PP* determined, as in the lifting and landing phases, the three control parameters, the angular velocity of the front steering shaft and the velocities of the rear wheels, can be obtained.
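Eqs. (16) and (17) state that *PP* moves tangentially on a circle of radius *Af* around the supporting wheel. A minimal sketch (the function name is mine):

```python
import math

def swing_velocity(A_f, theta_o_dot, theta_leg, theta_B):
    # eq. (16): speed on the circular path of radius A_f
    speed = A_f * abs(theta_o_dot)
    # eq. (17): direction perpendicular to the projected leg line
    ang = theta_leg + theta_B
    return (-speed * math.sin(ang), speed * math.cos(ang))
```

Whatever the leg orientation, the magnitude of the returned vector stays *Af*·|*θ̇o*|, as eq. (16) requires.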

#### **5.3 Wheel mode**


In Fig. 9(g) and (h), for example, the robot moves with all four wheels supporting the body. Since the velocity of the body center, **VB**, and the angles of the front and rear steering axes in the projection frame, *θleg* and *θsup*, are given as parameters, the desired wheel velocities without slipping, **Vw1** ∼ **Vw4**, are derived. Since each wheel rotates about *OH*, **Vwi** is given by **Vwi**(*t*) = *lwi*(*t*)**VB**(*t*)/*RH*(*t*) (*i* = 1 ∼ 4), where *RH*(*t*) is the turning radius. Except under conditions such as *θleg* = *θsup*, where the front and rear steering angles are equal and the turning radius becomes infinite, the geometry in Fig. 12(d) leads to

$$O_H(t) = (x_H(t), y_H(t)) = \left(\frac{B(t)}{\tan\theta_{sup}(t) - \tan\theta_{leg}(t)},\ \frac{B(t)}{2}\,\frac{\tan\theta_{sup}(t) + \tan\theta_{leg}(t)}{\tan\theta_{sup}(t) - \tan\theta_{leg}(t)}\right) \tag{18}$$

and $R_H(t) = \sqrt{x_H(t)^2 + y_H(t)^2}$. Variables such as *lw*1 are obtained in the form $l_{w1}(t) = |(x_H(t) - P_{w1x}(t))/\cos\theta_{leg}(t)|$. However, when *θleg*(*t*) = *θsup*(*t*), we have **Vwi** = **VB** (*i* = 1 ∼ 4).
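Eq. (18) locates the turning center *OH* as the intersection of the front and rear axle lines in the projection frame, and each wheel speed scales with its distance to *OH*. A sketch of those relations; the helper name is mine, and the direct Euclidean distance to *OH* is used, which coincides with the text's *lw*1 formula for a wheel lying on its steering line through *OH*:

```python
import math

def wheel_mode_velocities(V_B, B, theta_leg, theta_sup, wheels):
    """Wheel speeds in wheel mode via eq. (18). `wheels` maps an index to the
    wheel's (x, y) contact position in the projection frame."""
    if abs(theta_leg - theta_sup) < 1e-9:
        # equal steering angles: infinite turning radius, straight motion
        return {i: V_B for i in wheels}
    denom = math.tan(theta_sup) - math.tan(theta_leg)
    x_H = B / denom
    y_H = 0.5 * B * (math.tan(theta_sup) + math.tan(theta_leg)) / denom
    R_H = math.hypot(x_H, y_H)  # turning radius of the body center
    return {i: math.hypot(x_H - x, y_H - y) * V_B / R_H
            for i, (x, y) in wheels.items()}
```

The outer wheel (farther from *OH*) is commanded a proportionally higher speed, which is exactly the no-slip condition the text derives.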

#### **6. Stability in leg mode**

This section analyzes whether the robot can maintain static stability while moving over the target 0.15[m] step with the gait strategy given above. Static locomotion is considered as a first step. In general, statically stable locomotion is achieved if the center of gravity is located inside the support polygon. Here, the stability of the proposed robot during movement in leg mode is specifically investigated; for example, the best range of the body yaw angle shown in Fig. 9(g) for climbing a step while maintaining stability is derived.

Figure 13(a) shows the static stability when lifting the front-left wheel. Static stability is positive if the center of gravity is in the supporting polygon. Since RT-Mover employs a mechanism with a small number of driving shafts, it cannot move its center of gravity without altering the position of the supporting wheels. In addition, the supporting point of the front-right wheel in Fig. 13(a) cannot move since the lifted wheel is needed to move forward. Thus, the rear steering is used so that the center of gravity stays within the supporting polygon. As shown in Fig. 13(b), if the body inclines backward when going up a step, the center of gravity is displaced backward by *hg* sin *θpB* , where *θpB* is the body pitch angle.
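The stability margin used here (the minimum distance between the center of gravity and the supporting polygon) and the backward shift *hg* sin *θpB* can be sketched as follows, assuming a convex, counter-clockwise polygon; the function names are mine:

```python
import math

def stability_margin(cog, polygon):
    """Minimum distance from the projected center of gravity to the edges of
    the supporting polygon (convex, counter-clockwise); negative = unstable."""
    cx, cy = cog
    dists = []
    n = len(polygon)
    for k in range(n):
        (x1, y1), (x2, y2) = polygon[k], polygon[(k + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        # signed distance: positive when the CoG lies to the left of the edge
        dists.append((ex * (cy - y1) - ey * (cx - x1)) / math.hypot(ex, ey))
    return min(dists)

def cog_shift_on_slope(h_g, theta_pB):
    # backward displacement of the center of gravity on a pitched body (Fig. 13(b))
    return h_g * math.sin(theta_pB)
```

A center of gravity in the middle of a unit-square support gives a margin of 0.5, while a point outside any edge yields a negative margin, matching the sign convention used in Fig. 15.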

Fig. 13. Stability margin (the minimum of the distances d1, d2, and d3 from the center of gravity to the edges of the supporting polygon)

Figure 14(A) shows four phases during the step-up gait. Of the four phases in which a wheel is lifted, only those shown in Fig. 14(A-c) and (A-d) cause static instability, because the center of gravity is displaced backward by the backward inclination of the body and the stability margin consequently decreases. Here, the front steering is rotated up to its limit of ±30[deg] in the direction that increases stability. First, the rear-left wheel is lifted (Fig. 14(A-c)), moved forward, and then lowered. Next, the rear-right wheel is lifted, moved forward, and lowered. The rear steering angle when the rear-right wheel is lifted therefore depends on the rear steering angle when the rear-left wheel is lifted. As can be seen in Fig. 14(A-c) and (A-d), the less the lifted rear-left wheel moves forward, the more static stability the robot has at the beginning of lifting the rear-right wheel. Hence, the rear-left wheel must be advanced by the minimum distance required for going up the step. Since the lifted wheel can be placed on the step from the state shown in Fig. 14(A-c) by advancing it a distance equal to its radius, *θA* is set at tan−1(*R*′*w*/(2*Ar*)), where *R*′*w* = *Rw* + 0.02[m] (margin).

Fig. 14. Four phases during the gait. (A)The step-up gait. (B)The step-down gait.

Since the rear-left wheel is already on the step when lifting the rear-right wheel, the body pitch angle is smaller in (A-d) than in (A-c).

Figure 15 shows the results of numerical calculations of the margin of static stability (the minimum distance between the center of gravity and the supporting polygon) on a 0.15[m] high step. 0.15[m] is the maximum targeted height for the middle size type of RT-Mover.

Fig. 15. Static stability data. (a) Stability margin vs. the rear steering angle at the beginning of lifting the rear-left wheel. (b) Stability margin vs. the rear steering angle at the beginning of lifting the rear-right wheel, after the rear-left wheel's leg motion.



A positive value of static stability indicates that the robot is stable, and a negative one indicates that it is unstable. Figure 15(a) shows that it is possible to go up a 0.15[m] step while maintaining static stability by setting the rear steering angle to be between 8 and 15.5[deg] when lifting the rear-left leg. The most stable angle is 11[deg], so the yaw angle of the robot becomes 11[deg] in Fig. 9(g).

When descending a step, the four phases in Fig. 14(A) occur in reverse order as shown in Fig. 14(B). The positions shown in Fig. 14(B) are at the end of each leg motion, because static stability is smaller than it is at the beginning. Out of the four phases, only those shown in Fig. 14(B-a) and (B-b) cause static instability due to an inclination of the center of gravity. Because the stability of Fig. 14(B-b) is determined by the condition of Fig. 14(B-a) and Fig. 14(B-a) corresponds to Fig. 14(A-d), Fig. 15(b) can be used for discussing the stability margin for the step-down gait. Figure 15(b) shows that it is possible to go down a 0.15[m] step while maintaining static stability by setting the front steering angle to be between −4.5 and 8[deg] when landing the front-left leg. The most stable angle is −1[deg].

For the maximum stable angle, the yaw angle of the robot shown in Fig. 10(c) is set to the value (A) + (B) + (C), where (A) is the most stable angle of Fig. 15(b), (B) is the change in the front steering angle generated by swinging the front-left wheel (*θb* − *θa* in Fig. 16), and (C) is the change in the front steering angle generated by the landing of the front-left wheel (Fig. 16(c)).

Since (A) = −1[deg], (B) = 12[deg], and (C) = 4[deg] for this robot, the yaw angle of the body is determined to be 15[deg] in Fig. 10(c).

Fig. 16. Change of the front steering angle when moving the front-left wheel forward and lowering it

#### **7. Assessment of locomotion ability in leg mode**

#### **7.1 Step-up gait**

The proposed step-up gait was evaluated through a simulation and an experiment. The conditions of the simulation are as follows. The upward step height is 0.15[m], the height to which a wheel is lifted is 0.16[m], the distance that the lifted wheel is moved forward is 0.12[m], the yaw angle of the body relative to the step in Fig. 9(g) is 11[deg], the angular velocity of a roll-adjustment shaft when lifting a wheel is 0.2[rad/s], *θ̇o* in Fig. 12(c) is 0.2[rad/s], the angular velocity of a roll-adjustment shaft when landing a wheel is 0.1[rad/s], and the forward velocity of the body in wheel mode is 0.1[m/s]. In this chapter, the road shape is assumed to be known in advance. The robot starts 0.2[m] from the step, as shown in Fig. 17. The configured values include a margin of 0.01[m] when lifting a wheel onto the 0.15[m] step and a margin of 0.02[m] when moving the wheel forward by the wheel radius of 0.1[m]. The configured value of each process velocity in leg mode is obtained experimentally from a velocity that gives static leg motion. High-speed leg processes for both step-up and step-down gaits are planned as future work.
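The simulation conditions above can be collected in one configuration mapping (the key names are mine; the values are those stated in the text). The built-in margins are then visible directly: the lift height is the step height plus 0.01[m], and the forward swing distance is the wheel radius plus 0.02[m].

```python
# Step-up gait simulation conditions (values from the text; key names are mine)
STEP_UP_CONDITIONS = {
    "step_height_m": 0.15,            # target step
    "wheel_lift_height_m": 0.16,      # step height + 0.01[m] margin
    "swing_forward_distance_m": 0.12, # wheel radius + 0.02[m] margin
    "body_yaw_angle_deg": 11.0,       # most stable angle, Fig. 9(g)
    "roll_shaft_lift_speed_rad_s": 0.2,
    "theta_o_dot_rad_s": 0.2,         # command value in Fig. 12(c)
    "roll_shaft_land_speed_rad_s": 0.1,
    "wheel_mode_speed_m_s": 0.1,
    "start_distance_m": 0.2,
    "wheel_radius_m": 0.1,
}
```
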


Figure 18 shows the posture of the platform, the angles of the front and rear roll-adjustment shafts, the front and rear steering angles, and the static stability during each leg motion. Figure 18(a) shows that the pitch posture angle of the platform is almost kept horizontal. The roll angle of the platform is kept horizontal to within ±3[deg]. At 2.8 ∼ 7.5[s], 9.6 ∼ 14.5[s], 20.4 ∼ 25.0[s], and 27.1 ∼ 31.6[s], the roll angle is larger than at other times because the twisting force around the body, caused by the roll-adjustment shaft that produces the torque for lifting a wheel, disturbs the posture control of the other roll-adjustment shaft. The timings

Figure 18(b) shows the transition of angles of the front and rear roll-adjustment shafts. From 2.8[s] to 7.5[s], the front-left wheel is lifted. First, the wheel is lifted until the front roll-adjustment shaft is rotated at 18[deg] (2.8[s] to 4.9[s]). From 4.9[s] to 5.9[s], the front steering is rotated until it reaches −14.5[deg] so that the wheel moves forward 0.12[m] (Fig. 18(c)). Then the wheel moves downward from 5.9[s] to 7.5[s]. Since the roll angle of the platform changes from negative to positive at 7.5[s]((A) in Fig. 18(a)), the landing of the

Figure 18(c) shows the transition of angles of the front and rear steering shafts. From 2.8[s] to 7.5[s], the front wheels are lifted. While the front-left wheel is lifted, the rear steering shaft rotates to its steering limit of −30[deg] (1.8[s] to 7.5[s]) so that the static stability increases. After lifting the front-left wheel, the wheel is moved forward until the front steering angle becomes −14.5[deg] (4.9[s] to 5.9[s]). While the front-right wheel is lifted, the rear steering shaft is maintained at the steering limit of 30[deg] (9.6[s] to 14.5[s]) so that the static stability increases. The rear steering shaft is also maintained at 30[deg] (14.5[s] to 15.9[s]) after the front wheels are lifted, thereby adjusting the yaw angle of the body relative to the step to 11[deg] for lifting the rear wheels. Rear wheels are lifted between 20.4[s] and 31.6[s]. While the rear-left wheel is lifted, the wheel is moved forward 0.12[m] until the rear steering shaft reaches an angle of −10.8[deg] (22.1[s] to 23.1[s]). The front steering shaft is rotated to ± 30[deg] in order

Figure 18(d) shows the data for static stability only during leg motion, because static stability is large enough during wheel mode. The figure shows that the static stability is maintained. When lifting the front-left wheel, the static stability increases, because the center of gravity of the robot moves backward according to the body pitch (2.8[s] to 4.9[s]). In the swing phase of the front-left wheel, static stability decreases, because the position of the front-right wheel with respect to the body changes and the supporting polygon becomes smaller (4.9[s] to 5.9[s]). Finally, in its landing phase, static stability decreases, because the center of gravity

Fig. 17. Snapshots of the step-up gait simulation (top view; step height 0.15[m])

Fig. 18. Simulation data for the step-up gait. (a) Posture angles of the platform. (b) Front and rear roll-adjustment shafts' angles. (c) Front and rear steering angles. (d) Static stability during each leg motion.

Figure 18 shows the posture of the platform, the angles of the front and rear roll-adjustment shafts, the front and rear steering angles, and the static stability during each leg motion. Figure 18(a) shows that the pitch posture angle of the platform is kept almost horizontal. The roll angle of the platform is kept within ±3[deg] of horizontal. At 2.8 ∼ 7.5[s], 9.6 ∼ 14.5[s], 20.4 ∼ 25.0[s], and 27.1 ∼ 31.6[s], the roll angle is larger than at other times because the twisting force around the body, caused by the roll-adjustment shaft that produces the torque for lifting a wheel, disturbs the posture control of the other roll-adjustment shaft. The timings given are those during each leg motion.
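The posture regulation described above, in which each roll-adjustment shaft is driven to keep the platform horizontal against the twisting disturbance, can be illustrated with a toy PD loop. The gains, timestep, and double-integrator plant below are assumptions for illustration only; the chapter states only that the *D* gains were tuned experimentally:

```python
def pd_roll_regulation(roll0, kp=4.0, kd=4.0, dt=0.01, steps=500):
    """Drive the platform roll angle [deg] toward 0 with a PD law acting
    on a toy double-integrator plant (illustrative, not the authors' model)."""
    roll, rate = roll0, 0.0
    for _ in range(steps):
        accel = -kp * roll - kd * rate  # PD feedback on roll error
        rate += accel * dt
        roll += rate * dt
    return roll

# Starting from a 3[deg] disturbance, the loop settles near horizontal.
print(abs(pd_roll_regulation(3.0)) < 0.1)
```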

Figure 18(b) shows the transition of the angles of the front and rear roll-adjustment shafts. From 2.8[s] to 7.5[s], the front-left wheel is lifted. First, the wheel is lifted until the front roll-adjustment shaft is rotated to 18[deg] (2.8[s] to 4.9[s]). From 4.9[s] to 5.9[s], the front steering is rotated until it reaches −14.5[deg] so that the wheel moves forward 0.12[m] (Fig. 18(c)). Then the wheel moves downward from 5.9[s] to 7.5[s]. Since the roll angle of the platform changes from negative to positive at 7.5[s] ((A) in Fig. 18(a)), the landing of the wheel can be detected. The other legs behave similarly.
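The landing-detection rule just described — landing is judged when the platform roll angle crosses from negative to positive — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation, and the sampled values are invented:

```python
def detect_landing(roll_samples):
    """Return the index at which the roll angle first crosses from
    negative to non-negative, taken here as the landing instant of the
    lifted wheel; None if no crossing occurs in the window."""
    for i in range(1, len(roll_samples)):
        if roll_samples[i - 1] < 0.0 <= roll_samples[i]:
            return i
    return None

# Roll angle [deg] sampled while a lifted wheel descends (invented data).
roll = [-2.1, -1.4, -0.6, -0.1, 0.3, 0.8]
print(detect_landing(roll))  # → 4
```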

Figure 18(c) shows the transition of the angles of the front and rear steering shafts. From 2.8[s] to 7.5[s], the front wheels are lifted. While the front-left wheel is lifted, the rear steering shaft rotates to its steering limit of −30[deg] (1.8[s] to 7.5[s]) so that the static stability increases. After lifting the front-left wheel, the wheel is moved forward until the front steering angle becomes −14.5[deg] (4.9[s] to 5.9[s]). While the front-right wheel is lifted, the rear steering shaft is maintained at the steering limit of 30[deg] (9.6[s] to 14.5[s]) so that the static stability increases. The rear steering shaft is also maintained at 30[deg] (14.5[s] to 15.9[s]) after the front wheels are lifted, thereby adjusting the yaw angle of the body relative to the step to 11[deg] for lifting the rear wheels. The rear wheels are lifted between 20.4[s] and 31.6[s]. While the rear-left wheel is lifted, the wheel is moved forward 0.12[m] until the rear steering shaft reaches an angle of −10.8[deg] (22.1[s] to 23.1[s]). The front steering shaft is rotated to ±30[deg] in order to ensure static stability.

Figure 18(d) shows the data for static stability only during leg motion, because static stability is large enough during wheel mode. The figure shows that the static stability is maintained. When lifting the front-left wheel, the static stability increases, because the center of gravity of the robot moves backward according to the body pitch (2.8[s] to 4.9[s]). In the swing phase of the front-left wheel, static stability decreases, because the position of the front-right wheel with respect to the body changes and the supporting polygon becomes smaller (4.9[s] to 5.9[s]). Finally, in its landing phase, static stability decreases, because the center of gravity of the robot moves forward due to the body pitch (5.9[s] to 7.5[s]).
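The static stability plotted in Fig. 18(d) is commonly measured as the stability margin: the minimum distance from the ground projection of the center of gravity to the edges of the supporting polygon formed by the wheel contact points. A minimal sketch of that computation follows; the contact coordinates and CoG position are hypothetical, not values from the chapter:

```python
import math

def stability_margin(cog, polygon):
    """Minimum distance [m] from the CoG ground projection to the edges
    of the supporting polygon (vertices given in order). Whether the CoG
    lies inside the polygon must be checked separately."""
    def seg_dist(p, a, b):
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))  # clamp the projection onto the segment
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))
    return min(seg_dist(cog, polygon[i], polygon[(i + 1) % len(polygon)])
               for i in range(len(polygon)))

# Hypothetical rectangular support polygon (wheel contacts) and CoG.
contacts = [(0.0, 0.0), (0.6, 0.0), (0.6, 0.8), (0.0, 0.8)]
print(stability_margin((0.3, 0.4), contacts))  # ≈ 0.3 for this rectangle
```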

Figure 19 shows scenes from a step-up gait experiment and the experimental data. The conditions of the experiment are the same as those of the simulation, except that the *D* gains for each shaft are set experimentally. The actual robot can also move up onto the 0.15[m]-high step, and the features of the experimental data are almost the same as those of the simulation data. However, the movement takes about 2.5[s] longer in the experiment than in the simulation. The main reason is that the detection of the landing of each wheel is delayed due to a difference in the posture of the platform between the simulation and the experiment. The inclination of the pitch angle of the platform is larger in the experiment than in the simulation because of the backlash of the pitch-adjustment shaft and the friction acting on it in the actual robot. Thus, the proposed step-up gait proved to be effective.

Mobile Platform with Leg-Wheel Mechanism for Practical Use 147

Fig. 19. Experimental data for the step-up gait. (a) Experimental scenes. (b) Posture angles of the platform.

#### **7.2 Step-down gait**

The proposed step-down gait was evaluated using a simulation and an experiment. Due to space limitations, only the simulation result is shown. The conditions of the simulation are as follows. The downward step height is 0.15[m], the height when lifting a wheel is 0.02[m], the length the lifted wheel is moved forward is 0.12[m], the yaw angle of the body in Fig. 10(c) is 15[deg], the angular velocity of a roll-adjustment shaft when lifting a wheel is 0.2[rad/s], *θ̇*<sub>0</sub> in Fig. 12(c) is 0.2[rad/s], the angular velocity of a roll-adjustment shaft when landing a wheel is 0.1[rad/s], the forward velocity of the body in wheel mode is 0.1[m/s], and the road shape is known in advance. The robot starts at a position 0.2[m] from the step, as shown in Fig. 20. The configured values allow a margin of 0.02[m] in the height by which to lift the wheel and in the length by which to swing the lifted wheel forward. The configured value of each process velocity in leg mode is obtained experimentally from a velocity that gives static leg motion.

Fig. 20. Snapshots of the step-down gait simulation

Figure 20 shows snapshots of the step-down gait simulation. It can be seen that the step-down gait presented in Fig. 10 is performed stably.

### **8. A personal mobility vehicle, RT-Mover P-type**

RT-Mover P-type (Fig. 21), one of the RT-Mover series, is introduced here. This robot can carry a person even on the targeted rough terrain. Its specifications are listed in Table 5. When the roll-adjustment axis is rotated through 30[deg] such that the wheel on one side is in contact with the ground, the other wheel, attached to a 0.65[m] steering arm, can rise 0.325[m]. Therefore, the movement range is sufficient for the targeted terrain. Likewise, moving 0.325[m] in the front and rear directions is possible by moving the steering from 0[deg] to 30[deg], and holes of 0.325[m] can be crossed. With regard to locomotion on a slope, back-and-forth movement and traversal of a slope of up to 30[deg] is possible.

Fig. 21. (a) RT-Mover P-type. (b) On a bank. (c) On a slope. (d) Getting off a train.
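The quoted 0.325[m] range follows directly from the arm geometry: a 0.65[m] steering arm rotated through the 30[deg] limit raises (or advances) the wheel by 0.65 × sin 30[deg] = 0.325[m]. A quick check:

```python
import math

ARM = 0.65                  # steering-arm length [m]
LIMIT = math.radians(30.0)  # rotation limit of the shafts

# Vertical rise of the lifted wheel (roll-adjustment axis at 30[deg]);
# the fore-aft reach of the steering (0 to 30[deg]) is the same value.
rise = ARM * math.sin(LIMIT)
print(round(rise, 3))  # → 0.325
```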

| Item | Specification |
|---|---|
| Dimensions | Length 1.15[m] (excluding footrest); Width 0.70[m] (Tread 0.60[m]); Height to seat 0.58[m]; Height to bottom 0.17[m] |
| Wheel | Radius 0.15[m]; Width 0.03[m] |
| Weight | 80[kg] (including batteries at 20[kg]) |
| Motor | maxon brushless motor 100[W] × 9 |
| Max speed | 4.5[km/h] |
| Power supply | 48[V] lead accumulator |
| Gear ratio | 100 (each wheel, front and rear steering); 820 (pitch-adjustment shaft); 2400 (roll-adjustment shaft) |
| Sensor | Encoder (each motor); Current sensor (each motor); Posture angle sensor (roll and pitch of platform) |
| Angle limit | ±30[deg] (steering, roll-adjustment shaft, and pitch-adjustment shaft) |

Table 5. Main specifications of P-type

In fact, additional motors are attached to the robot, for example, for adjusting the footrest mechanism. Those are, however, not essential functions for moving on rough terrain, so they are not discussed here.

### **9. Assessment of ability of locomotion of P-type**

Evaluations were performed through experiments taking a step-up gait and a step-down gait as examples. The above-mentioned methodology is also used for these gaits. At the current stage, road shapes are known in advance.

Fig. 22 shows data from the step-up walking experiment over a 0.15[m]-high step. The robot can get over a 0.15[m] step with a person riding on it while maintaining the horizontal position of its platform within ±5[deg]. The main conditions are as follows. The angular velocities of a roll-adjustment shaft when lifting and landing the wheel are 0.2[rad/s] and 0.1[rad/s] respectively, that of a steering shaft to put forward the lifted leg is 0.2[rad/s], and the forward velocity of the body in wheel mode is 0.12[m/s]. The configured value of each process velocity in leg mode is obtained experimentally from a velocity that gives static leg motion. There are plans to address high-speed leg processes in the near future.

Fig. 22. Experimental result of the step-up gait. (a) Snapshots. (b) Posture angles of the platform.

Fig. 23 shows data from the step-down walking experiment down a 0.15[m]-high step. The main conditions are basically the same as in the step-up gait experiment. The robot can descend a 0.15[m] step with a person riding on it while maintaining the horizontal position of its platform within ±4.5[deg].

Fig. 23. Experimental result of the step-down gait. (a) Snapshots. (b) Posture angles of the platform.
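The ±5[deg] and ±4.5[deg] posture bounds reported for the riding experiments can be verified from a posture log by checking the largest absolute deviation. The sample values below are invented for illustration, not the actual experimental data:

```python
def max_abs_deviation(samples):
    """Largest absolute posture deviation [deg] in a log."""
    return max(abs(s) for s in samples)

# Hypothetical roll/pitch logs from a step-up run (invented values).
roll_log = [0.4, -1.2, 3.8, -4.6, 2.1]
pitch_log = [1.0, -2.2, 4.9, -3.3]
print(max_abs_deviation(roll_log) <= 5.0 and
      max_abs_deviation(pitch_log) <= 5.0)  # within the ±5[deg] bound
```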

#### **10. Conclusions**

We have developed mobile platforms with a leg-wheel mechanism for practical use, including a full-size personal mobility vehicle (Fig. 24(a)). These are RT-Movers, which have both a wheel mode and a leg mode in a simple mechanism. They have four drivable wheels and two leg-like axles. The wheels are mounted on one side of the leg-like axles at the front and rear of the body. The mechanism is realized with few drive shafts to achieve the minimum necessary leg functions, taking a four-wheel model as the base.

The mechanical design concept was discussed and strategies for moving on rough terrain were proposed. The kinematics, stability, and control method of RT-Mover were also described in detail. Some typical cases of wheel mode and leg mode locomotion were selected, and the robot's ability of locomotion on rough terrain was assessed through simulations and experiments. In every case, the robot was able to move while maintaining the horizontal position of its platform.

We are undertaking joint research with a railway company to develop a personal mobility robot for outdoor use, including on rough terrain. Good coordination between the personal mobility robot and the railway system may also lead to a new type of transportation system (see Fig. 24(b)).

Fig. 24. Snapshots. (a) RT-Mover series. (b) A future transportation system image: seamless connection between the railway system and personal mobility vehicles.

Since this research has just started, there is much work that should be done in the future, for example: 1. allowing for the perception of rough terrain rather than moving over obstacles whose position is known in advance; 2. adapting control methods for moving on different types of rough terrain; 3. dynamic control on rough terrain for high-speed locomotion.





## **A Micro Mobile Robot with Suction Cups in the Abdominal Cavity for NOTES**

Chika Hiroki and Wenwei Yu

*Graduate School of Engineering, Chiba University*  Japan

#### **1. Introduction**


152 Mobile Robots – Current Trends


NOTES (Natural Orifice Translumenal Endoscopic Surgery) is a technique in which forceps are put through a natural orifice, such as the mouth, anus, or vagina, and a hole is cut at the site to reach the intra-abdominal cavity. Because this surgery minimizes incision size and the amount of pain, it can greatly improve patients' quality of life. Although the NOTES approach may hold tremendous potential, there are difficulties that must be overcome before the technique is introduced into clinical care. The most serious one is that, since the distance from the surgeon's fingertip to the targeted site is generally longer than in usual endoscopic operations, manipulation of the forceps is much more difficult, which places a greater burden on the surgeons; meanwhile, there are few surgical devices specifically designed for NOTES.

The aim of this study is to develop surgical devices that facilitate high manipulability and high functionality (cutting, holding tissues, holding a camera) for NOTES.

The biggest issue when developing a device for NOTES support is that it has to show both flexibility and rigidity. On one hand, in order to pass through a long pathway (e.g., the esophagus) to reach a site (e.g., the stomach), it should be flexible enough. On the other hand, after reaching its target site (e.g., the abdominal cavity), it should show sufficient rigidity so that it can stay at the site steadily and perform its tasks, such as holding a camera for inspection and/or a soft forceps during operations.

Existing devices fall into two types. The first type extends the traditional flexible endoscope for single-port access surgery (Xu et al., 2009); it has a built-in camera, forceps, and electric scalpel, all folded into a small cylinder before and during insertion and then deployed after reaching the targeted site. This type of device has sufficient flexibility. However, since the fulcrum of the manipulation (the point that provides support for manipulation) is outside the port, as the distance between the fulcrum and the targeted site increases, the rigidity of the system is reduced by its inherent flexibility, and force becomes even more difficult to transmit to the end-effector, which is usually located at the distal portion of the device.

The robot type goes to the other extreme: the robot moves around the targeted site after being inserted through the port. The fulcrum of manipulation is usually near the end-effector, so the robot type usually has good manipulability. It has been reported that a wheeled surgical robot system can move on the surface of the liver (Rentschler & Reid, 2009). However, that mobile mechanism cannot provide enough mobility to cover the whole abdominal cavity for NOTES support. Moreover, not all surfaces of the inner organs

A Micro Mobile Robot with Suction Cups in the Abdominal Cavity for NOTES 155

While realizing the functions discussed above, there are design constraints that should be met, i.e., the robot should be small and light enough. The constraints come from the size of the overtube for endoscopic operation, and robot's load-bearing ability that depends on the sucking force, and the weight of the robot body, camera and flexible forceps. Considering the inner diameter of the overtube, the diameter of our robot should be limited to less than 16.5 [mm]. However, since the purpose of this study is to investigate the feasibility of the robot, the diametral size of the robot was set to 34 [mm], according to the machining

Different with the other robots and devices using suction cups, the robot developed in our previous study moves in the abdominal cavity, hanging upside down from the abdominal

• Size and weight constraints.

accuracy and parts currently available.

Fig. 1. Construction of the robot

**2.2 The robot with suction cups developed** 

wall and supports NOTES. The first prototype is shown in Fig. 1.

(a) mechanical structure (b) 3-D illustration

(c) side surface (d) upper surface

adopted Ti–Ni alloy as wires for control of the front housing.

Three pairs of wires and guide-tubes were equipped to realize relative movement of the two suction cups, thus realizing the 3-D.O.F. movements for the robot. The three pairs of wires and guide-tubes, together with two air supply ducts, were contained in two housings (front and rear housings). Wires were fixed to the front housing and guide-tubes were fixed to the rear housing. In the front housing, there's a hole for loading and guiding forceps. The diametral size of robot is 34 mm, bigger than the inner diameter of the overtube; after carefully re-inspecting the current design of the robot, we could have an optimistic view that the problem can be solved in the next version of prototype. The wire-u and the guidetube-u are equipped to enable the vertical movement of the robot (shown in Fig. 1). We

are suitable as the movement plane. Sudden involuntary movement of inner organs in the abdominal cavity would be irresistible disturbance to robots. For the use of surgery support in the abdominal cavity, magnetic based mechanism has been employed (Lehman. et. al., 2008a, 2008b, 2008c, 2009; Dumpert, 2009). Generally, adhered to an outer magnet, a movable magnetic object inside the body cavity could be moved through moving the outer magnet. However, high magnet field needed (0.6T-2T) unavoidably gives strong constraint to surgical tools and devices, which increases cost of surgery. Moreover, the safety issues concerning long-term exposure to strong magnet field should also be carefully investigated.

The suction cup has been shown to be an effective way to adhere to living organisms or biological objects, since it causes little damage to the attachment surface. The suction effect on avian livers was investigated by Horie et al. (2007) in developing a micro switchable sucker system. HeartLander (Patronik et al., 2005a, 2005b, 2009), which was designed for moving on the epicardium, moved using suction cups and a wire-driven mechanism. It is recognized as the most successful mobile robotic system working in a human body cavity. However, neither its mobility nor its adhesion in the intraperitoneal environment has been tested. The aim of this study was to construct a robotic system that improves the manipulability and functionality of NOTES, using vacuum suction cups and moving on the peritoneum (the abdominal wall). The advantages of the robotic system, as well as the design details, will be discussed in section 2.

### **2. Construction of the robot**

### **2.1 Basic considerations and specification of the robot**

• Stable movement and surgery support actions.

Since the surface of the intraperitoneal organs, such as the large intestine and the stomach, is unstable due to peristalsis and other factors, we had to consider other possibilities in order to achieve stable movements and surgery-support actions. In this study, we proposed using the peritoneum (the smooth serous membrane that lines the abdominal cavity) as the surface for moving in the abdominal cavity.

• Less damage to the internal environment and fewer electrical devices.

Producing less damage to the internal environment (in view of the issue discussed above, the peritoneum) and using fewer electrical devices are particularly important considering the robot's medical application. In this study, we employed vacuum suction as the means of attaching to the peritoneum and devised a cable-driven mechanism to realize the relative movements of the two suction cups.

• Degree of freedom (D.O.F.) necessary for mobility and surgery support.

In order to move on the peritoneum and guide the forceps to the targeted sites, 3-D.O.F. movements, i.e., moving forward/backward, turning left/right and moving up/down, should be realized. Moving up/down is important because (i) when one of the suction cups is detached from the peritoneum for the relative movement, anti-gravity control is necessary; (ii) the peritoneum is not a flat plane; in particular, after being filled with gas (one procedure during laparoscopic surgery), the peritoneum becomes a dome-like three-dimensional (3-D) landscape; (iii) the elasticity of the peritoneum will increase the difference in elevation; and (iv) the robot should be able to clear the ramp when moving from the overtube to the abdominal cavity.

• Size and weight constraints.

154 Mobile Robots – Current Trends


While realizing the functions discussed above, certain design constraints should be met, i.e., the robot should be small and light enough. The constraints come from the size of the overtube for endoscopic operation and from the robot's load-bearing ability, which depends on the sucking force and on the weight of the robot body, camera and flexible forceps. Considering the inner diameter of the overtube, the diameter of our robot should be limited to less than 16.5 [mm]. However, since the purpose of this study is to investigate the feasibility of the robot, the diametral size of the robot was set to 34 [mm], according to the machining accuracy and the parts currently available.

### **2.2 The robot with suction cups developed**

Unlike other robots and devices using suction cups, the robot developed in our previous study moves in the abdominal cavity hanging upside down from the abdominal wall, and supports NOTES. The first prototype is shown in Fig. 1.

Fig. 1. Construction of the robot: (a) mechanical structure, (b) 3-D illustration, (c) side surface, (d) upper surface

Three pairs of wires and guide-tubes were installed to realize the relative movement of the two suction cups, thus realizing the 3-D.O.F. movements of the robot. The three pairs of wires and guide-tubes, together with two air supply ducts, were contained in two housings (front and rear). The wires were fixed to the front housing and the guide-tubes were fixed to the rear housing. In the front housing, there is a hole for loading and guiding the forceps. The diametral size of the robot is 34 mm, larger than the inner diameter of the overtube; after carefully re-inspecting the current design of the robot, we are optimistic that this problem can be solved in the next version of the prototype. The wire-u and the guide-tube-u are installed to enable the vertical movement of the robot (shown in Fig. 1). We adopted Ti–Ni alloy wires for control of the front housing.

A Micro Mobile Robot with Suction Cups in the Abdominal Cavity for NOTES 157


### **2.3 Control system**

Fig. 2 shows a general view of the robot's control system.

Fig. 2. Robot's control system

The block diagram of the robot control is shown in Fig. 3. Movement in three degrees of freedom can be realized by controlling the state transitions between adsorption and release of the suction cups, as well as the relative displacement of the two housings (each containing a suction cup).

The suction control was achieved using a digital solenoid valve [ZX1103-K16LZ-EC; SMC] that regulates the vacuum pressure generated by a compressor [0.75LP-75; Hitachi Industrial Equipment Systems], which produces air pressure up to 0.69 MPa. The valve switches between adsorption and release of the suction cups according to commands sent from the output ports of a personal computer. Concretely, the negative pressure supplied by the compressor is controlled by an input voltage (ON: 12 V or OFF: 0 V), which corresponds to deaeration (adsorption state) or insufflation (release state) of the suction cups. Moreover, the adsorption state can be detected by the pressure sensor in the valve and sent back to the PC for control.

Fig. 3. Block diagram of the robot control
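For illustration only, the ON/OFF valve logic described above can be sketched as follows; the class, method, and pressure values are our own hypothetical placeholders, not the actual SMC valve or PC output-port driver interface:

```python
# Hypothetical sketch of the suction control: a 12 V (ON) command
# deaerates the cup (adsorption), a 0 V (OFF) command insufflates it
# (release); the valve's pressure sensor reports the state back to the PC.

AMBIENT_KPA = 101.3   # release-state pressure (atmospheric)
VACUUM_KPA = 60.0     # illustrative adsorbed-state line pressure

class SuctionValve:
    ON_VOLTAGE = 12.0   # [V] -> deaeration (adsorption)
    OFF_VOLTAGE = 0.0   # [V] -> insufflation (release)

    def __init__(self):
        self.voltage = self.OFF_VOLTAGE
        self.pressure_kpa = AMBIENT_KPA

    def command(self, adsorb: bool) -> None:
        """Send the digital command and model the resulting line pressure."""
        self.voltage = self.ON_VOLTAGE if adsorb else self.OFF_VOLTAGE
        self.pressure_kpa = VACUUM_KPA if adsorb else AMBIENT_KPA

    def is_adsorbed(self, threshold_kpa: float = 80.0) -> bool:
        """Adsorption state as the PC would infer it from the pressure sensor."""
        return self.pressure_kpa < threshold_kpa

valve = SuctionValve()
valve.command(adsorb=True)
print(valve.is_adsorbed())   # True
valve.command(adsorb=False)
print(valve.is_adsorbed())   # False
```

In the real system the pressure responds through the vacuum line; here it is modeled directly so the state-feedback idea is visible.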

### **2.4 Phase diagrams of movements**

#### 1. Moving forward/backward

The phase diagram of moving forward is shown in Fig. 4(a), where a hatched circle denotes an adsorbed suction cup and an open circle a released one. The motion can be divided into five phases, starting from an initial phase 0 (both suction cups adsorbed), then releasing one suction cup and pushing forward (shown by an arrow marked with F) the wires (to move the front suction cup) or the guide-tubes (to move the rear suction cup), sequentially and repeatedly. The phase diagram of moving backward can be obtained by simply reversing the direction of the forces (arrows).

Fig. 4. Phase diagrams: (a) moving forward, (b) turning right
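The five-phase forward gait can be written down as a simple state sequence; a minimal sketch (the phase labels and tuple layout are ours, not from the original design documents):

```python
# Minimal sketch of the five-phase forward gait of Fig. 4(a):
# (front cup adsorbed?, rear cup adsorbed?, action in this phase).

FORWARD_PHASES = [
    (True,  True,  "phase 0: both suction cups adsorbed (initial state)"),
    (False, True,  "phase 1: release front cup, push wires -> front cup advances"),
    (True,  True,  "phase 2: re-adsorb front cup"),
    (True,  False, "phase 3: release rear cup, push guide-tubes -> rear cup advances"),
    (True,  True,  "phase 4: re-adsorb rear cup"),
]

def stable(front_adsorbed: bool, rear_adsorbed: bool) -> bool:
    # At least one cup must stay adsorbed, or the robot drops off the wall.
    return front_adsorbed or rear_adsorbed

for front, rear, action in FORWARD_PHASES:
    assert stable(front, rear)
    print(action)
```

Moving backward is the same sequence with the push direction of the wires and guide-tubes reversed.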

#### 2. Turning left/right



The turning motion is realized by creating a difference between the stretched-out lengths of the left and right wires. The phase diagram of turning right is shown in Fig. 4(b). Unlike the moving-forward motion, in phase 1, in order to move the front housing to the right, the left wire is pushed out while the right wire length is held fixed; thus, the left wire bends towards the right, and the front housing turns right. Turning left can be achieved by making the stretched-out length of the right wire longer during phase 1. Apart from this difference between the left and right stretched-out wire lengths, the suction control of the turning motion is basically the same as that of moving forward.
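The phase-1 differential-length rule above can be sketched as a tiny steering function; the function name and the 10 mm stroke value are illustrative assumptions, not values from the prototype:

```python
# Sketch of the phase-1 differential wire-length rule for steering:
# pushing one wire out further than the other bends the front housing
# toward the opposite side. The 10 mm stroke is purely illustrative.

def phase1_wire_pushout(direction: str, stroke_mm: float = 10.0):
    """Return (left_wire_mm, right_wire_mm) push-out lengths for phase 1."""
    if direction == "right":
        return (stroke_mm, 0.0)    # push left wire out, hold right -> turn right
    if direction == "left":
        return (0.0, stroke_mm)    # push right wire out, hold left -> turn left
    return (stroke_mm, stroke_mm)  # equal lengths -> straight ahead

print(phase1_wire_pushout("right"))   # (10.0, 0.0)
```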

#### 3. Moving up/down

Vertical movement is also possible, by creating a difference between the stretched-out lengths of the upper wires (the two wires close to the suction cups) and the lower wire (a single wire) (Fig. 5).


Fig. 5. Phase diagram of moving up/down

### **2.5 Experiment with the first prototype**

The motions described in the last section were verified using the first prototype, by manual operation. The robot was operated to move on a piece of transparent film pasted onto a flat, level frame of a laparoscope operation simulation unit (Fig. 6).

Fig. 6. Laparoscope operation simulation unit

It was confirmed that all the motions could be realized by manual operation. The forces needed to operate the robot will be shown in the next section. Moreover, even with a 50 g load, the robot could move forward. Fig. 7 shows pictures of each phase of the robot moving forward with the 50 g load.

Fig. 7. Phase views of moving forward motion with the 50 g weight: (a) Phase 1, (b) Phase 2, (c) Phase 3, (d) Phase 4

### **3. Automatic control system**

As shown before, with manual operation the robot prototype could realize the designed movements, hanging upside down from a transparent film. However, the operation requires manipulating three wires and three guide-tubes independently, and sometimes several of them simultaneously. Therefore, it is complicated and difficult for one surgeon to operate the robot. Thus, it is necessary to develop an automation system that can operate the wires and guide-tubes automatically, as well as a user interface that can receive and translate



the instructions from surgeons to the automation system. This section describes the hardware and software for such a controller: a WGL (Wire and Guide-tube Length) controller.

### **3.1 Developing the WGL controller**

In order to adjust the lengths of the wires and guide-tubes, the manipulation part should be able to: 1) realize a push-and-pull linear motion with a stroke longer than 300 mm and with suitable axial forces; 2) guarantee that the independent linear motions of a paired wire and guide-tube are coaxial; 3) hold the wires and guide-tubes stably and firmly without breaking them. In addition, a control algorithm should be developed to realize effective automatic manipulation.

In this section, we describe our efforts to realize the WGL controller. First, we measured the force required to manipulate the wires and guide-tubes in manual operation. Secondly, we developed a linear motion mechanism using motors and timing belts. Thirdly, we designed a gripping mechanism that is fixed to the timing belt and able to hold the wires and guide-tubes. Finally, we propose control algorithms using sensor information.

### **3.1.1 Measurement of the force required to manipulate wire and guide-tube**

In order to develop the device that controls the lengths of the wires and guide-tubes, it is necessary to measure how much force is actually required for operation. We therefore measured the force required to push/pull each wire and each guide-tube using the force measurement system (Fig. 8(a)). We measured the forces for the four operations of moving forward/backward and turning left/right, three times each. The force gauge shown in Fig. 8(b) was used for the measurement (ZPS-DPU-50N; IMADA).


Fig. 8. Experiment setup for force measurement: (a) force measurement system, (b) force gauge

Fig. 9. Force required to manipulate wire-r in phase 1.

Fig. 9 shows the output of the force gauge, i.e., the force that wire R undergoes during each operation. The following formulas were used to represent the results graphically.

$$F_i = \begin{cases} F_{\mathrm{pull}} & \left( \left| F_{\max} \right| > \left| F_{\min} \right| \right) \\ F_{\mathrm{push}} & \left( \left| F_{\max} \right| < \left| F_{\min} \right| \right) \end{cases} \tag{1}$$

$$F = \frac{1}{3}\sum_{i=1}^{3} F_i \quad (i\colon \text{trial number}) \tag{2}$$

Fig. 10 shows the forces measured in the forward, backward, turning-right and turning-left motions, respectively. It shows that the force required for operation was at most 5 N. From this result, we determined to connect an actuator that outputs a force of more than 5 N to each of the six operating portions.

Fig. 10. Force and time required for each motion
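Eqs. (1)–(2) amount to keeping, for each trial, whichever of the pull/push peaks has the larger magnitude, then averaging the three trials. A small sketch with made-up gauge readings (the numbers below are illustrative, not the measured data of Fig. 10):

```python
# Eq. (1): per trial keep F_pull if |F_max| > |F_min|, else F_push.
# Eq. (2): average the per-trial forces over the three trials.

def trial_force(f_max: float, f_min: float) -> float:
    """Select the dominant peak of one trial (pull positive, push negative)."""
    return f_max if abs(f_max) > abs(f_min) else f_min

def mean_force(trials) -> float:
    """Average the selected per-trial forces."""
    return sum(trial_force(fmax, fmin) for fmax, fmin in trials) / len(trials)

# Hypothetical gauge readings [N]: (F_max = pull peak, F_min = push peak)
trials = [(4.8, -1.2), (5.1, -0.9), (4.5, -1.5)]
print(round(mean_force(trials), 2))   # 4.8
```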

### **3.1.2 A mechanism to realize the linear motions for manipulating wire and guide-tube pairs**

As shown in the last section, the robot motion can be realized by pulling or pushing the wires and guide-tubes to change the relative length between them. The requirements for the mechanism are: linear motion, with a large stroke and a certain level of force (around 5 N). In this research, stepping motors with timing belts and pulleys were employed as the main mechanism. The selected stepping motor is shown in Fig. 11, and its specification is given in Table 1.


Fig. 11. Selected stepping motor (35L048B-2U; Servo Supplies Ltd)



| Parameter | Value |
|---|---|
| Step angle | 7.5 [deg] |
| Hold torque | 2.5 [Ncm] |
| Rotor inertia | 0.04 [kgcm²] |
| Resistance | 64 [Ω] |
| Current | 0.18 [A] |
| Voltage | 12 [V] |
| Inductance | 40 [mH] |
| Mass | 88 [g] |

Table 1. Stepping motor's specification

The reason for selecting a stepping motor from among the various types of motors is that a stepping motor has the following features:

• Sufficiently high positioning accuracy.
• Easy to control, as compared with other motors.
• Position detection without an additional encoder.

The selected timing belt and pulley are shown in Table 2 and Figs. 12 and 13.

Fig. 12. Selected timing belt

Fig. 13. Selected timing pulley

| Parameter | Value |
|---|---|
| Pitch | 2.0 [mm] |
| Number of teeth | 24 |
| Inside diameter | 20 [mm] |
| Belt width | 9 [mm] |
| Timing belt's length | 800 [mm] |

Table 2. Specification of selected timing belt and pulley

Moreover, the linear distance per step [mm/step] (positioning accuracy) can be determined by formula (3).

$$x = \frac{\theta \times P}{360} \tag{3}$$

where x stands for the linear moving distance, θ represents the rotation angle, and P is the product of the pulley's pitch and its number of teeth. Therefore, considering the specifications of the selected motor and pulley (see Tables 1 and 2), the positioning accuracy of the combined system is ±1 [mm].
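Plugging the Table 1 and Table 2 values into formula (3) confirms the stated accuracy; a one-line check (the function name is ours):

```python
# Eq. (3): x = theta * P / 360, with P = pulley pitch x number of teeth.
# Selected components: step angle 7.5 deg, pitch 2.0 mm, 24 teeth.

def mm_per_step(step_angle_deg: float, pitch_mm: float, teeth: int) -> float:
    return step_angle_deg * (pitch_mm * teeth) / 360.0

print(mm_per_step(7.5, 2.0, 24))   # 1.0 (mm per step)
```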

The mechanical system designed to realize the linear motion of the wires and guide-tubes is shown in Fig. 14. The dimensions of this device are as follows: full length 480 [mm], height 100 [mm], and stroke of the wire or tube 376 [mm].

Fig. 14. Designed manipulation mechanism: (a) one-motor-loading type, (b) two-motor-loading type, (c) assembled manipulation mechanism

In order to guarantee that the independent linear motions of a paired wire and guide-tube are coaxial, special consideration was given to the placement of the motor-loading stands and the mounting position of the gripping mechanism.

Two types of motor-loading stands were designed. In the one-motor-loading type (Fig. 14(a)), a motor is loaded on one side only, while in the two-motor-loading type, motors are loaded on both sides, one on each side. The gripping mechanism can be mounted on either the upper or the lower side of the timing belt. Two gripping parts, one mounted on the upper side and one on the lower side, from two different stands, are placed facing each other to manipulate one wire and guide-tube pair.


The motor loading position (height from the horizontal plane) was decided so that the paired gripping parts have the same height. By adjusting the positions of the paired stands, a paired wire and guide-tube can be manipulated coaxially.

### **3.1.3 Gripping mechanism**

In order to transmit the force from the linear mechanism to the wires and guide-tubes, a gripping mechanism that can hold the wires and guide-tubes steadily and attach to the timing belt should be designed. Moreover, because the timing belt is made of rubber, deflection is likely to occur. Thus, it is necessary to consider the following factors in the development of the gripping mechanism:

• To hold the wires and guide-tubes so that they do not slip out of the grip;
• To have teeth with the same pitch as, and meshing with, the timing belt;
• To have self-support against the deflection due to the weight of the part.
Fig. 15 shows the parts designed to realize the gripping mechanism. These parts have a Φ3.0 mm hole for the guide-tube, a Φ0.8 mm hole for the wire, and two Φ2.5 mm holes, one above and one below, for the long sliding shafts. Moreover, the teeth on the parts enable the gripping mechanism to mesh with the timing belt.

Fig. 15. Parts for realizing the gripping mechanism: (a) gripping mechanism for guide-tube, (b) gripping mechanism with an M3 screw, (c) upper gripping mechanism's detail, (d) lower gripping mechanism's detail, (e) gripping mechanism for wire


Fig. 16. Assembled WGL controller: (a) side view of the assembled device, (b) gripped wire and guide-tube, (c) motor and timing belt fixture, (d) gripping mechanism and timing belt

The two screws shown in Fig. 15(b) prevent the wire and guide-tube from slipping out of the gripping mechanism. In Fig. 15, the green part is the gripping mechanism for the guide-tube, and the purple part is the gripping mechanism for the wire. The assembled WGL controlling device is shown in Fig. 16.

### **3.1.4 Sensors for automatic control**

As shown in Fig. 3, the robot control system can be divided into two independent parts: suction control of the suction cups and position control. Detection of the adsorption state is realized by using the pressure sensor of the solenoid valve. The positions of the two housings can be detected using a magnetometric sensor. The magnetometric sensor used, a trakSTAR (Model 130, Ascension Technology Corporation), is shown in Fig. 17, and its specification is shown in Table 3. Although the range of detection is a sphere with a radius of 30 [cm], this is sufficient to cover the abdominal cavity. The block diagram of the control system using this magnetometric sensor is shown in Fig. 18.
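The split into two independent control parts can be sketched as follows. This is an illustrative Python sketch, not the authors' code; the hardware-access callbacks (`read_pressure_sensor`, `read_tracker`) are hypothetical stand-ins for the solenoid valve's pressure sensor and the magnetometric sensor.

```python
# Illustrative sketch of the two independent control parts (not the
# authors' code). The hardware-access callbacks are hypothetical.

ADSORPTION_V = 0.0  # pressure-sensor output when a cup is adsorbed (formula 6)
RELEASE_V = 6.0     # pressure-sensor output when a cup is released

class SuctionControl:
    """Suction control of one cup through the solenoid valve."""

    def __init__(self, read_pressure_sensor):
        self.read_pressure_sensor = read_pressure_sensor  # returns volts

    def is_adsorbed(self, tol=0.5):
        # The adsorption state is detected from the valve's pressure sensor.
        return abs(self.read_pressure_sensor() - ADSORPTION_V) < tol

class PositionControl:
    """Position sensing of the two housings via the magnetometric sensor."""

    def __init__(self, read_tracker):
        self.read_tracker = read_tracker  # returns (x, y, z) in mm

    def position(self):
        return self.read_tracker()
```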

A Micro Mobile Robot with Suction Cups in the Abdominal Cavity for NOTES 167


Fig. 18. Block diagram including the magnetometric sensor

### **3.2 Control algorithm**

In this paper, only the control algorithm for the moving-forward motion is explained in detail, because the other motions can be realized by modifying this algorithm. As shown in the phase transition diagram of the moving-forward motion in Fig. 4(a), the sensors that detect the end of the four phases differ from each other: position sensing is required to end Phases 1 and 3, and adsorption sensing is required to end Phases 2 and 4. The automatic control algorithm can therefore be considered for each phase independently.

• Phase 1

Fig. 17. Magnetometric sensor (Model 130, Ascension Technology Corporation)

| Item | Specification |
|------|---------------|
| Output | coordinate values of x, y and z; azimuth; rotation number |
| Axes | 6 (3-axis position and 3-axis direction) |
| Measurement position range | x: 20-66, y: ±28, z: ±30 [cm] |
| Measurement angle range | azimuth and roll: ±180 [deg]; elevation: ±90 [deg] |
| Static accuracy | position: 1.4 [mm RMS]; direction: 0.5 [deg RMS] |
| Static resolution | position: 0.5 [mm/305 mm]; direction: 0.1 [deg/305 mm] |
| Size | Φ1.5 × 7.7 [mm] |
| Sampling rate | 20-255 [Hz] (default 80 [Hz]) |
| Interface | USB 1.1/2.0 or RS-232 |
| Data format | binary |
| Correspondence | Windows API, driver |

Table 3. Magnetometric sensor's specification

Phase 1 moves the front suction cup (see Fig. 4(a)). Fig. 19 is an illustration of the movement, where x0f is the initial position of the front suction cup, x0r is the initial position of the rear suction cup, xf is the present position of the front suction cup and xA is the target position of the front suction cup. The distance from the initial position of the front suction cup to its target position is given by formula (4):

$$D_f = \left| x_A - x_{0f} \right| \tag{4}$$

Let the tolerance of the target position be ±a; then the end-judgment function g of Phase 1 can be defined by formula (5):

$$g = x_f - x_{0f} - D_f \tag{5}$$

According to the value of the judgment function g, the rotation of the stepping motors for Wires L, R and U (the left, right and up-down wires, respectively) is determined. That is, the robot keeps moving until |g| < a. When g < 0, the motors rotate to move the cup forward; when g > 0, i.e., when the front suction cup has overshot its target, the motors rotate to move it backward.
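Formulas (4) and (5) and the motor-direction rule can be sketched as follows; this is a minimal Python illustration and the function names are ours, not the chapter's:

```python
def phase1_distance(x_A, x_0f):
    """Formula (4): D_f = |x_A - x_0f|, the distance the front cup must move."""
    return abs(x_A - x_0f)

def phase1_judge(x_f, x_0f, D_f):
    """Formula (5): end-judgment function g of Phase 1."""
    return x_f - x_0f - D_f

def phase1_motor_command(g, a):
    """Rotation command for the stepping motors of Wires L, R and U."""
    if abs(g) < a:
        return "stop"                          # within tolerance: Phase 1 ends
    return "forward" if g < 0 else "backward"  # g > 0 means overshoot
```

For example, with D_f = 20 [mm] and a = 1.5 [mm], a cup at x_f = 19.5 [mm] gives g = -0.5, which is inside the tolerance band, so Phase 1 ends.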



Fig. 19. An illustration for the moving forward motion in Phase 1

• Phase 2

The purpose of Phase 2 is to make the front suction cup adsorb to its moving surface. Although deaeration (for the adsorption state) and insufflation (for the release state) can be controlled through the solenoid valve, deaeration alone cannot guarantee adsorption of the front suction cup: only if the suction cup is brought close enough to the moving surface, with the plane of the suction cup almost parallel to it, does deaeration result in the adsorption state. The vertical up/down motion illustrated in Fig. 5 therefore becomes important. This motion is repeated until the front suction cup adsorbs to the moving surface. After each Wire U operation, the adsorption state is confirmed by checking the output of the pressure sensor of the solenoid valve (formula (6)).

$$V_{\text{output}} = \begin{cases} 0\text{ V}: & \text{Adsorption} \\ 6\text{ V}: & \text{Release} \end{cases} \tag{6}$$
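The Phase 2 retry loop, with the formula (6) sensor check, can be sketched as follows. This is an illustrative Python sketch; the three callbacks are hypothetical stand-ins for one Wire U up/down stroke, valve deaeration, and the pressure-sensor read-out.

```python
ADSORPTION_V, RELEASE_V = 0.0, 6.0  # formula (6): 0 V = adsorption, 6 V = release

def phase2_adsorb(step_wire_u, deaerate, read_pressure_v, max_tries=100):
    """Repeat the up/down motion of Fig. 5 until the front cup adsorbs.

    The three callbacks are hypothetical: one Wire U up/down stroke,
    valve deaeration, and reading the valve's pressure sensor [V].
    """
    for _ in range(max_tries):
        step_wire_u()    # bring the cup close and parallel to the surface
        deaerate()       # try to create the adsorption state
        if read_pressure_v() == ADSORPTION_V:
            return True  # adsorption confirmed: Phase 2 ends
    return False         # adsorption never confirmed
```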

• Phase 3

In Phase 3, which is illustrated in Fig. 20, the rear housing (containing the rear suction cup) is moved towards the front housing. In Fig. 20, x0f and x0r express the initial positions at the end of Phase 4 (or Phase 0), xr is the present position of the rear suction cup, x1f is the position of the front suction cup at the end of Phase 1 and xB is the target position of the rear suction cup. As in Fig. 19, *a* stands for the tolerance and Df, defined by formula (4), is the distance the front suction cup was supposed to move. If the moving distance of the rear suction cup were also set to Df, the rear suction cup might contact the front suction cup after this phase, leaving no margin to move the rear suction cup in Phase 4; therefore, the expected moving distance of the rear suction cup is set to Df/2. The end-judgment function of Phase 3 is defined in formula (7):

$$h = x_{1f} - x_r - \frac{D_f}{2} \tag{7}$$

According to the value of the judgment function h, the rotation of the stepping motors for Guide-tubes L, R and U (the left, right and up-down guide-tubes, respectively) is determined. That is, the robot keeps moving until |h| < a. When h > 0, the motors rotate to move the cup forward; when h < 0, i.e., when the rear suction cup has overshot its target, the motors rotate to move it backward.

Fig. 20. An illustration for the moving forward motion in Phase 3

Fig. 21. The lift-up operation of the rear suction cup

• Phase 4


The purpose of Phase 4 is to adsorb the rear suction cup to the moving surface. Since the rear housing with the rear suction cup drops down under its own weight when released at the end of Phase 3, there is a deflection of the wires and guide-tubes at the beginning of Phase 4.


Basically, an operation that lifts the rear suction cup up to the moving surface is required. However, with Wire U, only the front suction cup can be lifted towards the moving surface, as shown in Fig. 5. In this study, we employed the motion shown in Fig. 21 to achieve the same effect: by fixing all three guide-tubes and pulling back Wires L, R and U, the deflection can be removed, or at least reduced. Certainly, through this operation the front housing receives a pulling force, which can affect the adsorption of the front suction cup to the moving surface. Although the experimental results show the feasibility of this operation, in the near future a new mechanism should be designed to improve the lift-up of the rear suction cup.

Except the lift-up operation, the other sensing and operation are just the same as the Phase 2.
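The four phases described above can be summarized as one moving-forward step. The following is a minimal Python sketch under our own naming, not the authors' implementation; every `robot.*` method is a hypothetical stand-in for the hardware operations described in the text.

```python
def forward_step(robot, D_f, a):
    """One moving-forward step (Phases 1-4) as a sketch; every robot.*
    method is a hypothetical stand-in for the hardware described above."""
    x_0f = robot.front_position()  # initial front-cup position
    # Phase 1: advance the front cup by D_f, judged by g (formula 5).
    while abs(g := robot.front_position() - x_0f - D_f) >= a:
        robot.step_front("forward" if g < 0 else "backward")
    # Phase 2: repeat the up/down motion until the front cup adsorbs.
    while not robot.front_adsorbed():
        robot.retry_front_adsorption()
    x_1f = robot.front_position()  # front position at the end of Phase 1
    robot.release_rear()
    # Phase 3: pull the rear cup until h (formula 7) is within tolerance.
    while abs(h := x_1f - robot.rear_position() - D_f / 2) >= a:
        robot.step_rear("forward" if h > 0 else "backward")
    # Phase 4: lift the rear cup by pulling back the wires, then adsorb it.
    while not robot.rear_adsorbed():
        robot.retry_rear_adsorption()
```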

#### **3.3 The effect of parameters and experiment setting-up**

It is clear that the two parameters Df and a influence the behavior of the robot. The parameter a determines the accuracy of the robot. In the experiment, a was set to 1.5 [mm], decided according to the magnetometric sensor accuracy (1.4 [mm]) and the linear motion accuracy (1.0 [mm]). Df, which determines the pace of the robot's motion, also affects the force required for wire operation: because the relative distance between the suction cups becomes large when Df is increased, the bending of the wire also becomes large, so a bigger force is needed in the front-suction-cup adsorption phase (Phase 2).

On the other hand, by setting a smaller Df, the deflection can be reduced and the required force kept within an allowable range. However, this results in a slower moving speed and a bigger influence from the measurement error of the magnetometric sensor. Thus, an optimal Df has to be decided by trial and error. In the experiment, Df was set to 10, 15, 20, 25, 30 and 50 [mm], and the movement speed was calculated for each value.

In order to verify the capability of the developed automatic control algorithm, its operation was verified using a laparoscope operation simulation unit (Fig. 6).

#### **4. Results of the automatic control for moving-forward motion and discussion**

In the experiment, each motion (Phases 1-4) was taken as one step, and 3 consecutive steps were measured and recorded as a trial. During each trial, if the robot fell from the moving surface, or a deadlock due to a shortage of torque occurred, the trial was considered a failure.

When calculating the moving speed, since the robot moves on the x-y plane (cf. Fig. 23(a)), the moving distance was calculated as the square root of the sum of the squared distances along the x and y axes. The moving speed was calculated for each step and for each trial (3 steps).
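The distance computation just described amounts to the Euclidean norm on the x-y plane; a small sketch (the function names are ours):

```python
import math

def travel_distance(p0, p1):
    """Moving distance on the x-y plane (cf. Fig. 23(a)): sqrt(dx^2 + dy^2)."""
    (x0, y0), (x1, y1) = p0, p1
    return math.hypot(x1 - x0, y1 - y0)

def moving_speed(distance_mm, time_s):
    """Moving speed in [mm/s] for one step or one trial."""
    return distance_mm / time_s
```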

Table 4 shows the moving speed [mm/s] in a single trial for each value of Df; the values in brackets show the moving distance [mm] in each case.

For a detailed explanation, the case of Df = 20 [mm] is taken as an example. Fig. 24 shows the output voltage of the pressure sensors, where 0 [V] expresses an adsorption state and 6 [V] a release state; the upper and lower graphs depict the sensor outputs for the front and rear suction cups, respectively. The relationship between the phases and the output voltages is as follows:

Phase 0: Vf = 0, Vr = 0
Phase 1: Vf = 6, Vr = 0
Phase 2: Vf = 6, Vr = 0
Phase 3: Vf = 0, Vr = 6
Phase 4: Vf = 0, Vr = 6 (8)
Here, Vf is the output voltage for the front suction cup and Vr the output voltage for the rear suction cup. From Fig. 24, it is clear that the robot could move 3 steps without falling from the moving surface.

The change of the coordinates of the front and rear suction cups is shown in Fig. 25(a) and Fig. 25(b), respectively. The x, y and z coordinates (see Fig. 23) at the starting point were set to 0. From the figure, it is clear that both suction cups seldom moved in the y and z directions, but moved mostly in the x direction. Moreover, the front suction cup moved about 20 [mm] (the value of Df) in the x direction in Phase 1 of each step, and the rear suction cup moved more than 10 [mm] (Df/2) only in Phase 3 of each step. This shows that the robot is automatically manipulated exactly as the designed control algorithms prescribe. Fig. 26 shows representative situations for each phase of the moving-forward motion. The moving speed for the trial in the case of Df = 20 [mm] was 1.85 [mm/s].

Because the difference in speed between the values of Df was not remarkable, the adsorption sequence for each value of Df was also investigated. As Df increases, the deflection of the wire becomes large and the time required for the adsorption operation becomes long; thus, a trade-off exists between the value of Df and the adsorption time. For each value of Df, we therefore conducted an experiment that repeats the adsorption operation (Phases 2 and 4) and investigated its repeatability, as well as whether the adsorption time changes as Df increases. In the experiment, the suction cup's adsorption state was checked at every motor rotation increment of 7.5 [deg], and the number of motor rotations required for adsorption was measured: the greater the number of rotations, the longer the adsorption time. The period until the suction cup's adsorption is detected is set as one trial, and 10 trials were repeated. Then, the difference between trials (the repeatability of adsorption) and the difference in the number of rotations between values of Df were compared.

The results for Phase 2 are shown in Table 5, which lists, for each value of Df, the number of motor rotations required for adsorption in each trial. From Table 5, for Df ≤ 30 there was no difference in adsorption time between the values of Df, and the adsorption operation was repeatable across trials. However, in the first trial with Df = 50, the number of rotations, and hence the time required for adsorption, became twice that of the other values of Df. Thus, if Df becomes very large, the deflection of the wire has a big influence on the adsorption time and the reproducibility of adsorption.

Next, the rear suction cup's adsorption operation in Phase 4 of the forward motion was investigated. In Phase 4, the relative distance between the suction cups is adjusted by moving the rear suction cup after the front suction cup has moved by the set Df. For this reason, both Df and the relative distance have to be set. Therefore, for each of Df = 30 (almost no influence of wire deflection) and Df = 50 (some influence of wire deflection), the relative distance was varied over 10, 15, 20, 25 and 30 [mm]. The number of motor rotations for each relative distance and each value of Df is shown in Table 6.

(a) front suction cup (b) rear suction cup

Fig. 25. Change of travel distance of each suction cup (Df = 20)

(a) phase 1 (b) phase 2 (c) phase 3 (d) phase 4

Fig. 26. Representative situations for each phase in the moving forward motion in the case of Df = 20 [mm]

| Df [mm] | Steps 1-3 | Step 1 | Step 2 | Step 3 |
|---------|-----------|--------|--------|--------|
| 10 | 1.86 (31.11) | 2.54 (11.41) | 1.70 (10.73) | 1.65 (9.09) |
| 15 | 1.81 (40.56) | 1.90 (13.52) | 1.93 (14.54) | 2.12 (15.53) |
| 20 | 1.85 (51.46) | 2.36 (19.65) | 2.12 (19.84) | 1.59 (15.49) |
| 25 | 1.73 (63.55) | 2.44 (24.33) | 1.87 (22.77) | 1.29 (18.28) |
| 30 | 1.94 (79.78) | 2.70 (29.05) | 1.70 (25.83) | 1.77 (25.92) |
| 50 | 1.95 (127.99) | 2.16 (44.69) | 1.55 (38.13) | 2.37 (47.11) |

Table 4. The speed of moving-forward motion

From Table 6, for relative distances of 25 [mm] and below, the adsorption time and the reproducibility across trials do not depend on the relative distance; it is therefore considered that wire deflection has almost no influence there. On the other hand, at a relative distance of 30 [mm], an increase in the number of rotations and in the adsorption time was confirmed, but only in the case of Df = 50; this increase appeared in every trial with a relative distance of 30 [mm] and Df = 50 [mm]. From this result, it is considered that the wire deflection caused by Df = 50 in Phase 1 influenced not only the front cup's adsorption but also the adsorption operation of the rear cup.

(a) axis direction on the physical simulator (b) axis direction on the WGL controller

Fig. 23. Robot's move and axis direction

Fig. 24. The output of the adsorption switch at the unit moving distance of 20 [mm]




| Df [mm] | 1st trial | 2nd trial | 3rd trial | Average of 4th-10th trials |
|---------|-----------|-----------|-----------|----------------------------|
| 10 | 2 | 2 | 2 | 2 |
| 15 | 2 | 2 | 2 | 2 |
| 20 | 2 | 2 | 2 | 2 |
| 25 | 2 | 2 | 2 | 2 |
| 30 | 2 | 2 | 2 | 2 |
| 50 | 4 | 2 | 2 | 2 |

Table 5. The number of motor rotations for each value of Df in Phase 2

| Relative distance [mm] | Df [mm] | 1st trial | 2nd trial | 3rd trial | Average of 4th-10th trials |
|------------------------|---------|-----------|-----------|-----------|----------------------------|
| 10 | 30 | 2 | 2 | 2 | 2 |
| 10 | 50 | 2 | 2 | 2 | 2 |
| 15 | 30 | 2 | 2 | 2 | 2 |
| 15 | 50 | 2 | 2 | 2 | 2 |
| 20 | 30 | 2 | 2 | 2 | 2 |
| 20 | 50 | 2 | 2 | 2 | 2 |
| 25 | 30 | 2 | 2 | 2 | 2 |
| 25 | 50 | 2 | 2 | 2 | 2 |
| 30 | 30 | 2 | 2 | 2 | 2 |
| 30 | 50 | 3 | 3 | 3 | 3 |

Table 6. The number of motor rotations for each value of Df in Phase 4

### **5. Conclusion**

In this paper, we described a NOTES support robot that uses suction cups and a wire-driven mechanism. The robot has 3 pairs of wire and guide-tube, so it is difficult for surgeons to manipulate during an operation. To realize automatic control of the robot, we developed the WGL controller, which adjusts the relative lengths of the wire and guide-tube pairs, and the control algorithms for it. In the experiment, it was shown that the moving-forward motion could be realized by the automatic control system.

The moving speed was also measured. From the results in Table 4, even when the value of Df was changed, there was no great change in the total movement speed (Steps 1-3); the average moving speed was 1.86 [mm/s].

However, the moving speed of 1.86 [mm/s] is not fast enough for clinical application, and improvement in speed is needed. Also, in this study we investigated only the moving-forward motion; the control algorithms for the other motions should be developed and verified. Furthermore, as Chapter 2 described, the robot size must be less than the overtube's inner diameter of 17 mm (made by TOP Corporation). Moreover, in order for the robot to support various operations, its use in laparoscopic surgery must also be considered; the inner diameter of the port used in laparoscopic surgery is 12 mm (made by Applied Medical Resources Corporation). Therefore, the first aim is a robot smaller than the inner diameter of the overtube, and after that, a robot smaller than the inner diameter of a port. Finally, we have to test the whole robotic system in an in-vivo experiment.


174 Mobile Robots – Current Trends

| Df [mm] | 1st trial | 2nd trial | 3rd trial | Average of 4th–10th trials |
|---|---|---|---|---|
| 10 | 2 | 2 | 2 | 2 |
| 15 | 2 | 2 | 2 | 2 |
| 20 | 2 | 2 | 2 | 2 |
| 25 | 2 | 2 | 2 | 2 |
| 30 | 2 | 2 | 2 | 2 |
| 50 | 4 | 2 | 2 | 2 |

Table 5. The number of motor rotations for each value of Df in Phase 2

| Relative distance [mm] | Df [mm] | 1st trial | 2nd trial | 3rd trial | Average of 4th–10th trials |
|---|---|---|---|---|---|
| 10 | 30 | 2 | 2 | 2 | 2 |
| 10 | 50 | 2 | 2 | 2 | 2 |
| 15 | 30 | 2 | 2 | 2 | 2 |
| 15 | 50 | 2 | 2 | 2 | 2 |
| 20 | 30 | 2 | 2 | 2 | 2 |
| 20 | 50 | 2 | 2 | 2 | 2 |
| 25 | 30 | 2 | 2 | 2 | 2 |
| 25 | 50 | 2 | 2 | 2 | 2 |
| 30 | 30 | 2 | 2 | 2 | 2 |
| 30 | 50 | 3 | 3 | 3 | 3 |

Table 6. The number of motor rotations for each value of Df in Phase 4





**9**

## **Influence of the Size Factor of a Mobile Robot Moving Toward a Human on Subjective Acceptable Distance**

Yutaka Hiroi¹ and Akinori Ito²
*¹Osaka Institute of Technology, ²Tohoku University, Japan*

### **1. Introduction**



Service robots working around humans are expected to become widespread in the next decade. There have been numerous works for developing autonomous mobile robots, starting as early as the 1980s. For example, Crowley developed the Intelligent Mobile Platform (IMP) which moved around a known domain according to given commands (Crowley, 1985). The issue in the earlier works was how to navigate a robot in a room. HelpMate (Evans et al., 1989) was a mobile platform intended to be used in hospitals for carrying medical records, meal trays, medications, etc. In the 1990s, robots were developed which were equipped with manipulators and executed tasks such as moving objects. Bischoff (1997) developed a mobile robot called HERMES, which is an upper-body humanoid equipped with two arms with hands and an omni-directional vehicle. HERMES recognizes objects around it using stereo vision, and executes tasks such as moving an object from one place to another. Recently, service robots that can execute more complicated tasks using three-dimensional distance sensors and more powerful actuators have been actively developed (Borst et al., 2009; Graf et al., 2009; Droeschel et al., 2011). Along with the development of such service robots, service robot contests have been held such as RoboCup@Home League (RoboCup Federation, 2011), in which mobile service robots compete for accuracy, robustness and safety of task execution in home-like environments. We have also developed an experimental care service robot called IRIS (Hiroi et al., 2003). This robot understood a patient's commands through spoken dialogue and face recognition, and performed several care tasks such as carrying bottles or opening/closing curtains in a real environment. Another feature of IRIS was its safety; IRIS was equipped with various devices for physical safety, such as arms with torque limiters (Jeong et al., 2004).

Safety is the most important issue for this kind of robot, and there have been many studies on keeping a robot safe for humans. Here, we consider two kinds of "safety." The first one is the physical safety of avoiding collisions between a robot and humans; physical safety is the most important requirement for a mobile robot working around humans. The other is mental safety, which means ensuring that the robot does not frighten people around it. Mental safety is as important as physical safety; if a robot's appearance or behavior is frightening, it will not be accepted by people even if it is physically harmless.


There has been much research on improving the physical safety of robots. For example, sensors are commonly used for avoiding collisions with humans (Prassler et al., 2002; Burgard, 1998), and shock absorbers are deployed around a robot to reduce the risk of injury in case of a collision with a human (Jeong et al., 2005). Heinzman and Zelinsky (2003) proposed a scheme that restricts the torque of a manipulator to a pre-defined limit for safety against collision. As mentioned above, IRIS had a similar kind of torque limiter (Jeong, 2004). Furthermore, a method for evaluating the physical safety of a robot has been proposed (Ikuta et al., 2003).
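The torque-restriction idea mentioned above can be illustrated with a simple clamp. This is a minimal sketch of the general technique, not the implementation used in IRIS or by Heinzman and Zelinsky, and the 5.0 N·m limit is an arbitrary example value.

```python
def limit_torque(commanded_nm: float, max_nm: float = 5.0) -> float:
    """Clamp a commanded joint torque to a pre-defined safety limit.

    A minimal illustration of torque limiting for collision safety;
    the 5.0 N*m default is an arbitrary example, not a value from IRIS.
    """
    return max(-max_nm, min(max_nm, commanded_nm))

print(limit_torque(12.3))  # clamped to 5.0
print(limit_torque(-8.0))  # clamped to -5.0
print(limit_torque(2.5))   # within the limit, passes through unchanged
```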

Compared with physical safety, there have been few studies on improving mental safety. The purpose of the present work was to investigate the relationship between a robot's physical properties—especially the size of the robot—and the psychological threat that humans feel from the robot.

### **2. Mental safety of mobile robots**

In this section, we briefly review previous works that investigated issues related to the mental safety of robots, and describe the objective of our work.

#### **2.1 Previous works**

Ikeura et al. (1995) investigated the human response to an approaching mobile robot through subjective tests as well as objective analysis using skin resistance. They used a small robot (250 × 180 × 170 mm) moving on a desk. The robot was set at a distance of 700 mm from the subject, and moved along rails toward the seated subject at various velocities and accelerations. The robot approached to a distance of 400 mm from the subject. A subjective evaluation suggested that humans fear the robot's velocity, while they are surprised by its acceleration. Ikeura et al.'s work is interesting, but their robot was too small to generalize their conclusion to real service robots.

Nakashima and Sato (1999) investigated the relationship between a mobile robot's velocity and anxiety. They used HelpMate (Evans et al., 1989) as a mobile robot, and measured the distance between the robot and subject at which the subject did not feel anxiety or threat when the robot moved toward the subject. They changed the velocity with which the robot moved toward the subject, and investigated the relationship between the velocity and the distance. They used 21 university students aged from 22 to 28 as subjects, and five velocities of 0.2, 0.4, 0.6, 0.8 and 1.0 m/s. They examined two postures of the subject: standing and seated. The experimental results showed that the distance was proportional to the velocity, and that the distance was longer when the subject was seated.
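Their finding that the stopping distance grows proportionally with velocity amounts to fitting a line through velocity-distance pairs. The sketch below shows the computation with an ordinary least-squares fit; the data points are hypothetical placeholders chosen only to illustrate the method, not Nakashima and Sato's measurements.

```python
def fit_line(velocities, distances):
    """Ordinary least-squares fit of distance = a * velocity + b."""
    n = len(velocities)
    mx = sum(velocities) / n
    my = sum(distances) / n
    sxx = sum((x - mx) ** 2 for x in velocities)
    sxy = sum((x - mx) * (y - my) for x, y in zip(velocities, distances))
    a = sxy / sxx          # slope: how fast distance grows with velocity
    b = my - a * mx        # intercept
    return a, b

# Hypothetical readings at the five velocities they examined (m/s -> m);
# these numbers are illustrative only, not data from their study.
v = [0.2, 0.4, 0.6, 0.8, 1.0]
d = [0.5, 0.8, 1.1, 1.4, 1.7]
a, b = fit_line(v, d)
print(a, b)  # slope ~1.5, intercept ~0.2 for these made-up points
```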

Walters et al. (2005) carried out an experiment similar to that of Nakashima and Sato, using a mobile robot called PeopleBot. They discussed the effect of personal factors, such as gender, on the impression of the robot. As these studies used commercially available robots, they could not change the size of the robot.

#### **2.2 Size does matter**

Factors of a robot other than velocity also affect the psychological threat to humans around it. The size of a robot seems to have a great psychological effect. The size of a robot is determined by its width, depth and height. When a robot is approaching a subject from in front of the subject, the width and height are the factors that ought to be considered. In this chapter, we consider only the height of a robot, because we cannot vary the width greatly due to stability restrictions (a thin shape makes the robot unstable) and environmental restrictions (a very wide robot cannot pass through a door). Thus, we define the height of a robot as the "robot size." The heights of robots used in conventional experiments have been around 1200 mm.

In this study, we investigated the psychological effects by varying the size of a robot. Although other factors such as the robot's color or materials also affect the impression of the robot, we assume that the effects of those factors are independent of the effects of the robot's size. Next, we define "subjective acceptable distance" as the minimum distance at which a subject does not feel any anxiety or threat. The concept of subjective acceptable distance is identical to that measured by Nakashima and Sato (1999). They defined this distance as "personal space" (Sommer, 1959). However, we decided to avoid the word "personal space" and used "subjective acceptable distance" instead because personal space seems to be a much broader concept compared with the distance we are trying to measure.

We measured subjective acceptable distances using robots of various sizes in order to investigate the relationship between robot size and subjective acceptable distance. Next, we determined whether or not changing the size of a robot affects the anxiety or threat perceived by a subject. We also asked the subjects to answer questionnaires to investigate differences in impression on the robots of different sizes.

### **3. Experimental conditions**

### **3.1 Robot size**



To decide the sizes of robots to be examined in the experiment, we considered the sizes of existing robots. Robots around 1200 mm tall are used in many works, such as the general-purpose mobile humanoid Robovie (Ishiguro et al., 2001), a mobile robot for hospital work HOSPI (Sakai et al., 2005) and a mobile robot for health care (Kouno & Kanda, 1998). As a small robot, the assistive mobile robot AMOS was 700 mm tall (Takahashi et al., 2004). AMOS is not a humanoid but a cubic-shaped vehicle with a manipulator and camera. As a large robot, HERMES was 1850 mm tall (Bischoff, 1997). A robot smaller than AMOS could not easily carry objects in an office, for example, while a robot larger than HERMES would have difficulty in moving through a door. We therefore decided to examine three sizes around 1200 mm: 600, 1200 and 1800 mm.

### **3.2 Velocity of the robot**

Next, we decided the velocity of the robots in the experiment. Nakashima and Sato (1999) examined five velocities in their experiment: 200, 400, 600, 800 and 1000 mm/s. They concluded that 800 and 1000 mm/s were too fast and caused great anxiety to the subjects. On the other hand, a velocity as slow as 200 mm/s caused no anxiety at all for some subjects. Considering their results, we set the velocity of our robot to 400 mm/s, which was an intermediate level in Nakashima's experiment.
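As a quick sanity check on the chosen speed, the worst-case approach time for each candidate velocity is easy to tabulate (the robot starts 3 m from the subject in our setup, as described in Section 4). This snippet is illustrative arithmetic, not part of the experimental software.

```python
def approach_time_s(start_distance_mm: float, velocity_mm_s: float) -> float:
    """Worst-case time for the robot to cover the full start distance."""
    return start_distance_mm / velocity_mm_s

# The five velocities Nakashima and Sato examined (mm/s), at a 3 m start
for v in (200, 400, 600, 800, 1000):
    print(f"{v} mm/s -> {approach_time_s(3000, v):.1f} s")
# At the chosen 400 mm/s, the full 3 m approach takes 7.5 s.
```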

### **3.3 Posture of the subjects**

During experiments, subjects can either stand or sit on a chair. Nakashima et al. (1999) reported that the subjective acceptable distance became larger when the subject was seated. To investigate the relationship between this effect and the robot size, we conducted our experiment under both conditions: with the subject standing and with the subject seated.
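Sections 3.1-3.3 together define six experimental conditions (three robot sizes × two postures). One way to sketch the per-subject random ordering used to cancel out order effects (described in Section 4) is shown below; this scheduling code is illustrative, not the authors' actual software.

```python
import random

SIZES_MM = (600, 1200, 1800)       # robot heights (Section 3.1)
POSTURES = ("standing", "seated")  # subject postures (Section 3.3)

def condition_order(seed=None):
    """Return the six (size, posture) conditions in a random order."""
    conditions = [(s, p) for s in SIZES_MM for p in POSTURES]
    random.Random(seed).shuffle(conditions)
    return conditions

order = condition_order(seed=0)
print(len(order))       # 6 conditions per subject
print(len(set(order)))  # each condition appears exactly once
```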

### **4. Experimental setup**

Figure 1 shows the base of the robots used in the experiments. The base included two driving wheels and two castors, and was 450 mm wide, 390 mm deep and 250 mm high, and weighed 15.0 kg. The body of the robot could be changed by replacing the aluminum frame on its base. A sheet of white paper was glued to the front of the frame so that the robot looked like a white parallelepiped. We prepared three frames, 600 mm, 1200 mm and 1800 mm in height, as shown in Fig. 2.

Fig. 1. Overview of the base of the mobile robot

Fig. 2. External view of the robots (600 mm, 1200 mm and 1800 mm)

Nineteen male subjects aged from 19 to 22 years old participated in the experiment. The mobile robot was first positioned at 3 m from the nearest part of the subject, as shown in Fig. 3. The subject started and stopped the robot using a push switch. After starting the robot to move toward himself, he stopped the robot when he did not want the robot to move any nearer toward him.

Fig. 3. Arrangement of the robot and subject

We allowed the subjects to practice using the switch to ensure the safety of the experiment. Before the experiment, we gave all the subjects the following instructions:

*Just after pushing the switch, the robot will immediately start to move toward you from a distance of 3 m at a speed of 400 mm/s. If you feel any anxiety or fear and do not want the robot to come any nearer, please push the switch again to stop the robot immediately. If you feel the distance between you and the halted robot is nearer or further than the distance you intended, please let us know. In case of emergency such as if the robot does not stop, please avoid the robot by yourself. This experiment will be conducted in two postures, seated and standing, using three robots. Please keep looking at the robot throughout the experiment. After the experiment, we will ask you to fill in a questionnaire.*

We randomized the order of the experiment (robot size and posture) to cancel out the order effect. If a subject reported that the distance was different from his intention, the experiment was repeated. The measurement was conducted only once for each condition, except in case of measurement failure. Nakashima and Sato (1999) measured the subjective acceptable distances many times for the same condition, and reported that the variance of distances obtained by multiple measurements was sufficiently smaller than the change caused by other factors. In view of their result, we decided that we did not need to conduct multiple measurements for one condition.

As a result, no subject asked to measure the distance again. There was no operation accident involving the switch, and no collision between the robot and the subject either. The robot remained in good order throughout the experiment. Therefore, the measurement was done just once for one subject and one condition.

Fig. 4. Definition of distances between robot and subject

After stopping the robot, we measured two distances between the robot and the subject, as shown in Fig. 4. L1 is the distance between the front of the robot and the seated subject's eyes, and L2 is that between the front of the robot and the toes of the subject.

After the experiment, we asked the subjects to answer the following questions:

- Sort the six conditions (three robot sizes by two postures) in order of anxiety.
- Did you feel any differences between the two postures (standing and seated)? If you did, please describe them.
- Other suggestions (if any).


Influence of the Size Factor of a Mobile Robot

subject was seated.

posture

These results can be analyzed as follows.

**5.3 Questionnaire results** 

to E, as shown in Table 1.

**5.2 Effect of posture on subjective acceptable distance** 

Moving Toward a Human on Subjective Acceptable Distance 183

If this conjecture is correct, the subjective acceptable distance does not increase for robots larger than 1800 mm even when the subject is standing. However, a robot taller than 1800 mm is not suitable for working in a typical environment such as a home, office or hospital, because

Nakashima et al. (1999) reported that the subjective acceptable distance was larger when subjects were seated than when standing. To confirm this relationship, we conducted a paired t-test to compare L1 and L2 for each robot size. As a result, we observed significant differences between L1 and L2 for all robot sizes (p<0.001 for 600 and 1200 mm, p<0.01 for 1800 mm). This result supports Nakashima's conclusion that the distance was larger when a

The results of the questionnaires are summarized in Fig. 6. The x-axis is the robot size, and the y-axis denotes the score of the condition. Each condition is denoted as a symbol from A

Fig. 6. Comparison of perceived psychological threat between seated posture and standing

We conducted Friedman's test upon the result, and obtained a significant difference between conditions (p<0.001). Therefore, the anxiety felt by the subjects differed from condition to condition. Next, we conducted a Steel-Dwass multiple comparison test to investigate if there

were differences in anxiety between two specific conditions. Table 2 shows the results.

3. When subjects were seated, subjects felt more anxiety for the larger robot.

1. Subjects felt the maximum anxiety for the 1800 mm robot regardless of their posture. 2. Different postures did not affect anxiety for the 600 mm robot. When the robot was larger than 600 mm, seated subjects felt more anxiety than when they were standing.

Compared with the results shown in Fig. 5, result 1 is consistent with the subjective acceptable distance. However, when the subject was seated, the subjective acceptable distances for the 1200 mm and 1800 mm robots were not different, whereas the anxiety was

it cannot go through a door. Therefore, we did not consider robots taller than 1800 mm.

### Other suggestions (if any)

After a subject answered the questionnaire, we assigned scores to the conditions according to the order the subject gave. For example, if a subject answered that he felt the greatest anxiety for the (1800 mm, standing) condition, we gave a score of "6" to that condition (the larger the score, the more frightening the condition). Then we summed up the scores for a condition given by all subjects to calculate the final score for that condition.

### **5. Experimental results and discussion**

### **5.1 Subjective acceptable distance and subjects' posture**

Figure 5 shows the average subjective acceptable distances (L1 and L2) with respect to the three robot sizes. From the figure, L2 seems to change according to robot size. However, L1 for 1200 mm and 1800 mm does not look different. To validate these data, we conducted ANOVA using a randomized block design for both L1 and L2 to determine whether the subjective acceptable distance was affected by robot size. The results showed significant differences in both L1 and L2 (p<0.001). Next, we conducted Dunnett's test to find out whether the subjective acceptable distances at 600 mm or 1800 mm were different from that at 1200 mm. The results showed significant differences for L2s of 600 mm and 1800 mm (p<0.05), and an L1 of 600 mm (p<0.01). However, there was no significant difference between L1s of 1200 mm and 1800 mm. These results suggest that the subjective acceptable distance is greater for larger robots when the subject is standing. However, the subjective acceptable distance does not increase when the robot is larger than 1200 mm and the subject is seated.

Fig. 5. Relationship between robot size and subjective acceptable distance (error bars show standard deviation)

The average height of a seated subject's eyes from the floor was 1186 mm, which was comparable with the medium robot size of 1200 mm. When watching an object higher than the observer's eyes, the object's height within the observer's view does not change with the distance between the observer and the object, which means that one of the important cues for distance perception is lost (Gary, 2002). As the effect of cues for perceiving distance is additive (Cutting & Vishton, 1995), losing one of the cues may affect the observer's perception of distance.
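The eye-height relation invoked here can be illustrated numerically. In this sketch the eye and robot heights come from the text, while the viewing distances and the function name are our own illustrative choices: for a robot below eye level, the declination angle from eye level to its top shrinks with distance (a usable distance cue), whereas the top of the 1800 mm robot stays above a standing observer's eye level, so this eye-height relation no longer yields the distance.

```python
import math

# Declination angle (degrees) from the horizontal at eye level down to the
# top edge of the robot; negative values mean the top is above eye level.

def declination_deg(eye_mm, robot_mm, dist_mm):
    return math.degrees(math.atan2(eye_mm - robot_mm, dist_mm))

eye_standing = 1601  # average standing eye height reported in the text
for d in (1000, 2000, 4000):
    small = declination_deg(eye_standing, 600, d)   # cue present: angle shrinks with distance
    tall = declination_deg(eye_standing, 1800, d)   # top above eye level: relation degenerates
    print(f"{d} mm: 600 mm robot {small:5.1f} deg, 1800 mm robot {tall:5.1f} deg")
```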

When a subject was standing, the average height of the eyes was 1601 mm, which was larger than the small and medium robot sizes (600 and 1200 mm). This fact might have caused the significant differences of acceptable distances when a subject was standing.

If this conjecture is correct, the subjective acceptable distance does not increase for robots larger than 1800 mm even when the subject is standing. However, a robot taller than 1800 mm is not suitable for working in a typical environment such as a home, office or hospital, because it cannot go through a door. Therefore, we did not consider robots taller than 1800 mm.

### **5.2 Effect of posture on subjective acceptable distance**

Nakashima et al. (1999) reported that the subjective acceptable distance was larger when subjects were seated than when standing. To confirm this relationship, we conducted a paired t-test to compare L1 and L2 for each robot size. As a result, we observed significant differences between L1 and L2 for all robot sizes (p<0.001 for 600 and 1200 mm, p<0.01 for 1800 mm). This result supports Nakashima's conclusion that the distance was larger when a subject was seated.
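The paired t statistic behind this comparison is simple enough to sketch directly; the seated/standing samples below are invented for illustration and are not the measured L1/L2 values.

```python
import math

# Paired t statistic for two equal-length, matched samples
# (here: each subject's seated vs. standing acceptable distance).

def paired_t(xs, ys):
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

seated   = [650, 700, 620, 680, 710]  # L1, mm (illustrative)
standing = [520, 560, 500, 540, 580]  # L2, mm (illustrative)
print(round(paired_t(seated, standing), 2))  # → 35.28
```

With n−1 = 4 degrees of freedom, the statistic would be compared against the t distribution to obtain the p-value reported in the text.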

### **5.3 Questionnaire results**

182 Mobile Robots – Current Trends

After a subject answered the questionnaire, we assigned scores to the conditions according to the order the subject gave. For example, if a subject answered that he felt the greatest anxiety for the (1800 mm, standing) condition, we gave a score of "6" to that condition (the larger the score, the more frightening the condition). Then we summed up the scores for a condition given by all subjects to calculate the final score for that condition.
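The scoring procedure just described amounts to summing rank scores per condition. A minimal sketch with two hypothetical subjects (the rankings are invented for illustration):

```python
# Each subject orders the six (size, posture) conditions from least to most
# frightening; the least frightening scores 1 and the most frightening 6.
# Per-condition totals are the sums of these scores over all subjects.

CONDITIONS = [(600, "seated"), (1200, "seated"), (1800, "seated"),
              (600, "standing"), (1200, "standing"), (1800, "standing")]

def total_scores(rankings):
    """rankings: per subject, conditions ordered least -> most frightening."""
    totals = {c: 0 for c in CONDITIONS}
    for order in rankings:
        for score, cond in enumerate(order, start=1):  # 1 = least anxiety
            totals[cond] += score
    return totals

subjects = [
    [(600, "seated"), (600, "standing"), (1200, "seated"),
     (1200, "standing"), (1800, "seated"), (1800, "standing")],
    [(600, "standing"), (600, "seated"), (1200, "seated"),
     (1800, "seated"), (1200, "standing"), (1800, "standing")],
]
print(total_scores(subjects)[(1800, "standing")])  # → 12 (ranked most frightening by both)
```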


The results of the questionnaires are summarized in Fig. 6. The x-axis is the robot size, and the y-axis denotes the score of the condition. Each condition is denoted by a symbol from A to F, as shown in Table 1.

Fig. 6. Comparison of perceived psychological threat between seated posture and standing posture

We conducted Friedman's test upon the result, and obtained a significant difference between conditions (p<0.001). Therefore, the anxiety felt by the subjects differed from condition to condition. Next, we conducted a Steel-Dwass multiple comparison test to investigate if there were differences in anxiety between two specific conditions. Table 2 shows the results. These results can be analyzed as follows.
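Friedman's test statistic can be sketched from its definition (rank each subject's scores across the conditions, then compare rank sums). The data below are invented and ties are ignored for simplicity.

```python
# Friedman chi-square statistic:
#   chi2 = 12 / (n * k * (k + 1)) * sum(R_j^2) - 3 * n * (k + 1),
# where R_j is the sum of within-subject ranks for condition j.

def friedman_chi2(table):
    """table[i][j] = score of subject i under condition j (no ties)."""
    n, k = len(table), len(table[0])
    rank_sums = [0.0] * k
    for row in table:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            rank_sums[j] += rank
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

rows = [[1, 2, 3, 4, 5, 6],
        [1, 2, 3, 4, 5, 6]]  # two subjects with identical orderings (illustrative)
print(friedman_chi2(rows))
```

The statistic is compared against the chi-square distribution with k−1 degrees of freedom; a post-hoc procedure such as Steel-Dwass then locates which condition pairs differ.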


Compared with the results shown in Fig. 5, result 1 is consistent with the subjective acceptable distance. However, when the subject was seated, the subjective acceptable distances for the 1200 mm and 1800 mm robots were not different, whereas the anxiety was larger for the 1800 mm robot. This result suggests that the subjective acceptable distance does not simply reflect the subject's anxiety about the robot.

Influence of the Size Factor of a Mobile Robot Moving Toward a Human on Subjective Acceptable Distance




| Condition | Robot size (mm) | Subject's posture |
|-----------|-----------------|-------------------|
| A         | 600             | Seated            |
| B         | 1200            | Seated            |
| C         | 1800            | Seated            |
| D         | 600             | Standing          |
| E         | 1200            | Standing          |
| F         | 1800            | Standing          |

Table 1. Symbols for the conditions


| Cond.   | Cond. B | Cond. C | Cond. D | Cond. E | Cond. F |
|---------|---------|---------|---------|---------|---------|
| Cond. A | p<0.05  | p<0.001 | NS      | NS      | p<0.001 |
| Cond. B |         | p<0.001 | p<0.05  | p<0.05  | p<0.01  |
| Cond. C |         |         | p<0.001 | p<0.001 | p<0.05  |
| Cond. D |         |         |         | NS      | p<0.001 |
| Cond. E |         |         |         |         | p<0.001 |

Table 2. Results of Steel-Dwass test (NS: not significant)

Next, let us consider result 2. In section 4.1, we observed that the subjective acceptable distance of a seated subject became longer than that of a standing subject regardless of the subject's posture. This result could be interpreted to mean that a subject feels more anxiety when standing; however, result 2 from the questionnaire suggests that there was no difference in anxiety between the two postures for the 600 mm robot. To investigate the reason for this inconsistency, we focus on the second question of the questionnaire that asks the subject to describe the difference in feeling between the two postures.

Nine out of the 19 subjects felt anxiety for the 600 mm robot. The comments given by those nine subjects are shown in Table 3; many of them pointed out the relationship between the robot and the subject's sight.


- It went out of sight when it got nearer
- I was anxious when the robot went out of my sight
- I felt anxious when it went out of sight
- When standing, I felt anxious because I couldn't see the robot
- The lower robot almost vanished from my sight and I felt anxious
- I felt it approached toward my feet
- I felt somewhat uncomfortable because it was below my sight
- I was more scared than when standing because I had to look down
- I didn't feel any threat, but it was uncomfortable to look down

Table 3. Descriptions of the reasons why the subjects felt more anxiety for the 600 mm robot when standing


Fig. 7. Effect of anxiety about small robot (600 mm) on the difference in perceived threat for different postures

Figure 7 shows the average score (order of anxiety; a larger score means more anxiety) for the 600 mm robot for the two postures. "Feel anxious" is the average score for the subjects who felt anxiety for the 600 mm robot (9 subjects), and "Do not feel anxious" is that for the others (10 subjects). The two groups show opposite tendencies across the two postures, so the overall average scores for the two postures end up similar (2.26 and 2.21). The existence of these two groups of subjects (those who were concerned when the robot went below their sight and those who did not care) seems to be the reason why there was no difference in score for the 600 mm robot.

Next, we investigate result 3. In section 4.1, we observed no difference between the subjective acceptable distances for the 1200 mm and 1800 mm robots when the subjects were seated. Result 3 looks inconsistent with this result. In fact, 13 out of the 19 subjects described anxiety for robots taller than their eye height. Therefore, we can conclude that the subjective acceptable distance does not become larger when the robot is taller than the subject's eye height, but it does not mean that the anxieties for larger robots are the same. The invariance of the subjective acceptable distance for larger robots is not because the larger robots cause the same impression of anxiety but is caused by the difficulty of estimating the distance to the robot.

In summary, the robot with 1800 mm height caused the most anxiety to the subjects and the subjective acceptable distance for that robot was the longest. In this case, note that a shorter subjective acceptable distance does not mean less anxiety. Thus, robots of 1800 mm height or more are not suitable for service robots working around humans. Humans will allow a smaller robot to get nearer, but some people feel anxiety for the robot's behavior when they are standing.

### **6. A method of reducing the threat and anxiety by using preliminary announcement**

### **6.1 Preliminary announcement of a robot's behavior**

The experimental results supported the conjecture that the subjective acceptable distance becomes smaller for a robot shorter than 1200 mm and vice versa. Therefore, we can design the size of a robot considering the assumed distance between the robot and users. However, in reality, it is not usually possible to change the size of a robot, particularly when using a commercial robot. In this section, we discuss ways of reducing a person's psychological threat and anxiety about the robot without changing the robot's size.



In the above experiment, a subject could stop the robot at will when he did not want the robot to come any nearer. However, in real situations where a robot is working around humans, a person by the robot cannot stop it even if he/she does not want the robot to move any closer. Therefore, the distance kept between the human and the robot can be larger than that measured in the experiment.

One reason for keeping at a distance from a robot is that it is difficult to predict the robot's motion. As a robot is an artificial being, it is not possible for a human to infer a robot's behavior from common sense among humans; we cannot predict when a robot will begin to move and stop.

One possible method for reducing the threat or anxiety about robots derived from this uncertainty of behavior is to explicitly announce what the robot will do next. If humans around the robot know the direction in which it will move, they will not be so anxious even when in close proximity.

Existing works on this concept include announcing the robot's velocity and moving direction using a laser beam (Matsumaru et al., 2006) or LCD projector (Matsumaru, 2006). In the former method, the robot draws the trajectory along which it is going to move using a laser beam pointer. The direction of the laser beam is controlled by a mirror, and the beam is projected on and swept over the floor. They evaluated the effect of such an announcement by questionnaires conducted in an exhibition hall. As a result, they reported that half of the respondents answered that they could easily understand the direction and velocity of the robot, and received the following comments:

- The shape drawn by the laser beam should be an arrow, rather than a line
- The method of showing the velocity could be improved
- Children might have difficulty understanding the meaning of the beam
- The method should be combined with another type of method, such as a speech-based method
- This method can be applied to industrial robots
As shown, Matsumaru et al. received many opinions on ways of improving the display method, even though they explained to the respondents the background and purpose of their research, proposed announcement method, and overview of the robot. A preliminary announcement must be easy to understand for someone looking at the robot for the first time. These opinions reveal that the laser-beam-based announcement method needs further improvement.

The method based on video projector (Matsumaru, 2006) projects icons such as an arrow, turning signs, "STOP" sign and "BACK" sign onto the floor. A video projector can present more information than a laser beam. The direction of the robot is presented by the direction of the arrow, and the velocity is expressed as the thickness and color of the arrow. When the robot is going to rotate, the color of the turning sign changes. When the robot is going to stop or go backward, the color of "STOP" or "BACK" is changed. Matsumaru evaluated the announcement method using the same criteria as used for evaluating the laser-beam method. The subjective evaluation showed that the projector method was easier to understand than the laser-beam method for both direction and velocity. The problems of the projector method are summarized as follows:

- As the projected image depends on both the lighting condition and the floor, it cannot be used in a bright environment or on ground that is not sufficiently flat; for example, the method is difficult to use outdoors
- As the information is projected in front of the robot, humans behind the robot cannot see the projected information
- Humans around the robot need to pay attention to the floor rather than the robot itself



Therefore, we need to develop a method that is not affected by the environmental conditions such as lighting or floor condition, is able to present the robot's behavior in all directions, and attracts human attention to the robot's body.

Fig. 8. Overview of the experimental robot avatar

Fig. 9. Motion of the robot avatar

### **6.2 Preliminary announcement using a robot avatar**

We propose a preliminary announcement method using a robot avatar. A robot avatar is a small robot mounted on a large robot for communication (Hiroi et al., 2005). An example of a robot avatar is shown in Fig. 8. The proposed method announces the (bigger) robot's behavior through the robot avatar's motion. There are many advantages of using a robot avatar for preliminary announcement of robot motion:

- The preliminary announcement can be intuitive
- The motion of the robot avatar can be prominent if the avatar is mounted at around a human's eye height
- Its visibility is more robust to environmental changes than the projection-based announcement method
- If the large robot were to make announcements, the motion could be dangerous because the robot's arm could collide with a human; a robot avatar is safer because it is smaller and lighter
- It is easy to make a robot avatar look friendly


Figure 8 shows the robot avatar we are developing. This robot avatar can swing its arms and change direction in order to make the announcement motion. As an announcement motion, the robot avatar swings its arms and turns toward the direction in which the larger robot is about to move (Fig. 9). We are verifying the effectiveness of this method through subjective evaluations.
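As a hypothetical sketch of this announcement motion (the function name, angles and swing schedule are our own illustration, not the authors' implementation), the avatar's command could be derived from the large robot's planned heading:

```python
# Before the large robot starts to move, turn the avatar toward the planned
# heading and schedule a few arm swings as the preliminary announcement.

def announcement(avatar_yaw_deg, planned_heading_deg, swing_cycles=2):
    # shortest signed turn (degrees) from the avatar's current yaw
    turn = (planned_heading_deg - avatar_yaw_deg + 180.0) % 360.0 - 180.0
    # alternate arms up/down before the base begins to move
    swings = ["arms_up" if i % 2 == 0 else "arms_down" for i in range(2 * swing_cycles)]
    return turn, swings

turn, swings = announcement(avatar_yaw_deg=10, planned_heading_deg=-120)
print(turn)          # → -130.0 (shortest rotation, in degrees)
print(len(swings))   # → 4
```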

### **7. Conclusions**

We investigated the effect of robot size on subjective acceptable distance using three robots of different sizes. The results showed that the acceptable distance becomes smaller when the robot is smaller than 1200 mm, and vice versa. However, the relationship between robot size and distance was nonlinear, and the acceptable distance saturated when a subject was seated. One possible reason for this saturation could be a relationship between a robot's height and the eye height of the subject. This experiment showed a mutual effect between robot size and posture of a subject. Therefore, the size of a robot should be designed considering the likely posture of robot users.

We also conducted a survey on the impression of the robots, and found that robots taller than 1800 mm caused too much anxiety to be used safely in practice.

This work is the first investigation of the relationship between robot size and its effect on human impression, considering the real use of service robots. The results of this work will be useful for designing actual robots working around humans.

The experimental results obtained for square-shaped robots can be directly applied to mobile carriers (Kouno & Kanda, 1998; Sakai et al., 2005). A smaller robot is desired if we wish to minimize the acceptable distance between a robot and a human; however, the robot should be larger for carrying larger payloads. Considering these conditions, a robot size of 1200 mm could be the best among the examined three robot sizes.

In future, we plan to evaluate various factors affecting the impression of a robot, including the age and gender of subjects (Walters et al., 2005; Mutlu et al., 2006), human factors such as knowledge and rapport with the robot (Nomura et al., 2007), as well as the robot's color (Goetz et al., 2003) and velocity patterns (Ikeura et al., 1995; Nakashima & Sato, 1999). The effectiveness of using a robot avatar for preliminary announcement should also be tested.

#### **8. References**


Bischoff, R. (1997). HERMES – A Humanoid Mobile Manipulator for Service Tasks. *Proceedings of International Conference on Field and Service Robotics*, Canberra, pp. 508-515

Crowley, J. L. (1985). Navigation for an Intelligent Mobile Robot. *IEEE Journal of Robotics and Automation*, Vol. 1, No. 1, pp. 31-41

Cutting, J. E., & Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency, and contextual use of different information about depth. In: *Handbook of Perception and Cognition, Vol. 5: Perception of Space and Motion*, W. Epstein & S. Rogers (eds.), pp. 69-117, Academic Press, San Diego, CA

Evans, J., Krishnamurthy, B., Ponga, W., Croston, R., Weiman, C., & Engelberger, G. (1989). HelpMate™: A robotic materials transport system. *Robotics and Autonomous Systems*, Vol. 5, No. 3, pp. 251-256

Gary, H. (2002). Perception as Unconscious Inference. In: *Perception and the Physical World: Psychological and Philosophical Issues in Perception*, D. Heyer & R. Mausfeld (eds.), pp. 115-143, Wiley, New York

Goetz, J., Kiesler, S., & Powers, A. (2003). Matching robot appearance and behaviors to tasks to improve human-robot cooperation. *Proceedings of the 12th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN 2003)*, USA, pp. 55-60

Heinzman, J., & Zelinsky, A. (2003). Quantitative Safety Guarantees for Physical Human–Robot Interaction. *International Journal of Robotics Research*, Vol. 22, No. 7–8, pp. 479-504

Hiroi, Y., Nakano, E., Takahashi, T., Makino, S., Ito, A., Kotani, K., Takatsu, N., & Ohmi, T. (2003). A Patient Care Service Robot System Based on a State Transition Architecture. *Proceedings of the 2nd International Conference on Mechatronics and Information Technology*, China, pp. 231-236

Hiroi, Y., Nakano, E., Takahashi, T., Ito, A., Kotani, K., & Takatsu, N. (2005). A New Design Concept of Robotic Interface for the Improvement of User Familiarity. *Proceedings of SPIE*, Vol. 6042, doi:10.1117/12.664685

Ikeura, R., Otsuka, H., & Inooka, H. (1995). Study on emotional evaluation of robot motions based on galvanic skin reflex. *The Japanese Journal of Ergonomics*, Vol. 31, No. 5, pp. 355-358

Ikuta, K., Ishii, H., & Nokata, M. (2003). Safety Evaluation Method of Design and Control for Human-Care Robots. *International Journal of Robotics Research*, Vol. 22, No. 5, pp. 281-297

Ishiguro, H., Ono, T., Imai, M., Maeda, T., Kanda, T., & Nakatsu, R. (2001). Robovie: an interactive humanoid robot. *International Journal of Industrial Robotics*, Vol. 28, No. 6, pp. 498-503

Jeong, S. H., Takahashi, T., & Nakano, E. (2004). A safety service manipulator system: the reduction of harmful force by a controllable torque limiter. *Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2004)*, Japan, pp. 162-167

Jeong, S. H., Takahashi, T., Shoji, M., & Nakano, E. (2005). Harmful Force Reduction of a Manipulator by Using a Collision Detecting System with a Shock Absorbing Function. *Journal of the Robotics Society of Japan*, Vol. 23, No. 8, pp. 31-38

Kouno, T., & Kanda, S. (1998). Robot for Carrying Food Trays to the Aged and Disabled. *Journal of the Robotics Society of Japan*, Vol. 16, No. 3, pp. 317-320

Matsumaru, T. (2006). Mobile Robot with Preliminary-Announcement and Display Function of Following Motion using Projection Equipment. *Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006)*, United Kingdom, pp. 443-450

Matsumaru, T., Kusada, T., & Iwase, K. (2006). Mobile Robot with Preliminary-Announcement Function of Following Motion using Light-ray. *Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2006)*, China, pp. 1516-1523

Mutlu, B., Osman, S., Forlizzi, J., Hodgins, J., & Kiesler, S. (2006). Task Structure and User Attributes as Elements of Human-Robot Interaction Design. *Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2006)*, United Kingdom, pp. 74-79

Nakashima, K., & Sato, H. (1999). Personal distance against mobile robot. *The Japanese Journal of Ergonomics*, Vol. 35, No. 2, pp. 87-95

Nomura, T., Shintani, T., Fujii, K., & Hokabe, K. (2007). Experimental Investigation of Relationships between Anxiety, Negative Attitudes, and Allowable Distance of Robots. *Proceedings of the Second IASTED International Conference on Human-Computer Interaction*, France, pp. 13-18

Prassler, E., Bank, D., & Kluge, B. (2002). Key Technologies in Robot Assistants: Motion Coordination Between a Human and a Mobile Robot. *Transactions on Control, Automation and Systems Engineering*, Vol. 4, No. 1, pp. 56-61

RoboCup Federation. (2011). RoboCup @Home. Web available: http://www.robocup.org/robocup-home/

Sakai, T., Nakajima, H., Nishimura, D., Uematsu, H., & Kitano, Y. (2005). Autonomous Mobile Robot System for Delivery in Hospital. *Technical Report of Matsushita Electric Works*, Vol. 53, No. 2, pp. 62-67

Sommer, R. (1959). Studies in Personal Space. *Sociometry*, Vol. 22, No. 3, pp. 247-260

Takahashi, Y., Komeda, T., & Koyama, H. (2004). Development of the assistive mobile robot system: AMOS – to aid in the daily life of the physically handicapped. *Advanced Robotics*, Vol. 18, No. 5, pp. 473-496

Walters, M. L., Dautenhahn, K., Boekhorst, R. te, Koay, K. L., Kaouri, C., Woods, S., Nehaniv, C., Lee, D., & Werry, I. (2005). The influence of subjects' personality traits on personal spatial zones in a human-robot interaction experiment. *Proceedings of the 14th IEEE International Workshop on Robots and Human Interactive Communication (RO-MAN 2005)*, USA, pp. 347-352


## **Part 3**

**Hardware – State of the Art**


### **Development of Mobile Robot Based on I<sup>2</sup>C Bus System**

Surachai Panich
*Srinakharinwirot University, Thailand* 

### **1. Introduction**

Mobile robots are widely researched for many applications, and almost every major university has a lab working on mobile robot research. Mobile robots are also found in industrial, military and security environments, and they appear as consumer products for entertainment and domestic services. They are most commonly wheeled, but legged robots are available for many applications as well. Mobile robots have the ability to move around in their environment. An autonomously guided mobile robot uses information about its current location, obtained from sensors, to reach its goals. The current position of a mobile robot can be estimated using sensors such as motor encoders, vision and stereopsis, lasers and global positioning systems. Engineering and computer science are core elements of mobile robotics, obviously, but when questions of intelligent behavior arise, artificial intelligence, cognitive science, psychology and philosophy offer hypotheses and answers. Analysis of system components, for example through error calculations and statistical evaluations, is the domain of mathematics, and regarding the analysis of whole systems, physics proposes explanations, for example through chaos theory.

This book chapter focuses on the mobile robot system. Building a working mobile robot generally requires knowledge of electronic, electrical, mechanical, and computer software technology. In this chapter all aspects of the mobile robot are explained in depth, such as software and hardware design and techniques of data communication.

### **2. History of mobile robots (Wikipedia, 2011)**

During World War II, the first mobile robots emerged as a result of technical advances in a number of relatively new research fields such as computer science and cybernetics. W. Grey Walter constructed Elmer and Elsie, which were equipped with a light sensor: if they found a light source, they would move towards it, avoiding or moving obstacles on the way. The Johns Hopkins University developed a mobile robot named Beast, which used sonar to move around. Mowbot was the first mobile robot to mow a lawn automatically. The Stanford Cart was a line-following mobile robot that could track a white line using a camera; it was later developed to navigate its way through obstacle courses and to make maps of its environment. The Stanford Research Institute worked on the mobile robot Shakey, which had a camera, a rangefinder, bump sensors and a radio link. The Soviet Union explored the surface of the Moon with Lunokhod 1, a lunar rover, as shown in Fig.1.

Fig. 1. A model of the Soviet Lunokhod-1 Moon rover released by the Science Photo Library

The team of Ernst Dickmanns at Bundeswehr University Munich built the first robot cars, driving up to 55 mph on empty streets. The Hughes Research Laboratories demonstrated the first cross-country map- and sensor-based autonomous robotic vehicle. Mark Tilden invented BEAM robotics. Joseph Engelberger worked with colleagues to design the first commercially available autonomous mobile hospital robot, named Helpmate, as shown in Fig.2. The US Department of Defense funded the MDARS-I project for the Cyber-motion indoor security robot. Edo Franzi, André Guignard and Francesco Mondada developed Khepera, a small autonomous mobile robot. Dante I and Dante II, walking robots used to explore live volcanoes, were developed by Carnegie Mellon University; Dante II is shown in Fig.3.

Fig. 2. Helpmate, autonomous mobile hospital robots

Fig. 3. Dante II developed by Carnegie Mellon University

The twin robot vehicles VaMP and VITA-2 of Daimler-Benz and Ernst Dickmanns of UniBwM drove more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h.

Semi-autonomous ALVINN steered a car coast-to-coast under computer control for all but about 50 of the 2850 miles; throttle and brakes, however, were controlled by a human driver. The Pioneer programmable mobile robot became commercially available at an affordable price, enabling a widespread increase in robotics research and university study over the next decade as mobile robotics became a standard part of the university curriculum. NASA sent the Mars Pathfinder with its rover Sojourner to Mars, as shown in Fig.4. The rover explored the surface, commanded from Earth, and was equipped with a hazard avoidance system. Sony introduced Aibo, shown in Fig.5, a robotic dog capable of seeing, walking and interacting with its environment.

Fig. 4. Sojourner Rover, NASA

Fig. 5. Aibo, Sony

The PackBot remote-controlled military mobile robot was then introduced, as shown in Fig.6. The current base model of PackBot uses a videogame-style hand controller to make control easy. PackBot is designed for improvised explosive device identification and disposal, for infantry troops tasked with improvised explosive device inspection, and to help SWAT teams and other first responders with situational awareness.

Fig. 6. PackBot demonstrated by the French military

Swarm bots resemble insect colonies, as shown in Fig.7. Typically they consist of a large number of individual simple robots that can interact with each other and together perform complex tasks.

Fig. 7. Swarm bots: insect colony behavior

Robosapien, a biomorphic, commercially available robot, was designed by Mark Tilden, as shown in Fig.8. Autonomous robots also began to work together to make a map of an unknown environment and to search for objects within the environment.

Fig. 8. Robosapien designed by Mark Tilden

Sony introduced a lower-cost autonomous service robot system named PatrolBot, as shown in Fig. 9, and the mobile robot became a continuing commercial product.

Fig. 9. PatrolBot introduced by Sony


The Tug, shown in Fig.10, became a popular means for hospitals to move large cabinets of stock from place to place and to carry blood and other patient samples from nurse stations to various labs.

Fig. 10. The Tug, Aethon's Automated Robotic Delivery System

Fig. 11. BigDog developed by Boston Dynamics

Boston Dynamics released video footage of a new-generation BigDog, shown in Fig.11, able to walk on icy terrain and to recover its balance when kicked from the side.

### **3. Related work**

#### **3.1 Development of a mobile robot system to aid the daily life of the physically handicapped (interface using an internet browser)**

Yoshiyuki Takahashi et al. from the Shibaura Institute of Technology developed a mobile robot system (Takahashi et al., 1998) that could bring objects of daily use placed somewhere in the room, under semi-automatic control, as shown in Fig.12.


Fig. 12. Mobile robot system developed by Shibaura institute of technology

It is necessary to use a graphical and interactive interface to operate the robot, because the operator of this robot is physically handicapped and is not always an engineer or a professional with robotics knowledge. The system structure is shown in Fig.13.

Fig. 13. System structure of mobile robot

### **3.2 Communication framework for sensor-actuator data in mobile robots**

Fernandez et al. propose the robot architecture shown in Fig.14, in which several modules are interconnected by a CAN bus (Fernandez J., et al., 2007). Each module performs one specific task in the distributed architecture. The actuator and sensory modules execute basic control algorithms.


The communications protocol and the CAN master process are illustrated in Fig.14, which shows how the different slaves are connected to the master. The control system for all modules runs on a PC attached to the CAN bus through a CAN-USB adapter.

Fig. 14. Robot architecture of the CAN system, based on sensor, actuator and PC control module communications

The connection and disconnection of the different slaves, by loading the corresponding driver, is handled by the CAN server. The CAN server first registers and initializes a new module connection; for example, if a module with sonar sensors is connected, it is identified and its messages are handed over to the sonar module. When a new module is connected, the master sends information to configure the slave and to change the watchdog time. If the master does not receive the watchdog message of a slave for a period longer than a timeout, it assumes that the slave is disconnected or has an error, and the control programs interested in the slave data are notified.
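The watchdog logic described above can be sketched as follows. This is a minimal illustration only: `WatchdogMonitor`, the notification callback and the injectable clock are illustrative choices, not part of the cited CAN implementation.

```python
import time

class WatchdogMonitor:
    """Tracks the last watchdog message seen from each slave module.

    If a slave stays silent longer than `timeout`, it is treated as
    disconnected and the interested control programs are notified.
    """

    def __init__(self, timeout, notify, clock=time.monotonic):
        self.timeout = timeout    # seconds of allowed silence
        self.notify = notify      # callback(slave_id) invoked on loss
        self.clock = clock        # injectable clock, eases testing
        self.last_seen = {}       # slave_id -> timestamp of last message

    def heartbeat(self, slave_id):
        """Record a watchdog message (also registers new slaves)."""
        self.last_seen[slave_id] = self.clock()

    def check(self):
        """Return slaves considered lost, notifying about each one."""
        now = self.clock()
        lost = [sid for sid, t in self.last_seen.items()
                if now - t > self.timeout]
        for sid in lost:
            del self.last_seen[sid]   # drop the disconnected slave
            self.notify(sid)
        return lost
```

In a real system `check()` would be called periodically by the master process, and `notify` would forward the event to whichever control programs subscribed to that slave's data.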

#### **3.3 SMARbot: A miniature mobile robot paradigm for ubiquitous computing**

Yan Meng from the Stevens Institute of Technology, Hoboken, introduced the SMARbot paradigm (Meng Yan, et al., 2007). Software reconfiguration runs on a microprocessor, and a core component for hardware reconfiguration is implemented in an FPGA. Multiple sensors and actuators with corresponding device drivers and signal-processing modules sit in the sensor or actuator layer. Each control module consists of one or more input ports, one or more output ports, and any number of other connections. The functionality of each module is implemented so as to provide automatic integration of the control modules; the information flow, communication and synchronization should be handled automatically by the operating system.

#### **4. The I<sup>2</sup>C bus system overview**


I2C is shorthand for the standard Inter-IC (Integrated Circuit) bus, which provides good support for communication with various peripheral devices (Philips Semiconductor, 2000). It is a simple, low-bandwidth, short-distance protocol. There is no need for chip-select or arbitration logic, making it cheap and simple to implement in hardware. Most I2C bus devices operate at speeds up to 400 kbps. The I2C bus makes it easy to link multiple devices together, since it has a built-in addressing scheme. The I2C bus is a two-wire serial bus, as shown in Fig. 15; the two I2C signals are serial data (SDA) and serial clock (SCL).

Fig. 15. The I2C bus has only two lines in total

The two-wire serial bus supports serial transmission of eight-bit data bytes, with seven-bit device addresses plus control bits. The device that starts a transaction on the I2C bus is called the master; the master normally controls the clock signal. A device controlled and addressed by the master is called a slave. The I2C bus protocol supports multiple masters, but most system designs include only one. There may be one or more slaves on the bus, and both masters and slaves can receive and transmit data bytes. A slave device with I2C-compatible hardware is produced with a predefined device address, which may be configurable on the board.

Fig. 16. The I2C bus communication

The master must send the device address of the slave at the beginning of every transaction. Each slave is responsible for monitoring the bus and responding only to its own address. As shown in Fig. 16, the master begins a communication by issuing the start condition, and continues by sending the seven-bit slave device address, most significant bit first.


The eighth bit (read or write bit) after the start bit specifies whether the slave is now to receive or to transmit information. This is followed by an ACK bit issued by the receiver, acknowledging receipt of the previous byte. Then the transmitter (slave or master, as indicated by the bit) transmits a byte of data starting with the MSB. At the end of the byte, the receiver (whether master or slave) issues a new ACK bit. This 9-bit pattern is repeated if more bytes need to be transmitted. In a write transaction (slave receiving), when the master is done transmitting all of the data bytes it wants to send, it monitors the last ACK and then issues the stop condition. In a read transaction (slave transmitting), the master does not acknowledge the final byte it receives. This tells the slave that its transmission is done. The master then issues the stop condition.
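The byte-level handshake described above can be modeled as a short simulation. This is a sketch only: the symbolic markers `'S'`, `'A'` and `'P'` stand in for the electrical start, acknowledge and stop conditions, and the function name is illustrative.

```python
def i2c_write_frame(address, data_bytes):
    """Model the logical bit sequence of an I2C write transaction.

    Returns a list of symbolic events: 'S' (start condition), seven
    address bits MSB-first, the R/W bit (0 = write), 'A' (an ACK slot
    driven by the receiver), eight data bits per byte each followed by
    an ACK slot, and finally 'P' (stop condition).
    """
    if not 0 <= address < 128:
        raise ValueError("I2C addresses are 7 bits")
    frame = ["S"]
    # Seven address bits, most significant bit first, then R/W = 0.
    frame += [(address >> i) & 1 for i in range(6, -1, -1)]
    frame += [0, "A"]                      # write bit + slave ACK
    for byte in data_bytes:
        frame += [(byte >> i) & 1 for i in range(7, -1, -1)]
        frame += ["A"]                     # receiver ACKs each byte
    frame += ["P"]
    return frame
```

A read transaction differs only in the R/W bit and in who drives the data and ACK slots; per the text, the master skips the final ACK of a read to tell the slave the transfer is done before issuing the stop condition.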

#### **5. Development of mobile robot based on I<sup>2</sup>C bus system**

In this chapter, the system of the mobile robot named AMRO (Surachai, 2010c) is explained in depth as an example. This robot was developed by a student team from the Measurement and Mobile Robot Laboratory; its hardware is constructed and combined with the electronic components, together with the control program.

### **5.1 Hardware development for AMRO**

The mobile robot is designed around a differential drive system (Byoung-Suk Choi, 2009; Surachai et al., 2009), as shown in Fig. 17. The combination of two driven wheels allows the robot to be driven straight, in a curve, or to turn on the spot. The translation between driving commands, for example a curve of a given radius, and the corresponding wheel speeds is handled by software.

Fig. 17. The Autonomous Mobile Robot (AMRO), Measurement and Mobile robot laboratory
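The translation from a driving command to wheel speeds mentioned above can be sketched for a differential drive. The function names and the `track` parameter (distance between the driven wheels) are illustrative, not taken from the AMRO software.

```python
def wheel_speeds(v, radius, track):
    """Convert a driving command (centre speed v, curve radius) into
    left/right wheel speeds for a differential-drive robot.

    v:      speed of the robot centre (m/s)
    radius: signed curve radius (m); positive curves left,
            float('inf') drives straight
    track:  distance between the two driven wheels (m)
    """
    if radius == float("inf"):     # straight line: both wheels equal
        return v, v
    if radius == 0:                # spot turn is better expressed as a twist
        raise ValueError("use wheel_speeds_from_twist for spot turns")
    omega = v / radius             # angular velocity about the curve centre
    v_left = omega * (radius - track / 2)   # inner wheel on a left curve
    v_right = omega * (radius + track / 2)  # outer wheel on a left curve
    return v_left, v_right

def wheel_speeds_from_twist(v, omega, track):
    """Same conversion from (linear, angular) velocity; handles spot turns."""
    return v - omega * track / 2, v + omega * track / 2
```

For example, a 1 m/s command on a 1 m left curve with a 0.4 m track gives 0.8 m/s on the inner (left) wheel and 1.2 m/s on the outer (right) wheel.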


The AMRO is driven by two wheels and a caster, powered by an MD25 dual 5 A controller. The MD25 motor driver is designed for a 12 V battery and drives the two motors with independent or combined control, as shown in Fig. 18.

Fig. 18. The MD25 motor driver integrated on AMRO

It reads the motor encoders and provides counts for determining the distance traveled and the direction. The motor current is readable, and only 12 V is required to power the module. The on-board 5 V regulator can supply up to 1 A peak (300 mA continuous) to external circuitry. With the steering feature, the motors can be commanded to turn by a sent value.
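As an illustration of commanding the MD25 over I2C, the sketch below writes the two motor speed registers through a stand-in bus object. The address and register numbers follow the MD25 documentation as commonly published, but treat them as assumptions and verify against the actual datasheet; the stand-in bus mimics the `write_byte_data` call of libraries such as smbus2 so the sketch runs without hardware.

```python
# Assumed MD25 constants -- verify against the MD25 datasheet.
MD25_ADDR  = 0x58   # default 7-bit I2C address (assumption)
REG_SPEED1 = 0      # motor 1 speed register (assumption)
REG_SPEED2 = 1      # motor 2 speed register (assumption)

class I2CBus:
    """Stand-in for a real I2C bus object (e.g. smbus2.SMBus);
    it records writes so the sketch is runnable without hardware."""
    def __init__(self):
        self.writes = []
    def write_byte_data(self, addr, reg, value):
        self.writes.append((addr, reg, value))

def set_motor_speeds(bus, left, right):
    """Command both MD25 motors. In the default mode the speed
    registers take 0 (full reverse), 128 (stop), 255 (full forward)."""
    for reg, speed in ((REG_SPEED1, left), (REG_SPEED2, right)):
        if not 0 <= speed <= 255:
            raise ValueError("speed registers are one byte")
        bus.write_byte_data(MD25_ADDR, reg, speed)

bus = I2CBus()
set_motor_speeds(bus, 200, 128)   # left motor forward, right motor stopped
```

On real hardware the same `set_motor_speeds` call would work unchanged with an `smbus2.SMBus(1)` object in place of the stand-in.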

Fig. 19. The CM02 Radio communications module integrated on AMRO

The CM02 radio communications module, shown in Fig. 19, works together with its companion RF04 module to form a complete interface between the PC and the I2C devices. Commands can be sent to the robot, and telemetry data can be received back at the PC. The CM02 module is powered from a battery, which can be anything from 6 to 12 V. There are four I2C connectors on the CM02, but it is not limited to four I2C devices. The CM02 radio module provides communication with an RF04 module connected to the PC's USB port. It also provides the MD25 and the I2C devices with a 5 V supply from its on-board 5 V regulator. The AMRO is powered by a battery which feeds the CM02 module and also the MD25 for motor power. All of the modules are connected together with a four-wire I2C loop consisting of the 5 V, 0 V, SCL and SDA lines. The PC can thus control the robot's motors and receive encoder information from the AMRO, which means that the PC becomes the robot's brain, as shown in Fig.20.

Development of Mobile Robot Based on I<sup>2</sup>

**C bus devices** 

multimedia and other applications.

Fig. 22. Compass module slave device

**5.2.1.2 Gyroscope sensor** 

**5.2.1.1 Compass sensor** 

**5.2.1 The I<sup>2</sup>**

C Bus System 205


Fig. 20. The AMRO's system control

#### **5.2 The communication between robot and I<sup>2</sup>C bus devices (Panich, 2008)**

Surachai developed the I2C bus system to exchange information between the robot and its sensors or additional devices.

So that the robot's serial-port signal line can connect to devices on the I2C bus, an electrical master module is designed to generate the SDA and SCL signals, as shown in Fig. 21.

Fig. 21. Hardware communication between Mobile robot (AMRO) and I2C devices

The PC station together with the RF radio module generates the bus signals and acts as the master device. The mobile robot system can connect directly to I2C devices; devices that do not support the I2C system are connected through a microcontroller with an interface circuit.

#### **5.2.1 The I<sup>2</sup> C bus devices**

Mobile Robots – Current Trends

Standard I2C devices operate up to 100Kbps, while fast-mode devices operate at up to 400Kbps. A 1998 revision of the I2C specification (v. 2.0) added a high-speed mode running at up to 3.4Mbps. Most of the I2C devices available today support 400Kbps operation. Higher-speed operation may allow I2C to keep up with the rising demand for bandwidth in multimedia and other applications.

#### **5.2.1.1 Compass sensor**

The first I2C slave device is the compass sensor module shown in Fig. 22. This sensor works on the I2C bus without additional circuitry. The compass module has been specifically designed for use in robots as an aid to navigation; the aim is to produce a unique number representing the direction the robot is facing. The compass uses the Philips KMZ51 magnetic field sensor, which is sensitive enough to detect the earth's magnetic field. The output from two of them, mounted at right angles to each other, is used to compute the direction of the horizontal component of the earth's magnetic field. The compass module requires a 5 V power supply at a nominal 15 mA. Its output pulse width varies from 1 ms (0°) to 36.99 ms (359.9°), in other words 100 µs/° with a +1 ms offset. On the I2C bus, an important consideration is the device address, which consists of a part fixed by the manufacturer and a part set by the user.

Fig. 22. Compass module slave device

### **5.2.1.2 Gyroscope sensor**

A gyroscope is a device for measuring or maintaining orientation, based on the principles of angular momentum (Komoriya, K. and Oyama, E., 1994). A mechanical gyroscope is essentially a spinning wheel or disk whose axle is free to take any orientation. This orientation changes much less in response to a given external torque than it would without the large angular momentum associated with the gyroscope's high rate of spin. Since external torque is minimized by mounting the device in gimbals, its orientation remains nearly fixed, regardless of any motion of the platform on which it is mounted. Because this gyroscope is not designed for the I2C bus system, it must be connected through a microcontroller, which reads its information as shown in Fig. 23. So that the microcontroller can work on the I2C bus system, its communication must follow the I2C format. The gyroscope with the microcontroller then works as a slave device: this module reads information from the gyroscope and sends it to the master device.

Fig. 23. Gyroscope with microcontroller working on the I2C bus as a slave

### **5.2.1.3 Temperature sensor (DS1621)**

The DS1621, shown in Fig. 24, supports the I2C bus and data transmission protocol. A device that sends data onto the bus is defined as a transmitter, and a device that receives data as a receiver. The device that controls the message is called the master, and the devices controlled by the master are called slaves. The bus must be controlled by a master device that generates the serial clock (SCL), controls the bus access, and generates the START and STOP conditions. The DS1621 operates as a slave on the 2-wire bus, and connections to the bus are made via the open-drain I/O lines SDA and SCL. A control byte is the first byte received following the START condition from the master device. It consists of a 4-bit control code, set to 1001 binary for read and write operations.

The next 3 bits of the control byte are the device select bits (A2, A1, A0). They are used by the master device to select which of eight devices is to be accessed, and are in effect the 3 least significant bits of the slave address. The last bit of the control byte (R/W) defines the operation to be performed: when set to "1" a read operation is selected, and when set to "0" a write operation is selected. Following the START condition, the DS1621 monitors the SDA bus, checking the device type identifier being transmitted. Upon receiving the 1001 code and the appropriate device select bits, the slave device outputs an acknowledge signal on the SDA line.

Fig. 24. DS1621 temperature sensor

### **5.3 Software development for AMRO**

A PC application has been developed to control AMRO manually. The control window contains three sections: Wireless Serial Connection, Manual Control and Input Velocity. While connected to the robot, the PC can send commands to it and receive sensor information from it over the I2C bus system. The START button establishes the connection with the real robot over the wireless serial link; the STOP button disconnects the robot from the PC and terminates the communication. The desired velocity for forward and backward movement is entered in mm/s in the Input Velocity block; in this program the robot velocity is limited to a maximum of 500 mm/s for safety reasons. Left and right turning is done through a heading setting already defined in the program; this is essential for navigating the robot in its environment. As shown in Fig. 25, there are five buttons to control the robot: Forward, Backward, Turn Left, Turn Right and Stop. After establishing the wireless connection with the robot and entering all velocity values, the robot is ready to move in the desired directions. Pressing the *F* button starts the robot running forward at the given velocity; to stop it while moving forward, the *S* button must be pressed. The *B* button for backward movement works in the same way as the *F* button. The *L* and *R* buttons turn the robot left and right, respectively; after *L* or *R* is pressed, the *S* button can be used to stop the robot at the desired position. The operation for the right button is the same as for the left.

Fig. 25. Manual control window for AMRO

As mentioned above, the *S* button is very useful for stopping the robot's motion; in any condition of the robot, it restricts further motion without disconnecting the robot from the PC. The software that exchanges data between the robot and the sensor devices is programmed in Visual C++ and must be able to control the SCL and SDA lines. To control the data line on the I2C bus, the ordering of the function calls must be considered and programmed carefully, because if one step is missed or does not complete, every device on the bus will fail. The main functions used for programming are detailed below; all bus conditions are generated only by the robot (as master device). The two main functions are I2C\_START ( ) and I2C\_STOP ( ). The I2C\_START ( ) function produces the START condition as shown in Fig. 26.


Fig. 26. Start and stop condition of two lines

This condition is a HIGH to LOW transition on the SDA line while SCL is HIGH. The I2C\_STOP ( ) function produces the STOP condition, which is a LOW to HIGH transition on the SDA line while SCL is HIGH. Before a START condition begins and after a STOP condition completes, the bus is considered free, and both lines must be HIGH. The next main function is I2C\_ACK ( ): as shown in Fig. 27, after the robot (master device) has finished sending data to a slave device, the slave must send back an acknowledgement that it has received the data.

Fig. 27. Acknowledge condition

Fig. 28 shows a complete transfer cycle for one frame of data. First, the master initiates a write by asserting logic 0 at bit 8, with the slave address defined by the other 7 bits. An acknowledge signal then follows from the slave in bit 9. The second and third bytes are the data and its acknowledge signal. The 7-bit addressing allows up to 127 devices on the I2C bus, which can be extended further by using a 2-byte address. The last two main functions control and get data from slave devices: I2C\_SEND ( ) and I2C\_RECEIVE ( ). The I2C\_SEND ( ) function is always issued by the robot (master device) to control and configure the slave devices, and the I2C\_RECEIVE ( ) function is also issued by the robot, to receive data from slave devices.

Fig. 28. Data transfer on the I2C bus


Examples of the function ordering used to connect a slave device through the I2C bus and receive data are detailed below.

#### **5.3.1 Function ordering for encoder information reading**

The encoders are mounted on the motors, and their information is read by the motor controller (MD25). The MD25 is designed to operate in a standard I2C bus system. To read encoder information, the functions can be ordered as follows.

```
Step 1 : I2C_START ( ),
Step 2 : I2C_SEND ( ) – Call encoder address and write mode configuration,
Step 3 : I2C_ACK ( ),
Step 4 : I2C_SEND ( ) – Write the register number,
Step 5 : I2C_ACK ( ),
Step 6 : I2C_START ( ),
Step 7 : I2C_SEND ( ) – Call encoder address again,
Step 8 : I2C_ACK ( ),
Step 9 : I2C_RECEIVE ( ),
Step 10 : I2C_ACK ( ),
Step 11 : I2C_STOP ( ).
```

By using the feedback information from the two encoders on the left and right wheels of the mobile robot, its position and heading angle can be estimated. The distance and heading increments are obtained as follows (Hye Ri Park et al., 2009; Surachai Panich, 2010a; Surachai Panich, 2010b; Surachai Panich and Nitin V. Afzulpurkar, 2011).

$$
\Delta d_k^{encoder} = \frac{\Delta d_{R,k}^{encoder} + \Delta d_{L,k}^{encoder}}{2} \tag{1}
$$

$$
\Delta \psi_k^{encoder} = \frac{\Delta d_{R,k}^{encoder} - \Delta d_{L,k}^{encoder}}{B_k} \tag{2}
$$

Then the position can be estimated as

$$
X_{k+1} = X_k + \Delta d_{x,k}^{encoder} \tag{3}
$$

$$
Y_{k+1} = Y_k + \Delta d_{y,k}^{encoder} \tag{4}
$$

$$
\psi_{k+1} = \psi_k + \Delta \psi_k^{encoder} \tag{5}
$$

where $B_k$ is the distance between the two drive wheels and

$$
\Delta d_{x,k}^{encoder} = \Delta d_k^{encoder} \cos \psi_k , \qquad \Delta d_{y,k}^{encoder} = \Delta d_k^{encoder} \sin \psi_k
$$


The estimated position and heading are displayed by software called PAMRO, developed in Visual C++, as shown in Fig. 28.


Fig. 28. The software PAMRO

### **5.3.2 Function ordering for compass information reading**

The function ordering for the compass module is detailed below.

```
Step 1 : I2C_START ( ),
Step 2 : I2C_SEND ( ) – Call compass address from robot and write mode configuration,
Step 3 : I2C_ACK ( ),
Step 4 : I2C_SEND ( ) – Write the register number,
Step 5 : I2C_ACK ( ),
Step 6 : I2C_START ( ),
Step 7 : I2C_SEND ( ) – Call compass address again,
Step 8 : I2C_ACK ( ),
Step 9 : I2C_RECEIVE ( ),
Step 10 : I2C_ACK ( ),
Step 11 : I2C_STOP ( ).
```
#### **5.3.3 Function ordering for gyroscope information reading**

To read gyroscope information, a converter from analog to digital signals is needed, because this gyroscope is not designed to work on the I2C bus system. The PCF8591 IC, which supports the I2C bus system, is selected to convert the analog signal from the gyroscope.

```
Step 1 : I2C_START ( ),
Step 2 : I2C_SEND ( ) – Call slave device address from robot and write mode configuration,
Step 3 : I2C_ACK ( ),
Step 4 : I2C_SEND ( ) – Enable analog output, single mode and analog channel selection,
Step 5 : I2C_ACK ( ),
Step 6 : I2C_STOP ( ),
Step 7 : I2C_START ( ),
Step 8 : I2C_SEND ( ) – Read mode; start to read data from analog input of selected channel,
Step 9 : I2C_ACK ( ),
Step 10 : I2C_RECEIVE ( ),
Step 11 : I2C_ACK ( ),
Step 12 : I2C_STOP ( ).
```

### **5.3.4 Function ordering for temperature reading**

To read the temperature, the program must perform 3 phases: start the conversion, stop the conversion, and read the data.

#### **Start to convert data:**


```
Step 1 : I2C_START ( ),
Step 2 : I2C_SEND ( ) – Call slave device address from robot and write mode configuration,
Step 3 : I2C_ACK ( ),
Step 4 : I2C_SEND ( ) – Send command register, start to convert data,
Step 5 : I2C_ACK ( ),
Step 6 : I2C_STOP ( ).
```

#### **Stop to convert data:**

```
Step 1 : I2C_START ( ),
Step 2 : I2C_SEND ( ) – Call slave device address from robot and write mode configuration,
Step 3 : I2C_ACK ( ),
Step 4 : I2C_SEND ( ) – Send command register, stop to convert data,
Step 5 : I2C_ACK ( ),
Step 6 : I2C_STOP ( ).
```

#### **Read data:**


```
Step 1 : I2C_START ( ),
Step 2 : I2C_SEND ( ) – Call slave device address from robot and write mode configuration,
Step 3 : I2C_ACK ( ),
Step 4 : I2C_SEND ( ) – Send command register, read temperature,
Step 5 : I2C_ACK ( ),
Step 6 : I2C_START ( ),
Step 7 : I2C_SEND ( ) – Call slave device address again and read mode configuration,
Step 8 : I2C_ACK ( ),
Step 9 : I2C_RECEIVE ( ) – Read MSB of temperature register,
Step 10 : I2C_ACK ( ),
Step 11 : I2C_RECEIVE ( ) – Read LSB of temperature register,
Step 12 : I2C_ACK ( ),
Step 13 : I2C_STOP ( ).
```

With these steps, the robot can get data from only one module at a time. If the robot wants to read this module again, or another module, it must rerun the sequence from the first step to the last. The software to read information from the I2C devices, shown in Fig. 29, is developed in Visual C++.

Fig. 29. Software to read data from sensors integrated on AMRO by I2C bus system

### **6. Conclusion**

The main purpose of this chapter is to introduce the basic structure of a mobile robot and the communication between a mobile robot and its sensors using the I2C bus system. The mobile robot named AMRO is introduced as an example. Its hardware is constructed from the electronic components described above. The real robot was tested successfully using the manual controls (GUI) developed in C++; this small GUI is very useful for a new user to test the various robot movements. After establishing a wireless connection with the robot and entering all velocity values, the robot is ready to move in the desired directions; pressing the *F* button, for example, starts the robot running forward at the given velocity. The I2C bus system was selected as the main system of our mobile robot because it can be conveniently extended and modified if new sensors must be integrated or more analog sensors are used later. The I2C bus system consists of master and slave devices: in this work the master device is the PC station, while the ADC converter for analog sensors (the gyroscope), the compass module and the temperature sensor work as slave devices integrated on the mobile robot (AMRO). The I2C bus works very well and causes no problems in the AMRO system. The software that controls the robot and reads all the analog sensors is programmed in Visual C++ and must follow the I2C bus format to control the SDA and SCL lines.

### **7. Acknowledgement**

This research from Measurement and Mobile Robot Laboratory (M & M LAB) was supported by Faculty of Engineering, Srinakharinwirot University under grant 180/2552.

### **8. References**

Byoung-Suk Choi (2009). Mobile Robot Localization in Indoor Environment using RFID and Sonar Fusion System, *IEEE/RSJ International Conference on Intelligent Robots and Systems*, October 11-15, 2009, St. Louis, USA.

Fernandez, J., et al. (2007). Communication framework for sensor-actuator data in mobile robots, *ISIE 2007. IEEE International Symposium on Industrial Electronics*, DOI: 10.1109/ISIE.2007.4374825, Page(s): 1502 – 1507.

Hye Ri Park, et al. (2009). A Dead Reckoning Sensor System and a Tracking Algorithm for Mobile Robot, *ICROS-SICE International Joint Conference 2009*, August 18-21, Fukuoka International Congress Center, Japan.

Komoriya, K. and Oyama, E. (1994). Position estimation of a mobile robot using optical fiber gyroscope (OFG), *Proceedings of the Intelligent Robots and Systems, Advanced Robotic Systems and the Real World, IROS '94*, IEEE/RSJ/GI International Conference, 12-16 Sept. 1994.

Meng Yan, et al. (2007). SMARbot: A Miniature Mobile Robot Paradigm for Ubiquitous Computing, *The 2007 International Conference on Intelligent Pervasive Computing (IPC)*, DOI: 10.1109/IPC.2007.80, Page(s): 136 – 139.

Panich, S. (2008). A mobile robot with an inter-integrated circuit system, *10th International Conference on Control, Automation, Robotics and Vision*, Page(s): 2010 – 2014, DOI: 10.1109/ICARCV.2008.4795839.

Philips Semiconductor (2000). *I2C Bus Specification*, Version 2.1.

Surachai, P.; Thiraporn, T.; Chaiyaporn, L.; Kittichai, T.; Tayawat, S. (2009). Sensor fusion for differential encoder integrated with light intensity sensors and accelerometer, *2009 IEEE International Symposium on Computational Intelligence in Robotics and Automation (CIRA)*, DOI: 10.1109/CIRA.2009.5423180, Page(s): 349 – 354.

Surachai Panich (2010a). Mobile Robot Driven by Odometry System Integrated with Accelerometer, *Far East Journal of Electronics and Communications*, Volume 4, Issue 2 (June 2010), Page: 113 – 122.

Surachai Panich (2010b). Dynamics, Control and Simulation of Robots with Differential Drive Integrated with Light Intensity Sensors, *Far East Journal of Dynamical Systems*, Volume 13, Issue 2 (July 2010), Page: 157 – 164.

Surachai Panich (2010c). Mathematic model for mobile robot, *Far East Journal of Mathematical Sciences (FJMS)*, Volume 46, Issue 1 (November 2010), Page: 23 – 32.

Surachai Panich and Nitin V. Afzulpurkar (2011). Sensor Fusion Techniques in Navigation Application for Mobile Robot, in *Sensor Fusion - Foundation and Applications*, edited by Dr. Ciza Thomas, pp. 101-120, InTech, ISBN 978-953-307-446-7, Rijeka, Croatia.

212 Mobile Robots – Current Trends

With these steps, the robot can get data from only one module for one time. If the robot wants to get data from this module once again or from another module, the robot must rerun once again from first to last step. The software is developed by Visual C++ to read

Fig. 29. Software to read data from sensors integrated on AMRO by I2C bus system

be programmed in the I2C bus format to control SDA and SCL lines.

For this book chapter has mainly purpose to introduce basic structure of mobile robot and communication between mobile robot and sensors by using I2C bus system. The mobile robot named AMRO is introduced as example. Its hardware is constructed and combined with the electronics components. The real robot is tested successfully using manual controls (GUI) developed by using C++ programming. This small GUI is very useful for new user to test the various robot movements. The software is developed with manual controls for real robot. After establishing connection with the robot through wireless communication and entering all velocity values, the robot will be ready to move in the desired directions. By pressing *F* button robot starts to run forward with the given velocity. The I2C bus system is selected for main system of our mobile robot, because it can be conveniently developed and modified, if new sensors must be integrated or more analog sensors are used later. The I2C bus system consists of master and slave devices that the master device in this work is PC station and ADC-Converter for analog sensors (gyroscope), compass module and temperature sensor work as slave devices integrated on the mobile robot (AMRO). The I2C bus can work very well and has no problem with AMRO system. The software is developed to control the robot and to read all analog sensors based on the I2C bus format. The developed software is programmed based on Visual C++. The software devleopment must

**Step 11 : I2C\_RECEIVE ( ) – Read LSB of temperature register,**

information from I2C devices as shown in Fig.29.

**Step 10 : I2C\_ACK ( ),**

**Step 12 : I2C\_ACK ( ), Step 13 : I2C\_STOP ( ),** 

### **6. Conclusion**

The main purpose of this chapter has been to introduce the basic structure of a mobile robot and the communication between the robot and its sensors over the I2C bus. The mobile robot named AMRO was introduced as an example; its hardware was constructed and assembled from electronic components. The real robot was tested successfully using a small manual-control GUI developed in C++, which is very useful for a new user to test the various robot movements. After establishing the wireless connection to the robot and entering the velocity values, the robot is ready to move in the desired directions; pressing the *F* button, for example, makes the robot run forward at the given velocity. The I2C bus was selected as the main bus of the mobile robot because it can be conveniently extended and modified if new sensors must be integrated or more analog sensors are added later. The I2C bus consists of master and slave devices: in this work the PC station is the master, while the ADC converter for the analog sensors (gyroscope), the compass module and the temperature sensor work as slaves integrated on the mobile robot (AMRO). The I2C bus works reliably in the AMRO system. The software that controls the robot and reads all analog sensors was developed in Visual C++ and must follow the I2C bus format to drive the SDA and SCL lines.

### **7. Acknowledgement**

This research from the Measurement and Mobile Robot Laboratory (M & M LAB) was supported by the Faculty of Engineering, Srinakharinwirot University, under grant 180/2552.


**11**

## **Construction of a Vertical Displacement Service Robot with Vacuum Cups**

Nicolae Alexandrescu1, Tudor Cătălin Apostolescu2, Despina Duminică1, Constantin Udrea1, Georgeta Ionaşcu1 and Lucian Bogatu1

*1"POLITEHNICA" University of Bucharest, 2"TITU MAIORESCU" University of Bucharest, Romania*

### **1. Introduction**

Building on the results of interdisciplinary fields such as mechanics, electronics and informatics, autonomous mobile robots are gaining more and more attention. The methods used for movement and actuation, and not least the ability to operate in unknown and dynamic environments, give them great complexity and a certain level of intelligence.

The human model is still a challenge for mobility and robot movement. Robot mobility is satisfactorily achieved, in some cases even exceeded, with other types of locomotion, but technical solutions that rigorously reproduce human walking have not yet been identified.

Feasible solutions can be found where human mobility reaches its limits: movement on planes with high inclination angles, including 90° (vertical) and 180° (parallel to the horizontal plane, i.e. ceilings). Autonomous robot mobility on highly inclined planes is rarely tackled in the specialty literature, some of the known solutions being the objects of patents. Research in this field has great potential, especially for providing new ideas and developing knowledge.

The field of professional and domestic services, especially the wall cleaning of high buildings, is one of the areas expected to benefit strongly from robotic systems able to move on vertical surfaces.

The advantages of the technologies that use climbing and walking robots consist mainly of two aspects:

• automatic cleaning of high buildings, improving the technological level and the productivity of the service industry in the field of building maintenance;
• cleaning robots can be used on various types of buildings, thus avoiding the high costs of permanent gondola-type systems (open platform-cars or baskets) for individual buildings.

The most common attachment principle is vacuum adhesion (Cepolina et al., 2003), (Miyake et al., 2007), (Novotny & Horak, 2009), where the robot carries an onboard pump to create a vacuum inside the cups, which are pressed against the wall. This system enables robots to adhere to any type of material with low energy consumption. Vacuum adhesion is suitable for smooth surfaces, because roughness can cause a leakage loss in the vacuum chamber.

The mobile robots endowed with platforms and legs with cups are widely used in practical applications due to their high relative locomotion forces, mobility and good suspension. The disadvantage of increased overall size matters less in applications of cleaning and inspection of the large vitrified surfaces covering buildings (Sun et al., 2007). A new generation of cleaning robots based on all-pneumatic technology is under study (Belforte et al., 2005).

In this context, the authors developed an original solution for a cleaning robot with vertical displacement and a vacuum attachment system (Alexandrescu, 2010a, 2010b), (Apostolescu, 2010). The novelty of the approach consists in the robot's capability to move on vertical surfaces, which involves basic studies enlarging the horizon of knowledge related to: displacement kinematic structures, robot leg anchoring solutions, actuating solutions, as well as the control system of such robots.

### **2. Robot construction**

The robot structure (Fig. 1) contains two triangular platforms with relative mobility and reduced overall size. This is an original feature, not found until now in the structure of robots moving in the vertical plane. The robot is fixed on the vertical surface by vacuum cups.

Fig. 1. The autonomous robot with vertical displacement

In order to obtain an autonomous robot, electric actuation was chosen for all degrees of freedom, as well as for the depressurization needed by the cups. An original driving system was introduced for moving the robot legs. The system uses screw mechanisms synchronized by a toothed belt transmission. The system developed for the relative translation of the platforms contains a ball guideway of reduced size and good guiding accuracy. The use of a rack mechanism allows a compact and fast actuation. The rotation of the robot is achieved by modifying the relative angular position of the two platforms. The kinematic scheme of the robot is presented in Fig. 2. Although the robot attaches itself to the glass surface by means of vacuum cups, a significant miniaturization was achieved. The interior platform – PLI – 7 is fixed on three vacuum suction cups 9. The raising and lowering of the platform is controlled using a screw mechanism 8 actuated by the motor-reduction gear M1R1. The toothed driving belt 4 allows the synchronous movement of the three suction cups. Similarly, for the exterior platform – PLE – 3 we have the suction cups 1, the mechanism 2 and the motor-reduction gear M2R2.

Fig. 2. The kinematic scheme

The driving toothed belt 6 and the motor-reduction gear M3R3 – movement m3 – allow the orienting rotation of the robot. The guiding part 10 is fixed on the shaft 5.

Translation between the platforms – movement m4 – is achieved by means of the motor-reduction gear M4R4 fixed on the interior platform 7. A mechanism consisting of the gear 14 and the rack 13, solidary with the guiding part, is used. This type of mechanism was preferred over a screw mechanism, which would be difficult to miniaturize. A rolling guideway 11 is also used. Figure 3 presents the 3D model of the robot placed in two positions: on a horizontal surface (Fig. 3a) and on a vertical surface (Fig. 3b).

The main parameters of the robot are:

• diameter of the vacuum cups: 50 mm;
• normal detachment force: 86 N;
• lateral detachment force: 110 N;
• triangular platform side length L = 247 mm, corresponding to a stroke S of about 100…110 mm;
• full cycle for a translation step: 200…220 mm;
• duration of a cycle: 8 s;
• vacuum of Δp = 0.57 bar;
• cup raising and lowering speed: approx. 6 mm/s.



Fig. 3. The 3D model of the robot: a) on a horizontal surface; b) on a vertical surface

The pneumatic diagram used for reducing the pressure inside the cups of the robot legs, in order to ensure the contact force when the vacuum suction cups adhere to the surface to be cleaned, is presented in Fig. 4: PV - vacuum micro pump (NMP 015 B – KNF Neuberger); Ac - tank; EM1, EM2 - electromagnets for electro valve operation; V1, V2, V3 - vacuum cups for the interior platform; V4, V5, V6 - vacuum cups for the exterior platform; D - depressurization; A - atmospheric pressure.

Fig. 4. Pneumatic diagram


Fig. 5. Test stand used in order to establish the cup characteristic in the presence of normal forces Fx: 1 - computer, 2 - incremental rule, 3 - dynamometer, 4 - electronic vernier, 5 - vacuum cup, 6 - working surface, 7 - rod, 8 - table, 9 - hand wheel for obtaining the displacement wx.

### **3. Research of the vacuum attachment system of the robot**

An important part of the research concerned the fixing of the robot on the vacuum cups (Alexandrescu, 2010b). In order to establish their bearing capacity, the cups were subjected to external normal, lateral and combined loads. Tests were performed for different depressions. The influence of the different supporting materials was also studied, as well as the behaviour of the cup in presence of different liquids on the surface.


ESS-50 vacuum cups (FESTO) with an outer diameter of 50 mm were used. Glass (Ra=0.02 μm), polished aluminium (Ra=0.59 μm) and textolite (Ra=1.1 μm) were used as supporting surfaces. Different functional conditions were simulated. The roughness was measured using the roughness tester Surftest SJ 201P (Mitutoyo). Three working conditions were generated: cleaning of dry surfaces, cleaning of watered surfaces and cleaning using a window detergent solution.

Fig. 5 presents the experimental stand used for the study of the effects of normal forces Fx.

Figure 6 presents the results obtained for dry glass surfaces. The positive values of the force correspond to the tendency of wrenching/detachment of the cup. The negative values of the force are conventionally assigned to the compression tendency of the cup.

Tests were performed also on dry aluminium surfaces and on dry textolite surfaces. The obtained diagrams are similar. Only small differences are noticed among the values corresponding to the three materials. For reduced deformations, glass and textolite have almost the same behaviour.

In fact, values of the roughness below 1μm do not influence the detachment force. Only the aluminium surface requires smaller loads.

In the range of negative deformations (cup compression), rigidity of the elastic membrane is the only factor that influences the characteristic, thus the three diagrams are very similar.

The study performed on watered surfaces showed almost insignificant decrease of the cup performance, due to the fact that water was practically eliminated from the contact area during cup fixing.

A more significant reduction of the maximum normal load was noticed when detergent for window cleaning was used, because it remained in the contact area even after the cup was fixed on the supporting surface.

Fig. 6. Experimental diagrams obtained for dry glass, normal force; FSt04 – depression of 0.4bar; FSt05 – depression of 0.5bar; FSt06 – depression of 0.6bar; FSt07 – depression of 0.7bar.

Figure 7 presents the diagrams obtained for glass surfaces in the three situations. A decrease of about 6% of the cup capacity appears in the presence of detergent.


Fig. 7. Comparison among the maximum values of the detachment force: Fd – dry glass; FdUd – watered glass; FdDet – glass with detergent for window cleaning.

The test stand presented in Figure 8 was developed for the study of cup fixing on vertical planes.

In the presence of lateral forces, it was noticed that a slip of the cup appeared, leading to the re-positioning of the cup on the supporting surface. The slip stopped and the cup stabilized itself after 5…10s.

In order to deliver correct information regarding the characteristic of the cup after its stabilization, the deformation presented on the diagrams is obtained as the difference between the total deformation obtained by progressive loading and the lateral slips.

The obtained results are presented in Figure 9. It can be noticed that the maximum supported loads are 50%-76% larger in the case of lateral forces than in the case of normal forces, an important advantage for a vertical robot. Similar results were obtained in the cases of aluminium and textolite.

Figure 10 presents a comparison among the three working surfaces for a depression of Δp=0.7bar. It can be noticed that the behaviour of the cup is practically identical.

The last part of the experimental research concerned the study of the cup behavior in presence of combined forces. The tests were performed for a maximum value of the normal force of Fn=49N, generated by the compression f=9mm of the spring 17 (Fig. 8). This force corresponds to approximately 60% of the normal detachment force previously established.

The diagrams corresponding to the tests on dry glass, in the presence of combined forces, are shown in Figure 11.

The same values of the depression as in the previous experiments were used for tests.

A comparison between the behaviour of the cup subjected to combined forces and subjected only to a normal force is presented in Figure 12. Experiments showed that, when lateral and normal forces acted together, the values of lateral forces were 36…45% smaller than in the absence of normal forces. Tests performed on aluminium and textolite surfaces led to similar results.

As previously, the research covered also the situation of fixing on watered glass. In the presence of lateral forces, the values of the cup slip increased significantly, e.g. for a depression of 0.6bar the slip exceeded 2mm. The obtained diagram is presented in Figure 13.


If the cup was supported on glass surfaces washed with detergent for window cleaning, the values of maximum lateral forces decreased with another 10…15%.

The experimental results obtained for aluminium and textolite were comparable to the results obtained in the case of glass supporting surfaces.

Fig. 8. Scheme of the experimental stand developed for testing the cups subjected to lateral forces Fy: 1 – screw for the displacement wy; 2 – guide yoke; 3 – vernier rod; 4 – digital vernier for wy; 5 – link for transmitting the force Fy; 6 – guiding riser; 7 – cup; 8 – cup fixing body; 9, 14, 15 – ball guides; 10, 13, 27 – mobile elements; 11 – hand wheel wy; 12 – dynamometer body; 16 – pan; 17 – spring for generating the force Fx; 18 – rod for positioning the element II; 19 – base; 20 – hand wheel wx; 21 – screw for generating the force Fx; 22 – dial gauge; 23 – compressor; 24 – pressure regulator; 25 – ejector; 26 – noise damper; 27 – element I; 28 – stand hook; 29 – dynamometer hook.


Fig. 9. Experimental diagrams for dry glass, lateral force; FSt04 – depression of 0.4bar; FSt05 – depression of 0.5bar; FSt06 – depression of 0.6bar; FSt07 – depression of 0.7bar.

Due to the fact that design conditions imposed a maximum robot leg slip of 0.5mm, the experimental research showed that the lateral forces corresponding to the vertical fixing of the robot must not exceed 25…30N. The values of the allowed lateral forces were 50…60% smaller than in the case of fixing on dry glass.

The results contributed significantly to the development of a robot able to sustain its own weight on vertical surfaces in order to perform cleaning tasks.

Fig. 10. Cup behavior in the presence of lateral force; depression of 0.7bar, different working surfaces: FSt07 – glass; FAl07 – aluminium; FTx07 – textolite.

Construction of a Vertical Displacement Service Robot with Vacuum Cups 225

Fig. 13. Experimental diagrams for watered glass, lateral force; FStUd04 – depression of 0.4bar; FStUd05 – depression of 0.5bar; FStUd06 – depression of 0.6bar; FStUd07 –

The value of the triangular platform side L = 247 mm was adopted in order to obtain a stroke S of about 100…110 mm. A translation of 200…220 mm of the robot is obtained for a full translation cycle (attachment on PLI suction cups – PLE translation - attachment on PLE suction cups - PLI translation), allowing the tracking of a glass surface of 1500 mm in about seven cycles. If a cycle duration of 8s is considered, the whole window size is travelled in

Figure 14 presents the constructive solution adopted for the robot displacement. The vacuum suction cup 1 (supplied from the vacuum miniature pump by the nozzle 16) is embedded in the sliding body 15 and can displace relatively to the guideway 3. The parallel

The used mechanism consists of the shaft-screw 13 and the nut – interior thread manufactured in the body 15. The toothed belt 11 and the belt gear 6 solidary with the shaft allow the driving. Radial bearings 4 and stroke limiter with microswitch 7 can also be

The microswitches – two for each leg – are fixed on the corner brackets 8, adjustable relatively to the arm 5 solidary with the plate 12 (PLI). The positioning of the suction cup is discerned by the disk 10 driven by the rod 14 in contact with the inner part of the slider. The

The displacements of the robot result as a combination of the following categories of movements: one-step translation, rotation and pseudo-circular movement (Alexandrescu,

depression of 0.7bar.

key 2 restraints rotation.

noticed.

2010a).

**4. Displacement kinematics** 

less than one minute, which is convenient.

spring 9 helps maintaining this contact.

Fig. 11. Results of tests performed for combined forces (lateral force and normal force of 49N), for dry glass; FStn04 – depression of 0.4bar; FStn05 – depression of 0.5bar; FStn06 – depression of 0.6bar; FStn07 – depression of 0.7bar.

Fig. 12. Comparison between the behavior of the cup subjected to combined forces (FStn06) and normal force (FSt06) for a depression of 0.6 bar

Fig. 13. Experimental diagrams for watered glass, lateral force; FStUd04 – depression of 0.4bar; FStUd05 – depression of 0.5bar; FStUd06 – depression of 0.6bar; FStUd07 – depression of 0.7bar.

### **4. Displacement kinematics**

224 Mobile Robots – Current Trends


The value L = 247 mm of the triangular platform side was adopted in order to obtain a stroke S of about 100…110 mm. A robot translation of 200…220 mm is obtained for a full translation cycle (attachment on the PLI suction cups – PLE translation – attachment on the PLE suction cups – PLI translation), allowing a glass surface of 1500 mm to be travelled in about seven cycles. If a cycle duration of 8 s is considered, the whole window height is travelled in less than one minute, which is convenient.
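These travel figures can be checked with a quick back-of-the-envelope calculation (a sketch; the 210 mm per-cycle stroke below is simply the midpoint of the 200…220 mm range quoted above):

```python
# Sanity check of the quoted travel time over a 1500 mm window.
stroke_per_cycle_mm = 210      # midpoint of the 200...220 mm per-cycle range
window_height_mm = 1500
cycle_duration_s = 8

cycles = window_height_mm / stroke_per_cycle_mm   # about 7.1 cycles
total_time_s = cycles * cycle_duration_s          # about 57 s, under one minute

print(f"{cycles:.1f} cycles, {total_time_s:.0f} s total")
```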

Figure 14 presents the constructive solution adopted for the robot displacement. The vacuum suction cup 1 (supplied from the miniature vacuum pump through the nozzle 16) is embedded in the sliding body 15 and can move along the guideway 3. The parallel key 2 restrains rotation.

The mechanism consists of the screw shaft 13 and the nut, an internal thread machined in the body 15. The toothed belt 11 and the belt gear 6, solidary with the shaft, provide the drive. The radial bearings 4 and the stroke limiter with microswitch 7 can also be seen.

The microswitches – two for each leg – are fixed on the corner brackets 8, adjustable relative to the arm 5 that is solidary with the plate 12 (PLI). The position of the suction cup is sensed by means of the disk 10, driven by the rod 14 in contact with the inner part of the slider. The spring 9 helps maintain this contact.

The displacements of the robot result from combining the following categories of movements: one-step translation, rotation, and pseudo-circular movement (Alexandrescu, 2010a).


Fig. 14. Detail regarding the actuation of the displacement relative to the suction cups.

Fig. 15. One-step translation. a. Positions of the two platforms: PLE and PLI; b. Initial position; c. Position after PLE displacement; d. End position (after PLI displacement).

One-step translation of the robot involves the following sequence of movements, as shown in Figure 15. Notations from Figure 2 are used:

• PLE raising (movement m2,up);
• PLE displacement (movement m4);
• PLE lowering (movement m2,down);
• PLI raising (movement m1,up);
• PLI displacement (movement m4);
• PLI lowering (movement m1,down).
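The one-step translation is a fixed ordered sequence of elementary movements, so its control reduces to stepping through a list. The sketch below assumes a hypothetical `execute` callback for the hardware layer (electrovalves and screw drive); the movement names follow the m1…m4 notation of Figure 2:

```python
# Sketch of the one-step translation sequence. The actuator callback is
# hypothetical; only the ordering of the elementary movements is taken
# from the text.
ONE_STEP_TRANSLATION = [
    ("PLE raising",      "m2,up"),
    ("PLE displacement", "m4"),
    ("PLE lowering",     "m2,down"),
    ("PLI raising",      "m1,up"),
    ("PLI displacement", "m4"),
    ("PLI lowering",     "m1,down"),
]

def run_sequence(sequence, execute):
    """Run each elementary movement in order; `execute` is the
    hardware-specific callback (electrovalve / motor commands)."""
    for name, movement in sequence:
        execute(name, movement)

# Example: log the sequence instead of driving hardware.
run_sequence(ONE_STEP_TRANSLATION, lambda n, m: print(f"{n} ({m})"))
```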



Fig. 16. Phases of a 90° clockwise rotation. a. initial state; b, d, f. Rotation FWD of PLE with 30°; c, e, g. Rotation RW of PLI with 30°.

Robot rotation is achieved by the following sequence of movements, as shown in Figure 16:

• PLE raising (movement m2,up);
• PLE clockwise rotation of angle α (movement m3);
• PLE lowering (movement m2,down);
• PLI raising (movement m1,up);
• PLI clockwise rotation of angle α (movement m3);
• PLI lowering (movement m1,down).

This sequence leads to a rotation of angle α = 30°, which is the recommended maximum angle of relative rotation between the platforms. The sequence is repeated until the desired rotation angle is achieved.

The counterclockwise rotation only requires reversing the direction of the partial rotations (*FWD* ↔ *RW*). Axis X indicates the direction of the sliding axis of the robot. Initial centring of the platforms is needed.

The combination of a translation and a number of 30° rotations leads to a pseudo-circular movement outlining a 12-sided polygon, as presented in Figure 17.

The translation has to start (point A of Figure 17,a) and stop (point C of Figure 17,a) in a centred state (the centres of the platforms coincide).

Fig. 17. Pseudo circular movement. a. the positions of the platforms PLE and PLI during translation; b. The polygon of the trajectory.

The radius of the circle inscribed in the travelled polygon is given by (1):

$$R = \frac{S}{2 \cdot \tan 15^{\circ}} = 1.867 \cdot S \tag{1}$$
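Equation (1) can be checked numerically; for the stroke S = 100 mm adopted above, the inscribed radius of the 12-sided trajectory polygon comes out near 187 mm (a sketch):

```python
import math

def inscribed_radius(stroke_mm):
    """Radius of the circle inscribed in the 12-sided polygon travelled
    during the pseudo-circular movement, per equation (1)."""
    return stroke_mm / (2 * math.tan(math.radians(15)))

print(inscribed_radius(100))   # about 186.6 mm for S = 100 mm
```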

### **5. Modelling and simulation of the robot displacement**

The displacement of the robot was modelled and simulated using Cosmos Motion software (Alexandrescu, 2010a; Apostolescu, 2010).

In order to simulate the robot translation, the interior platform was considered fixed for the relative movement between the platforms. A parabolic variation was imposed for the acceleration. The numeric values used for the simulation are: displacement ΔS = 100 mm, maximum acceleration amax = 500 mm/s² and computed maximum speed vmax = 60 mm/s. Figures 18, 19 and 20 present the simulation results.
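The parabolic acceleration profile can be reproduced numerically. The sketch below assumes the law a(t) = amax·(1 − (2t/T − 1)²) over the acceleration phase, which is zero at the ends of the phase and amax at its middle; the phase duration T is chosen here so that the phase reaches the quoted vmax, and is not a value taken from the chapter:

```python
# Numerical check of one acceleration phase of the simulated translation.
a_max = 500.0   # mm/s^2, from the chapter
v_max = 60.0    # mm/s, from the chapter
# For the assumed parabolic pulse, v_peak = (2/3) * a_max * T,
# so the phase duration that reaches v_max is:
T = v_max / (2.0 / 3.0 * a_max)   # 0.18 s under this assumption

# Midpoint-rule integration of a(t) to recover the peak speed.
n = 100_000
dt = T / n
v = 0.0
for i in range(n):
    t = (i + 0.5) * dt
    v += a_max * (1 - (2 * t / T - 1) ** 2) * dt

print(f"phase duration {T:.2f} s, peak speed {v:.1f} mm/s")
```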

The simulations allowed computing the value of the maximum instantaneous power: Ptr = 0.69 W.

The power needed at the output of the driving motor resulted equal to 1.42 W.

The orienting rotation of the robot was simulated for a rotation cycle of 30°. Figures 21, 22 and 23 present the simulation results.

In the transitory areas, it can be noticed that the variation of the angular acceleration deviates from its theoretical shape. This phenomenon can be explained by the variation of the static loads during platform rotation.

The computed maximum value of the torque was 0.204 Nm. The computed maximum power at the level of the platform was Prot = 0.13 W.


Fig. 18. Variation of translation speed.

Fig. 19. Variation of translation acceleration.

Fig. 20. Displacement during translation.

Fig. 21. Variation of angular speed during platform rotation.

Fig. 22. Variation of angular acceleration during platform rotation.

Fig. 23. Angle variation during rotation.


### **6. Robot control**

The robot can be controlled with a National Instruments 7344 data acquisition board and LabVIEW programming, or with microcontrollers. The BS2 microcontroller (Parallax) is used; it is easy to program, but has a number of limitations concerning the control of motor speeds.

Using a data acquisition board allows introducing home switches for each of the four servo axes in order to find the reference position. For axis 1 (robot translation) and axis 4 (orienting rotation), home switches are mounted between the limiting microswitches. For axes 2 and 3, representing the cup translations for PLE and PLI, respectively, microswitches are used only as stroke limits. The reference position is found with the help of a photoelectric system, as shown in Figure 24. The system consists of a light stop 6 fixed on the mobile plate 4 whose displacement *s* gives the position of the cups. The light stop moves between the sides of the photoelectric sensor 2 (of type OPB 916).

Fig. 24. Photoelectric system used in order to establish the reference position of axes 2 and 3. 1 – corner support; 2 – photoelectric sensor; 3 – rod for movement obstruction; 4 – plate attached to the mobile rod; 5 – mobile rod; 6 – light stop; 8 – microswitch.


Figure 25 presents the scheme of the photoelectric system used to determine the reference position. When the light stop reaches the optical axis of the sensor, the state of its output changes. The emitter diode is supplied through the resistor Ra for current limitation. A power amplifier OPB916 is connected at the circuit output. The suppressor diode Ds protects the transistor during its disconnection. Connection to the data acquisition board is made through the NO contact of the relay Rel.

The LabVIEW program that finds the home switch is shown in Figure 26. The activation of the limit switches is also needed during the search. After the reference is found, the position counter is reset. The program is applied to each of the four axes of the acquisition board.

The program consists of two sequences introduced by the cycle 10. The first one searches the reference position and the second resets the position counter (subVI 9).

The subVI 1 loads the maximum search speed and performs the axis selection. The subVIs 2 and 3 load the maximum acceleration and deceleration. The subVI 4 defines the movement kinematics (S-curve of the speed). A *while* type cycle is introduced. The subVI 7 reads the state of the search. The subVI 12 handles the various interrupt situations. The subVI 13 stops the cycle.
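The homing logic implemented by the LabVIEW program can be summarized in procedural form. The sketch below is a hypothetical Python rendering only: the `axis` object and its `move`, `home_switch`, `limit_switch`, `stop` and `reset_counter` methods are assumptions for illustration, not the API of the 7344 board:

```python
import time

def find_reference(axis, search_speed, timeout_s=30.0):
    """Move the axis at the maximum search speed until its home switch
    trips, then reset the position counter. Mirrors the two sequences
    of the LabVIEW cycle: search for the reference, then counter reset.
    The `axis` object here is a hypothetical hardware wrapper."""
    axis.move(velocity=search_speed)          # start the search motion
    deadline = time.monotonic() + timeout_s
    while not axis.home_switch():             # poll the home switch state
        if axis.limit_switch() or time.monotonic() > deadline:
            axis.stop()
            raise RuntimeError("homing interrupted")  # interrupt cases
        time.sleep(0.001)
    axis.stop()
    axis.reset_counter()                      # reference found: zero position
```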

In order to clean glass surfaces, the robot must cover the whole window area, paying special attention to the corners. The main control program of the robot controls the travel on the vitrified surface by horizontal and vertical movements, as well as by rotations that allow changing the direction. An ultrasonic PING sensor (Parallax) was introduced as decision element for changing the direction and stopping. The sensor is mounted on the PLI platform using the corner 3 and the jointed holder 2, as shown in Figure 27.

Fig. 25. Scheme of the photoelectric circuit.


Fig. 26. LabVIEW program for finding the home switch: 1 – maximum speed load; 2 – acceleration load; 3 – deceleration load; 4 – elements of curve S (kinematics without jerk); 5 – home switch use; 6 – *while* type cycle; 7 – reading of search state; 8 – delay producing; 9 – position counter reset; 10 – sequential cycle with two sequences; 11 – search settings; 12 – reading of different interrupt situations; 13 – end cycle condition; indication of possible errors.


Fig. 27. The ultrasonic sensor mounted on the interior platform: 1 – sensor; 2 – sensor holder; 3 – corner; 4 – interior platform PLI of the robot.

Figure 28 presents a sequential cycle of travel. The cycle consists of the following sequences: sequential translation from left to right (this sequence ends when the proximity of the right side rim is sensed); 90º clockwise rotation; lowering with a step; 90º clockwise rotation; sequential translation from right to left (this sequence ends when the proximity of the left side rim is sensed); 90º counterclockwise rotation; lowering with a step; 90º counterclockwise rotation.

Fig. 28. Travel cycle of the robot.
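The travel cycle of Figure 28 is a boustrophedon (lawn-mower) coverage pattern. A minimal sketch, assuming hypothetical `translate_step`, `rotate_90` and sensor primitives (`rim_near`, `bottom_rim_near`) driven by the ultrasonic PING sensor:

```python
def travel_cycle(robot):
    """One full cycle of the coverage pattern described above:
    sweep to one side, turn and step down, sweep back, turn and step down.
    The `robot` object is a hypothetical hardware wrapper."""
    while not robot.rim_near():        # sequential translation left -> right
        robot.translate_step()
    robot.rotate_90("cw")
    robot.translate_step()             # lowering with a step
    robot.rotate_90("cw")
    while not robot.rim_near():        # sequential translation right -> left
        robot.translate_step()
    robot.rotate_90("ccw")
    robot.translate_step()             # lowering with a step
    robot.rotate_90("ccw")

def clean_window(robot):
    """Repeat the travel cycle until the bottom rim is sensed."""
    while not robot.bottom_rim_near():
        travel_cycle(robot)
```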

The robot covers the whole window area by repeating the travel cycle. The robot stops if the sensor *S* sends the signal of proximity of the bottom rim of the vitrified surface. The block diagram of the program is shown in Figure 29.

Fig. 29. Block diagram of the main control program of the robot: 1 – setting port 1 as output; 2 – setting port 2 as input; 3 – initialization of the local variable; 4 – boolean local variable; 5 – *while* cycle of the travel program; 6 – travel stop; 7 – first order *while* cycles; 8, 9 – sequences of the first order cycles.

The program uses ports 1 and 2 of the acquisition board. Port 1 is used as program output, sending the commands towards the electrovalves. Port 2 is used as input, receiving the signal from the sensor *S*. The control 3 initializes the boolean local variable as "False". The variable changes its state to "True" during vertical displacement. The signal from the sensor *S* is also used for stopping the horizontal translation sequences.

### **7. Conclusion**

The chapter reports a number of significant results regarding the design and control of a prototype of an autonomous climbing robot with vacuum attachment cups.

The robot construction is able to perform its intended function: the efficient cleaning of glass surfaces. The vacuum attachment system ensures good contact with the support surface and is simple and reliable. The modelling and simulation of the robot functioning, developed for the platform translation as well as for the relative rotation of the platforms, certifies that its performances are comparable to those of similar solutions conceived worldwide.

The overall size of the robot, 350 mm x 350 mm x 220 mm, demonstrates a good degree of miniaturization.

### **8. References**

Alexandrescu, N.; Apostolescu, T.C.; Udrea, C.; Duminică, D. & Cartal, L.A. (2010). Autonomous mobile robots with displacements in a vertical plane and applications in cleaning services. *Proc. 2010 IEEE International Conference on Automation, Quality and Testing, Robotics*, Cluj-Napoca, Romania, 28-30 May 2010, Tome I, IEEE Catalog Number CFP10AQT-PRT, ISBN 978-1-4244-6722-8, pp. 265-270

Alexandrescu, N.; Udrea, C.; Duminică, D. & Apostolescu, T.C. (2010). Research of the vacuum system of a cleaning robot with vertical displacement. *Proc. 2010 International Conference on Mechanical Engineering, Robotics and Aerospace ICMERA 2010*, Bucharest, Romania, 2-4 December 2010, IEEE Catalog Number CFP1057L-ART, ISBN 978-1-4244-8867-4, pp. 279-283

Apostolescu, T.C. (2010). Autonomous robot with vertical displacement and vacuummetric attachment system (in Romanian). Ph.D. Thesis, POLITEHNICA University of Bucharest, 2010

Belforte, G.; Mattiazzo, G. & Grassi, R. (2005). Innovative solution for climbing and cleaning on smooth surfaces. *Proceedings of the 6th JFPS International Symposium on Fluid Power*, pp. 251-255, Tsukuba, Japan

Cepolina, F.; Michelini, R.; Razzoli, R. & Zoppi, M. (2003). Gecko, a climbing robot for wall cleaning. *1st Int. Workshop on Advances in Service Robotics ASER03*, March 13-15, Bardolino, Italia, 2003, Available from http://www.dimec.unige.it/PMAR/

Miyake, T.; Ishihara, H. & Yoshimura, M. (2007). Basic studies on wet adhesion system for wall climbing robots. *Proc. 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems*, San Diego, CA, USA, Oct. 29-Nov. 2, 2007, pp. 1920-1925

Novotny, F. & Horak, M. (2009). Computer modelling of suction cups used for window cleaning robot and automatic handling of glass sheets. In: *MM Science Journal*, June, 2009, pp. 113-116

Sun D., Zhu J., & Tso S. K. (2007), A climbing robot for cleaning glass surface with motion planning and visual sensing, In: *Climbing & walking robots: towards new applications*, pp.219-234, Hao Xiang Zhang (ed.), InTech, Retrieved from <http://www.intechopen.com/books/show/title/climbing\_and\_walking\_robots\_ towards\_new\_applications>

## **A Kinematical and Dynamical Analysis of a Quadruped Robot**

Alain Segundo Potts and José Jaime da Cruz *University of São Paulo Brazil*

#### **1. Introduction**


In general, legged locomotion requires higher degrees of freedom and therefore greater mechanical complexity than wheeled locomotion. Wheeled robots are simple in general, and more efficient than legged locomotion on flat surfaces. Yet as the surface turns softer, wheeled locomotion becomes inefficient due to rolling friction. Furthermore, in some cases, wheeled robots are unable to overcome small obstacles. On the other hand, legged robots are more easily adaptable to different kinds of terrains due to the fact that only a set of point contacts is required; thus, the quality of the ground between those points does not matter as long as the robot can maintain appropriate ground clearance.

Legged robots appear as the sole means of providing locomotion in highly unstructured environments. However, they cannot traverse every type of uneven terrain because they are of limited dimensions. Hence, if there are terrain irregularities such as a crevasse wider than the maximum horizontal leg reach or a cliff of depth greater than the maximum vertical leg reach, then the machine is prevented from making any progress. This limitation, however, can be overcome by providing the machine with the capability of attaching its feet to the terrain. Moreover, machine functionality is limited not only by the topography of the terrain, but also by the terrain constitution. Whereas hard rock poses no serious problem to legged robots, muddy terrain can hamper its operation to the point of jamming the machine. Still, under such adverse conditions, legged robots offer a better maneuverability than other vehicles (Angeles, 2007; Siegwart & Nourbakhsh, 2004).

The main disadvantages of legged locomotion include power and mechanical complexity. The leg, which may include several degrees of freedom, must be capable of sustaining part of the robot's total weight and, in many robots, must be capable of lifting and lowering the robot. Additionally, high maneuverability will only be achieved if the legs have a sufficient number of degrees of freedom to impart forces in a number of different directions.

In the last few years, this feature has given rise to a number of research activities on the subject. Despite all these efforts, the performance of legged robots is still far from what could be expected from them. This is true particularly because the robots' performance depends on several factors, including the mechanical design, which sometimes may not be changed by the control designer (Estremera & Waldron, 2008).

Legged robots present some problems that are not usual in wheeled robots. For example, problems such as trajectory planning and stability analysis need a good kinematics and dynamics model of the system.



Herein, a kinematical and dynamical analysis of the quadruped robot Kamambaré I (Bernardi & Da Cruz, 2007) will be presented.

Like all legged mobile robots, Kamambaré has a time-variant topology. Due to its gait, there are two different problems to solve. First, when there is at least one closed kinematic chain between the support surface and the platform, the robot's behavior is similar to that of a parallel robot. On the other hand, when a leg of the robot is in the air looking for a new grasping point, the model that best describes it is an open kinematic chain, similar to the model of a serial industrial manipulator. Throughout this work we will refer to these two topological models as the platform model for the parallel case and the leg model for the serial case, as in (Potts & Da Cruz, 2010).

The analysis above is important for bringing the platform or the gripper to a desired position in space, but in our case it is not sufficient. To move the platform or the gripper along a desired path with a prescribed speed, the motion of the joints must be carefully coordinated. There are two types of velocity coordination problems: direct and inverse. In the first case, the velocity of the joints is given and the objective is to find the velocity of the end effector (platform or leg); in the other case, the velocity of the end effector is given and the input joint rates required to produce it are to be found.

#### **2. Kinematics model**

Kamambaré I is a symmetrical quadruped robot. It was developed for climbing vertical objects such as trees, power poles and bridges. Each of its legs has four revolute joints (see Fig. 1), and at the end of each leg there is a gripper. All joints are powered by DC motors. The basic gait of the robot simulates the walking trot of a quadruped mammal: the diagonal legs move in tandem. While one pair of legs is fixed to the supporting surface and pushes the robot forward, the other pair is in the air, seeking a new foothold (see Figure 2). According to that description, there are two basic stages for the legs: "leg on the air", when the leg is seeking a new foothold, and "pushing stage", when the leg is fixed and pushing the body in a given direction.
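The alternating trot described above can be sketched as a two-phase scheduler. The diagonal pairing (1, 3)/(2, 4) and the leg numbering are assumptions for illustration only, since the chapter does not number the legs:

```python
# Two-phase trot scheduler: diagonal leg pairs alternate between the
# "pushing stage" (fixed to the surface) and the "leg on the air" stage.
# The diagonal pairing (1, 3) / (2, 4) is assumed for illustration.

DIAGONAL_PAIRS = ((1, 3), (2, 4))

def trot_stages(phase: int) -> dict:
    """Return the stage of each leg for a given gait phase (0 or 1)."""
    pushing = DIAGONAL_PAIRS[phase % 2]
    return {leg: ("pushing" if leg in pushing else "on the air")
            for leg in (1, 2, 3, 4)}

# Phase 0: legs 1 and 3 push while 2 and 4 seek a new foothold;
# in phase 1 the pairs swap roles.
print(trot_stages(0))
print(trot_stages(1))
```

Advancing the phase counter reproduces the periodic gait graph of Figure 2 under this assumed numbering.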

Fig. 1. Kamambaré I robot

Fig. 2. Gait graphs for the trot of the Kamambaré robot. Leg on the air ◦, leg attached to the surface •.

For a robot to move to a specific position, the location of the center of its body relative to the base must be established first. This is called by some authors the *position analysis problem* (Tsai, 1999). There are two types of position analysis problems: the direct and the inverse kinematics problems. In the first, the joint variables are given and the problem is to find the location of the body of the robot; for the inverse kinematics, the location of the body is given and the problem is to find the joint variables that correspond to it (Kolter et al., 2008). Two approaches will be taken herein for the complete modeling of the robot, in accordance with its topology. First, for the robot in the pushing stage, the model will be that of a parallel robot with a closed chain between the two legs that support the platform. Then, when a leg is "on the air", the model is that of a serial manipulator attached at one of the corners of the platform.

#### **2.1 Direct kinematics problem of the platform**

In this section, the direct kinematics problem of the platform will be solved. The system is modeled as a parallel robot and the legs are stuck between the supporting surface and the platform. The analysis is performed using the Denavit-Hartenberg (D-H) parametrization, starting at the surface and advancing towards the platform.


| *i* | *α<sub>i−1</sub>* | *a<sub>i−1</sub>* | *d<sub>i</sub>* | *θ<sub>il</sub>* |
|:-:|:-:|:-:|:-:|:-:|
| 4 | 0 | 0 | *L*<sub>4</sub> | *θ*<sub>4*l*</sub> |
| 3 | *π*/2 | 0 | 0 | *θ*<sub>3*l*</sub> |
| 2 | 0 | *L*<sub>3</sub> | 0 | *θ*<sub>2*l*</sub> |
| 1 | −*π*/2 | *L*<sub>2</sub> | 0 | *θ*<sub>1*l*</sub> |
| – | 0 | *L*<sub>1</sub> | 0 | 0 |

Table 1. Denavit-Hartenberg parameters for leg *l* in the pushing stage

Table 1 shows the D-H parameters for the "pushing stage". Frames {*Bl*}, {*Cl*}, {*Dl*} and {*El*} are attached to links 4, 3, 2 and 1, respectively, as shown in Figure 3. Frame {*O*} is attached to a point on the climbing surface, {*Al*} is attached to the gripping point, and {*P*} is attached to the robotic platform. The lengths of the links are *L*<sub>5</sub>, *L*<sub>4</sub>, *L*<sub>3</sub>, *L*<sub>2</sub> and *L*<sub>1</sub>, respectively, starting at the point *O<sub>Al</sub>*, the origin of frame {*Al*}. The index *l* (*l* = 1, . . . , 4) indicates the leg of the robot, while the index *i* (*i* = 1, . . . , 4) indicates the *i-th* joint of the *l-th* leg. In this paper, the vector from *A<sub>l</sub>* to *B<sub>l</sub>* expressed in frame {*O*} is assumed to be orthogonal to the climbing surface (Bernardi et al., 2009).

Denoting by *<sup>Yl</sup>T<sub>Xl</sub>* the homogeneous transformation from coordinate system {*Xl*} to coordinate system {*Yl*} of the *l-th* leg, *<sup>O</sup>T<sub>P</sub>* can be expressed as:

$${}^{O}T\_{P} = {}^{O}T\_{A\_l} \cdot {}^{A\_l}T\_{B\_l} \cdot {}^{B\_l}T\_{C\_l} \cdot {}^{C\_l}T\_{D\_l} \cdot {}^{D\_l}T\_{E\_l} \cdot {}^{E\_l}T\_{P} \tag{1}$$
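Equation 1 is simply a chain of 4 × 4 homogeneous matrix products. A minimal numerical sketch of such a chain, with made-up joint angles and link offsets (not the robot's actual D-H parameters), is:

```python
from math import cos, sin

def matmul(A, B):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def link_transform(theta, length):
    """Placeholder link transform: rotation about z by theta followed by
    a translation of `length` along the rotated x axis (illustrative
    only; Table 1 gives the robot's real D-H parameters)."""
    c, s = cos(theta), sin(theta)
    return [[c, -s, 0.0, length * c],
            [s,  c, 0.0, length * s],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

# Equation 1: O_T_P is the product of the six frame-to-frame transforms
# of one supporting leg. Angles and lengths below are assumed values.
O_T_P = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
for theta, L in [(0.1, 0.3), (-0.2, 0.1), (0.3, 0.3),
                 (0.0, 0.2), (0.2, 0.2), (0.0, 0.0)]:
    O_T_P = matmul(O_T_P, link_transform(theta, L))

# The composed matrix is still homogeneous: its last row is [0, 0, 0, 1].
assert O_T_P[3] == [0.0, 0.0, 0.0, 1.0]
```

Because every factor here rotates about the same axis, the rotation block of the product is the rotation by the summed angles, which gives an easy sanity check on the composition.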



Fig. 3. Scheme of *l-th* leg.

Recalling that the structures of *<sup>O</sup>T<sub>Al</sub>*, *<sup>Al</sup>T<sub>Bl</sub>* and *<sup>Bl</sup>T<sub>P</sub>* are:

$${}^{O}T\_{A\_l} = \begin{bmatrix} {}^{O}R\_{A\_l} & {}^{O}\vec{OA\_l} \\ \mathbf{0} & 1 \end{bmatrix}, \tag{2}$$

$${}^{A\_l}T\_{B\_l} = \begin{bmatrix} {}^{A\_l}R\_{B\_l} & {}^{O}\vec{A\_lB\_l} \\ \mathbf{0} & 1 \end{bmatrix} \tag{3}$$

and

$${}^{B\_l}T\_{P} = \begin{bmatrix} {}^{B\_l}R\_{P} & {}^{O}\vec{B\_lP} \\ \mathbf{0} & 1 \end{bmatrix}. \tag{4}$$

Assume that the matrices *<sup>O</sup>R<sub>Al</sub>* and *<sup>Al</sup>R<sub>Bl</sub>* are equal to the identity matrix *I*. Then, after a sequence of straightforward computations, the rotation matrix *<sup>Bl</sup>R<sub>P</sub>* is equal to:

$${}^{B\_l}R\_{P} = \begin{bmatrix} c\theta\_{4\_l}c\theta\_{2\_l3\_l}c\theta\_{1\_l} + s\theta\_{4\_l}s\theta\_{1\_l} & -c\theta\_{4\_l}c\theta\_{2\_l3\_l}s\theta\_{1\_l} + s\theta\_{4\_l}c\theta\_{1\_l} & c\theta\_{4\_l}s\theta\_{2\_l3\_l} \\ s\theta\_{4\_l}c\theta\_{2\_l3\_l}c\theta\_{1\_l} - c\theta\_{4\_l}s\theta\_{1\_l} & -s\theta\_{4\_l}c\theta\_{2\_l3\_l}s\theta\_{1\_l} - c\theta\_{4\_l}c\theta\_{1\_l} & s\theta\_{4\_l}s\theta\_{2\_l3\_l} \\ s\theta\_{2\_l3\_l}c\theta\_{1\_l} & -s\theta\_{2\_l3\_l}s\theta\_{1\_l} & -c\theta\_{2\_l3\_l} \end{bmatrix} \tag{5}$$

and the position of the origin of frame {*P*} with respect to {*B<sub>l</sub>*} is:

$${}^{O}\vec{B\_lP} = \begin{bmatrix} (c\theta\_{4\_l}c\theta\_{2\_l3\_l}c\theta\_{1\_l} + s\theta\_{4\_l}s\theta\_{1\_l})L\_1 + c\theta\_{4\_l}(c\theta\_{2\_l3\_l}L\_2 + c\theta\_{3\_l}L\_3) \\ (s\theta\_{4\_l}c\theta\_{2\_l3\_l}c\theta\_{1\_l} - c\theta\_{4\_l}s\theta\_{1\_l})L\_1 + s\theta\_{4\_l}(c\theta\_{2\_l3\_l}L\_2 + c\theta\_{3\_l}L\_3) \\ s\theta\_{2\_l3\_l}c\theta\_{1\_l}L\_1 + s\theta\_{2\_l}L\_2 + s\theta\_{3\_l}L\_3 + L\_4 \end{bmatrix} \tag{6}$$

for *l* = 1, 2, 3, 4.

Then, using 6, the direct kinematics problem of the platform can be solved by the vector equation:

$${}^{O}\vec{OP} = {}^{O}\vec{OA\_l} + {}^{O}\vec{A\_lB\_l} + {}^{O}\vec{B\_lP} \tag{7}$$

for known coordinates of the points *A<sub>l</sub>* and *B<sub>l</sub>* relative to frame {*O*}.

#### **2.2 Direct kinematics problem of the leg**

Since every homogeneous transformation matrix *<sup>Yl</sup>T<sub>Xl</sub>* is non-singular, it can be inverted to solve the direct kinematics problem for the leg on the air:

$${}^{E\_l}T\_{A\_l} = \begin{bmatrix} {}^{E\_l}R\_{A\_l} & {}^{E\_l}\vec{E\_lA\_l} \\ \mathbf{0} & 1 \end{bmatrix} \tag{8}$$

where:

$${}^{E\_l}R\_{A\_l} = \begin{bmatrix} c\theta\_{4\_l}c\theta\_{2\_l3\_l}c\theta\_{1\_l} + s\theta\_{4\_l}s\theta\_{1\_l} & s\theta\_{4\_l}c\theta\_{2\_l3\_l}c\theta\_{1\_l} - c\theta\_{4\_l}s\theta\_{1\_l} & c\theta\_{1\_l}s\theta\_{2\_l3\_l} \\ -c\theta\_{4\_l}c\theta\_{2\_l3\_l}s\theta\_{1\_l} + s\theta\_{4\_l}c\theta\_{1\_l} & -s\theta\_{4\_l}c\theta\_{2\_l3\_l}s\theta\_{1\_l} - c\theta\_{4\_l}c\theta\_{1\_l} & -s\theta\_{1\_l}s\theta\_{2\_l3\_l} \\ s\theta\_{2\_l3\_l}c\theta\_{4\_l} & s\theta\_{2\_l3\_l}s\theta\_{4\_l} & -c\theta\_{2\_l3\_l} \end{bmatrix} \tag{9}$$

and the position of the gripper relative to frame {*El*} is given by:

$${}^{E\_l}\vec{E\_lA\_l} = \begin{bmatrix} (c\theta\_{2\_l3\_l}\bar{L}\_4 + c\theta\_{2\_l}L\_3 + L\_2)c\theta\_{1\_l} \\ (c\theta\_{2\_l3\_l}\bar{L}\_4 + c\theta\_{2\_l}L\_3 + L\_2)s\theta\_{1\_l} \\ s\theta\_{2\_l3\_l}\bar{L}\_4 - s\theta\_{2\_l}L\_3 \end{bmatrix} \tag{10}$$

where $\bar{L}\_4 = L\_4 + L\_5$.

The use of 4 or 8 depends on which part of the gait is active. In other words, if the leg *l* of the robot is in the air, the transformations between joint frames occur based on the frame {*El*}. On the other hand, if the leg is clung to the surface, the reference coordinate system is {*O*}.
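The frame-selection rule above can be captured in a small helper (the function and stage names are illustrative, not from the chapter):

```python
def reference_frame(leg_stage: str, leg: int) -> str:
    """Select the reference frame per the rule above: a leg on the air
    is referred to its own frame {E_l}; a clinging leg to frame {O}."""
    if leg_stage == "on the air":
        return f"E_{leg}"
    if leg_stage == "pushing":
        return "O"
    raise ValueError("unknown stage: " + leg_stage)

print(reference_frame("on the air", 2))
print(reference_frame("pushing", 2))
```

A gait controller would call this once per leg and phase to decide whether the serial (leg) or parallel (platform) model applies.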

#### **2.3 Inverse kinematics problem for the platform**

Since each leg has only four degrees of freedom, the position and orientation of the platform must be specified in accordance with the constraints imposed by the joints.

Using equations 7 and 6, it is possible to solve the inverse kinematics problem. If both the clinging point *O<sub>Al</sub>* and the position and orientation of the platform [*<sup>O</sup>P<sub>x</sub>*, *<sup>O</sup>P<sub>y</sub>*, *<sup>O</sup>P<sub>z</sub>*, *ψ<sub>P</sub>*] are given, and the geometric and mathematical constraints are respected, then from equation 6 we have:

$$c\theta\_{4\_l} = \frac{y\_{PAB\_l} s\psi\_P L\_1 \pm x\_{PAB\_l} \sqrt{x\_{PAB\_l}^2 + y\_{PAB\_l}^2 - s\psi\_P^2 L\_1^2}}{x\_{PAB\_l}^2 + y\_{PAB\_l}^2} \tag{11}$$

where $x\_{PAB\_l} = {}^{O}P\_x - {}^{O}A\_{x\_l} - {}^{O}B\_{x\_l}$ and $y\_{PAB\_l} = {}^{O}P\_y - {}^{O}A\_{y\_l} - {}^{O}B\_{y\_l}$. Equation 11 is subject to:

$$x\_{PAB\_l}^2 + y\_{PAB\_l}^2 \neq 0, \tag{12}$$

$$x\_{PAB\_l}^2 + y\_{PAB\_l}^2 \ge s\psi\_P^2 L\_1^2 \tag{13}$$



and

$$\left| \frac{y\_{PAB\_l} s\psi\_P L\_1 \pm x\_{PAB\_l}\sqrt{x\_{PAB\_l}^2 + y\_{PAB\_l}^2 - s\psi\_P^2 L\_1^2}}{x\_{PAB\_l}^2 + y\_{PAB\_l}^2} \right| \le 1. \tag{14}$$
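Equation 11 together with constraints 12-14 can be evaluated numerically; the ± sign yields up to two admissible configurations. The sketch below uses assumed numeric values, and `ctheta4_branches` is a hypothetical helper name:

```python
from math import sin, sqrt, acos

def ctheta4_branches(x_pab, y_pab, psi_p, L1):
    """Both branches of equation 11 for theta_4, with the constraints
    12-14 checked explicitly. Raises ValueError when the requested
    pose violates the reachability constraints."""
    r2 = x_pab**2 + y_pab**2
    if r2 == 0.0:                          # constraint (12)
        raise ValueError("x_PAB and y_PAB cannot both be zero")
    disc = r2 - (sin(psi_p) * L1)**2
    if disc < 0.0:                         # constraint (13)
        raise ValueError("pose violates constraint 13")
    root = sqrt(disc)
    branches = []
    for sign in (+1.0, -1.0):
        c4 = (y_pab * sin(psi_p) * L1 + sign * x_pab * root) / r2
        if abs(c4) <= 1.0:                 # constraint (14)
            branches.append(acos(c4))
    return branches

# Illustrative pose: both elbow-style branches are admissible here.
sols = ctheta4_branches(0.25, 0.10, 0.3, 0.3)
print(sols)
```

A planner would typically keep the branch closest to the leg's current configuration to avoid joint flips between gait cycles.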

With respect to *θ*<sub>2<sub>l</sub></sub> we have:


$$c\theta\_{2\_l} = \frac{x\_{PAB\_l}^2 + y\_{PAB\_l}^2 + z\_{PAB\_l}^2 - L\_3^2 - \bar{L}\_2^2 - s\psi\_P^2 L\_1^2}{2L\_3\bar{L}\_2} \tag{15}$$

where $\bar{L}\_2 = c\psi\_P L\_1 + L\_2$ and $z\_{PAB\_l} = {}^{O}P\_z - {}^{O}A\_{z\_l} - {}^{O}B\_{z\_l} - L\_4$. As $|c\theta\_{2\_l}| \le 1$, equation 15 is subject to:

$$x\_{PAB\_l}^2 + y\_{PAB\_l}^2 + z\_{PAB\_l}^2 \le \left(L\_3 + L\_2\right)^2 + 2L\_1 c\psi\_P (L\_3 + L\_2) + L\_1^2 \tag{16}$$

and

$$x\_{PAB\_l}^2 + y\_{PAB\_l}^2 + z\_{PAB\_l}^2 \ge \left(L\_3 - L\_2\right)^2 - 2L\_1 c\psi\_P (L\_3 - L\_2) + L\_1^2 \tag{17}$$

finally

$$s\theta\_{3\_l} = \frac{(c\theta\_{2\_l}\bar{L}\_2 + L\_3)z\_{PAB\_l} \pm s\theta\_{2\_l}\bar{L}\_2\sqrt{L\_3^2 + 2c\theta\_{2\_l}L\_3\bar{L}\_2 + \bar{L}\_2^2 - z\_{PAB\_l}^2}}{L\_3^2 + 2c\theta\_{2\_l}L\_3\bar{L}\_2 + \bar{L}\_2^2} \tag{18}$$

subject to:

$$L\_3^2 + 2c\theta\_{2\_l}L\_3\bar{L}\_2 + \bar{L}\_2^2 - z\_{PAB\_l}^2 \ge 0, \tag{19}$$

$$L\_3^2 + 2c\theta\_{2\_l}L\_3\bar{L}\_2 + \bar{L}\_2^2 \neq 0, \tag{20}$$

and

$$\left| \frac{(c\theta\_{2\_l}\bar{L}\_2 + L\_3)z\_{PAB\_l} \pm s\theta\_{2\_l}\bar{L}\_2\sqrt{L\_3^2 + 2c\theta\_{2\_l}L\_3\bar{L}\_2 + \bar{L}\_2^2 - z\_{PAB\_l}^2}}{L\_3^2 + 2c\theta\_{2\_l}L\_3\bar{L}\_2 + \bar{L}\_2^2} \right| \le 1 \tag{21}$$

The last constraints are verified when relation 13 holds and $x\_{PAB\_l}^2 + y\_{PAB\_l}^2 + z\_{PAB\_l}^2 - s\psi\_P^2 L\_1^2 \neq 0$ is satisfied, for *l* = 1, 2, 3, 4. In addition, from 19 and 20, $z\_{PAB\_l} \neq 0$.

Hence, the inverse kinematics problem is solved. Now the orientation of the body has to be defined. A usual way of defining it is through the Euler angles. Denoting by *φ<sub>P</sub>*, *θ<sub>P</sub>* and *ψ<sub>P</sub>* the Euler angles associated with the Z-Y-Z convention, the rotation matrix with respect to frame {*O*}, *<sup>O</sup>R̄<sub>P</sub>*, is given by:

$${}^{O}\bar{R}\_{P} = \begin{bmatrix} c\phi\_P c\theta\_P c\psi\_P - s\phi\_P s\psi\_P & -c\phi\_P c\theta\_P s\psi\_P - s\phi\_P c\psi\_P & c\phi\_P s\theta\_P \\ s\phi\_P c\theta\_P c\psi\_P + c\phi\_P s\psi\_P & -s\phi\_P c\theta\_P s\psi\_P + c\phi\_P c\psi\_P & s\phi\_P s\theta\_P \\ -s\theta\_P c\psi\_P & s\theta\_P s\psi\_P & c\theta\_P \end{bmatrix} \tag{22}$$

Equating 5 and 22, it follows that:

$$c\theta\_P = -c\theta\_{2\_l3\_l}, \tag{23}$$

$$t\psi\_P = t\theta\_{1\_l} \tag{24}$$

and

$$t\phi\_P = t\theta\_{4\_l}, \tag{25}$$

where *θ<sub>P</sub>* ≠ 0, *π*, for *l* = 1, 2, 3, 4.

As said in the last section, the angles $\theta\_{2\_l}$, $\theta\_{3\_l}$ and $\theta\_{4\_l}$ are not independent. Hence, $\theta\_P = f(P\_x, P\_y, P\_z, \theta\_{1\_l})$ and $\phi\_P = f(P\_x, P\_y, P\_z, \theta\_{1\_l})$ for a given point *O<sub>Al</sub>*.
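The Z-Y-Z decomposition of equations 22-25 can be checked numerically. The sketch below builds the rotation matrix of equation 22 and recovers the Euler angles, assuming *θ<sub>P</sub>* ∈ (0, *π*) so the decomposition is unique; the helper names are illustrative:

```python
from math import cos, sin, atan2, sqrt, isclose

def rot_zyz(phi, theta, psi):
    """Z-Y-Z Euler rotation matrix of equation 22."""
    cf, sf = cos(phi), sin(phi)
    ct, st = cos(theta), sin(theta)
    cp, sp = cos(psi), sin(psi)
    return [[cf*ct*cp - sf*sp, -cf*ct*sp - sf*cp, cf*st],
            [sf*ct*cp + cf*sp, -sf*ct*sp + cf*cp, sf*st],
            [-st*cp,            st*sp,            ct]]

def euler_from_matrix(R):
    """Recover (phi, theta, psi) from equation 22 for theta in (0, pi)."""
    theta = atan2(sqrt(R[0][2]**2 + R[1][2]**2), R[2][2])
    phi = atan2(R[1][2], R[0][2])
    psi = atan2(R[2][1], -R[2][0])
    return phi, theta, psi

phi, theta, psi = 0.4, 1.1, -0.7
R = rot_zyz(phi, theta, psi)
recovered = euler_from_matrix(R)
assert all(isclose(a, b, abs_tol=1e-12)
           for a, b in zip((phi, theta, psi), recovered))
```

At *θ<sub>P</sub>* = 0 or *π* the decomposition degenerates (only *φ<sub>P</sub>* ± *ψ<sub>P</sub>* is defined), which is exactly the exclusion stated after equation 25.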

#### **2.4 Inverse kinematics problem for the leg**

In this section, the inverse kinematics problem for the leg on the air will be solved. The starting point for the solution of the inverse kinematics problem of the gripper is equation 10. For given coordinates of the points *A<sub>l</sub>* and *E<sub>l</sub>* in frame {*E<sub>l</sub>*}, the solution is:

$$t\theta\_{1\_l} = \frac{y\_{AE\_l}}{x\_{AE\_l}}\tag{26}$$

where:

$$x\_{AE\_l} = {}^{E\_l}A\_{x\_l} - {}^{E\_l}E\_{x\_l}, \quad y\_{AE\_l} = {}^{E\_l}A\_{y\_l} - {}^{E\_l}E\_{y\_l}, \quad c\theta\_{1\_l} \neq 0$$

and


$$\mathfrak{x}\_{AE\_l} \neq 0$$

After finding *θ*<sub>1<sub>l</sub></sub>, the next step is to compute *θ*<sub>3<sub>l</sub></sub>:

$$c\theta\_{3\_l} = \frac{x\_{AE\_l}^2 + y\_{AE\_l}^2 \pm 2L\_2\sqrt{x\_{AE\_l}^2 + y\_{AE\_l}^2} + z\_{AE\_l}^2 + L\_2^2 - \bar{L}\_4^2 - L\_3^2}{2\bar{L}\_4 L\_3} \tag{27}$$

where $x\_{AE\_l}^2 + y\_{AE\_l}^2 \neq 0$ and

$$\left(\bar{L}\_4 - L\_3\right)^2 \le \left(\sqrt{x\_{AE\_l}^2 + y\_{AE\_l}^2} - L\_2\right)^2 + z\_{AE\_l}^2 \le \left(\bar{L}\_4 + L\_3\right)^2 \tag{28}$$

Finally:

$$s\theta\_{2\_l} = \frac{z\_{AE\_l}(c\theta\_{3\_l}\bar{L}\_4 + L\_3) \pm s\theta\_{3\_l}\bar{L}\_4\sqrt{\bar{L}\_4^2 + 2c\theta\_{3\_l}\bar{L}\_4 L\_3 + L\_3^2 - z\_{AE\_l}^2}}{\bar{L}\_4^2 + 2c\theta\_{3\_l}L\_3\bar{L}\_4 + L\_3^2} \tag{29}$$

where:

$$L\_4^2 + 2c\theta\_{3\downarrow} L\_4 L\_3 + L\_3^2 - z\_{AE\_l}^2 \ge 0,\tag{30}$$

$$\bar{L}_4^2 + 2c\theta_{3_l}\bar{L}_4 L_3 + L_3^2 \neq 0 \tag{31}$$

and

$$\left| \frac{z_{AE_l}(c\theta_{3_l}\bar{L}_4 + L_3) \pm s\theta_{3_l}\bar{L}_4\sqrt{\bar{L}_4^2 + 2c\theta_{3_l}\bar{L}_4 L_3 + L_3^2 - z_{AE_l}^2}}{\bar{L}_4^2 + 2c\theta_{3_l}L_3\bar{L}_4 + L_3^2} \right| \le 1 \tag{32}$$

Inequalities 30, 31 and 32 are satisfied for:

$$c\theta_{3_l} \neq -\frac{\bar{L}_4^2 + L_3^2}{2\bar{L}_4 L_3}, \tag{33}$$

and

$$x\_{AE\_l}^2 + y\_{AE\_l}^2 \ge L\_2^2 \tag{34}$$

Besides, from 30 and 31 we have the condition $z_{AE_l} \neq 0$. Equations 26, 27 and 29 give multiple solutions for the system. The orientation of the gripper is represented by *ϕ<sup>l</sup>*, and its value coincides directly with the value of *θ*4*<sup>l</sup>*.
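The branch structure of equations 27 and 29 can be illustrated numerically. The Python sketch below (an illustration, not part of the chapter; `leg_ik_branches` and the sample foot position are invented names) enumerates the candidate joint-angle pairs for one leg, using the limb lengths of Table 2 and writing `L4b` for $\bar{L}_4$:

```python
import math

# Limb lengths from Table 2 (m); L4b stands for the modified length L̄4.
L2, L3, L4b = 0.1, 0.3, 0.2

def leg_ik_branches(x, y, z):
    """Candidate (theta2, theta3) pairs from equations 27 and 29.

    Returns every admissible combination of the +/- branches, which is how
    the multiple solutions mentioned in the text arise."""
    r = math.hypot(x, y)
    sols = []
    for s1 in (+1.0, -1.0):                      # the +/- in equation 27
        c3 = (r*r + 2*s1*L2*r + z*z + L2**2 - L4b**2 - L3**2) / (2*L4b*L3)
        if abs(c3) > 1.0:
            continue                             # target outside the workspace
        for s3 in (+1.0, -1.0):                  # the two signs of s(theta3)
            s3v = s3*math.sqrt(1.0 - c3*c3)
            den = L4b**2 + 2*c3*L3*L4b + L3**2   # denominator of equation 29
            rad = den - z*z                      # radicand of inequality 30
            if den == 0.0 or rad < 0.0:
                continue                         # inequalities 30 and 31
            for s2 in (+1.0, -1.0):              # the +/- in equation 29
                s2v = (z*(c3*L4b + L3) + s2*s3v*L4b*math.sqrt(rad)) / den
                if abs(s2v) <= 1.0:              # inequality 32
                    sols.append((math.asin(s2v), math.atan2(s3v, c3)))
    return sols

print(leg_ik_branches(0.25, 0.1, 0.15))
```

With these lengths the sample point yields several admissible branches, matching the text's remark that equations 27 and 29 give multiple solutions.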


A Kinematical and Dynamical Analysis of a Quadruped Robot 247

| Limbs | Length (m) | Weight (kg) | Moment of inertia (kg·m²) |
|---|---|---|---|
| *L*<sup>5</sup> | 0.2 | 0.25 | 1.25 · 10<sup>−5</sup> |
| *L*<sup>4</sup> | 0.2 | 0.2 | 17.6 · 10<sup>−5</sup> |
| *L*<sup>3</sup> | 0.3 | 0.25 | 130.83 · 10<sup>−5</sup> |
| *L*<sup>2</sup> | 0.1 | 0.06 | 1.625 · 10<sup>−5</sup> |
| Platform (*L*<sup>1</sup>) | 0.3 | 5.25 | 2932.45 · 10<sup>−5</sup> |


Fig. 4. Workspaces of the center of platform associated to legs 1 and 3. (W*P*<sup>1</sup> = W*P*1*max* − W*P*1*min* and W*P*<sup>3</sup> = W*P*3*max* − W*P*3*min* )

#### **2.5 Workspace**

The workspace is formed by the set of points of the reachable workspace where the robot can generate velocities that span the complete tangent space at that point.

The relationships between joint space and Cartesian space coordinates are generally multiple-valued: the same position can be reached in different ways, each with a different set of joint coordinates. Hence, the reachable workspace of the robot is formed by the configurations, in which the kinematic relationships are locally one-to-one (Pieper, 1968).

#### **2.6 Workspace of the platform**

The workspace of the platform is formed by the set of points *P* = (*Px*, *Py*, *Pz*) that satisfy equation 7, subject to the constraints imposed by 13 and 17.

Let W*Pl* denote the workspace of the center of the platform relative to leg *l*. If more than one leg supports the platform, the final workspace is the intersection of the W*Pl* of all the legs clung to the surface. In the general case:

$$\mathcal{W}_{P} = \mathcal{W}_{P_1} \cap \mathcal{W}_{P_2} \cap \dots \cap \mathcal{W}_{P_4} \tag{35}$$

Figure 4 shows the workspace formed by the intersection of sets W*P*<sup>1</sup> and W*P*<sup>3</sup>; sets W*Plmin* and W*Plmax* represent the minimum and maximum values of the workspace of each leg. The lengths of the limbs are shown in Table 2.
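The intersection in equation 35 can be sketched by sampling. In the Python illustration below, the per-leg membership test `in_W_Pl` is a placeholder annulus (the chapter's real test uses equation 7 with constraints 13 and 17), and the leg root positions and radii are invented for the example:

```python
import numpy as np

# Placeholder membership test for W_Pl: an annular shell around each leg root.
def in_W_Pl(points, anchor, r_min=0.15, r_max=0.45):
    d = np.linalg.norm(points - anchor, axis=1)
    return (d >= r_min) & (d <= r_max)

anchors = np.array([[0.15, 0.15, 0.0], [-0.15, 0.15, 0.0],
                    [-0.15, -0.15, 0.0], [0.15, -0.15, 0.0]])  # leg roots 1..4

rng = np.random.default_rng(0)
P = rng.uniform(-0.6, 0.6, size=(20000, 3))    # candidate platform centers

mask = np.ones(len(P), dtype=bool)
for a in anchors:                              # W_P = intersection of all W_Pl
    mask &= in_W_Pl(P, a)
W_P = P[mask]
print(f"{len(W_P)} of {len(P)} sampled points lie in the common workspace")
```

The same loop applies with any number of supporting legs: each additional leg only tightens the mask, exactly as the intersection in 35 prescribes.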

#### **2.7 Workspace of the leg**

The workspace of leg W*Gl* , when it is in the air, corresponds to its reachable Cartesian space. In this case, W*Gl* is formed by the admissible solutions of equation 28. The geometrical form of this workspace is shown in Figure 5.

Table 2. Dimensions of the limbs
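The admissibility test of inequality 28 reduces to a one-line membership check. A minimal Python sketch, assuming the limb lengths of Table 2 and ordering the two bounds so that the reachable shell is non-empty (`in_leg_workspace` is an illustrative name):

```python
import math

L2, L3, L4b = 0.1, 0.3, 0.2   # limb lengths from Table 2; L4b stands for L̄4

def in_leg_workspace(x, y, z):
    """Inequality 28: (L̄4-L3)^2 <= (sqrt(x^2+y^2)-L2)^2 + z^2 <= (L̄4+L3)^2."""
    d2 = (math.hypot(x, y) - L2)**2 + z**2
    return (L4b - L3)**2 <= d2 <= (L4b + L3)**2

print(in_leg_workspace(0.4, 0.0, 0.1))   # True: inside the reachable shell
print(in_leg_workspace(0.9, 0.0, 0.0))   # False: beyond full extension
```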

Fig. 5. Workspace for the gripper.

#### **3. Singularity analysis**

In previous sections, the problems of direct and inverse kinematics were discussed, both for the platform and for the leg. Such analysis is important for bringing the platform or the gripper to some desired position in space, but in our case it is not sufficient. The motion of the joints must be carefully coordinated to move the platform or the gripper along some desired path with a prescribed speed. There are two types of velocity coordination problems, namely direct and inverse. In the first case, the velocity of the joints is given and the objective is to find the velocity state of the end effector (platform or leg); in the latter case, the velocity state of the end effector is given and the input joint rates required to produce the desired velocity are to be found (Tsai, 1999).

Thus, the matrix that transforms the joint rates in the actuator space into the velocity state in the end effector space is called the Jacobian matrix.

#### **3.1 Singularity analysis for the platform**

Due to the characteristics of the gait chosen for the robot, there will always be a closed-chain kinematics formed by the legs clung to the climbing surface. The closed chain is also characterized by a set of inputs (denoted here by a vector **q**), which correspond to the powered joints, and by a set of output coordinates (denoted here by a vector **x**). These input and output vectors depend on the nature and purpose of the kinematics chain (Gosselin & Angeles, 1990).


The orientation of the platform relative to system {*O*} is given by matrix *ORP*. Then the platform angular velocity with respect to {*O*} is:

$$
\begin{bmatrix} {}^{O}\omega_{p_x} \\ {}^{O}\omega_{p_y} \\ {}^{O}\omega_{p_z} \end{bmatrix} = \begin{bmatrix} 0 & -s\phi_P & s\theta_P c\phi_P \\ 0 & c\phi_P & s\theta_P s\phi_P \\ 1 & 0 & c\theta_P \end{bmatrix} \begin{bmatrix} \dot{\phi}_P \\ \dot{\theta}_P \\ \dot{\psi}_P \end{bmatrix} \tag{36}
$$
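Equation 36 can be cross-checked numerically: the angular velocity extracted from the skew part of $\dot{R}R^T$ must equal the matrix in 36 applied to the Euler rates. A Python sketch (the helper names `Rzyz` and `E` are introduced here for illustration):

```python
import numpy as np

def Rzyz(phi, th, psi):
    """Z-Y-Z Euler rotation matrix."""
    c, s = np.cos, np.sin
    Rz = lambda a: np.array([[c(a), -s(a), 0], [s(a), c(a), 0], [0, 0, 1]])
    Ry = lambda a: np.array([[c(a), 0, s(a)], [0, 1, 0], [-s(a), 0, c(a)]])
    return Rz(phi) @ Ry(th) @ Rz(psi)

def E(phi, th):
    """Mapping of equation 36 from Z-Y-Z Euler rates to angular velocity."""
    c, s = np.cos, np.sin
    return np.array([[0.0, -s(phi), s(th)*c(phi)],
                     [0.0,  c(phi), s(th)*s(phi)],
                     [1.0,  0.0,    c(th)]])

# omega from the skew part of dR/dt * R^T must equal E @ Euler rates.
eul = np.array([0.3, 0.7, -0.4])
rates = np.array([0.2, -0.5, 0.9])
h = 1e-6
dR = (Rzyz(*(eul + h*rates)) - Rzyz(*(eul - h*rates))) / (2*h)
W = dR @ Rzyz(*eul).T                       # skew-symmetric [omega]x
omega_fd = np.array([W[2, 1], W[0, 2], W[1, 0]])
print(np.allclose(omega_fd, E(eul[0], eul[1]) @ rates, atol=1e-6))  # True
```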

The linear velocity of point *El* is given by

$${}^{O}\vec{\upsilon}\_{E\_l} = {}^{O}\vec{\upsilon}\_P + {}^{O}\vec{\omega}\_p \times ({}^{O}\mathbb{R}\_P \cdot {}^{O}P\vec{E}\_l) \tag{37}$$

where *<sup>O</sup>vEl* and *<sup>O</sup>vP* are respectively the linear velocities of points *El* and *P*, with respect to {*O*}.

The left-hand side of 37 can be rewritten as:

$${}^{O}\vec{v}_{E_l} = J_{q_l} \begin{bmatrix} \dot{\theta}_{4_l} \\ \dot{\theta}_{3_l} \\ \dot{\theta}_{2_l} \end{bmatrix} \tag{38}$$

where:

$$J_{q_l} = \begin{bmatrix} -s\theta_{4_l}(c\theta_{3_l2_l}L_2 + c\theta_{3_l}L_3) & -c\theta_{4_l}(s\theta_{3_l2_l}L_2 + s\theta_{3_l}L_3) & -c\theta_{4_l}L_2 s\theta_{3_l2_l} \\ c\theta_{4_l}(c\theta_{3_l2_l}L_2 + c\theta_{3_l}L_3) & -s\theta_{4_l}(s\theta_{3_l2_l}L_2 + s\theta_{3_l}L_3) & -s\theta_{4_l}L_2 s\theta_{3_l2_l} \\ 0 & c\theta_{3_l2_l}L_2 + c\theta_{3_l}L_3 & c\theta_{3_l2_l}L_2 \end{bmatrix} \tag{39}$$

On the right-hand side of 37, the product *<sup>ω</sup> <sup>p</sup>* <sup>×</sup> (*ORP* · *PE <sup>l</sup>*) can be rewritten as:

$${}^{O}\vec{\omega}\_{p}\times({}^{O}\mathcal{R}\_{P}\cdot{}^{O}P\vec{E}\_{l})=\Omega\_{\vec{F}\_{l}}\begin{bmatrix}{}^{O}\omega\_{p\_{\vec{x}}}\\{}^{O}\omega\_{p\_{\vec{y}}}\\{}^{O}\omega\_{p\_{\vec{z}}}\end{bmatrix}\tag{40}$$

where:

$$
\Omega_{F_l} = \begin{bmatrix} 0 & \Upsilon_{z_l} & -\Upsilon_{y_l} \\ -\Upsilon_{z_l} & 0 & \Upsilon_{x_l} \\ \Upsilon_{y_l} & -\Upsilon_{x_l} & 0 \end{bmatrix} \tag{41}
$$

and $\Upsilon_l = {}^{O}R_P \cdot \overrightarrow{PE}_l$. Substituting 36 and 40 into 37 gives:

$${}^{O}\vec{v}_{E_l} = J_{x_{1_l}} J_{x_2} \begin{bmatrix} {}^{O}v_{P_x} \\ {}^{O}v_{P_y} \\ {}^{O}v_{P_z} \\ \dot{\phi}_P \\ \dot{\theta}_P \\ \dot{\psi}_P \end{bmatrix} \tag{42}$$

where:

$$J_{x_{1_l}} = \begin{bmatrix} I_{3\times 3} & \Omega_{F_l} \end{bmatrix} \tag{43}$$

and

$$J_{x_2} = \begin{bmatrix} I_{3\times 3} & 0_{3\times 3} \\ 0_{3\times 3} & \begin{bmatrix} 0 & -s\phi_P & s\theta_P c\phi_P \\ 0 & c\phi_P & s\theta_P s\phi_P \\ 1 & 0 & c\theta_P \end{bmatrix} \end{bmatrix} \tag{44}$$
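The skew matrix of equation 41 turns the cross product in 40 into a plain matrix-vector product. A small Python check (the function name `Omega` is illustrative):

```python
import numpy as np

def Omega(y):
    """Skew matrix of equation 41 built from Upsilon = (y_x, y_y, y_z),
    so that omega x Upsilon = Omega(Upsilon) @ omega (equation 40)."""
    yx, yy, yz = y
    return np.array([[0.0,  yz, -yy],
                     [-yz, 0.0,  yx],
                     [ yy, -yx, 0.0]])

omega = np.array([0.1, -0.4, 0.25])
ups = np.array([0.3, 0.2, -0.1])
print(np.allclose(np.cross(omega, ups), Omega(ups) @ omega))   # True
```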

Finally, for the four legs:

$$J_q \dot{\vec{q}} = J_x \dot{\vec{x}}_P \tag{45}$$

where:


$$\mathbf{J}\_{\mathbf{x}} = \mathbf{J}\_{\mathbf{x}\_{\text{P1}}} \mathbf{J}\_{\mathbf{x}\_{\text{2}}} \tag{46}$$

$$J_q = \mathrm{diag}(J_{q_1}, \dots, J_{q_4}), \tag{47}$$

$$J_{x_{P1}} = \begin{bmatrix} I_{3\times 3} & \Omega_{F_1} \\ \vdots & \vdots \\ I_{3\times 3} & \Omega_{F_4} \end{bmatrix}, \tag{48}$$

$$\overrightarrow{q} = \begin{bmatrix} \theta\_{4\_1}, \theta\_{3\_1}, \theta\_{2\_1}, \dots, \theta\_{4\_4}, \theta\_{3\_4}, \theta\_{2\_4} \end{bmatrix}^T \text{ and } \overrightarrow{x}\_P = \begin{bmatrix} x\_p, y\_p, z\_p, \phi\_p, \theta\_p, \psi\_p \end{bmatrix}^T.$$

The elements of $\vec{q}$ correspond to the set of active joints. This set may vary with the robot gait, with the number of legs clung to the climbing surface and with the eventual use of an optimal control policy.

Vector $\vec{x}_P$ contains the position and the Euler angles that define the orientation of the platform. When the lengths of the input and output vectors are not the same, there are redundancies (Lenarcic & Roth, 2006). These are eliminated when there are only two legs holding the robot: $\vec{q} = [\theta_{4_1}, \theta_{3_1}, \theta_{2_1}, \theta_{4_3}, \theta_{3_3}, \theta_{2_3}]$. Variables *xp*, *yp*, *zp*, *φp*, *θ<sup>p</sup>* and *ψ<sup>p</sup>* are not all arbitrary, but must satisfy the constraints imposed on the kinematics equations.

#### **3.1.1 Inverse Kinematics Singularity of the platform:**

Inverse kinematics singularity occurs when:

$$\det(J_q) = 0. \tag{49}$$

This kind of singularity consists of the set of points where different branches of the inverse kinematics problem meet, the inverse kinematics problem being understood here as the computation of the values of the input variables from given values of the output variables. Since the dimension of the null space of *Jq* is nonzero in the presence of a singularity of this kind, we can find nonzero vectors $\dot{\vec{q}}$ for which $\dot{\vec{x}}$ is equal to zero; therefore, some of the velocity vectors $\dot{\vec{q}}$ produce no motion at the output (Gosselin & Angeles, 1990). From 47, it follows that

$$\det(\mathbf{J}\_{\mathfrak{q}}) = \det(\mathbf{J}\_{\mathfrak{q}\_1}) \det(\mathbf{J}\_{\mathfrak{q}\_2}) \det(\mathbf{J}\_{\mathfrak{q}\_3}) \det(\mathbf{J}\_{\mathfrak{q}\_4}) \tag{50}$$

where, from 39,

$$\det(J_{q_l}) = -L_2 L_3 s\theta_{2_l}\left(c\theta_{3_l 2_l} L_2 + c\theta_{3_l} L_3\right) \tag{51}$$

for *l* = 1, . . . , 4. The singularities occur when *θ*2*<sup>l</sup>* = 0, ±*π*,..., ±*nπ*, ∀ *n* ∈ **N** or when:

$$c\theta\_{\mathfrak{I}\_l} = \pm \frac{|s\theta\_{\mathfrak{I}\_l}|}{\sqrt{\frac{L\_3^2}{L\_2^2} + 2\frac{L\_3}{L\_2}c\theta\_{\mathfrak{I}\_l} + 1}}\tag{52}$$

where:

$$c\theta\_{2\_l} > -\frac{\frac{L\_3^2}{L\_2^2} + 1}{2\frac{L\_3}{L\_2}}\tag{53}$$
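The closed-form determinant 51 can be checked against a numerical determinant of the Jacobian 39. A Python sketch using the limb lengths of Table 2 (the configuration values are arbitrary; `Jq_l` is an illustrative name):

```python
import numpy as np

L2, L3 = 0.1, 0.3    # limb lengths from Table 2

def Jq_l(t4, t3, t2):
    """Leg Jacobian of equation 39; t32 abbreviates theta3 + theta2."""
    c, s = np.cos, np.sin
    t32 = t3 + t2
    a = c(t32)*L2 + c(t3)*L3
    b = s(t32)*L2 + s(t3)*L3
    return np.array([[-s(t4)*a, -c(t4)*b, -c(t4)*L2*s(t32)],
                     [ c(t4)*a, -s(t4)*b, -s(t4)*L2*s(t32)],
                     [ 0.0,      a,        c(t32)*L2]])

t4, t3, t2 = 0.4, 0.9, -0.6
det_closed = -L2*L3*np.sin(t2)*(np.cos(t3 + t2)*L2 + np.cos(t3)*L3)  # eq. 51
print(np.isclose(np.linalg.det(Jq_l(t4, t3, t2)), det_closed))   # True
print(np.isclose(np.linalg.det(Jq_l(t4, t3, 0.0)), 0.0))         # True: theta2 = 0
```

The second print confirms the first singularity condition: for θ2l = 0 the determinant vanishes regardless of the other joint angles.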


for *l* = 1, . . . , 4. According to 52, for a given value of *θ*2*<sup>l</sup>* there will be two solutions for *θ*3*<sup>l</sup>* .
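The two θ3l branches of equation 52 can be exhibited numerically: both lie on the singularity locus $c\theta_{3_l2_l}L_2 + c\theta_{3_l}L_3 = 0$ from equation 51. A Python sketch with the limb lengths of Table 2 (the value of θ2l is arbitrary):

```python
import numpy as np

L2, L3 = 0.1, 0.3   # limb lengths from Table 2

# Equation 52 gives two candidate theta3 values for a given theta2 on the
# singularity locus c(theta3+theta2)*L2 + c(theta3)*L3 = 0.
t2 = 1.1
den = np.sqrt(L3**2/L2**2 + 2*(L3/L2)*np.cos(t2) + 1.0)  # positive by ineq. 53
t3_first = np.arctan2(np.cos(t2)*L2 + L3, np.sin(t2)*L2)
residuals = []
for t3 in (t3_first, t3_first - np.pi):                  # the two branches
    assert np.isclose(abs(np.cos(t3)), abs(np.sin(t2))/den)  # matches eq. 52
    residuals.append(np.cos(t3 + t2)*L2 + np.cos(t3)*L3)
print(residuals)   # both ~ 0: singular configurations
```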

Fig. 6. Side view for the first condition of singularity.

Fig. 7. Side view for the second condition of singularity.

The first condition of singularity, *θ*2*<sup>l</sup>* = 0, *π*,..., *nπ*, means that *L*<sup>2</sup> is fully aligned with *L*3. See Fig. 6.

The second condition of singularity, 52, means that joints *El*, *Cl* and *Bl* are vertically aligned in the same plane. See Fig. 7.

Whenever there are three parallel or coplanar axes, a singular configuration will occur (Murray et al., 1994).

In such a configuration, we say that the output link loses one or more degrees of freedom; this implies that the output link can resist one or more components of force or moment with no torque or force applied at the powered joints. This condition can be useful if the robot needs to support heavy loads, forces or torques with little effort or low power consumption.
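This statics property can be illustrated with the leg Jacobian of equation 39: since joint torques satisfy $\tau = J^T f$, at the singularity θ2l = 0 the transpose has a nontrivial null space, so some end-effector force is resisted with zero actuator torque. A hedged Python sketch (limb lengths from Table 2, configuration values arbitrary):

```python
import numpy as np

L2, L3 = 0.1, 0.3   # limb lengths from Table 2

def Jq_l(t4, t3, t2):
    """Leg Jacobian of equation 39; t32 abbreviates theta3 + theta2."""
    c, s = np.cos, np.sin
    t32 = t3 + t2
    a = c(t32)*L2 + c(t3)*L3
    b = s(t32)*L2 + s(t3)*L3
    return np.array([[-s(t4)*a, -c(t4)*b, -c(t4)*L2*s(t32)],
                     [ c(t4)*a, -s(t4)*b, -s(t4)*L2*s(t32)],
                     [ 0.0,      a,        c(t32)*L2]])

# At the first singularity condition (theta2 = 0) the Jacobian drops rank, so
# some end-effector force f is resisted with zero joint torque tau = J^T f.
J = Jq_l(0.3, 0.8, 0.0)
U, S, Vt = np.linalg.svd(J.T)
f = Vt[-1]                           # force direction in the null space of J^T
tau = J.T @ f
print(S[-1], np.linalg.norm(tau))    # smallest singular value and |tau|, both ~ 0
```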

#### **3.1.2 Direct kinematics singularity of the platform:**

This kind of singularity occurs when


$$\det(J_x) = 0. \tag{54}$$

This corresponds to configurations in which the platform is locally movable with all the actuated joints locked. Here the direct kinematics problem is considered: obtaining the values of the output variables from given values of the input variables. Since, in this case, the null space of *Jx* is non-empty, there exist nonzero output rate vectors **˙x** which are mapped into the origin by *Jx*, i.e., which correspond to null velocities of the input joints.

According to 46, *det*(*Jx*) is null when *det*(*Jx*<sup>1</sup> ) = 0 or *det*(*Jx*<sup>2</sup> ) = 0.

Matrix *Jx*<sup>1</sup> can be square, for example, when the robot is clinging to the surface with two legs, while the other two are in the air. In this case, the matrix *Jx*<sup>1</sup> has size 6 × 6 and a singularity occurs when *det*(Ω*Fj* − Ω*Fk* ) = 0 for *j* ≠ *k*.

Fig. 8. Singularities for *θ* = 0 (a) and *θ* = −*π*/2 (b)

On the other hand, *det*(*Jx*<sup>2</sup> ) = 0 for *θ* = 0, *π*,..., *nπ*, ∀ *n* ∈ *N*. This singularity is associated with the Euler angle convention used. For the Z-Y-Z Euler angle convention, this kind of singularity will occur for all horizontal orientations of the platform. Since this situation is not allowed in this particular application, this problem can be solved either by changing the Euler angle convention or by changing the coordinate system assigned to the climbing surface {*O*} as in Fig. 8 (Harib & Srinivasan, 2003). Now the singularity will occur for *θ* = *π*/2, 3*π*/2, ..., (2*n*+1)*π*/2, ∀ *n* ∈ *N*, which means that the platform is completely vertical with respect to the gripping surface (a situation rather unlikely to occur in our case).

In such a configuration, we say that the output link gains one or more degrees of freedom, which implies that the output link cannot resist one or more components of force or moment even when all actuators are locked.
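The convention-dependence of this singularity is easy to verify: the determinant of the Euler-rate block of equation 44 (the matrix of equation 36) equals −sin *θ*, so it vanishes exactly at *θ* = 0, *π*, ... A Python sketch (`E` is an illustrative helper name):

```python
import numpy as np

def E(phi, th):
    """Euler-rate map of equation 36 (lower-right block of J_x2 in eq. 44)."""
    c, s = np.cos, np.sin
    return np.array([[0.0, -s(phi), s(th)*c(phi)],
                     [0.0,  c(phi), s(th)*s(phi)],
                     [1.0,  0.0,    c(th)]])

# det(E) = -sin(theta): the Z-Y-Z convention is singular at theta = 0, pi, ...
for th in (0.0, np.pi/4, np.pi/2, np.pi):
    print(round(np.linalg.det(E(0.3, th)), 6), round(-np.sin(th), 6))
```

Rotating the surface frame as in Fig. 8 merely moves these zeros to *θ* = *π*/2, 3*π*/2, ..., which is the relocation described in the text.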

#### **3.1.3 Combined singularities of the platform:**

The third kind of singularity is of a slightly different nature since it requires conditions on the linkage parameters. This occurs when, for certain configurations, both *det*(*Jx*) and *det*(*Jq*) become simultaneously singular. If some specific conditions on the linkage parameters are satisfied, the chain can reach configurations at which the relation given by 45 degenerates.


This corresponds to configurations at which the chain can undergo finite motions when its actuators are locked, or at which a finite motion at the inputs produces no motion at the outputs (Tsai, 1999).

#### **3.2 Jacobian matrix for the leg**

The study of the singularity of the leg is similar to the analysis of a serial manipulator attached to point *El*. Differently from the singularity analysis of the platform, when the Jacobian of the serial manipulator loses full rank (singularity condition), the leg can only lose degrees of freedom. The velocity relation is

$$\dot{\vec{x}}_{A_l} = J_{A_l}\begin{bmatrix} \dot{\theta}_{1_l} \\ \dot{\theta}_{2_l} \\ \dot{\theta}_{3_l} \\ \dot{\varphi}_l \end{bmatrix} \tag{55}$$

where:

$$\dot{\vec{x}}_{A_l} = \begin{bmatrix} {}^{E_l}v_{A_{x_l}} \\ {}^{E_l}v_{A_{y_l}} \\ {}^{E_l}v_{A_{z_l}} \\ \dot{\varphi}_l \end{bmatrix} \tag{56}$$

To calculate *JAl* , it is necessary to differentiate equation 10 with respect to time:

$$J_{A_l} = \begin{bmatrix} -s\theta_{1_l}(c\theta_{2_l3_l}\bar{L}_4 + c\theta_{2_l}L_3 + L_2) & (-s\theta_{2_l3_l}\bar{L}_4 - s\theta_{2_l}L_3)c\theta_{1_l} & -c\theta_{1_l}s\theta_{2_l3_l}\bar{L}_4 & 0 \\ c\theta_{1_l}(c\theta_{2_l3_l}\bar{L}_4 + c\theta_{2_l}L_3 + L_2) & (-s\theta_{2_l3_l}\bar{L}_4 - s\theta_{2_l}L_3)s\theta_{1_l} & -s\theta_{1_l}s\theta_{2_l3_l}\bar{L}_4 & 0 \\ 0 & c\theta_{2_l3_l}\bar{L}_4 + c\theta_{2_l}L_3 & c\theta_{2_l3_l}\bar{L}_4 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{57}$$

In this case, the singularity of matrix *JAl* occurs when *det*(*JAl*) = 0. This condition is present for $c\theta_{2_l3_l}\bar{L}_4 + c\theta_{2_l}L_3 + L_2 = 0$ or, in other words, when:

$$s\theta_{2_l} = \frac{s\theta_{3_l}\bar{L}_4 L_2 \pm (c\theta_{3_l}\bar{L}_4 + L_3)\sqrt{\bar{L}_4^2 + 2c\theta_{3_l}L_3\bar{L}_4 + L_3^2 - L_2^2}}{\bar{L}_4^2 + 2c\theta_{3_l}L_3\bar{L}_4 + L_3^2} \tag{58}$$

where:

$$c\theta_{3_l} \ge \frac{L_2^2 - L_3^2 - \bar{L}_4^2}{2L_3\bar{L}_4} \tag{59}$$

Fig. 9. Singular configuration for the leg in the air and *θ*<sup>31</sup> = −*π*/4. First solution.

Fig. 10. Singular configuration for the leg in the air and *θ*<sup>31</sup> = −*π*/4. Second solution.

Another case of singularity of the leg, on the border of the workspace, occurs when a leg is fully extended horizontally, as shown in Figure 11. This kind of singularity is due to the condition *zAEl* = 0.

#### **4. Dynamics model**

The dynamics of walking machines involves special features that render these systems more elaborate from the dynamics viewpoint, for they present a time-varying topology. What this means is that these systems include kinematic loops that open when a leg takes off and open chains that close when a leg touches the ground (Angeles, 2007). This fact implies a time-varying number of degrees of freedom (Pfeiffer et al., 1995).

There are some techniques to analyze the dynamics of robots. In this section, two different methods will be used: for the analysis of the dynamics of the platform, the Principle of Virtual Work is used and, for the analysis of the dynamics of the leg, the Newton-Euler formulation is chosen. In both cases, the notations used in (Tsai, 1999) are employed:

- $\vec{f}_{i_l}$: resulting force (excluding the actuator force) exerted at the center of mass of link *i* of leg *l*.
- $\vec{f}^{\,*}_{i_l}$: inertia force exerted at the center of mass of link *i* of leg *l*, $\vec{f}^{\,*}_{i_l} = -m_{i_l}\dot{\vec{v}}_{i_l}$.
- $\hat{f}_{i_l} = \vec{f}_{i_l} + \vec{f}^{\,*}_{i_l}$.
- $\vec{f}_p$: resulting force exerted at the center of mass of the moving platform.
- $\vec{f}^{\,*}_p$: inertia force exerted at the center of mass of the moving platform, $\vec{f}^{\,*}_p = -m_p\dot{\vec{v}}_p$.
- $\hat{f}_p = \vec{f}_p + \vec{f}^{\,*}_p$.

#### **3.2 Jacobian matrix for the leg**

The study of the singularity of the leg is similar to the analysis of the serial manipulator attached to point *El*. Differently from the analysis of the singularity of the platform, when the Jacobian of the serial manipulator loses its full rank (singularity condition), the chain may only lose degrees of freedom. This corresponds to configurations at which the chain can undergo finite motions when its actuators are locked, or at which a finite motion at the inputs produces no motion at the outputs (Tsai, 1999).

$$v\_{A\_l} = J\_{A\_l}\,\dot{x}\_{A\_l} \tag{55}$$

where:

$$\dot{x}\_{A\_l} = \begin{bmatrix} {}^{E\_l}v\_{Ax\_l} & {}^{E\_l}v\_{Ay\_l} & {}^{E\_l}v\_{Az\_l} & \dot{\varphi}\_l \end{bmatrix}^T$$

and

$$J\_{A\_l} = \begin{bmatrix} -s\theta\_{1\_l}(c\theta\_{2\_l3\_l}\bar{L}\_4 + c\theta\_{2\_l}L\_3 + L\_2) & (-s\theta\_{2\_l3\_l}\bar{L}\_4 - s\theta\_{2\_l}L\_3)c\theta\_{1\_l} & -s\theta\_{2\_l3\_l}\bar{L}\_4\,c\theta\_{1\_l} & 0 \\ c\theta\_{1\_l}(c\theta\_{2\_l3\_l}\bar{L}\_4 + c\theta\_{2\_l}L\_3 + L\_2) & (-s\theta\_{2\_l3\_l}\bar{L}\_4 - s\theta\_{2\_l}L\_3)s\theta\_{1\_l} & -s\theta\_{2\_l3\_l}\bar{L}\_4\,s\theta\_{1\_l} & 0 \\ 0 & c\theta\_{2\_l3\_l}\bar{L}\_4 + c\theta\_{2\_l}L\_3 & c\theta\_{2\_l3\_l}\bar{L}\_4 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{56}$$

To calculate *Jgl*, it is necessary to differentiate equation 10 with respect to time.

Fig. 9. Singular configuration for the leg in the air and *θ*31 = −*π*/4. First solution.

Fig. 10. Singular configuration for the leg in the air and *θ*31 = −*π*/4. Second solution.



A Kinematical and Dynamical Analysis of a Quadruped Robot 255


Fig. 11. Singular configuration in the border of the workspace for the leg in the air.


In addition, the next vectors are defined:

$$\hat{F}\_{i\_l} = \begin{bmatrix} \hat{f}\_{i\_l} \\ \hat{n}\_{i\_l} \end{bmatrix}$$

where *i* ∈ *A*, 1 ≤ *i* ≤ 4, and

$$\hat{F}\_p = \begin{bmatrix} \hat{f}\_p \\ \hat{n}\_p \end{bmatrix}.$$

As the velocities and accelerations of the robot are low, it is possible, without losing accuracy, to assume that each link has its mass lumped at its center of mass. This approach was shown in Almeida & Hess-Coelho (2010) to be sufficiently accurate for modeling purposes. In both cases, the methods do not take into account all the effects that act on the joints and links: they consider only the dynamics of the rigid body under the action of gravity. A very important force that was not included in the model is the friction force. As each joint of the Kamambaré is subject to reduction gears, the effects of friction can represent up to 25% of the torque needed to drive a joint in typical situations (Craig, 1989). The effects of viscous and Coulomb friction can then be modeled by a simplified equation:

$$\tilde{\tau}\_{i\_l} = c \cdot \mathrm{sgn}(\dot{\theta}\_{i\_l}) + b \cdot \dot{\theta}\_{i\_l} \tag{60}$$

where *b* and *c* are constants.
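The friction model of equation 60 is straightforward to implement. A minimal sketch with illustrative coefficients (the chapter only states that *b* and *c* are constants); note that sgn(0) = 0, so stiction at rest is not modeled:

```python
import numpy as np

def friction_torque(theta_dot, b=0.01, c=0.05):
    """Viscous + Coulomb friction of equation 60:
    tau_f = c*sgn(theta_dot) + b*theta_dot.
    b and c are illustrative placeholder values."""
    return c * np.sign(theta_dot) + b * theta_dot

print(friction_torque(2.0), friction_torque(-2.0))
```

Because the function accepts NumPy arrays, the same expression evaluates the friction torque for all joints of a leg at once.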

#### **4.1 Dynamics model of the platform**

The Principle of Virtual Work can be written as:

$$\delta q^T \tau + \delta x\_p^T \hat{F}\_p + \sum\_{l=1}^{2}\sum\_{i=1}^{n} \delta x\_{i\_l}^T \hat{F}\_{i\_l} = 0 \tag{61}$$

where:

• *τ* = [*τ*11, *τ*21, ..., *τnl*]: vector of actuator torques applied at the active joints 1 ≤ *i* ≤ *n* of the leg *l* = 1, 2, ..., 4,
• *xil*: six-dimensional vector describing the position and orientation of link *i* of leg *l*,
• *nil*: resulting torque (excluding the actuator torque) exerted at the center of mass of link *i* of leg *l*,
• $n^{\*}\_{i\_l} = -{}^{i\_l}I\_{i\_l}\,{}^{i\_l}\dot{\omega}\_{i\_l} - {}^{i\_l}\omega\_{i\_l} \times ({}^{i\_l}I\_{i\_l}\,{}^{i\_l}\omega\_{i\_l})$: inertia torque exerted at the center of mass of link *i* of leg *l*,
• *np*: resulting torque exerted at the center of mass of the moving platform,
• $n^{\*}\_p = -I\_p\,\dot{\omega}\_p - \omega\_p \times (I\_p\,\omega\_p)$: inertia torque exerted at the center of mass of the moving platform,
• $\hat{n}\_{i\_l} = n\_{i\_l} + n^{\*}\_{i\_l}$ and $\hat{n}\_p = n\_p + n^{\*}\_p$,
• *δ*(·): virtual displacement of (·).

As usual, the virtual displacement must be compatible with both the geometrical and kinematical constraints of the system. It is thus necessary to express the displacement as a function of a set of independent generalized virtual displacements. In accordance with that, it is convenient to choose the coordinates of the moving platform *xp* as the generalized coordinates (Merlet, 2006; Tsai, 1999).

Denoting by *Jp* and *Jil* the Jacobian matrices, respectively, of the moving platform and of the link:

$$\delta x\_{i\_l} = J\_{i\_l}\,\delta x\_p \tag{62}$$

and

$$\delta q = J\_p\,\delta x\_p, \tag{63}$$

then equation 61 leads to

$$\tau = -J\_p^{-T}\left(\hat{F}\_p + \sum\_{l=1}^{2}\sum\_{i=1}^{n} J\_{i\_l}^T\,\hat{F}\_{i\_l}\right). \tag{64}$$

In addition to equation 64, a more accurate model of the leg dynamics could include various sources of flexibility, deflection of the links under load and vibrations (Bobrow et al., 2004). Nevertheless, this model is sufficiently accurate for our purposes, since these effects are not significant for the leg under consideration.

Because the number of actuators is greater than the number of degrees of freedom of the robot, there are infinitely many solutions for *τ*. Hence, a minimum-norm solution can be adopted by applying the pseudo-inverse technique.
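The minimum-norm property of the pseudo-inverse can be illustrated on a small underdetermined system (toy numbers, not the robot's actual actuation matrix):

```python
import numpy as np

# Toy underdetermined actuation map: 2 DOFs, 3 actuators.
# A @ tau = w has infinitely many solutions; the pseudo-inverse
# picks the one with minimum Euclidean norm.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
w = np.array([1.0, 1.0])

tau = np.linalg.pinv(A) @ w
print(tau)
```

Any other solution of the system, e.g. [1, 1, 0], satisfies the same constraint with a strictly larger norm, which is why the pseudo-inverse is the natural choice for distributing torque over redundant actuators.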

To solve equation (64), it is necessary to compute: i) the linear and angular velocities of each link, performing the inverse kinematics analysis; ii) the Jacobian matrices of the links and of the moving platform; iii) the forces and torques of the links and of the moving platform.
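Once those three ingredients are available, equation 64 is a direct matrix assembly. A minimal numerical sketch with placeholder Jacobians and wrenches (purely illustrative, not the chapter's model; one supporting pair of legs collapsed to two links):

```python
import numpy as np

# Illustrative placeholder values for equation 64.
Jp = np.diag([1.0, 2.0, 4.0])           # platform Jacobian (square here)
F_p = np.array([1.0, 0.0, -1.0])        # platform wrench F^_p
J_links = [np.eye(3), 2.0 * np.eye(3)]  # link Jacobians J_il
F_links = [np.array([0.0, 1.0, 0.0]),   # link wrenches F^_il
           np.array([1.0, 0.0, 1.0])]

# Equation 64: tau = -Jp^{-T} (F^_p + sum_l sum_i J_il^T F^_il)
total = F_p + sum(J.T @ F for J, F in zip(J_links, F_links))
tau = -np.linalg.inv(Jp).T @ total
print(tau)
```

For a rectangular *Jp* (redundant actuation), the inverse would be replaced by the pseudo-inverse discussed above.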

#### **4.2 Dynamics model of the leg**

For the analysis of the dynamics model of the leg, the recursive Newton-Euler formulation was chosen. This formulation uses all the forces acting on the individual links of the robot leg. Hence, the resulting dynamical equation includes all the forces of constraint between two adjacent links.

The method consists of a forward computation of the velocities and accelerations of each link, followed by a backward computation of the forces and moments in each joint (Tsai, 1999).


#### **4.2.1 Forward computation**

The first step is to calculate the angular velocity, angular acceleration, linear velocity and linear acceleration of each link. The best form to calculate these velocities is using a recursive algorithm starting at the first moving link and advancing to the gripper.

$${}^{i\_l+1}\omega\_{i\_l+1} = {}^{i\_l+1}R\_{i\_l} \cdot {}^{i\_l}\omega\_{i\_l} + \dot{\theta}\_{i\_l+1} \cdot {}^{i\_l+1}\hat{Z}\_{i\_l+1} \tag{65}$$

where ${}^{i\_l+1}\hat{Z}\_{i\_l+1}$ is the unit vector of the joint axis expressed in frame $\{i\_l + 1\}$ and ${}^{i\_l+1}\omega\_{i\_l+1}$ is the angular velocity of joint $i\_l + 1$.

$${}^{i\_l+1}\dot{\omega}\_{i\_l+1} = {}^{i\_l+1}R\_{i\_l} \cdot {}^{i\_l}\dot{\omega}\_{i\_l} + \ddot{\theta}\_{i\_l+1} \cdot {}^{i\_l+1}\hat{Z}\_{i\_l+1} + {}^{i\_l+1}R\_{i\_l} \cdot {}^{i\_l}\omega\_{i\_l} \times \dot{\theta}\_{i\_l+1} \cdot {}^{i\_l+1}\hat{Z}\_{i\_l+1} \tag{66}$$

$${}^{i\_l+1}v\_{i\_l+1} = {}^{i\_l+1}R\_{i\_l}\left({}^{i\_l}v\_{i\_l} + {}^{i\_l}\omega\_{i\_l} \times {}^{i\_l}O\_{i\_l+1}\right) \tag{67}$$

$${}^{i\_l+1}\dot{v}\_{i\_l+1} = {}^{i\_l+1}R\_{i\_l}\left[{}^{i\_l}\dot{\omega}\_{i\_l} \times {}^{i\_l}O\_{i\_l+1} + {}^{i\_l}\omega\_{i\_l} \times ({}^{i\_l}\omega\_{i\_l} \times {}^{i\_l}O\_{i\_l+1}) + {}^{i\_l}\dot{v}\_{i\_l}\right] \tag{68}$$

For the calculation, it is assumed that the base velocities and accelerations *ω*0, *ω*˙0, *v*0 and *v*˙0 are known and equal to those of the platform.

If the position of the center of mass of each link, ${}^{i\_l+1}O\_{C\_{i\_l+1}}$, is known, its acceleration may be calculated by equation 69.

$${}^{i\_l+1}\dot{v}\_{C\_{i\_l+1}} = {}^{i\_l+1}\dot{\omega}\_{i\_l+1} \times {}^{i\_l+1}O\_{C\_{i\_l+1}} + {}^{i\_l+1}\omega\_{i\_l+1} \times ({}^{i\_l+1}\omega\_{i\_l+1} \times {}^{i\_l+1}O\_{C\_{i\_l+1}}) + {}^{i\_l+1}\dot{v}\_{i\_l+1} \tag{69}$$

where *il*+1*v*˙*ci <sup>l</sup>*+<sup>1</sup> is the velocity of the center of mass of link *il* + 1.

#### **4.2.2 Backward computation**

Once the velocities and accelerations of the links are calculated, the joint forces and moments can be computed, one link at a time, starting from the gripper and ending at the platform.

$${}^{i\_l}f\_{i\_l} = {}^{i\_l}R\_{i\_l+1} \cdot {}^{i\_l+1}f\_{i\_l+1} + {}^{i\_l}f^{\*}\_{i\_l} \tag{70}$$

$${}^{i\_l}n\_{i\_l} = {}^{i\_l}n^{\*}\_{i\_l} + {}^{i\_l}R\_{i\_l+1} \cdot {}^{i\_l+1}n\_{i\_l+1} + {}^{i\_l}O\_{C\_{i\_l}} \times {}^{i\_l}f^{\*}\_{i\_l} + {}^{i\_l}O\_{i\_l+1} \times {}^{i\_l}R\_{i\_l+1} \cdot {}^{i\_l+1}f\_{i\_l+1} \tag{71}$$

Finally, the torques are obtained by projecting the forces or moments onto their corresponding joint axes.

$$\tau\_{i\_l} = {}^{i\_l}n\_{i\_l}^T \cdot {}^{i\_l}\hat{Z}\_{i\_l} \tag{72}$$
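The forward/backward recursion can be sketched for a planar 2R chain with point masses at the link tips. This is a toy stand-in for the chapter's spatial leg, not its actual model; gravity is handled with the usual trick of giving the base an upward acceleration *g*, and each joint torque is the moment projected on the joint axis (cf. equation 72):

```python
import numpy as np

def cross2(a, b):
    """z-component of the cross product of two planar vectors."""
    return a[0] * b[1] - a[1] * b[0]

def rne_planar(q, qd, qdd, l, m, g=9.81):
    """Recursive Newton-Euler for a planar nR chain, point masses at link tips.
    Forward pass: propagate velocities/accelerations base -> tip.
    Backward pass: propagate forces/moments tip -> base; tau[i] is the
    z-moment transmitted through joint i."""
    n = len(q)
    R = [np.array([[np.cos(a), -np.sin(a)],
                   [np.sin(a),  np.cos(a)]]) for a in q]
    w = wd = 0.0                      # angular velocity/acceleration about z
    a_lin = np.array([0.0, g])        # gravity modeled as base acceleration +y
    tip_acc = []
    for i in range(n):                # forward computation
        a_lin = R[i].T @ a_lin        # express in link-i frame
        w, wd = w + qd[i], wd + qdd[i]
        r = np.array([l[i], 0.0])     # joint i -> tip of link i
        a_lin = a_lin + np.array([-wd * r[1] - w * w * r[0],
                                   wd * r[0] - w * w * r[1]])
        tip_acc.append(a_lin.copy())
    f = np.zeros(2)                   # force transmitted through the joint
    nz = 0.0                          # z-moment transmitted through the joint
    tau = np.zeros(n)
    for i in reversed(range(n)):      # backward computation
        if i < n - 1:
            f = R[i + 1] @ f          # rotate child force into link-i frame
        Fi = m[i] * tip_acc[i]        # inertial + gravity force on tip mass
        r = np.array([l[i], 0.0])
        nz += cross2(r, f + Fi)
        f = f + Fi
        tau[i] = nz
    return tau
```

With zero joint rates and accelerations, the returned torques reduce to the static gravity load, which gives a quick analytical check of the recursion.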

#### **5. Illustrative example of the robot gait**

This section presents the performance of the robot in stages I and II (see figure 2). For this, we want to displace the center of the platform *OP* along the *Y* axis relative to frame {*O*}, from the point *OP*(0) = [0.392, 0, 0.231] to the point *OP*(*I*) = [0.392, 0.39, 0.231]. At the starting point, legs *l* = 1, 3 are stuck to the surface and legs *l* = 2, 4 are in the air. Frame *O* is attached to point $^{O}A\_1$ as in figure 12.

Table 3 shows the control parameters used in this example.

Here *N* is the discretization factor of the signals, *tf* is the time to execute the task, and $I\_{max\_{i\_l}}$ and $V\_{max\_{i\_l}}$ are the maximum values of the current and voltage that can be applied to the joint motors.
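The sample period of the control signals follows directly from these parameters, using the Table 3 values (*N* = 100, *tf* = 40 s for the platform and 60 s for the legs):

```python
# Discretization implied by the Table 3 parameters: N samples over tf.
N = 100
dt_platform = 40.0 / N   # pushing stage: seconds per sample
dt_legs = 60.0 / N       # "leg on the air" stage: seconds per sample
print(dt_platform, dt_legs)
```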

In this case, some conditions must be respected:

1. Vector $^{O}A\_{l}B\_{l}$ is always orthogonal to the surface in the pushing stage.
2. The legs in the air are locked when the platform is moving.
3. At the pushing stage, joints *θ*1*l* are passive.

Fig. 12. Mechanical model



| Parameters | Movement of the Platform | Movement of the Legs |
|---|---|---|
| *N* | 100 | 100 |
| *tf* | 40 s | 60 s |
| $I\_{max\_{i\_l}}$ | 3.5 A | 2.8 A |
| $V\_{max\_{i\_l}}$ | 12 V | 12 V |

Table 3. Control Parameters


The robot motion was controlled by an optimal control law that minimizes the energy loss in the actuators. The control law was based on the independent joint control strategy. The objective of the simulation is to show the performance of the system in a basic gait cycle. The gait control was implemented according to the flowchart shown in figure 13.

Figures 14, 15 and 16 show the characteristics of the robot motion in the "pushing stage".

When the desired position of the platform is reached, the next step is to move the legs that are in the air to the next clinging point. At this moment the stage "leg on the air" begins. Figures 17, 18 and 19 show the performance of one leg in this stage. The orientation of the gripper is the same all the time, *ϕ* = 0.

When the four legs are clung to the surface, the basic gait cycle is over, and the robot is ready to calculate the next path to follow. The total displacement of the robot was from position *OP*(0) to position *OP*(*I*), with an average displacement speed along the *Y* axis of about $^{O}\bar{v}\_Y = 0.00975\,m/s$.

Fig. 13. Control flowchart for a cycle gait

Fig. 14. Joint space in the "pushing stage".

Fig. 15. Joint torques in the "pushing stage".

Fig. 16. Movement and orientation of the center of platform in the Cartesian space in the "pushing stage".

Fig. 17. Joint space for leg 1 in the stage "leg on the air".

Fig. 18. Joint torques for leg 1 in the stage "leg on the air".

Fig. 19. Movement of the gripper of leg 1 in the Cartesian space in the stage "leg on the air".

#### **6. Conclusion**

This chapter discussed an important issue related to legged robots: the kinematic and dynamic models of a quadruped robot. The analysis for each model was presented in two parts, the platform and the legs, in accordance with the time-varying topology and the time-varying degree of freedom of the system.

Several methods were used in the modeling process, always favoring those that gave the best performance for the topology being modeled and that could be easily implemented in high-level programming languages. The Denavit-Hartenberg parameters were used for solving the direct position kinematics of the platform and legs, the Principle of Virtual Work (d'Alembert) for the dynamic modeling of the platform, and the Newton-Euler formulation for the dynamic model of the leg in the air.

Special attention was given to the section on singularities, where the study of all the singularities of the parallel topology was presented. For that, the complete criterion of singularity for parallel robots proposed in Goselin & Angeles (1990) was used. In addition, the principal singular configurations were shown through figures.

Finally, the performance of the robot in a gait cycle was presented. As a result of this example, the joint space, the joint torques and the Cartesian space relative to this gait were displayed in figures.

#### **7. References**


### **0 13**

## **Epi.q Robots**

Giuseppe Quaglia1, Riccardo Oderio1, Luca Bruzzone2 and Roberto Razzoli<sup>2</sup> <sup>1</sup>*Politecnico di Torino* <sup>2</sup>*University of Genova*

*Italy*

### **1. Introduction**

24 Will-be-set-by-IN-TECH

262 Mobile Robots – Current Trends

Special attention was given in the section of the singularities, where the study of all the singularities in the parallel topology were presented. For that, the complete criterion of singularity for parallel robots proposed in Goselin & Angeles (1990) was used. In addition,

Finally, the performance of the robot in a cycle gait was presented. As a result of this example, the space joints, the torque of the joints and the cartesian space relative to this gait were

Almeida, R. Z. H. & Hess-Coelho, T. A. (2010). Dynamic model of a 3-dof asymmetric parallel

Angeles, J. (2007). *Fundamentals of Robotic Mechanical Systems. Theory, Methods, and Algorithms*,

Bernardi, R. & Da Cruz, J. J. (2007). Kamanbaré: A tree-climbing biomimetic robotic

Bernardi, R., Potts, A. S. & Cruz, J. (2009). An automatic modelling approach to mobile robots,

Bobrow, J., Park, F. & Sideris, A. (2004). Recent advances on the algorithmic optimization

Goselin, C. & Angeles, J. (1990). Singularity analysis of closed-loop kinematic chains, *IEEE*

Harib, K. & Srinivasan, K. (2003). Kinematic and dynamic analysis of stewart platform-based

Kolter, J. Z., Rodgers, M. P. & Ng, A. Y. (2008). A control architecture for quadruped

Lenarcic, J. & Roth, B. (eds) (2006). *Advances in Robots Kinematics. Mechanisms and Motion*,

Murray, R. M., Li, Z. & Sastry, S. S. (1994). *A Mathematical Introduction to Robot Manipulation*,

Pfeiffer, F., Eltze, J. & Weidemann, H.-J. (1995). The tum walking machine, *Intelligent*

Pieper, D. (1968). The kinematics of manipulators under computer control., *Technical report*,

Potts, A. & Da Cruz, J. (2010). Kinematics analysis of a quadruped robot, *IFAC Symposium on*

Siegwart, R. & Nourbakhsh, I. R. (2004). *Introduction to Autonomous Mobile Robots*, The MIT

Tsai, L. (1999). *Robot Analysis. The Mechanical of Serial and Paralel Manipulators*, John Wiley &

*Automation and Soft Computing. An International Journal* 1: 307–323.

Department of Mechanical Engineering, Stanford University.

Craig, J. (1989). *Introduction to Robotics. Mechanics and Control*, Addison Wesley Longman. Estremera, J. & Waldron, K. J. (2008). Thrust control, stabilization and energetics

platform for environmental research., *International Conference on Informatics in Control,*

*in* F. B. Troch (ed.), *International Conference on Mathematical Modelling*, MATHMOD,

of robot motion., *Technical report*, Departament of Mechanical and Aerospace

of a quadruped running robot, *The International Journal of Robotics Research*

locomotion over rough terrain, *Technical report*, Computer Science Department,

the principals configurations of the singularities were showed through figures.

mechanism, *The Open Mechanical Engineering Journal* 4.

*Automation and Robotics (ICINCO)*.

Engineering, University of California.

*Transactions on Robotics and Automation* 6: 281–290.

machine tool structures, *Robotica* 21: 241–254.

*Mechatronics Systems*, Boston, Massachusetts.

Stanford University, Stanford.

Merlet, J. (2006). *Parallel Robots*, 2nd edn, Springer.

Vienna, pp. 1906–1912.

27(10): 1135–1151.

Springer.

CRC Press.

Press.

Sons.

displayed in figures.

Springer.

**7. References**

Over the last few years there have been great developments and improvements in the field of mobile robotics, aimed at replacing human operators especially in dangerous tasks, such as mine-sweeping operations, rescue after earthquakes or other catastrophic events, fire-fighting, work inside nuclear power stations and exploration of unknown environments.

Different locomotion systems have been developed to enable robots to move flexibly and reliably across various ground surfaces. Most mobile robots are wheeled, tracked or legged, although there are also robots that swim, jump, slither and so on. *Wheeled robots* use wheels for locomotion; they can move fast with low energy consumption, have few degrees of freedom and are easy to control, but they cannot climb large obstacles (relative to robot dimensions) and can lose grip on uneven terrain. *Tracked robots* use tracks; they are easily controllable, even on uneven terrain, but are slower than wheeled robots and consume more energy. *Legged robots* use legs; their great mobility makes them suitable for applications on uneven terrain, but they are relatively slow, require much energy and need several actuators, with increased control complexity. Since each robot class has advantages and drawbacks, researchers have designed new robots that try to combine the advantages of the different classes while reducing the disadvantages: these are called *hybrid robots*.

#### **1.1 Background**

The literature presents numerous interesting solutions for robots moving in structured and unstructured environments; some of them are presented here. The Spacecat, Whegs and MSRox can be considered smart reference prototypes for this work; the others are interesting solutions that accomplish similar tasks using different mechanisms.

Spacecat (Siegwart et al., 1998) is a smart rover developed at the École Polytechnique Fédérale de Lausanne (EPFL) by a team led by Prof. Roland Siegwart, in collaboration with Mecanex S.A. and ESA. The locomotion concept is a hybrid approach called *Stepping triple wheels*, which shares features with both wheeled and legged locomotion. Two independently driven sets of three wheels are supported by two frames. The frames can rotate independently around the main body (payload frame) and allow the rover to actively lift one wheel to climb a step. Eight motors drive each wheel and frame independently. During climbing


operation, the center of gravity of the rover moves outside the contact surface formed by the four wheels; the rover thus gets out of balance and falls with its upper wheel onto the obstacle. No displacement of the center of gravity is required when the rover moves over a small rock, so small objects can be passed without any special control commands.

Whegs and Mini-Whegs (Allen et al., 2003; Quinn et al., 2003; Schroer et al., 2004) are hybrid mobile robots developed at the Center for Biologically Inspired Robotics Research at Case Western Reserve University, Cleveland, Ohio. The *Whegs* were designed using abstracted principles of cockroach locomotion. A cockroach has six legs, which support and move its body. It typically walks and runs in a tripod gait, where the front and rear legs on one side of the body move in phase with the middle leg on the other side. The front legs swing head-high during normal walking, so that many obstacles can be surmounted without significant gait changes. These robots are characterized by three-spoke locomotion units; they move faster than legged vehicles and climb higher barriers than wheeled ones of similar size. A single propulsion motor drives both front and rear axles, and a servo-actuated system controls the steering, similarly to an automobile. With regard to Whegs locomotion: while the robot is walking on flat ground, three of the wheel-legs are 60° out of phase with the other three, which allows the robot to use an alternating tripod gait. This gait requires that the two front wheel-legs be out of phase with each other. When an obstacle is encountered, passive mechanical compliance allows the front legs to come back into phase with each other, so that they can both be used to pull the robot up and over the obstacle. After the robot has pulled itself over the obstacle, the front legs fall back into the previous pattern and the robot returns to an alternating tripod gait. *Whegs II*, the next generation of Whegs vehicles, incorporates a body flexion joint in addition to all of the mechanisms implemented in Whegs I. This actively controlled joint allows the robot to change its posture in a way similar to the cockroach, enabling it to climb even higher obstacles.
The active body joint also allows the robot to reach its front legs down to contact the substrate during a climb and to avoid the instability of high-centering. Its aluminum frame and new leg design made Whegs II more robust than Whegs I. *Whegs VP* is a hybrid of the Whegs I and II vehicles. It is most similar in design to Whegs II, but lacks the body flexion joint. It combines the simplicity and agility of Whegs I with the durability and robustness of Whegs II. Improved legs and gait adaptation devices were implemented in its design. The *Mini-Whegs* are highly mobile, robust, power-autonomous vehicles that employ the same abstracted principles as Whegs, but on a scale closer to the cockroach and using only four locomotion units. These robots, 90 mm long, can run at sustained speeds of over 10 body lengths per second and climb obstacles higher than the length of their legs. One version, called Jumping Mini-Whegs, also has a self-resetting jump mechanism that enables it to surmount obstacles as high as 220 mm, such as a stair.

MSRox (Dalvand & Moghadam, 2006) is a hybrid mobile robot developed by Prof. Moghaddam and Dalvand at Tarbiat Modares University, Tehran, Iran. The MSRox employs a hybrid driving unit called *Star-Wheel*, designed for traversing stairs and obstacles. It is a three-legged wheel unit with three radially located wheels, mounted at the end of each spoke. Each Star-Wheel has two rotary axes: one for the rotation of the wheels, when MSRox moves on flat surfaces or passes over uphill, downhill and sloped surfaces; the other for the rotation of the Star-Wheel, when MSRox climbs or descends stairs and traverses obstacles. The four locomotion units are assembled on a central body. The robot can advance on the ground, when only the wheel rotation is driven, or climb over an obstacle, when only the locomotion


unit is driven. The presented version of MSRox has only two motors: one motor controls the rotation of the 12 wheels while the other controls the rotation of the Star-Wheels; the steering function is not implemented.

RHex (Saranli et al., 2001; 2004), developed first at McGill University and the University of Michigan and then at the Carnegie Mellon Robotics Institute, is characterized by compliant leg elements that provide dynamically adaptable legs and a mechanically self-stabilized gait. This cockroach-inspired hexapod robot uses a simple mechanical design with one actuator per leg and is capable of a wide variety of tasks, such as walking, running, leaping over obstacles and climbing stairs.

Hylos (Grand et al., 2004), developed at the Université Pierre et Marie Curie, is characterized by a wheel-legged locomotion unit. Legs and wheels are independently actuated, therefore it uses wheels for propulsion and internal articulation to adapt its posture. It is a lightweight mini-robot with 16 actively actuated degrees of freedom.

VIPeR (Galileo Mobility Instruments & Elbit Systems Ltd, 2009), codeveloped by Elbit Systems and Galileo Mobility Instruments, is characterized by the *Galileo Wheel*, a patented system developed by Galileo Mobility Instruments Ltd. The Galileo Wheel combines wheel and track in a single component, switching back and forth between the two modes within seconds. This technology enables the device to use wheels whenever possible and tracks whenever needed.

Lego Mindstorm Arctic Snow Cat (Lego Mindstorm, 2007) is characterized by four sets of triangular tracked treads that can rotate in two ways. In standard drive the treads move like a tank's. When the going gets tough, it can turn all four treads on the center axis or, to go through deep water, run on the ends of its triangular treads for extra lift.

Packbot (iRobot, 2010; Mourikis et al., 2007), developed by iRobot, is a tracked vehicle with *flippers*. The flippers enable the robot to climb over obstacles, right itself and climb stairs, enhancing its ability compared with a simple tracked robot.

Scout II (Poulakakis et al., 2006; 2005) is characterized by a fast and stable quadrupedal locomotion. It consists of a rigid body with four compliant rigid prismatic legs. One single actuator per leg, located at the hip, allows active rotation of the leg. Each leg assembly consists of a lower and an upper part, connected via springs to form a compliant prismatic joint.

### **2. Mechanical architecture**

Epi.q robots can be classified as hybrid robots, since their locomotion system shares features with both wheeled and legged robots. They are smart mini robots able to move in structured and unstructured environments, to climb over obstacles and to go up and down stairs. The robots do not need to actively sense obstacles to climb them: they simply move forward and let their locomotion passively adapt to ground conditions, changing accordingly without active control intervention, from rolling on wheels to stepping on rotating legs and vice versa. Using wheels whenever possible and legs only when needed, their energy demand is very low in comparison with tracked and legged robots having similar obstacle crossing capability.

#### **2.1 Chassis**

Epi.q mechanical architecture consists of a forecarriage, a central body and a rear axle, as shown in Figure 1. The forecarriage is composed of a frame linked to two driving units that generate robot traction. The forecarriage frame houses motors and electronics, protecting them from dust and from potentially dangerous impacts against obstacles. The driving units are three-legged wheel units, each with three attached wheels; they house the


transmission system and therefore control robot locomotion. The rear axle comprises two idle wheel units, each consisting of an idle three-legged wheel unit with three radially located idle wheels mounted at the end of each spoke. The central body is a platform which connects forecarriage and rear axle, where a payload can be placed.

Two passive revolute joints, mutually perpendicular, link the front and rear parts of the robot, as shown in Figure 1. The vertical joint allows robot steering, while the horizontal joint guarantees correct contact between wheels and ground, even on uneven terrain. The angular excursion of the vertical and horizontal joints is limited by means of suitable mechanical stops.

Epi.q robots implement differential steering, which provides both driving and steering functions. Depending on the chosen driving unit speeds, the instantaneous center of rotation is positioned differently along the common driving unit axis, so that an angle between the front and rear parts is generated by kinematic conditions and the robot can follow a specific path. Basically, a differential steering vehicle consists of two wheels mounted on the same axis, independently powered and controlled; usually an idle caster wheel completes a tripod-like support structure for the body of the robot. In Epi.q robots the driven wheels are replaced by driving units and the Epi.q vertical joint accomplishes the same task as the caster wheel joint, as shown in Figure 2. If both driving units are driven in the same direction at the same speed, the robot moves in a straight line. If one driving unit rotates faster than the other, the robot follows a curved path, turning inward toward the slower driving unit. If one driving unit is stopped while the other continues to turn, the robot pivots around the stopped driving unit. If the driving units turn at equal speed but in opposite directions, both traverse a circular path around a point centered halfway between them, and the forecarriage pivots around the vertical axis.
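The four driving cases above can be summarized in a small Python sketch (a hypothetical helper, not part of the chapter; the function name and labels are illustrative):

```python
def steering_mode(v_left, v_right, tol=1e-9):
    """Classify the motion of a differential-steering robot from the
    speeds of its two independently driven units."""
    if abs(v_left - v_right) < tol:
        return "straight line"            # same direction, same speed
    if abs(v_left + v_right) < tol:
        return "pivot about midpoint"     # equal speeds, opposite directions
    if abs(v_left) < tol or abs(v_right) < tol:
        return "pivot about stopped unit" # one unit stopped
    return "curved path toward slower unit"
```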

For a classic differential steering robot, shown in Figure 2 on the left, when the velocities of the two driven wheels are chosen, the position of the instantaneous center of rotation is fixed too:

$$\frac{v\_{fl}}{d+i/2} = \frac{v\_{fr}}{d-i/2} \tag{1}$$

Fig. 1. Epi.q mechanical architecture (labels: forecarriage, central body, rear axle, vertical axis, horizontal axis)

Fig. 2. Differential steering systems (left: classic differential steering robot; right: Epi.q robot)

$$d = \frac{v\_{fl} + v\_{fr}}{v\_{fl} - v\_{fr}} \cdot \frac{i}{2} \tag{2}$$

Consequently the velocity of a point centered half way between the two wheels is known:

$$v\_f = \frac{v\_{fl} + v\_{fr}}{2} \tag{3}$$

and this velocity is equal to the component of the caster wheel velocity in the motion direction, otherwise there would be a deformation of the robot body. During this operation the idle caster wheel is positioned by kinematic conditions and turns until it becomes orthogonal to the segment that links *J* and *C*; its velocity is a function of the driven wheel velocities:

$$v\_p = \frac{v\_{fl} + v\_{fr}}{2\cos\alpha} \tag{4}$$

where

$$\alpha = \arctan{\frac{p}{d}}\tag{5}$$
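As a concrete check of Eqs. (1)–(5), the classic differential-steering relations can be evaluated numerically. The following Python sketch is illustrative (the function name and dictionary return format are choices made here, not from the chapter); symbols follow the text: i is the track width and p the distance from the wheel axis to the caster joint.

```python
import math

def classic_diff_steer(v_fl, v_fr, i, p):
    """Kinematics of a classic differential-steering robot, Eqs. (1)-(5)."""
    if math.isclose(v_fl, v_fr):
        # Equal speeds: straight-line motion, ICR at infinity.
        return {"d": math.inf, "v_f": v_fl, "alpha": 0.0, "v_p": v_fl}
    # Eq. (2): distance d of the instantaneous center of rotation C
    d = (v_fl + v_fr) / (v_fl - v_fr) * i / 2
    # Eq. (3): velocity of the midpoint between the two driven wheels
    v_f = (v_fl + v_fr) / 2
    # Eq. (5): caster wheel steering angle
    alpha = math.atan(p / d)
    # Eq. (4): caster wheel velocity
    v_p = (v_fl + v_fr) / (2 * math.cos(alpha))
    return {"d": d, "v_f": v_f, "alpha": alpha, "v_p": v_p}
```

For example, with v_fl = 1.2, v_fr = 0.8, i = 0.4 and p = 0.3, Eq. (2) gives d = 1.0 and the caster turns by α = arctan(0.3).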

For an Epi.q robot, shown in Figure 2 on the right, the mathematical treatment is quite similar. When the velocities of the two driving units are chosen, the position of the instantaneous center of rotation is fixed too:

$$\frac{v\_{fl}}{d+i/2} = \frac{v\_{fr}}{d-i/2} \tag{6}$$

$$d = \frac{v\_{fl} + v\_{fr}}{v\_{fl} - v\_{fr}} \cdot \frac{i}{2} \tag{7}$$

Therefore the velocity of a point centered half way between the two driving units is known:

$$v\_f = \frac{v\_{fl} + v\_{fr}}{2} \tag{8}$$

and this point coincides with the vertical revolute joint. An angle between the front and rear parts of the robot is generated by kinematic conditions that position the rear wheel unit axis in


order to pass through the instantaneous center of rotation *C*. The component of the vertical joint velocity in the rear axle direction is equal to the rear axle velocity; otherwise there would be a deformation of the robot central body:

$$v\_b = \frac{v\_{fl} + v\_{fr}}{2} \cos \beta \tag{9}$$

where

$$
\beta = \arcsin\frac{p}{d} \tag{10}
$$

and consequently the velocity of the two rear idle wheel units are:

$$v\_{bl} = \frac{v\_{fl} + v\_{fr}}{2} \cdot \frac{d\cos\beta + l/2}{d} \tag{11}$$

$$v\_{br} = \frac{v\_{fl} + v\_{fr}}{2} \cdot \frac{d \cos \beta - l/2}{d} \tag{12}$$
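Equations 7 and 9–12 chain together into a single computation. A minimal sketch of ours follows; the parameter names `i`, `p` and `l` mirror the symbols in Equations 7, 10 and 11, which we assume to denote the front track width, the joint-to-rear-axle distance and the rear track width:

```python
import math

def rear_wheel_velocities(v_fl, v_fr, i, p, l):
    """Rear idle wheel unit velocities from the front wheel speeds,
    Eqs. (7) and (9)-(12). Assumes curved motion (v_fl != v_fr)."""
    d = (v_fl + v_fr) / (v_fl - v_fr) * i / 2                     # Eq. (7)
    beta = math.asin(p / d)                                       # Eq. (10)
    v_b = (v_fl + v_fr) / 2 * math.cos(beta)                      # Eq. (9)
    v_bl = (v_fl + v_fr) / 2 * (d * math.cos(beta) + l / 2) / d   # Eq. (11)
    v_br = (v_fl + v_fr) / 2 * (d * math.cos(beta) - l / 2) / d   # Eq. (12)
    return v_b, v_bl, v_br

v_b, v_bl, v_br = rear_wheel_velocities(0.6, 0.4, i=0.3, p=0.3, l=0.3)
# on a curve, the outer rear wheel is always faster than the inner one
assert v_bl > v_b > v_br
```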

#### **2.2 Multi-leg wheel unit**

A multi-leg wheel unit consists of a plurality of radially located spokes that end with a wheel. Both the forecarriage and the rear axle employ multi-leg wheel units.

A multi-leg wheel unit has a plurality of equally spaced wheels. If the number of wheels increases, the polygon defined by the wheel centers tends to become a circle and its side length decreases; thus the step overcoming capability is reduced but, on the other hand, the rotating leg motion becomes smoother. Epi.q robots employ a three-legged wheel unit because it maximizes the step overcoming capability for a given driving unit height, while motion smoothness is guaranteed because these robots use wheels whenever possible and legs only when needed.

Although a multi-leg wheel unit generates more friction than a single wheel during steering operations, this solution is advantageous: when the robot is moving on uneven terrain, its pitching is significantly reduced; when it is facing an obstacle, a multi-leg wheel unit can climb over higher obstacles, and the velocity component in the motion direction of the wheel unit generally presents smaller discontinuities.

When Epi.q robots are moving on rough ground, their body vertical displacement is significantly decreased with respect to a robot that uses single wheels. Indeed, as illustrated in Figure 3, if *ho* is the height of an obstacle small enough to be contained between the wheels of a three-legged wheel unit, the height of the wheel unit axis can be expressed as:

$$h\_a^{'} = l\_l \sin 30^\circ + r\_{du} \tag{13}$$

$$h\_a^{''} = l\_l \sin\left(30^\circ + \alpha\right) + r\_{du} \tag{14}$$

The inclination *α* of the wheel unit can be related to the obstacle height:

$$2l\_l \cos 30^\circ \sin \alpha = h\_o \tag{15}$$

therefore the vertical displacement Δ*hdu* of a three-legged wheel unit follows from Equations 13, 14 and 15:

$$
\Delta h\_{du} = h\_a^{''} - h\_a^{'} = l\_l \sin 30^\circ \cos \alpha + l\_l \cos 30^\circ \sin \alpha - l\_l \sin 30^\circ = \frac{h\_o}{2} - \frac{l\_l}{2} \left( 1 - \cos \alpha \right) \tag{16}
$$


Fig. 3. Vertical displacement in presence of little unevenness, a comparative sketch between a three-legged wheel unit and a single wheel with same overall dimensions

that is always smaller or equal to half obstacle height:

$$
\Delta h\_{du} \le \frac{h\_o}{2} \tag{17}
$$

while the vertical displacement of a single wheel, Δ*hw*, is always equal to the obstacle height. Consequently, when the robot is moving on uneven terrain, the pitching is significantly reduced with the use of a three-legged wheel unit instead of a single wheel.
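Equations 13–17 condense into a few lines; the sketch below (ours, with arbitrary sample dimensions) computes the vertical displacement of a three-legged wheel unit and compares it with the single-wheel case:

```python
import math

def delta_h_driving_unit(h_o: float, l_l: float) -> float:
    """Vertical displacement of a three-legged wheel unit for an obstacle
    of height h_o contained between its wheels, Eqs. (15)-(16)."""
    alpha = math.asin(h_o / (2 * l_l * math.cos(math.radians(30))))  # Eq. (15)
    return h_o / 2 - l_l / 2 * (1 - math.cos(alpha))                 # Eq. (16)

h_o, l_l = 0.02, 0.10       # obstacle height and leg length in metres (ours)
dh_du = delta_h_driving_unit(h_o, l_l)
dh_w = h_o                  # a single wheel always climbs the full obstacle height
assert dh_du <= h_o / 2     # Eq. (17): at most half the obstacle height
print(dh_du, dh_w)
```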

As regards the ability to climb an obstacle, a multi-leg wheel unit can climb over higher steps than a single wheel with the same overall dimensions. As shown in Figure 4, the maximum step that a single wheel can climb over measures a fraction of its radius while, for a multi-leg wheel unit, it is a fraction of its height: for example, it was experimentally tested that the Epi.q-2 driving unit can climb over obstacles measuring up to 84% of its height (see Section 4).

In the case of steps that can be overcome either by a multi-leg wheel unit or by a single wheel, the velocity component in the motion direction generally presents smaller discontinuities with the multi-leg wheel unit than with a single wheel. Considering a three-legged wheel unit and a single wheel with the same overall dimensions, advancing at the same speed as shown in Figure 4, it is possible to identify a *β* angle:

$$\sin \beta = 1 - \frac{h\_o}{r\_w} \tag{18}$$


Fig. 4. Step climbing, a comparative sketch between a three-legged wheel unit and a single wheel, with same overall dimensions

The horizontal component of the wheel unit speed presents smaller discontinuities than a single wheel if the *β* angle is lower than 30◦, that is, when the obstacle is higher than a quarter of the wheel unit height or, equivalently, half the wheel radius:

$$h\_o > h\_{du}/4 \tag{19}$$

$$h\_o > r\_w/2 \tag{20}$$

Moreover, this discontinuity is further reduced on driving units by the fact that Epi.q robots have different velocities when moving on wheels and on legs, even though the gear-motors continue to rotate at the same speed, as will be explained in Section 3.
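Equation 18 together with Equations 19–20 gives a simple numeric test for motion smoothness; the following sketch (ours, with a sample wheel radius) checks the 30° threshold:

```python
import math

def beta_angle_deg(h_o: float, r_w: float) -> float:
    """Beta angle of Eq. (18) for obstacle height h_o and wheel radius r_w."""
    return math.degrees(math.asin(1 - h_o / r_w))

# Eqs. (19)-(20): beta < 30 degrees exactly when h_o > r_w / 2
r_w = 0.06                              # sample wheel radius (ours)
assert beta_angle_deg(0.04, r_w) < 30   # tall obstacle: wheel unit is smoother
assert beta_angle_deg(0.02, r_w) > 30   # low obstacle: single wheel comparable
```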

#### **3. Driving unit**

This section focuses on the driving unit. The driving unit is a three-legged wheel unit having three radially located wheels, mounted at the end of each spoke. The driving units, housing the transmission system, control robot locomotion.

The driving unit concept stems from the idea that a robot can passively modify its locomotion, from rolling on wheels to stepping on rotating legs, simply according to local friction and dynamic conditions. In fact, the driving unit is designed to have a limit torque that triggers the different locomotion modes: if the torque required for moving on wheels exceeds the torque required for moving on legs, the robot will change its locomotion accordingly, from


rolling on wheels to stepping on legs and vice versa. Thus only one motor per driving unit is required both for wheeled and legged locomotion.

#### **3.1 Driving unit kinematic analysis**

Considering the driving unit as a planar mechanism, angularly it has two degrees of freedom: the angular position of the driving unit frame and the angular position of the wheels. Actually,

Fig. 5. Driving unit scheme

the transmission system links the input shaft angular velocity *ω<sup>i</sup>* with both the angular velocity of the driving unit frame Ω and the angular velocity of the wheels *ωw*, which is the same for all three wheels since the transmission system has the same gear ratio along each leg. Considering an observer placed on the driving unit frame, the transmission system is seen as an ordinary gearing, therefore the gear ratio (with sign) of the driving unit transmission system *kts* can be easily expressed as follows:

$$\frac{\omega\_w - \Omega}{\omega\_i - \Omega} = k\_{ts} \tag{21}$$

and making *ω<sup>i</sup>* explicit, it becomes:

$$
\omega\_i = \frac{1}{k\_{ts}} \,\omega\_w + \frac{k\_{ts} - 1}{k\_{ts}} \,\Omega \tag{22}
$$

When the robot is moving on wheels, *advancing mode*, the robot weight and the contact between wheels and ground constrain driving unit angular position.

If the robot is moving on a flat ground, the driving unit angular velocity is null:

$$
\Omega = 0 \tag{23}
$$

therefore Equations 21 and 23 lead to identify the velocity ratio *iad* and the driving unit linear velocity *vad*, shown in Figure 6, as follows:

$$i\_{ad} = \left. \frac{\omega\_w}{\omega\_i} \right|\_{\Omega=0} = k\_{ts} \tag{24}$$


$$
v\_{ad} = \omega\_w \cdot r\_w = \omega\_i \cdot i\_{ad} \cdot r\_w = \omega\_i \cdot k\_{ts} \cdot r\_w \tag{25}
$$

where *rw* is wheel radius.

When the robot bumps against an obstacle, if the local friction between the front wheel and the obstacle is sufficient to stop the wheel, the driving unit starts to rotate around the stopped wheel center, allowing the robot to climb over the obstacle (*automatic climbing mode*). In this occurrence the wheel angular velocity is null:

$$
\omega\_w = 0\tag{26}
$$

and consequently, from Equations 21 and 26, the velocity ratio *iac* and the driving unit linear velocity *vac*, shown in Figure 6, are respectively:

Fig. 6. Driving unit linear velocity during advancing mode, on the left, and automatic climbing mode, on the right

$$i\_{ac} = \left. \frac{\Omega}{\omega\_i} \right|\_{\omega\_w = 0} = \frac{k\_{ts}}{k\_{ts} - 1} \tag{27}$$

$$
v\_{ac} = \Omega \cdot l\_l = \omega\_i \cdot i\_{ac} \cdot l\_l = \omega\_i \cdot \frac{k\_{ts}}{k\_{ts} - 1} \cdot l\_l \tag{28}
$$

where *ll* is the length of the driving unit leg.

Finally, taking into account Equations 24 and 27, it is possible to rewrite Equation 22 as follows:

$$
\omega\_i = \frac{\omega\_w}{i\_{ad}} + \frac{\Omega}{i\_{ac}} \tag{29}
$$
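Equations 24–29 can be verified numerically; the sketch below (ours, with an arbitrary gear ratio and input speed) computes both operating modes and checks Equation 29 in each limit case:

```python
def driving_unit_velocities(omega_i, k_ts, r_w, l_l):
    """Velocity ratios and linear velocities of the driving unit in
    advancing and automatic climbing mode, Eqs. (24)-(28)."""
    i_ad = k_ts                      # Eq. (24), Omega = 0
    v_ad = omega_i * i_ad * r_w      # Eq. (25)
    i_ac = k_ts / (k_ts - 1)         # Eq. (27), omega_w = 0
    v_ac = omega_i * i_ac * l_l      # Eq. (28)
    return v_ad, v_ac

# Eq. (29): omega_i = omega_w / i_ad + Omega / i_ac, checked in both limits
k_ts, omega_i = 1.5, 10.0            # sample values (ours)
omega_w = omega_i * k_ts             # advancing mode, Omega = 0
assert abs(omega_w / k_ts - omega_i) < 1e-9
Omega = omega_i * k_ts / (k_ts - 1)  # automatic climbing mode, omega_w = 0
assert abs(Omega * (k_ts - 1) / k_ts - omega_i) < 1e-9
```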

#### **3.2 Driving unit design**

During the design phase it is important to establish the correct driving unit parameters; for this reason, some preliminary considerations can be helpful.

The locomotion transition between wheeled and legged motion is triggered only by the driving unit torque demand; therefore, since the driving unit motors must rotate in the same direction both for advancing mode and for automatic climbing mode, the driving unit will work properly only if the velocity ratios *iad* and *iac* have the same sign. Equations 24 and 27 lead to identify a lower limit value for the driving unit gear ratio:

$$k\_{\rm ts} > 1\tag{30}$$

Consequently, a suitable transmission system has a gear ratio *kts* positive and greater than one: if, for example, the chosen transmission system is made only of external toothed gears, this condition leads to choosing an odd number of gears with appropriate gear radii.

A second consideration regards robot motion continuity during the locomotion transition.


The ratio between driving unit linear velocity during advancing mode and automatic climbing mode, considering only the component parallel to the ground, can be identified by a coefficient *β* that, from Equations 25 and 28, can be expressed as:

$$\beta = \frac{v\_{ac} \cdot \cos(60^\circ)}{v\_{ad}} = \frac{1}{2(k\_{ts} - 1)} \cdot \frac{l\_l}{r\_w} \tag{31}$$

Therefore the *β* coefficient contains information regarding motion continuity: if this value is close to unity, motion continuity will be preserved.

A third consideration takes into account the driving unit application. Considering driving units with similar overall dimensions, different capabilities can be obtained by varying the *rw*/*ll* parameter, as shown in Figure 7: if the *rw*/*ll* value decreases, the robot will be more oriented towards legged locomotion and will be able to climb over higher obstacles; otherwise the robot will be more oriented towards wheeled locomotion, with wheels that better protect the driving unit from shocks caused by contact with obstacles. The highest limit value for the

Fig. 7. Driving units with different *rw*/*ll* ratios, increasing value from left to right

*rw*/*ll* parameter corresponds to the condition in which the driving unit wheels are at the interference limit:

$$2r\_w = 2l\_l \cdot \cos(30^\circ) \tag{32}$$

therefore the *rw*/*ll* value must be chosen according to the robot application and must always be smaller than:

$$\frac{r\_w}{l\_l} < \frac{\sqrt{3}}{2} \tag{33}$$

Once the robot specifications are fixed and, consequently, the *rw*/*ll* and *β* parameters are chosen, Equation 31 identifies a first-attempt value for the driving unit gear ratio:

$$k\_{\rm ts} = \frac{1}{2\beta} \cdot \frac{l\_l}{r\_w} + 1\tag{34}$$

At this step it still remains to verify the predicted transition conditions between advancing mode and automatic climbing mode; if these conditions are not satisfactory, it is necessary to relax some robot specifications.
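Equations 30–34 turn into a small design helper; the sketch below (ours, with sample parameter values) returns a first-attempt gear ratio and enforces the two constraints:

```python
import math

def first_attempt_gear_ratio(beta: float, rw_over_ll: float) -> float:
    """First-attempt transmission gear ratio k_ts from Eq. (34), given the
    continuity coefficient beta of Eq. (31) and the r_w / l_l ratio.
    Enforces Eqs. (30) and (33)."""
    if not rw_over_ll < math.sqrt(3) / 2:
        raise ValueError("wheels would interfere, Eq. (33)")
    k_ts = 1 / (2 * beta) / rw_over_ll + 1
    assert k_ts > 1     # Eq. (30): velocity ratios keep the same sign
    return k_ts

# beta close to 1 preserves motion continuity during the transition
print(first_attempt_gear_ratio(beta=1.0, rw_over_ll=0.5))  # -> 2.0
```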

When the gear ratio *kts* and the driving unit kinematic chain are chosen, many possible combinations of mechanical components still remain to be identified: a further suggestion is to choose the gearing that best reduces the risk of interference between the driving unit frame and obstacles.

Finally, it is necessary to identify a scale factor, which depends on the robot application field; the driving unit geometry is then completely defined.

| Nomenclature | Radius | Rot. speed | Label |
|---|---|---|---|
| Chassis | – | – | 0 |
| Input ring gear | *rr* | *ωr* | 1 |
| Planet carrier | – | Ω | 2 |
| Planet gears | *rp* | *ωp* | 3 |
| Solar gear | *rs* | *ωs* | 4 |
| Sliding solar gear | *rss* | *ωss* | 5 |
| Leg planet gears | *rlp* | *ωl* | 6 |
| Legs | – | *ωl* | 7 |
| Planet pulleys | *rpp* | *ωp* | 8 |
| Toothed belts | – | – | 9 |
| Wheel pulleys | *rwp* | *ωw* | 10 |
| Wheels | *rw* | *ωw* | 11 |

Table 1. Nomenclature of Epi.q-1 driving unit


#### **3.3 Epi.q-1 driving unit**

The Epi.q-1 driving unit, as shown in Figure 8, is mainly composed of: an input ring gear (1) (directly linked to a gear-motor), a planet carrier (2) (rotationally free with respect to the robot chassis by means of a bearing), three planet gears (3), a solar gear (4), a sliding solar gear (5), three leg planet gears (6), three legs (7), three planet pulleys (8), three belts (9), three wheel pulleys (10), three wheels (11), and an axial device (12) linked to a mini-motor (13). The planet pulleys (8) are always rigidly connected with the planet gears (3), the wheel pulleys (10) with the wheels (11), and the leg planet gears (6) with the legs (7). An axial device controls the axial position of the

Fig. 8. Functional scheme of the Epi.q-1 driving unit, only one arm is represented

sliding solar gear (5) in order to alternatively link the leg planet gear (6) with the planet carrier (2) or with the solar gear (4), as shown in Figure 9.

The operative conditions of the transmission system can be described using some kinematic equations: some of them are valid in every operative condition, while three represent the


meshing conditions in the epicyclic gearing:

$$\frac{\omega\_p - \Omega}{\omega\_r - \Omega} = + \frac{r\_r}{r\_p} = k\_{e1} \tag{35}$$

$$\frac{\omega\_s - \Omega}{\omega\_p - \Omega} = -\frac{r\_p}{r\_s} = k\_{e2} \tag{36}$$

$$\frac{\omega\_l - \Omega}{\omega\_{ss} - \Omega} = -\frac{r\_{ss}}{r\_{lp}} = k\_{el} \tag{37}$$

while a fourth one describes the belt transmission system:

$$\frac{\omega\_w - \omega\_l}{\omega\_p - \omega\_l} = + \frac{r\_{pp}}{r\_{wp}} = k\_b \tag{38}$$

Moreover, the gear ratios *ke*1 and *ke*2 are linked by geometrical constraints:

$$r\_r = r\_s + 2r\_p$$

$$\frac{r\_r}{r\_p} = \frac{r\_s}{r\_p} + 2$$

therefore:

$$k\_{e2} = \frac{1}{2 - k\_{e1}} \tag{39}$$

Other equations, describing the physical constraints introduced by the sliding solar gear meshing conditions and by the robot-terrain contact, are uniquely determined by the robot operative conditions; these equations are introduced in the following.
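Equations 35, 36 and 39 are easy to sanity-check from the gear radii; a minimal sketch (ours, with arbitrary radii) follows:

```python
def epicyclic_ratios(r_s: float, r_p: float):
    """Gear ratios of the epicyclic stage from the solar and planet radii,
    using the geometric constraint r_r = r_s + 2 r_p; verifies Eq. (39)."""
    r_r = r_s + 2 * r_p
    k_e1 = r_r / r_p                             # Eq. (35)
    k_e2 = -r_p / r_s                            # Eq. (36)
    assert abs(k_e2 - 1 / (2 - k_e1)) < 1e-12    # Eq. (39)
    return k_e1, k_e2

print(epicyclic_ratios(r_s=30.0, r_p=15.0))  # -> (4.0, -0.5)
```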

$$v\_a = \omega\_w \cdot r\_w = \omega\_r \cdot i\_{ad} \cdot r\_w = \omega\_m \cdot k\_{e1} \cdot k\_b \cdot r\_w \tag{44}$$

where *ω<sup>m</sup>* is the angular velocity of the gear-motor output shaft, directly linked to the input ring gear.

Comparing Equations 43 and 44 with the generic Equations 24 and 25, a relationship between gear ratios can be established:

$$k\_{ts} = k\_{e1} \cdot k\_b \tag{45}$$

*Summary:* during the advancing mode the sliding solar gear is coupled with the planet carrier; the motion is transferred from the gear-motor to the wheels; the planet carrier is free to swing, reducing in this way robot pitching.

#### **Automatic climbing mode**

In automatic climbing mode, since the sliding solar gear is still rigidly connected with the planet carrier, the robot is still moving with its legs rigidly connected to the planet carrier, as previously described for the advancing mode, therefore Equations 40 and 41 are still valid:

$$\omega\_{ss} = \Omega \qquad \omega\_l = \Omega$$

The planet carrier is still free to rotate around its axis but, in automatic climbing mode, the local friction between the front wheel and the obstacle is sufficient to block the wheel in contact with the obstacle, therefore:

$$\omega\_w = 0 \tag{46}$$

Equations from 35 to 41, together with 46, allow the planet carrier angular speed to be evaluated:

$$\Omega = \frac{k\_{e1} \cdot k\_b}{k\_{e1} \cdot k\_b - 1} \cdot \omega\_m \tag{47}$$

thus the automatic climbing velocity ratio *iac* and the driving unit velocity in automatic climbing mode *vac* can be expressed as follows:

$$i\_{ac} = \left. \frac{\Omega}{\omega\_m} \right|\_{\omega\_w = 0} = \frac{k\_{e1} \cdot k\_b}{k\_{e1} \cdot k\_b - 1} \tag{48}$$

$$v\_{ac} = \Omega \cdot l\_l = \omega\_m \cdot \frac{k\_{e1} \cdot k\_b}{k\_{e1} \cdot k\_b - 1} \cdot l\_l \tag{49}$$

which, compared with Equations 27 and 28, lead to:

$$k\_{ts} = k\_{e1} \cdot k\_b \tag{50}$$

as expected, confirming Equation 45.

*Summary:* during the automatic climbing mode the sliding solar gear is coupled with the planet carrier; the front wheel in contact with the obstacle is stopped by the friction forces due to the contact between wheel and obstacle; the whole driving unit rotates around the stopped wheel, traversing the step.

Fig. 9. Driving unit configurations for different operative modes: advancing and automatic climbing modes (on the left); changing configuration mode (in the middle); rotating leg mode (on the right)

#### **3.3.1 Advancing & automatic climbing modes**

During advancing mode the robot weight and the contact between bottom wheels and ground constrain the driving unit angular position. When the robot bumps against an obstacle, if the local friction between the front wheel and the obstacle is sufficient to stop the wheel, the driving unit starts to rotate around that wheel (automatic climbing mode), allowing the robot to climb over the obstacle. The robot passively changes its locomotion simply according to the torque required.

In both advancing mode and automatic climbing mode, the sliding solar gear (5) is prismatically coupled with the planet carrier (2), so that a relative rotation between them is hindered, as shown in Figure 9 on the left. The sliding solar gear (5) is always meshed with the leg planet gears (6) in order to prevent a relative rotation between the legs (7) and the planet carrier (2); in this way the legs (7) are locked to the planet carrier (2) in a prefixed position.

#### **Advancing mode**

In advancing mode the robot moves with its legs rigidly connected to the planet carrier, since the sliding solar gear is coupled with the planet carrier:

$$
\omega\_{\rm ss} = \Omega \tag{40}
$$

and it meshes with the leg planet gears, therefore:

$$
\omega\_l = \Omega \tag{41}
$$

The planet carrier is free to rotate around its axis, but the driving unit balance and the contact between wheels and ground constrain its angular position. In the hypothesis of locomotion on flat ground, the contact between wheels and ground hinders planet carrier rotation:

$$
\Omega = 0 \tag{42}
$$

Equations 35 to 42 identify the advancing velocity ratio *iad* and the robot speed in advancing mode *vad* as functions of the gear-motor rotation speed:

$$i\_{ad} = \left. \frac{\omega\_w}{\omega\_m} \right|\_{\Omega=0} = k\_{\varepsilon 1} \cdot k\_b \tag{43}$$


$$
v\_{ad} = \omega\_w \cdot r\_w = \omega\_m \cdot i\_{ad} \cdot r\_w = \omega\_m \cdot k\_{\varepsilon 1} \cdot k\_b \cdot r\_w \tag{44}
$$

where *ω<sup>m</sup>* is the angular velocity of the gear-motor output shaft, directly linked to the input ring gear.

Comparing Equations 43 and 44 with the generic Equations 24 and 25, a relationship between gear ratios can be established:

$$k\_{ts} = k\_{\varepsilon 1} \cdot k\_{b} \tag{45}$$

*Summary:* during the advancing mode the sliding solar gear is coupled with the planet carrier; the motion is transferred from the gear-motor to the wheels; the planet carrier is free to swing, reducing this way robot pitching.
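The advancing-mode chain (Equations 43 and 44) can be sketched numerically. The gear ratios, wheel radius and motor speed below are illustrative placeholders, not the actual Epi.q-1 values:

```python
# Advancing mode (Eqs. 43-44): with the planet carrier locked (Omega = 0),
# the wheel speed is a fixed multiple of the gear-motor speed.
# k_eps1 and k_b are illustrative values, not the real Epi.q-1 ratios.

k_eps1 = -0.5   # first meshing ratio (negative: external gear pair reverses direction)
k_b = -0.8      # second-stage ratio (illustrative)
r_w = 0.035     # wheel radius [m] (assumed)

i_ad = k_eps1 * k_b          # Eq. 43: i_ad = omega_w / omega_m at Omega = 0
omega_m = 6.28               # gear-motor speed [rad/s] (about 60 rpm)

omega_w = i_ad * omega_m     # wheel angular speed
v_ad = omega_w * r_w         # Eq. 44: robot speed in advancing mode

print(f"i_ad = {i_ad:.2f}, v_ad = {v_ad * 1000:.1f} mm/s")
```

With these placeholder values the wheel turns at 0.4 times the gear-motor speed; substituting the real ratios of Equations 35 to 38 would give the prototype's actual speed.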

#### **Automatic climbing mode**

In automatic climbing mode the sliding solar gear is still rigidly connected with the planet carrier, so the robot is still moving with its legs rigidly connected to the planet carrier, as previously described for the advancing mode; therefore, Equations 40 and 41 still hold:

$$
\omega\_{\rm ss} = \Omega
$$

$$
\omega\_{l} = \Omega
$$

The planet carrier is also still free to rotate around its axis but, in automatic climbing mode, the local friction between the front wheel and the obstacle is sufficient to block the wheel in contact with the obstacle; therefore:

$$
\omega\_w = 0\tag{46}
$$

Equations 35 to 41, together with Equation 46, allow the planet carrier angular speed to be evaluated:

$$
\Omega = \frac{k\_{\varepsilon 1} \cdot k\_b}{k\_{\varepsilon 1} \cdot k\_b - 1} \cdot \omega\_m \tag{47}
$$

thus the automatic climbing velocity ratio *iac* and the driving unit velocity in automatic climbing mode *vac* can be expressed as follows:

$$i\_{\rm ac} = \left. \frac{\Omega}{\omega\_{\rm m}} \right|\_{\omega\_{\rm w} = 0} = \frac{k\_{\varepsilon 1} \cdot k\_{b}}{k\_{\varepsilon 1} \cdot k\_{b} - 1} \tag{48}$$

$$
v\_{ac} = \Omega \cdot l\_l = \omega\_m \cdot \frac{k\_{\varepsilon 1} \cdot k\_b}{k\_{\varepsilon 1} \cdot k\_b - 1} \cdot l\_l \tag{49}
$$

that compared with Equations 27 and 28 lead to:

$$k\_{ts} = k\_{\varepsilon 1} \cdot k\_{b} \tag{50}$$

as expected, confirming Equation 45.

*Summary:* during the automatic climbing mode the sliding solar gear is coupled with the planet carrier; the front wheel in contact with the obstacle is stopped by the friction forces due to the contact between wheel and obstacle themselves; the whole driving unit rotates around the stopped wheel, traversing the step.
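The same placeholder ratios illustrate how blocking the wheel (Equation 46) redirects the motion to the planet carrier (Equations 47 to 49); the values are again assumptions, not the real Epi.q-1 ratios:

```python
# Automatic climbing mode (Eqs. 47-49): with the wheel blocked (omega_w = 0),
# the planet carrier speed Omega becomes a function of the gear-motor speed.
# k_eps1, k_b and l_l are illustrative values, not the real Epi.q-1 data.

k_eps1 = -0.5
k_b = -0.8
l_l = 0.06          # leg length: wheel-centre distance from carrier axis [m] (assumed)
omega_m = 6.28      # gear-motor speed [rad/s]

k = k_eps1 * k_b
i_ac = k / (k - 1.0)        # Eq. 48: Omega / omega_m at omega_w = 0
Omega = i_ac * omega_m      # Eq. 47: carrier speed while stepping over the obstacle
v_ac = Omega * l_l          # Eq. 49: driving-unit velocity in climbing mode

# The sign of i_ac only reflects the chosen sign convention for the ratios.
print(f"i_ac = {i_ac:.3f}, Omega = {Omega:.2f} rad/s")
```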


#### **3.3.2 Changing configuration mode**

The driving unit can also modify its geometry from a closed configuration to an open one, as shown in Figure 10. The closed configuration is suitable for reaching restricted spaces, while the open one is suited to getting over obstacles.

In *changing configuration mode* the sliding solar gear (5) is no longer engaged in the planet carrier (2); instead, it is shifted towards the solar gear (4) and coupled with it, as shown in Figure 9 (middle), therefore:

$$
\omega\_{\rm ss} = \omega\_{\rm s} \tag{51}
$$

In order to avoid gearing lability this operation is split into two steps: first the sliding solar gear (5) couples with the solar gear (4), then the sliding solar gear (5) is rotationally disconnected from the planet carrier (2). A slow solar gear rotation and some elastic elements allow a correct axial engagement between the sliding solar gear (5) and solar gear (4) fittings. During the changing configuration mode the planet carrier angular speed is null:

$$
\Omega = 0\tag{52}
$$

Therefore Equations 35 to 38, together with Equations 51 and 52, allow the changing configuration velocity ratio to be evaluated as:

$$i\_{cc} = \frac{\omega\_{l}}{\omega\_{m}} = k\_{\varepsilon 1} \cdot k\_{\varepsilon 2} \cdot k\_{\varepsilon l} = \frac{k\_{\varepsilon 1}}{2 - k\_{\varepsilon 1}} \tag{53}$$

The legs' angular excursion is limited by suitable mechanical stops placed on the planet carrier. A relative slippage between wheels and ground can happen during the changing configuration operations.

*Summary:* during the changing configuration mode the sliding solar gear is shifted and coupled with the solar gear; the planet carrier is blocked; the motion is transferred from the gear-motor to the legs, which rotate.

Fig. 10. Driving unit in closed and in open configuration
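Evaluating Equation 53 exactly as printed gives the leg rotation speed during reconfiguration; the gear ratio below is an illustrative value only:

```python
# Changing configuration mode (Eq. 53 as printed): with the sliding solar
# gear coupled to the solar gear and the carrier at rest, the legs rotate
# at i_cc times the gear-motor speed. k_eps1 is an illustrative value.

k_eps1 = -0.5
i_cc = k_eps1 / (2.0 - k_eps1)   # Eq. 53: omega_l / omega_m
omega_m = 6.28                   # gear-motor speed [rad/s]
omega_l = i_cc * omega_m         # leg rotation speed while reconfiguring

print(f"i_cc = {i_cc:.3f}")
```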


#### **3.3.3 Rotating leg mode**

When the robot cannot climb an obstacle in automatic climbing mode, due for example to a low friction coefficient between wheel and obstacle, it is possible to transform the driving unit into a whole rigid body which rotates around its axis, *rotating leg mode*.

Figure 9 on the right shows gearing configuration for this operative mode: the sliding solar gear (5) is shifted into an intermediate position in order to be simultaneously engaged with both the solar gear (4) and the planet carrier (2):

$$
\omega\_{\rm ss} = \omega\_{\rm s} = \omega\_{\rm l} = \Omega \tag{54}
$$

Equations 35 to 38, together with Equation 54, give the driving unit angular speed as follows:

$$
\Omega = \omega\_m \tag{55}
$$

Actually, the driving unit acts as a rigid body and the motor rotation provides the rotation of the whole driving unit, as if it were a single rotating body.

Wheel wear is a possible drawback of this locomotion when slipping conditions between wheel and obstacle occur.

*Summary:* during the rotating leg mode the sliding solar gear (5) is shifted into an intermediate position; the whole driving unit rotates as a rigid body around its central axis.

#### **3.4 Epi.q-2 driving unit**

The driving unit implemented in the Epi.q-2 prototype is shown in Figure 11. With respect to the Epi.q-1 version, the changing configuration ability has been removed in order to simplify the structure, to increase gearing robustness and efficiency, and to reduce overall weight. The driving unit, as shown in Figure 11, consists of: an input solar gear (1), a planet carrier (2) (rotationally free with respect to the robot frame by means of bearings), three first planet gears (3), three second planet gears (4) and three wheels (5).

Fig. 11. Epi.q-2 driving unit

| Nomenclature | Radius | Rot. speed | Label |
|---|---|---|---|
| Frame | – | – | 0 |
| Input solar gear | *r<sub>s</sub>* | *ω<sub>s</sub>* | 1 |
| Planet carrier | *l<sub>l</sub>* | *Ω* | 2 |
| First planet gear | *r<sub>fp</sub>* | *ω<sub>fp</sub>* | 3 |
| Second planet gear | *r<sub>sp</sub>* | *ω<sub>w</sub>* | 4 |
| Wheel | *r<sub>w</sub>* | *ω<sub>w</sub>* | 5 |

Table 2. Nomenclature of Epi.q-2 driving unit

The transmission system can be described using some kinematic equations; the meshing conditions in the epicyclic gearing can be represented as follows:

$$\frac{\omega\_{fp} - \Omega}{\omega\_{\rm s} - \Omega} = -\frac{r\_{\rm s}}{r\_{fp}} = k\_{\rm \varepsilon 1} \tag{56}$$

$$\frac{\omega\_w - \Omega}{\omega\_{fp} - \Omega} = -\frac{r\_{fp}}{r\_{sp}} = k\_{\varepsilon 2} \tag{57}$$
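Equations 56 and 57 form a two-stage chain from the solar gear to the wheel. The short sketch below propagates them for an arbitrary carrier speed, using assumed radii (not the actual Epi.q-2 dimensions), and checks that locking the carrier reproduces the product of the two ratios:

```python
# Epi.q-2 meshing conditions (Eqs. 56-57), written as a function of the
# carrier speed Omega and the solar gear speed omega_s.
# Radii are illustrative, not the actual Epi.q-2 dimensions.

r_s, r_fp, r_sp = 0.010, 0.015, 0.012   # gear radii [m] (assumed)

k_eps1 = -r_s / r_fp    # Eq. 56: (omega_fp - Omega) / (omega_s - Omega)
k_eps2 = -r_fp / r_sp   # Eq. 57: (omega_w - Omega) / (omega_fp - Omega)

def wheel_speed(omega_s: float, Omega: float) -> float:
    """Propagate the meshing conditions: solar -> first planet -> wheel."""
    omega_fp = Omega + k_eps1 * (omega_s - Omega)   # Eq. 56 rearranged
    return Omega + k_eps2 * (omega_fp - Omega)      # Eq. 57 rearranged

# With the carrier locked (Omega = 0) the chain collapses to
# omega_w = k_eps1 * k_eps2 * omega_s, i.e. the advancing-mode ratio.
omega_s = 5.0
assert abs(wheel_speed(omega_s, 0.0) - k_eps1 * k_eps2 * omega_s) < 1e-12
```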


During advancing mode the robot weight and the contact between bottom wheels and ground constrain driving unit angular position.

$$
\Omega = 0\tag{58}
$$

In this occurrence the velocity ratio *iad* and the robot linear velocity *vad*, from Equations 56, 57 and 58, are respectively:

$$i\_{ad} = \left. \frac{\omega\_w}{\omega\_s} \right|\_{\Omega=0} = k\_{\varepsilon 1} \cdot k\_{\varepsilon 2} \tag{59}$$

$$
v\_{ad} = \omega\_w \cdot r\_w = \omega\_s \cdot i\_{ad} \cdot r\_w = \omega\_m \cdot k\_r \cdot k\_{ts} \cdot r\_w \tag{60}
$$

where *kr* is the reducer gearing ratio, located between driving unit input shaft and motor, and *ω<sup>m</sup>* is the angular velocity of the motor output shaft.

Comparing Equations 59 and 60 with the generic Equations 24 and 25, it is possible to establish a relationship between gear ratios:

$$k\_{\rm ts} = k\_{\rm \varepsilon 1} \cdot k\_{\rm \varepsilon 2} \tag{61}$$


When the robot bumps against an obstacle, if the local friction between the front wheel and the obstacle is sufficient to stop that wheel:

$$
\omega\_w = 0\tag{62}
$$

then the second planet gear rotation is hindered and consequently the driving unit rotation starts around the stopped wheel, allowing the robot to climb over the obstacle.

In this occurrence the gear ratio *iac* and the robot linear velocity *vac* are respectively:

$$i\_{ac} = \left. \frac{\Omega}{\omega\_{s}} \right|\_{\omega\_w = 0} = \frac{k\_{\varepsilon 1} \cdot k\_{\varepsilon 2}}{k\_{\varepsilon 1} \cdot k\_{\varepsilon 2} - 1} \tag{63}$$

$$
v\_{ac} = \Omega \cdot l\_l = \omega\_s \cdot i\_{ac} \cdot l\_l = \omega\_s \cdot \frac{k\_{\varepsilon 1} \cdot k\_{\varepsilon 2}}{k\_{\varepsilon 1} \cdot k\_{\varepsilon 2} - 1} \cdot l\_l \tag{64}
$$

that compared with Equations 27 and 28 lead to:

$$k\_{ts} = k\_{\varepsilon 1} \cdot k\_{\varepsilon 2} \tag{65}$$

as expected, confirming Equation 61.
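The climbing relations can be cross-checked against the meshing chain: solving Equations 56 and 57 with the wheel blocked must reproduce Equation 63. The ratios below are placeholders, not the actual Epi.q-2 values:

```python
# Automatic climbing for Epi.q-2 (Eqs. 62-64): setting omega_w = 0 in the
# meshing chain and solving for the carrier speed Omega reproduces Eq. 63.
# The ratios are illustrative placeholders.

k_eps1, k_eps2 = -0.6, -0.7
k = k_eps1 * k_eps2

omega_s = 5.0                     # solar gear speed [rad/s]
Omega = k * omega_s / (k - 1.0)   # Eq. 63: Omega / omega_s at omega_w = 0

# Cross-check: with this Omega, the wheel speed from Eqs. 56-57 is zero.
omega_fp = Omega + k_eps1 * (omega_s - Omega)
omega_w = Omega + k_eps2 * (omega_fp - Omega)
assert abs(omega_w) < 1e-12

l_l = 0.06                        # leg length [m] (assumed)
v_ac = Omega * l_l                # Eq. 64: driving-unit velocity
```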

#### **4. Tests**

Epi.q robots move well on different terrains: from a structured environment, with flat surface and steps, to an unstructured one, with uneven ground and obstacles different in dimension and shape.

Fig. 12. Epi.q-1 in advancing mode, on uneven ground

When the robot is moving on wheels in advancing mode, the robot weight and the contact between the bottom wheels and the ground constrain the driving unit angular position, as shown in Figure 12; actually, the driving units are axially joined to the forecarriage frame but rotationally free by means of bearings. When the robot bumps against an obstacle, if the local friction between the front wheel and the obstacle is sufficient to stop that wheel, the driving unit starts to rotate around the stopped wheel center, allowing the robot to climb over the obstacle in automatic climbing mode, as shown in Figure 13. The transition between wheeled and legged locomotion is passively triggered: if the torque required for moving on wheels is higher than


the torque required for moving on legs, the robot would change its locomotion from rolling on wheels to stepping on legs, and vice versa.

Experimental tests on Epi.q prototypes were conducted in order to assess their performance; tests and results are reported in the following sections.

Fig. 13. Epi.q-2 negotiating an obstacle

#### **4.1 Step negotiating aptitude**

Fig. 14. Driving unit approaches three steps, different in height

The purpose of the test is to assess the ability of Epi.q robots to negotiate obstacles of different heights. The robots were driven close to a step on a flat surface. Analyzing the experimental tests, it was noticed that three different cases can occur when the rotation of the driving unit starts, as shown in Figure 14: the top wheel leans against the upper surface of the step; the top wheel leans against the front surface of the step; or an intermediate case between the two. In the first case it is always possible to overcome the step, even without initial velocity. Actually, when the rotation of the driving unit starts, the top wheel is above the step and the upper leg can lift up the robot, obviously only if the top wheel does not skid. In the second case it is never possible to overcome the step, because the rear wheel units are idle. In an intermediate case between the previous ones, the possibility of overcoming a step depends on the static friction coefficient between wheels and step, on the tread pattern of the tire and on the approach speed.

#### **4.1.1 Epi.q-1 test results**

The experimental tests conducted on the Epi.q-1 prototype have shown that the maximum step it can negotiate, in favourable friction conditions, is 90 mm in height, equivalent to 72% of the driving unit height.


When the driving unit was in closed configuration, Epi.q-1 crossed almost all obstacles without interference between the driving unit and the obstacles; in fact, the driving unit was protected by the wheels. When the driving unit was in open configuration, a collision between the driving unit and an obstacle sometimes occurred. In this case the robot overcame the obstacles with a slightly irregular motion, combining the advancing mode and the automatic climbing mode. A step negotiating sequence is represented in Figure 15.

Fig. 15. Epi.q-1 negotiating a step (time stamps have been roughly estimated from a video)

#### **4.1.2 Epi.q-2 test results**

The experimental tests conducted on the Epi.q-2 prototype have shown that the maximum step Epi.q-2 can climb over, in favourable friction conditions, is 110 mm in height, equal to 84% of the driving unit height.

Epi.q-2 was also designed with the aim of reducing the risk of interference with obstacles, even if this condition can sometimes occur. The tests demonstrated that the robot overcomes the obstacle with a slightly irregular motion which combines the advancing mode and the automatic climbing mode.
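The two step-climbing results can be tied together with a quick arithmetic check; the roughly 131 mm Epi.q-2 unit height is back-computed here from the reported percentage and is not stated in the text:

```python
# Reported maximum step heights as a fraction of the driving-unit height.
# Epi.q-1: 90 mm step over a 125 mm unit (open configuration) -> 72%.
epiq1_ratio = 90 / 125
assert round(epiq1_ratio * 100) == 72

# Epi.q-2: a 110 mm step at 84% implies a unit height of about 131 mm
# (back-computed, not quoted from the chapter).
epiq2_unit_height = 110 / 0.84
print(f"Epi.q-2 driving unit height ≈ {epiq2_unit_height:.0f} mm")
```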

The Epi.q-2 prototype was tested on a smooth terrain, the current demand and the speed were

Epi.q Robots 285

**Current Speed** 0.4 A 0.2 m s−<sup>1</sup> 0.5 A 0.5 m s−<sup>1</sup> 0.6 A 0.75 m s−<sup>1</sup>

The Epi.q-2 power source, both for motor and electronics, was a removable 11 V/2200 mA h lithium-ion battery, providing more than 4 hours continuous runtime on one charge, up to

Epi.q-1 weighs almost 2.6 kg and measures 160 mm × 360 mm × 280 mm (height × length × width), with a driving unit that measures 125 mm in height in open configuration and 98 mm

Epi.q-1 can go up and down stairs and climb over obstacles with a maximum height of

On flat ground the maximum speed it can reach is approximately 0.5 m s−1. On a slope, the theoretical maximum value it can drive (maximum gravitational stability margins) is limited to 62◦ when the robot is moving uphill frontwards (or downhill backwards), to 32◦ when it is moving downhill frontwards (or uphill backwards), and to 59◦ when it is driving along a cross-hill (or normal to a downhill). When the robot is driving uphill, the maximum slope

Each driving unit is powered by a *Solarbotics GM17* gear-motor, declared specifications are a no load angular speed of 60 rpm and a maximum torque of almost 1 N m, when it is powered at 12 V. The axial device is powered by a *Faulhaber DC-micromotor* (series 0816) combined with a compatible planetary gearhead (series 08/01). The human operator controls the robot by means of an *Hitec - Laser 4* transmitter, and the radio signal is processed by a *Sabertooth 2X5* driver that provides the motors with the proper voltage.The power source both for motor

Table 3. Epi.q-2 current demand, moving on smooth terrain at different speeds

90 mm, that is 72% of the driving unit height, as shown in Figure 16.

evaluated; the results are collected in Table 3.

6 km of travel.

**5. Technical specifications**

in closed configuration.

**5.1 Epi.q-1 technical specifications**

Fig. 16. Epi.q-1 traversing obstacle and stairs

value is limited to 20◦, due to motor torque limitation.

and electronics is a removable 11 V/2200 mA h.

### **4.2 Motion on inclined surface**

The aim of the test is to assess Epi.q robot ability of moving on inclined surfaces. The robots were driven up a ramp and their behavior was observed.

The robots can drive on a slope either in advancing or in automatic climbing modes, actually there is a limit slope value that triggers the transition between wheeled and legged locomotion. Moreover there is a maximum slope value, that the robot can not overcome. This slope value is limited by motor power, by the disposition of the center of mass of the robot and by the friction coefficient between wheels and ground.

The tests highlighted a great influence of the center of mass disposition. Actually Epi.q robots does not have an active stability system and the traction is provided only by the front wheel units, which are also less loaded going uphill. Therefore, the driving units can loose traction on a slope and the wheels can start skidding.

### **4.2.1 Epi.q-1 test results**

The experimental tests have shown that when Epi.q-1 is moving on an inclined surface the transition between advancing mode and automatic climbing mode is triggered by a 13% slope, if the the driving unit is in open configuration, and by a 9.5% slope, in case of closed configuration. The robot was tested on different surfaces with increasing friction coefficient: plexiglas, paper and plywood. When the robot is moving on plexiglas surface it can reach a slope of 29% in automatic climbing mode, if the slope is steeper the robot starts skidding while it continues to advance up to a maximum slope of 32%. When the robot moves on paper surface, the maximum slope it can reach without skidding is 40% in automatic climbing mode, but it can advance up to 43% slope. Moving on plywood the wheels never skid and the maximum slope it can reach is 45%, dur to motor torque limitations.

#### **4.2.2 Epi.q-2 test results**

The experimental tests have shown that the Epi.q-2 locomotion transition is triggered by a 31% slope. In the experimental campaign Epi.q-2 prototypes has been tested on slopes up to 33%, with friction coefficient *μ<sup>s</sup>* = 0.83. Obviously, when the maximum slope is limited by the friction coefficient, the traction wheels can start skidding without reaching the transition condition.

#### **4.3 Motion on uneven and soft terrains**

The purpose of the test is to assess the Epi.q robot ability of moving on different terrains.

The experimental tests have shown that, when the rotation of the wheels is hindered by the high rolling friction due to the grass or to the ground unevenness, the transition between advancing mode and automatic climbing mode occurs and the robot starts the legged locomotion, as expected.

#### **4.2 Motion on inclined surface**

The aim of this test is to assess the Epi.q robots' ability to move on inclined surfaces. The robots were driven up a ramp and their behavior was observed.

The robots can drive on a slope in either advancing or automatic climbing mode; there is a limit slope value that triggers the transition between wheeled and legged locomotion. Moreover, there is a maximum slope value that the robot cannot overcome, limited by motor power, by the position of the robot's center of mass, and by the friction coefficient between wheels and ground.

The tests highlighted the great influence of the center of mass position. The Epi.q robots do not have an active stability system, and traction is provided only by the front wheel units, which are also less loaded when going uphill. Therefore, the driving units can lose traction on a slope and the wheels can start skidding.

**4.2.1 Epi.q-1 test results**

The experimental tests have shown that, when Epi.q-1 is moving on an inclined surface, the transition between advancing mode and automatic climbing mode is triggered by a 13% slope if the driving unit is in open configuration, and by a 9.5% slope in closed configuration. The robot was tested on different surfaces with increasing friction coefficient: plexiglas, paper and plywood. On plexiglas the robot can reach a slope of 29% in automatic climbing mode; on steeper slopes it starts skidding, although it continues to advance up to a maximum slope of 32%. On paper, the maximum slope it can reach without skidding is 40% in automatic climbing mode, but it can advance up to a 43% slope. On plywood the wheels never skid and the maximum slope it can reach is 45%, due to motor torque limitations.

**4.2.2 Epi.q-2 test results**

The experimental tests have shown that the Epi.q-2 locomotion transition is triggered by a 31% slope. In the experimental campaign, the Epi.q-2 prototype was tested on slopes up to 33%, with friction coefficient *μ<sup>s</sup>* = 0.83. Obviously, when the maximum slope is limited by the friction coefficient, the traction wheels can start skidding without reaching the transition condition.

#### **4.3 Motion on uneven and soft terrains**

The purpose of this test is to assess the Epi.q robots' ability to move on different terrains. The experimental tests have shown that, when the rotation of the wheels is hindered by high rolling friction due to grass or ground unevenness, the transition between advancing mode and automatic climbing mode occurs and the robot starts legged locomotion, as expected.

The Epi.q robots were tested in different scenarios: on uneven terrain with grass, stones, pebbles, earth and irregular trails. In all cases they were able to advance with a motion that alternated between advancing mode and automatic climbing mode; the proportion of automatic climbing mode grew with the terrain unevenness.

#### **4.4 Energy demand**

The purpose of this test is to evaluate the energy demand of the Epi.q robots. Since they use wheels whenever possible and legs only when needed, they should require a small amount of energy.
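The passive wheel-to-leg transition reported in these tests can be summarized in a short sketch. The transition itself is purely mechanical (a limit torque inside the driving unit), so the function below only mimics the observed Epi.q-1 behavior, using the measured transition slopes (13% in open configuration, 9.5% in closed configuration) as thresholds; the function name and interface are illustrative assumptions, not part of the robot's software.

```python
# Sketch of the passive locomotion-mode selection, using the experimentally
# measured Epi.q-1 transition slopes as illustrative thresholds. The real
# transition is mechanical (a limit torque in the driving unit); this
# function only reproduces the observed behavior.

def locomotion_mode(slope_percent, open_configuration=True):
    """Return the expected Epi.q-1 locomotion mode for a given slope.

    slope_percent: slope of the surface, in percent (rise/run * 100).
    open_configuration: True if the driving unit is in open configuration.
    """
    # Measured transition slopes: 13% (open) and 9.5% (closed configuration).
    threshold = 13.0 if open_configuration else 9.5
    if slope_percent < threshold:
        return "advancing mode"        # wheeled locomotion
    return "automatic climbing mode"   # legged locomotion

print(locomotion_mode(10.0, open_configuration=True))   # advancing mode
print(locomotion_mode(10.0, open_configuration=False))  # automatic climbing mode
```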

The Epi.q-2 prototype was tested on smooth terrain; the current demand and the speed were evaluated, and the results are collected in Table 3.


Table 3. Epi.q-2 current demand, moving on smooth terrain at different speeds

The Epi.q-2 power source, both for motor and electronics, was a removable 11 V/2200 mA h lithium-ion battery, providing more than 4 hours continuous runtime on one charge, up to 6 km of travel.
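As a quick sanity check of the quoted battery figures, the stored energy, the implied average power draw and the implied average speed can be computed directly. This is a rough estimate that ignores conversion losses and duty cycle:

```python
# Back-of-the-envelope check of the quoted Epi.q-2 battery figures.
voltage_v = 11.0        # battery voltage
capacity_ah = 2.2       # 2200 mA h
runtime_h = 4.0         # "more than 4 hours continuous runtime"
range_km = 6.0          # "up to 6 km of travel"

energy_wh = voltage_v * capacity_ah                   # stored energy, ~24.2 Wh
avg_power_w = energy_wh / runtime_h                   # implied draw, ~6 W
avg_speed_ms = range_km * 1000 / (runtime_h * 3600)   # implied speed, ~0.42 m/s

print(f"stored energy: {energy_wh:.1f} Wh")
print(f"average power draw: {avg_power_w:.2f} W")
print(f"average speed: {avg_speed_ms:.2f} m/s")
```

The ~6 W average draw is consistent with the "small amount of energy" claim made for the hybrid wheel/leg locomotion concept.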

### **5. Technical specifications**

#### **5.1 Epi.q-1 technical specifications**

Epi.q-1 weighs almost 2.6 kg and measures 160 mm × 360 mm × 280 mm (height × length × width), with a driving unit that measures 125 mm in height in open configuration and 98 mm in closed configuration.

Epi.q-1 can go up and down stairs and climb over obstacles with a maximum height of 90 mm, that is 72% of the driving unit height, as shown in Figure 16.

Fig. 16. Epi.q-1 traversing obstacle and stairs

On flat ground the maximum speed it can reach is approximately 0.5 m s−1. On a slope, the theoretical maximum inclination it can negotiate (set by the gravitational stability margins) is 62◦ when the robot is moving uphill frontwards (or downhill backwards), 32◦ when it is moving downhill frontwards (or uphill backwards), and 59◦ when it is driving along a cross-hill (or normal to a downhill). In practice, when the robot is driving uphill, the maximum slope is limited to 20◦, due to motor torque limitation.
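The gravitational stability margins quoted above follow from simple statics: a rigid vehicle on a slope tips over when the gravity vector through its center of mass passes beyond the edge of the wheel contact polygon, i.e. at θ_max = atan(d/h), where d is the horizontal distance from the center of mass to that edge and h is the center-of-mass height. A minimal sketch, with illustrative dimensions that are assumptions rather than Epi.q-1 data:

```python
# Static tip-over limit of a rigid vehicle on a slope: the vehicle tips when
# the gravity vector through the center of mass crosses the edge of the
# support polygon, i.e. at theta_max = atan(d / h).
import math

def tipover_angle_deg(d_m, h_m):
    """Tip-over angle (degrees) for a CoM at height h_m above the ground and
    horizontal distance d_m from the support-polygon edge (both in metres)."""
    return math.degrees(math.atan2(d_m, h_m))

# Illustrative (assumed) geometry: CoM 60 mm high, 110 mm from the contact line.
print(round(tipover_angle_deg(0.110, 0.060), 1))
```

Different margins in the four directions (uphill, downhill, cross-hill) simply reflect different distances d from the center of mass to each edge of the contact polygon.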

Each driving unit is powered by a *Solarbotics GM17* gear-motor; its declared specifications are a no-load angular speed of 60 rpm and a maximum torque of almost 1 N m when powered at 12 V. The axial device is powered by a *Faulhaber DC-micromotor* (series 0816) combined with a compatible planetary gearhead (series 08/01). The human operator controls the robot by means of a *Hitec - Laser 4* transmitter, and the radio signal is processed by a *Sabertooth 2X5* driver that provides the motors with the proper voltage. The power source, both for motor and electronics, is a removable 11 V/2200 mA h lithium-ion battery.

#### **5.2 Epi.q-2 technical specifications**

Epi.q-2 weighs almost 4 kg and measures 200 mm × 450 mm × 280 mm (height × length × width), with a driving unit that measures 130 mm in height.

Epi.q-2 can go up and down stairs and climb over obstacles with a maximum height of 110 mm, that is 84% of the driving unit height.

On flat ground the maximum speed it can reach is almost 1 m s−1. On a slope, the theoretical maximum inclination it can negotiate (set by the gravitational stability margins) is 70◦ when the robot is moving uphill frontwards (or downhill backwards), 51◦ when it is moving downhill frontwards (or uphill backwards), and 60◦ when it is driving along a cross-hill (or normal to a downhill). In practice, when the robot is driving uphill, the maximum slope is limited to 15◦, due to motor torque limitation.

Fig. 17. Epi.q-2 traversing an obstacle, on uneven ground

Each driving unit is powered by a gear-motor; its declared specifications are a no-load angular speed of about 81 rpm and a maximum torque of almost 0.5 N m when powered at 12 V. The human operator controls the robot by means of a *Zebra 4* transmitter, and the radio signal is processed by a *Sabertooth 2X5* driver that provides the motors with the proper voltage. The power source, both for motor and electronics, is a removable 11 V/2200 mA h lithium-ion battery, providing more than 4 hours of continuous runtime on one charge, up to 6 km of travel.

### **6. Conclusions**

This chapter has dealt with the Epi.q robots, smart mini devices able to move in structured and unstructured environments, to climb over obstacles and to go up and down stairs. These robots do not need to actively sense obstacles in order to climb them; they simply move forward and let their locomotion passively adapt to ground conditions and change accordingly, from rolling on wheels to stepping on legs and vice versa.

Epi.q robots are mainly composed of three parts: a forecarriage, a central body and a rear axle. The forecarriage consists of a box linked to two driving units that, housing the transmission system, control robot locomotion. The rear axle comprises two idle three-legged wheel units with three idle wheels mounted at the end of each spoke. The central body is a metal platform connecting the forecarriage and the rear axle, where a payload can be placed. Two passive revolute joints, mutually perpendicular, link the front and rear parts of the robot: the vertical joint allows robot steering, while the horizontal joint guarantees correct contact between wheels and ground, also in the presence of uneven terrain. Differential steering is implemented on the Epi.q robots, providing both driving and steering functions.

The driving unit is the core of these devices. The Epi.q driving unit concept stems from the idea that a robot can passively modify its locomotion, from rolling on wheels to stepping on legs, simply according to local friction and dynamic conditions. The driving unit is designed to have a limit torque that triggers the change of locomotion: if the torque required for moving on wheels exceeds the torque required for moving on legs, the robot changes its locomotion accordingly, from rolling on wheels to stepping on legs and vice versa. Thus only one motor per driving unit is required for both wheeled and legged locomotion. When the robot is moving on wheels, the robot weight and the contact between wheels and ground constrain the driving unit angular position; but when the robot bumps against an obstacle, if the local friction between the front wheel and the obstacle is sufficient to stop that wheel, the driving unit starts to rotate around the stopped wheel center, allowing the robot to climb over the obstacle. Therefore, wheels are used whenever possible and legs only when needed; consequently these robots require a small amount of energy compared to tracked or legged robots.

Epi.q robots can be successfully employed in many fields: from monitoring and surveillance tasks to intervention in potentially dangerous environments, such as in the presence of radiation, gas or explosives; from rescue operations after catastrophic events to exploration of unknown environments; and in many other fields as well.

### **7. References**

Allen, T. J., Quinn, R. D., Bachmann, R. J. & Ritzman, R. E. (2003). Abstracted biological principles applied with reduced actuation improve mobility of legged vehicles, *Proceedings of the 2003 IEEE/RSJ Int. Conference on Intelligent Robots and Systems*, Las Vegas, Nevada.

Dalvand, M. & Moghadam, M. (2006). Stair climber smart mobile robot (MSRox), *Autonomous Robots* 20(1): 3–14.

Galileo Mobility Instruments & Elbit Systems Ltd (2009). Elbit viper, Robot Magazine. http://www.galileomobility.com/?page\_id=12.

Grand, C., Benamar, F., Plumet, F. & Bidaud, P. (2004). Stability and traction optimization of a reconfigurable wheel-legged robot, *The International Journal of Robotics Research* 23(10–11): 1041–1058.

iRobot (2010). Ground robots - 510 packbot. http://www.irobot.com/gi/ground/510\_PackBot.

Lego Mindstorm (2007). Artic snow cat. http://us.mindstorms.lego.com/en-us/Community/NXTLog/DisplayProject.aspx?id=c7d16dfe-780b-4b19-9aa3-3c0b22065dd5.

Mourikis, A., Trawny, N., Roumeliotis, S., Helmick, D. & Matthies, L. (2007). Autonomous stair climbing for tracked vehicles, *The International Journal of Robotics Research* 26(7): 737–758.

Oderio, R. (2011). *Innovative concepts for stair climbing devices*, PhD thesis, Politecnico di Torino.

Poulakakis, I., Papadopoulos, E. & Buehler, M. (2006). On the stability of the passive dynamics of quadrupedal running with a bounding gait, *The International Journal of Robotics Research* 25(7): 669–687.

Poulakakis, I., Smith, J. & Buehler, M. (2005). Modeling and experiments of untethered quadrupedal running with a bound gait: the scout ii robot, *The International Journal of Robotics Research* 24(4): 239–256.


Quaglia, G., Bruzzone, L., Bozzini, G., Oderio, R. & Razzoli, R. (2011). Epi.q-tg: mobile robot for surveillance, *Industrial Robot: An International Journal* 38(3): 282–291.

Quaglia, G., Maffiodo, D., Franco, W., Appendino, S. & Oderio, R. (2010). The epi.q-1 hybrid mobile robot, *The International Journal of Robotics Research* 29(1).

Quinn, R. D., Nelson, G. M., Bachmann, R. J., Kingsley, D. A., Offi, J. T., Allen, T. J. & Ritzmann, R. E. (2003). Parallel complementary strategies for implementing biological principles into mobile robots, *The International Journal of Robotics Research* 22(3–4): 169–186.

Saranli, U., Buehler, M. & Koditschek, D. (2001). Rhex: A simple and highly mobile hexapod robot, *The International Journal of Robotics Research* 20(7): 616–631.

Saranli, U., Rizzi, A. & Koditschek, D. (2004). Model-based dynamic self-righting maneuvers for a hexapedal robot, *The International Journal of Robotics Research* 23(9): 903–918.

Schroer, R. T., Boggess, M. J., Bachmann, R. J., Quinn, R. D. & Ritzmann, R. E. (2004). Comparing cockroach and whegs robot body motions, *Proceedings of ICRA 2004 - International Conference on Robotics and Automation*, Las Vegas, Nevada.

Siegwart, R., Lauria, M., Maeusli, P. & Van Winnendael, M. (1998). Design and implementation of an innovative micro-rover, *Proc. of Robotics 98, the 3rd Conference and Exposition on Robotics in Challenging Environments*, Albuquerque, New Mexico.

## **Part 4**

**Localization and Navigation**

**14**

## **Dynamic Modeling and Power Modeling of Robotic Skid-Steered Wheeled Vehicles**

Wei Yu, Emmanuel Collins and Oscar Chuy *Florida State University U.S.A*

### **1. Introduction**

Dynamic models and power models of autonomous ground vehicles are needed to enable realistic motion planning Howard & Kelly (2007); Yu et al. (2010) in unstructured, outdoor environments that have substantial changes in elevation, consist of a variety of terrain surfaces, and/or require frequent accelerations and decelerations.

At least 4 different motion planning tasks can be accomplished using appropriate dynamic and power models:

1. *Time optimal motion planning.*
2. *Energy efficient motion planning.*
3. *Reduction in the frequency of replanning.*
4. *Planning in the presence of a fault, such as flat tire or faulty motor.*
For the purpose of motion planning, this chapter focuses on developing dynamic and power models of a skid-steered wheeled vehicle to support the above motion planning tasks. The dynamic models are the foundation from which the power models of skid-steered wheeled vehicles are derived. The target research platform is a skid-steered vehicle, which can be either tracked or wheeled. Fig. 1 shows examples of a skid-steered wheeled vehicle and a skid-steered tracked vehicle.

This chapter is organized into five sections. Section 1 is the introduction. Section 2 presents the kinematic models of a skid-steered wheeled vehicle, which is the preliminary knowledge to the proposed dynamic model and power model. Section 3 develops analytical dynamic models of a skid-steered wheeled vehicle for general 2D motion. The developed models are characterized by the coefficient of rolling resistance, the coefficient of friction, and the shear deformation modulus, which have terrain-dependent values. Section 4 develops analytical power models of a skid-steered vehicle and its inner and outer motors in general 2D curvilinear motion. The developed power model builds upon a previously developed dynamic model in Section 3. Section 5 experimentally verifies the proposed dynamic models and power models of a robotic skid-steered wheeled vehicle.

Ackerman steering, differential steering, and skid steering are the most widely used steering mechanisms for wheeled and tracked vehicles. Ackerman steering has the advantages of good lateral stability when turning at high speeds, good controllability Siegwart & Nourbakhsh (2005) and lower power consumption Shamah et al. (2001), but has the disadvantages of low maneuverability and need of an explicit mechanical steering subsystem Mandow et al. (2007); Shamah et al. (2001); Siegwart & Nourbakhsh (2005). Differential steering is popular because it provides high maneuverability with a zero turning radius and has a simple steering configuration Siegwart & Nourbakhsh (2005); Zhang et al. (1998). However, it does not have strong traction and mobility over rough and loose terrain, and hence is seldom used for outdoor terrains. Like differential steering, skid steering leads to high maneuverability Caracciolo et al. (1999); Economou et al. (2002); Siegwart & Nourbakhsh (2005), faster response Martinez et al. (2005), and also has a simple Mandow et al. (2007); Petrov et al. (2000); Shamah et al. (2001) and robust mechanical structure Kozlowski & Pazderski (2004); Mandow et al. (2007); Yi, Zhang, Song & Jayasuriya (2007). In contrast, it also leads to strong traction and high mobility Petrov et al. (2000), which makes it suitable for all-terrain traversal.

Fig. 1. Examples of skid-steered vehicles: (Left) Skid-steered wheeled vehicle, (Right) Skid-steered tracked vehicle

Many of the difficulties associated with modeling and operating both classes of skid-steered vehicles arise from the complex wheel (or track) and terrain interaction Mandow et al. (2007); Yi, Song, Zhang & Goodwin (2007). For Ackerman-steered or differential-steered vehicles, the wheel motions may often be accurately modeled by pure rolling, while for skid-steered vehicles in general curvilinear motion, the wheels (or tracks) roll and slide at the same time Mandow et al. (2007); O. Chuy et al. (2009); Yi, Song, Zhang & Goodwin (2007); Yi, Zhang, Song & Jayasuriya (2007). This makes it difficult to develop kinematic and dynamic models that accurately describe the motion. Other disadvantages are that the motion tends to be energy inefficient, difficult to control Kozlowski & Pazderski (2004); Martinez et al. (2005), and for wheeled vehicles, the tires tend to wear out faster Golconda (2005).

A kinematic model of a skid-steered wheeled vehicle maps the wheel velocities to the vehicle velocities and is an important component in the development of a dynamic model. In contrast to the kinematic models for Ackerman-steered and differential-steered vehicles, the kinematic model of a skid-steered vehicle is dependent on more than the physical dimensions of the vehicle since it must take into account vehicle sliding and is hence terrain-dependent Mandow et al. (2007); Wong (2001). In Mandow et al. (2007); Martinez et al. (2005) a kinematic model of a skid-steered vehicle was developed by assuming a certain equivalence with a kinematic model of a differential-steered vehicle. This was accomplished by experimentally determining the instantaneous centers of rotation (ICRs) of the sliding velocities of the left and right wheels. An alternative kinematic model that is based on the slip ratios of the wheels has been presented in Song et al. (2006); Wong (2001). This model takes into account the longitudinal slip ratios of the left and right wheels. The difficulty in using this model is the actual detection of slip, which cannot be computed analytically. Hence, developing practical methods to experimentally determine the slip ratios is an active research area Endo et al. (2007); Moosavian & Kalantari (2008); Nagatani et al. (2007); Song et al. (2008).
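The ICR-based kinematic model discussed above can be sketched in a simplified symmetric form, where the ICRs of the left and right sliding velocities are assumed to lie at ±y0 from the vehicle frame; y0 is terrain-dependent and must be identified experimentally. The function and the numeric values below are illustrative assumptions, not identified parameters from the cited work:

```python
# Simplified symmetric ICR-based kinematics for a skid-steered vehicle:
# left/right sliding-velocity ICRs assumed at lateral offsets +/- y0, with
# y0 >= c (half the track width). y0 == c recovers the ideal
# differential-drive model; y0 > c accounts for lateral sliding.

def skid_steer_twist(v_left, v_right, y0):
    """Map left/right wheel linear speeds (m/s) to vehicle (v, omega).

    y0: lateral ICR offset in metres, identified experimentally per terrain.
    Returns (longitudinal velocity m/s, yaw rate rad/s).
    """
    v = (v_left + v_right) / 2.0             # longitudinal velocity
    omega = (v_right - v_left) / (2.0 * y0)  # yaw rate, reduced by sliding
    return v, omega

# Ideal differential drive: y0 = c = 0.3 m (assumed half-track width).
print(skid_steer_twist(0.5, 1.0, 0.3))   # (0.75, ~0.83)
# Same wheel speeds with sliding, identified y0 = 0.45 m: lower yaw rate.
print(skid_steer_twist(0.5, 1.0, 0.45))
```

The ratio y0/c acts as a terrain-dependent expansion factor: the further the ICRs lie outside the wheel tracks, the more wheel-speed difference is "wasted" in sliding rather than converted into rotation.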

To date, there is very little published research on the experimentally verified dynamic models for general motion of skid-steered vehicles, especially wheeled vehicles. The main reason is that it is hard to model the tire (or track) and terrain interaction when slipping and skidding occur. (For each vehicle wheel, if the wheel linear velocity computed using the angular velocity of the wheel is larger than the actual linear velocity of the wheel, slipping occurs, while if the computed wheel velocity is smaller than the actual linear velocity, skidding occurs.) The research of Caracciolo et al. (1999) developed a dynamic model for planar motion by considering longitudinal rolling resistance, lateral friction, moment of resistance for the vehicle, and also the nonholonomic constraint for lateral skidding. In addition, a model-based nonlinear controller was designed for trajectory tracking. However, this model uses Coulomb friction to describe the lateral sliding friction and moment of resistance, which contradicts the experimental results Wong (2001); Wong & Chiang (2001). In addition, it does not consider any of the motor properties. Furthermore, the results of Caracciolo et al. (1999) are limited to simulation without experimental verification.
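The slip/skid definition given in parentheses above translates directly into code: compare the wheel's rolling velocity rω with the actual linear velocity of the wheel. The function name and tolerance are illustrative:

```python
# Classify a wheel as slipping, skidding or rolling by comparing the rolling
# velocity r*omega with the actual linear velocity v of the wheel.

def wheel_condition(r, omega, v, tol=1e-9):
    """r: wheel radius (m), omega: wheel angular velocity (rad/s),
    v: actual linear velocity of the wheel (m/s)."""
    rolling_velocity = r * omega
    if rolling_velocity > v + tol:
        return "slipping"   # wheel spins faster than it travels
    if rolling_velocity < v - tol:
        return "skidding"   # wheel travels faster than it spins
    return "rolling"        # pure rolling

print(wheel_condition(0.15, 10.0, 1.2))  # slipping (1.5 m/s rolling > 1.2 m/s actual)
print(wheel_condition(0.15, 10.0, 1.8))  # skidding
```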

The research of Kozlowski & Pazderski (2004) developed a planar dynamic model of a skid-steered vehicle, which is essentially that of Caracciolo et al. (1999), using a different velocity vector (consisting of the longitudinal and angular velocities of the vehicle instead of the longitudinal and lateral velocities). In addition, the dynamics of the motors, though not their power limitations, were added to the model. Kinematic, dynamic and motor-level control laws were explored for trajectory tracking. However, as in Caracciolo et al. (1999), Coulomb friction was used to describe the lateral friction and moment of resistance, and the results are limited to simulation. In Yi, Song, Zhang & Goodwin (2007) a functional relationship between the coefficient of friction and longitudinal slip is used to capture the interaction between the wheels and ground, and further to develop a dynamic model of a skid-steered wheeled vehicle. Also, an adaptive controller is designed to enable the robot to follow a desired trajectory. The inputs of the dynamic model are the longitudinal slip ratios of the four wheels. However, the longitudinal slip ratios are difficult to measure in practice and depend on the terrain surface, instantaneous radius of curvature, and vehicle velocity. In addition, no experiment is conducted to verify the reliability of the torque prediction from the dynamic model, and motor saturation and power limitations are not considered. In Wang et al. (2009) the dynamic model from Yi, Song, Zhang & Goodwin (2007) is used to explore the motion stability of the vehicle, which is controlled to move with constant linear velocity and angular velocity for each half of a lemniscate to estimate wheel slip. As in Yi, Song, Zhang & Goodwin (2007), no experiment is carried out to verify the fidelity of the dynamic model.

The most thorough dynamic analysis of a skid-steered vehicle is found in Wong (2001); Wong & Chiang (2001), which consider steady-state (i.e., constant linear and angular velocities) dynamic models for circular motion of *tracked* vehicles. A primary contribution of this research is that it proposes, and then provides experimental evidence, that in the track-terrain interaction the shear stress is a particular function of the shear displacement. This model differs from the Coulomb model of friction, adopted in Caracciolo et al. (1999); Kozlowski & Pazderski (2004), which essentially assumes that the maximum shear stress is obtained as soon as there is any relative movement between the track and the ground. This research also provides detailed analysis of the longitudinal and lateral forces that act on a tracked vehicle. However, these results have not been extended to skid-steered *wheeled* vehicles. In addition, they do not consider vehicle acceleration, terrain elevation, actuator limitations, or the vehicle control system.

In the existing literature there are very few publications that consider power modeling of skid-steered vehicles. The research of Kim & Kim (2007) provides an energy model of a skid-steered wheeled vehicle in linear motion. This model is essentially the time integration of a power model and is derived from the dynamic model of a motor, including the energy loss due to the armature resistance and viscous friction as well as the kinetic energy of the vehicle. This research also uses the energy model to find the velocity trajectory that minimizes the energy consumption. However, the energy model only considers the dynamics of the motor, and does not include the mechanical dynamics of the vehicle; hence it ignores the substantial energy consumption due to sliding friction. Because longitudinal friction and the moment of resistance lead to substantial power loss when a skid-steered vehicle is in general curvilinear motion, the results of Kim & Kim (2007) cannot be readily extended to motion that is not linear.

The most thorough exploration of power modeling of a skid-steered (tracked) vehicle is presented in Morales et al. (2009) and Morales et al. (2006). This research develops an experimental power model of a skid-steered tracked vehicle from the terrain's perspective. The power model includes the power loss drawn by the terrain due to sliding friction, and also the power losses due to the traction resistance and the motor drivers. Based on another conceptual model, this research considers the case in which the inner track has the same velocity sign as the outer track and *qualitatively* describes the negative sliding friction of the inner track, which leads the corresponding motor to work as a generator. Experiments applying the power model to navigation are also described. However, this research has two limitations that the current research seeks to overcome. First, as in Caracciolo et al. (1999); Kozlowski & Pazderski (2004), discussed above in the context of dynamic modeling of skid-steered vehicles, Coulomb's law is adopted to describe the sliding friction component in the power modeling, which can lead to incorrect predictions for larger turning radii. Second, since the power model is derived from the perspective of the terrain drawing power from the tracks, it does not appear possible to quantify the power consumption of the left and right side motors. This is important since the motion of the vehicle can depend upon the power limitations of the motors.

Building upon the research in Wong (2001); Wong & Chiang (2001), this chapter will develop dynamic models of a skid-steered wheeled vehicle for general curvilinear planar (2D) motion. As in Wong (2001); Wong & Chiang (2001) the modeling is based upon the functional relationship of shear stress to shear displacement. Practically, this means that for a vehicle tire the shear stress varies with the turning radius. This chapter also includes models of the saturation and power limitations of the actuators as part of the overall vehicle model.

Using the developed dynamic model for 2D general curvilinear motion, this chapter will also develop power models of a skid-steered wheeled vehicle based on separate power models for the left and right motors. The power model consists of two parts: (1) the mechanical power consumption, including the mechanical loss due to sliding friction and the moment of resistance, and the power used to accelerate the vehicle; and (2) the electrical power consumption, which is the electrical loss due to the motor electrical resistance. The mechanical power consumption is derived completely from the dynamic model, while the electrical power consumption is derived using the electric current predicted from this dynamic model along with circuit theory. This chapter also discusses an interesting phenomenon: as the turning radius decreases from infinity (corresponding to linear motion), the outer motor always consumes power, while the inner motor first consumes power, then generates power, and finally consumes power again, even though the velocity of the inner wheel is always positive.

In summary, we expect this chapter to make the following two fundamental contributions to dynamic modeling and power modeling of skid-steered wheeled vehicles:

1. **A paradigm for deriving dynamic models of skid-steered wheeled vehicles.** The modeling methodology will result in terrain-dependent models that describe general planar (2D) motion.
2. **A paradigm for deriving power models of skid-steered wheeled vehicles based on dynamic models.** The power model of a skid-steered vehicle will be derived from vehicle dynamic models. The power model will be described from the perspective of the motors and includes both the mechanical power consumption and the electrical power consumption. It can predict when a given trajectory is unachievable because the power limitation of one of the motors is violated.


#### **2. Kinematics of a skid-steered wheeled vehicle**

In this section, the kinematic model of a skid-steered wheeled vehicle is described and discussed. It is an important component in the development of the overall dynamic models and power models of a skid-steered wheeled vehicle.

To mathematically describe the kinematic models that have been developed for skid-steered vehicles, consider a wheeled vehicle moving at constant velocity about an instantaneous center of rotation as shown in Fig. 2.

The global and local coordinate frames are denoted respectively by *X-Y* and *x-y*. The variables *v*, *ϕ*˙ and *R* are respectively the translational velocity, angular velocity and turning radius of the vehicle. The instantaneous centers of rotation for the left wheel and right wheel are given respectively by *ICRl* and *ICRr*. Note that *ICRl* and *ICRr* are the centers for the left and right wheel treads (the parts of the tires that contact and slide on the terrain) Wong & Chiang (2001); Yi, Zhang, Song & Jayasuriya (2007), i.e., they are the centers for the sliding velocities of these contacting treads, but not the centers for the actual velocities of each wheel. It has been shown that the three ICRs lie on the same line, which is parallel to the *x*-axis of the local frame Mandow et al. (2007); Yi, Zhang, Song & Jayasuriya (2007).

In the *x-y* frame, the coordinates of *ICR*, *ICRl* and *ICRr* are described as (*xICR*, *yICR*), (*xICRl*, *yICRl*) and (*xICRr*, *yICRr*). The vehicle velocity is denoted as *u* = [*vx vy ϕ*˙] *<sup>T</sup>*, where *vx* and *vy* are the components of *v* along the *x* and *y* axes. The angular velocities of the left and right wheels are denoted respectively by *ω<sup>l</sup>* and *ωr*. (Note that for both the left and right side of the vehicle the velocities of the front and rear wheels are the same since they are driven by the same belt, and hence, there is only one velocity associated with each side.) The parameters *b*, *B* and *r* are respectively the wheel width, the vehicle width, and the wheel radius.

An experimental kinematic model of a skid-steered wheeled vehicle that is developed in Mandow et al. (2007) is given by

$$
\begin{bmatrix} v\_x \\ v\_y \\ \dot{\varphi} \end{bmatrix} = \frac{r}{x\_{ICRr} - x\_{ICRl}} \begin{bmatrix} -y\_{ICR} & y\_{ICR} \\ x\_{ICRr} & -x\_{ICRl} \\ -1 & 1 \end{bmatrix} \begin{bmatrix} \omega\_l \\ \omega\_r \end{bmatrix} \tag{1}
$$


Dynamic Modeling and Power Modeling of Robotic Skid-Steered Wheeled Vehicles 297


Fig. 2. The kinematics of a skid-steered wheeled vehicle and the corresponding instantaneous centers of rotation (ICRs)

If the skid-steered wheeled vehicle is symmetric about the *x* and *y* axes, then *yICRl* = *yICRr* = 0 and *xICRl* = −*xICRr*. Define the expansion factor *α* as the ratio of the longitudinal distance between the left and right wheels over the vehicle width, i.e.,

$$\alpha \triangleq \frac{x\_{ICRr} - x\_{ICRl}}{B}. \tag{2}$$

Then, for a symmetric vehicle the kinematic model (1) can be expressed as

$$
\begin{bmatrix} v\_y \\ \dot{\varphi} \end{bmatrix} = \frac{r}{\alpha B} \begin{bmatrix} \frac{\alpha B}{2} & \frac{\alpha B}{2} \\ -1 & 1 \end{bmatrix} \begin{bmatrix} \omega\_l \\ \omega\_r \end{bmatrix} . \tag{3}
$$

(Note that *vx* = 0.)

The expansion factor *α* varies with the terrain. Experimental results show that the larger the rolling resistance, the larger the expansion factor. For a Pioneer 3-AT, *α* = 1.5 for a vinyl lab surface and *α* > 2 for a concrete surface. Equation (3) shows that the kinematic model of a skid-steered wheeled vehicle of width *B* is equivalent to the kinematic model of a differential-steered wheeled vehicle of width *αB*. Note that when *α* = 1, (3) becomes the kinematic model for a differential-steered wheeled vehicle.
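The equivalence just described is simple to evaluate; the sketch below implements the kinematic model (3). Only α = 1.5 (vinyl surface) is quoted above, so the wheel radius and vehicle width used here are illustrative assumptions, not measured values:

```python
def skid_steer_kinematics(omega_l, omega_r, r, B, alpha):
    """Kinematic model (3): wheel angular velocities -> (v_y, phi_dot).

    alpha is the terrain-dependent expansion factor: alpha = 1 recovers the
    differential-steer model; alpha > 1 widens the effective track to alpha*B.
    """
    v_y = r * (omega_l + omega_r) / 2.0              # longitudinal velocity
    phi_dot = r * (omega_r - omega_l) / (alpha * B)  # yaw rate
    return v_y, phi_dot

# Illustrative numbers: r = 0.11 m, B = 0.4 m, alpha = 1.5 (vinyl lab surface)
v_y, phi_dot = skid_steer_kinematics(4.0, 6.0, r=0.11, B=0.4, alpha=1.5)
R = v_y / phi_dot  # turning radius, since v_y = R * phi_dot
```

Note how a larger α (rougher terrain) reduces the predicted yaw rate for the same wheel-speed difference, which is exactly the terrain dependence discussed above.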

A more rigorously derived kinematic model for a skid-steered vehicle is presented in Moosavian & Kalantari (2008); Song et al. (2006); Wong (2001). This model takes into account the longitudinal slip ratios *il* and *ir* of the left and right wheels and for symmetric vehicles is given by

$$
\begin{bmatrix} v\_y \\ \dot{\varphi} \end{bmatrix} = \frac{r}{B} \begin{bmatrix} \frac{(1-i\_l)B}{2} & \frac{(1-i\_r)B}{2} \\ -(1-i\_l) & (1-i\_r) \end{bmatrix} \begin{bmatrix} \omega\_l \\ \omega\_r \end{bmatrix} \tag{4}
$$


where *il* ≜ (*rωl* − *vl*\_*a*)/(*rωl*), *ir* ≜ (*rωr* − *vr*\_*a*)/(*rωr*), and *vl*\_*<sup>a</sup>* and *vr*\_*<sup>a</sup>* are the actual velocities of the left and right wheels. We have found that when

$$\frac{i\_l}{i\_r} = -\frac{\omega\_r}{\omega\_l} \quad \text{and} \quad \alpha = \frac{1}{1 - \frac{2 i\_l i\_r}{i\_l + i\_r}}, \tag{5}$$

(3) and (4) are identical. Currently, to our knowledge no analysis or experiments have been performed to verify the left hand equation in (5) and analyze its physical significance. However, for a limited range of turning radii experimentally derived expressions for *il*/*ir*, essentially in terms of *ω<sup>l</sup>* and *ωr*, are given in Endo et al. (2007); Nagatani et al. (2007).
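The slip-ratio definitions under (4) and the consistency condition (5) can be checked numerically. In the sketch below all numbers are fabricated for illustration and are deliberately chosen so that the left-hand relation in (5) holds, which as noted above has not been experimentally verified:

```python
def slip_ratio(omega, r, v_actual):
    """Longitudinal slip ratio i = (r*omega - v_actual) / (r*omega), as in (4)."""
    return (r * omega - v_actual) / (r * omega)

def alpha_from_slip(i_l, i_r):
    """Expansion factor alpha implied by the right-hand relation in (5)."""
    return 1.0 / (1.0 - 2.0 * i_l * i_r / (i_l + i_r))

# Made-up turn: numbers chosen so that i_l / i_r = -omega_r / omega_l holds.
omega_l, omega_r = 4.0, 6.0
i_r = 0.10                         # outer wheel slips (positive slip ratio)
i_l = -(omega_r / omega_l) * i_r   # inner wheel skids (negative slip ratio)
alpha = alpha_from_slip(i_l, i_r)  # 2.5 for these numbers
```

With these values the slip-ratio model (4) and the ICR model (3) would predict the same vehicle velocities, which is the point of condition (5).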

#### **3. Dynamic modeling of a skid-steered wheeled vehicle**

This section develops dynamic models of a skid-steered wheeled vehicle for general 2D motion. In contrast to dynamic models described in terms of the velocity vector of the vehicle Caracciolo et al. (1999); Kozlowski & Pazderski (2004), the dynamic models here are described in terms of the angular velocity vector of the wheels. This is because the wheel (specifically, the motor) velocities are what the control system actually commands, so this model form is particularly beneficial for control and planning.

Following Kozlowski & Pazderski (2004), the dynamic model considering the nonholonomic constraint is given by

$$M\ddot{q} + C(q, \dot{q}) + G(q) = \tau, \tag{6}$$

where *q* = [*θ<sup>l</sup> θr*] *<sup>T</sup>* is the angular displacement of the left and right wheels, *q*˙ = [*ω<sup>l</sup> ωr*] *<sup>T</sup>* is the angular velocity of the left and right wheels, *τ* = [*τ<sup>l</sup> τr*] *<sup>T</sup>* is the torque of the left and right motors, *M* is the mass matrix, *C*(*q*, *q*˙) is the resistance term, and *G*(*q*) is the gravitational term. The primary focus of the following subsection is the derivation of *C*(*q*, *q*˙) to properly model the ground and wheel interaction. In the following content, it is assumed that the vehicle is symmetric and the center of gravity (CG) is at the geometric center.

When the vehicle is moving on a 2D surface, it follows from the model given in Kozlowski & Pazderski (2004), which is expressed in the local *x-y* coordinates, and the kinematic model (3) that *M* in (6) is given by

$$M = \begin{bmatrix} \frac{mr^2}{4} + \frac{r^2 I}{\alpha^2 B^2} & \frac{mr^2}{4} - \frac{r^2 I}{\alpha^2 B^2} \\ \frac{mr^2}{4} - \frac{r^2 I}{\alpha^2 B^2} & \frac{mr^2}{4} + \frac{r^2 I}{\alpha^2 B^2} \end{bmatrix}, \tag{7}$$

where *m* and *I* are respectively the mass and moment of inertia of the vehicle. Since we are considering planar motion, *G*(*q*) = 0. *C*(*q*, *q*˙) represents the resistance resulting from the interaction of the wheels and terrain, including the rolling resistance, sliding frictions, and the moment of resistance, the latter two of which are modeled using Coulomb friction in Caracciolo et al. (1999); Kozlowski & Pazderski (2004). Assume that *q*˙ = [*ω<sup>l</sup> ωr*] *<sup>T</sup>* is a known constant, then *q*¨ = 0 and (6) becomes

$$C(q, \dot{q}) = \tau. \tag{8}$$
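The mass matrix of (7) follows from projecting the body inertia (m, I) through the kinematic model (3), taking the effective track as αB. A minimal sketch; the mass, inertia, and geometry numbers below are illustrative assumptions, not measured values:

```python
def mass_matrix(m, I, r, B, alpha):
    """Mass matrix M of (7) in wheel coordinates q = [theta_l, theta_r]^T.

    Derived from the body-frame inertias (m, I) and the kinematics (3):
    v_y = r*(w_l + w_r)/2,  phi_dot = r*(w_r - w_l)/(alpha*B).
    """
    a = m * r**2 / 4.0 + r**2 * I / (alpha * B)**2
    b = m * r**2 / 4.0 - r**2 * I / (alpha * B)**2
    return [[a, b], [b, a]]

# Illustrative parameters for a small skid-steered robot (assumed):
M = mass_matrix(m=30.0, I=1.0, r=0.11, B=0.4, alpha=1.5)
# For constant wheel speeds q_ddot = 0, so (6) reduces to C(q, q_dot) = tau:
# the motor torques then equal the terrain-resistance term alone, as in (8).
```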

Previous research Caracciolo et al. (1999); Kozlowski & Pazderski (2004) assumed that the shear stress takes on its maximum magnitude as soon as a small relative movement occurs between the contact surface of the wheel and the terrain. Instead of using this theory, for tracked vehicles Wong (2001) and Wong & Chiang (2001) present experimental evidence showing that the shear stress of the tread is a function of the shear displacement. The maximum shear stress is practically achieved only when the shear displacement exceeds a particular threshold. In this section, this theory is applied to a skid-steered wheeled vehicle.

During the same time, the vehicle has moved from position 1 to position 2 with an angular displacement of *ϕ*. The sliding velocities of point (*xf r*, *yf r*) in the *xr* and *yr* directions are

Dynamic Modeling and Power Modeling of Robotic Skid-Steered Wheeled Vehicles 299

Note that when the wheel is sliding, the direction of friction is opposite to the sliding velocity,

In order to calculate the shear displacement of this reference point, the sliding velocities need to be expressed in the global *X*–*Y* frame. Let *vf r*\_*<sup>X</sup>* and *vf r*\_*<sup>Y</sup>* denote the sliding velocities in the *X* and *Y* directions. Then, the transformation between the local and global sliding velocities

> cos *ϕ* − sin *ϕ* sin *ϕ* cos *ϕ*

The shear displacements *jf r*\_*<sup>X</sup>* and *jf r*\_*<sup>Y</sup>* in the *X* and *Y* directions can be expressed as

(*vf r*\_*<sup>x</sup>* cos *<sup>ϕ</sup>* <sup>−</sup> *vf r*\_*<sup>y</sup>* sin *<sup>ϕ</sup>*) <sup>1</sup>

(*L*/2 − *yf r*)*ϕ*˙ *rωr*

(*vf r*\_*<sup>x</sup>* sin *<sup>ϕ</sup>* <sup>+</sup> *vf r*\_*<sup>y</sup>* cos *<sup>ϕ</sup>*) <sup>1</sup>


Based on the theory in Wong (2001); Wong & Chiang (2001), the shear stress *τss* and shear displacement *j* relationship can be described as,

$$
\tau_{ss} = p\mu(1 - e^{-j/K}),
\tag{9}
$$

where *p* is the normal pressure, *μ* is the coefficient of friction and *K* is the shear deformation modulus. *K* is a terrain-dependent parameter, like the rolling resistance and coefficient of friction Wong (2001).
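A quick numerical sketch of (9) makes its saturating behaviour concrete; the terrain parameters below are hypothetical, chosen only to show the shape of the curve:

```python
import math

def shear_stress(j, p, mu, K):
    """Shear stress tau_ss as a function of shear displacement j, eq. (9)."""
    return p * mu * (1.0 - math.exp(-j / K))

# Hypothetical terrain: p = 10 kPa, mu = 0.8, K = 0.025 m.
p, mu, K = 10e3, 0.8, 0.025
print(shear_stress(0.0, p, mu, K))     # zero displacement -> zero stress
print(shear_stress(10 * K, p, mu, K))  # large j -> saturates near p * mu
```

The stress rises steeply for small displacements and levels off at the Coulomb limit *pμ*, which is what distinguishes (9) from Coulomb's law in (24) below.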

Fig. 3 depicts a skid-steered wheeled vehicle moving counterclockwise (CCW) at constant linear velocity *v* and angular velocity *ϕ*˙ in a circle centered at O from position 1 to position 2. *X*–*Y* denotes the global frame and the body-fixed frames for the right and left wheels are given respectively by the *xr*–*yr* and *xl*–*yl*. The four contact patches of the wheels with the ground are shadowed in Fig. 3 and *L* and *C* are the patch-related distances shown in Fig. 3. It is assumed that the vehicle is symmetric and the center of gravity (CG) is at the geometric center. Note that because *ω<sup>l</sup>* and *ω<sup>r</sup>* are known, *vy* and *ϕ*˙ can be computed using the vehicle kinematic model (3), which enables the determination of the radius of curvature *R* since *vy* = *Rϕ*˙.

Fig. 3. Circular motion of a skid-steered wheeled vehicle

In the *xr*–*yr* frame consider an arbitrary point on the contact patch of the front right wheel with coordinates (*xf r*, *yf r*). This contact patch is *not* fixed on the tire, but is the part of the tire that contacts the ground. The time interval *t* for this point to travel from an initial contact point (*xf r*, *L*/2) to (*xf r*, *yf r*) is,

$$t = \int_{y_{fr}}^{L/2} \frac{1}{r\omega_r}\,dy_r = \frac{L/2 - y_{fr}}{r\omega_r}. \tag{10}$$
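As a sanity check, (10) can be evaluated directly; the patch length and wheel radius below are hypothetical values:

```python
def contact_time(y_fr, L, r, omega_r):
    """Time for a contact point to move from y_r = L/2 to y_r = y_fr, eq. (10)."""
    return (L / 2.0 - y_fr) / (r * omega_r)

# Hypothetical: patch length L = 0.4 m, wheel radius r = 0.11 m, omega_r = 5 rad/s.
print(contact_time(0.0, 0.4, 0.11, 5.0))  # point at the patch centre
print(contact_time(0.2, 0.4, 0.11, 5.0))  # entry point: zero elapsed time
```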


During the same time, the vehicle has moved from position 1 to position 2 with an angular displacement of *ϕ*. The sliding velocities of point (*xf r*, *yf r*) in the *xr* and *yr* directions are denoted by *vf r*\_*<sup>x</sup>* and *vf r*\_*y*. Therefore,

$$
v_{fr\_x} = -y_{fr}\dot{\varphi}, \quad v_{fr\_y} = (R + B/2 + x_{fr})\dot{\varphi} - r\omega_r. \tag{11}
$$

The resultant sliding velocity *vf r* and its angle *γf r* in the *xr*-*yr* frame are

$$v_{fr} = \sqrt{v_{fr\_x}^2 + v_{fr\_y}^2}, \quad \gamma_{fr} = \pi + \arctan\left(\frac{v_{fr\_y}}{v_{fr\_x}}\right). \tag{12}$$

Note that when the wheel is sliding, the direction of friction is opposite to the sliding velocity, and if the vehicle is in pure rolling, *vf r*\_*<sup>x</sup>* and *vf r*\_*<sup>y</sup>* are zero.
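A minimal numeric check of (11) and (12); the turn radius, track width, and wheel values are hypothetical, and `atan2` stands in for the arctan of (12) so the quadrant is unambiguous:

```python
import math

def sliding_velocity(x_fr, y_fr, R, B, phi_dot, r, omega_r):
    """Sliding velocity of the contact point (x_fr, y_fr) of the front right wheel.
    Components per eq. (11); magnitude and angle per eq. (12)."""
    v_x = -y_fr * phi_dot
    v_y = (R + B / 2.0 + x_fr) * phi_dot - r * omega_r
    v = math.hypot(v_x, v_y)
    gamma = math.pi + math.atan2(v_y, v_x)
    return v_x, v_y, v, gamma

# Hypothetical: R = 1 m, B = 0.4 m, phi_dot = 0.5 rad/s, r = 0.11 m, omega_r = 6 rad/s.
vx, vy, v, gamma = sliding_velocity(0.0, 0.05, 1.0, 0.4, 0.5, 0.11, 6.0)
print(vx, vy, v)
```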

In order to calculate the shear displacement of this reference point, the sliding velocities need to be expressed in the global *X*–*Y* frame. Let *vf r*\_*<sup>X</sup>* and *vf r*\_*<sup>Y</sup>* denote the sliding velocities in the *X* and *Y* directions. Then, the transformation between the local and global sliding velocities is given by,

$$
\begin{bmatrix} v_{fr\_X} \\ v_{fr\_Y} \end{bmatrix} = \begin{bmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{bmatrix} \begin{bmatrix} v_{fr\_x} \\ v_{fr\_y} \end{bmatrix}. \tag{13}
$$

The shear displacements *jf r*\_*<sup>X</sup>* and *jf r*\_*<sup>Y</sup>* in the *X* and *Y* directions can be expressed as

$$\begin{split} j_{fr\_X} &= \int_0^t v_{fr\_X}\,dt = \int_{y_{fr}}^{L/2} (v_{fr\_x} \cos\varphi - v_{fr\_y} \sin\varphi) \frac{1}{r\omega_r}\,dy_r \\ &= (R + B/2 + x_{fr}) \cdot \left\{\cos\left[\frac{(L/2 - y_{fr})\dot{\varphi}}{r\omega_r}\right] - 1\right\} - y_{fr} \sin\left[\frac{(L/2 - y_{fr})\dot{\varphi}}{r\omega_r}\right], \end{split} \tag{14}$$

$$\begin{split} j_{fr\_Y} &= \int_0^t v_{fr\_Y}\,dt = \int_{y_{fr}}^{L/2} (v_{fr\_x} \sin\varphi + v_{fr\_y} \cos\varphi) \frac{1}{r\omega_r}\,dy_r \\ &= (R + B/2 + x_{fr}) \cdot \sin\left[\frac{(L/2 - y_{fr})\dot{\varphi}}{r\omega_r}\right] - L/2 + y_{fr} \cos\left[\frac{(L/2 - y_{fr})\dot{\varphi}}{r\omega_r}\right]. \end{split} \tag{15}$$

The resultant shear displacement *jfr* in the *X*–*Y* frame is given by $j_{fr} = \sqrt{j_{fr\_X}^2 + j_{fr\_Y}^2}$. Similarly, it can be shown that for the reference point (*xrr*, *yrr*) in the rear right wheel the angle of the sliding velocity *γrr* in the *xr*-*yr* frame is

$$\gamma_{rr} = \arctan\left[\frac{(R + B/2 + x_{rr})\dot{\varphi} - r\omega_r}{-y_{rr}\dot{\varphi}}\right], \tag{16}$$

and the shear displacements *jrr*\_*<sup>X</sup>* and *jrr*\_*<sup>Y</sup>* are given by

$$j_{rr\_X} = (R + B/2 + x_{rr}) \cdot \left\{\cos\left[\frac{(-C/2 - y_{rr})\dot{\varphi}}{r\omega_r}\right] - 1\right\} - y_{rr} \sin\left[\frac{(-C/2 - y_{rr})\dot{\varphi}}{r\omega_r}\right], \tag{17}$$

$$j_{rr\_Y} = (R + B/2 + x_{rr}) \cdot \sin\left[\frac{(-C/2 - y_{rr})\dot{\varphi}}{r\omega_r}\right] + C/2 + y_{rr} \cos\left[\frac{(-C/2 - y_{rr})\dot{\varphi}}{r\omega_r}\right]. \tag{18}$$

Dynamic Modeling and Power Modeling of Robotic Skid-Steered Wheeled Vehicles 301


and the magnitude of the resultant shear displacement *jrr* is $j_{rr} = \sqrt{j_{rr\_X}^2 + j_{rr\_Y}^2}$. The friction force points in the opposite direction of the sliding velocity. Using *jfr* and *jrr*, derived above, with (9) and integrating along the contact patches yields that the longitudinal sliding friction of the right wheels *Fr*\_ *<sup>f</sup>* can be expressed as

$$\begin{split} F_{r\_f} &= \int_{C/2}^{L/2} \int_{-b/2}^{b/2} p_r \mu_r (1 - e^{-j_{fr}/K_r}) \sin(\pi + \gamma_{fr})\,dx_r\,dy_r \\ &+ \int_{-L/2}^{-C/2} \int_{-b/2}^{b/2} p_r \mu_r (1 - e^{-j_{rr}/K_r}) \sin(\pi + \gamma_{rr})\,dx_r\,dy_r, \end{split} \tag{19}$$

where *pr*, *μ<sup>r</sup>* and *Kr* are respectively the normal pressure, coefficient of friction, and shear deformation modulus of the right wheels. While most of the parameters in (19) can be directly measured, as discussed further below, the parameters *μ<sup>r</sup>* and *Kr* must be estimated.
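To make the chain (14)–(19) concrete, the sketch below evaluates the double integral in (19) with a midpoint Riemann sum. All vehicle and terrain parameters are hypothetical placeholders, and `atan2` with the π offset of (12) is used for the sliding-velocity angle of both patches, which is an assumption about the intended quadrant convention:

```python
import math

def gamma_point(x, y, R, B, phi_dot, r, w):
    """Sliding-velocity angle of a right-side contact point, per (11)-(12)."""
    v_x = -y * phi_dot
    v_y = (R + B / 2.0 + x) * phi_dot - r * w
    return math.pi + math.atan2(v_y, v_x)

def j_point(x, y, y_entry, R, B, phi_dot, r, w):
    """Resultant shear displacement per (14)-(15) (front patch, y_entry = L/2)
    or (17)-(18) (rear patch, y_entry = -C/2)."""
    a = (y_entry - y) * phi_dot / (r * w)  # vehicle rotation while in contact
    jX = (R + B / 2.0 + x) * (math.cos(a) - 1.0) - y * math.sin(a)
    jY = (R + B / 2.0 + x) * math.sin(a) - y_entry + y * math.cos(a)
    return math.hypot(jX, jY)

def F_right(R, B, L, C, b, p, mu, K, phi_dot, r, w, n=40):
    """Longitudinal sliding friction of the right wheels, eq. (19), summed over
    the front ([C/2, L/2]) and rear ([-L/2, -C/2]) contact patches."""
    total = 0.0
    for y_lo, y_hi, y_entry in ((C / 2.0, L / 2.0, L / 2.0),
                                (-L / 2.0, -C / 2.0, -C / 2.0)):
        dy, dx = (y_hi - y_lo) / n, b / n
        for i in range(n):
            y = y_lo + (i + 0.5) * dy
            for k in range(n):
                x = -b / 2.0 + (k + 0.5) * dx
                j = j_point(x, y, y_entry, R, B, phi_dot, r, w)
                g = gamma_point(x, y, R, B, phi_dot, r, w)
                total += p * mu * (1.0 - math.exp(-j / K)) \
                         * math.sin(math.pi + g) * dx * dy
    return total

F = F_right(R=1.0, B=0.4, L=0.4, C=0.2, b=0.05,
            p=2.0e4, mu=0.6, K=0.025, phi_dot=0.5, r=0.11, w=6.0)
print(F)
```

Because the integrand is bounded by *p·μ*, |*F*| can never exceed *p·μ* times the total patch area, which gives a cheap sanity check on the implementation.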

Let *fr*\_*<sup>r</sup>* denote the rolling resistance of the right wheels, including the internal locomotion resistance such as resistance from belts, motor windings and gearboxes Morales et al. (2006). The complete resistance torque *τr*\_*Res* from the ground to the right wheel is given by

$$
\tau_{r\_Res} = r(F_{r\_f} + f_{r\_r}). \tag{20}
$$

Since *ω<sup>r</sup>* is constant, the input torque *τ<sup>r</sup>* from the right motor will compensate for the resistance torque, such that

$$
\tau_r = \tau_{r\_Res}. \tag{21}
$$

The above discussion is for the right wheels. Exploiting the same derivation process, one can obtain analytical expressions for the shear displacements *jf l* and *jrl* of the front and rear left wheels, and the angles of the sliding velocity *γf l* and *γrl*. The longitudinal sliding friction of the left wheels *Fl*\_ *<sup>f</sup>* is then given by

$$\begin{split} F_{l\_f} &= \int_{C/2}^{L/2} \int_{-b/2}^{b/2} p_l \mu_l (1 - e^{-j_{fl}/K_l}) \sin(\pi + \gamma_{fl})\,dx_l\,dy_l \\ &+ \int_{-L/2}^{-C/2} \int_{-b/2}^{b/2} p_l \mu_l (1 - e^{-j_{rl}/K_l}) \sin(\pi + \gamma_{rl})\,dx_l\,dy_l, \end{split} \tag{22}$$

where *pl*, *μ<sup>l</sup>* and *Kl* are respectively the normal pressure, coefficient of friction, and shear deformation modulus of the left wheels. Denote the rolling resistance of the left wheels as *fl*\_*r*. The input torque *τ<sup>l</sup>* of the left motor equals the resistance torque of the left wheel *τl*\_*Res*, such that

$$
\tau_l = \tau_{l\_Res} = r(F_{l\_f} + f_{l\_r}). \tag{23}
$$

Fig. 4 compares the resistance torque prediction of *τl*\_*Res* and *τr*\_*Res* using shear stress and shear displacement function (9) and Coulomb's law when the skid-steered wheeled vehicle of Fig. 13 is in steady state rotation.

$$
\tau\_{\rm ss} = p\mu \text{ (Coulomb's Law)}\tag{24}
$$

It is seen that Coulomb's law leads to a resistance torque that has the same constant value for all turning radii, which contradicts the experimental results shown in Wong (2001); Wong & Chiang (2001) for tracked vehicles and below in Fig. 15 for wheeled vehicles. Using (21) and the left equation of (23) with (8) yields

$$\mathbf{C}(q,\dot{q}) = [\tau\_{l\_{-}\text{Res}} \ \tau\_{r\_{-}\text{Res}}]^T. \tag{25}$$


Fig. 4. Inner and outer motor resistance torque prediction using function (9) and Coulomb's law when the vehicle is in steady-state rotation.

Substituting (7), (25) and *G*(*q*) = 0 into (6) yields a dynamic model that can be used to predict 2D movement for the skid-steered vehicle:

$$
\begin{bmatrix}
\frac{mr^2}{4} + \frac{r^2 I}{\alpha B^2} & \frac{mr^2}{4} - \frac{r^2 I}{\alpha B^2} \\
\frac{mr^2}{4} - \frac{r^2 I}{\alpha B^2} & \frac{mr^2}{4} + \frac{r^2 I}{\alpha B^2}
\end{bmatrix}
\ddot{q} +
\begin{bmatrix}
\tau_{l\_Res} \\ \tau_{r\_Res}
\end{bmatrix} = \begin{bmatrix}
\tau_l \\ \tau_r
\end{bmatrix}.
\tag{26}
$$

In summary, in order to obtain (25), the shear displacement calculation of (14), (15), (17) and (18) is the first step. The inputs to these equations are the left and right wheel angular velocities *ω<sup>l</sup>* and *ωr*. The shear displacements are employed in (19) and (22) to obtain the right and left sliding friction forces, *Fr*\_ *<sup>f</sup>* and *Fl*\_ *<sup>f</sup>* . Next, the sliding friction forces and rolling resistances are substituted into (20) and (23) to calculate the right and left resistance torques, which determine *C*(*q*, *q*˙) using (25).
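The final step of this pipeline, recovering the input torques from (26), can be sketched directly. The mass, inertia, and the parameter *α* below are hypothetical placeholders:

```python
# Inertia-coupling matrix of the dynamic model (26), built from hypothetical
# values of m (mass), r (wheel radius), I (yaw inertia), alpha and B (track width).
m, r, I, alpha, B = 30.0, 0.11, 2.5, 1.0, 0.4

a = m * r**2 / 4.0 + r**2 * I / (alpha * B**2)
c = m * r**2 / 4.0 - r**2 * I / (alpha * B**2)
M = [[a, c], [c, a]]

def wheel_torques(qdd, tau_res):
    """Input torques [tau_l, tau_r] from wheel accelerations and resistance torques."""
    return [M[0][0] * qdd[0] + M[0][1] * qdd[1] + tau_res[0],
            M[1][0] * qdd[0] + M[1][1] * qdd[1] + tau_res[1]]

# In steady-state rotation (qdd = 0) the torques just balance the resistances.
print(wheel_torques([0.0, 0.0], [1.2, 1.8]))  # -> [1.2, 1.8]
```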

#### **4. Power modeling of a skid-steered wheeled vehicle**

This section derives power models for a skid-steered wheeled vehicle, moving as in Fig. 3. The foundation for the power consumption model is the dynamic model of Section 3. The power consumption for each side of the vehicle includes the mechanical power consumption due to the motion of the wheels and the electrical power consumption due to the electrical resistance of the motors. The total power consumption of the vehicle is the sum of the power consumption of the left and right sides.

Assume that a skid-steered wheeled vehicle moves CCW about an instantaneous center of rotation (see Fig. 3). The circuit diagram for each side of the vehicle is shown in Fig. 5. Each circuit includes a battery, motor controller, motor *M* and the motor electrical resistance *Re*. In Fig. 5 *ω<sup>l</sup>* and *ω<sup>r</sup>* are the angular velocities of the left and right wheels, *Ul* and *Ur* are the output voltages of the left and right motor controllers, and *il* and *ir* are the currents of the left


and right circuits. For the experimental vehicle used in this research (the modified Pioneer 3-AT shown in Fig. 13), *ω<sup>l</sup>* and *ω<sup>r</sup>* are always positive<sup>1</sup>.

Fig. 5. The circuit layout for the left and right side of a skid-steered wheeled vehicle.

The electric model of a DC motor at steady state is given by Rizzoni (2000),

$$V_a = E_b + R_a I_a, \tag{27}$$

where *Va* is the supply voltage to the motor, *Ra* is the motor armature resistance, *Ia* is the motor current, and *Eb* is the back EMF. The power consumption *Pa* of a DC motor is given by *Pa* = *Va Ia*. Hence, multiplying (27) by *Ia* yields

$$P_a = V_a I_a = P_e + R_a I_a^2, \tag{28}$$

where *Pe* is the portion of the electric power converted to mechanical power by the motor and is given by

$$P_e = E_b I_a. \tag{29}$$

The mechanical power *Pm* of a DC motor is given by

$$P_m = \omega_m \tau, \tag{30}$$

where *ω<sup>m</sup>* and *τ* are respectively the angular velocity and applied torque of the motor. For the ideal energy-conversion case Rizzoni (2000),

$$P_m = P_e. \tag{31}$$

Substituting (29), (30) and (31) into (28) yields the power model of a DC motor used in the analysis of this research,

$$P\_a = \omega\_m \tau + R\_a I\_a^2. \tag{32}$$

In (32), the first term is the mechanical power consumption, which includes the power to compensate the left and right sliding frictions and the moment of resistance along with the power to accelerate the motor, and the second term is the electrical power consumption due to the motor electric resistance, which is dissipated as heat.
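As a one-line numeric illustration of (32), with a hypothetical operating point:

```python
def motor_power(omega_m, tau, R_a, I_a):
    """Total power drawn by a DC motor, eq. (32): mechanical term plus I^2 R loss."""
    return omega_m * tau + R_a * I_a**2

# Hypothetical: 20 rad/s, 0.5 N*m, 0.8 ohm armature resistance, 2 A.
print(motor_power(20.0, 0.5, 0.8, 2.0))  # 10 W mechanical + 3.2 W resistive
```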

Using (32), the power consumed by the right motor *Pr* can be expressed as,

$$P_r = U_r i_r = P_{r,m} + P_{r,e}, \tag{33}$$

<sup>1</sup> Due to the torque limitations of the motors in the experimental vehicle, the minimum achievable turning radius is larger than half the width of the vehicle. This implies that the instantaneous radius of curvature is located outside of the vehicle body so that *ω<sup>l</sup>* and *ω<sup>r</sup>* are always positive.

where *Pr*,*<sup>m</sup>* and *Pr*,*<sup>e</sup>* are the mechanical power consumption and the electrical power consumption for the right side motor. In the non-ideal case, *Pr*,*<sup>m</sup>* and *Pr*,*<sup>e</sup>* in (33) are,

$$P_{r,m} = \frac{\tau_r \omega_r}{\eta}, \tag{34}$$

$$P_{r,e} = i_r^2 R_e, \tag{35}$$

where *τ<sup>r</sup>* and *ω<sup>r</sup>* are the same as in (26) in Section 3, and *η* is the motor efficiency. For the right side motor, the output torque *τ<sup>r</sup>* determined from the dynamic model (26) is given by

$$
\tau_r = K_T i_r g_r \eta, \tag{36}
$$

where *KT* is the torque constant and *gr* is the gear ratio. So the required current in the right side motor is,

$$i_r = \frac{\tau_r}{K_T g_r \eta}. \tag{37}$$

Plugging (37) into (35) yields,


$$P_{r,e} = \left(\frac{\tau_r}{K_T g_r \eta}\right)^2 R_e. \tag{38}$$

Substituting (34) and (38) into (33), the power model for the right (outer) motor is,

$$P_r = \frac{\tau_r \omega_r}{\eta} + \left(\frac{\tau_r}{K_T g_r \eta}\right)^2 R_e. \tag{39}$$

Notice that the only variables in (39) are the applied torque *τ<sup>r</sup>* and the angular velocity *ωr*, which are available from the dynamic model of Section 3. Similarly, for the left (inner) part of the vehicle,

$$P_l = U_l i_l = P_{l,m} + P_{l,e}, \tag{40}$$

$$P_{l,m} = \frac{\tau_l \omega_l}{\eta}, \tag{41}$$

$$P_{l,e} = i_l^2 R_e, \tag{42}$$

$$
\tau_l = K_T i_l g_r \eta, \tag{43}
$$

$$P_{l,e} = \left(\frac{\tau_l}{K_T g_r \eta}\right)^2 R_e, \tag{44}$$

$$P_l = \frac{\tau_l \omega_l}{\eta} + \left(\frac{\tau_l}{K_T g_r \eta}\right)^2 R_e. \tag{45}$$

Let *P* denote the power that must be supplied by the motor drivers to the motors to enable the motion of a skid-steered wheeled vehicle and define the operator *σ* : **R** → **R** such that

$$\sigma(Q) = \begin{cases} Q: & Q \ge 0 \\ 0: & Q < 0. \end{cases} \tag{46}$$

Then the entire power model of a skid-steered wheeled vehicle is,

$$P = \sigma(P\_r) + \sigma(P\_l). \tag{47}$$
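Eqs. (39), (45), (46) and (47) combine into a short numerical sketch. The motor constants below are hypothetical placeholders, not the identified Pioneer 3-AT values; the example operating point is chosen so that the inner motor regenerates:

```python
def side_power(tau, omega, K_T, g_r, eta, R_e):
    """Power drawn by one side's motor, eqs. (39)/(45): mechanical power over
    efficiency plus the I^2 R loss with i = tau / (K_T * g_r * eta)."""
    return tau * omega / eta + (tau / (K_T * g_r * eta)) ** 2 * R_e

def total_power(P_r, P_l):
    """Total supplied power, eqs. (46)-(47): a negative (regenerating) side is
    clipped to zero because regenerated power does not recharge the battery."""
    return max(P_r, 0.0) + max(P_l, 0.0)

# Hypothetical constants: K_T = 0.05 N*m/A, g_r = 38.3, eta = 0.7, R_e = 0.8 ohm.
K_T, g_r, eta, R_e = 0.05, 38.3, 0.7, 0.8
P_r = side_power(1.8, 6.0, K_T, g_r, eta, R_e)   # outer motor: consumes power
P_l = side_power(-0.4, 5.0, K_T, g_r, eta, R_e)  # inner motor: negative here
print(total_power(P_r, P_l))
```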

Typically, one might expect to write *P* = *Pr* + *Pl*. However, since it turns out that *Pl* can be negative and that this generated power does not charge the battery in our research vehicle,

the more general form (47) is used. To enable the battery to be charged requires modifications of the motor controller, which was beyond the scope of this project.

In summary, given the input of the left wheel and right wheel angular velocities *ω<sup>l</sup>* and *ωr*, the first step in computing (45) and (39), the power models of the left motors and right motors, is the calculation of the left wheel and right wheel sliding frictions *Fl*\_ *<sup>f</sup>* and *Fr*\_ *<sup>f</sup>* using (22) and (19). The sliding frictions along with experimentally determined values of the rolling resistances *fl*\_*<sup>r</sup>* and *fr*\_*<sup>r</sup>* are then substituted into (23) and (20) to obtain the resistance torques *τl*\_*Res* and *τr*\_*Res*, which are in turn substituted into the vehicle dynamic model (26) to obtain the left wheel and right wheel torques *τ<sup>l</sup>* and *τr*. Next, the left and right wheel torques are substituted into (45) and (39) to calculate the power consumption of the left and right wheels. The entire power consumption of the vehicle may then be computed by substituting (45) and (39) into (47). While (45) and (39) are general equations for all DC motor driven vehicles, the *τ<sup>l</sup>* and *τ<sup>r</sup>* in (45) and (39) have to be calculated specifically from the skid-steered dynamic model (26).

#### **4.1 Power models analysis**

To better analyze the vehicle power model, consider the results from a series of simulations for CCW steady-state rotation of the modified Pioneer 3-AT, the research platform shown in Fig. 13. For each simulation, the vehicle was commanded to have a linear velocity of 0.2 m/s. The commanded turning radius *rc* is defined as the turning radius resulting from applying the wheel speeds, *ω<sup>l</sup>* and *ωr*, to a differential-steered kinematic model assuming no slip. The simulations corresponded to varying the commanded turning radius from 10<sup>−0.7</sup> m to 10<sup>4</sup> m with the exponent increasing in increments of 0.1.

Fig. 6. Steady-state, inner and outer wheel torques vs. commanded turning radius, obtained via simulation of the dynamic model, for a commanded linear velocity of 0.2 m/s on the lab vinyl surface.

Fig. 7. Steady-state, inner and outer wheel angular velocities vs. commanded turning radius, obtained via simulation of the dynamic model for the conditions of Fig. 6.

Fig. 6 and Fig. 7 were developed by using a simulation of the dynamic model (26) and respectively represent the applied torques and angular velocities for the left and right wheels. For both the right and left motors Fig. 8 compares the power consumption prediction obtained using the exponential friction model (9) with that obtained using Coulomb's law (24). (Both predictions were based on using the dynamic model (26) in conjunction with the power models (39) and (45).) It is seen that when Coulomb friction is assumed, as the turning radius increases, the power consumption of the outer motor does not converge to a small value (∼3 W) as is predicted using exponential friction. Instead it converges to a much larger value (∼35 W), which contradicts the experimental result in Fig. 16, which is almost identical to the exponential friction prediction. Fig. 9 further shows that the two models can yield dramatic differences in their predictions of the power consumption of the entire vehicle.

For the right motor Fig. 10 shows the total power consumption resulting from (39) along with its mechanical power component from (34) and its electrical power component from (35). Fig. 11 displays the same power information for the left motor using (45), (41), and (42). Although these curves correspond to a specific commanded linear velocity (0.2 m/s), the shapes of these curves are typical of all velocities that have been simulated. Note that in these figures and the following discussion, *r* ≈ 10<sup>0</sup> m and *r̄* ≈ 10<sup>1.1</sup> m.

From (26), it is seen that when a vehicle is in steady state rotation,

$$
\tau_{l\_Res} = \tau_l, \quad \tau_{r\_Res} = \tau_r, \tag{48}
$$

which implies that for vehicle steady-state rotation *τ<sup>l</sup>* and *τ<sup>r</sup>* only need to compensate for the resistance torques. So *τl*, *τ<sup>r</sup>* in Fig. 6 also represent the resistance torques *τl*\_*Res*, *τr*\_*Res* for vehicle steady-state rotation.

Below, Fig. 6, Fig. 7 and Fig. 8 are used to analyze the current, voltage, and power consumption of each motor in greater detail. Particular attention is given to the inner (left) motor since it sometimes generates power.

**Analysis of the Vehicle's Outer (Right) Side**

Fig. 6, Fig. 7 and Fig. 8 show that *τr*, *ω<sup>r</sup>* and *Pr* are always positive. From (36) and (33), it follows that *ir* and *Ur* are also positive. Therefore, for the right side of the vehicle,

$$
i_r > 0, \quad U_r > 0, \quad P_r > 0, \tag{49}
$$

which implies that the outer motor always consumes power. The direction of current flow, and the voltage of the motor controller, motor resistance and motor have the same signs as in Fig. 5.

Fig. 6 and Fig. 7 were developed by using a simulation of the dynamic model (26) and respectively represent the applied torques and angular velocities for the left and right wheels. For both the right and left motors Fig. 8 compares the power consumption prediction obtained using the exponential friction model (9) with that obtained using Coulomb's law (24). (Both predictions were based on using the dynamic model (26) in conjunction with the power models (39) and (45).) It is seen that when Coulomb friction is assumed, as the turning radius increases, the power consumption of the outer motor does not converge to a small value (∼3W) as is predicted using exponential friction. Instead it converges to a much larger value (∼35W), which contradicts the experimental result in Fig. 16, which is almost identical to the exponential friction prediction. Fig. 9 further shows that the two models can yield dramatic differences in their predictions of the power consumption of the entire vehicle. For the right motor Fig. 10 shows the total power consumption resulting from (39) along with its mechanical power component from (34) and its electrical power component from (35). Fig. 11 displays the same power information for the left motor using (45), (41), and (42).Although these curves correspond to a specific commanded linear velocity (0.2m/s), the shapes of these curves are typical of all velocities that have been simulated. Note that in these figures and the

which implies that for vehicle steady state rotation *τ<sup>l</sup>* and *τ<sup>r</sup>* only need to compensate for the resistance torques. So *τl*, *τ<sup>r</sup>* in Fig. 6 also represent the resistance torques *τl*\_*Res*, *τr*\_*Res* for

*τl*\_*Res* = *τl*, *τr*\_*Res* = *τr*, (48)

of the motor controller, which was beyond the scope of this project.

(26).

**4.1 Power models analysis**

with the exponent increasing in increments of 0.1.

following discussion, *<sup>r</sup>* <sup>≈</sup> <sup>10</sup><sup>0</sup> m and *<sup>r</sup>*¯ <sup>≈</sup> 101.1 m.

vehicle steady state rotation.

From (26), it is seen that when a vehicle is in steady state rotation,
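The computation chain just summarized, from commanded wheel speeds to whole-vehicle power, can be sketched in outline. The relations and constants below (wheel radius, track width, `K_t`, `R_m`) are illustrative placeholders standing in for the chapter's equations (19)–(26), (39) and (45), not the chapter's actual implementation:

```python
def commanded_radius(omega_l, omega_r, wheel_radius=0.11, track=0.4):
    """No-slip differential-drive turning radius implied by the commanded
    wheel speeds; wheel_radius and track are illustrative values."""
    v_l, v_r = wheel_radius * omega_l, wheel_radius * omega_r
    if abs(v_r - v_l) < 1e-9:
        return float("inf")              # straight-line motion
    return (track / 2.0) * (v_r + v_l) / (v_r - v_l)

def motor_power(tau, omega, K_t=0.02, R_m=0.5):
    """One motor's consumption: shaft power plus I^2 R heat.

    i = tau / K_t is a placeholder for the chapter's current relations;
    K_t and R_m are invented constants."""
    i = tau / K_t
    return tau * omega + i * i * R_m

def vehicle_power(p_l, p_r):
    """Entire-vehicle total in the spirit of (47): sigma() discards
    regenerated (negative) power, since the battery cannot be charged."""
    sigma = lambda p: max(p, 0.0)
    return sigma(p_l) + sigma(p_r)
```

Equal wheel speeds map to an infinite radius (straight-line motion), while equal and opposite speeds give a zero radius (point turn).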

Below, Fig. 6, Fig. 7 and Fig. 8 are used to analyze the current, voltage, and power consumption of each motor in greater detail. Particular attention is given to the inner (left) motor since it sometimes generates power.

Fig. 6. Steady-state, inner and outer wheel torques vs. commanded turning radius, obtained via simulation of the dynamic model, for a commanded linear velocity of 0.2 m/s on the lab vinyl surface.

Fig. 7. Steady-state, inner and outer wheel angular velocities vs. commanded turning radius, obtained via simulation of the dynamic model for the conditions of Fig. 6.

#### **Analysis of the Vehicle's Outer (Right) Side**

Fig. 6, Fig. 7 and Fig. 8 show that *τr*, *ω<sup>r</sup>* and *Pr* are always positive. From (36) and (33), it follows that *ir* and *Ur* are also positive. Therefore, for the right side of the vehicle,

$$i_r > 0, \quad U_r > 0, \quad P_r > 0, \tag{49}$$

which implies that the outer motor always consumes power. The direction of the current flow and the voltages of the motor controller, motor resistance and motor have the same signs as in Fig. 5.


Dynamic Modeling and Power Modeling of Robotic Skid-Steered Wheeled Vehicles 307



Fig. 8. Power prediction for the inner and outer motors vs. commanded turning radius using exponential friction model (9) and Coulomb's law.

Fig. 9. Power prediction for the whole vehicle corresponding to Fig. 8 using exponential friction model (9) and Coulomb's law.

For the outer side of the vehicle, Fig. 10 shows that the mechanical power consumption and electrical power consumption are nearly equal for each turning radius.<sup>2</sup> (As discussed after (32), only the mechanical power is converted into motion, while the electrical power consumption is dissipated (or wasted) as heat.) In this case, the power source is always the battery operating through the motor controller,<sup>3</sup> while the motor shaft motion consumes mechanical power and the motor electrical resistance consumes electrical power. Referring to (47),

$$
\sigma(P_r) = P_r, \tag{50}
$$

where *Pr* is given by (39).

<sup>2</sup> As the motor electrical resistance decreases, the electrical power consumption will be smaller than the mechanical power consumption.

<sup>3</sup> Some of the battery's power is dissipated as heat in the motor controller's resistance. See Fig. 5.

Fig. 10. Outer motor power comparison: outer total power consumption, outer mechanical power consumption and outer electrical power consumption, obtained via simulation of the dynamic model for the conditions of Fig. 6.

Fig. 11. Inner motor power comparison: inner total power consumption, inner mechanical power consumption and inner electrical power consumption, obtained via simulation of the dynamic model for the conditions of Fig. 6.


#### **Analysis of the Vehicle's Inner (Left) Side**

Fig. 6 shows that *τ<sup>l</sup>* can be either positive or negative, Fig. 7 shows that *ω<sup>l</sup>* is positive, while Fig. 8 shows that *Pl* can be either positive or negative. The signs of *τ<sup>l</sup>* and *Pl* depend on whether the commanded turning radius *rc* is in one of three regions: 1) *rc* ≥ *r*¯, 2) *r* < *rc* < *r*¯, and 3) *rc* ≤ *r*. These three cases are now analyzed.

*Case 1* (*rc* ≥ *r*¯, *Pl* > 0): Fig. 6, Fig. 7 and Fig. 8 show that *τ<sup>l</sup>* > 0, *ω<sup>l</sup>* > 0 and *Pl* > 0. From (43) and (40), it follows *il* > 0, *Ul* > 0. Therefore, for the left side of the vehicle in this case,

$$i_l > 0, \quad U_l > 0, \quad P_l > 0, \tag{51}$$


which implies that the left motor consumes power. The direction of the motor current flow and voltage are as shown in Fig. 5.

Fig. 11 shows that for each commanded turning radius *rc* satisfying *rc* ≥ *r*¯ the total motor power consumption is dominated by the mechanical power consumption although there is a small amount of electrical power consumption. In this case, the power source is the motor controller system, while the motor shaft motion consumes mechanical power and the motor electrical resistance consumes electrical power. Referring to (47),

$$
\sigma(P_l) = P_l, \tag{52}
$$

where *Pl* is given by (45).

*Case 2* (*r* < *rc* < *r*¯, *Pl* < 0): Fig. 6, Fig. 7 and Fig. 8 show that *τ<sup>l</sup>* < 0, *ω<sup>l</sup>* > 0 and *Pl* < 0. From (43) and (40), it follows *il* < 0, *Ul* > 0. Therefore, for the left side of the vehicle in this case,

$$
i_l < 0, \quad U_l > 0, \quad P_l < 0, \tag{53}
$$

which implies that the left motor generates power. In Fig. 5 the direction of *il* and the voltage drop across *Re* are reversed, while the motor controller voltage *Ul* and that of the motor remain as shown.

Fig. 11 shows that for each commanded turning radius *rc* satisfying *r* < *rc* < *r*¯ the mechanical power consumption is negative, and hence the motor shaft motion does not consume power but on the contrary generates power from the terrain. This is because when the vehicle rotates, the outer wheel drags the inner wheel through the vehicle body (Morales et al. (2009)), which leads to backward sliding friction for the inner wheel and the generation of power for the inner motor from the terrain. Since the mechanically generated power is larger than the electrical power consumption, there is a net power generation that is consumed by the motor controller system. In this case, the power source is the motor shaft, while the motor electrical resistance and the motor controller system consume power. Referring to (47),

$$
\sigma(P_l) = 0. \tag{54}
$$

*Case 3* (*rc* ≤ *r*, *Pl* > 0): Fig. 6, Fig. 7 and Fig. 8 show that *τ<sup>l</sup>* < 0, *ω<sup>l</sup>* > 0 and *Pl* > 0. From (43) and (40), it follows *il* < 0, *Ul* < 0. Therefore, for the left side of the vehicle in this case,

$$i_l < 0, \quad U_l < 0, \quad P_l > 0, \tag{55}$$

which implies that the left motor consumes power. In Fig. 5 the direction of *il*, the voltage drop across *Re* and the motor controller voltage *Ul* are reversed, while the voltage sign of the motor remains the same.

Fig. 11 shows for each commanded turning radius *rc* satisfying *rc* ≤ *r* the mechanical power consumption is negative, which means, as in Case 2, the motor shaft motion does not consume power but generates power from terrain. However, unlike Case 2, the generated mechanical power is smaller than the electrical power consumption. Hence, there is a net power consumption and the motor controller system still has to supply power. In this case, the power sources are the motor shaft and the motor controller system, while the motor electrical resistance consumes power. Referring to (47),

$$
\sigma(P_l) = P_l, \tag{56}
$$

where *Pl* is given by (45).
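The three cases above can be collapsed into a single sign test on the inner-motor current and voltage. This is a minimal sketch that assumes the elementary DC-motor relations i = τ/K<sub>t</sub> and U = K<sub>e</sub>·ω + i·R<sub>m</sub> in place of the chapter's (40) and (43), with invented motor constants:

```python
def inner_motor_case(tau_l, omega_l, K_t=0.02, K_e=0.02, R_m=0.5):
    """Classify the inner-motor regime from the signs of i_l and U_l.

    Assumes the elementary DC-motor relations i = tau / K_t and
    U = K_e * omega + i * R_m in place of (40) and (43); the motor
    constants here are invented for illustration."""
    i_l = tau_l / K_t                 # motor current from applied torque
    u_l = K_e * omega_l + i_l * R_m   # back-EMF plus resistive drop
    if i_l > 0 and u_l > 0:
        return "case 1: motor consumes power (r_c >= r_bar)"
    if i_l < 0 and u_l > 0:
        return "case 2: motor generates power (r_ < r_c < r_bar)"
    return "case 3: motor consumes power again (r_c <= r_)"

print(inner_motor_case(-0.001, 10.0))  # a small negative torque at speed
                                       # falls in case 2 (regeneration)
```

The boundary radii *r* and *r*¯ correspond to the sign changes of P<sub>l</sub> and U<sub>l</sub> respectively.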

#### **Analysis of the Power Consumption of the Entire Vehicle**


The overall power consumption of the vehicle is due to the power consumption of both the inner and outer motors and is shown in Fig. 8. Fig. 12 shows the percentage of the overall power consumption of the vehicle due to mechanical power consumption and electrical power consumption. It is of interest to note that when *rc* ≥ *r*¯, the mechanical power consumption is dominant, while as *rc* decreases in value from *r*¯ the electrical heat dissipation eventually dominates. This indicates that in motion planning, as might be expected, it is more energy efficient to plan for trajectories with large turning radii.

Fig. 12. Mechanical power and electrical power consumption percentages vs. commanded turning radius, obtained via simulation of the dynamic model, for the conditions of Fig. 6.

In summary, analysis of the power models of the right and left motors reveals the interesting phenomenon that, for vehicle steady state rotation, while the outer motor always consumes power, as the vehicle turning radius decreases the inner motor first consumes, then generates and finally consumes power again. Since Fig. 8 is generated using the power model (45), the model enables prediction of the two transition turning radii *r* and *r*¯.
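The mechanical/electrical split of Fig. 12 can be illustrated with the same elementary motor relations used above; the torque/speed pairs and the constants `K_t`, `R_m` below are invented, chosen only so that a wide turn comes out mechanically dominated and a tight turn dominated by I²R heat:

```python
def power_split(tau, omega, K_t=0.02, R_m=0.5):
    """Percentages of one motor's total consumption that are mechanical
    (shaft) power and electrical (I^2 R) heat; relations and constants
    are illustrative, standing in for (34)-(35) and (41)-(42)."""
    i = tau / K_t
    p_mech = tau * omega
    p_elec = i * i * R_m
    total = p_mech + p_elec
    return 100.0 * p_mech / total, 100.0 * p_elec / total

# Wide turn: low torque at speed -> mechanical power dominates.
print(power_split(0.02, 50.0))
# Tight turn: high torque, low speed -> I^2 R heat dominates.
print(power_split(0.1, 2.0))
```

This mirrors the trend in Fig. 12: heat dissipation grows as the commanded radius shrinks, which is why large-radius trajectories are more energy efficient.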

### **5. Experimental results**

This section presents the experimental and simulation results used to verify the dynamic models and power models proposed in this chapter.

The experimental platform is the modified Pioneer 3-AT shown in Fig. 13. The original, nontransparent speed controller supplied by the manufacturer was replaced by a PID controller and motor controller. PC104 boards replaced the original control system boards that came with the vehicle. Two current sensors were mounted, one on each side of the vehicle, to provide real-time measurement of the motors' currents. The vehicle was modified to run on the QNX real-time operating system with a control sampling rate of 1 kHz. The mobile robot can be commanded with a linear velocity and turning radius.


Fig. 13. Modified Pioneer 3-AT

#### **5.1 Steady state rotation for different turning radii**

In this subsection, 2D steady state rotation results for different turning radii are presented. For a vehicle commanded linear velocity of 0.2 m/s and a turning radius changing from 10<sup>−0.7</sup> m to 10<sup>4</sup> m, Fig. 14 shows the experimental and simulation wheel angular velocity vs. commanded radius, Fig. 15 shows the experimental and simulation applied torques vs. commanded radius, and Fig. 16 shows the experimental and simulated power consumption vs. commanded radius. In all figures there is good correspondence between the experimental and simulated results. If shear stress were not a function of shear displacement, but instead took on a maximum value after a small relative movement between wheel and terrain, the left and right motor torques would be constant for different commanded turning radii, a phenomenon not seen in Fig. 15. Instead this figure shows that the magnitudes of both the left and right torques reduce as the commanded turning radius increases. The same trend is found in Wong (2001); Wong & Chiang (2001). Note that the three cases of inner motor power consumption are observed in the experimental results of Fig. 16.

Fig. 14. Vehicle inner and outer wheel angular velocity comparison during steady-state CCW rotation for different commanded turning radii on the lab vinyl surface when the commanded linear velocity is 0.2 m/s.

Fig. 15. Vehicle inner and outer wheel applied torque comparison corresponding to Fig. 14.

Fig. 16. Vehicle inner and outer wheel power comparison corresponding to Fig. 14.

#### **5.2 Circular movement with motor power saturation**

In terms of motion planning, one advantage of using the separate motor power models (39) and (45) instead of relying completely on the entire vehicle power model (47) is that the separate models enable more accurate predictions of vehicle velocity when an individual motor experiences power saturation. For a given trajectory it is possible for the entire vehicle power consumption required to achieve that trajectory to be below the total power that can be provided by the motor drive systems, but the power consumption required for one of the motors to be above the power limitation of that motor's drive system. In this case the desired trajectory cannot be achieved.

For the modified Pioneer 3-AT of Fig. 13, the maximum linear velocity is 0.93 m/s. The power limitation for each side of the motor drive system is 51 W, and the total power limitation is 102 W. Fig. 17 was generated using (39) and (45) and shows the power requirements for the inner and outer motors vs. commanded turning radius when the vehicle has a linear velocity of 0.7 m/s. It also shows a line corresponding to the 51 W power limitation of each of the motor drive systems.

Fig. 17. Power limitation for each side of the vehicle, and vehicle inner and outer wheel power prediction during steady-state CCW rotation for different commanded turning radii on the lab vinyl surface when the commanded linear velocity is 0.7 m/s.

For a vehicle commanded linear velocity of 0.7 m/s, the predicted inner and outer motor power requirements for a commanded turning radius of 1.2 m are marked using square symbols in Fig. 17. It is seen that the outer motor power consumption is approximately 58 W, which is above the 51 W power limitation, which means this velocity and radius combination cannot be achieved. However, the total power requirement in this case is also equal to 58 W. This power requirement is well below the 102 W limitation for the entire vehicle, showing the importance of estimating the power consumption of each individual motor. As predicted by the power model, the outer wheel was unable to achieve the desired vehicle velocity due to the power limitation of its motor drive system. Fig. 18 compares the experimental, simulation and commanded angular velocities for the inner and outer wheels, Fig. 19 shows the experimental and simulation applied torques vs. commanded radius, and Fig. 20 shows the experimental and simulated power consumption vs. commanded radius.
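The feasibility test implied by this discussion (check each motor against its 51 W budget as well as the vehicle against the 102 W total) can be written directly. The helper below is a sketch; the 51 W and 102 W limits are the values quoted in the text, and the inner-motor requirement of roughly 0 W in the example follows from the total also being about 58 W:

```python
def feasible(p_inner, p_outer, per_motor_limit=51.0, total_limit=102.0):
    """Check a predicted operating point against both power budgets.

    The 51 W per-side and 102 W total limits are the modified
    Pioneer 3-AT's values quoted in the text."""
    if p_inner + p_outer > total_limit:
        return False                       # whole-vehicle saturation
    return max(p_inner, p_outer) <= per_motor_limit

# The 0.7 m/s, 1.2 m example: outer motor needs ~58 W while the vehicle
# total is also ~58 W, so only the per-motor check fails.
print(feasible(0.0, 58.0))   # -> False
print(feasible(40.0, 50.0))  # -> True
```

A vehicle-level check alone would have declared the 58 W case achievable, which is exactly the pitfall the per-motor models avoid.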

#### **5.3 Curvilinear movement**

In this subsection, the model is tested for general 2D curvilinear motion. The vehicle was commanded to move in a lemniscate trajectory, which required it to have modest linear and angular accelerations. Fig. 21 shows one complete cycle of the lemniscate trajectory along with the partial lemniscate trajectory used in the experiment.


Fig. 18. Experiment, simulation and commanded angular velocity comparison for inner and outer wheels for 2D circular movement on the lab vinyl surface when the commanded radius is 1.2 m and the commanded linear velocity is 0.7 m/s.

Fig. 19. Vehicle inner and outer wheels torque comparison corresponding to Fig. 18.

In Fig. 21, the lemniscate trajectory is governed by,

$$X(t) = \frac{9\cos(\frac{t}{50})}{1 + \sin^2(\frac{t}{50})},\tag{57}$$

$$Y(t) = \frac{27\sin(\frac{t}{50})\cos(\frac{t}{50})}{1 + \sin^2(\frac{t}{50})},\tag{58}$$

where *t* is the time, and (*X*(*t*), *Y*(*t*)) is the position in global coordinates. The vehicle was first commanded to go straight with an acceleration of 1 m/s<sup>2</sup> so as to reach, within 3 seconds, the 0.54 m/s entry velocity of the lemniscate trajectory in Fig. 21. It was then commanded to follow the desired trajectory in Fig. 21 with changing linear velocity and turning radius for another 18 seconds. The linear acceleration changes in the range [0, 0.02] m/s<sup>2</sup>.

Fig. 20. Vehicle inner and outer wheels power comparison corresponding to Fig. 18.

Fig. 21. Partial and whole lemniscate trajectories.

Fig. 22. Vehicle linear velocity and angular velocity comparison corresponding to the lemniscate movement in Fig. 21.

Fig. 23. Vehicle inner and outer wheel angular velocity comparison corresponding to the lemniscate movement in Fig. 21.

Fig. 22, Fig. 23 and Fig. 24 show the experimental and simulation comparisons for the vehicle linear and angular velocity, the inner and outer wheel angular velocity, and the inner and outer wheel applied torque. Fig. 25 shows the corresponding power comparison for the inner and outer part of the vehicle. These results reveal that if the vehicle turns with continually changing linear and angular accelerations of limited magnitude, the dynamic models and power models are still capable of providing high fidelity predictions for both the inner and outer part of the vehicle. Fig. 25 also shows during the lemniscate traversal the inner motor gradually changes from consuming power to generating power while the outer motor always consumes power. The transition time for the inner motor can also be predicted from the motor power model (45).

24 Will-be-set-by-IN-TECH

Fig. 20. Vehicle inner and outer wheels power comparison corresponding to Fig. 18.

turning radius for another 18 seconds. The linear acceleration changes in the range [0 0.02]

Fig. 22, Fig. 23 and Fig. 24 show the experimental and simulation comparisons for the vehicle linear and angular velocity, the inner and outer wheel angular velocity, and the inner and outer wheel applied torque. Fig. 25 shows the corresponding power comparison for the inner and outer part of the vehicle. These results reveal that if the vehicle turns with continually changing linear and angular accelerations of limited magnitude, the dynamic models and power models are still capable of providing high fidelity predictions for both the inner and outer part of the vehicle. Fig. 25 also shows during the lemniscate traversal the inner motor gradually changes from consuming power to generating power while the outer motor always consumes power. The transition time for the inner motor can also be predicted from the motor

Fig. 21. Partial and whole lemniscate trajectories

m/s2.

power model (45).

Fig. 22. Vehicle linear velocity and angular velocity comparison corresponding to lemniscate movement in Fig. 21

Fig. 23. Vehicle inner and outer wheel angular velocity comparison corresponding to lemniscate movement in Fig. 21
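The commanded trajectory of Eqs. (57)–(58) can be checked numerically; a minimal sketch in which the sampling step and the use of NumPy finite differences are choices of this example, not of the chapter:

```python
import numpy as np

# Lemniscate trajectory of Eqs. (57)-(58); the 18 s duration and the 10 ms
# sampling step are assumptions for this sketch.
def lemniscate(t):
    u = t / 50.0
    denom = 1.0 + np.sin(u) ** 2
    return 9.0 * np.cos(u) / denom, 27.0 * np.sin(u) * np.cos(u) / denom

t = np.linspace(0.0, 18.0, 1801)
x, y = lemniscate(t)
dt = t[1] - t[0]
vx, vy = np.gradient(x, dt), np.gradient(y, dt)
speed = np.hypot(vx, vy)

# Speed at the start of the traversal; cf. the 0.54 m/s entering velocity
# quoted in the text.
print(round(speed[0], 2))
```

Differentiating (57)–(58) at *t* = 0 gives a purely tangential velocity of 27/50 = 0.54 m/s, which is exactly the entering velocity stated above.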

Fig. 24. Vehicle inner and outer wheel torque comparison corresponding to the lemniscate movement in Fig. 21.

Fig. 25. Vehicle inner and outer wheel power comparison corresponding to the lemniscate movement in Fig. 21.

#### **6. References**

Caracciolo, L., Luca, A. D. & Iannitti, S. (1999). Trajectory tracking control of a four-wheel differentially driven mobile robot, *Proceedings of the IEEE International Conference on Robotics and Automation*, Detroit, MI, pp. 2632–2638.

Economou, J., Colyer, R., Tsourdos, A. & White, B. (2002). Fuzzy logic approaches for wheeled skid-steer vehicles, *Vehicular Technology Conference*, pp. 990–994.

Endo, D., Okada, Y., Nagatani, K. & Yoshida, K. (2007). Path following control for tracked vehicles based on slip-compensation odometry, *Proceedings of the IEEE International Conference on Intelligent Robots and Systems*, pp. 2871–2876.

Golconda, S. (2005). *Steering control for a skid-steered autonomous ground vehicle at varying speed*, Master's thesis, University of Louisiana at Lafayette.

Howard, T. M. & Kelly, A. (2007). Optimal rough terrain trajectory generation for wheeled mobile robots, *International Journal of Robotics Research*, pp. 141–166.

Kim, C. & Kim, B. K. (2007). Minimum-energy translational trajectory generation for differential-driven wheeled mobile robots, *Journal of Intelligent and Robotic Systems*, pp. 367–383.

Kozlowski, K. & Pazderski, D. (2004). Modeling and control of a 4-wheel skid-steering mobile robot, *International Journal of Applied Mathematics and Computer Science*, pp. 477–496.

Mandow, A., Martínez, J. L., Morales, J., Blanco, J.-L., García-Cerezo, A. & Gonzalez, J. (2007). Experimental kinematics for wheeled skid-steer mobile robots, *Proceedings of the International Conference on Intelligent Robots and Systems*, San Diego, CA, pp. 1222–1227.

Martinez, J., Mandow, A., Morales, J., Pedraza, S. & Garcia-Cerezo, A. (2005). Approximating kinematics for tracked mobile robots, *International Journal of Robotics Research*, pp. 867–878.

Moosavian, S. A. A. & Kalantari, A. (2008). Experimental slip estimation for exact kinematics modeling and control of a tracked mobile robot, *Proceedings of the International Conference on Intelligent Robots and Systems*, Nice, France, pp. 95–100.

Morales, J., Martinez, J. L., Mandow, A., Garcia-Cerezo, A., Gomez-Gabriel, J. & Pedraza, S. (2006). Power analysis for a skid-steered tracked mobile robot, *Proceedings of the IEEE International Conference on Mechatronics*, pp. 420–425.

Morales, J., Martinez, J. L., Mandow, A., Garcia-Cerezo, A. J. & Pedraza, S. (2009). Power consumption modeling of skid-steer tracked mobile robots on rigid terrain, *IEEE Transactions on Robotics*, pp. 1098–1108.

Nagatani, K., Endo, D. & Yoshida, K. (2007). Improvement of the odometry accuracy of a crawler vehicle with consideration of slippage, *Proceedings of the International Conference on Robotics and Automation*, Rome, Italy, pp. 2752–2757.

O. Chuy, Jr., E. Collins, Jr., Yu, W. & Ordonez, C. (2009). Power modeling of a skid steered wheeled robotic ground vehicle, *Proceedings of the IEEE International Conference on Robotics and Automation*, Kobe, Japan.

Petrov, P., de Lafontaine, J., Bigras, P. & Tetreault, M. (2000). Lateral control of a skid-steering mining vehicle, *Proceedings of the International Conference on Intelligent Robots and Systems*, Takamatsu, Japan, pp. 1804–1809.

Rizzoni, G. (2000). *Principles and Applications of Electrical Engineering*, McGraw-Hill.

Shamah, B., Wagner, M. D., Moorehead, S., Teza, J., Wettergreen, D. & Whittaker, W. R. L. (2001). Steering and control of a passively articulated robot, *SPIE, Sensor Fusion and Decentralized Control in Robotic Systems IV*, Vol. 4571.

Siegwart, R. & Nourbakhsh, I. R. (2005). *Introduction to Mobile Robotics*, MIT Press, Cambridge, MA.

Song, X., Song, Z., Senevirante, L. & Althoefer, K. (2008). Optical flow-based slip and velocity estimation technique for unmanned skid-steered vehicles, pp. 101–106.

Song, Z., Zweiri, Y. H. & Seneviratne, L. D. (2006). Non-linear observer for slip estimation of skid-steering vehicles, *Proceedings of the IEEE International Conference on Robotics and Automation*, Orlando, FL, pp. 1499–1504.

Wang, H., Zhang, J., Yi, J., Song, D., Jayasuriya, S. & Liu, J. (2009). Modeling and motion stability analysis of skid-steered mobile robots, *Proceedings of the International Conference on Robotics and Automation*, Kobe, Japan, pp. 4112–4117.

Wong, J. Y. (2001). *Theory of Ground Vehicle*, 3rd edn, John Wiley & Sons, Inc.

Wong, J. Y. & Chiang, C. F. (2001). A general theory for skid steering of tracked vehicles on firm ground, *Proceedings of the Institution of Mechanical Engineers, Part D, Journal of Automobile Engineering*, pp. 343–355.

Yi, J., Song, D., Zhang, J. & Goodwin, Z. (2007). Adaptive trajectory tracking control of skid-steered mobile robots, *Proceedings of the IEEE International Conference on Robotics and Automation*, Roma, Italy, pp. 2605–2610.

Yi, J., Zhang, J., Song, D. & Jayasuriya, S. (2007). IMU-based localization and slip estimation for skid-steered mobile robots, *Proceedings of the IEEE International Conference on Intelligent Robots and Systems*, San Diego, CA, pp. 2845–2849.

Yu, W., Chuy, O., Collins, E. G. & Hollis, P. (2010). Analysis and experimental verification for dynamic modeling of a skid-steered wheeled vehicle, *IEEE Transactions on Robotics*, pp. 440–453.

Zhang, Y., Hong, D., Chung, J. H. & Velinsky, S. A. (1998). Dynamic model based robust tracking control of a differentially steered wheeled mobile robot, *Proceedings of The American Control Conference*, Philadelphia, PA, pp. 850–855.

318 Mobile Robots – Current Trends

## **Robotic Exploration: Place Recognition as a Tipicality Problem**

E. Jauregi<sup>1</sup>, I. Irigoien<sup>1</sup>, E. Lazkano<sup>1</sup>, B. Sierra<sup>1</sup> and C. Arenas<sup>2</sup>
<sup>1</sup>*Department of Computer Sciences and Artificial Intelligence, University of Basque Country*
<sup>2</sup>*Department of Statistics, University of Barcelona*
*Spain*

#### **1. Introduction**

Autonomous exploration is one of the main challenges for robotics researchers. Exploration requires navigation capabilities in unknown environments and hence the robots should be endowed not only with safe moving algorithms but also with the ability to recognise visited places. Frequently, the aim of indoor exploration is to obtain a map of the robot's environment, i.e. the *mapping* process. Without an automatic mapping mechanism, building the map is a heavy burden for its designer, because the perception of robots differs significantly from that of humans. In addition, the *loop-closing* problem must be addressed, i.e. correspondences among already visited places must be identified during the mapping process.

In this chapter, a recent method for topological map acquisition is presented. The nodes within the obtained topological map do not represent single locations but contain information about areas of the environment. Each time sensor measurements identify a set of landmarks that characterise a location, the method must decide whether or not it is the first time the robot visits that location. From a statistical point of view, the problem we are concerned with is the typicality problem, i.e. the identification of new classes in a general classification context. We addressed the problem using the so-called INCA statistic, which allows one to perform a typicality test (Irigoien & Arenas, 2008). In this approach, the analysis is based on the distances between each pair of units. This approach can be complementary to the more traditional units × measurements – or features – approach and offers some advantages over it. An important advantage is that, once an appropriate distance metric between units is defined, the distance-based method can be applied regardless of the type of data or the underlying probability distribution.
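As a toy illustration of that last point, the distance-based analysis needs only a pairwise distance matrix, so non-numeric observations pose no difficulty. The binary "landmark signatures" below are invented for this example and compared with the Hamming distance:

```python
import numpy as np

# Hypothetical binary landmark signatures (1 = landmark detected); any data
# type works, as long as a sensible pairwise distance can be computed.
signatures = np.array([
    [1, 0, 1, 1, 0],   # location A
    [1, 0, 1, 0, 0],   # location A, revisited
    [0, 1, 0, 1, 1],   # location B
])

# Pairwise Hamming distances: the number of differing landmark bits.
D = (signatures[:, None, :] != signatures[None, :, :]).sum(axis=2)
print(D)
```

The two visits to location A differ in a single bit, while location B is four or five bits away from both; downstream, only this matrix is needed.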

We describe the theoretical basis of the proposed approach and present extensive experimental results obtained in both a simulated and a real robot-environment system. A Behaviour-Based philosophy is used to construct the whole control architecture. The developed system not only allows the robot to construct the map but also proves useful for localisation purposes.


### **2. Literature review**

Loop-closing has long been identified as a critical issue when building maps from local observations. Topological mapping methods isolate the problem of how loops are closed from the problem of how to determine the metrical layout of places in the map and how to deal with noisy sensors.

The loop-closing problem cannot be solved by relying only on exteroceptive information (due to sensor aliasing) or only on proprioceptive information (due to cumulative error). Both environmental properties and odometric information must be used to disambiguate locations and to correct the robot position. Fraundorfer et al. (2007) present a highly scalable vision-based localisation and mapping method that uses image collections, whereas Se et al. (2005) use vision mainly to detect so-called *loop-closing* events – the place has already been visited by the robot – in robot localisation; Tardós et al. (2002) introduce a perceptual grouping process that permits the robust identification and localisation of environmental features from sparse and noisy sonar data. On the other hand, probabilistic Bayesian inference, along with a symbolic topological map, is used by Chen & Wang (2006) to relocalise a mobile robot. More recently, Olson (2009) presents a new loop-closing approach based on data association, where places are recognised by performing a number of pose-to-pose matchings; a review of loop-closing approaches related to MonoSLAM can be found in (Williams et al., 2009). Within the field of probabilistic robotics (Thrun et al., 2005), Kalman filters, Bayesian networks and particle filters are used to maintain probability distributions over the state space while solving mapping, localisation and planning.

But the mapping problem can also be stated from a classification perspective. In most classification problems, training data are available for all classes of instances that can occur at prediction time. In this case, the learning algorithm can use the training data to determine decision boundaries that discriminate among the classes. However, there are some problems that exhibit only a single class of instances at training time but are still amenable to machine learning. At prediction time, new instances with unknown class labels can either belong to the target class or to a new class that was not available during training. In this scenario, two different predictions are possible: *target*, for an instance that belongs to the class learnt during training, and *unknown*, for an instance that does not seem to belong to the previously learnt class. Within the machine learning community this kind of problem is known as a *one-class* problem; within statistics research it is known as a *typicality* problem.
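The target/unknown setting described above can be made concrete with a minimal sketch. The nearest-neighbour distance threshold below is an illustrative choice for the example, not the chapter's method, and the data and threshold are assumptions:

```python
import numpy as np

# One-class setup: only "target" examples are seen at training time; at
# prediction time an instance is labelled "target" or "unknown".
rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(200, 2))   # target-class training data

def predict(x, train, threshold=0.5):
    # Distance to the nearest training example decides the label.
    nearest = np.min(np.linalg.norm(train - x, axis=1))
    return "target" if nearest <= threshold else "unknown"

print(predict(np.array([0.1, -0.2]), train))  # near the training cloud
print(predict(np.array([8.0, 8.0]), train))   # far from anything seen
```

No examples of the "unknown" class are ever needed for training, which is exactly what makes the one-class formulation attractive for exploration.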

To give some examples, in (Hempstalk et al., 2008) the probability distributions of the known values of the class variable are used to determine whether a new case belongs to the known class values or should be considered a member of a different class. One-class classifiers have a wide range of applications: in (Manevitz & Yousef, 2007) one-class classification is applied to document categorisation in order to decide whether a reference is relevant to a database search query. The same authors combine this approach with the Support Vector Machine (SVM) paradigm for document classification purposes (Manevitz & Yousef, 2002); and in (Sánchez-Yáñez et al., 2003) the same idea is applied to texture recognition in images. A thorough review of one-class classification can be found in (Tax, 2001).

Regarding the mobile robotics area, one-class classification approaches can be applied to robot mapping, i.e. to learning the structure of the environment in an automatic manner. In this way, Brooks & Iagnemma (2009) use this approach to deal with terrain recognition, and Wang & Lopes (2005) use it to identify user actions in human-robot interaction. However, direct uses of this approach, under this particular name, have not been found in the robotics literature.

There are different approaches in the literature to deal with the typicality problem (Bar-Hen, 2001; Cuadras & Fortiana, 2000; Irigoien & Arenas, 2008; McDonald et al., 1976; Rao, 1962). Some of them are only suitable for normal multivariate data; others are appropriate for any kind of data but are limited to *k* = 2, where *k* is the number of classes. The latter case offers the most general framework to be applied. However, and in spite of the high diversity of the methods used, to the best of the authors' knowledge, neither typicality nor one-class approaches appear in the mapping literature.

The approach proposed in this chapter combines the INCA statistic (Irigoien & Arenas, 2008) with the topological properties of the environmental locations considered and thus represents a new approach to tackling the robot mapping problem as a typicality case.

#### **3. Typicality test by means of the INCA statistic**

In this section the INCA statistic is introduced and the INCA test is proposed as a solution to the typicality problem.

#### **3.1 Preliminaries**


The data we consider are random vectors and we assume that distinct classes exist. Let *C*1, *C*2, ..., *Ck* be *k* classes represented as *k* independent *S*-valued random vectors **Y**1, **Y**2, ..., **Y***k*, with probability density functions *f*1, *f*2, ..., *fk* with respect to a suitable common measure *λ*. Let *δ*(**y**, **y**′) be a distance (Gower, 1985) function on *S*. We say that *δ* is a Euclidean distance function if the metric space (*S*, *δ*) can be embedded in a Euclidean space, Ψ : *S* → **R***p*, such that:

$$\delta^2(\mathbf{y}, \mathbf{y}') = \|\Psi(\mathbf{y}) - \Psi(\mathbf{y}')\|^2,\tag{1}$$

and we may understand *E*(Ψ(**Y***i*)) as the *δ*-mean of **Y***i*, *i* = 1, ..., *k*.

In this general framework the following concepts are considered. The geometric variability of *Ci*, *i* = 1, ..., *k* with respect to *δ* is defined (Cuadras & Fortiana, 1995) as

$$V_{\delta}(C_i) = \frac{1}{2} \int_{S \times S} \delta^{2}(\mathbf{y}_{i1}, \mathbf{y}_{i2}) f(\mathbf{y}_{i1}) f(\mathbf{y}_{i2}) \lambda(d\mathbf{y}_{i1}) \lambda(d\mathbf{y}_{i2}).$$

This quantity is a variant of Rao's diversity coefficient (Rao, 1982). When *δ* is the Euclidean distance and Σ*i* = *COV*(**Y***i*), then *Vδ*(*Ci*) = *tr*(Σ*i*). For other dissimilarities *Vδ*(*Ci*) is a general measure of dispersion of **Y***i*. In the context of discriminant analysis (Cuadras et al., 1997) the squared distance between *Ci* and *Cj* is defined by

$$\Delta^2(C_i, C_j) = \int_{S\times S} \delta^2(\mathbf{y}_i, \mathbf{y}_j) f(\mathbf{y}_i) g(\mathbf{y}_j) \lambda(d\mathbf{y}_i) \lambda(d\mathbf{y}_j) - V_{\delta}(C_i) - V_{\delta}(C_j).\tag{2}$$

This quantity is the Jensen difference (Rao, 1982) between the distributions of *Ci* and *Cj*. If the metric space (*S*, *δ*) can be embedded (see (1)) in a Euclidean space **R***p* and if *E*(‖Ψ(**Y***i*)‖) and *E*(‖Ψ(**Y***i*)‖²) are finite, then *Vδ*(*Ci*) = *E*(‖Ψ(**Y***i*)‖²) − ‖*E*(Ψ(**Y***i*))‖², *i* = 1, ..., *k*, and Δ²(*Ci*, *Cj*) = ‖*E*(Ψ(**Y***i*)) − *E*(Ψ(**Y***j*))‖². If there is only one element *Ci* = {**y**0}, (3) gives the proximity function of **y**0 to *Cj*,

$$\phi^2(\mathbf{y}_0, \mathbf{Y}_j) = \int_S \delta^2(\mathbf{y}_0, \mathbf{y}_j) f(\mathbf{y}_j) \lambda(d\mathbf{y}_j) - V_{\delta}(C_j).\tag{3}$$

The statistic *W*(**y**0) has a very nice geometric interpretation. It can be interpreted (see Figure 1) as the (squared) orthogonal distance or height *h* of **y**0 on the hyperplane generated by the *δ*-means of *Ci* (*i* = 1, ..., *k*), denoted in Figure 1 by *ai*, *i* = 1, ..., *k*. Then, points which lie significantly far from this hyperplane are held to be outliers. This intuitive idea is used to detect outliers among existing classes.

Fig. 1. For *k* = 3, new observation {**y**0}, centres of classes {**a**1, **a**2, **a**3} and (squared) projections *ri* of the edges {**y**0, **a***i*} on the plane {**a**1, **a**2, **a**3}. The (squared) height *h* is *W*(**y**0).

Suppose now that the data are classified in *k* classes. Let **y**0 be a new observation and consider the test to decide whether **y**0 belongs to one of the fixed classes *Cj*, *j* = 1, ..., *k* or, on the contrary, it is an outlier or an atypical observation which belongs to a different and unknown class. Consider the INCA test,

$$H_0: \mathbf{y}_0 \text{ comes from the class with } \delta\text{-mean } \sum_{i=1}^{k} \alpha_i E(\Psi(\mathbf{Y}_i)), \ \sum_{i=1}^{k} \alpha_i = 1,$$

$$H_1: \mathbf{y}_0 \text{ comes from another unknown class,}$$

and compute statistic (5). If *W*(**y**0) is significant it means that **y**0 comes from a different and unknown class. Otherwise we allocate **y**0 to *Ci* using the rule:

$$\text{Allocate } \mathbf{y}_0 \text{ to } C_i \text{ if } U_i(\mathbf{y}_0) = \min_{j=1,...,k} \{U_j(\mathbf{y}_0)\},\tag{6}$$

where $U_j(\mathbf{y}_0) = \phi^2_j(\mathbf{y}_0) - W(\mathbf{y}_0)$, *j* = 1, ..., *k*.

It can be observed (Irigoien & Arenas, 2008) that *Uj*(**y**0) represents the (squared) projection of {**y**0, *E*(Ψ(**Y***i*))} on the hyperplane {*E*(Ψ(**Y**1)), ..., *E*(Ψ(**Y***k*))}. See Figure 1, where for simplicity the (squared) projection *Uj*(**y**0) is denoted by *rj*, *j* = 1, ..., *k*. Hence, criterion (6) follows the next geometric and intuitive allocation rule: allocate **y**0 to *Ci* if the projection *Ui*(**y**0) is the smallest.

We obtained sampling distributions of *W*(**y**0) and *Uj*(**y**0) (*j* = 1, ..., *k*) by re-sampling methods, in particular drawing bootstrap samples as follows. Draw *N* units **y** with replacement from the union of *C*1, ..., *Ck* and calculate the corresponding *W*(**y**) and

In applied problems the distance function is typically a datum, but the probability distribution for each population is unknown. Natural estimators, given samples $\mathbf{y}\_1^1, \ldots, \mathbf{y}\_{n\_1}^1, \ldots, \mathbf{y}\_1^k, \ldots, \mathbf{y}\_{n\_k}^k$ of sizes $n\_1, \ldots, n\_k$ coming from *C*1, ..., *Ck*, are the following:

• The geometric variability of *Cj*,

$$
\hat{\mathcal{V}}\_{\delta}(C\_j) = \frac{1}{2n\_j^2} \sum\_{l,m} \delta^2(\mathbf{y}\_l^j, \mathbf{y}\_m^j).
$$

• The proximity function of a new object **y**0 to *Cj*,

$$
\hat{\phi}^2(\mathbf{y}\_0, C\_j) = \hat{\phi}\_j^2(\mathbf{y}\_0) = \frac{1}{n\_j} \sum\_{l} \delta^2(\mathbf{y}\_0, \mathbf{y}\_l^j) - \hat{\mathcal{V}}\_{\delta}(C\_j).
$$

• The squared distance between *Ci* and *Cj*,

$$
\hat{\Delta}^2(C\_i, C\_j) = \hat{\Delta}\_{ij}^2 = \frac{1}{n\_i n\_j} \sum\_{l,m} \delta^2(\mathbf{y}\_l^i, \mathbf{y}\_m^j) - \hat{\mathcal{V}}\_{\delta}(C\_i) - \hat{\mathcal{V}}\_{\delta}(C\_j). \tag{4}
$$

See (Arenas & Cuadras, 2002) and references therein for a review of these concepts, their application, different properties and proofs.
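Given precomputed squared distances $\delta^2$, the three estimators above reduce to means of pairwise values. The following is a minimal numerical sketch; the function names and the array layout are our own illustration, not the chapter's implementation:

```python
import numpy as np

def geometric_variability(D2_jj):
    """V-hat_delta(C_j): D2_jj is the n_j x n_j matrix of squared
    distances delta^2(y_l, y_m) between the sample units of class C_j."""
    n = D2_jj.shape[0]
    return D2_jj.sum() / (2.0 * n**2)

def proximity(d2_0j, D2_jj):
    """phi-hat^2_j(y0): d2_0j holds the squared distances delta^2(y0, y_l)
    from the new unit y0 to each unit of C_j."""
    return d2_0j.mean() - geometric_variability(D2_jj)

def squared_class_distance(D2_ij, D2_ii, D2_jj):
    """Delta-hat^2(C_i, C_j), Equation (4): D2_ij is the n_i x n_j matrix
    of squared distances between the units of C_i and those of C_j."""
    return (D2_ij.mean()
            - geometric_variability(D2_ii)
            - geometric_variability(D2_jj))
```

For instance, with squared Euclidean distances and the one-dimensional classes {0, 2} and {10, 12}, the estimated squared distance between the classes is 100, i.e. the squared difference of the class means.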

#### **3.2 INCA statistic**

Consider that *n* units are divided into *k* classes *C*1, ..., *Ck*, of sizes *n*1, ..., *nk*. Consider a fixed unit **y**0, which may be an element of a *Cj*, *j* = 1, ..., *k*, or may belong to an unknown class, i.e. it may be an atypical unit. Consider a new class with *δ*-mean the linear combination $\sum\_{i=1}^{k} \alpha\_i E(\Psi(\mathbf{Y}\_i))$, where $\mathbf{Y}\_i$ is the random vector representing the class *Ci*, *i* = 1, ..., *k*. The INCA statistic is defined as follows:

$$W(\mathbf{y}\_0) = \min\_{\alpha\_i} \left\{ L(\mathbf{y}\_0) \right\}, \qquad \sum\_{i=1}^k \alpha\_i = 1,\tag{5}$$

$$L(\mathbf{y}\_0) = \sum\_{i=1}^k \alpha\_i \phi\_i^2(\mathbf{y}\_0) - \sum\_{1 \le i < j \le k} \alpha\_i \alpha\_j \Delta\_{ij}^2.$$

Here $\phi\_i^2(\mathbf{y}\_0)$ is the proximity function of **y**0 to *Ci* and $\Delta\_{ij}^2$ is the squared distance between *Ci* and *Cj*. The INCA statistic $W(\mathbf{y}\_0) = \min\_{\alpha\_i} L(\mathbf{y}\_0)$ trades off between minimising the weighted sum of proximities of **y**0 to the classes (which takes the within-group variability into consideration) and maximising the weighted sum of the squared distances between classes (the between-groups variability), a common behaviour of a classification criterion. The values $\hat{\alpha} = (\hat{\alpha}\_1, \ldots, \hat{\alpha}\_{k-1})$, together with $\hat{\alpha}\_k = 1 - \sum\_{i=1}^{k-1} \hat{\alpha}\_i$, attaining the minimum in (5) are given by $\hat{\alpha} = M^{-1}N$, where *M* is the $(k-1) \times (k-1)$ matrix

$$M = \left(\Delta\_{ik}^2 + \Delta\_{jk}^2 - \Delta\_{ij}^2\right)\_{i,j=1,\dots,k-1}$$

and *N* is the (*k* − 1) × 1 vector

$$N = \left(\Delta\_{ik}^2 + \phi\_k^2(\mathbf{y}\_0) - \phi\_i^2(\mathbf{y}\_0)\right)\_{i=1,\ldots,k-1}.$$
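Putting the pieces together, *W*(**y**0) can be computed from the estimated proximities and squared inter-class distances by building *M* and *N* and solving the linear system for the weights. A sketch, under the assumption that *M* is invertible (function name and array layout are illustrative):

```python
import numpy as np

def inca_statistic(phi2, Delta2):
    """W(y0) of Equation (5): phi2[i] = phi^2_i(y0) and Delta2[i, j] =
    Delta^2_ij (a NumPy array). Builds M and N as in the text, solves
    M alpha = N for (alpha_1, ..., alpha_{k-1}) and sets
    alpha_k = 1 - sum of the others, then evaluates L(y0)."""
    k = len(phi2)
    M = np.array([[Delta2[i, k - 1] + Delta2[j, k - 1] - Delta2[i, j]
                   for j in range(k - 1)] for i in range(k - 1)])
    N = np.array([Delta2[i, k - 1] + phi2[k - 1] - phi2[i]
                  for i in range(k - 1)])
    alpha = np.append(np.linalg.solve(M, N), 0.0)
    alpha[k - 1] = 1.0 - alpha[:k - 1].sum()
    # L(y0): weighted proximities minus weighted between-class distances
    between = sum(alpha[i] * alpha[j] * Delta2[i, j]
                  for i in range(k) for j in range(i + 1, k))
    return float(alpha @ phi2 - between), alpha
```

As a sanity check consistent with the geometric interpretation given in the text: for point-like classes at (0,0,0), (2,0,0), (0,2,0) under squared Euclidean distance and **y**0 = (0.5, 0.5, 2), this yields *W* = 4, the squared height of **y**0 over the plane of the three class means.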


The statistic *W*(**y**0) has a very nice geometric interpretation. It can be interpreted (see Figure 1) as the (squared) orthogonal distance or height *h* of **y**0 on the hyperplane generated by the *δ*-means of the *Ci* (*i* = 1, ..., *k*), denoted in Figure 1 by **a***i*, *i* = 1, ..., *k*. Points which lie significantly far from this hyperplane are then held to be outliers. This intuitive idea is used to detect outliers among the existing classes.

Fig. 1. For *k* = 3, new observation {**y**0}, centres of classes {**a**1, **a**2, **a**3} and (squared) projection **r***<sup>i</sup>* of the edges {**y**0, **a***i*} on the plane {**a**1, **a**2, **a**3}. The (squared) height **h** is *W*(**y**0)

Suppose now that the data are classified in *k* classes. Let **y**<sup>0</sup> be a new observation and consider the test to decide whether **y**<sup>0</sup> belongs to one of the fixed classes *Cj*, *j* = 1, ..., *k* or, on the contrary, it is an outlier or an atypical observation which belongs to a different and unknown class. Consider the INCA test,

*H*<sup>0</sup> : **y**<sup>0</sup> comes from the class with

$$\begin{aligned} \delta \text{-mean } & \sum\_{i=1}^{k} \alpha\_{i} E(\Psi(\mathbf{Y}\_{i})), \quad \sum\_{i=1}^{k} \alpha\_{i} = 1, \; i = 1, \dots, k, \\ H\_{1}: \; & \mathbf{y}\_{0} \text{ comes from another unknown class,} \end{aligned}$$

and compute statistic (5). If *W*(**y**0) is significant it means that **y**<sup>0</sup> comes from a different and unknown class. Otherwise we allocate **y**<sup>0</sup> to *Ci* using the rule:

$$\text{Allocate } \mathbf{y}\_0 \text{ to } \mathcal{C}\_i \text{ if } \mathcal{U}\_i(\mathbf{y}\_0) = \min\_{j=1,\ldots,k} \{ \mathcal{U}\_j(\mathbf{y}\_0) \},\tag{6}$$

where $U\_j(\mathbf{y}\_0) = \phi\_j^2(\mathbf{y}\_0) - W(\mathbf{y}\_0)$, *j* = 1, ..., *k*.

It can be observed (Irigoien & Arenas, 2008) that *Uj*(**y**0) represents the (squared) projection of the edge {**y**0, *E*(Ψ(**Y***j*))} on the hyperplane {*E*(Ψ(**Y**1)), ..., *E*(Ψ(**Y***k*))}. See Figure 1, where for simplicity the (squared) projection *Uj*(**y**0) is denoted by *rj*, *j* = 1, ..., *k*. Hence, criterion (6) follows this geometric and intuitive allocation rule: allocate **y**0 to *Ci* if the projection *Ui*(**y**0) is the smallest.
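Once *W*(**y**0) is available, allocation rule (6) is a one-line computation; a minimal sketch (illustrative names):

```python
import numpy as np

def allocate(phi2, W):
    """Rule (6): U_j(y0) = phi^2_j(y0) - W(y0) is the squared projection
    of the edge {y0, a_j} on the hyperplane of class delta-means; y0 is
    assigned to the class with the smallest U_j."""
    U = phi2 - W
    return int(np.argmin(U)), U
```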

We obtained sampling distributions of *W*(**y**0) and *Uj*(**y**0) (*j* = 1, ..., *k*) by re-sampling methods, in particular drawing bootstrap samples as follows. Draw *N* units **y** with replacement from the union of *C*1, ..., *Ck* and calculate the corresponding *W*(**y**) and *Uj*(**y**) (*j* = 1, ..., *k*) values. As usual, this process is repeated 10*P* times, with *P* ≥ 1 selected by the user. In this way, the bootstrap distributions under *H*0 are obtained.

Robotic Exploration: Place Recognition as a Tipicality Problem

#### **4. Behavior-Based navigation**

Behavior-Based (BB) systems appeared in 1986, when R.A. Brooks proposed a bottom-up approach for robot control that imposed a new outlook for developing intelligent embodied agents capable of navigating in real environments performing complex tasks. He introduced the Subsumption Architecture (Brooks, 1986; Brooks & Connell, 1986) and developed multiple robotic creatures capable of showing behaviours not seen before in real robots (Brooks, 1989; Connell, 1990; Matarić, 1990). Behavior-based systems were originally inspired by biological systems. Even the simplest animals show navigation capabilities with a high degree of performance. For those systems, navigation consists of determining and maintaining a trajectory to the goal (Mallot & Franz, 2000). The main question to be answered for navigation is not *Where am I?* but *How do I reach the goal?*, and the answer does not always require knowing the initial position. Therefore, the main abilities the agent needs in order to navigate are to move around and to identify goals.

The behavior-based approach to robot navigation relies on the idea that the control problem is better assessed by bottom-up design and the incremental addition of light-weight processes, called behaviors, where each one is responsible for reading its own inputs and sensors and deciding the adequate motor actions. There is no centralized world model, and data from multiple sensors do not need to be merged to match the current system state in a stored model. The motor responses of the several behavioural modules must be coordinated in order to obtain valid intelligent behavior. *Way-finding* methods rely on *local navigation strategies*. How these local strategies are coordinated is a matter of study known as *motor fusion* in BB robotics, as opposed to the well-known *data fusion* process needed to model data information. The aim is to match subsets of available data with motor decisions; the outputs of all the active decisions merge to obtain the final actions. In this case there is no semantic interpretation of the data but behavior emergence.

#### **5. Topological places**

Generally speaking, there are two typical strategies for deriving topological maps: one is to learn the topological map directly; the other is to first learn a geometric map and then to derive a topological model from it through some process of analysis (Thrun, 1999; Thrun & Bücken, 1996a;b).

As mentioned before, BB systems advocate a functional bottom-up decomposition of the control problem into independent processes called behaviours. From this point of view, the topological "map" should be composed of tightly coupled behaviours, specific to the meaningful locations.

A topological map is formally defined as a set of nodes where each node consists of:

1. A set of inputs (from the landmark identification subsystems) and outputs. These outputs should serve to reduce the distance between the current state and the goal.
2. A signature that identifies the node: *signature*<sub>i</sub>. Each location has a signature that reflects the state of a set of specific landmarks and that is used by the robot for localisation purposes.
3. A function *α*<sub>i</sub> to be executed when the node *i* is active and that will output the action to be performed at the node's specific current state. The behaviour of the robot, as well as the associated function of the nodes, can be different depending on the location.
4. The location identifier that contains the initial and final position of the node: $(x\_{i0}, y\_{i0}), (x\_{if}, y\_{if})$.

The overall "map" is then composed of sets of behaviours, each launched on a different thread. The environment is only partially unknown to the robot since it is provided with behaviour modules to properly identify certain features such as corridors, crossings or junctions, and halls, each of them identifiable using distance sensors like a laser scanner. Each landmark identifier outputs a confidence level (*cl*) as a measure of the confidence of the identification process. These values are filtered through the node signatures, giving at each time step the node activation level according to the sensor readings.

• Corridors: the robot is considered to be in a corridor if the place is between 1.6 and 2.4 m wide. To that aim, the left and right side shortest readings are summed and stored in a FIFO buffer. The mean of the buffer is used in a Gaussian function that gives the confidence level of being in a corridor.

• Halls: as opposed to corridors, halls are wide areas. Therefore, the confidence level of being in a hall is defined as 1 minus the probability of being in a corridor.

• Crossings or junctions: these locations are areas where two or more alternative ways are possible. It is mandatory for the robot to identify junctions in order to choose the right way when looking for goals. Depending on the destination, the robot must select one way or another. Crossing areas usually come at the end of a corridor or hall and lead to a new area. Hence, the left and right minimum distances are looked for, and these minimum values are used as a reference for searching continuous intervals of readings that exceed the minimum values. Using the robot heading provided by the compass sensor and the indexes of the laser scan that define the different intervals, the orientations of the possible alternative ways at the junctions are registered.

The goal of the mapping process is to fill in the nodes with the information that they must contain, more precisely, the contents of the signature and the location identifier. To this aim, during the learning process and depending on the state of the landmark identification subsystems, i.e. the confidence level of the corridor/hall/junction (*cl*<sub>corr</sub>, *cl*<sub>hall</sub> and *cl*<sub>cross</sub>), the following information is given to the INCA test:

• Initial and final pose obtained by the odometric subsystem. These poses correspond to the position values of the robot when the node signature activates/deactivates: $(x\_0, y\_0), (x\_f, y\_f)$.

• Initial and mean heading values: $\theta\_0$, $\theta\_{mean}$.

• Length (previously named *duration*) of the area, calculated using the initial and final pose information: $d$.

• Number of alternative ways and their associated orientations: $num\\_ways$ and $\theta\_{w\_1}, \dots, \theta\_{w\_{num\\_ways}}$.
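This recorded information (headings, poses, length, alternative ways) is what later forms the observation vectors of Equation 7. A sketch of how such vectors can be assembled, with headings given in degrees; the helper names are illustrative, not from the chapter's implementation:

```python
import math

def _sin_cos(deg):
    """Headings enter through sin/cos of the angle (given in degrees)."""
    rad = math.radians(deg)
    return math.sin(rad), math.cos(rad)

def corridor_hall_obs(theta0, theta_mean, x0, y0, xf, yf, d):
    """Y for corridors and halls: headings, initial/final pose, length."""
    return [*_sin_cos(theta0), *_sin_cos(theta_mean), x0, y0, xf, yf, d]

def junction_obs(theta0, theta_mean, way_orientations, x0, y0):
    """Y for junctions: headings, alternative-way orientations, initial pose."""
    return [*_sin_cos(theta0), *_sin_cos(theta_mean), *way_orientations, x0, y0]
```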


as a Tipicality Problem 9

Robotic Exploration: Place Recognition as a Tipicality Problem 327

These measurements will constitute the observations of the random vectors **Y** considered in the INCA statistic, as represented in Equation 7.

$$
\begin{aligned}
&\text{Corridors, Halls:}\\
&\mathbf{Y} = (\sin(\theta\_0), \cos(\theta\_0), \sin(\theta\_{mean}), \cos(\theta\_{mean}), (x\_0, y\_0), (x\_f, y\_f), d)\\
&\text{Junctions:}\\
&\mathbf{Y} = (\sin(\theta\_0), \cos(\theta\_0), \sin(\theta\_{mean}), \cos(\theta\_{mean}), \theta\_{w\_1}, \dots, \theta\_{w\_{num\\_ways}}, (x\_0, y\_0))
\end{aligned} \tag{7}
$$

Note that there are two types of measurements: coordinate-type variables, in metres, and orientation-type variables, in degrees.

The corridors/halls/crossings can differ in their orientation (the mean compass value that the robot maintains when going through them in its canonical path). This is why each physical place will correspond to two or more different nodes in the topological map.

#### **6. Proposed approach**

The locations the robot must identify are not only single points but areas surrounding these points. Therefore, we propose, firstly, a data generation approach to characterise the areas and, secondly, the application of the INCA test.

Let us assume that the robot has recorded the geometric information (see section 5) of *k* different places *C*1, ..., *Ck*, all of them of the same type. There is only one **y***i* measurement for each place *Ci* (*i* = 1, ..., *k*). However, the place we want to identify topologically is not just a spot but an area or neighbourhood of the recorded measurement **y***i*. In order to do so, we generate *ni* − 1 new observations for each place *i*, which will make up the observations corresponding to the place *Ci*. These new observations are generated as $\mathbf{y}\_i^l = \mathbf{y}\_i + U(-u, u)$, $l = 2, \ldots, n\_i$, where $U(-u, u)$ stands for the uniform distribution with parameters −*u* and *u* (*u* > 0). Taking into account that the robot records two kinds of variables, metres and degrees, we consider two kinds of values for the parameter of the uniform distribution, let us call them *uM* and *uDEG*, respectively.

Once the data corresponding to the *k* classes (places) are generated, and given **y**0, the information the robot has recorded when it arrives at a new place, the INCA test can be applied, and consequently it is possible to decide whether or not **y**0 corresponds to a new place. In case it is decided that **y**0 is not a new place, the conclusion is that **y**0 is one of the places *C*1, ..., *Ck* according to rule (6).

The Pearson distance has been used to compute the distances $\delta(\mathbf{y}, \mathbf{y}')$ between **y** and **y**′. The parameter values used during the experimental phase were *ni* = 10, *uM* = 2 and *uDEG* = 30. These values were chosen experimentally, as explained in (Jauregi et al., 2011).

#### **7. Exploration behaviour**

As stated earlier, the mapping process requires an exploration strategy to guide the robot during the terrain inspection. The strategy used in this proposal, the exploration behaviour, is a coordination of the local navigation strategies and the landmark identification subsystems the robot is endowed with. The proper combination of these behaviours allows the safe exploration of the environment.

• Two local navigation strategies that are combined in a cooperative manner (weighted sum): balance the free space at both sides of the robot and follow a desired compass orientation ($\theta\_d$).

• Landmark identification subsystems that allow the robot to recognise corridors, left/right walls, halls, junctions and dead-ends. These landmarks are used to change the robot's desired orientation. As an example, Figure 2 shows the coordination of the modules for the case where a dead-end is recognised.

Fig. 2. Diagram of behaviour modules (*v*: translational velocity, *w*: angular velocity). The diagram shows the DEAD\_END, OBSTACLE\_AVOID and COMPASS\_FOLLOW modules, driven by the laser readings and the compass orientation; the desired heading $\theta\_d$ and the module outputs are combined in a weighted sum (Σ) that yields *ν* and *ω*.

Although the robot can be positioned at any starting location, initially and until the robot reaches a dead-end the map remains empty. Hence, the map construction starts after a dead-end has been identified. This gives the correct measurement of the length of the locations (nodes). Afterwards, the first corridor, the first crossing and the first hall are always identified as new nodes, since there is not any instance of the same type already stored in the map.

Once the map building process starts, each time the robot identifies a location (a corridor, a hall or a crossing), the geometric information of the identified location is recorded (the **Y** vector, Equation 7), and then the INCA test is applied to evaluate whether it is a location already visited or a new one. When the location corresponds to a crossing, i.e. a junction, the orientations of the alternative ways the robot can choose are recorded. If the location has been visited before, one of the non-explored paths is randomly selected. In this way, the robot has the chance to cover the whole environment. The robot finishes the exploration process when all the alternatives of the crossing nodes have been tried.

#### **8. Simulated experiments**

Experiments were carried out on the third floor of the Faculty of Computer Science. This environment is a semi-structured, office-like common environment, with regular geometry, as can be seen in Figure 3.

The parameter selection obtained in the previous experimental phase was applied to the more general problem of identifying the whole set of environmental locations during an exploration phase performed in simulation. For this purpose the *Stage* simulation tool was used together with the *Player* robot server.

In order to have a wider view of the mapping process, we let the robot move in the environment for a long time (more than 6500 seconds). The left side of Figure 4 shows the robot's path starting from the dead-end at the bottom left corner, and the right side shows the complete path followed during the exploration of the environment.
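The per-location decision made during this exploration (record the observation, run the INCA test against the stored places of the same type, then either create a new node or label a revisit) can be condensed into a short sketch. The callback names are illustrative; the chapter's implementation uses behaviour threads rather than this exact structure:

```python
def map_building_step(y0, known_obs, inca_is_new, allocate_rule):
    """One decision of the mapping loop: `known_obs` holds the stored
    observations of places of the same type, `inca_is_new` returns True
    when W(y0) is significant (y0 is a new place), and `allocate_rule`
    implements rule (6) for revisited places."""
    if not known_obs or inca_is_new(y0, known_obs):
        known_obs.append(y0)                 # new place: store it as a node
        return "new", len(known_obs) - 1
    return "known", allocate_rule(y0, known_obs)
```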

Robotic Exploration: Place Recognition as a Tipicality Problem

Fig. 3. Third floor of the Faculty of Computer Science. Approx. 60 × 22 meters

Fig. 4. The simulation path resulting from the exploration process

Related to the number of nodes, the map converged to 38 nodes: 17 corridors, 8 halls and 13 crosses (Figure 5). Table 1 shows the number of nodes that have been traversed in the path followed by the robot.

Fig. 5. Evolving number of nodes

|                   | Corridors | Halls | Crossings |
|-------------------|-----------|-------|-----------|
| New: Expected     | 17        | 8     | 13        |
| New: Found        | 100%      | 100%  | 100%      |
| Known: Traversed  | 47        | 23    | 38        |
| Known: Classified | 100%      | 100%  | 100%      |

Table 1. Experimental results

As can be seen, all the nodes were correctly classified:

• Each of the 17 existing corridors was properly labelled as a new place the first time the robot went along it; the same happened with the newly traversed halls and crossing nodes.
• The nodes visited more than once by the robot in this long journey were also properly classified with their corresponding labels; a total of 47 corridors, 23 halls and 38 crossing nodes were visited along the robot's path.


Figure 6 shows the distribution of the locations (plotted according to their central poses) and the evolution of the number of nodes over time.

Fig. 6. Location distribution over the map. Corridors: +; Halls: x; Crossings: \*

In spite of the degree of symmetry of the environment, the spatial configuration of the obtained locations does not show the same degree of symmetry. This is because the robot's perception differs from a human's: since the robot navigates according to a desired compass heading, the same physical place can correspond to several nodes in the topological representation, depending on the robot's orientation.
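As a minimal illustration of this orientation effect (the tagging scheme and the `node_key` helper are assumptions for illustration, not the chapter's code), a place traversed under opposite compass headings receives distinct node identities:

```python
def node_key(place_signature, heading_deg):
    # The robot navigates under a nominal compass heading; tagging each
    # place with that heading means the same physical corridor traversed
    # west-to-east (WE) and east-to-west (EW) yields two distinct nodes.
    nominal = ["WE", "SN", "EW", "NS"][round(heading_deg / 90.0) % 4]
    return (place_signature, nominal)

west_to_east = node_key("corridor-A", 0)
east_to_west = node_key("corridor-A", 180)
# The two traversals map to different topological nodes.
```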

#### **9. Experiments in the real robot/environment system**

The simulation experiments showed that the proposed approach can solve the stated problem. To test the robustness of the approach, the experiments were extended to the real robot-environment system. The robot Tartalo – a *PeopleBot* robot from *MobileRobots* equipped with a Canon VCC5 monocular PTZ vision system, a SICK LMS laser, a TCM2 compass and several sonars and bumpers – has been used for the empirical evaluation of the mapping system developed. Instead of relying on raw odometry information, two odometry correction methods were tested to smooth the positioning error:

• *Laser stabilised odometry* (LODO, by means of the LODO driver provided by *Player*): laser data is used to correct the raw odometry estimate, which, once corrected, exhibits a drift rate an order of magnitude lower than that observed using pure odometry (Howard, 2005).
• *Compass based odometry* (CODO): the compass heading is used to correct raw odometric poses.
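A minimal sketch of the CODO idea, assuming a differential-drive robot and hypothetical variable names: the travelled distance still comes from (error-prone) wheel encoders, but the heading is read from the absolute compass, so the orientation error does not accumulate:

```python
import math

def codo_update(pose, d_left, d_right, compass_heading):
    # Compass-corrected odometry step: translation from wheel encoders,
    # orientation from the absolute compass reading (never integrated).
    x, y, _ = pose
    d = (d_left + d_right) / 2.0       # distance travelled this step
    theta = compass_heading            # absolute heading in radians
    return (x + d * math.cos(theta), y + d * math.sin(theta), theta)

# One metre travelled heading due east (0 rad) from the origin:
pose = codo_update((0.0, 0.0, 0.0), 1.0, 1.0, 0.0)
```

Because theta is reset from the compass at every step, only the x and y estimates drift, which matches the behaviour reported for CODO later in the chapter.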


Experiments were performed on the third floor of the Faculty of Computer Science. The left of Figure 7 shows the path completed by the robot (according to compass-based odometry), and the right shows the evolution of the number of nodes over time for the different positioning methods. Clearly, the compass-based odometry obtained with the proposed approach offers the most precise position information.

On the other hand, Figure 8 shows the distribution of the locations of the different nodes obtained from the run performed by the robot using CODO (Figure 7). As mentioned in the previous section, the difference in perception explains the fact that the number of nodes acquired by the robot differs from what a human would identify.

And, as expected, the number of nodes is higher when the mapping is performed by the robot because of its perception of the environment and its positioning error. Although the number of junction nodes identified is higher in the real run, this is mainly due to the people and furniture the robot comes across, which produce nodes that lead to any number of alternative paths. Nevertheless, after an exploration of about an hour and a half (more than 500 meters), the robot was able to close the loop and to recognise several times the final location as the starting one, thus confirming the suitability of the proposed approach.

Fig. 7. Left: Robot's path (CODO). Right: Evolving number of nodes

As mentioned earlier, the experiments performed in simulation cannot be directly compared with the experiments with the real robot; the simulated sensor readings produce nodes with different characteristics, especially when junction nodes are identified. Hence, the path produced by the exploration strategy in simulation differs from the path executed by the real robot. However, it is interesting to compare the evolution of the learning process using exact odometry with the evolving number of nodes when the odometry is corrected using the compass sensor. The map obtained simulating ideal odometry converged to 38 nodes, whereas the map obtained by the robot after 4500 seconds contained 48 nodes (see Figure 9).


Fig. 8. Location distribution over the map. Corridors: +; Halls: x; Crossings: \*

Fig. 9. Comparison with ideal odometry

#### **10. INCA for localisation**

During the previous experiments the learning process was not stopped once the loop was closed. This methodological criterion was chosen to assess the appropriateness of the approach and, as a result, there was a slow increase in the number of nodes over time, mainly due to odometry error. In practical terms, however, the map learning process can be stopped and the learnt map then used for localisation purposes.

The experiments described in this section were carried out to measure the usefulness of the acquired map for localisation. On this occasion, instead of a non-stop learning process, a criterion was set so that the generation of the map would stop once a certain number of nodes had been included. Once the procedure reaches this value, no more nodes are allowed to be created and hence classification rule 6 (Section 3.2) gives the closest node according to the available data. In this manner, after the map is completed the robot continues moving according to its exploration strategy while the mentioned rule gives its localisation. It is worth mentioning that classification rule 6 is equivalent to the distance-based classifier introduced in (Cuadras, 1992).
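A bare-bones version of such a distance-based assignment is sketched below. The node signatures and the Euclidean distance are placeholder choices for illustration; the chapter's rule 6 operates on the **Y** feature vectors with the INCA proximity:

```python
import math

def closest_node(observation, nodes):
    # Once the map is frozen, every new observation is assigned to the
    # nearest stored node instead of creating a new one.
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(nodes, key=lambda name: dist(observation, nodes[name]))

# Hypothetical node signatures keyed by the chapter's naming scheme:
nodes = {"P1": [4.0, 1.0], "B2": [1.0, 1.0], "H3": [2.0, 5.0]}
location = closest_node([1.2, 0.9], nodes)  # the robot localises at "B2"
```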

Experiments were conducted both in simulation and in the real robot/environment system.

Fig. 10. *Stage* (GPS): robot's path and the obtained map. (a) Path corresponding to perfect odometry; (b) obtained map (P: corridors, B: junctions, H: halls)

#### **10.1 Simulated experiments**

Once more the *Stage* simulator was used to simulate the robot and its environment. The criterion to stop the learning process was established at 38 nodes, which was the number of nodes the map converged to when simulating the mapping process with ideal odometry. Two experiments were carried out in the simulator:

• Ideal odometry (GPS). Figure 10 shows the journey together with the spatial node configuration learnt, whereas Figure 11 shows the results of the mapping and localisation process, and thus the identified set of nodes over time. The mapping process lasted about 1100 seconds, and the fact that no error occurred during the localisation phase (seconds 1100–12000) confirmed that INCA is a valid approach also for localisation. Once the map has been generated, the trajectory of the robot is randomly decided at run-time. The resulting unpredictability means that instead of following a static route, the robot will randomly select the orientation at each junction. As a consequence, the robot does not produce repeatable sequences of nodes in the path, but the probability that it will revisit the whole set of nodes increases.
• Laser corrected odometry (LODO). A new experiment was conducted applying the default odometry error value defined in *Player/Stage* and applying the LODO driver to correct the odometry. Figure 12 shows the journey together with the spatial node configuration learnt, whereas Figure 13 shows the results of the mapping and localisation process, and thus the identified set of nodes over time.


Table 2 shows the path patterns extracted from the plot in Figure 13(a), together with their associated node sequences, the time intervals and the labels used in the plot to represent each pattern.

The identified patterns concentrate in the first part of the plot (seconds 2000 to 12000). As time goes by, the extracted paths become shorter due to localisation failures, and the task becomes extremely difficult from second 12000 onwards. Although an odometry correction method is applied, the accumulating error severely affects the localisation of the robot. The type of error remaining after the LODO correction procedure produces a rotation of the robot's trajectory (see for instance Figure 12(a)) and thus a misclassification of nodes with different assigned orientations. This effect was detected in the sequences labelled as *c*0 − *c*1 in Table 2: chain P17, B34, P38 should have been P31, B34, P38. Node P31, with assigned orientation SN, was misclassified as node P17, with assigned orientation WE.

Note that both procedures produced the same configuration of nodes, 17 corridors, 13 crosses and 8 halls, although their positional information differed due to odometry values.

#### **10.2 Experiments in the real robot/environment system**

A second set of experiments was carried out with the real robot. This time the node threshold was established at 44 nodes.

Figure 14 shows the robot's path and the obtained node distribution using laser corrected odometry values (LODO).

Figure 15 shows the robot's path and the obtained node distribution using compass corrected odometry values (CODO), and Figure 16 shows the mapping process and the localisation over time for both LODO and CODO.

The results were disappointing but confirmed what the simulated experiments showed for the LODO case: although the robot localises properly for about 2000 seconds, afterwards the localisation starts to degrade.


It is not possible to extract valid path patterns from the plots in Figure 16; both the LODO and CODO methods are insufficient for long-term localisation. Looking at the robot's paths drawn in Figures 14(a) and 15(a), it can be stated that:

• When using the LODO correction method, the error accumulates more slowly but occurs in the *x*, *y* and *θ* coordinates. According to LODO odometry, the path rotates over time. While the error is maintained within a certain range, the rotation angle is small and the localisation process works correctly. Afterwards, due to the high dependency of the approach on nominal orientations, the system starts to fail and no correspondences are found.
• When using the CODO correction method, only the *x* and *y* values are affected. The *θ* value is obtained from an absolute reference and hence its error is not cumulative. This produces a diagonal shift of the drawn path over time, which led to the misclassification of the lower corridors as if they were the upper ones. Oddly, the upper corridors were always well identified.


Fig. 11. *Stage* (GPS): node identification over time

Fig. 12. *Stage* (LODO): robot's path and the obtained map. (a) Path corresponding to LODO; (b) obtained map

Table 2. LODO: extracted path patterns (\*: localisation error)

An intuitive way of coping with this problem is to modify the positional values of the nodes each time they are revisited. Instead of keeping the acquired node information unaltered, during the localisation phase the contents of the nodes can be updated when a positive match occurs.
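One simple way to realise this update is an incremental mean over revisits. The scheme and the `update_node` helper are illustrative assumptions; the chapter does not fix the update formula:

```python
def update_node(node_xy, visits, observed_xy):
    # On a positive match, pull the stored (x, y) of the node towards the
    # newly observed position with an incremental (running) mean, which
    # absorbs the residual drift left after odometry correction.
    visits += 1
    x = node_xy[0] + (observed_xy[0] - node_xy[0]) / visits
    y = node_xy[1] + (observed_xy[1] - node_xy[1]) / visits
    return (x, y), visits

# A node stored at (10, 4) is revisited and observed 2 m further east:
xy, n = update_node((10.0, 4.0), 1, (12.0, 4.0))
```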

A last experiment was performed with the robot using CODO to correct the odometry, in order to measure the effect of updating the contents of the nodes. This choice was made because of the lack of accumulated error in the orientation values. Figure 17 shows the acquired map after reaching the maximum number of nodes (established at 39 nodes). The different scales of the two maps reflect the magnitude of the error accumulated in the *x* and *y* coordinates over time. Figure 18 shows the evolution of the localisation system over time.

Table 3 shows the path patterns extracted from the plot in Figure 18, together with their associated node sequences, the time intervals and the labels used in the plot to represent each pattern.

Fig. 13. *Stage* (LODO): node identification over time. (a) Time stamp 0 to 12000; (b) time stamp 12000 to 25000

Fig. 14. Tartalo (LODO): robot's path and the obtained map. (a) Path corresponding to LODO; (b) obtained map

The localisation process lasted until the robot ran out of batteries, and only one location was misclassified. As mentioned in Section 6, some parameters need to be adjusted for INCA to function properly. The value *uM* deeply influences the acceptable deviations from the nodes' (*x*, *y*) locations. A small *uM* value produces failures on loop-closings because of the odometry error. On the contrary, setting *uM* to a high value causes close areas with the same signatures to remain indistinguishable. This effect was detected once during the last localisation experiment carried out: node H30 was wrongly identified as node H32, and thus the sequence H32, P31, H32 at time stamp 1700 should have been H30, P31, H32. Notice that nodes H30 and H32 are separated by a short corridor labelled as P31. Summarising, updating the node information made the developed system valuable for localisation in spite of the odometry error.
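The *uM* trade-off can be illustrated with a toy position-tolerance check. The `same_node` helper and the Euclidean test are assumptions for illustration; INCA's actual acceptance region is statistical:

```python
def same_node(stored_xy, observed_xy, u_m):
    # Two places with identical signatures are merged only if their
    # recorded positions agree within the tolerance u_m.
    dx = stored_xy[0] - observed_xy[0]
    dy = stored_xy[1] - observed_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= u_m

# H30 and H32 are separated only by the short corridor P31
# (illustrative coordinates, 5 m apart):
h30, h32 = (10.0, 4.0), (10.0, 9.0)
kept_distinct = not same_node(h30, h32, u_m=2.0)  # small u_m: distinct
merged = same_node(h30, h32, u_m=6.0)             # large u_m: confused
```

Too small a tolerance rejects genuine loop-closings displaced by odometry error; too large a tolerance merges nearby places with the same signature, exactly the H30/H32 confusion described above.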


Fig. 15. Tartalo (CODO): robot's path and the obtained map. (a) Path corresponding to compass odometry; (b) obtained map


Fig. 16. Tartalo (LODO and CODO): node identification over time. (a) Laser corrected odometry; (b) compass corrected odometry. Labels are set to show extracted path patterns

Fig. 17. Tartalo (CODO with adaptive node location): robot's path and the obtained map. (a) Robot's path; (b) original map; (c) upgraded map

Fig. 18. Tartalo (CODO with adaptive node location): node identification over time

Table 3. CODO: extracted path patterns


### **References**

Brooks, R. A. (1986). A robust layered control system for a mobile robot, *IEEE Journal of Robotics and Automation* RA-2: 14–23.

Brooks, R. A. (1989). A robot that walks: emergent behaviors from a carefully evolved network, *Technical Report AI MEMO 1091*, MIT.

Brooks, R. A. & Connell, J. H. (1986). Asynchronous distributed control system for a mobile robot, *Proceedings of the SPIE's Cambridge Symposium on Optical and Optoelectronic Engineering*, pp. 77–84.

Chen, C. & Wang, H. (2006). Appearance-based topological Bayesian inference for loop-closing detection in a cross-country environment, *International Journal of Robotics Research* 25(10): 953–983.

Connell, J. H. (1990). *Minimalist mobile robotics. A colony-style architecture for an artificial creature*, Academic Press, Inc.

Cuadras, C. M. (1992). Some examples of distance based discrimination, *Biometrical Letters*, pp. 3–20.

Cuadras, C. M. & Fortiana, J. (1995). A continuous metric scaling solution for a random variable, *Journal of Multivariate Analysis* 32: 1–14.

Cuadras, C. M. & Fortiana, J. (2000). The importance of geometry in multivariate analysis and some applications, *Statistics for the 21st Century*, Marcel Dekker, New York, pp. 93–108.

Cuadras, C. M., Fortiana, J. & Oliva, F. (1997). The proximity of an individual to a population with applications in discriminant analysis, *Journal of Classification* 14: 117–136.

Fraundorfer, F., Engels, C. & Nistér, D. (2007). Topological mapping, localization and navigation using image collections, *Intelligent Robots and Systems (IROS)*, pp. 3872–3877.

Gower, J. C. (1985). *Encyclopedia of Statistical Sciences*, Vol. 5, John Wiley & Sons, New York, chapter Measures of similarity, dissimilarity and distance, pp. 397–405.

Hempstalk, K., Frank, E. & Witten, I. H. (2008). One-class classification by combining density and class probability estimation, *ECML PKDD '08: Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I*, Springer-Verlag, Berlin, Heidelberg, pp. 505–519.

Howard, A. (2005). Multi-robot simultaneous localization and mapping using particle filters, *Robotics: Science and Systems*, pp. 201–208.

Irigoien, I. & Arenas, C. (2008). INCA: New statistic for estimating the number of clusters and identifying atypical units, *Statistics in Medicine* 27(15): 2948–2973.

Jauregi, E., Irigoien, I., Sierra, B., Lazkano, E. & Arenas, C. (2011). Loop-closing: a typicality approach, *Robotics and Autonomous Systems* 59: 218–227.

Mallot, H. A. & Franz, M. A. (2000). Biomimetic robot navigation, *Robotics and Autonomous Systems* 30: 133–153.

Manevitz, L. M. & Yousef, M. (2002). One-class SVMs for document classification, *Journal of Machine Learning Research* 2: 139–154.

Manevitz, L. & Yousef, M. (2007). One-class document classification via neural networks, *Neurocomputing* 70(7-9): 1466–1481.

Matarić, M. (1990). *A distributed model for mobile robot environment-learning and navigation*, Master's thesis, MIT Artificial Intelligence Laboratory.

McDonald, L. L., Lowe, V. W., Smidt, R. K. & Meister, K. A. (1976). A preliminary test for discriminant analysis based on small samples, *Biometrics* 32: 417–422.

### **11. Conclusions**

In this chapter a new approach for incremental topological map construction was presented. A statistical test called INCA was used to this end, combined with a data sampling approach which decided if a topological node found by the robot had already been visited by it. The method was integrated in a behaviour-based control architecture and tested also for localisation purposes.

To measure the adequateness of the approach the map acquisition was performed non-stop until the robot run out of batteries. Afterwards, the experiments were repeated but once the number of nodes in the map reached a given threshold, the learning step was finished and the acquired map was used for localisation purposes.

However, INCA also suffers from odometry error. Of the two error correction methods used in the present work, LODO and CODO, compass corrected odometry was better suited for the developed navigation approach. A last experiment was carried out using CODO and modifying the contents of the acquired nodes each time a location was revisited. The type of error remaining after CODO facilitated the upgrade of the nodes' locations and improved drastically the localisation process.

The experiments conducted confirmed INCA based mapping and localisation as a valid approach and that BB systems can be provided with automatic map acquisition mechanisms. To improve the efficiency of the automatic map acquisition system, when looking for correspondences the use of their associated probability value should be studied.

The criteria for stopping the learning process, i.e. the maximum number of nodes should be revised. Given that it is not possible to know a priori the number of nodes, the map should be closed when no more alternative ways remain unvisited in the junction nodes.

Some aspects of the implementation of INCA should be improved and more experiments should be conducted in a systematic manner in order to better identify the advantages and drawbacks of the test.

New local navigation strategies and several landmark identification modules need to be incorporated to increase the granularity of the environment in order to reach more interesting goals than halls and corridors, such as offices and laboratories. Adding more topological nodes would allow the generalisation of the experiments to different environments, and the comparison with other approaches. The first step should be to integrate the door identification and door crossing modules already developed, and to enrich the behaviour associated to several nodes with door crossing abilities, and a wall following behaviour. These two modules would help to cover the perimeter of small rooms and improve the exploration strategy.

Nothing has been said about planning. Up to now, the proposed modifications were tested using an exploration strategy. The overall map should be used for commanding the robot to fulfil a concrete goal and thus, to reach concrete locations.

### **12. References**

Arenas, C. & Cuadras, C. M. (2002). Some recent statistical methods based on distances, *Contributions to Science* 2: 183–191.

Bar-Hen, A. (2001). Preliminary tests in linear discriminat analysis, *Statistica* 4: 585–593.

Brooks, C. & K. Iagnemma, K. (2009). Visual detection of novel terrain via two-class classification, *Proceedings of the ACM symposium on Applied Computing (SAC)*, ACM, New York, NY, USA, pp. 1145–1150.

24 Will-be-set-by-IN-TECH

In this chapter a new approach for incremental topological map construction was presented. A statistical test called INCA was used to this end, combined with a data sampling approach which decided if a topological node found by the robot had already been visited by it. The method was integrated in a behaviour-based control architecture and tested also for

To measure the adequateness of the approach the map acquisition was performed non-stop until the robot run out of batteries. Afterwards, the experiments were repeated but once the number of nodes in the map reached a given threshold, the learning step was finished and the

However, INCA also suffers from odometry error. Of the two error correction methods used in the present work, LODO and CODO, compass corrected odometry was better suited for the developed navigation approach. A last experiment was carried out using CODO and modifying the contents of the acquired nodes each time a location was revisited. The type of error remaining after CODO facilitated the upgrade of the nodes' locations and improved

The experiments conducted confirmed INCA based mapping and localisation as a valid approach and that BB systems can be provided with automatic map acquisition mechanisms. To improve the efficiency of the automatic map acquisition system, when looking for

The criteria for stopping the learning process, i.e. the maximum number of nodes should be revised. Given that it is not possible to know a priori the number of nodes, the map should be

Some aspects of the implementation of INCA should be improved and more experiments should be conducted in a systematic manner in order to better identify the advantages and

New local navigation strategies and several landmark identification modules need to be incorporated to increase the granularity of the environment in order to reach more interesting goals than halls and corridors, such as offices and laboratories. Adding more topological nodes would allow the generalisation of the experiments to different environments, and the comparison with other approaches. The first step should be to integrate the door identification and door crossing modules already developed, and to enrich the behaviour associated to several nodes with door crossing abilities, and a wall following behaviour. These two modules would help to cover the perimeter of small rooms and improve the exploration strategy. Nothing has been said about planning. Up to now, the proposed modifications were tested using an exploration strategy. The overall map should be used for commanding the robot to

Arenas, C. & Cuadras, C. M. (2002). Some recent statistical methods based on distances,

classification, *Proceedings of the ACM symposium on Applied Computing (SAC)*, ACM,

Bar-Hen, A. (2001). Preliminary tests in linear discriminat analysis, *Statistica* 4: 585–593. Brooks, C. & K. Iagnemma, K. (2009). Visual detection of novel terrain via two-class

correspondences the use of their associated probability value should be studied.

closed when no more alternative ways remain unvisited in the junction nodes.

fulfil a concrete goal and thus, to reach concrete locations.

*Contributions to Science* 2: 183–191.

New York, NY, USA, pp. 1145–1150.

**11. Conclusions**

localisation purposes.

drawbacks of the test.

**12. References**

acquired map was used for localisation purposes.

drastically the localisation process.


**16** 

Jie-Tong Zou

*Taiwan, R.O.C.* 

**The Development of the Omnidirectional** 

 *Department of Aeronautical Engineering, National Formosa University* 

In the last few years, intelligent robots were successfully fielded in hospitals (King, S., and Weiman, C, 1990), museums (Burgard, W. et al., 1999), and office buildings/department stores (Endres, H. et al. , 1998), where they perform cleaning services, deliver, educate, or entertain (Schraft, R., and Schmierer, G. , 2005). Robots have also been developed for

Today, the number of elderly in need of care is increasing dramatically. As the baby-boomer generation approaches the retirement age, this number will increase significantly. Current living conditions for the majority of elderly people are already unsatisfactory, and situation

Rapid progress of standard of living and health care resulted in the increase of aging population. More and more elderly people do not receive good care from their family or caregivers. Maybe the intelligent service robots can assist people in their daily living activities. Robotics aids for the elderly have been developed, but many of these robotics aids are mechanical aids. (Song, W.-K. et al., 1998) (Dario, P. et al., 1999) (Takahashi, Y. et al., 1999). The intelligent service robot can assist elderly people with many tasks, such as

The main objective of this Chapter is to develop an omnidirectional mobile home care robot. This service mobile robot is equipped with "Indoor positioning system". The indoor positioning system is used for rapid and precise positioning and guidance of the mobile robot. Five reflective infrared sensors are placed around the robot for obstacle

The wireless IP camera is placed on the top layer of this robot. Through the internet remote control system, the live image of the IP camera on the robot can be transferred to the remote client user. With this internet remote control system, the remote client user can monitor the elderly people or the home security condition. On the aid of this system, remote family member can control the robot and talk to the elderly. This intelligent robot also can deliver the medicine or remind to measure the blood pressure or blood sugar on time. We hope this intelligent robot can be a housekeeper or family guard to protect our elderly people or our

remind to measure and record the blood pressure or blood sugar of the elderly on time

guiding blind people, as well as robotic aids for the elderly.

remembering to take medicine or measure blood pressure on time.

The functions of the proposed robot are illustrated as follows:

deliver medicine or food on time

will worsen in the future. (Nicholas Roy et al. , 2000)

**1. Introduction** 

avoidance.

family.

**Mobile Home Care Robot** 


## **The Development of the Omnidirectional Mobile Home Care Robot**

Jie-Tong Zou

 *Department of Aeronautical Engineering, National Formosa University Taiwan, R.O.C.* 

### **1. Introduction**

26 Will-be-set-by-IN-TECH

344 Mobile Robots – Current Trends

Olson, E. (2009). Recognizing places using spectrally clustered local matches, *Robotics and*

Rao, C. R. (1962). Use of discriminant and allied functions in multivariate analysis,

Rao, C. R. (1982). Diversity: its measurement, decomposition, apportionment and analysis,

Sánchez-Yáñez, R. E., Kurmyshev, E. V. & Fernández, A. (2003). One-class texture classifier in the CCR feature space, *Pattern Recognition Letters* 24(9-10): 1503–1511. Se, S., Lowe, D. G. & Little, J. J. (2005). Vision-based global localization and mapping for

Tardós, J. D., Neira, J., Newman, P. M. & Leonard, J. J. (2002). Robust mapping and localization

Tax, D. (2001). *One-class classification; Concept-learning in the absence of counter-examples*, PhD

Thrun, S. (1999). Learning metric-topological maps for indoor mobile robot navigation,

Thrun, S. & Bücken, A. (1996a). Integrating grid-based and topological maps for mobile robot

Thrun, S. & Bücken, A. (1996b). Learning maps for indoor mobile robot navigation, *Technical*

Wang, Q. & Lopes, L. S. (2005). *Emerging Solutions for Future Manufacturing Systems*, Vol.

Williams, B., Cummins, M., Neira, J., Newman, P., Reid, I. & Tardós, J. (2009). A comparison

in indoor environments using sonar data, *The International Journal of Robotics Research*

navigation, *Proceedings of the Thirteenth National Conference on Artificial Intelligence*,

159, Springer Boston, chapter One-Class Learning for Human-Robot Interaction,

of loop closing techniques in monocular SLAM, *Robotics and Autonomous System*

*Sankhya. The Indian Journal of Statistics, Series A ¯* 44: 1–22.

mobile robots, *IEEE Transactions on Robotics* 21: 364–375.

Thrun, S., Burgard, W. & Fox, D. (2005). *Probabilistic Robotics*, MIT Press.

*Autonomous System* 57: 1157–1172.

thesis, Delft University of Technology.

*Artificial Intelligence* pp. 21–71.

*report*, Carnegie Mellon University.

*Sankhya-Serie A* 24: 149–154.

21(4): 311–330.

pp. 944–950.

pp. 489–498.

57: 1157–1172.

In the last few years, intelligent robots have been successfully fielded in hospitals (King, S. & Weiman, C., 1990), museums (Burgard, W. et al., 1999), and office buildings/department stores (Endres, H. et al., 1998), where they perform cleaning services, deliver, educate, or entertain (Schraft, R. & Schmierer, G., 2005). Robots have also been developed for guiding blind people, as well as robotic aids for the elderly.

Today, the number of elderly in need of care is increasing dramatically. As the baby-boomer generation approaches retirement age, this number will grow significantly. Current living conditions for the majority of elderly people are already unsatisfactory, and the situation will worsen in the future (Nicholas Roy et al., 2000).

Rapid improvements in living standards and health care have resulted in an aging population, and more and more elderly people do not receive good care from their families or caregivers. Intelligent service robots may assist such people in their daily living activities. Robotic aids for the elderly have been developed, but many of them are purely mechanical aids (Song, W.-K. et al., 1998; Dario, P. et al., 1999; Takahashi, Y. et al., 1999). An intelligent service robot can assist elderly people with many tasks, such as reminding them to take medicine or measure their blood pressure on time.

The main objective of this chapter is to develop an omnidirectional mobile home care robot. This service robot is equipped with an indoor positioning system, which is used for rapid and precise positioning and guidance of the mobile robot. Five reflective infrared sensors are placed around the robot for obstacle avoidance.

A wireless IP camera is placed on the top layer of the robot. Through the internet remote control system, the live image from the IP camera can be transferred to a remote client user, who can monitor the elderly or the home security condition. With the aid of this system, remote family members can control the robot and talk to the elderly. The robot can also deliver medicine or remind the elderly to measure their blood pressure or blood sugar on time. We hope this intelligent robot can serve as a housekeeper or family guard to protect our elderly people and our families.

The functions of the proposed robot are as follows:

- deliver medicine or food on time
- remind the elderly to do something important
- remind the elderly to measure and record their blood pressure or blood sugar on time
- send a short message automatically under an emergency condition
- assist the elderly to stand or walk
- let remote family members control the robot and talk to the elderly through the remote control system

Fig. 1. Hardware structure of the omnidirectional mobile home care robot (PC-based controller with I/O card, three motor drivers and DC servo motors, indoor positioning system, five infrared sensors, GSM modem, wireless IP camera, human-machine interface, wireless network, switching power supply and 24 V battery).

The hardware structure of the omnidirectional mobile home care robot is shown in Fig. 1. A PC-based controller is used to control three DC servo motors through their motor drivers. The indoor positioning system is used for rapid and precise positioning and guidance of the mobile robot. Five reflective infrared sensors are connected to an I/O card for sensor data acquisition. The GSM modem can send a short message automatically under an emergency condition, and the live image of the wireless IP camera on the robot can be transferred to the remote client user. The subsystems of this robot are explained in the following sections.

### **2. The robot mechanism**

The proposed omnidirectional mobile home care robot is shown in Fig. 2 and Fig. 3. The main body of this robot consists of five layers of hexagonal aluminum alloy board. Many wheeled mobile robots are equipped with two differential driving wheels. Since these robots possess 2 degrees of freedom (DOFs), they can rotate about any point, but cannot perform holonomic motion, including sideways motion (Jae-Bok Song & Kyung-Seok Byun, 2006). To increase the mobility of this service robot, three omni-directional wheels driven by three DC servo motors are assembled on the robot platform (see Fig. 4). An omnidirectional mobile robot can move in an arbitrary direction without changing the direction of its wheels.

Fig. 2. Structure of the omnidirectional mobile home care robot (wireless IP camera, emergency stop, handlebar, touch screen, indoor positioning module, reflective infrared sensors, omni-directional wheels, power monitoring system, and a tray for medicine and a blood pressure gauge).

Fig. 3. Photo of the omnidirectional mobile home care robot.

Three-wheeled omni-directional mobile robots are capable of 3-DOF motion by driving three independent actuators (Carlisle, B., 1983; Pin, F. & Killough, S., 1999), but they may have stability problems due to the triangular contact area with the ground, especially when traveling on a ramp with a high center of gravity owing to the payload they carry.

Fig. 4. Robot platform with three omni-directional wheels.

Fig. 5. (a) Structure of the omni-directional wheel; (b) motor layout of the robot platform.

Fig. 5(a) shows the structure of the omni-directional wheel, and Fig. 5(b) shows the motor layout of the robot platform. The relationship between motor speed and robot moving speed is:

$$\begin{aligned}
V_1 &= \omega_1 r = V_x + \omega_p R \\
V_2 &= \omega_2 r = -0.5\,V_x + 0.867\,V_y + \omega_p R \\
V_3 &= \omega_3 r = -0.5\,V_x - 0.867\,V_y + \omega_p R
\end{aligned} \tag{1}$$

where: $V_i$ = velocity of wheel $i$; $\omega_i$ = rotation speed of motor $i$; $\omega_p$ = rotation speed of the robot; $r$ = radius of the wheel; $R$ = distance from each wheel to the center of the platform.

As shown in Fig. 6, 3D CAD software (SolidWorks) was used to design the robot platform. In Fig. 7, an omni-directional wheel composed of passive rollers or balls is driven by a DC servo motor. Fig. 8 is a photo of the robot platform with three omni-directional wheels. The omni-directional robot platform can move in an arbitrary direction and rotate about any point, which enhances the mobility of the mobile robot.

Fig. 6. 3D CAD software was used to design the robot platform.

Fig. 7. An omni-directional wheel driven by a DC servo motor.

Fig. 8. Photo of the robot platform with three omni-directional wheels.
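Equation (1) can be coded directly; the sketch below is illustrative only (the chapter's actual software was written in Visual BASIC), and the numeric values of `r` and `R` in the example are made-up, not the robot's real dimensions.

```python
def wheel_speeds(vx, vy, omega_p, r, R):
    """Motor rotation speeds (rad/s) for a desired platform motion, per Eq. (1).

    vx, vy  : platform velocity components (m/s)
    omega_p : platform rotation speed (rad/s)
    r       : wheel radius (m)
    R       : distance from each wheel to the platform centre (m)
    """
    v1 = vx + omega_p * R                      # wheel 1
    v2 = -0.5 * vx + 0.867 * vy + omega_p * R  # wheel 2
    v3 = -0.5 * vx - 0.867 * vy + omega_p * R  # wheel 3
    # Each wheel speed satisfies V_i = omega_i * r, so omega_i = V_i / r.
    return v1 / r, v2 / r, v3 / r

# Example: pure rotation at 1 rad/s turns all three wheels equally.
w1, w2, w3 = wheel_speeds(0.0, 0.0, 1.0, r=0.05, R=0.2)
```

For pure translation along x, the example reproduces the sign pattern of Eq. (1): wheel 1 runs forward while wheels 2 and 3 run backward at half speed.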

Fig. 9. Kinematic diagram of the robot platform with three omni-directional wheels.

Fig. 9 is the kinematic diagram of the robot platform with three omni-directional wheels. The inverse kinematic equations of the platform are:

$$\begin{aligned}
R\dot{\theta}_1 &= -\sin(\delta+\phi)\,\dot{x}_w + \cos(\delta+\phi)\,\dot{y}_w + L_1\dot{\phi} \\
R\dot{\theta}_2 &= -\sin(\delta-\phi)\,\dot{x}_w - \cos(\delta-\phi)\,\dot{y}_w + L_1\dot{\phi} \\
R\dot{\theta}_3 &= \cos(\phi)\,\dot{x}_w + \sin(\phi)\,\dot{y}_w + L_2\dot{\phi}
\end{aligned} \tag{2}$$

where: $L_i$ = distance from the center of the platform to omni-directional wheel $i$; $\phi$ = orientation angle with respect to the world coordinate frame $[x_w, y_w]$; $\theta_i$ = rotation angle of omni-directional wheel $i$.

The inverse Jacobian matrix can be derived from the above equations:

$$\begin{bmatrix} \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\theta}_3 \end{bmatrix} =
\frac{1}{R}
\begin{bmatrix}
-\sin(\delta+\phi) & \cos(\delta+\phi) & L_1 \\
-\sin(\delta-\phi) & -\cos(\delta-\phi) & L_1 \\
\cos\phi & \sin\phi & L_2
\end{bmatrix}
\begin{bmatrix} \dot{x}_w \\ \dot{y}_w \\ \dot{\phi} \end{bmatrix} \tag{3}$$
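Evaluating the inverse Jacobian of Eq. (3) is a few lines of code. The sketch below is illustrative; the layout angle `delta` and the lengths `L1`, `L2`, `R` in the example are assumed values, not the robot's actual geometry.

```python
import math

def inverse_kinematics(xw_dot, yw_dot, phi_dot, phi, delta, L1, L2, R):
    """Wheel rotation rates from world-frame velocities, per Eqs. (2)-(3).

    phi    : platform orientation in the world frame (rad)
    delta  : wheel layout angle of the platform (rad)
    L1, L2 : distances from the platform centre to the wheels
    R      : wheel radius
    """
    # Rows of the inverse Jacobian, one per omni-directional wheel.
    j = [
        [-math.sin(delta + phi),  math.cos(delta + phi), L1],
        [-math.sin(delta - phi), -math.cos(delta - phi), L1],
        [ math.cos(phi),          math.sin(phi),         L2],
    ]
    v = (xw_dot, yw_dot, phi_dot)
    return [sum(j[i][k] * v[k] for k in range(3)) / R for i in range(3)]

# Example: pure rotation, so each wheel rate is L_i * phi_dot / R.
rates = inverse_kinematics(0.0, 0.0, 1.0, phi=0.0,
                           delta=math.radians(30), L1=0.2, L2=0.2, R=0.05)
```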

#### **3. Indoor localization system**

As shown in Fig. 10, an indoor localization system (http://www.hagisonic.com/) based on passive IR landmark technology was used on the proposed service mobile robot. The localization sensor module (see Fig. 11) analyzes the infrared image reflected from a passive landmark (see Fig. 12) carrying a characteristic ID, and outputs the position and heading angle of the mobile robot with very precise resolution at high speed. The position repetition accuracy is better than 2 cm, and the heading angle accuracy is 1 degree.


Fig. 10. Indoor localization system (Hagisonic co.)

Fig. 11. Localization sensor module (Hagisonic co.)


Fig. 12. IR passive landmark (Hagisonic co.)


### **4. Robot control system**

As shown in Fig. 13, a PC-based controller is used to control the mobile robot. Through an RS232 interface, the PC-based controller commands three motor drivers, which drive three DC servo motors. The PC-based controller and its solid-state disk (SSD) are shown in Fig. 14. Solid-state disks have no moving parts and consequently deliver a level of reliability in data storage that hard drives cannot approach. In a mobile robot application such as this, which is exposed to shock and vibration, the reliability offered by SSDs is vitally important.

Fig. 13. PC based controller and motor drivers.

Fig. 14. PC based controller and Solid State Disk (SSD)
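As an illustration of commanding a motor driver over RS232, a controller might frame an ASCII command and write it to the serial port. The framing below is a hypothetical placeholder, not the actual protocol of the drivers used in this robot; each driver vendor defines its own command set.

```python
def frame_speed_command(motor_id: int, rpm: int) -> bytes:
    """Build a hypothetical ASCII speed command for one motor driver.

    The PC-based controller would write the returned bytes to the RS232
    port of driver 1, 2 or 3. The "M<id> V<rpm>" framing is invented here
    purely for illustration.
    """
    return f"M{motor_id} V{rpm}\r".encode("ascii")

# Example frame for motor 1 at 120 rpm:
frame = frame_speed_command(1, 120)  # b'M1 V120\r'
```

With a library such as pyserial, the bytes would be sent with `serial.Serial(port).write(frame)`.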

### **5. Obstacle avoidance system**

Obstacle avoidance is a robotic discipline whose objective is to move vehicles on the basis of sensory information. As shown in Fig. 15, five reflective infrared sensors (see Fig. 16) are placed around the robot for obstacle avoidance. The five infrared sensors are numbered from 1 to 5 in a counterclockwise direction. If an obstacle is in front of the robot or on its left-hand side, the robot turns right; if the obstacle is on the right-hand side, it turns left.
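The turning rule above can be written as a small decision function. This is a sketch only: the mapping of sensor numbers to facing directions (1–2 right, 3 front, 4–5 left) is an assumption for illustration, since the chapter does not specify which numbered sensor faces which way.

```python
def avoidance_action(ir):
    """Turning decision from five reflective IR sensors.

    ir : dict mapping sensor number (1..5, counterclockwise) to True when an
         obstacle is detected. Assumed layout: sensors 1-2 face right,
         sensor 3 faces front, sensors 4-5 face left.
    """
    front_or_left = ir.get(3, False) or ir.get(4, False) or ir.get(5, False)
    right = ir.get(1, False) or ir.get(2, False)
    if front_or_left:
        return "turn right"   # obstacle ahead or on the left-hand side
    if right:
        return "turn left"    # obstacle on the right-hand side
    return "move forward"     # path is clear

action = avoidance_action({3: True})  # obstacle straight ahead
```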


Fig. 15. Five reflective infrared sensors are placed around the robot on the bottom layer

Fig. 16. Five reflective infrared sensors

### **6. Human-machine interface (HMI)**

The human-machine interface (HMI) includes touch screen, speaker, and appliances voice control system. Touch screen can be regarded as input and display interface. Speaker can

Fig. 17. Human-machine interface (HMI)

The Development of the Omnidirectional Mobile Home Care Robot 355

The proposed service robot can remind the elderly to measure and record the blood pressure or blood sugar on time. As shown in Fig. 19, the blood pressure or blood sugar data can be displayed and recorded in this interface. If blood pressure or blood sugar data is too high, the GSM modem will send a short message automatically to the remote

In order to understand the stability of three wheeled omni-directional mobile robot, an experiment for the straight line path error had been discussed (Jie-Tong Zou, et al., 2010). From these experimental results, when the robot moves faster or farther, the straight line error will increase. We make some experiments to measure several different paths error of

In this experiment, the proposed mobile robot will move along a rectangular path with or without the guidance of the indoor localization system. As shown in Fig. 20, the mobile robot moves along a rectangular path (a→b→c→d→a) without the guidance of the localization system. The localization system is only used to record the real path in this

In Fig. 20, solid line represents the ideal rectangular path, dot lines (■:Test1, ▲:Test2) are the real paths of the mobile robot without the guidance of the localization system. The vertical paths (path b→c and d→a) have larger path error. Finally, the mobile robot cannot return to

**8.1 Rectangular path error test for the omni-directional robot platform** 

families.

Fig. 19. Interface for blood pressure measurement

the proposed mobile robot in this research.

**8. Experimental results** 

experiment.

the starting point "a".

produce the voice of robot. Appliances voice control system can let users or the elderly to remote control the appliances by voice command.

### **7. Software interface**

The software interface of the proposed robot is developed by Visual BASIC program. As shown in Fig. 18, the main interface of the proposed service robot can be divided into the following six regions:


Fig. 18. Main interface of the proposed robot

354 Mobile Robots – Current Trends

1. **Home map region**: The home map and the target positions are displayed in this region. With the information from the indoor positioning system, the position and heading angle of the mobile robot can also be shown here.

2. **Robot targets setting region**: First, as a teaching stage, a user controls the robot by joystick or another interface and teaches the targets to the robot; the position and heading angle of the mobile robot at each target place can be recorded into a file. Next, as a playback stage, the robot runs autonomously on the path instructed during the teaching stage.

3. **Positioning system information region**: With the aid of the indoor positioning system, the mobile robot position (X, Y) and heading angle can also be shown in this region.

4. **Infrared sensors information**: Five reflective infrared sensors are placed around the robot for obstacle avoidance; they are connected to an I/O card for sensor data acquisition. Obstacles in front of the mobile robot can be displayed in this region.

5. **Robot control interface**: In this region, users can control the mobile robot to move in an arbitrary direction or rotate about any point.

6. **Remote control information**: With the internet remote control system, the remote client user can monitor the elderly person or the home security condition. With the aid of this system, a remote family member can control the robot and talk to the elderly. The remote user IP and the remote control command can also be shown in this region.
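The teach-and-playback scheme of the targets setting region can be sketched as follows. This is a hedged illustration, not the authors' implementation: the JSON-lines file format and the field names `x`, `y`, `heading` are assumptions.

```python
# Hedged sketch of the teach-and-playback scheme described above: during
# teaching, each target pose (x, y, heading) is appended to a file; during
# playback, the recorded targets are replayed in teaching order.
import json, os, tempfile

def teach(path, pose):
    """Append the current robot pose as one JSON line (teaching stage)."""
    with open(path, "a") as f:
        f.write(json.dumps(pose) + "\n")

def playback(path):
    """Read back the recorded targets in teaching order (playback stage)."""
    with open(path) as f:
        return [json.loads(line) for line in f]

targets_file = os.path.join(tempfile.mkdtemp(), "targets.jsonl")
teach(targets_file, {"x": 120.0, "y": 45.0, "heading": 90.0})
teach(targets_file, {"x": 300.0, "y": 45.0, "heading": 0.0})
print([p["heading"] for p in playback(targets_file)])
```

An append-only log keeps the teaching stage simple: re-teaching a route only requires starting a new file.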

The proposed service robot can remind the elderly to measure and record their blood pressure or blood sugar on time. As shown in Fig. 19, the blood pressure or blood sugar data can be displayed and recorded in this interface. If a reading is too high, the GSM modem automatically sends a short message to the remote family members.

Fig. 19. Interface for blood pressure measurement
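The alert logic above can be sketched in a few lines. The thresholds below are illustrative assumptions, not the authors' values, and the returned alert string stands in for the short message handed to the GSM modem:

```python
# Illustrative sketch of the high-reading alert described above: if a
# blood-pressure reading exceeds a threshold, an alert message is produced
# for the GSM modem to send. Thresholds are assumptions, not the authors'.
SYSTOLIC_LIMIT = 140   # mmHg, illustrative
DIASTOLIC_LIMIT = 90   # mmHg, illustrative

def check_reading(systolic, diastolic):
    """Return an alert string if the reading is too high, else None."""
    if systolic > SYSTOLIC_LIMIT or diastolic > DIASTOLIC_LIMIT:
        return f"ALERT: blood pressure {systolic}/{diastolic} mmHg"
    return None

print(check_reading(150, 85))
print(check_reading(120, 80))
```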

### **8. Experimental results**

In order to understand the stability of the three-wheeled omni-directional mobile robot, an experiment on straight-line path error was discussed in (Jie-Tong Zou, et al., 2010). Those experimental results showed that when the robot moves faster or farther, the straight-line error increases. In this research, we conducted experiments to measure the error of the proposed mobile robot along several different paths.

### **8.1 Rectangular path error test for the omni-directional robot platform**

In this experiment, the proposed mobile robot moves along a rectangular path with or without the guidance of the indoor localization system. As shown in Fig. 20, the mobile robot first moves along a rectangular path (a→b→c→d→a) without the guidance of the localization system; here the localization system is only used to record the real path.

In Fig. 20, the solid line represents the ideal rectangular path, and the dotted lines (■: Test 1, ▲: Test 2) are the real paths of the mobile robot without the guidance of the localization system. The vertical paths (b→c and d→a) have larger path errors, and the mobile robot cannot return to the starting point "a".
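One way to quantify the rectangular path error discussed above is to take, for each logged position, its distance to the nearest segment of the ideal rectangle. This is a minimal sketch under assumed corner coordinates, not the authors' measurement procedure:

```python
# Minimal sketch: maximum deviation of a logged path from an ideal closed
# rectangle, computed as max over points of the distance to the nearest
# rectangle segment. Corner coordinates and the log are illustrative.
import math

def point_segment_distance(p, a, b):
    """Distance from point p to segment ab (2-D tuples, in cm)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def path_error(recorded, corners):
    """Maximum deviation of the recorded path from the ideal closed polygon."""
    segs = list(zip(corners, corners[1:] + corners[:1]))
    return max(min(point_segment_distance(p, a, b) for a, b in segs) for p in recorded)

rect = [(0, 0), (200, 0), (200, 100), (0, 100)]        # ideal a→b→c→d (cm)
log = [(0, 2), (100, 5), (200, 3), (198, 60), (90, 104)]
print(round(path_error(log, rect), 1))
```

Reporting only the maximum deviation matches the "maximum path error is under 10 cm" criterion used in the text.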

The Development of the Omnidirectional Mobile Home Care Robot 357


Fig. 20. Rectangular path error without the guidance of the localization system.

As shown in Fig. 21, the mobile robot moves along the rectangular path (a→b→c→d→a) with the guidance of the localization system. The solid line represents the ideal rectangular path, and the dotted lines (■: Test 1, ▲: Test 2) are the real paths of the mobile robot with guidance. With the guidance of the localization system, the mobile robot can pass through the corner points a, b, c and d, and the path error in Fig. 21 is smaller than that in Fig. 20: the maximum path error is under 10 cm. Finally, the mobile robot can return to the starting point "a", so the rectangular path is closed at "a".

Fig. 21. Rectangular path error with the guidance of the localization system.

#### **8.2 Circular path error test for the omni-directional robot platform**


Fig. 22. Circular angle (θ) of the robot


Fig. 23. Circular path without the guidance of the localization system.

The omni-directional mobile robot can move in an arbitrary direction without changing the direction of its wheels. In this experiment, the proposed mobile robot moves along a circular path with or without the guidance of the indoor localization system. As shown in Fig. 22, the mobile robot moves along a circular path without the guidance of the localization system; the robot heading angle is 90° (upwards) during this test. The localization system is only used to record the real path in this experiment.

The circular path without the guidance of the localization system is shown in Fig. 23. The shape of the real path is similar to a circle, but the starting point and the end point do not overlap. The heading angle error at different circular angles (θ) of the robot is shown in Fig. 24; the maximum heading angle error is about 8°.

The circular path with the guidance of the localization system is shown in Fig. 25. The shape of this path is closer to a circle, and the starting point and the end point overlap. The heading angle error at different circular angles (θ) of the robot is shown in Fig. 26; the maximum heading angle error is about ±1°. From these results, the localization system can successfully maintain the robot heading angle along a circular path.
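A heading-hold behaviour of the kind this experiment demonstrates can be sketched as a simple proportional correction: while the robot translates around the circle, an angular-velocity command pushes the measured heading back toward the 90° reference. The gain and the degree-based interface are assumptions, not the authors' controller:

```python
# Hedged sketch of heading-hold: a proportional angular-velocity command
# that reduces the error between the measured heading and the reference.
# The gain value is an illustrative assumption.
def wrap_deg(a):
    """Wrap an angle to (-180, 180] degrees."""
    return (a + 180.0) % 360.0 - 180.0

def heading_correction(measured_deg, reference_deg=90.0, gain=0.5):
    """Angular-velocity command (deg/s) that reduces the heading error."""
    return -gain * wrap_deg(measured_deg - reference_deg)

print(heading_correction(98.0))   # 8° error, as in the unguided test
print(heading_correction(90.0))   # on-reference: no correction
```

Wrapping the error keeps the correction well-behaved even if the logged heading crosses the ±180° boundary.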

Fig. 24. Heading angle error at different circular angles (θ) of the robot

Fig. 25. Circular path with the guidance of the localization system.

Fig. 26. Heading angle error at different circular angles (θ) of the robot

### **8.3 Functions test for robot taking care of the elderly**

### **8.3.1 Delivering medicine or food on time**

Elderly people often forget to take medicine or measure blood pressure on time, which is harmful to their health. The proposed robot can deliver medicine or food at the preset time, and it can also remind the elderly to take medicine on time.

### **8.3.2 Assist the elderly to stand or walk**

As shown in Fig. 27, a handlebar is placed on the rear side of the robot. With the assistance of the robot, the elderly can hold the handlebar to stand up or walk: the user sets the target place on the touch screen and holds on to the handlebar, and the mobile robot guides the user to the target place. Four buttons (Start, Stop, Speed up, Slow down) on the handlebar let the user adjust the robot speed to match his or her walking speed.

Fig. 27. Robot can assist the elderly to stand or walk
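The four-button speed interface described above can be sketched as a small state machine. The step size and maximum speed below are illustrative assumptions, not the authors' firmware values:

```python
# Hedged sketch of the handlebar buttons (Start, Stop, Speed up, Slow down):
# each press updates a walking-assist speed clamped between a minimum step
# and a maximum. STEP and MAX are illustrative assumptions.
class HandlebarSpeed:
    STEP = 0.05   # m/s per press, illustrative
    MAX = 0.6     # m/s, illustrative walking pace

    def __init__(self):
        self.speed = 0.0
        self.moving = False

    def press(self, button):
        if button == "Start":
            self.moving = True
            self.speed = max(self.speed, self.STEP)
        elif button == "Stop":
            self.moving = False
            self.speed = 0.0
        elif button == "Speed up" and self.moving:
            self.speed = min(self.MAX, self.speed + self.STEP)
        elif button == "Slow down" and self.moving:
            self.speed = max(self.STEP, self.speed - self.STEP)
        return self.speed

h = HandlebarSpeed()
for b in ["Start", "Speed up", "Speed up", "Slow down"]:
    h.press(b)
print(round(h.speed, 2))
```

Clamping "Slow down" at one step rather than zero keeps Stop as the only way to halt, which avoids surprising the user mid-walk.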

### **8.3.3 Send a short message automatically under emergency condition**

The blood pressure measurement and short-message sending interface is shown in Fig. 28. The robot reminds the elderly to measure blood pressure or blood sugar on time. When the elderly person has finished measuring blood pressure (the blood pressure gauge is shown in Fig. 29), the data are recorded in the robot's computer. If the blood pressure is too high, the robot automatically sends a short message to the remote family members. In an emergency, for example if the elderly person falls down, he or she can press the "Emergency" button, and the robot will also send a short message so the family can deal with the emergency condition.

Fig. 28. Blood pressure measurement and short message sending interface

Fig. 29. Blood pressure gauge

### **8.3.4 Remote control system**


The wireless IP camera is placed on the top layer of the robot. Through the internet remote control system, the live image from the IP camera can be transferred to the remote client user, who can monitor the elderly person or the home security condition. With the aid of this system, a remote family member can control the robot and talk to the elderly.

Fig. 30. Remote control interface

### **9. Conclusion**

Today, the number of elderly people in need of care is increasing dramatically, and more and more of them do not receive good care from family or caregivers. Intelligent service robots may be able to assist such people in their daily living activities.

The main objective of this chapter is to present an omnidirectional mobile home care robot. This service mobile robot is equipped with an indoor positioning system, used for rapid and precise positioning and guidance of the mobile robot. Five reflective infrared sensors are placed around the robot for obstacle avoidance.

In order to verify the stability of the three-wheeled omni-directional mobile robot, the authors conducted experiments in this research to measure the rectangular and circular path errors of the proposed mobile robot.

Firstly, the mobile robot moves along a rectangular path without the guidance of the localization system. The experimental paths have larger path errors, and the mobile robot cannot return to the starting point. To overcome this problem, the indoor localization system was used to compensate for the path error. With the guidance of the localization system, the maximum path error is under 10 cm, and the mobile robot can pass through the corner points and return to the starting point: the rectangular path is closed at the starting point.

Secondly, the proposed mobile robot can move along a circular path with or without the guidance of the indoor localization system. The circular path without the guidance of the localization system cannot be closed. The maximum heading angle error is about 8°.


The circular path with the guidance of the localization system is closer to a circle, and the starting point and the end point overlap. The maximum heading angle error is about ±1°. From these experimental results, the localization system can successfully maintain the robot heading angle along a circular path.

With the aid of the remote control system, a remote family member can control the robot and talk to the elderly. This intelligent robot can also deliver medicine and remind the user to measure blood pressure or blood sugar on time. We hope this intelligent robot can serve as a housekeeper or family guard to protect our elderly people and our families.

#### **10. References**

King, S. and Weiman, C. (1990), "Helpmate autonomous mobile robot navigation system", Proceedings of the SPIE Conference on Mobile Robots, Vol. 2352, pp. 190–198.

Burgard, W.; Cremers, A.; Fox, D.; Hähnel, D.; Lakemeyer, G.; Schulz, D.; Steiner, W.; and Thrun, S. (1999), "Experiences with an interactive museum tour-guide robot", Artificial Intelligence.

Endres, H.; Feiten, W.; and Lawitzky, G. (1998), "Field test of a navigation system: Autonomous cleaning in supermarkets", Proceedings of the 1998 IEEE International Conference on Robotics & Automation (ICRA 98).

Schraft, R. and Schmierer, G. (1998), "Serviceroboter", Springer Verlag. In German.

Nicholas Roy, Gregory Baltus, Dieter Fox, Francine Gemperle, Jennifer Goetz, Tad Hirsch, Dimitris Margaritis, Michael Montemerlo, Joelle Pineau, Jamie Schulte and Sebastian Thrun (2000), "Towards Personal Service Robots for the Elderly", Workshop on Interactive Robots and Entertainment (WIRE 2000).

Song, W.-K.; Lee, H.-Y.; Kim, J.-S.; Yoon, Y.-S.; and Bien, Z. (1998), "Kares: Intelligent rehabilitation robotic system for the disabled and the elderly", Proceedings of the 20th Inter. Conf. of the IEEE Engineering in Medicine and Biology Society, Vol. 5, pp. 2682–2685.

Dario, P.; Laschi, C.; and Guglielmelli, E. (1999), "Design and experiments on a personal robotic assistant", Advanced Robotics 13(2), pp. 153–169.

Takahashi, Y.; Kikuchi, Y.; Ibaraki, T.; and Ogawa, S. (1999), "Robotic food feeder", Proceedings of the 38th International Conference of SICE, pp. 979–982.

Jae-Bok Song and Kyung-Seok Byun (2006), "Design and Control of an Omnidirectional Mobile Robot with Steerable Omnidirectional Wheels", Mobile Robots, Moving Intelligence, pp. 576.

Carlisle, B. (1983), "An Omnidirectional Mobile Robot", Development in Robotics, Kempston, pp. 79–87.

Pin, F. & Killough, S. (1999), "A New Family of Omnidirectional and Holonomic Wheeled Platforms for Mobile Robot", IEEE Transactions on Robotics and Automation, Vol. 15, No. 6, pp. 978–989.

http://www.hagisonic.com/

Jie-Tong Zou, Kuo L. Su and Feng-Chun Chiang (2010), "The development of the Omnidirectional Home Care Mobile Robot", The Fifteenth International Symposium on Artificial Life and Robotics (AROB 15th '10), B-Con Plaza, Beppu, Oita, Japan, Feb. 4–6.

## **Design and Prototyping of Autonomous Ball Wheel Mobile Robots**

H. Ghariblu, A. Moharrami and B. Ghalamchi

*Mech. Eng. Dept., Mechatronics Lab., Zanjan University, Iran*

### **1. Introduction**

Mobile robots are platforms that are able to move autonomously. Nowadays, their use has increased in areas such as autonomous vehicles, flexible manufacturing and service environments, and lunar exploration. A large number of wheeled or tracked platform mechanisms have been studied and developed to prove their mobility and capability as autonomous robot vehicles (Pin & Killough, 1994), (Kim, et al., 2003), (Wada, et al., 2000), (Jung, et al., 2000), (Mori, et al., 1999). For large and heavy outdoor robots, four-wheel car-like driving mechanisms or skid-steer platforms have traditionally been used. These vehicles are quite restricted in their motion (Jarvis, 1997), particularly when operating in tight environments.

In recent years, the study of nonholonomic systems has been an area of active research. Nonholonomic systems are characterized by nonintegrable rate constraints resulting from rolling contact or momentum conservation. Nonholonomic behaviors are sometimes introduced on purpose in the design of a mechanism in order to obtain certain characteristics and performances. One advantage offered by nonholonomic systems is the possibility of controlling a higher number of configurations than the number of actuators actually employed in the system, which is sometimes useful for reducing the system's weight and cost. However, the nonholonomic constraints cause complexities in trajectory planning and in designing control algorithms for feedback stabilization of the vehicle system: a suitable desired trajectory satisfying the constraints must be designed in order to control a nonholonomic mobile mechanism (Fierro & Lewis, 1997).
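The rolling constraint referred to above can be made concrete with the standard unicycle example (a generic textbook illustration, not a formula from this chapter). A wheel at position (x, y) with heading θ cannot slip sideways, which gives the constraint

```latex
\dot{x}\,\sin\theta - \dot{y}\,\cos\theta = 0
```

Because this relation restricts velocities but cannot be integrated into a constraint on (x, y, θ) alone, the vehicle has only two instantaneous degrees of freedom yet can still reach any planar configuration by maneuvering — exactly the trade-off described in the paragraph above.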

On the other hand, holonomic vehicles have been proposed, with their own advantages and disadvantages; for instance, a control strategy has been introduced that avoids the nonholonomic constraint of a wheel in order to implement a holonomic omnidirectional vehicle (Asada & Wada, 1998). Holonomic vehicles also have some problems in practical applications, such as low payload capability, complicated mechanisms and limited accuracy of motion (Ferriere & Raucent, 1998).

Several omnidirectional platforms have been realized by developing a specialized wheel or mobile mechanism. From this point of view, the specialized mechanisms suitable for constructing an omnidirectional mobile robot can be summarized as follows:

1. Steered wheel mechanism (Chung, et al., 2010), (Wada, et al., 2000).

Design and Prototyping of Autonomous Ball Wheel Mobile Robots 365

consists of springs and dashpots are added to each driving system. Meanwhile, specific design of the robot solved the trouble of small space between the balls chassis and moving

To overcome bearing problem standard ball and socket bearings implemented to ease the

The outline of the chapter is as following: Section 2, describes the mechanical and electrical construction of ball wheel robots consist of wheels driving mechanism, chasis and suspension system. Section 3 discusses on new elements of BWR-2 to overcome to drawbacks of BWR-1 robot. Sections 4 and 5 present kinematic and dynamic equations of both robots. Finally, in Section 6 with some simulation studies we show the ability of robot

This section introduces the mechanical structure of BWR-1 and troubles find out in that robot. Then we will discuss about modifications utilized in BWR-2 to solve these troubles. Fig. 1 shows schematic views of BWR-1. The construction of the BWR-1 consisted of a triangular platform and three identical ball wheels that were fixed with bearings between two upper and lower fixing plates. Ball wheels driving were realized by the actuated omniwheels mounted on a common base over the main base. The locomotion mechanism of this

(a)

(b) (c)

Fig. 1. First type Spherical wheel robot, a) Isometric View, b) Side View, c) Top View

base.

balls motion.

in traversing predefined desired paths.

**2. The structure of ball wheel robot** 

robot consists of three independently actuated ball wheels.


From the workspace view point, mobile robots must be able to reach at any position on its plane of motion, with any orientation. Thus, their frame must have the three Independent Coordinates of the general plane motion of the rigid body. This can be achieved by means of 2 DOF provided the robot frame kinematics is non-holonomic, but maneuvering is required. Maneuvering can be avoided whenever the mobile robot has 3 DOF. Mobility is enhanced by the use of omni-directional instead of conventional wheels. Omni-directional movement is the ability to travel in any path at any specified orientation. This type of drive system combines translation navigation along any desired path and desired orientation. This ability cause many researchers to study in the subject of omni-directional (holonomic) mobile robots and vehicles.

Generally, a special type of wheels called as omni-wheels are used to achieve Omnidirectional movement. Omni-wheels are all based on the same general principle: while the wheel provides traction in the direction normal to the motor axis, it can roll passively in the direction of the motor axis. In order to achieve this, the wheel is built using smaller wheels attached along the outside edge of the wheel. Each wheel provides a torque in the direction normal to the motor axis and parallel to the floor.

The main drawback of such structures is vibrations induced into the complete robot system due to the successive shocks occurring when the contact with the ground shifts from one roller to the next. Other drawbacks of omni-wheels are limited load capacity and surmountable bump height that is limited by the diameter of the rollers, and not by the diameter of the wheels, as with conventional wheels. Therefore, these types of wheels are not appropriate to employ in outdoor applications. To overcome these difficulties, (West, & Asada, 1997), (Ostrovskaya, et al., 2000), and ( Asada & Wada, 1998) introduced omnidirectional ball wheel vehicles. All of these designs suffer from technical and functional issues include complex design, cost, nonholonomic structure or limited load capacity.

To overcome the above mentioned difficulties and based on our experiment in fabricating a ball wheel robot (BWR-1) (Ghariblu, 2010), the second generation of this robot (BWR-2) is developed (Ghariblu, et al., 2010). In the first generation, a new approach introduced that uses three spherical wheels as main wheels. This design was a good step to develop a new holonomic mobile robotic vehicle. But, after experimental tests, several difficulties were appeared. Most important problems in our former style robot were lack of suspension system, rigid structure of robot, non modular construction of driving system that leads to fixing all driving motor on a common rigid frame, and low effective space between main wheels chassis and floor. These problems are basis to weak operation of robot in passing through rough terrains.

Other drawback was the unwanted locking of ball wheels due to inappropriate structure of their supports to corrupt the overall robot motion on its desired path. To solve the above problems, second generation of a ball wheel mobile robot is designed and constructed.

To improve the robot motion in rough terrains several modification are made on our former robot. The first modification was the modular construction of driving system, which separately fabricated and assembled on this system. Also, proper suspension system consists of springs and dashpots are added to each driving system. Meanwhile, specific design of the robot solved the trouble of small space between the balls chassis and moving base.

To overcome bearing problem standard ball and socket bearings implemented to ease the balls motion.

The outline of the chapter is as following: Section 2, describes the mechanical and electrical construction of ball wheel robots consist of wheels driving mechanism, chasis and suspension system. Section 3 discusses on new elements of BWR-2 to overcome to drawbacks of BWR-1 robot. Sections 4 and 5 present kinematic and dynamic equations of both robots. Finally, in Section 6 with some simulation studies we show the ability of robot in traversing predefined desired paths.

### **2. The structure of ball wheel robot**

364 Mobile Robots – Current Trends

From the workspace viewpoint, mobile robots must be able to reach any position on their plane of motion, with any orientation. Thus, their frame must have the three independent coordinates of general plane motion of a rigid body. This can be achieved by means of 2 DOF provided the robot frame kinematics is non-holonomic, but maneuvering is then required. Maneuvering can be avoided whenever the mobile robot has 3 DOF. Mobility is enhanced by the use of omni-directional instead of conventional wheels. Omni-directional movement is the ability to travel along any path at any specified orientation; this type of drive system combines translation along any desired path with any desired orientation. This ability has led many researchers to study omni-directional (holonomic) mobile robots and vehicles. Several mechanisms have been proposed to obtain omni-directional motion, including:

2. Universal wheel mechanism (Song & Byun, 2004).
3. Ball wheel mechanism (Ostrovskaya et al., 2000), (West & Asada, 1997), (Ghariblu, 2010), (Ghariblu et al., 2010).
4. Orthogonal wheel mechanism (Pin & Killough, 1994).
5. Crawler mechanism (Tadakuma et al., 2009).
6. Offset steered wheel mechanism (Udengaard & Iagnemma, 2007).

Generally, a special type of wheel called the omni-wheel is used to achieve omni-directional movement. Omni-wheels are all based on the same general principle: while the wheel provides traction in the direction normal to the motor axis, it can roll passively in the direction of the motor axis. To achieve this, the wheel is built with smaller rollers attached along its outside edge. Each wheel provides a torque in the direction normal to the motor axis and parallel to the floor.

The main drawback of such structures is the vibration induced into the complete robot system by the successive shocks that occur when the contact with the ground shifts from one roller to the next. Other drawbacks of omni-wheels are limited load capacity and a surmountable bump height that is limited by the diameter of the rollers rather than the diameter of the wheel, as with conventional wheels. Therefore, these types of wheels are not appropriate for outdoor applications. To overcome these difficulties, (West & Asada, 1997), (Ostrovskaya et al., 2000) and (Asada & Wada, 1998) introduced omni-directional ball wheel vehicles. All of these designs suffer from technical and functional issues, including complex design, cost, nonholonomic structure or limited load capacity. To overcome the above-mentioned difficulties, and based on our experience in fabricating a ball wheel robot (BWR-1) (Ghariblu, 2010), the second generation of this robot (BWR-2) was developed (Ghariblu et al., 2010). The first generation introduced a new approach that uses three spherical wheels as main wheels. This design was a good step toward a new holonomic mobile robotic vehicle, but experimental tests revealed several difficulties. The most important problems of the former robot were the lack of a suspension system, the rigid structure of the robot, the non-modular construction of the driving system, which fixed all driving motors on a common rigid frame, and the low effective space between the main wheel chassis and the floor. These problems lay behind the robot's weak performance when passing through rough terrains.

Another drawback was the unwanted locking of the ball wheels, due to the inappropriate structure of their supports, which corrupted the overall robot motion along its desired path. To solve the above problems, the second generation of the ball wheel mobile robot was designed and constructed. To improve the robot's motion on rough terrain, several modifications were made to the former robot. The first modification was the modular construction of the driving system, which is fabricated separately and assembled onto the robot. A proper suspension system was also added.

This section introduces the mechanical structure of BWR-1 and the problems found in that robot; the modifications employed in BWR-2 to solve these problems are then discussed. Fig. 1 shows schematic views of BWR-1. The construction of BWR-1 consisted of a triangular platform and three identical ball wheels that were fixed with bearings between upper and lower fixing plates. The ball wheels were driven by actuated omni-wheels mounted on a common base over the main base. The locomotion mechanism of this robot consists of three independently actuated ball wheels.

Fig. 1. First ball wheel robot (BWR-1): a) isometric view, b) side view, c) top view

Design and Prototyping of Autonomous Ball Wheel Mobile Robots 367

After the design stage and construction of the first prototype, BWR-1, we determined some difficulties in the application of this vehicle in real environments, as follows:

a. In specific conditions, owing to the rigid structure of the robot, one of the robot wheels separated from the moving plane when passing over uneven ground. This results in unwanted deviation of the robot motion from its desired path.

b. The driving wheels and their transmission system were rigidly assembled on a common frame. This transmitted road irregularities, and acceleration and deceleration motions, to the main body, with a possible degradation of the vehicle's required tasks.

c. Unwanted locking of the ball wheels, due to the inappropriate structure of their supports, corrupted the overall robot motion.

The above-mentioned troubles in BWR-1 degrade its function when performing requested motions such as rotation and lateral movements.

To solve the above problems and achieve the characteristics required for high performance, its second generation, named BWR-2, was designed and constructed.

### **2.1 The architecture of BWR-2**

Here, the design specification of the new ball wheel omni-directional mobile robot BWR-2 is explained (Fig. 2). The objective of the BWR-2 design is to ensure that the new robot meets its design specifications and overcomes the weaknesses of BWR-1. Experiments performed on BWR-1 pointed us to the problems in the robot structure mentioned in the previous section.

Fig. 2. Two different views of BWR-2

BWR-2 is an omni-directional mobile platform. The omni-directional design improves its characteristics for high maneuverability; consequently, the robot can move along any desired path while simultaneously rotating about its geometrical axis. As with BWR-1, the new robot is equipped with three balls as main wheels, each driven by two omni-wheels that transfer the motor traction force to the ball. Moreover, three parallel suspension systems consisting of springs and dashpots are added to the BWR-2 structure. The maximum achievable speed of the robot is 3.4 m/s.

To drive the robot, a modular and mechanically independent driving system is employed (Fig. 3). Each driving system is driven by a DC motor with a reduction ratio of 12:1 and consists of a ball as the main wheel, two omni-wheels that transfer the motor traction force to the ball, and three parallel suspension systems consisting of springs and dashpots.

Fig. 3. BWR-2 modular driving system

A list of the robot's geometric and physical parameters is summarized in Table 1.

• **Total weight**: 20 kg, without batteries
• **Main base** — Dimension: equilateral triangle with 750 mm sides; Material: polyamide; Weight: 8 kg
• **Ball wheels** — Dimension: ϕ150 mm; Material: polyurethane resin; Weight: 0.25 kg
• **Omni-wheels** — Dimension: ϕ50 mm; Material: aluminum, peripheral wheels covered with plastic o-rings; Weight: 0.1 kg
• **Geared DC motor** — 120 rpm, 45 W, 12 V

Table 1. Geometric and physical characteristics

Fig. 4 shows the electrical architecture of the BWR. The BWR has an onboard notebook computer. The robot can be controlled remotely through a wireless connection or using a Microsoft SideWinder USB game pad connected to another USB port of the notebook. Game pad buttons control the movement and rotation of the robot. Three serial/USB converters are used to connect to the robot base. Software drives the motors through the USB hub and the serial/USB converters. The robot is powered by two 24 V, 7 Ah lead-acid batteries. Motor drivers activate the DC motors, each with two omni-wheels assembled on its shaft to transfer the motor traction to the ball wheels.

Fig. 4. Electrical architecture of BWR

### **3. Advantages of BWR-2 robot**

Some significant modifications were performed to ensure better function of BWR-2 with respect to BWR-1, as below:

• Corrugated polyurethane resin spherical balls are utilized to generate a relatively high coefficient of friction with respect to the driving omni-wheels and the moving surface.

• The handmade bearings of the ball wheels were replaced with standard ball and socket bearings. With this modification, the locking problem between the bearings and the ball wheels is properly solved; consequently, the ball wheels are able to rotate freely according to the desired motion of the vehicle. Each ball is held by six ball and socket bearings fixed on two sides of the frame.

• To provide adequate driving force between the driving omni-wheels and each ball wheel, doubled omni-wheels are used, with a disk on the outside edge of the ball wheel to make the motion stable (Fig. 3).

• Finally, unlike BWR-1, which was designed to navigate indoors on flat surfaces, BWR-2, owing to its new wheel suspension, is able to navigate in outdoor applications. As shown in Fig. 5, by adjusting the suspension system according to an obstacle in the robot's path, it can easily traverse uneven ground.

Fig. 5. Wheel adjustment to an obstacle owing to proper action of the suspension system.

#### **4. Kinematics**

Assuming horizontal plane movement, the kinematics of both robots are similar. The main focus lies in the connection between the driving omni-wheel angular velocities and the robot velocity. Fig. 6 illustrates a top view of the BWR-1 robot. The common radius of all omni-wheels is $r\_w$, that of all ball wheels is $r\_s$, and the distance from the robot center to the contact points between the wheels and the floor is $r\_R$. Four sets of coordinates are introduced. The mutually perpendicular unit vectors $\{e\_x, e\_y, e\_z\}$ are fixed in a global non-moving reference frame, with $e\_x$ and $e\_y$ parallel to the floor and $e\_z$ facing upwards. The unit vectors $\{e'\_1, e'\_2, e'\_3\}$ are also mutually perpendicular; they are fixed on the robot, $e'\_1$ denotes the robot's forward direction, and $e'\_3$ is parallel to $e\_z$. Note that $e\_z$ represents a translational dimension, whereas $e'\_3$ is used to express rotational quantities. The third set of coordinates consists of the unit vectors $\{e\_{w1}, e\_{w2}, e\_{w3}\}$, each of which points in the respective wheel's axial direction. The unit vectors $\{e\_1, e\_2, e\_3\}$ point in the wheel peripheral directions; they are linearly dependent on $e'\_1$ and $e'\_2$ and are computed as vector cross products:

$$\mathbf{e}\_i = \mathbf{e}'\_3 \times \mathbf{e}\_{wi}, \qquad i = 1, 2, 3 \tag{1}$$
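As a quick numerical check of Eq. (1), the peripheral directions can be computed as cross products. The wheel-axis headings below are an assumed 120°-apart layout for illustration, not values read off Fig. 6:

```python
import numpy as np

# e'_3: vertical unit vector of the body frame
e3p = np.array([0.0, 0.0, 1.0])

# Wheel axial directions e_wi in the body frame; a 120-degree-apart
# layout is assumed here (the exact headings in Fig. 6 may differ).
angles = np.deg2rad([90.0, 210.0, 330.0])
e_w = [np.array([np.cos(a), np.sin(a), 0.0]) for a in angles]

# Eq. (1): peripheral (rolling) direction of each wheel
e = [np.cross(e3p, ew) for ew in e_w]

# Each e_i is a unit vector perpendicular to both e'_3 and e_wi
for ei, ewi in zip(e, e_w):
    print(ei, float(ei @ ewi))
```

Whatever headings are chosen, each resulting $e\_i$ is a horizontal unit vector orthogonal to its wheel axis, which is the property the kinematic equations below rely on.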

The origin of the X´-Y´ coordinate system is located at the robot's geometrical center, where the omni-wheel axes intersect. The robot orientation $\alpha$ is defined as the angle between $e\_x$ and $e'\_1$.

Fig. 6. Coordinate system definition

The robot velocity is expressed by $\dot{X}\_R = [\dot{x}, \dot{y}, \dot{\alpha}]^T$ in global coordinates and $\dot{X}'\_R = [\dot{x}', \dot{y}', \dot{\alpha}]^T$ in body coordinates. The angular velocities of the omni-wheels are collected in $\dot{\phi} = [\dot{\phi}\_1, \dot{\phi}\_2, \dot{\phi}\_3]^T$. The matrix $T\_R^W$ transforms body coordinates into omni-wheel coordinates, and $T\_G^R$ transforms global coordinates into body coordinates. The body coordinates and the wheel motions are related as


$$\begin{cases} -\dot{\phi}\_1 r\_w = \dot{\mathbf{x}}' \sin 60 + \dot{\mathbf{y}}' \sin 30 + \dot{\alpha} r\_\mathbf{R} \\ -\dot{\phi}\_2 r\_w = -\dot{\mathbf{x}}' \sin 60 + \dot{\mathbf{y}}' \sin 30 + \dot{\alpha} r\_\mathbf{R} \\ -\dot{\phi}\_3 r\_w = 0 - \dot{\mathbf{y}}' + \dot{\alpha} r\_\mathbf{R} \end{cases} \tag{2}$$

Matrix form of Eq. 2 is

$$\begin{bmatrix} \dot{\phi}\_1 \\ \dot{\phi}\_2 \\ \dot{\phi}\_3 \end{bmatrix} = T\_R^W \begin{bmatrix} \dot{x}' \\ \dot{y}' \\ \dot{\alpha} \end{bmatrix}$$

Where

$$
\begin{bmatrix}
\dot{\mathbf{x}}' \\
\dot{y}' \\
\dot{\alpha}
\end{bmatrix} = T\_G^R \begin{bmatrix}
\dot{\mathbf{x}} \\
\dot{y} \\
\dot{\alpha}
\end{bmatrix} \tag{3}
$$

Substituting Eq. 3 into Eq. 2, the relation between $\dot{\phi}$ and $\dot{X}\_R$ is derived.

$$T\_{\mathbf{R}}^{\mathbf{W}} = \frac{-1}{r\_w} \begin{bmatrix} \frac{\sqrt{3}}{2} & \frac{1}{2} & r\_{\mathbf{R}} \\ -\frac{\sqrt{3}}{2} & \frac{1}{2} & r\_{\mathbf{R}} \\ 0 & -1 & r\_{\mathbf{R}} \\ \end{bmatrix}, \quad T\_{\mathbf{G}}^{\mathbf{R}} = \begin{bmatrix} \cos(\alpha) & \sin(\alpha) & 0 \\ -\sin(\alpha) & \cos(\alpha) & 0 \\ 0 & 0 & 1 \end{bmatrix} \tag{4}$$
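Equations (2)–(4) can be exercised directly: given a desired global velocity and heading, the wheel angular velocities follow from one matrix chain. The radii below are illustrative assumptions, not the Table 1 values:

```python
import numpy as np

# Assumed geometry (illustrative values only):
r_w = 0.025   # omni-wheel radius [m]
r_R = 0.40    # distance from robot centre to wheel/floor contact [m]

def T_R_W(r_w, r_R):
    """Body-to-wheel transform of Eq. (4)."""
    return (-1.0 / r_w) * np.array([
        [ np.sqrt(3)/2,  0.5, r_R],
        [-np.sqrt(3)/2,  0.5, r_R],
        [ 0.0,          -1.0, r_R],
    ])

def T_G_R(alpha):
    """Global-to-body rotation of Eq. (4)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

# Desired global velocity [x_dot, y_dot, alpha_dot] at heading alpha:
alpha = np.pi / 6
X_dot = np.array([0.5, 0.0, 0.2])

# Eqs (2)-(3): phi_dot = T_R^W T_G^R X_dot
phi_dot = T_R_W(r_w, r_R) @ T_G_R(alpha) @ X_dot
print(phi_dot)   # wheel angular velocities [rad/s]
```

Two sanity checks fall out of the structure of Eq. (4): a pure rotation command yields identical wheel speeds, and pure forward motion at $\alpha = 0$ leaves wheel 3 (whose axis is along the forward direction) stationary.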

#### **5. Dynamics**

In this section, to simplify the dynamic equations, it is assumed that the robot moves on a horizontal surface without obstacles; the action of the suspension system is therefore neglected. Fig. 7 shows the masses and inertias of the robot. The robot is modeled as a set of three different rigid bodies. The robot chassis is illustrated as a triangle. Its mass $m\_c$ contains the chassis, motors, battery, and all other parts rigidly attached to it; its moment of inertia with respect to the Z´ axis is $I\_{z'}$, and its mass center is assumed to be located on the Z´ axis. The three omni-wheels are the same, with masses $m\_w$. The mass center of each omni-wheel is located on its axis of rotation at the distance $r\_R$ from the robot's center, and its moments of inertia about the axis through its center parallel to Z´ and about its axis of rotation are $I\_{wz'}$ and $I\_{wa}$, respectively. The three ball wheels are also the same, with masses $m\_s$ and moments of inertia $I\_s$ about their principal axes of rotation. The robot's total mass $m\_R$ consists of the chassis mass plus the wheel masses; its center of mass is assumed to be located on the Z´ axis at the height $h\_M$ above the ground:

$$m\_R = m\_c + 3m\_w + 3m\_s$$

$$I\_R = I\_{z'} + 3\left(I\_{wz'} + I\_s + (m\_s + m\_w)\, r\_R^2\right) \tag{5}$$
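Eq. (5) is a plain aggregation and is easy to sanity-check numerically. All the masses and inertias below are assumed placeholders, since the chapter does not tabulate them all:

```python
# Illustrative masses/inertias (assumed values, not from the chapter):
m_c, m_w, m_s = 14.0, 0.1, 0.25      # chassis, omni-wheel, ball wheel [kg]
I_z  = 0.9                           # chassis inertia about Z' [kg m^2]
I_wz, I_s = 2e-4, 8e-4               # omni-wheel / ball inertias [kg m^2]
r_R = 0.40                           # centre-to-contact distance [m]

# Eq. (5): total mass, and total inertia about the robot's vertical axis;
# each wheel contributes its own inertia plus a parallel-axis term.
m_R = m_c + 3*m_w + 3*m_s
I_R = I_z + 3*(I_wz + I_s + (m_s + m_w) * r_R**2)
print(m_R, I_R)
```

The parallel-axis term $(m\_s + m\_w) r\_R^2$ typically dominates the wheel inertias themselves, which is why the wheel placement radius matters more than the wheel inertia values.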


Assuming horizontal motion for the robot, the gravitational potential energy may be omitted. The Euler-Lagrange equations of motion for the simplified robot model are

$$\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}\_i}\right) - \frac{\partial T}{\partial q\_i} = Q\_i, \qquad i = 1, 2, 3 \tag{6}$$

The robot's generalized coordinates are its omni-wheel angular positions, $q = [q\_1, q\_2, q\_3]^T = [\phi\_1, \phi\_2, \phi\_3]^T$. The associated generalized forces $Q\_i$ are therefore the motor torques applied to the actuators attached to the driving omni-wheels, $Q = [Q\_1, Q\_2, Q\_3]^T = [T\_1, T\_2, T\_3]^T$. The robot's kinetic energy is $T = T\_{trans} + T\_{rot}$, with

$$T\_{trans} = \frac{1}{2} m\_R (\dot{x}^2 + \dot{y}^2), \qquad T\_{rot} = \frac{1}{2} I\_R \dot{\alpha}^2 + \frac{1}{2}\sum\_{i=1}^{3}\left(I\_{wa} + \left(\left(\frac{r\_w}{r\_s}\right)^2 + \left(\frac{r\_w}{r\_R}\right)^2\right) I\_s\right) \dot{\phi}\_i^2 \tag{7}$$

The components of $\dot{x}$, $\dot{y}$ and $\dot{\alpha}$ in Eq. (7) are substituted from Eqs. (2) and (3) in terms of the actuator angular velocities $\dot{\phi}\_1$, $\dot{\phi}\_2$ and $\dot{\phi}\_3$. Computing the derivatives in the Euler-Lagrange equations and rearranging terms, the equations of motion can be expressed in closed form as

$$\mathbf{M}(\phi)\ddot{\phi} + \mathbf{C}(\phi, \dot{\phi}) + \mathbf{G}(\phi) + \mathbf{D}(\phi) = \mathbf{T} \tag{8}$$

Fig. 7. Robot mass and inertia properties
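For this simplified model the kinetic energy of Eq. (7) is quadratic in the wheel rates with constant coefficients, so the mass matrix M of Eq. (8) is constant and can be assembled from the transform of Eq. (4); in this idealized flat-ground sketch the C, G and D terms drop out. All numeric values below are assumptions:

```python
import numpy as np

# Assumed geometry and inertias (illustrative only):
r_w, r_s, r_R = 0.025, 0.075, 0.40
m_R, I_R      = 15.0, 1.1
I_wa, I_s     = 2e-4, 8e-4

# T_R^W of Eq. (4); its inverse maps wheel rates to body velocity.
W = (-1.0 / r_w) * np.array([[ np.sqrt(3)/2,  0.5, r_R],
                             [-np.sqrt(3)/2,  0.5, r_R],
                             [ 0.0,          -1.0, r_R]])
W_inv = np.linalg.inv(W)

# Kinetic energy T = 1/2 phi_dot^T M phi_dot, with M built from Eq. (7):
D = np.diag([m_R, m_R, I_R])                 # translational + rotational part
c = I_wa + ((r_w/r_s)**2 + (r_w/r_R)**2) * I_s
M = W_inv.T @ D @ W_inv + c * np.eye(3)

# Eq. (8) then reduces to T = M phi_ddot; torque for a desired acceleration:
phi_ddot = np.array([10.0, 10.0, 10.0])      # rad/s^2 (pure-spin command)
torque = M @ phi_ddot
print(torque)
```

By symmetry, equal wheel accelerations (a pure-spin command) demand equal torques, and M is symmetric positive definite, which makes for a cheap correctness check on the assembly.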

#### **6. Simulation study**

Some numerical examples have been performed, based on the equations developed in the previous section, to simulate the behavior of the robot. A circular path is used with two scenarios of motion for the vehicle. The associated equivalent motor torques and wheel angular motions are then computed, and the overall characteristics of the robot are analyzed and evaluated.



The given trajectory is a circle with a radius of 1.5 m. The robot starts from rest, accelerates at a constant rate until it reaches its maximum velocity $V\_{max}$ in the first 90° of the path, moves with constant velocity $V\_{max}$ over the next 180°, and finally decelerates at a constant rate over the last 90° to come to rest. The overall time of the motion is *t* = 6.0 s.

In the first scenario, as shown in Fig. (8-a), the X´ axis, which indicates the forward direction, remains tangent to the robot path during the motion. The corresponding actuator angular velocities and torques are shown in Figs. (8-b) and (8-c).

Fig. 8. a) Robot moving with orientation tangent to the path, b and c) corresponding actuator angular velocities and torques

In the second scenario, as shown in Fig. (9-a), the direction of the robot is fixed during the motion. The corresponding actuator angular velocities and torques are shown in Figs. (9-b) and (9-c).

According to the simulation results, both ball wheeled platforms are able to move along a given path under different scenarios.
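The first scenario can be reproduced in a few lines: a trapezoidal speed profile along the 1.5 m circle, mapped through Eq. (2) with the heading kept tangent to the path. The wheel radius, contact distance and ramp duration are assumptions, not the chapter's values:

```python
import numpy as np

R, t_total = 1.5, 6.0          # circle radius [m], total motion time [s]
t_acc = t_total / 4.0          # assumed ramp duration (accelerate/decelerate)
v_max = 2*np.pi*R / (t_total - t_acc)   # cruise speed that closes the circle

def speed(t):
    """Trapezoidal speed profile: ramp up, cruise, ramp down."""
    if t < t_acc:
        return v_max * t / t_acc
    if t < t_total - t_acc:
        return v_max
    return v_max * (t_total - t) / t_acc

# Assumed geometry for the kinematic map of Eqs. (2) and (4):
r_w, r_R = 0.025, 0.40
W = (-1.0/r_w) * np.array([[ np.sqrt(3)/2,  0.5, r_R],
                           [-np.sqrt(3)/2,  0.5, r_R],
                           [ 0.0,          -1.0, r_R]])

# Scenario 1: X' stays tangent to the path, so in body coordinates the
# velocity is [v, 0] with alpha_dot = v / R at every instant.
for t in np.linspace(0.0, t_total, 7):
    v = speed(t)
    phi_dot = W @ np.array([v, 0.0, v / R])
    print(f"t={t:4.1f} s  v={v:5.2f} m/s  phi_dot={np.round(phi_dot, 1)}")
```

Because the body-frame velocity is the same at every point of the path in this scenario, the wheel speeds simply scale with the trapezoidal profile, which matches the flat-topped curves of Fig. (8-b).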


Fig. 9. a) Robot moving with fixed direction, b and c) corresponding actuator angular velocities and torques

#### **7. Conclusion**

The concept, design and implementation of the autonomous spherical wheel mobile robots named BWR-1 and BWR-2 have been presented. Prototypes of these platforms have been built using three spherical wheels driven by classical omni-wheels. The robots are omni-directional, so simultaneous longitudinal, transverse and rotational motions are possible. They have the same mobility as robots equipped with classical universal wheels, which are suitable for indoor applications; but owing to the larger diameter of the ball wheels, they are also able to move in outdoor settings, especially BWR-2, which provides a modular driving system equipped with a suspension system. Future work will add new features to the prototype for autonomous navigation, such as optical encoders mounted on passive universal wheels and a camera for vision.

**18** 

*P. R. China* 

**Advances in Simulation of** 

*State Key Laboratory of Robotics and System,* 

*Harbin Institute of Technology* 

**Planetary Wheeled Mobile Robots** 

Liang Ding, Haibo Gao, Zongquan Deng and Weihua Li

Ever since the Sojourner rover of the United States landed on Mars in 1997 (Jet Propulsion Laboratory [JPL], a), there has been an upsurge in the exploration of planets using wheeled mobile robots (WMRs or rovers). The twin rovers that followed, Spirit and Opportunity, have endured many years of activity on Mars and have made many significant discoveries (JPL, b). Several other new missions are in progress to explore Mars (Volpe, 2005; Van et al., 2008) and the Moon (Neal, 2009) using planetary rovers that are expected to traverse more challenging terrain with scientific objectives such as searching for evidence of life and

The present planet exploration rovers are advanced WMRs that show excellent performance and have integrated the cutting-edge technologies of many fields, and overcoming new frontier issues that are specific to planetary rovers has promoted the development of

Simulation technology plays an important role in both the research and development (R&D) and exploration phases of planetary WMRs (Ding et al., 2008). During the R&D phase of a WMR, a simulation system can be used for mechanical design (e.g., performance analysis and optimization), control algorithm verification, and performance testing, and during the exploration phase, the simulation system can be used to support three-dimensional (3D) predictive displays for successive teleoperation (such as in the case of a lunar rover) or to validate command sequences for supervised teleoperation (such as in the case of a Mars rover). As compared to conventional simulation systems used for WMRs, the simulation system for

planetary rovers is characterized by high fidelity, high speed, and comprehensiveness.

The recent development and cutting-edge technologies of simulation systems for planetary rovers are summarized in this article to extend their application to conventional WMRs. The significance of simulation for planetary WMRs is discussed in Section 2. An overview of the simulation technology for planetary WMRs is presented in Section 3. Section 4 introduces key theories (models of terramechanics, dynamics, and terrain geometry) for developing a simulation system for planetary rovers. In Section 5, the research results for the simulation of planetary rovers at the State Key Laboratory of Robotics and System (SKLRS) of China, including simulation methods, systems, and verification results, are presented to provide examples of different simulation methods (based on the commercial dynamics simulation software, general simulation software, and real-time simulation software) in detail for

**1. Introduction** 

terrestrial WMRs.

different applications.

investigating the origin of the solar system.

#### **8. References**


*Int. J. of Advanced Robotic Systems*, Vol. 7, No. 4., pp. 101-106.


## **Advances in Simulation of Planetary Wheeled Mobile Robots**

Liang Ding, Haibo Gao, Zongquan Deng and Weihua Li *State Key Laboratory of Robotics and System, Harbin Institute of Technology P. R. China* 

### **1. Introduction**

374 Mobile Robots – Current Trends


Ever since the Sojourner rover of the United States landed on Mars in 1997 (Jet Propulsion Laboratory [JPL], a), there has been an upsurge in the exploration of planets using wheeled mobile robots (WMRs or rovers). The twin rovers that followed, Spirit and Opportunity, have endured many years of activity on Mars and have made many significant discoveries (JPL, b). Several other new missions are in progress to explore Mars (Volpe, 2005; Van et al., 2008) and the Moon (Neal, 2009) using planetary rovers that are expected to traverse more challenging terrain with scientific objectives such as searching for evidence of life and investigating the origin of the solar system.

Present-day planetary exploration rovers are advanced WMRs that show excellent performance and integrate cutting-edge technologies from many fields; in turn, overcoming the new frontier issues specific to planetary rovers has promoted the development of terrestrial WMRs.

Simulation technology plays an important role in both the research and development (R&D) and exploration phases of planetary WMRs (Ding et al., 2008). During the R&D phase of a WMR, a simulation system can be used for mechanical design (e.g., performance analysis and optimization), control algorithm verification, and performance testing, and during the exploration phase, the simulation system can be used to support three-dimensional (3D) predictive displays for successive teleoperation (such as in the case of a lunar rover) or to validate command sequences for supervised teleoperation (such as in the case of a Mars rover). As compared to conventional simulation systems used for WMRs, the simulation system for planetary rovers is characterized by high fidelity, high speed, and comprehensiveness.

The recent development and cutting-edge technologies of simulation systems for planetary rovers are summarized in this article to extend their application to conventional WMRs. The significance of simulation for planetary WMRs is discussed in Section 2. An overview of the simulation technology for planetary WMRs is presented in Section 3. Section 4 introduces key theories (models of terramechanics, dynamics, and terrain geometry) for developing a simulation system for planetary rovers. In Section 5, the research results for the simulation of planetary rovers at the State Key Laboratory of Robotics and System (SKLRS) of China, including simulation methods, systems, and verification results, are presented to provide examples of different simulation methods (based on the commercial dynamics simulation software, general simulation software, and real-time simulation software) in detail for different applications.


### **2. Importance of simulation for planetary rovers**

Virtual simulation can help guarantee the success of WMRs for planetary exploration, as it plays important roles in both the R&D phase and the exploration phase of the rovers, as shown in Fig. 1. It is of great importance to planetary rovers in three aspects.

Fig. 1. Roles of simulation for planetary rovers: (a) R&D phase; (b) exploration phase

#### **2.1 Platform for rover design, evaluation, testing, and control**

During the R&D phase of a rover, virtual simulation can be used for configuration design, evaluation and optimization, mobility performance analysis, and control strategy research before a rover is manufactured. For example, before the Mars Pathfinder probe carrying the Sojourner rover was launched, researchers at JPL used virtual simulation technology to predict that it would turn over while landing because of the interaction between the braking rocket and the Martian wind. The technical program was revised to solve the problem, which ensured a successful soft landing on Mars (Chen et al., 2005). To support the study of Mars rovers, the United States developed the comprehensive simulation system ROAMS. Europe is committed to the development of the simulation tools RCET, RPET, and RCAST for the evaluation and optimization of Mars rovers. Ye, the general director and general designer of the Chinese Chang'e-1 satellite, advocated numerical simulation technology to study the mobility performance of lunar rovers and to simulate floating lunar dust (Ye & Xiao, 2006). Researchers from the Beijing Institute of Control Engineering discussed the technical scheme for researching lunar rovers and pointed out that a virtual simulation system with the abilities of modeling, dynamics simulation, and control simulation should be developed to test the configuration and dynamics parameters of a rover, as well as the control algorithms (Liang et al., 2005).

#### **2.2 Supporting 3D predictive display for successive teleoperation**

Exploration activities and the moving path of a rover are subject to great uncertainty in our knowledge of a planet's surface environment. Therefore, even the most advanced rovers at present, Spirit and Opportunity, do not employ a completely autonomous control strategy but are teleoperated by scientists on Earth. The transmission delay between the Earth and the Moon is several seconds owing to the long distance and limited bandwidth, which makes continuous closed-loop control of a rover unstable. This problem can be solved effectively by successive remote operation using a 3D predictive display, which has been studied by researchers at Japan's Meiji University, China's Jilin University, and elsewhere (Lei et al., 2004; Kuroda et al., 2003; Kunii et al., 2001). Ground computers construct a virtual simulation environment according to the rover status and imagery information of the terrain through which the rover will move after a time delay. Operators control the rover in the virtual environment, and the same commands are sent to the real rover after error compensation. The real rover repeats the motion of the virtual rover after a while. If the timing of the sent commands is regulated well, the rover can move successively without stopping.
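The command-timing idea above can be sketched as a delayed FIFO link between a predictive (virtual) rover and the real one. The delay length, tick period, and one-dimensional motion model below are illustrative assumptions, not mission parameters:

```python
from collections import deque

# Hypothetical sketch: the operator drives a virtual rover, and each command
# reaches the real rover a fixed number of control ticks later. If commands
# are issued continuously, the real rover replays the virtual rover's motion
# without stopping, merely shifted in time.

DELAY_STEPS = 3  # assumed Earth-Moon round-trip delay, in control ticks

class DelayedLink:
    """FIFO channel delivering each command `delay` ticks after it is sent."""
    def __init__(self, delay):
        self.pipe = deque([None] * delay)

    def step(self, cmd):
        self.pipe.append(cmd)
        return self.pipe.popleft()   # command arriving at the real rover now

def drive(commands, delay=DELAY_STEPS):
    """Apply 1-D displacement commands; return (virtual_x, real_x)."""
    link = DelayedLink(delay)
    virtual_x = real_x = 0.0
    for cmd in commands + [0.0] * delay:   # trailing zeros flush the pipe
        virtual_x += cmd                   # predictive display moves at once
        arrived = link.step(cmd)
        if arrived is not None:
            real_x += arrived              # real rover repeats it later
    return virtual_x, real_x

print(drive([1.0, 1.0, 0.5, 0.5]))  # (3.0, 3.0): same motion, delayed
```

Both rovers end at the same position; the real trajectory is simply the virtual one delayed by `DELAY_STEPS` ticks, which is the property that makes continuous driving possible despite the delay.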

#### **2.3 Validating command sequence for supervised teleoperation**

As the time delay in transmission from the Earth to Mars is tens of minutes, supervised teleoperation is adopted instead of infeasible continuous teleoperation to control Mars rovers. Researchers at JPL learned an important lesson from the Mars Pathfinder mission: a fast, accurate, and powerful tool for driving the rover is necessary (Maxwell et al., 2005). Thus, the rover sequencing and visualization program (RSVP) has been developed to drive rovers (Wright et al., 2006). It is a suite consisting of two main components: RoSE (rover sequence editor) and Hyperdrive. Hyperdrive is an immersive 3D simulation of the rover and its environment that enables operators to construct detailed rover motions and verify their safety. It uses the state information of a rover to analyze and review the current state, identify any anomalous issues, review previously commanded activities, and verify that the commanded activities have been completed. Scientists define science activities and plan a rover's path with 3D terrain models built according to images obtained by stereo cameras. Then, command sequences are constructed. Sequence validation is done by simulation, and the verified sequence is transmitted to the real rover for the exploration of a solar day. RSVP is used to generate the final command sequences for the mission and to plan and validate all mobility and manipulation activities.

| Simulation contents | Explanation | Purpose | Examples of simulation tools |
|---|---|---|---|
| **Forward kinematics** | Calculates the position and yaw angle of a rover according to feedback information of the pitch and roll angles and the positions of motors and joints | Determines the position and orientation of a rover (dead-reckoning method) | Matlab, VC++ |
| **Inverse kinematics** | Calculates the altitude, pitch, and roll angles of the rover, as well as the positions of motors and joints, given the yaw angle, horizontal position of a rover, and the path on known terrain | Analyzes the stability and traversability of a rover while following a certain path | Matlab, VC++ |
| **Dynamics** | Analyzes the dynamics performance, such as vibration and the ability to overcome obstacles; the basis for designing a control strategy | Used in research on wheel–soil interaction mechanics, multi-body dynamics, and methods for solving differential equations with high efficiency | ADAMS, DADS, Vortex |
| **Control strategy** | Develops a program for rover control | Used in research on control strategies of a rover's locomotion, path planning, path following, and intelligent navigation | Matlab |
| **Visualization** | Builds a visual virtual simulation environment according to the topography and surface features of the planet and the rover's configuration | Provides a visualization platform for virtual simulation | Vega, 3D Max, MultiGen Creator |

Table 1. Rover simulation and tools

The supervised teleoperation method has its disadvantages. For instance, the rover must stop to wait for command sequences and it should be very intelligent. However, it also has obvious advantages. The science activities are defined systematically, and the operators can easily control the rover. Therefore, the two teleoperation modes are used according to the terrain and mission. No matter which is used, virtual simulation is indispensable.
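In the same spirit as the sequence-validation step above, though far simpler than RSVP, the following sketch screens a drive sequence against a height map before uplink. The height map, waypoint format, and slope limit are invented for the example:

```python
import math

def leg_slope_deg(heightmap, a, b):
    """Mean slope (degrees) of the straight leg from grid cell a to cell b."""
    dz = heightmap[b[1]][b[0]] - heightmap[a[1]][a[0]]
    dist = math.hypot(b[0] - a[0], b[1] - a[1])   # unit grid spacing assumed
    return math.degrees(math.atan2(abs(dz), dist))

def validate_sequence(heightmap, waypoints, max_slope_deg=20.0):
    """Return (True, None) if every leg is safe, else (False, bad_leg_index)."""
    for i in range(len(waypoints) - 1):
        if leg_slope_deg(heightmap, waypoints[i], waypoints[i + 1]) > max_slope_deg:
            return False, i
    return True, None

hmap = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 2.0],
        [0.0, 2.0, 5.0]]
print(validate_sequence(hmap, [(0, 0), (1, 0), (2, 0)]))  # (True, None)
print(validate_sequence(hmap, [(0, 0), (1, 1), (2, 2)]))  # (False, 0)
```

A real validator would simulate the full vehicle dynamics along the path; the point here is only the reject-before-uplink workflow.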

### **3. Overview of current simulation technology for planetary WMRs**

#### **3.1 General simulation tools used in research of planetary WMRs**

Various general simulation tools are used by researchers to investigate the kinematics, dynamics, control, and visualization of rovers. They are summarized in Table 1 (Ding et al., 2008). General simulation tools are used by researchers to realize different research purposes, but they do not meet the requirements of rover-based exploration missions well because of their limitations.
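As a concrete instance of the forward-kinematics (dead-reckoning) row of Table 1, a minimal planar sketch; the sensor inputs (speed and yaw rate per tick) and the unicycle-style motion model are assumptions for illustration:

```python
import math

def dead_reckon(pose, steps, dt=1.0):
    """Integrate (speed, yaw_rate) samples starting from pose = (x, y, yaw)."""
    x, y, yaw = pose
    for v, w in steps:
        x += v * math.cos(yaw) * dt   # advance along current heading
        y += v * math.sin(yaw) * dt
        yaw += w * dt                 # then update heading from yaw rate
    return x, y, yaw

# Drive one unit forward, turn 90 degrees in place, drive one unit forward:
pose = dead_reckon((0.0, 0.0, 0.0),
                   [(1.0, 0.0), (0.0, math.pi / 2), (1.0, 0.0)])
print(pose)  # approximately (1.0, 1.0, 1.5708)
```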

#### **3.2 Virtual simulation system developed for planetary WMRs**

To support the R&D of WMRs for related planetary exploration missions, customized simulation tools or tools with different simulation functions have been developed.

#### **3.2.1 ROAMS, WITS, and RSVP developed by JPL, United States**

The Mars Technology Program, in conjunction with the Mars Science Laboratory Mission, has funded three complementary infrastructure elements: ROAMS, the web interface for tele-science (WITS), and CLARAty (Volpe, 2003).

ROAMS is a comprehensive physics-based simulator for planetary-exploration roving vehicles. It includes a mechanical subsystem, electrical subsystem, internal and external sensors, on-board resources, on-board control software, a terrain environment, and terrain/vehicle interactions. ROAMS can be used for stand-alone simulation, closed-loop simulation with on-board software, and operator-in-the-loop simulation (Yen et al., 1999). It covers rover kinematics, dynamics, navigation, locomotion, and visualization. ROAMS provides simulation services for off-line analysis and acts as a virtual rover platform for the CLARAty control software (Fig. 2), a reusable rover software architecture being developed collaboratively by JPL, NASA Ames, Carnegie Mellon University, and other institutions. A goal of the CLARAty development is to provide an open architecture into which component algorithm developers can integrate their capabilities.

 RSVP, which is capable of kinematics simulation, was developed for rover teleoperation as mentioned above. WITS provides collaborative downlink data visualization and uplink activity planning for multiple Mars lander and rover missions. For the 2003 Mars Exploration Rover mission, WITS was the primary science operations tool for downlink data visualization and uplink science activity planning. It can build scripted sequences and execute them on a 3D simulated rover, which can be controlled interactively (Backes et al., 2004).


#### **3.2.2 RCET, RPET, and RCAST developed in Europe**

Mars rover chassis evaluation tools (RCET) have been developed in Europe to support the design of planetary rovers (Michaud et al., 2006). RCET, developed jointly by Contraves Space, Ecole Polytechnique Fédérale de Lausanne, the German Aerospace Center (DLR), and others, accurately predicts the performance of a rover's locomotion subsystem. It includes a two-dimensional (2D) rover simulator that uses a tractive prediction module to compute the wheel/ground interaction. 3D simulation can also be performed with the help of RoverGen.

The rover performance evaluation tool (RPET) consists of the rover mobility performance evaluation tool (RMPET) and mobility synthesis (MobSyn). RMPET computes mobility performance parameters such as drawbar pull, motion resistances, soil thrust, slippage, and sinkage for a mobility system selected by the user for the evaluation of particular terrain (Patel et al., 2004).

RCAST was developed to characterize and optimize the ExoMars rover mobility in support of the evaluation of locomotion subsystem designs before RCET was available (Fig. 3). It uses the AESCO soft soil tire model (AS2TM) software package for terramechanics (Bauer et al., 2005).

At present, DLR is responsible for modeling, simulating and testing the entire mobility behavior of the rover within the ExoMars mission preparation phases. The commercial software tool Simpack, which includes contact modeling based on polygonal contact modeling, is used for simulation (Schäfer et al., 2010). The terramechanics for wheel–soil contact dynamics modeling and simulation and its experimental validation for the ExoMars rover has been introduced (Schäfer et al., 2010).

Fig. 2. Simulation with ROAMS and CLARAty

Fig. 3. ExoMars simulation with RCAST


### **3.2.3 SpaceDyn developed by Tohoku University, Japan**

The Space Robotics Laboratory (SRL) of Tohoku University developed the dynamics simulation toolbox SpaceDyn in Matlab. SpaceDyn has been used successfully for ETS-VII robot-arm simulation and for the touchdown of the Hayabusa spacecraft on Itokawa. It has also been used to simulate the motion dynamics of a rover with a slip-based traction model (Yoshida & Hamano, 2002) and the steering characteristics of a rover on loose soil based on terramechanics (Ishigami & Yoshida, 2005). A path-planning method that takes into account the wheel-slip dynamics of a planetary exploration rover was developed to generate several candidate paths for a rover moving over rough terrain. Dynamics simulation was carried out by controlling the rover to follow the candidate paths for evaluation, as shown in Fig. 4 (Ishigami et al., 2007).

Fig. 4. Simulation of path planning and evaluation

### **3.2.4 Simulation platform for China's lunar rover**

Researchers at the State Key Laboratory of Robotics and System (SKLRS) of the Harbin Institute of Technology developed a simulation platform entitled rover simulation based on terramechanics and dynamics (RoSTDyn) (Li et al., 2012) under a contract with the Chinese Academy of Space Technology to evaluate the performance of the Chang'e lunar rover and assist with its teleoperation.

Yang et al. of Shanghai Jiaotong University presented the framework and key technologies of a virtual simulation environment for a lunar rover (Yang et al., 2008). A fractional Brownian motion technique and statistical properties were used to generate the lunar surface. The multi-body dynamics and complex interactions with soft ground were integrated in the environment.

Researchers at Tsinghua University investigated a test and simulation platform for lunar rovers (Luo & Sun, 2002). The platform provides modules for creating the topography of the terrain and an environmental components editor. The virtual lunar environment can be constructed with the terrain modules built in advance. COM technology was used to support distributed simulation.

Fig. 5. Architecture of the planetary rover's comprehensive simulation system, comprising RoSTDyn (rover simulation based on terramechanics and dynamics) and IVPRE (interactive virtual planetary rover environment). Solid lines are default; dot-and-dashed lines are optional; dashed lines are extensible.

Fig. 6. Coordinates and vectors from rover body to wheel

### **4. Key theories for development of simulation system for planetary WMRs**

Planetary exploration missions require comprehensive simulation systems that have the abilities of modeling, kinematics, dynamics, control, and visualization, and have the characteristics of high speed and high fidelity.

Figure 5 shows the architecture of a comprehensive virtual simulation system for the high-fidelity, high-speed simulation of rovers. It comprises the RoSTDyn and interactive virtual planetary rover environment (IVPRE) systems. RoSTDyn is a comprehensive simulation system similar to ROAMS. It accepts control commands at three levels (goals, paths, and motor positions) from itself, from IVPRE, or from other control software. Users can control the virtual rover interactively with IVPRE, which constructs a virtual lunar environment from terrain components or from images returned by the real rover. Digital elevation model (DEM) terrain data are then generated for RoSTDyn. IVPRE can also calculate the mechanics parameters of the soil for RoSTDyn. This system can be further developed for successive lunar rover teleoperation based on a 3D predictive display. Key technologies of the simulation system include generalized dynamics modeling, wheel–soil interaction terramechanics models, and deformable rough-terrain geometry modeling.
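As one small piece of such a system, the terrain-geometry module must answer height queries against the DEM; a bilinear-interpolation sketch on an invented unit grid (boundary handling omitted for brevity):

```python
def dem_height(dem, x, y):
    """Bilinearly interpolate dem[row][col] heights at (x, y) in grid units."""
    i, j = int(x), int(y)           # indices of the lower-left post
    fx, fy = x - i, y - j           # fractional position inside the cell
    h00, h10 = dem[j][i], dem[j][i + 1]
    h01, h11 = dem[j + 1][i], dem[j + 1][i + 1]
    return (h00 * (1 - fx) * (1 - fy) + h10 * fx * (1 - fy)
            + h01 * (1 - fx) * fy + h11 * fx * fy)

dem = [[0.0, 1.0],
       [2.0, 3.0]]
print(dem_height(dem, 0.5, 0.5))  # 1.5, the mean of the four corner posts
```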

#### **4.1 Generalized recursive dynamics modeling**

#### **4.1.1 Recursive kinematics and Jacobian matrices**

$$\text{If } \mathbf{a} = \begin{bmatrix} a_1 & a_2 & a_3 \end{bmatrix}^{\mathrm{T}} \text{ and } \mathbf{b} = \begin{bmatrix} b_1 & b_2 & b_3 \end{bmatrix}^{\mathrm{T}}, \text{ and we define } \tilde{\mathbf{a}} = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}, \text{ then } \mathbf{a}\times\mathbf{b} = \tilde{\mathbf{a}}\mathbf{b}$$

and $\tilde{\mathbf{b}}\mathbf{a} = \tilde{\mathbf{a}}^{\mathrm{T}}\mathbf{b} = -\mathbf{a}\times\mathbf{b}$. Let $\mathbf{q} = [q_1\ q_2\ \cdots\ q_{n_v}]^{\mathrm{T}}$ denote the joint variables, where *nv* is the number of joints. The WMRs are articulated multi-body systems with a moving base and *nw* end-points (wheels). Let $\mathbf{q}_s = [q_l\ q_m\ q_n\ \cdots\ q_s]^{\mathrm{T}}$ denote a branch from the rover body to a wheel and *ns* denote the number of elements in $\mathbf{q}_s$. Replace the joint numbers *l*, *m*, *n*, …, *s* of the branch with 1, 2, 3, …, *ns*, as shown in Fig. 6, which also shows the inertial coordinate {Σ*I*}, the coordinates {Σ*i*} related to link *i* (*i* = *l*, *m*, *n*, …, *s*), and the related vectors, where $\mathbf{p}_i$ is the position vector of joint *i*; $\mathbf{r}_i$ is the position vector of the centroid of link *i*; $\mathbf{c}_{ij}$ is the link vector from link *i* to joint *j*; $\mathbf{l}_{ij} = \mathbf{p}_j - \mathbf{p}_i$ is the link vector from joint *i* to joint *j*; and $\mathbf{l}_{ie}$ is the vector from joint *i* to end-point *e*.
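The skew-symmetric operator defined here is easy to verify numerically; a minimal sketch:

```python
def skew(a):
    """Skew-symmetric matrix a~ such that skew(a) @ b == a x b."""
    ax, ay, az = a
    return [[0.0, -az,  ay],
            [ az, 0.0, -ax],
            [-ay,  ax, 0.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
assert matvec(skew(a), b) == cross(a, b)                 # a x b = a~ b
assert matvec(skew(b), a) == [-c for c in cross(a, b)]   # b~ a = -(a x b)
```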

The position vector of end-point *pe* is

$$\mathbf{p}\_e = \mathbf{r}\_0 + \mathbf{c}\_{01} + \sum\_{i=1}^{n\_s - 1} \mathbf{l}\_{i(i+1)} + \mathbf{l}\_{n\_s e} \tag{1}$$

The derivative of Eq. (1) is

$$\begin{aligned} \mathbf{v}_e &= \mathbf{v}_0 + \boldsymbol{\omega}_0 \times (\mathbf{p}_e - \mathbf{r}_0) + \sum_{i=1}^{n_s} \mathbf{A}_i\,{}^{i}\mathbf{Z}_i \times (\mathbf{p}_e - \mathbf{p}_i)\,\dot{q}_i \\ &= \begin{bmatrix} \mathbf{J}_{BTe} & \mathbf{J}_{MTe} \end{bmatrix} \begin{bmatrix} \mathbf{v}_0^{\mathrm{T}} & \boldsymbol{\omega}_0^{\mathrm{T}} & \dot{\mathbf{q}}^{\mathrm{T}} \end{bmatrix}^{\mathrm{T}} \end{aligned} \tag{2}$$

382 Mobile Robots – Current Trends


Advances in Simulation of Planetary Wheeled Mobile Robots 385


where $\mathbf{J}_{MTe} = [\mathbf{L}_{s1}\mathbf{A}_1\,{}^{1}\mathbf{Z}_1\times\mathbf{P}_{1e}\ \ \mathbf{L}_{s2}\mathbf{A}_2\,{}^{2}\mathbf{Z}_2\times\mathbf{P}_{2e}\ \cdots\ \mathbf{L}_{sn_v}\mathbf{A}_{n_v}\,{}^{n_v}\mathbf{Z}_{n_v}\times\mathbf{P}_{n_ve}]$ is a 3 × *nv* matrix, $\mathbf{A}_i = {}^{I}\mathbf{A}_i$ is the transformation matrix from {Σ*i*} to {Σ*I*}, ${}^{i}\mathbf{Z}_i = [0\ 0\ 1]^{\mathrm{T}}$ because the *z* axis is set to coincide with the joint displacement axis, $\mathbf{L}_{ij}$ is an element of the $n_v \times n_v$ matrix $\mathbf{L}$ that indicates whether link *j* is on the access road from link 0 to link *i* ($\mathbf{L}_{ij} = 1$) or not ($\mathbf{L}_{ij} = 0$), and $\mathbf{P}_{ie}$ is the vector from the origin of {Σ*i*} to the end-point. $\mathbf{J}_{BTe} = [\mathbf{E}\ \ \tilde{\mathbf{P}}_{er}^{\mathrm{T}}]$ is a 3 × 6 matrix, where $\mathbf{P}_{er} = \mathbf{p}_e - \mathbf{r}_0$.

The angular velocity of the end-point is

$$\boldsymbol{\omega}_e = \boldsymbol{\omega}_0 + \sum_{i=1}^{n_s} \mathbf{A}_i\,{}^{i}\mathbf{Z}_i\,\dot{q}_i = \begin{bmatrix} \mathbf{J}_{BRe} & \mathbf{J}_{MRe} \end{bmatrix} \begin{bmatrix} \mathbf{v}_0^{\mathrm{T}} & \boldsymbol{\omega}_0^{\mathrm{T}} & \dot{\mathbf{q}}^{\mathrm{T}} \end{bmatrix}^{\mathrm{T}}, \tag{3}$$

where $\mathbf{J}_{MRe} = [\mathbf{L}_{s1}\mathbf{A}_1\,{}^{1}\mathbf{Z}_1\ \ \mathbf{L}_{s2}\mathbf{A}_2\,{}^{2}\mathbf{Z}_2\ \cdots\ \mathbf{L}_{sn_v}\mathbf{A}_{n_v}\,{}^{n_v}\mathbf{Z}_{n_v}]$ is a 3 × *ns* matrix and $\mathbf{J}_{BRe} = [\mathbf{0}\ \ \mathbf{E}]$ is a 3 × 6 matrix.

Let $\mathbf{J}_e = \begin{bmatrix} \mathbf{J}_{Be} & \mathbf{J}_{Me} \end{bmatrix} = \begin{bmatrix} \mathbf{J}_{BTe} & \mathbf{J}_{MTe} \\ \mathbf{J}_{BRe} & \mathbf{J}_{MRe} \end{bmatrix}$ be a 6 × (6 + *nv*) Jacobian matrix for mapping generalized velocities to the end-points; let $\dot{\boldsymbol{\Phi}} = [\mathbf{v}_0^{\mathrm{T}}\ \boldsymbol{\omega}_0^{\mathrm{T}}\ \dot{\mathbf{q}}^{\mathrm{T}}]^{\mathrm{T}}$ be a vector with (6 + *nv*) elements, which are the linear and angular velocities of the body and the joint velocities. Let

$\dot{\mathbf{X}}_{ae}$ and $\mathbf{J}_{ae}$ denote the velocities of all the wheel–soil interaction points and the corresponding Jacobian matrix:

$$
\dot{\mathbf{X}}_{ae} = \begin{bmatrix} \mathbf{v}_e(1) \\ \boldsymbol{\omega}_e(1) \\ \vdots \\ \mathbf{v}_e(n_w) \\ \boldsymbol{\omega}_e(n_w) \end{bmatrix}, \quad \mathbf{J}_{ae} = \begin{bmatrix} \mathbf{J}_e(1) \\ \mathbf{J}_e(2) \\ \vdots \\ \mathbf{J}_e(n_w) \end{bmatrix},
$$

which are a $6n_w \times 1$ vector and a $6n_w \times (n_v + 6)$ matrix, respectively. We thus obtain

$$
\dot{\mathbf{X}}\_{ae} = \mathbf{J}\_{ae} \dot{\boldsymbol{\Phi}} \,. \tag{4}
$$

The same method is used to deduce the Jacobian matrix by mapping the velocities from the generalized coordinates to the link centroid:

$$
\dot{\mathbf{X}}\_a = \mathbf{J}\_a \dot{\boldsymbol{\Phi}} \,\tag{5}
$$

where $\dot{\mathbf{X}}_a$ ($6n_v \times 1$) is the velocity vector of all centroids, and $\mathbf{J}_a$ ($6n_v \times (n_v + 6)$) is the Jacobian matrix. In Eq. (5),

$$
\dot{\mathbf{X}}_a = \begin{bmatrix} \mathbf{v}_1 \\ \boldsymbol{\omega}_1 \\ \vdots \\ \mathbf{v}_{n_v} \\ \boldsymbol{\omega}_{n_v} \end{bmatrix}, \quad \mathbf{J}_a = \begin{bmatrix} \mathbf{J}_1 \\ \mathbf{J}_2 \\ \vdots \\ \mathbf{J}_{n_v} \end{bmatrix}, \quad \mathbf{J}_i = \begin{bmatrix} \mathbf{J}_{Bi} & \mathbf{J}_{Mi} \end{bmatrix} = \begin{bmatrix} \mathbf{J}_{BTi} & \mathbf{J}_{MTi} \\ \mathbf{J}_{BRi} & \mathbf{J}_{MRi} \end{bmatrix},
$$

where *Ji* is a 6 × (6 + *nv*) matrix. $\mathbf{J}_{BTi} = [\mathbf{E}\ \ \tilde{\mathbf{r}}_{0i}^{\mathrm{T}}]$ and $\mathbf{J}_{BRi} = [\mathbf{0}\ \ \mathbf{E}]$ are both 3 × 6 matrices, and

$$\mathbf{J}\_{MRi} = \begin{bmatrix} \mathbf{L}\_{i1} \,^1 \mathbf{Z}\_1 & \mathbf{L}\_{i2} \,^2 \mathbf{Z}\_2 & \cdots & \mathbf{L}\_{in\_v} \,^{n\_v} \mathbf{Z}\_{n\_v} \end{bmatrix} \text{ and }$$

$$\mathbf{J}\_{MTi} = \begin{bmatrix} \mathbf{L}\_{i1} \,^1 \mathbf{Z}\_1 \times (\mathbf{r}\_i - \mathbf{p}\_1) & \mathbf{L}\_{i2} \,^2 \mathbf{Z}\_2 \times (\mathbf{r}\_i - \mathbf{p}\_2) & \cdots & \mathbf{L}\_{in\_v} \,^{n\_v} \mathbf{Z}\_{n\_v} \times (\mathbf{r}\_i - \mathbf{p}\_{n\_v}) \end{bmatrix}.$$

are both $3 \times n_v$ matrices.
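The column-wise construction above can be checked numerically. The sketch below uses an assumed planar chain with two rotational joints and a fixed base ($\mathbf{v}_0 = \boldsymbol{\omega}_0 = 0$, $\mathbf{A}_i$ the identity), where each column of $\mathbf{J}_{MTe}$ is ${}^{i}\mathbf{Z}_i \times (\mathbf{p}_e - \mathbf{p}_i)$ as in Eq. (2); the geometry and joint rates are illustrative.

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

# Two rotational joints with axes Z_i = [0, 0, 1], base fixed.
z  = [0.0, 0.0, 1.0]
p  = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]   # joint positions p_1, p_2
pe = [2.0, 0.0, 0.0]                      # end-point position

# Columns of J_MTe: Z_i x (p_e - p_i)
cols = [cross(z, [pe[k] - pi[k] for k in range(3)]) for pi in p]

qdot = [0.5, -0.2]                        # joint rates
# End-point linear velocity from Eq. (2) with the base at rest
ve = [sum(cols[i][k] * qdot[i] for i in range(2)) for k in range(3)]
print(ve)
```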


#### **4.1.2 Generalized dynamics model**

Substituting Eq. (5) into the kinetic energy equation gives

$$T = \frac{1}{2}\sum_{i=0}^{n_v}\left(\boldsymbol{\omega}_i^{\mathrm{T}}\mathbf{I}_i\boldsymbol{\omega}_i + m_i\mathbf{v}_i^{\mathrm{T}}\mathbf{v}_i\right) = \frac{1}{2}\dot{\boldsymbol{\Phi}}^{\mathrm{T}}\mathbf{H}_{sys}\dot{\boldsymbol{\Phi}}, \tag{6}$$

where $\mathbf{H}_{sys}$ is the $(n_v + 6) \times (n_v + 6)$ system generalized inertia matrix (Yoshida, 2000):

$$\mathbf{H}_{sys} = \begin{bmatrix} M_a(\mathbf{E})_{3\times3} & M_a(\tilde{\mathbf{r}}_{0g}^{\mathrm{T}})_{3\times3} & (\mathbf{J}_{Tg})_{3\times n_v} \\ M_a(\tilde{\mathbf{r}}_{0g})_{3\times3} & (\mathbf{H}_{\omega})_{3\times3} & (\mathbf{H}_{\omega\phi})_{3\times n_v} \\ (\mathbf{J}_{Tg}^{\mathrm{T}})_{n_v\times3} & (\mathbf{H}_{\omega\phi}^{\mathrm{T}})_{n_v\times3} & (\mathbf{H}_{\phi})_{n_v\times n_v} \end{bmatrix}. \tag{7}$$

In Eq. (7), *Ma* is the overall mass of the robot,

$$\mathbf{r}_{0g} = \mathbf{r}_g - \mathbf{r}_0, \quad \mathbf{H}_{\omega} = \sum_{i=0}^{n_v}\left(\mathbf{I}_i + m_i\tilde{\mathbf{r}}_{0i}^{\mathrm{T}}\tilde{\mathbf{r}}_{0i}\right), \quad \mathbf{J}_{Tg} = \sum_{i=0}^{n_v} m_i\mathbf{J}_{MTi},$$

$$\mathbf{H}_{\phi} = \sum_{i=1}^{n_v}\left(\mathbf{J}_{MRi}^{\mathrm{T}}\mathbf{I}_i\mathbf{J}_{MRi} + m_i\mathbf{J}_{MTi}^{\mathrm{T}}\mathbf{J}_{MTi}\right), \quad \text{and} \quad \mathbf{H}_{\omega\phi} = \sum_{i=1}^{n_v}\left(\mathbf{I}_i\mathbf{J}_{MRi} + m_i\tilde{\mathbf{r}}_{0i}\mathbf{J}_{MTi}\right).$$
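The block structure of Eq. (7) can be assembled mechanically. Below is a minimal sketch with toy values for a single-joint system (all numbers illustrative, not from the chapter) that checks the assembled $\mathbf{H}_{sys}$ is symmetric, as a generalized inertia matrix must be.

```python
def skew(a):
    ax, ay, az = a
    return [[0.0, -az, ay], [az, 0.0, -ax], [-ay, ax, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def block(grid):
    """Stack a 2-D grid of sub-matrices into one matrix."""
    out = []
    for row in grid:
        for i in range(len(row[0])):
            out.append(sum((M[i] for M in row), []))
    return out

# Toy data for nv = 1 (values illustrative)
Ma  = 10.0
E   = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
S   = skew([0.1, 0.2, 0.3])                 # skew(r_0g)
StS = matmul(transpose(S), S)
Hw  = [[E[i][j] + Ma * StS[i][j] for j in range(3)] for i in range(3)]
Jtg = [[0.0], [0.5], [0.0]]                  # 3 x nv
Hwf = [[0.2], [0.0], [0.1]]                  # 3 x nv
Hf  = [[2.0]]                                # nv x nv

MaE  = [[Ma * E[i][j] for j in range(3)] for i in range(3)]
MaS  = [[Ma * S[i][j] for j in range(3)] for i in range(3)]
MaSt = transpose(MaS)

Hsys = block([[MaE, MaSt, Jtg],
              [MaS, Hw,   Hwf],
              [transpose(Jtg), transpose(Hwf), Hf]])

assert len(Hsys) == 7 and all(len(r) == 7 for r in Hsys)
assert all(abs(Hsys[i][j] - Hsys[j][i]) < 1e-12
           for i in range(7) for j in range(7))   # symmetric
```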

According to the Lagrange function,

$$\mathbf{F}_{sys} = \mathbf{H}_{sys}(\boldsymbol{\Phi})\ddot{\boldsymbol{\Phi}} + \mathbf{C}(\boldsymbol{\Phi},\dot{\boldsymbol{\Phi}})\dot{\boldsymbol{\Phi}} + \mathbf{f}(\dot{\boldsymbol{\Phi}}) + \mathbf{G}(\boldsymbol{\Phi}), \tag{8}$$

where $\mathbf{C}$ is an $(n_v + 6) \times (n_v + 6)$ matrix describing the Coriolis and centripetal effects, which are proportional to $\dot{q}_i^2$ and $\dot{q}_i\dot{q}_j$, respectively; $\mathbf{f}$ is an $(n_v + 6) \times 1$ vector that describes viscous and Coulomb friction (typically negligible in a rigid-body dynamics system); $\mathbf{G}$ is an $(n_v + 6) \times 1$ gyroscopic vector reflecting gravity loading; and $\mathbf{F}_{sys}$ is the vector of generalized forces:

$$\mathbf{F}\_{\rm sys} = \mathbf{N} + \mathbf{J}\_{ae} \, ^T \mathbf{N}\_{ae} \, . \tag{9}$$

In Eq. (9), $\mathbf{N}$ is an $(n_v + 6) \times 1$ vector including the forces ($\mathbf{F}_0$) and moments ($\mathbf{M}_0$) acting on the body and the torques acting on the joints ($\boldsymbol{\tau} = [\tau_1\ \tau_2\ \cdots\ \tau_{n_v}]^{\mathrm{T}}$); $\mathbf{N}_{ae}$ is a $6n_w \times 1$ vector including the external forces ($\mathbf{F}_e$) and moments ($\mathbf{M}_e$) from the soil that act on the wheels:


$$\mathbf{N} = \begin{bmatrix} \mathbf{F}\_0 \\ \mathbf{M}\_0 \\ \mathbf{\tau} \end{bmatrix}, \ \mathbf{N}\_{ae} = \begin{bmatrix} \mathbf{F}\_e^T(1) \ \mathbf{M}\_e^T(1) & \cdots & \mathbf{F}\_e^T(n\_w) \ \mathbf{M}\_e^T(n\_w) \end{bmatrix}^T.$$

The dynamics equation of a WMR including the wheel–soil interaction terramechanics is

$$\mathbf{H}\_{\rm sys}(\boldsymbol{\Phi})\ddot{\boldsymbol{\Phi}}\_{\rm sys} + \mathbf{C}(\boldsymbol{\Phi}, \dot{\boldsymbol{\Phi}})\dot{\boldsymbol{\Phi}} + \mathbf{f}(\dot{\boldsymbol{\Phi}}) + \mathbf{G}(\boldsymbol{\Phi}) - \mathbf{N} - \mathbf{J}\_{\rm ae}{}^{T}\mathbf{N}\_{\rm ae} = \boldsymbol{0} \,. \tag{10}$$

Let $\mathbf{C}(\boldsymbol{\Phi},\dot{\boldsymbol{\Phi}})\dot{\boldsymbol{\Phi}} + \mathbf{f}(\dot{\boldsymbol{\Phi}}) + \mathbf{G}(\boldsymbol{\Phi}) = \mathbf{D}$. The generalized accelerations can then be calculated as

$$\ddot{\boldsymbol{\Phi}}\_{sys} = \mathbf{H}\_{sys}^{-1} \left( \mathbf{N} + \mathbf{J}\_{ae} \, ^{\mathrm{T}} \mathbf{N}\_{ae} - \mathbf{D} \right) \,. \tag{11}$$
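In practice Eq. (11) is evaluated by solving the linear system $\mathbf{H}_{sys}\ddot{\boldsymbol{\Phi}} = \mathbf{N} + \mathbf{J}_{ae}^{\mathrm{T}}\mathbf{N}_{ae} - \mathbf{D}$ rather than forming the inverse explicitly. A minimal sketch with an illustrative 3-DOF inertia matrix (all values assumed, not from the chapter):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (toy dense solver)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Toy 3-DOF system; rhs stands for N + Jae^T Nae - D, already combined.
Hsys = [[4.0, 1.0, 0.0],
        [1.0, 3.0, 0.5],
        [0.0, 0.5, 2.0]]
rhs  = [1.2, -0.4, 0.9]

Phidd = solve(Hsys, rhs)   # generalized accelerations, Eq. (11)

# Residual check: Hsys @ Phidd must reproduce the right-hand side
res = [sum(Hsys[i][j] * Phidd[j] for j in range(3)) - rhs[i] for i in range(3)]
assert all(abs(v) < 1e-9 for v in res)
```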

The recursive Newton–Euler method is used to deduce an equation equivalent to Eq. (10) to calculate the unknown *D*.

The Newton–Euler equations are

$$\begin{cases} \mathbf{F}_i = m_i\dot{\mathbf{v}}_i \\ \mathbf{N}_i = \mathbf{I}_i\dot{\boldsymbol{\omega}}_i + \boldsymbol{\omega}_i\times\mathbf{I}_i\boldsymbol{\omega}_i \end{cases}. \tag{12}$$

According to D'Alembert's principle, the effect of $\mathbf{f}_i$ and $\mathbf{m}_i$ on link *i* through joint *i* is

$$\begin{cases} \mathbf{f}_i = \mathbf{F}_i - m_i\mathbf{g} + \sum_{j=i+1}^{n}\mathbf{S}_{ij}\mathbf{f}_j - \mathbf{S}_{ei}\mathbf{F}_{ei} \\ \mathbf{m}_i = \mathbf{M}_i + \sum_{j=i+1}^{n}\mathbf{S}_{ij}(\mathbf{l}_{ij}\times\mathbf{f}_j + \mathbf{m}_j) - \mathbf{S}_{ii}\left[\mathcal{A}_P(i)\mathbf{A}_i\,{}^{i}\mathbf{Z}_i q_i - \mathbf{c}_{ii}\right]\times(\mathbf{F}_i - m_i\mathbf{g}) - \mathbf{S}_{ei}(\mathbf{l}_{ie}\times\mathbf{F}_{ei} + \mathbf{M}_{ei}) \end{cases}, \tag{13}$$

where $\mathcal{A}_P(i)$ is 1 for a prismatic joint and zero for a rotational joint, $\mathbf{S}$ is the incidence matrix used to find the upper connection of a link, and $\mathbf{S}_{ei}$ indicates whether *i* is an end-point. The generalized force/moment of link *i* is

$$\boldsymbol{\tau}_i = \begin{cases} \mathbf{m}_i^{\mathrm{T}}\mathbf{A}_i\,{}^{i}\mathbf{Z}_i & \text{(rotational joint)} \\ \mathbf{f}_i^{\mathrm{T}}\mathbf{A}_i\,{}^{i}\mathbf{Z}_i & \text{(prismatic joint)} \end{cases}. \tag{14}$$

The forces and moments that act on the body are

$$\begin{cases} \mathbf{F}_0 = \sum_{j=1}^{n}\mathbf{S}_{0j}\mathbf{f}_j + m_0(\dot{\mathbf{v}}_0 - \mathbf{g}) \\ \mathbf{M}_0 = \sum_{j=1}^{n}\mathbf{S}_{0j}(\mathbf{c}_{0j}\times\mathbf{f}_j + \mathbf{m}_j) + \mathbf{I}_0\dot{\boldsymbol{\omega}}_0 + \boldsymbol{\omega}_0\times\mathbf{I}_0\boldsymbol{\omega}_0 \end{cases}, \tag{15}$$

where *S*0*<sup>j</sup>* is a flag vector that indicates whether *j* has a connection with the body. Following Eq. (10), let the accelerations of all the generalized coordinates and the external forces/moments be zero; it is then possible to obtain *D* with Eqs. (14) and (15).
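The trick described here, zeroing all generalized accelerations and external forces so that the inverse-dynamics pass returns exactly $\mathbf{D}$, can be illustrated on a one-joint system. The point-mass pendulum below is a stand-in for the chapter's recursive Newton–Euler pass; mass, length, and torque values are assumed.

```python
import math

# Inverse dynamics of a point-mass pendulum: tau = m*l^2*qdd + m*g*l*sin(q).
# This stands in for the recursive Newton-Euler computation in the text.
m, l, g = 2.0, 0.8, 9.81

def inverse_dynamics(q, qd, qdd):
    return m * l * l * qdd + m * g * l * math.sin(q)

q, qd, tau = 0.4, 1.5, 3.0

# Run inverse dynamics with zero accelerations and zero external forces:
# the output is the bias term D (here the gravity term only).
D = inverse_dynamics(q, qd, 0.0)

# Forward dynamics via Eq. (11): qdd = H^{-1}(tau - D), with H = m*l^2
qdd = (tau - D) / (m * l * l)

# Consistency: plugging qdd back must reproduce the applied torque
assert abs(inverse_dynamics(q, qd, qdd) - tau) < 1e-12
```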

#### **4.2 Wheel–soil interaction terramechanics models**

The soil applies three forces and three moments to each wheel, as shown in Fig. 4. The normal force *FN* can sustain the wheel. The cohesion and shearing of the soil can generate a resistance moment *MR* and a tractive force; the resistance force is caused by the wheel sinking into the soil; the composition of the tractive and resistance forces is called the drawbar pull *FDP*, which is the effective force of driving a wheel. As a wheel steers or when there is a slip angle, there is a side force *FS*, a steering resistance moment *MS,* and an overturning moment *MO* acting on the wheel.

#### **4.2.1 Driving model**


Figure 7 is a diagram of the lugged wheel–soil interaction mechanics, where *z* is wheel sinkage; *θ*1 is the entrance angle at which the wheel begins to contact the soil; *θ*2 is the exit angle at which the wheel loses contact with the soil; *θm* is the angle of maximum stress; *θ*′1 is the angle at which the soil starts to deform; *W* is the vertical load of the wheel; *DP* is the resistance force acting on the wheel; *T* is the driving torque of the motor; *r* is the wheel radius; *h* is the height of the lugs; *v* is the vehicle velocity; and *ω* is the angular velocity of the wheel. The soil interacts with the wheel in the form of continuous normal stress *σ* and shearing stress *τ*, which can be integrated to calculate the interaction mechanics. To improve the simulation speed, a simplified closed-form formula (Ding et al., 2009a) is adopted and improved considering the effect of the normal force:

$$\begin{cases} F_{DP} = \left[\dfrac{M_R(A^2 + B^2)}{r_s A C} - \dfrac{B F_N}{A}\right](1 + c_{P1} + c_{P2}s)\left(1 + c_{P3}\dfrac{\overline{W} - F_N}{\overline{W}}\right) \\[8pt] F_N = r b A\sigma_m + r_s b B\tau_m \\[4pt] M_R = \dfrac{r_s^2 C D\left[bc + \left[1 + c_M(\overline{W} - F_N)/\overline{W}\right]F_N\tan\varphi/(rA)\right]}{1 + r_s B D\tan\varphi/(rA)} \end{cases}. \tag{16}$$

In Eq. (16), *s* is the slip ratio defined by Ding et al. (2009); *cP*1 and *cP*2 are adopted to reflect the influence of the slip ratio on the drawbar pull, and *θm* can thus be simplified as half of *θ*1; *cP*3 and *cM* are parameters that compensate for the effect of the normal force; $\overline{W}$ is the average normal force of the wheels; and $\sigma_m = K_s r^N(\cos\theta_m - \cos\theta_1)^N$, $C = (\theta_1 - \theta_2)/2$,

$$A = \left(\cos\theta\_m - \cos\theta\_2\right) / \left(\theta\_m - \theta\_2\right) + \left(\cos\theta\_m - \cos\theta\_1\right) / \left(\theta\_1 - \theta\_m\right),$$

$$B = \left(\sin\theta\_m - \sin\theta\_2\right) / \left(\theta\_m - \theta\_2\right) + \left(\sin\theta\_m - \sin\theta\_1\right) / \left(\theta\_1 - \theta\_m\right), \text{ and}$$

$$\tau_m = (c + \sigma_m\tan\varphi)\left(1 - \exp\left\{-r_s\left[(\theta_1' - \theta_m) - (1 - s)(\sin\theta_1' - \sin\theta_m)\right]/k\right\}\right).$$

The newly introduced parameters are

$\theta_1 = \arccos[(r - z)/R_j]$, $K_s = k_c/b + k_\varphi$, $N = n_0 + n_1 s$, and $\theta_2 = 0$.

The radius *Rj* is a value between *r* and *r* + *h* that compensates for the lug effect (Ding et al., 2009b). The soil parameters in the equations are *kc*, the cohesive modulus; *kφ*, the frictional modulus; *N*, an improved soil sinkage exponent; *c*, the cohesion of the soil; *φ*, the internal friction angle; and *k*, the shearing deformation modulus. *n*0 and *n*1 are coefficients for calculating *N*, which are important when predicting the slip-sinkage of wheels.
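The driving model can be evaluated numerically. In the sketch below, all soil and wheel values are illustrative (not taken from the chapter), *θ*′1 is approximated by *θ*1, and the lug-compensated radius $r_s$ is assumed equal to $R_j$; only $\sigma_m$, $\tau_m$, *A*, *B*, and $F_N$ are computed.

```python
import math

# Illustrative soil/wheel values (assumptions, not the chapter's data)
r, b, h = 0.135, 0.11, 0.01        # wheel radius, width, lug height [m]
z, s    = 0.02, 0.2                # sinkage [m], slip ratio
kc, kphi = 1400.0, 8.2e5           # cohesive / frictional moduli
c, phi, k = 250.0, 0.60, 0.01      # cohesion [Pa], friction angle [rad], shear modulus [m]
n0, n1  = 1.1, 0.5

Rj = r + h / 2                     # assumed lug-effect radius
rs = Rj                            # assumed lug-compensated shear radius
theta1 = math.acos((r - z) / Rj)   # entrance angle
theta2 = 0.0                       # exit angle (simplified)
thetam = theta1 / 2                # angle of maximum stress
N  = n0 + n1 * s
Ks = kc / b + kphi

sigma_m = Ks * (r * (math.cos(thetam) - math.cos(theta1))) ** N
j = (theta1 - thetam) - (1 - s) * (math.sin(theta1) - math.sin(thetam))
tau_m = (c + sigma_m * math.tan(phi)) * (1 - math.exp(-rs * j / k))

A = ((math.cos(thetam) - math.cos(theta2)) / (thetam - theta2)
     + (math.cos(thetam) - math.cos(theta1)) / (theta1 - thetam))
B = ((math.sin(thetam) - math.sin(theta2)) / (thetam - theta2)
     + (math.sin(thetam) - math.sin(theta1)) / (theta1 - thetam))

FN = r * b * A * sigma_m + rs * b * B * tau_m   # normal force, Eq. (16)
print(round(theta1, 3), round(sigma_m), round(tau_m), round(FN, 1))
```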

Fig. 7. Lugged wheel–soil interaction mechanics

#### **4.2.2 Steering model**

The model for calculating the side force *FS* is (Ishigami & Yoshida, 2005)

$$F\_S = r b \int\_{\theta\_2}^{\theta\_1} \tau\_y(\theta) d\theta + \int\_{\theta\_2}^{\theta\_1} R\_b(r - h(\theta)\cos\theta) d\theta \,\,\,\,\tag{17}$$

$$\tau_y(\theta) = [c + \sigma(\theta)\tan\varphi]\left\{1 - \exp[-r(1-s)(\theta_1 - \theta)\tan\beta/k_y]\right\}, \tag{18}$$

$$R_b = \left[\cot X_c + \tan(X_c + \varphi)\right]\left[ch + \frac{1}{2}\rho h^2\left(\cot X_c + \frac{\cot^2 X_c}{\cot\varphi}\right)\right], \tag{19}$$

where $X_c = \pi/4 - \varphi/2$; $k_y$ is the shearing deformation modulus in the *y* direction; *β* is the skid angle; and *h* is the wheel height in the soil. The overturning moment is approximated by

$$M\_O \approx F\_S r \,. \tag{20}$$

The steering resistance moment *Ms* is considered to be zero, and the motion of steering is simulated employing the kinematics method, as the steering torque has little effect on the motion of the entire rover, and the model is still under development.
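The shear-stress term of Eq. (17) lends itself to simple numerical quadrature. The sketch below integrates it with the trapezoid rule under an assumed triangular normal-stress profile *σ*(*θ*); all parameter values are illustrative, and the bulldozing-resistance term of Eq. (17) would be handled analogously.

```python
import math

# Assumed wheel/soil values (illustrative only)
r, b, s = 0.135, 0.11, 0.2
c, phi, ky, beta = 250.0, 0.60, 0.012, 0.15   # cohesion, friction angle, modulus, skid angle
theta1, theta2 = 0.6, 0.0
thetam, sigma_m = theta1 / 2, 6000.0

def sigma(t):
    """Assumed triangular stress profile: peak sigma_m at thetam, zero at ends."""
    return sigma_m * (t / thetam if t <= thetam else (theta1 - t) / (theta1 - thetam))

def tau_y(t):
    """Lateral shear stress, Eq. (18)."""
    jy = r * (1 - s) * (theta1 - t) * math.tan(beta)
    return (c + sigma(t) * math.tan(phi)) * (1 - math.exp(-jy / ky))

# Trapezoid rule over [theta2, theta1] for the shear term of Eq. (17)
n, dt = 200, (theta1 - theta2) / 200
FS_shear = r * b * sum((tau_y(theta2 + i * dt) + tau_y(theta2 + (i + 1) * dt)) / 2
                       for i in range(n)) * dt
print(round(FS_shear, 2))
```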

### **4.3 Deformable rough-terrain geometry modeling (Ding, 2009)**

#### **4.3.1 Calculation of contact area**

For simplicity, the literature often assumes that wheel–soil interaction occurs at a single point, which may result in large errors when the robot moves over deformable rough terrain, and even in simulation failure because of abrupt changes in wheel sinkage and other forces. Calculating the interaction area of a wheel moving on soft soil is important for high-fidelity simulation, which is employed to predict and transform the interaction mechanics.

Figure 8 shows the interaction area of a wheel moving on rough terrain. The known parameters are the position of a wheel's center *W*, (*xw*, *yw*, *zw*); the yaw angle of the wheel, $\varphi\_w$; and the DEM of the terrain. The interaction area is simplified as an inclined plane determined by points *P*1, *P*2, and *P*3, the normal vector of which is

$$\mathbf{z}\_{\mathbf{e}} = \begin{bmatrix} A\_t \\ B\_t \\ C\_t \end{bmatrix} = \begin{bmatrix} x\_2 - x\_1 \\ y\_2 - y\_1 \\ z\_2 - z\_1 \end{bmatrix} \times \begin{bmatrix} x\_3 - x\_1 \\ y\_3 - y\_1 \\ z\_3 - z\_1 \end{bmatrix}. \tag{21}$$

The equation of the inclined plane *P*1*P*2*P*3 is therefore

$$A\_t(\mathbf{x} - \mathbf{x}\_1) + B\_t(y - y\_1) + C\_t(z - z\_1) = 0 \tag{22}$$

*P*, the foot of the perpendicular line drawn from point *w* to plane *P*1*P*2*P*3, is located on the line $(x - x\_w)/A\_t = (y - y\_w)/B\_t = (z - z\_w)/C\_t$. The coordinates of point *E* can be solved by substituting the line equation into Eq. (22). The length of *wP* is thus deduced:

$$\overline{wP} = \frac{\left| A\_t(\mathbf{x}\_w - \mathbf{x}\_1) + B\_t(\mathbf{y}\_w - \mathbf{y}\_1) + \mathbf{C}\_t(\mathbf{z}\_w - \mathbf{z}\_1) \right|}{\sqrt{\mathbf{A}\_t^2 + \mathbf{B}\_t^2 + \mathbf{C}\_t^2}}. \tag{23}$$

The wheel sinkage is then determined as

$$z = \overline{Pe} = r - \overline{wP} \,. \tag{24}$$

Fig. 8. Interaction area of wheel moving on deformable rough terrain
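Eqs. (21)–(24) amount to a point-to-plane distance computation. A minimal NumPy sketch (names assumed, not code from the chapter):

```python
import numpy as np

def wheel_sinkage(P1, P2, P3, W, r):
    """Sinkage of Eq. (24): z = r - wP, where wP is the distance from
    the wheel center W to the plane through P1, P2, P3 (Eqs. 21-23)."""
    P1, P2, P3, W = (np.asarray(p, dtype=float) for p in (P1, P2, P3, W))
    n = np.cross(P2 - P1, P3 - P1)                   # z_e = [A_t, B_t, C_t], Eq. (21)
    wP = abs(np.dot(n, W - P1)) / np.linalg.norm(n)  # Eq. (23)
    return r - wP                                    # Eq. (24)
```

For example, a wheel of radius 0.1 m whose center sits 0.08 m above a horizontal plane has a sinkage of 0.02 m.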

Point *P*2 is used to illustrate how to obtain the coordinates of points *P*1, *P*2, and *P*3. A wheel moving on a random plane can be decomposed into climbing up/down a slope at an angle *θcl* and traversing a slope with an inclination angle *θcr*, as shown in Fig. 4. The *x* and *y* coordinates of point *P*2 are then


$$\begin{cases} \mathbf{x}\_{P2} = \mathbf{x}\_w + r \cos \theta\_{cr} \\ y\_{P2} = y\_w + r \sin \theta\_1 \cos \theta\_{cl} \end{cases} \tag{25}$$

The coordinates of points *A*1, *A*2, and *A*3 are easy to find by referring to the DEM. *zP*2 can then be determined using the same method as that for calculating point *E*.

#### **4.3.2 Terminal force transformation matrix**

Figure 9 shows the forces and moments of the soil that act on a wheel. {Σ*e*} and {Σ*w*} are coordinate systems with the same orientation and different origins, at the end-point and wheel center, respectively.

Fig. 9. Force analysis of wheel moving on random slope

*x<sup>e</sup>* is the line of intersection of the wheel–soil interaction plane and the plane with an included angle of *φw* to the *x* axis: $x \tan \varphi\_w - y + D = 0$. It is deduced that

$$\mathbf{x}\_{e} = \begin{bmatrix} C\_t & C\_t \tan \varphi\_w & -A\_t - B\_t \tan \varphi\_w \end{bmatrix}^{\mathrm{T}}. \tag{26}$$

The vector direction of *ye* is then

$$\mathbf{y}\_{e} = \mathbf{z}\_{e} \times \mathbf{x}\_{e} = \begin{bmatrix} -A\_t B\_t - \left(B\_t^{2} + C\_t^{2}\right) \tan \varphi\_w \\ C\_t^{2} + A\_t \left(A\_t + B\_t \tan \varphi\_w\right) \\ A\_t C\_t \tan \varphi\_w - B\_t C\_t \end{bmatrix} \tag{27}$$


*θcl* (*θcr* ) is the angle between *x<sup>e</sup>* (*ye*) and the horizontal plane, which can be calculated as

$$\begin{cases} \theta\_{cl} = \arcsin\left[\left(-A\_t - B\_t \tan \varphi\_w\right) / \left|X\_1\right|\right] \\ \theta\_{cr} = \arcsin\left[C\_t \left(A\_t \tan \varphi\_w - B\_t\right) / \left|X\_2\right|\right] \end{cases} \tag{28}$$

$$\text{where } X\_1 = \sqrt{\mathbb{C}\_t^2 \left(1 + \tan^2 \varphi\_w\right) + \left(A\_t + B\_t \tan \varphi\_w\right)^2},$$

$$X\_2 = \sqrt{X\_3^2 \left[A\_t^2 + C\_t^2 + 2A\_t B\_t \tan \varphi\_w + \left(B\_t^2 + C\_t^2\right) \tan^2 \varphi\_w\right]}, \text{ and } X\_3 = \sqrt{A\_t^2 + B\_t^2 + C\_t^2}.$$

According to *xe*, *ye,* and *ze*, the transformation matrix from {Σ*e*} to {Σ*I*} is

$$\mathbf{A}\_{e} = \begin{bmatrix} \frac{C\_{t}}{X\_{1}} & \frac{-A\_{t}B\_{t} - \left(B\_{t}^{2} + C\_{t}^{2}\right)\tan\varphi\_{w}}{X\_{2}} & \frac{A\_{t}}{X\_{3}}\\ \frac{C\_{t}\tan\varphi\_{w}}{X\_{1}} & \frac{C\_{t}^{2} + A\_{t}^{2} + A\_{t}B\_{t}\tan\varphi\_{w}}{X\_{2}} & \frac{B\_{t}}{X\_{3}}\\ \frac{-A\_{t} - B\_{t}\tan\varphi\_{w}}{X\_{1}} & \frac{A\_{t}C\_{t}\tan\varphi\_{w} - B\_{t}C\_{t}}{X\_{2}} & \frac{C\_{t}}{X\_{3}} \end{bmatrix}. \tag{29}$$

The external forces and torques that act at the wheel–soil interaction point are

$$\begin{cases} \prescript{e}{}{\mathbf{F}}\_{e} = \prescript{w}{}{\mathbf{F}}\_{e} = \begin{bmatrix} F\_{DP} & F\_{S} & F\_{N} \end{bmatrix}^{\mathrm{T}}\\ \prescript{e}{}{\mathbf{M}}\_{e} = \begin{bmatrix} M\_{O} - rF\_{S} & -M\_{R} + rF\_{DP} & M\_{S} \end{bmatrix}^{\mathrm{T}}. \end{cases} \tag{30}$$

The equivalent forces and moments that act on the wheel in the inertial coordinates {Σ*I*} are

$$\begin{cases} \mathbf{F}\_{e} = \mathbf{A}\_{e} \, ^{e}\mathbf{F}\_{e} \\ \mathbf{M}\_{e} = \mathbf{A}\_{e} \, ^{e}\mathbf{M}\_{e} \end{cases} . \tag{31}$$
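Eqs. (26)–(31) condense into a short routine that builds the transformation matrix from the orthogonal triad and maps the wheel–soil forces into {Σ*I*}. This is a sketch with assumed names, not code from the chapter:

```python
import numpy as np

def terminal_transform(A_t, B_t, C_t, phi_w):
    """Transformation matrix A_e of Eq. (29): its columns are the
    normalized vectors x_e, y_e, z_e of Eqs. (26), (27), and (21)."""
    t = np.tan(phi_w)
    z_e = np.array([A_t, B_t, C_t])
    x_e = np.array([C_t, C_t * t, -A_t - B_t * t])  # Eq. (26)
    y_e = np.cross(z_e, x_e)                        # Eq. (27)
    return np.column_stack([v / np.linalg.norm(v) for v in (x_e, y_e, z_e)])

def wheel_forces_inertial(A_e, F_DP, F_S, F_N, M_O, M_R, M_S, r):
    """Eqs. (30)-(31): wheel-soil forces and moments mapped to the
    inertial frame."""
    F_e = np.array([F_DP, F_S, F_N])
    M_e = np.array([M_O - r * F_S, -M_R + r * F_DP, M_S])
    return A_e @ F_e, A_e @ M_e
```

Because the three columns are mutually orthogonal unit vectors, the matrix is a rotation; for a horizontal plane (*A<sub>t</sub>* = *B<sub>t</sub>* = 0, *C<sub>t</sub>* = 1) with *φw* = 0 it reduces to the identity.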

#### **5. Research on simulation for planetary WMRs at SKLRS**

Simulation technology has played an important role in the research of planetary rovers at SKLRS for more than 10 years. Different simulation methods are used for different purposes. In collaboration with Tohoku University, Japan, the Matlab toolbox SpaceDyn is used to develop a simulation system for control algorithm verification, by integrating high-fidelity terramechanics models. To realize real-time simulation for hardware-in-the-loop testing and successive teleoperation, Vortex, a real-time simulation platform, is adopted for the development of RoSTDyn in C++.

#### **5.1 ADAMS-based simulation for planetary WMRs**

Commercial dynamics software ADAMS is adopted in the mechanical design of rovers for purposes such as performance analysis, optimization design, and control algorithm verification. The Contact model, Tire model, and self-developed terramechanics model are used to predict wheel–soil interaction mechanics.


### **5.1.1 Simulation with Contact model**

Using the Contact model provided by ADAMS software is an easy way to realize dynamics simulation of a wheeled rover. The wheel–soil interaction is considered as solid-to-solid contact. Different frictional coefficients and damping ratios can be set by users according to their understanding of the soil properties. The structured terrain can be constructed with ADAMS or imported from computer-aided design software such as Pro/E. This approach is simple and reflects the characteristics of the mechanism, at least at the kinematics level, although the fidelity of the wheel–soil interaction mechanics is poor, and it is thus used during the initial phase of rover design. For instance, Tao et al. designed a six-wheeled robot with good locomotion performance on rough terrain, and they simulated different locomotion modes with ADAMS to verify the concept of its mechanism, as shown in Fig. 10 (Tao et al., 2006).

Fig. 10. Simulation for WMR with ADAMS and Contact model

### **5.1.2 Comprehensive simulation using Tire model**

The structure of a comprehensive virtual simulation system for lunar rovers is shown in Fig. 11 (Gao et al., 2011). The simulation system integrates the software of ADAMS, Pro/E, 3DS Max, and Matlab to realize the functions of rover vehicle modeling, terrain modeling, kinematics/dynamics analysis including the mechanics of wheel–soil interaction, and control, respectively.

Fig. 11. Diagram of comprehensive virtual simulation system for lunar rovers

The 3D model of a lunar rover constructed using Pro/E can be imported into ADAMS through an ADAMS/Pro connection module; the rough terrain model of the lunar surface produced with 3DS Max can be imported into an ADAMS/Tire module through an interface; the PAC2002 Tire model is called by the ADAMS/Tire module for prediction of the terramechanics of the wheel–soil interaction. By configuring the simulation parameters for the rover model, the terrain, and Tire model in ADAMS, a virtual rover is constructed whose kinematics and dynamics can be solved by ADAMS/Solver. Figure 12 presents an example.


In directing a virtual rover to follow a planned path while compensating for slipping on deformable rough terrain, the required control strategy can be realized with Matlab. A virtual rover created with ADAMS software can be controlled through the interface between ADAMS/Control and Matlab/Simulink. ADAMS/Control provides an interactive environment for establishing and demonstrating an S-function "controlled object," which can be controlled with the Simulink toolbox of Matlab. In each integration step, ADAMS/Control is called as a subprogram by Simulink. The control instructions generated by Simulink are then directed to the corresponding mechanisms (such as the driving motors of the wheels) of lunar rovers in ADAMS through ADAMS/Control. The rover's motion is then calculated by ADAMS/Solver on the basis of dynamics simulation, and the related information is fed back to Simulink through ADAMS/Control. The entire transmission process is automatic and transparent.

Fig. 12. Dynamics simulation of lunar rover with ADAMS

ADAMS/Tire is a module for predicting the interactive forces and moments between wheels and roads. Several types of tire models are provided by ADAMS, including MF-Tyre, UA, Ftire, SWIFT, PAC89, and PAC2002. Each tire model has its own advantages and disadvantages. After careful analysis and comparison, the PAC2002 model was finally chosen for calculation of the mechanics of the wheel–soil interaction of a lunar rover. This model is applicable to the interaction between a wheel and 3D roads or obstacles.

The PAC2002 model uses trigonometric functions to fit the experimental data and thus predict the forces and moments that the soil exerts upon a wheel, including the drawbar pull *Fx*, lateral force *Fy*, sustaining force *Fz*, overturning moment *Mx*, resistance torque *My*, and aligning torque *Mz*. The general expression of the function, which is called the magic formula, is

$$Y(X) = D \cos\left(C \arctan\left(BX - E[BX - \arctan(BX)]\right)\right),\tag{32}$$

where *Y*(*X*) is a force or moment, the independent variable *X* reflects the effect of the slip angle or longitudinal slip ratio of a wheel for an actual situation, parameters *B*, *C*, and *D* are determined by the wheel's vertical load and camber angle, while *E* is the curvature factor.
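Eq. (32) can be transcribed directly, which is useful for fitting the coefficients or for sanity checks. The names are assumed and the coefficient values in the test below are placeholders, not identified rover data:

```python
import math

def magic_formula(X, B, C, D, E):
    """Magic formula of Eq. (32), cosine form as given in the text:
    Y(X) = D * cos(C * arctan(B*X - E*(B*X - arctan(B*X))))."""
    BX = B * X
    return D * math.cos(C * math.atan(BX - E * (BX - math.atan(BX))))
```

At *X* = 0 the formula returns the peak factor *D*, and |*Y*| never exceeds *D*.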


The unknown parameters can be determined by a data fitting method based on experimental results.

#### **5.1.3 Simulation using self-developed terramechanics model**

The simulation fidelity is improved to some extent using the Tire model instead of the Contact model. However, phenomena such as severe slip-sinkage and the lug effect still cannot be reflected well. ADAMS provides a method for users to embed newly developed terramechanics models with higher precision (Jiao, 2009).

The GFORCE command is used to define the wheel–soil interaction mechanics, which consist of three mutually orthogonal translational force components and three orthogonal torque components. As the force and torque expressions are lengthy and complex, the GFOSUB evaluation subroutine, written in FORTRAN, is used to compute the wheel–soil interaction mechanics applied by a GFORCE statement. The wheel–soil interaction mechanics program is compiled and linked to generate an object file (\*.obj). The Create Custom Solver command is then used to generate a dynamic link library (\*.dll) file and a library file (\*.lib). The general force, GFORCE, which is applied to the center of the wheel, is set as shown in Fig. 13 to call the wheel–soil interaction function through the \*.dll file.


Fig. 13. GFORCE Subroutine

#### **5.2 Matlab-based high-fidelity simulation platform for planetary WMRs (Ding, 2010a)**

#### **5.2.1 Implementation of simulation platform**

A numerical simulation program based on a Matlab toolbox called SpaceDyn was developed (Yoshida, 2000). The principle diagram is shown in Fig. 14. Given the DEM terrain, soil parameters, and rover model parameters, the program calculates the wheel–soil interaction area, predicts the external forces that act on the wheel, calculates the accelerations of the generalized coordinates on the basis of the dynamics model, and then integrates them to obtain their velocities and positions on the basis of kinematics equations.
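The per-step cycle described above can be sketched as follows. The callback names (`contact_area`, `wheel_forces`, `dynamics`) are hypothetical stand-ins for the platform's DEM, terramechanics, and SpaceDyn dynamics routines:

```python
import numpy as np

def simulate(q0, v0, dt, n_steps, contact_area, wheel_forces, dynamics):
    """One possible shape of the loop: find the wheel-soil contact,
    predict external forces, solve for the generalized accelerations,
    and integrate velocities and positions."""
    q, v = np.asarray(q0, float), np.asarray(v0, float)
    history = [q.copy()]
    for _ in range(n_steps):
        contact = contact_area(q)            # wheel-soil interaction area (DEM)
        F_ext = wheel_forces(q, v, contact)  # terramechanics model
        a = dynamics(q, v, F_ext)            # generalized accelerations
        v = v + a * dt                       # semi-implicit Euler integration
        q = q + v * dt
        history.append(q.copy())
    return np.array(history)
```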


Fig. 14. Principle diagram of dynamics simulation

### **5.2.2 Experimental validation**

El-Dorado II, a four-wheeled mobile robot developed at the Space Robotics Laboratory of Tohoku University in Japan, was used to validate the simulation. The robot uses four force/torque (F/T) sensors to measure the wheel–soil interaction terramechanics. A visual odometry system was developed to measure the position of the rover body and the slip ratio of the wheels. The entrance angles used for calculating the wheel sinkage were measured with an angle meter. Two groups of experiments were performed. In group 1, resistance forces were applied to the rover with counterweights from 0 to 60 N in intervals of 10 N, to generate different slip ratios. In group 2, the rover was controlled to climb slopes ranging from 0° to 15° in intervals of 3° (Fig. 15).

Fig. 15. Slope-climbing experiment using El-Dorado II robot


Fig. 19. Path-following simulation


The parameters of Toyoura soft sand are identified from the experimental data: *Ks* = 1796 kPa/m<sup>*N*</sup>, *c* = 24.5 Pa, *φ* = 35.75°, and *K* = 10.45 mm. *ky* is 19 mm. When the robot climbs a slope, the remaining parameters are *n*0 = 0.66, *n*1 = 0.72, *cP*1 = –0.379, *cP*2 = 0.616, *cP*3 = –0.448, and *CM* = 0.214; on flat terrain, the parameters are *n*0 = 0.63, *n*1 = 0.72, *cP*1 = –0.276, *cP*2 = 0.633, *cP*3 = –0.304, and *CM* = 0.354.

Comparisons of the simulation and experimental results are shown in Figs. 16 and 17. Not only can the motion of the robot be predicted with high fidelity, as indicated by the slip ratio, so too can the drawbar pull, moment of resistance, the normal force, and wheel sinkage.

Fig. 16. Simulation and experimental results for robot moving on flat terrain

#### **5.2.3 Simulation for deformable rough terrain**

The robot was controlled to move from (0.5 m, 0.5 m) to (5 m, 5 m) on the randomly generated rough terrain shown in Fig. 17, with an initial yaw angle of 45°. While moving, the robot deviates from the scheduled path because of the inclination angle of the terrain. Figure 18 shows the slope angles that wheel number 4 traverses, the RPY (roll, pitch, and yaw) angles of the body and *q*1 and *q*2 joint angles (*q*1 = –*q*2), the slip ratios, and normal forces. The simulation platform was used to verify the slip-ratio-coordinated control (Ding, 2010b) and path-following strategy (Ding, 2009c), as shown in Fig. 19.

Fig. 17. Simulation and experimental results for robot climbing slope


 (a) Rough terrain and wheel trajectories (b) RPY and q1 and q2 angles Fig. 18. Simulation results for El-Dorado II moving on deformable rough terrain

#### **5.3 RoSTDyn: Vortex-based high-fidelity and real-time simulation platform**

#### **5.3.1 Structure of RoSTDyn**

RoSTDyn is composed of five modules, as shown in Fig. 20 (Li et al., 2012). The module of the planetary rover model is used to create the simulation object, the rover model, which is composed of a physical model and a scenic model. The physical model is the real model used in collision detection and dynamics calculation; the scenic model does not participate in any calculation and is driven according to the messages transferred from the physical model, so it can be created vividly with 3D modeling software such as 3DS Max or Creator.

The controlling module focuses on realizing the interaction between the user and the simulation platform. This module is mainly used to control the movable joints of the rover and thus control, for example, the speed of the driving wheel, the turning degree of a wheel, and the posture of the solar panels and mast.

The module of the terrain model is used to simulate the terrain. It also includes a physical model and scene model. For the terrain, the physical model is a file of the node information including X, Y, and Z; the scenic model is generated by the 3D modeling software on the basis of node information.

The contact-area computing module is used to compute the parameters for the contact area between the wheel and terrain. These parameters are the precondition of the interaction force module, and this module will be introduced in detail in part III.

Advances in Simulation of Planetary Wheeled Mobile Robots 399

Fig. 20. Structure of RoSTDyn

The interaction force computing module computes the interaction forces between the wheels and the terrain. These forces act on the rover model and drive it to move and turn. The five modules are integrated in Vortex; each module does its own work and transfers the necessary information to the others, and in this way the five modules cooperate in the simulation. Snapshots of rover simulation with RoSTDyn are shown in Fig. 21.
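The cooperation of the five modules can be sketched as a fixed-rate pipeline, one pass per simulation step. All class and method names below are hypothetical placeholders, not the RoSTDyn or Vortex API:

```python
class RoverSimStep:
    """One fixed-rate step of a five-module pipeline in the spirit of
    RoSTDyn's structure (illustrative names, not the actual API)."""

    def __init__(self, controller, terrain, contact, forces, rover, dt=1.0 / 80):
        self.controller, self.terrain = controller, terrain
        self.contact, self.forces, self.rover = contact, forces, rover
        self.dt = dt  # 80 Hz, matching the rate used in the timing test

    def step(self, user_input):
        # 1. Controlling module: user input -> joint commands.
        commands = self.controller.joint_commands(user_input)
        # 2. Terrain model module: heights under each wheel.
        heights = [self.terrain.height(x, y) for x, y in self.rover.wheel_xy()]
        # 3. Contact-area computing module: wheel-terrain contact parameters.
        patches = self.contact.patches(self.rover, heights)
        # 4. Interaction force computing module: forces on each wheel.
        wrenches = self.forces.wheel_forces(patches)
        # 5. Rover model: advance the dynamics by one fixed step.
        self.rover.advance(commands, wrenches, self.dt)
```

Each module touches only its own inputs and outputs, which is what lets the five modules be developed separately yet run together in one loop.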

Fig. 21. Snapshots of rover simulation with RoSTDyn

### **5.3.2 Testing of real-time properties**

To test the simulation speed, an experiment is designed: the rover's velocity is set to a constant value; at the moment the simulation starts, a stopwatch records the physical time, while the simulation time is exported by the program. To isolate the calculation speed, the test is repeated with the scene display closed. Table 2 lists the time the rover takes to pass over three types of terrain; the display is open for the first three rows and closed for the last three.
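The stopwatch comparison amounts to measuring a real-time factor *ts*:*tp*. A minimal sketch, assuming a `step(dt)` callable that advances the model by one fixed step (the function is illustrative, not the chapter's test program):

```python
import time

def real_time_ratio(step, dt, sim_duration):
    """Run a fixed-step simulation for `sim_duration` simulated seconds
    and return ts:tp, the ratio of simulated to physical (wall-clock)
    time. A ratio of 1.0 means the simulation keeps up with real time."""
    steps = int(sim_duration / dt)
    start = time.perf_counter()
    for _ in range(steps):
        step(dt)                      # advance the model by one fixed step
    elapsed = time.perf_counter() - start
    return (steps * dt) / elapsed
```

With the display open, the chapter measures a ratio of about 0.8; with the display closed, the ratio reaches about 1:1.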


| Terrain (display) | Simulation time *ts* (s) | Physical time *tp* (s) | *ts*:*tp* |
|---|---|---|---|
| Flat (Yes) | 24.33 | 30.45 | 0.799:1 |
| Slope (Yes) | 34.50 | 42.91 | 0.804:1 |
| Random (Yes) | 43.50 | 53.76 | 0.809:1 |
| Flat (No) | 41.25 | 41.07 | 1:1 |
| Slope (No) | 38.50 | 38.43 | 1:1 |
| Random (No) | 45.00 | 44.82 | 1:1 |

Table 2. Time that the rover passes over three types of terrain

Here, because the rate is 80 Hz, the display consumes much of the step time, and the time ratio is about 0.8; however, if the display is closed, the ratio reaches 1:1. Therefore, the calculation of RoSTDyn is done in real time. To demonstrate this, the rate is multiplied by a factor of 10, and it is found that the time ratio decreases only to 0.8. This shows that it is still possible to increase the simulation speed of RoSTDyn.

### **6. Conclusions and future work**

Virtual simulation helps guarantee the successful exploration of planets by WMRs, as it plays an important role in both the R&D phase and the exploration phase of rovers. Customized simulation tools, or tools with dedicated simulation functions, have been developed to support the R&D of planetary rovers, because general-purpose simulation tools do not meet the requirements of rover exploration missions well. The key technologies for developing high-fidelity comprehensive simulation systems include recursive dynamics, wheel–soil interaction terramechanics, and deformable rough terrain modeling. At SKLRS, different methods have been employed for rover simulation. ADAMS software is adopted, together with the Contact model, the Tire model, and the self-developed GFORCE model, during the design phase of rovers. Matlab is used to develop a high-fidelity simulation platform that embeds the terramechanics model and the rough terrain model to support control strategy simulation. Vortex is used to develop RoSTDyn, which realizes real-time simulation and thus supports the teleoperation of rovers.

In the future, continuous teleoperation of planetary rovers using a 3D predictive display will be explored on the basis of the high-fidelity/real-time simulation platform. A faster-than-real-time simulation system will be developed to support the supervised control of rovers.

### **7. Acknowledgments**

This work was supported by the National Natural Science Foundation of China (50975059/61005080), the Postdoctoral Foundation of China (20100480994), the Foundation of the Chinese State Key Laboratory of Robotics and System (grant No. SKLRS200801A02), the Key Natural Science Foundation of Heilongjiang Province in China (ZJG0709), the Postdoctoral Foundation of Heilongjiang Province, and the "111" Project (B07018).


## *Edited by Zoran Gacovski*

This book consists of 18 chapters divided into four sections: Robots for Educational Purposes, Health-Care and Medical Robots, Hardware - State of the Art, and Localization and Navigation. The first section contains four chapters covering the autonomous mobile robot Emmy III, the KCLBOT nonholonomic mobile robot, and a general overview of educational mobile robots. The second section covers walking support robots, a control system for wheelchairs, a leg-wheel mechanism as a mobile platform, a micro mobile robot for abdominal use, and the influence of robot size on psychological treatment. The third section contains chapters on an I2C bus system, vertical displacement service robots, kinematics and dynamics models of quadruped robots, and Epi.q (hybrid) robots. Finally, the last section covers skid-steered vehicles, robotic exploration (new place recognition), omnidirectional mobile robots, ball-wheel mobile robots, and planetary wheeled mobile robots.

Mobile Robots - Current Trends
