**Part 4**

**Programming and Algorithms**

470 Robotic Systems – Applications, Control and Programming



## **Robotic Software Systems: From Code-Driven to Model-Driven Software Development**

Christian Schlegel, Andreas Steck and Alex Lotz
*Computer Science Department, University of Applied Sciences Ulm, Germany*

### **1. Introduction**

Advances in robotics and cognitive sciences have stimulated expectations for the emergence of new generations of robotic devices that interact and cooperate with people in ordinary human environments (robot companions, elder care, home health care), that seamlessly integrate themselves into complex environments (domestic, outdoor, public spaces), that fit into different levels of system hierarchies (human-robot co-working, hyper-flexible production cells, cognitive factory), that can fulfill different tasks (multi-purpose systems) and that are able to adapt themselves to different situations and changing conditions (dynamic environments, varying availability and accessibility of internal and external resources, coordination and collaboration with other agents).

Unfortunately, so far, steady improvements in specific robot abilities and robot hardware have not been matched by corresponding robot performance in real-world environments. On the one hand, simple robotic devices for tasks such as cleaning floors and cutting the grass have met with growing commercial success; robustness and single-purpose design are the key quality factors of these simple systems. At the same time, more sophisticated robotic devices such as *Care-O-Bot 3* (Reiser et al., 2009) and *PR2* (Willow Garage, 2011) have not yet met commercial success; hardware and software complexity is their distinguishing factor.

Advanced robotic systems are systems of systems, and their complexity is tremendous. Complex means they are built by integrating an increasingly larger body of heterogeneous (robotics, cognitive, computational, algorithmic) resources. The need for these resources arises from the overwhelming number of different situations an advanced robot is faced with during the execution of a multitude of tasks. Despite the expended effort, even sophisticated systems are still not able to perform at an expected and appropriate level of overall quality of service in complex scenarios in real-world environments. By quality of service we mean the set of system-level non-functional properties that a robotic system should exhibit in order to operate appropriately in an open-ended environment, such as robustness to exceptional situations, performance despite limited resources, and aliveness over long periods of time.

Since vital functions of advanced robotic systems are provided by software, and software dominance is still growing, the above challenges of system complexity are closely related to the need to master software complexity. Mastering software complexity becomes pivotal for exploiting the capabilities of advanced robotic components and algorithms. Tailoring modern software engineering approaches to the needs of robotics is seen as decisive for significant progress in system integration for advanced robotic systems.

Based on these observations, we assume that the next big step in advanced robotic systems towards mastering their complexity and their overall integration into any kind of environment and system depends on separation of concerns. Since software plays a pivotal role in advanced robotic systems, we illustrate how to tailor a service-oriented, component-based software approach to robotics, how to support it by a model-driven approach and matching tools, and how this enables a separation of concerns that has so far not been addressed appropriately in robotics software systems.

*Experienced software engineers* should get insights into the specifics of robotics and should better understand what the robotics community needs and expects from the software engineering community. *Experienced roboticists* should get detailed insights into how model-driven software development *(MDSD)* and its design abstraction is an approach towards system-level complexity handling and towards decoupling robotics knowledge from implementation technologies. *Practitioners* should get insights into how separation of concerns in robotics is supported by a service-oriented, component-based software approach, and see that the corresponding tools are already mature enough to make life easier for developers of robotics software and for system integrators. *Experts in application domains* and *business consultants* should gain insights into the maturity levels of robotic software systems and of the corresponding approaches from a short-term, medium-term and long-term perspective. *Students* should understand how design abstraction, as a recurrent principle of computer science applied to software systems, results in *MDSD*, how *MDSD* can be applied to robotics, how it provides a perspective to overcome the vicious circle of robotics software starting from scratch again and again, and how software engineering and robotics can cross-fertilize each other.

## **2. Software engineering in robotics**

Complex systems are rarely built from scratch; their design is typically partitioned according to the variety of technological concerns. In robotics, these are, among others, mechanics, sensors and actuators, control and algorithms, computational infrastructure and software systems. In general, successful engineering of complex systems heavily relies on the *divide and conquer* principle in order to reduce complexity. Successful markets typically come up with precise role assignments for participants and stakeholders, ranging from component developers via system integrators and experts of an application domain to business consultants and end-users.

Sensors, actuators, computers and mechanical parts are readily available as commercial off-the-shelf black-box components with precisely specified characteristics. They can be re-used in different systems and they are provided by various dedicated suppliers. In contrast, most robotics software systems are still based on proprietarily designed software architectures. Very often, robotics software is tightly bound to specific robot hardware, processing platforms, or communication infrastructures. In addition, assumptions and constraints about tasks, operational environments, and robotic hardware are hidden and hard-coded in the software implementation.

Software for robotics is typically embedded, concurrent, real-time, distributed, data-intensive and must meet specific requirements, such as safety, reliability and fault-tolerance. From this point of view, the software requirements of advanced robots are similar to those of software systems in other domains, such as avionics, automotive, factory automation, telecommunication and even large-scale information systems. In these domains, modern software engineering principles are rigorously applied to separate roles and responsibilities in order to cope with the overall system complexity.

In robotics, tremendous code-bases (libraries, middleware, etc.) coexist without being interoperable, and each tool has attributes that favor its use. Although one would like to reuse existing and matured software building blocks in order to reduce development time and costs, increase robustness and take advantage of specialized and second-source suppliers, up to now this is not possible. Typically, experts for application domains need to become experts for robotics software to make use of robotics technology in their domain. So far, robotics software systems do not even enforce a separation of roles for component developers and system integrators.

The current situation in software for robotics is caused by the lack of separation of concerns. In consequence, role assignments for robotics software are not possible, there is nothing like a software component market for robotic systems, there is no separation between component developers and system integrators, and there is even no separation between experts in robotics and experts in application domains. This is seen as a major and serious obstacle towards developing a market of advanced robotic systems (for example, all kinds of cognitive robots, companion systems, service robots).

The current situation in software for robotics can be compared with the early times of the *World Wide Web (WWW)*, when one had to be a computer engineer to set up web pages. The *WWW* only turned into a universal medium once tools became available which made it accessible and which support separation of concerns: domain experts like journalists can now easily provide content without bothering with technical details, and there is a variety of specialized, competing and interoperable tools available, provided by computer engineers, designers and others. These can be used to provide and access any kind of content and to support any kind of application domain.

### **2.1 Separation of concerns**


Separation of concerns is one of the most fundamental principles in software engineering (Chris, 1989; Dijkstra, 1976; Parnas, 1972). It states that a given problem involves different kinds of concerns, promotes their identification and separation in order to solve them separately without requiring detailed knowledge of the other parts, and finally combines them into one result. It is a general problem-solving strategy which breaks the problem complexity into loosely-coupled subproblems. The solutions to the subproblems can be composed relatively easily to yield a solution to the original problem (Mili et al., 2004). This allows one to cope with complexity and thereby to achieve the required engineering quality factors such as robustness, adaptability, maintainability, and reusability.
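This problem-solving strategy can be illustrated with a deliberately small sketch. All function names and numbers below are invented for the illustration and do not come from any robotics framework: perception, decision and actuation are each solved behind a small interface, without knowledge of the other parts, and then composed into one overall result.

```python
# Minimal sketch of separation of concerns: each subproblem is solved
# behind a small interface, ignorant of the other parts, and the
# loosely-coupled solutions are composed into the overall solution.
# All names and values are illustrative only.

def sense(raw_reading):
    # Perception concern: turn a raw sensor value into a distance in meters.
    return raw_reading / 1000.0  # e.g. millimeters -> meters

def plan(distance_m, safety_margin_m=0.5):
    # Decision concern: choose a speed; knows nothing about sensors.
    if distance_m <= safety_margin_m:
        return 0.0
    return min(1.0, distance_m - safety_margin_m)

def act(speed):
    # Actuation concern: format a motor command; knows nothing about planning.
    return f"set_velocity {speed:.2f} m/s"

def control_step(raw_reading):
    # Composition: the subproblem solutions combine into one control step.
    return act(plan(sense(raw_reading)))

print(control_step(3000))  # obstacle far away -> full speed
print(control_step(400))   # obstacle inside the safety margin -> stop
```

Each part can be developed, tested and replaced separately, which is exactly the engineering quality (maintainability, reusability) the principle aims at.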

Despite common agreement on the necessity of applying the separation of concerns principle, there is no well-established understanding of the notion of a concern. Indeed, a *concern* can be thought of as a unit of modularity (Blogspot, 2008). Progress towards separation of concerns is typically achieved through modularity of programming and encapsulation (or *transparency* of operation), with the help of information hiding. Advanced uses of this principle allow for simultaneous decomposition according to multiple kinds of (overlapping and interacting) concerns (Tarr et al., 2000).

In practice, the principle of separation of concerns should drive the identification of the right decomposition or modularization of a problem. Obviously, there are both (i) generic and domain-independent patterns of how to decompose and modularize certain problems in a suitable way, as well as (ii) patterns driven by domain-specific best practices and use-cases.

In most engineering approaches as well as in robotics, at least the following are dominant dimensions of concerns which should be kept apart (Björkelund et al., 2011; Radestock & Eisenbach, 1996):

Limited resources require decisions: when to assign which resources to what activity taking into account perceived situation, current context and tasks to be fulfilled. Finding adequate solutions for this major challenge of engineering robotic systems is difficult for two reasons: • the *problem space* is huge: as uncertainty of the environment and the number and type of resources available to a robot increase, the definition of the best matching between current situation and correct robot resource exploitation becomes an overwhelming endeavour

Robotic Software Systems: From Code-Driven to Model-Driven Software Development 477

• the *solution space* is huge: in order to enhance overall quality of service like robustness of complex robotic systems in real-world environments, robotic system engineers should master highly heterogeneous technologies, need to integrate them in a consistent and effective way and need to adequately exploit the huge variety of robotic-specific resources. In consequence, it is impossible to statically assign resources in advance in such a way that all potential situations arising at runtime are properly covered. Due to open-ended real-world environments, there will always be a deviation between *design-time optimality* and *runtime optimality* with respect to resource assignments. Therefore, there is a need for dynamic resource assignments at runtime which arises from the enormous sizes of the problem space

For example, a robot designer cannot foresee how crowded an elevator will be. Thus, a robot will need to decide by its own and at runtime whether it is possible and convenient to exploit the elevator resource. The robot has to trade the risk of hitting an elevator's user with the risk of arriving late at the next destination. To match the level of safety committed at design-time, the runtime trade-off has to come up with parameters for speed and safety margins whose risk is within the design-time committed boundaries while still implementing the intent to

The above example illustrates why we have to think of engineering advanced robotic systems differently compared to other complex systems. A complex robotic system cannot be treated as design-time finalizable system. At runtime, system configurations need to be changeable according to current situation and context including prioritized assignments of resources to activities, (de)activations of components as well as changes to the wiring between components. At runtime, the robot has to analyze and to decide for the most appropriate configuration. For example, if the current processor load does not allow to run the navigation component at the highest level of quality, the component should be configured to a lower level of navigation quality. A reasonable option to prepare a component to cope with reduced resource assignments might be to reduce the maximum velocity of the robot in order to still

In consequence, we need to support design-time reasoning (at least by the system engineer) as well as runtime reasoning (by the robot itself) about both, the problem space and the solution space. This can be achieved by raising the level of abstraction at which relevant properties and characteristics of a robotics system are expressed. As for every engineering endeavour, this means to rely on the power of models and asks for an overall different design approach

• The solution space can be managed by providing advanced design tools for robot software development to design reconfigurable and adaptive robotic systems. Different stakeholders involved in the development of a robotic system need the ability to formally

even for the most skilled robot engineer,

and the solution space.

enter the elevator.

as illustrated in figure 1:

**2.2.1 Model-centric robotic systems**

guarantee the same level of navigation safety.


According to (Björkelund et al., 2011), this is in line with results published in (Delamer & Lastra, 2007; Gelernter & Carriero, 1992; Lastra & Delamer, 2006) although variations exist which split *configuration* (into *connection* and *configuration*) or treat *configuration* and *coordination* in the same way (Andrade et al., 2002; Bruyninckx, 2011).

It is important to recognize that there are cross-cutting concerns like *quality of service (QoS)* that have instantiations within the above dimensions of concerns. Facets of *QoS for computation* can manifest with respect to time (best effort computation, hard real-time computation) or anytime algorithms (explicated relationship between assigned computing resources and achieved quality of result). Facets of *QoS for communication* are, for example, response times, latencies and bandwidth.

It is also important to recognize that various concerns need to be addressed at different stages of the lifecycle of a system and by different stakeholders. For example, *configuration* is part of the *design phase* (a component developer provides dedicated configurable parameters, a system integrator binds some of them for deployment) *and* of the *runtime phase* (the task coordination mechanism of a robot modifies parameter settings and changes the connections between entities according to the current situation and task to fulfill).

It is perfectly safe to say that robotics should take advantage from insights and successful approaches for complexity handling readily available in other but similar domains like, for example, automotive and avionics industry or embedded systems in general. Instead, robotics often reinvents the wheel instead of exploiting cross-fertilization between robotics and communities like software engineering and middleware systems. The interesting question is whether there are differences in robotics compared to other domains which hinder roboticists from jumping onto already existing and approved solutions. One should also examine whether or not these solutions are tailorable to robotics needs.

### **2.2 Specifics in robotics**

The difference of robotics compared to other domains like automotive and avionics is neither the huge variety of different sensors and actuators nor the number of different disciplines being involved nor the diversity of hardware-platforms and software-platforms. In many domains, developers need to deal with heterogeneous hardware devices and are obliged to deploy their software on computers which are often constrained in terms of memory and computational power.

We are convinced that differences of robotics compared to other domains originate from the need of a robot to cope with open-ended environments while having only limited resources at its disposal.

Limited resources require decisions: when to assign which resources to what activity taking into account perceived situation, current context and tasks to be fulfilled. Finding adequate solutions for this major challenge of engineering robotic systems is difficult for two reasons:


In consequence, it is impossible to statically assign resources in advance in such a way that all potential situations arising at runtime are properly covered. Due to open-ended real-world environments, there will always be a deviation between *design-time optimality* and *runtime optimality* with respect to resource assignments. Therefore, there is a need for dynamic resource assignments at runtime which arises from the enormous sizes of the problem space and the solution space.

For example, a robot designer cannot foresee how crowded an elevator will be. Thus, a robot will need to decide by its own and at runtime whether it is possible and convenient to exploit the elevator resource. The robot has to trade the risk of hitting an elevator's user with the risk of arriving late at the next destination. To match the level of safety committed at design-time, the runtime trade-off has to come up with parameters for speed and safety margins whose risk is within the design-time committed boundaries while still implementing the intent to enter the elevator.


4 Robotic Systems

**Computation** provides the functionality of an entity and can be implemented in different ways (software and/or hardware). Computation activities require communication to access required data and to provide computed results to other entities.

**Communication** exchanges data between entities, ranging from hardware devices (interfaces for real-world access) over software entities to user interfaces, etc.

**Configuration** comprises the binding of configurable parameters of individual entities. It also comprises the binding of configurable parameters at a system level like, for example, connections between entities.

**Coordination** is about when something is being done. It determines how the activities of all entities in a system should work together. It relates to orchestration and resource management.

According to (Björkelund et al., 2011), this is in line with results published in (Delamer & Lastra, 2007; Gelernter & Carriero, 1992; Lastra & Delamer, 2006), although variations exist which split *configuration* (into *connection* and *configuration*) or treat *configuration* and *coordination* in the same way (Andrade et al., 2002; Bruyninckx, 2011).

It is important to recognize that there are cross-cutting concerns like *quality of service (QoS)* that have instantiations within the above dimensions of concerns. Facets of *QoS for computation* can manifest with respect to time (best-effort computation, hard real-time computation) or anytime algorithms (an explicated relationship between assigned computing resources and achieved quality of result). Facets of *QoS for communication* are, for example, response times, latencies and bandwidth.

It is also important to recognize that various concerns need to be addressed at different stages of the lifecycle of a system and by different stakeholders. For example, *configuration* is part of the *design phase* (a component developer provides dedicated configurable parameters, a system integrator binds some of them for deployment) *and* of the *runtime phase* (the task coordination mechanism of a robot modifies parameter settings and changes the connections between entities according to the current situation and the task to fulfill).
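As a minimal illustration of keeping the four concerns separate in code, the sketch below wires a component together from four distinct concern objects; all class and method names are invented and not taken from any robotics framework:

```python
# Sketch of the four separated concerns; names and logic are illustrative only.

class Computation:
    """What is computed: pure functionality, no I/O or scheduling."""
    def process(self, scan):
        return min(scan)  # e.g. nearest obstacle distance in a laser scan


class Communication:
    """How data is exchanged: here a trivial in-process message list."""
    def __init__(self):
        self.outbox = []
    def publish(self, msg):
        self.outbox.append(msg)


class Configuration:
    """Binding of configurable parameters, possibly changed at runtime."""
    def __init__(self, **params):
        self.params = dict(params)
    def set(self, key, value):
        self.params[key] = value


class Coordination:
    """When something is done: (de)activate the activity."""
    def __init__(self):
        self.active = True


class ObstacleMonitor:
    """A component composed from the four concern objects."""
    def __init__(self, comp, comm, conf, coord):
        self.comp, self.comm, self.conf, self.coord = comp, comm, conf, coord

    def step(self, scan):
        if not self.coord.active:          # coordination decides *when*
            return
        distance = self.comp.process(scan)  # computation decides *what*
        if distance < self.conf.params["warn_distance"]:  # configuration
            self.comm.publish(("obstacle_warning", distance))  # communication


comm = Communication()
monitor = ObstacleMonitor(Computation(), comm,
                          Configuration(warn_distance=0.5), Coordination())
monitor.step([2.0, 0.3, 1.5])
```

Because each concern lives in its own object, the integrator can, for instance, swap the trivial in-process `Communication` for a middleware-backed one without touching the computation.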

It is perfectly safe to say that robotics should take advantage of insights and successful approaches for complexity handling readily available in other, similar domains such as the automotive and avionics industries or embedded systems in general. Instead, robotics often reinvents the wheel rather than exploiting cross-fertilization between robotics and communities like software engineering and middleware systems. The interesting question is whether there are differences in robotics compared to other domains which hinder roboticists from jumping onto already existing and approved solutions. One should also examine whether or not these solutions are tailorable to robotics needs.




The above example illustrates why we have to think about engineering advanced robotic systems differently than about other complex systems. A complex robotic system cannot be treated as a system that can be finalized at design-time. At runtime, system configurations need to be changeable according to the current situation and context, including prioritized assignments of resources to activities, (de)activations of components as well as changes to the wiring between components. At runtime, the robot has to analyze and decide on the most appropriate configuration. For example, if the current processor load does not allow running the navigation component at the highest level of quality, the component should be configured to a lower level of navigation quality. A reasonable option to prepare a component to cope with reduced resource assignments might be to reduce the maximum velocity of the robot in order to still guarantee the same level of navigation safety.
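The navigation example can be sketched as a simple runtime configuration rule; the quality levels, CPU shares and velocities below are hypothetical:

```python
# Illustrative runtime reconfiguration: if the best navigation quality does not
# fit the available CPU share, fall back to a lower quality level and reduce
# the maximum velocity to keep the same level of navigation safety.
# All levels and numbers are invented.

QUALITY_LEVELS = [
    # (name, required CPU share, max velocity in m/s)
    ("high",   0.60, 1.00),
    ("medium", 0.35, 0.60),
    ("low",    0.15, 0.30),
]


def configure_navigation(available_cpu: float):
    """Return the best quality level (and its safe max velocity) that fits."""
    for name, required_cpu, max_velocity in QUALITY_LEVELS:
        if required_cpu <= available_cpu:
            return name, max_velocity
    raise RuntimeError("no navigation configuration fits the available resources")


# under load, the robot trades navigation quality for safety at lower speed
level, max_velocity = configure_navigation(available_cpu=0.40)
```

The decision is taken anew whenever the resource situation changes, rather than being frozen at design time.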

In consequence, we need to support design-time reasoning (at least by the system engineer) as well as runtime reasoning (by the robot itself) about both the problem space and the solution space. This can be achieved by raising the level of abstraction at which relevant properties and characteristics of a robotic system are expressed. As for every engineering endeavour, this means relying on the power of models, and it asks for an overall different design approach as illustrated in figure 1:

• The solution space can be managed by providing advanced design tools for robot software development to design reconfigurable and adaptive robotic systems. Different stakeholders involved in the development of a robotic system need the ability to formally model and relate different views relevant to robotic system design. A major issue is the support of separation of concerns taking into account the specific needs of robotics.


Robotic Software Systems: From Code-Driven to Model-Driven Software Development 479


### **2.2.2 User roles and requirements**

Another strong influence on robotic software systems besides technical challenges comes from the involved individuals and their needs. We can distinguish several user roles that all put a different focus on complexity management, on separation of concerns and on software engineering in robotics:

**End users** operate applications based on the provided user interface. They focus on the functionality of readily provided systems. They do not care how the application has been built and mainly expect reliable operation, easy usage and reasonable value for money.

**Application builders / system integrators** assemble applications out of approved, standardized and reusable off-the-shelf components. Any non-trivial robotic application requires the orchestration of several components such as computer vision, sensor fusion, human-machine interaction, object recognition, manipulation, localization and mapping, control of multiple hardware devices, etc. Once these parts work together, we call it a *system*; this part of the development process is therefore called *system integration*. Components can be provided by different vendors. Application builders and system integrators consider components as black boxes and depend on precise specifications and explications of all relevant properties for smooth composition, resource assignments and mappings to target platforms. Components are customized during system-level composition by adjusting parameters or filling in application-dependent parts at so-called *hot spots* via plug-in interfaces. Application builders expect support for system-level engineering.

**Component builders** focus on the specification and implementation of a single component. They want to focus on algorithms and component functionality without being restricted too much with respect to component internals. They do not want to bother with integration issues and expect a framework to support their implementation efforts such that the resulting component conforms to a system-level black-box view.

**Framework builders / tool providers** prepare and provide tools that allow the different users to focus on their role. They implement the software frameworks and the domain-specific add-ons on top of state-of-the-art and standard software systems (like middleware systems), use latest software technology and make these available to the benefit of robotics.
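A black-box component as seen by the system integrator, with explicated parameters and a plug-in hot spot, might look like the following sketch; the component name, parameter and hook are invented for illustration:

```python
# Sketch of a black-box component: the integrator sees only the explicated
# parameters and a plug-in "hot spot", never the internals. Names are invented.

class PersonDetector:
    """Black-box component, customized only via parameters and a hot spot."""

    # explicated, integrator-visible parameters with their defaults
    PARAMETERS = {"min_confidence": 0.8}

    def __init__(self, **params):
        unknown = set(params) - set(self.PARAMETERS)
        if unknown:  # precise specification: only declared parameters are bindable
            raise ValueError(f"unknown parameters: {unknown}")
        self._params = {**self.PARAMETERS, **params}
        self._on_detection = lambda person: None  # hot spot, empty by default

    def set_detection_hook(self, callback):
        """Hot spot: application-dependent reaction plugged in at integration time."""
        self._on_detection = callback

    def process(self, candidates):
        # internal logic, hidden from the integrator
        for name, confidence in candidates:
            if confidence >= self._params["min_confidence"]:
                self._on_detection(name)


# integration time: bind a parameter and fill the hot spot, no internals touched
seen = []
detector = PersonDetector(min_confidence=0.9)
detector.set_detection_hook(seen.append)
detector.process([("alice", 0.95), ("bob", 0.85)])
```

The integrator customizes behavior entirely through the declared surface, which is what makes the component exchangeable across systems.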


Fig. 1. Novel workflow bridging design-time and runtime model-usage: at design-time variation points are purposefully left open and allow for runtime decisions (Schlegel et al., 2010).


• The problem space can be mastered by giving the robot the ability to reconfigure its internal structure and to adapt the way its resources are exploited according to its understanding of the current situation.

Fig. 2. Separation of concerns and design abstraction: models created at design-time are used and manipulated at runtime by the robot (Steck & Schlegel, 2011).

### **2.2.1 Model-centric robotic systems**

We coin the term *model-centric robotic systems* (Steck & Schlegel, 2011) for the new approach of using models to cover and support the whole life-cycle of robotic systems. Such a model-centric view puts models into focus and bridges design-time and runtime model-usage.

During the whole lifecycle, models are refined and enriched step-by-step until finally they become executable. Models comprise variation points which support alternative solutions. Some variation points are purposefully left open at design time and even can be bound earliest at runtime after a specific context and situation dependent information is available. In consequence, models need to be interpretable not only by a human designer but also by a computer program. At design-time, software tools should understand the models and support designers in their transformations. At runtime, adaptation algorithms should exploit the models to automatically reconfigure the control system according to the operational context (see figure 2).
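A variation point bound at design time versus one purposefully left open for runtime can be represented as in the following sketch; the model format and names are invented for illustration:

```python
# Illustrative model with variation points: some are bound at design time,
# others are left open on purpose and bound only at runtime, when situation-
# dependent information becomes available. The schema is invented.

model = {
    "component": "docking",
    "variation_points": {
        # bound by the system integrator at design time
        "approach_side": {"alternatives": ["left", "right"], "binding": "left"},
        # purposefully left open for a runtime decision
        "docking_speed": {"alternatives": [0.1, 0.2, 0.4], "binding": None},
    },
}


def bind_at_runtime(model, point, choose):
    """Bind an open variation point with a situation-dependent decision function."""
    vp = model["variation_points"][point]
    if vp["binding"] is not None:
        return vp["binding"]  # already bound at design time, nothing to decide
    vp["binding"] = choose(vp["alternatives"])
    return vp["binding"]


# at runtime: sensed clutter suggests choosing the slowest docking speed
speed = bind_at_runtime(model, "docking_speed", choose=min)
```

Because the model is plain data, it is interpretable both by design tools and by the robot's runtime adaptation mechanism, as required above.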

The need to explicitly support the design for runtime adaptability adds robotic-specific requirements on software structures and software engineering processes, gives guidance on how to separate concerns in robotics and allows to understand where the robotics domain needs extended solutions compared to other and at first glance similar domains.








**The robotics community** provides domain-specific concepts, best practices and design patterns of robotics. These are independent of any software technology and implementation technology. They form the *body of knowledge of robotics* and provide the domain-specific ground for the above roles.

### **2.2.3 Separation of roles from an industrial perspective**

The essence of the work of the *component builder* is to design reusable components which can seamlessly be integrated into multiple systems and different hardware platforms. A component is considered a black box. The developer can achieve this abstraction only if he is strictly limited in his knowledge and assumptions about what happens outside his component and what happens inside other components.

On the other hand, the methodology and the purpose of the *system integrator* is opposite: he knows exactly the application of the software system, the platform where it will be deployed and its constraints. For this reason, he is able to take the right decision about the kind of components to be used, how to connect them together and how to configure their parameters and the quality of service of each of them to orchestrate their behavior. The work of the system integrator is rarely reusable by others, because it is intrinsically related to a specific hardware platform and a well-defined and sometimes unique use-case. We don't want the system integrator to modify a component or to understand the internal structure and implementation of the components he assembles.


This distinction between the development of single components and system integration is important (figure 3). So far, reuse in robotics software is mainly possible at the level of libraries and/or complete frameworks, which requires system integrators to be component developers and vice versa. A formal separation between *component building* and *system integration* introduces another, intermediate level of abstraction for reuse which will make it possible to




• create commercial off-the-shelf (COTS) robotic software: when components become independent of any specific robot application, it becomes possible to integrate them quickly into different robotic systems. This abstraction allows the component developer to sell its robotic software component to a system integrator;

• overcome the need for the system integrator to also be an expert in robotic algorithms and software development. We want companies devoted to system integration (often SMEs) to take care of the business-to-client part of the value chain, but this will be possible only when their work becomes less challenging;

• establish dedicated system integrators (specific to industrial branches and application domains) apart from experts for robotic components (like navigation, localization, object recognition, speech interaction, etc.);

• provide plug-and-play robotic hardware: so far, the effort of integrating the hardware into the platform has been undertaken by the system integrator. If manufacturers start providing ready-to-use drivers which work seamlessly in a component-driven environment, robotic applications can be deployed faster and become cheaper.

This separation of roles will eventually have a positive impact on robotics: it will potentially allow the creation of a robotics industry, that is, an ecosystem of small, medium and large enterprises which can profitably and symbiotically coexist to provide business-to-business and business-to-client products such as hardware parts, software applications and complete robots with a specific application, and deliver a valuable product to the customer.


Fig. 3. Building robotic systems out of readily-available and reusable software components: separation of the roles of component development and system integration.


To understand better what a *robotics industry* means, we draw an analogy to the personal computer industry. Apart from very few exceptions, we can identify several companies involved in the manufacturing of a single and very specific part of the final product: single hardware components (memories, hard drives, CPU, mother boards, screens, power supplies, graphic cards, etc.), operating systems (Windows, commercial Linux distributions), software applications (CAD, word processing, video games, etc.) and system integrators which provide ready-to-use platforms to the end user.

### **2.3 Service-oriented software components to master system complexity**

Software engineering provides three major approaches that help to address the above challenges, that is *component-based software engineering (CBSE)*, *service-oriented architectures (SOA)* and *model-driven software development (MDSD)*.

*CBSE* separates the component development process from the system development process and aims at component reusability. *MDSD* separates domain knowledge (formally specified by domain experts) from how it is being implemented (defined by software experts using model transformations). *SOA* is about the right level of granularity for offering functionality and strictly separates service providers and consumers.

### **2.3.1 Component-based software engineering**

*CBSE* (Heineman & Councill, 2001) is an approach that has arisen in the software engineering community in the last decade. It shifts the emphasis in system-building from traditional requirements analysis, system design and implementation to composing software systems from a mixture of reusable off-the-shelf and custom-built components. A compact and widely accepted definition of a software component is the following one:

"A **software component** is a unit of composition with contractually specified interfaces and explicit context dependencies only. A software component can be developed independently and is subject to composition by third parties." (Szyperski, 2002).

*Software components* explicitly consider reusable pieces of software including notions of independence and late composition. *CBSE* promises the benefits of increased reuse, reduced production cost, and shorter time to market. In order to realize these benefits, it is vital to have components that are easy to reuse and composition mechanisms that can be applied systematically.



In *MDSD*, models are used for many purposes, including reasoning about problem and solution domains and documenting the stages of the software lifecycle; the result is improved software quality, improved time-to-value and reduced costs (IBM, 2006).


The standard workflow of a model-driven software development process is illustrated in figure 5. This workflow is supported by tools like the *Eclipse Modeling Project* (Eclipse Modeling Project, 2010) which provide means to express model-to-model and model-to-code transformations. They import standardized textual *XMI* representations of the models and can parse them according to the used meta-model. Thus, one can easily introduce domain-specific concepts to forward information from the model level to the model transformations and code generators. Tools like *Papyrus* (PAPYRUS UML, 2011) allow for a graphical representation of the various models and can export them into the *XMI* format. Overall, there is a complete toolchain for graphical modelling and transformation steps available that can be tailored to the domain-specific needs of robotics.
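The model-to-code step can be illustrated with a deliberately tiny template-based generator; the model schema and template below are invented, whereas real toolchains use the Eclipse Modeling Project facilities mentioned above:

```python
# Toy model-to-code transformation: a platform-independent component model is
# turned into (Python) source code by a template-based generator.
# The model schema and template are invented for illustration.

component_model = {
    "name": "LaserFilter",
    "ports": {"provides": ["filtered_scan"], "requires": ["raw_scan"]},
}

TEMPLATE = '''class {name}:
    PROVIDES = {provides!r}
    REQUIRES = {requires!r}
'''


def generate_code(model: dict) -> str:
    """Model-to-code step: fill the template from the model."""
    return TEMPLATE.format(name=model["name"],
                           provides=model["ports"]["provides"],
                           requires=model["ports"]["requires"])


source = generate_code(component_model)

namespace = {}
exec(source, namespace)              # the refined model has become executable
LaserFilter = namespace["LaserFilter"]
```

The generator, not the component builder, owns the repetitive structural code, which is the efficiency argument made above.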

*MDSD* is much more than code generation for different platforms to address the technology-change problem and to make development more efficient by automatically generating repetitive code. The benefits of *MDSD* are manifold (Stahl & Völter, 2006; Völter, 2006): (i) models are free of implementation artefacts and directly represent reusable domain knowledge including best practices, (ii) domain experts can play a direct role and are not requested to translate their knowledge into software representations, (iii) design patterns, sophisticated and optimized software structures and approved software solutions can be made available to domain experts and enforced by embedding them in templates for highly optimized code generators, such that even novices can immediately take advantage of immense coded experience, (iv) parameters and properties of components required for system-level composition and the adaptation to different target systems are explicated and can be modified within a model-based toolchain.

Fig. 4. Design abstraction of model-driven software development.


Fig. 5. Model-driven software development at a glance.


Composition can take place during different stages of the lifecycle of components, that is, during the design phase (design and implementation), the deployment phase (system integration) and even the runtime phase (dynamic wiring of data flow according to situation and context). *CBSE* is based on the explication of all relevant information of a component to make it usable by other software elements whose authors are not known. The key properties of *encapsulation* and *composability* result in the following seven criteria that make a good component: "(i) may be used by other software elements (clients), (ii) may be used by clients without the intervention of the component's developers, (iii) includes a specification of all dependencies (hardware and software platform, versions, other components), (iv) includes a precise specification of the functionalities it offers, (v) is usable on the sole basis of that specification, (vi) is composable with other components, (vii) can be integrated into a system quickly and smoothly" (Meyer, 2000).
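Late composition by a third party, one of the properties above, can be sketched with two components that are wired only via their ports; all class and port names are illustrative:

```python
# Sketch of late composition: components expose provided and required ports
# and are wired by a third party (the integrator) without touching their
# internals. Names are invented for illustration.

class Camera:
    """Provides images through an output port."""
    def __init__(self):
        self.out_image = None  # provided port, wired later by a third party

    def capture(self):
        if self.out_image is not None:
            self.out_image("frame-1")  # push a frame to whoever is connected


class Recorder:
    """Requires images through an input port."""
    def __init__(self):
        self.frames = []

    def on_image(self, frame):
        self.frames.append(frame)


# third-party composition: neither component was written knowing the other
camera, recorder = Camera(), Recorder()
camera.out_image = recorder.on_image
camera.capture()
```

Neither class references the other, so each can be developed, versioned and sold independently and composed only at integration time.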

### **2.3.2 Service-oriented architectures**

Another generally accepted view of a software component is that it is a software unit with *provided services* and *required services*. In component models, where components are architectural units, *services* are represented as *ports* (Lau & Wang, 2007). This view puts the focus on the question of a proper level of abstraction of offered functionalities. Services "combine information and behavior, hide the internal workings from outside intrusion and present a relatively simple interface to the rest of the program" (Sprott & Wilkes, 2004). The (CBDI Forum, 2011) recommends to define *service-oriented architectures (SOA)* as follows:

**SOA** are "the policies, practices, frameworks that enable application functionality to be provided and consumed as sets of services published at a granularity relevant to the service consumer. Services can be invoked, published and discovered, and are abstracted away from the implementation using a single, standards-based form of interface" (Sprott & Wilkes, 2004).

*Service* is the key to communication between providers and consumers, and the key properties of good service design are summarized in table 1. *SOA* is all about style (policy, practice, frameworks), which makes process matters an essential consideration. A *SOA* has to ensure that services do not get reduced to the status of interfaces; rather, they have an identity of their own. With *SOA*, it is critical to implement processes that ensure that there are at least two different and separate processes: one for providers and one for consumers (Sprott & Wilkes, 2004).
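The separation of provider and consumer can be sketched with a minimal service registry; the registry API and the service name below are invented for illustration:

```python
# Sketch of provider/consumer separation in a SOA: services are published in a
# registry under a consumer-relevant name and discovered by consumers that know
# only the service description, never the implementation. The API is invented.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name: str, handler):
        """Provider side: publish a service at a consumer-relevant granularity."""
        self._services[name] = handler

    def discover(self, name: str):
        """Consumer side: obtain the service by name, abstracted from its implementation."""
        return self._services[name]


registry = ServiceRegistry()

# provider process: publishes the service, keeps the implementation private
registry.publish("localization.get_pose", lambda: (1.0, 2.0, 0.5))

# consumer process: uses only the published name and interface
get_pose = registry.discover("localization.get_pose")
pose = get_pose()
```

Provider and consumer interact only through the published name and contract, so the provider can be replaced without the consumer noticing.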


| Principle | Description |
|---|---|
| reusable | use of a service, not reuse by copying of code/implementation |
| abstracted | the service is abstracted from the implementation |
| published | precise, published specification of the functionality of the service interface, not of the implementation |
| formal | a formal contract between endpoints places obligations on provider and consumer |
| relevant | functionality is presented at a granularity recognized by the user as a meaningful service |

Table 1. Principles of good service design enabled by characteristics of *SOA* as formulated in (Sprott & Wilkes, 2004).

### **2.3.3 Model-driven software development**

*MDSD* is a technology that introduces significant efficiencies and rigor to the theory and practice of software development. It provides a design abstraction as illustrated in figure 4. Abstractions are provided by models (Beydeda et al., 2005). Abstraction is a core principle of software engineering.

"A **model** is a simplified representation of a system intended to enhance our ability to understand, predict and possibly control the behavior of the system" (Neelamkavil, 1987).

Fig. 4. Design abstraction of model-driven software development.

In *MDSD*, models are used for many purposes, including reasoning about problem and solution domains and documenting the stages of the software lifecycle; the result is improved software quality, improved time-to-value and reduced costs (IBM, 2006).

Fig. 5. Model-driven software development at a glance.

The standard workflow of a model-driven software development process is illustrated in figure 5. This workflow is supported by tools like the *Eclipse Modeling Project* (Eclipse Modeling Project, 2010) which provide means to express model-to-model and model-to-code transformations. They import standardized textual *XMI* representations of the models and can parse them according to the used meta-model. Thus, one can easily introduce domain-specific concepts to forward information from a model-level to the model transformations and code generators. Tools like *Papyrus* (PAPYRUS UML, 2011) allow for a graphical representation of the various models and can export them into the *XMI* format. Overall, there is a complete toolchain for graphical modelling and transformation steps available that can be tailored to the domain specific needs of robotics.

*MDSD* is much more than code generation for different platforms to address the technology change problem and to make development more efficient by automatically generating repetitive code. The benefits of *MDSD* are manifold (Stahl & Völter, 2006; Völter, 2006): (i) models are free of implementation artefacts and directly represent reusable domain knowledge including best practices, (ii) domain experts can play a direct role and are not requested to translate their knowledge into software representations, (iii) design patterns, sophisticated & optimized software structures and approved software solutions can be made available to domain experts and enforced by embedding them in templates for use by highly optimized code generators such that even novices can immediately take advantage from a coded immense experience, (iv) parameters and properties of components required for system level composition and the adaptation to different target systems are explicated and can be modified within a model-based toolchain.
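To make the model-to-code step concrete, the following toy sketch turns a component "model" (a plain dict standing in for a parsed XMI document) into a code skeleton via templates. It illustrates the principle only; the model layout, the templates and names such as `SmartLaserServer` are invented here and do not correspond to an actual Eclipse or SMARTSOFT generator.

```python
# Minimal sketch of a model-to-code transformation: a component "model"
# (a dict standing in for a parsed XMI document) is expanded into a code
# skeleton by a template-driven generator.
COMPONENT_TEMPLATE = """class {name}:
{ports}
"""
PORT_TEMPLATE = "    # {kind} port '{service}' (pattern: {pattern})"

def generate(model):
    """Walk the model and emit a skeleton for one component."""
    ports = "\n".join(
        PORT_TEMPLATE.format(**port) for port in model["ports"]
    ) or "    pass"
    return COMPONENT_TEMPLATE.format(name=model["name"], ports=ports)

# Hypothetical model of a laser-ranger component with two provided services.
laser_model = {
    "name": "SmartLaserServer",
    "ports": [
        {"kind": "provided", "service": "scan", "pattern": "push newest"},
        {"kind": "provided", "service": "state", "pattern": "state"},
    ],
}
skeleton = generate(laser_model)
```

A real generator would of course target the component framework's C++ classes rather than comments, but the division of labor is the same: domain knowledge lives in the model, coding experience lives in the templates.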

Robotic Software Systems: From Code-Driven to Model-Driven Software Development 485

| Pattern | Description | Service | Description |
|---------|-------------|---------|-------------|
| query | two-way request | state | activate/deactivate component services |
| send | one-way communication | param | component configuration |
| push newest | 1-to-n distribution | wiring | dynamic component wiring |
| push timed | 1-to-n distribution | diagnose | introspection of components |
| event | asynchronous notification | | *(services are internally based on the communication patterns)* |

Table 2. The set of patterns and services of SMARTMARS.

### **2.4 Stable structures and freedom from choice**

In robotics, we believe that the cornerstone is a *component model* based on *service-orientation* for its provided and required interactions, represented in an abstract way in the form of *models*.

A *robotics component model* needs to provide *component level* as well as *system level* concepts, structures and building blocks to support separation of concerns while at the same time ensuring composability based on a composition theory: (i) building blocks out of which one composes a component, (ii) patterns of how to design a well-formed component to achieve system level conformance, (iii) guidance towards providing a suitable granularity of services, (iv) specification of the behavior of interactions and (v) best practices and solutions of domain-specific problems. *MDSD* can then provide toolchains and thereby support separation of concerns and separation of roles.

The above approach asks for the identification of *stable structures* versus *variation points* (Webber & Gomaa, 2004). A robotics component model has to provide guidance via stable structures where these are required to support separation of concerns and to ensure system level conformance. At the same time, it has to allow for freedom wherever possible. The distinction between stable structures and variation points is of relevance at all levels (operating system interfaces, library interfaces, component internal structures, provided and required services etc.). In fact, identified and enforced stable structures come along with restrictions. However, one has to notice that well thought out limitations are not a universal negative and *freedom from choice* (Lee & Seshia, 2011) gives guidance and assurance of properties beyond one's responsibilities in order to ensure separation of concerns.

As detailed in (Schlegel et al., 2011), stable structures with respect to a service-oriented component-based approach can be identified. These are illustrated in figure 6.

Fig. 6. Stable structures and different roles in a component-based software approach.

At the *system level (S)*, *provided* and *required* service ports ① of a component form a stable interface for the *application builder*. In an ideal situation, all relevant properties of a component are made explicit to support a black box view. Hence, system level properties like resource conformance of the component mapping to the computing platform can be checked during system composition and deployment.

At the *component level (C)*, the *component builder* wants to rely on a stable interface to the component framework ②. In an ideal situation, the component framework can be considered as black box hiding all operating system and middleware aspects from the user code. The component framework adds the execution container to the user code such that the resulting component is conformant to a black box component view.

At the *framework level (F)*, two stable interfaces exist: (i) between the framework and the user code of the component builder ② and (ii) between the framework and the underlying middleware & operating system ③. The stable interface ② ensures that no middleware and operating system specifics are unnecessarily passed on to the component builder. The stable interface ③ ensures that the framework can be mapped onto different implementational technologies (middleware, operating systems) without reimplementing the framework in its entirety. The *framework builder* maintains the framework which links the stable interfaces ② and ③ and maps the framework onto different implementational technologies via the interface ③.

## **3. The** SMARTSOFT**-approach**


The basic idea behind SMARTSOFT (Schlegel, 2011) is to master the component hull and thereby achieve *separation of concerns* as well as *separation of roles*. Figure 7 illustrates the SMARTSOFT component model and how its component hull links the stable interfaces ①, ② and ③.

Fig. 7. The structure of a SMARTSOFT component and its stable interfaces.



The link between ① and ② is realized by *communication patterns*. Binding a communication pattern with the type of data to be transmitted results in an externally visible service represented as a port. The small set of generic and predefined communication patterns listed in the left part of table 2 are the only means of defining externally visible services. Thus, the behavior and usage of a service is immediately evident as soon as one knows its underlying communication pattern.


Fig. 8. The views of a component builder and a system integrator on services by the example of a grid map service based on a query communication pattern.

Figure 8 illustrates this concept by means of the *query* communication pattern, which consists of a *query client* and a *query server*. The query pattern expects two *communication objects* to define a service: a request object and an answer object. Communication objects are transmitted *by-value* to ensure decoupling of the lifecycles of the client side and the server side of a service. They are arbitrary objects enriched by a unique identifier and get/set-methods. Hidden from the user and inside the communication patterns, the content of a communication object provided via *E* is extracted and forwarded to the middleware interface *H*. Incoming content at *H* is put into a new instance of the corresponding communication object before access to it is provided via *E*.

In the example, the system integrator sees a provided port based on a *query server* with the communication objects *GridMapRequest* and *GridMap*. The map service might be provided by a map building component. Each component with a port consisting of a *query client* with the same communication objects can use that service. For example, a path planning component might need a grid map and expose a required port for that service. The *GridMapRequest* object provides the parameters of the individual request (for example, the size of the requested map patch, its origin and resolution) and the *GridMap* returns the answer. The answer is self-contained, comprising all the parameters describing the provided map. That allows the map to be interpreted independently of the current settings of the service providing component and gives the service provider the chance to return a map as close as possible to, but different from, the requested parameters in case it cannot handle them exactly.

A component builder uses the stable interface *E*. For the client side of a service based on the query pattern, it always consists of the same synchronous and asynchronous access modes, independent of the communication objects used and the underlying middleware. They can be used from any number of threads in any order. The server side in this example always consists of an asynchronous handler upcall for incoming requests and a separate answer method. This separation is important since it does not require the upcall to wait until the answer is available before returning. We can thus implement any kind of processing model inside a component, even a processing pipeline where the last thread calls the answer method, without blocking the upcall, wasting system resources, or being tied to the threading model behind the upcall.

In the example, the upcall at the service provider either directly processes the incoming *GridMapRequest* object or forwards it to a separate processing thread. The requested map patch is put into a *GridMap* object which then is provided as answer via the *answer* method.

It can be seen that the client side is not just a proxy for the server side. Both sides of a communication pattern are completely standalone entities providing the stable interfaces *A* and *E* while completely hiding all the specifics of *H* and *I* (see figure 7). One can neither expose arbitrary member functions through the component hull nor dilute the semantics and behavior of ports. The different communication patterns and their internals are explained in detail in (Schlegel, 2007).
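The client-side access modes and the server-side handler/answer split shown in figure 8 can be sketched as follows. This is an illustrative Python sketch, not the actual SMARTSOFT API (which is C++); it ignores by-value marshalling and the middleware layer, and the grid map handler body is a made-up example.

```python
import threading

class QueryServer:
    """Server side of a query pattern: async handler upcall + separate answer method."""
    def __init__(self, handler):
        self._handler = handler          # upcall: handler(server, query_id, request)
        self._pending = {}               # query_id -> {event, answer}
        self._lock = threading.Lock()
        self._next_id = 0

    def _submit(self, request):
        with self._lock:
            self._next_id += 1
            qid = self._next_id
            self._pending[qid] = {"event": threading.Event(), "answer": None}
        # The upcall may answer immediately or hand off to a processing pipeline.
        threading.Thread(target=self._handler, args=(self, qid, request)).start()
        return qid

    def answer(self, qid, answer_obj):
        """May be called from any thread, e.g. the last stage of a pipeline."""
        with self._lock:
            slot = self._pending[qid]
        slot["answer"] = answer_obj
        slot["event"].set()

class QueryClient:
    """Client side: synchronous and asynchronous access modes."""
    def __init__(self, server):
        self._server = server

    def query_request(self, request):    # async: returns an id immediately
        return self._server._submit(request)

    def query_receive_wait(self, qid):   # blocks until the answer has arrived
        slot = self._server._pending[qid]
        slot["event"].wait()
        return slot["answer"]

    def query(self, request):            # sync: request and wait in one call
        return self.query_receive_wait(self.query_request(request))

# Hypothetical grid map service: the request names a patch, the answer is
# self-contained (origin, size, resolution travel with the cells).
def handle_grid_map_query(server, qid, request):
    patch = {"origin": request["origin"], "size": request["size"],
             "resolution": request["resolution"],
             "cells": [0] * request["size"], "is_valid": True}
    server.answer(qid, patch)            # here answered directly from the upcall

client = QueryClient(QueryServer(handle_grid_map_query))
grid_map = client.query({"origin": (0, 0), "size": 4, "resolution": 0.1})
```

Note how the separation of `query_request` and `query_receive_wait` mirrors the asynchronous access mode, while `query` composes them into the synchronous one.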

Besides the services defined by the component builder (*A*), several predefined services exist to support system level concerns (Lotz et al., 2011). Each component needs to provide a *state service* to support system level orchestration (outside view *B*: activation, deactivation, reconfiguration; inside view *F*: manage transitions between service activations, support housekeeping activities by entry/exit actions). An optional *diagnostic service* (*C*, *G*) supports runtime monitoring of the component. The optional *param service* manages parameters as name/value-pairs and allows changing them at runtime. The optional *wiring service* allows wiring the required services of a component at runtime from outside the component. This is needed for task and context dependent composition of behaviors.
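A minimal sketch of the state service idea: the outside view switches states, while the inside view runs entry/exit housekeeping on every transition. The class shape and callbacks are assumptions for illustration, not the SMARTSOFT interface; the state names are those of the map building component discussed later in this chapter.

```python
class StateServer:
    """Sketch of a state service: outside view sets states (orchestration),
    inside view performs entry/exit housekeeping on each transition."""
    def __init__(self, states, on_exit=None, on_enter=None):
        self._states = set(states)        # "neutral" is assumed to be declared
        self.current = "neutral"
        self._on_exit = on_exit or (lambda s: None)
        self._on_enter = on_enter or (lambda s: None)
        self.log = []                     # transition trace, for illustration

    def set_state(self, new_state):       # outside view (B): (re)configuration
        if new_state not in self._states:
            return False                  # reject undeclared configurations
        self._on_exit(self.current)       # inside view (F): housekeeping
        self.log.append(("exit", self.current))
        self.current = new_state
        self._on_enter(new_state)
        self.log.append(("enter", new_state))
        return True

# Map building component: four mutually exclusive operating states.
mapper_state = StateServer({"neutral", "buildCurr", "buildLtm", "buildBoth"})
ok = mapper_state.set_state("buildBoth")
bad = mapper_state.set_state("flying")    # not a declared state, refused
```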

### **3.1 The** SMARTMARS **meta-model**


All the stable interfaces, concepts and structures as well as knowledge about which ingredients and structures form a well-formed SMARTSOFT component and a well-formed system of SMARTSOFT components are explicated in the SMARTMARS meta-model (figure 9). The meta-model is abstract, universally valid and independent from implementation technologies (e.g. UML profile (Fuentes-Fernández & Vallecillo-Moreno, 2004), eCore (Gronback, 2009)). It provides the input for tool support for the different roles (like component developer, system integrator etc.), explicates separation of concerns and can be mapped onto different software technologies (e.g. different types of middleware like CORBA, ACE (Schmidt, 2011) and different types of operating systems).

Fig. 9. Excerpt of the SMARTMARS meta-model.

### **3.2 Policies and strategies behind** SMARTSOFT **services**

A major part of the SMARTSOFT approach is the set of policies and strategies that manifest themselves in the structure of the component model, explain its building blocks and guide their usage.

Separating the roles of a component developer and a system integrator requires controlling the interface between the inner part of a component and its outer part, that is, controlling this boundary. As soon as one gains control over the component hull, one can make sure that all relevant properties and parameters needed for the black box view of the system integrator are explicated at the component hull. One can also make sure that a component developer has no chance to expose component internals to the outside. SMARTSOFT achieves this via predefined communication patterns as the only building blocks for defining externally visible services and via further guidelines on how to build good services.

### **3.3 A robotics example illustrating the** SMARTSOFT **concepts**

Figure 10 illustrates how the SMARTSOFT component model and its meta-elements provided by SMARTMARS structure and partition a typical robotics use-case, namely the navigation of a mobile platform. Besides access to sensor data and to the mobile base, the algorithmic building blocks of a navigation system are map building, path planning, motion execution and self localization. Since these building blocks are generic for navigation systems independently of the used algorithms, it makes sense to come up with an according component structure and services (or to expect readily available components and services).

Fig. 10. Structure of a navigation task based on the SMARTSOFT component model.

The SmartLaserLMS200Server component provides the latest laser scan via a push newest port. Thus, all subscribed clients always get an update as soon as a new laser scan is available. It is subscribed to the pose service of the robot base to label laser scans with pose stamps. The component comprises a SmartTask to handle the internal communication with the laser hardware. This way, the aliveness of the overall component and its services is not affected by flaws on the laser hardware interface. Parameters like position-offset and serialport are used to customize the component to the target robotic system. These parameters have to be set by the application builder during the deployment step.

The SmartMapperGridMap component requires a laser scan to build the longterm and the current map. The current map is provided by a push newest server port (as soon as a new map is available, it is provided to subscribed clients, which makes sense since path planning depends on the latest maps) and the longterm map by a query server port (since it is not needed regularly, it makes sense to provide it only on a per-request basis). The state port is used to set the component into different states depending on which services are needed in the current situation: build no map at all (neutral), build the current map only (buildCurr), build the longterm map only (buildLtm) or build both maps (buildBoth). The push newest server publishes the current map only in the states buildCurr and buildBoth. Requests for a longterm map are answered as long as the component and its services are alive, but with an invalid map in case it is in the states neutral or buildCurr (valid flag of the answer object set to false).

Accordingly, the SmartPlannerBreadthFirstSearch component provides its intermediate waypoints by a push newest server (updating the motion execution component as soon as new information is available). The motion execution component regularly commands new velocities to the robot base (SmartPioneerBaseServer) via a send service. The motion execution component is also subscribed to the laser scan service to be able to immediately react to obstacles in dynamic environments. This way, the different services interact to build various control loops that combine goal-directed and reactive navigation while at the same time allowing for replacement of components.

query

<current map>

<goal>

<v,w>

state

push

states:

tasks:

**<<Component>>**

**<<Component>> SmartPioneerBaseServer**

Base Server

serialport: /dev/ttyS1 BaseTask

Path Planning

tasks:

PlannerTask neutral, active

**SmartPlannerBreadthFirstSearch**

<pose>

*A basic principle is that clients of services are not allowed to make any assumptions about offered services beyond the announced characteristics and that service providers are not allowed to make any assumptions about service requestors (like e.g. their maximum rate of requests).*

This principle results in simple and precise guidelines of how to apply the communication patterns in order to come up with well-formed services. As long as a service is being offered, the service provider has to accept all incoming requests and has to respond to them according to its announced quality-of-service parameters.

We illustrate this principle by means of the *query pattern*. As long as there are no further quality-of-service attributes, the service provider accepts all incoming requests and guarantees to answer all accepted requests. However, only the service provider knows about the resources it has available to process incoming requests, and clients are not allowed to impose constraints on the service provider (a request might provide further non-committal hints to the service provider, like a request priority). Thus, the service provider is allowed to provide a nil answer (the flag *is valid* is set to false in the answer) in case it runs out of resources to answer a particular request. In consequence, all service requestors must always be prepared to get a nil answer. A service requestor is also not allowed to make any assumptions about the response time as long as the corresponding quality-of-service attributes are not set by the service provider. However, if a service provider announces to answer requests within a certain time limit, one can rely on getting at least a nil answer before the deadline. If a service requestor depends on a maximum response time although this quality-of-service attribute is not offered by the service provider, it needs to use client-side timeouts with its requests. This overall principle (i) ensures loose coupling of services, (ii) prevents clients from imposing constraints on service providers and (iii) gives service providers the means to arbitrate requests in case of limited resources.
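These query-pattern semantics can be sketched in a few lines. The following Python toy model (all names are invented for illustration; this is not the SMARTSOFT API) shows a server that always responds but degrades to a nil answer when its resource budget is exhausted, and a client that guards against missing response-time guarantees with a client-side timeout:

```python
import queue
import threading

class Answer:
    """Query answer with a validity flag; is_valid=False marks a "nil" answer."""
    def __init__(self, data=None, is_valid=True):
        self.data = data
        self.is_valid = is_valid

class QueryServer:
    """Toy query server: it answers every accepted request, but degrades to a
    nil answer once its (made-up) resource budget is exhausted."""
    def __init__(self, capacity):
        self.capacity = capacity

    def handle(self, request):
        if self.capacity <= 0:
            return Answer(is_valid=False)   # out of resources -> nil answer
        self.capacity -= 1
        return Answer(data=request.upper())

def query_with_timeout(server, request, timeout_s):
    """Client-side timeout for when no response-time QoS attribute is offered:
    a late or missing response is treated like a nil answer."""
    box = queue.Queue(maxsize=1)
    threading.Thread(target=lambda: box.put(server.handle(request)),
                     daemon=True).start()
    try:
        return box.get(timeout=timeout_s)
    except queue.Empty:
        return Answer(is_valid=False)

server = QueryServer(capacity=1)
a1 = server.handle("scan")     # answered normally
a2 = server.handle("scan")     # budget exhausted -> nil answer, still a response
a3 = query_with_timeout(server, "scan", timeout_s=0.5)
```

The point mirrored here is that the client never imposes expectations on the server: it checks the validity flag of every answer and protects its own deadline with a timeout.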

It now also becomes evident why SMARTSOFT offers more than just a request/response and a publish/subscribe pattern, which would be sufficient to cover all communication needs. The *send* pattern explicates a one-way communication, although one could emulate it via a query pattern with a void answer object. However, practical experience proved that a much better clarity for services with this characteristic is achieved by offering a separate pattern. The same holds true for the *push newest* and the *push timed* pattern. In principle, the push timed pattern is a push newest pattern with a regular update. However, in case of a push newest pattern, service requestors rely on having the latest data available at any time. This is different from a push timed pattern, where the focus is on the service provider guaranteeing a regular time interval (in some cases even providing the same data). Although one could cover some of these aspects by quality-of-service attributes, they also influence how a component developer perceives the usage of a pattern. Again, achieving clarity and making the characteristics easily recognizable is of particular importance for the strict separation of the roles of component developers and system integrators. This also becomes obvious with the *event* pattern. In contrast to the push patterns, service requestors get informed only in case a server-side event predicate (service requestors individually parametrize each event activation) becomes true. This tremendously saves bandwidth compared to publishing latest changes to all clients, since one would then always have to publish a snapshot of the overall context needed to evaluate the predicate at the client side instead of just the information that an event fired.
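The bandwidth argument for the event pattern can be made concrete with a minimal sketch (hypothetical names, not the SMARTSOFT API): the client-supplied predicate is evaluated on the server side, so only firings cross the wire, never the full context:

```python
class EventServer:
    """Toy event service: each activation carries a client-supplied predicate
    that is evaluated on the server side; a client is notified only when its
    predicate fires, not on every state change."""
    def __init__(self):
        self.activations = []   # (predicate, notify-callback) pairs

    def activate(self, predicate, callback):
        self.activations.append((predicate, callback))

    def update(self, state):
        for predicate, callback in self.activations:
            if predicate(state):
                callback(state)      # only now does data cross the "wire"

fired = []
server = EventServer()
# hypothetical activation: fire when the battery level drops below 20
server.activate(lambda s: s["battery"] < 20,
                lambda s: fired.append(s["battery"]))
server.update({"battery": 80})   # predicate false -> nothing sent to the client
server.update({"battery": 15})   # predicate true  -> client informed
```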

### **3.3 A robotics example illustrating the** SMARTSOFT **concepts**


Figure 10 illustrates how the SMARTSOFT component model and its meta-elements provided by SMARTMARS structure and partition a typical robotics use-case, namely the navigation of a mobile platform. Besides access to sensor data and to the mobile base, the algorithmic building blocks of a navigation system are map building, path planning, motion execution and self localization. Since these building blocks are generic for navigation systems independently of the used algorithms, it makes sense to come up with a corresponding component structure and services (or to expect readily available components and services).

Fig. 10. Structure of a navigation task based on the SMARTSOFT component model.

The SmartLaserLMS200Server component provides the latest laser scan via a push newest port. Thus, all subscribed clients always get an update as soon as a new laser scan is available. It is subscribed to the pose service of the robot base to label laser scans with pose stamps. The component comprises a SmartTask to handle the internal communication with the laser hardware. This way, the aliveness of the overall component and its services is not affected by flaws on the laser hardware interface. Parameters like position-offset and serialport are used to customize the component to the target robotic system. These parameters have to be set by the application builder during the deployment step. The SmartMapperGridMap component requires a laser scan to build the longterm and the current map. The current map is provided by a push newest server port (as soon as a new map is available, it is provided to subscribed clients, which makes sense since path planning depends on the latest maps) and the longterm map by a query server port (since it is not needed regularly, it makes sense to provide it only on a per-request basis). The state port is used to set the component into different states depending on which services are needed in the current situation: build no map at all (neutral), build the current map only (buildCurr), build the longterm map only (buildLtm) or build both maps (buildBoth). The push newest server publishes the current map only in the states buildCurr and buildBoth. Requests for a longterm map are answered as long as the component and its services are alive, but with an invalid map in case it is in the states neutral or buildCurr (valid flag of the answer object set to false). Accordingly, the SmartPlannerBreadthFirstSearch component provides its intermediate waypoints by a push newest server (updating the motion execution component as soon as new information is available).
The motion execution component regularly commands new velocities to the robot base via a send service. The motion execution component is also subscribed to the laser scan service to be able to immediately react to obstacles in dynamic environments. This way, the different services interact to build various control loops to combine goal directed and reactive navigation while at the same time allowing for replacement of components.
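The state-dependent behavior of the map services can be mimicked in a short Python sketch (class and method names are illustrative, not taken from SMARTSOFT): the current map is pushed only in the states buildCurr and buildBoth, while longterm-map queries are always answered, possibly with an invalid map.

```python
class MapAnswer:
    """Map answer object with a validity flag, as described above."""
    def __init__(self, data=None, is_valid=True):
        self.data = data
        self.is_valid = is_valid

class GridMapper:
    """Toy model of the state logic of the grid mapper component."""
    STATES = {"neutral", "buildCurr", "buildLtm", "buildBoth"}

    def __init__(self):
        self.state = "neutral"
        self.subscribers = []              # push newest clients (current map)
        self.longterm = MapAnswer(data="ltm")

    def set_state(self, state):
        assert state in self.STATES
        self.state = state

    def on_new_scan(self, scan):
        # the current map is built and pushed only in buildCurr / buildBoth
        if self.state in ("buildCurr", "buildBoth"):
            current = MapAnswer(data=f"curr({scan})")
            for notify in self.subscribers:
                notify(current)

    def query_longterm(self):
        # always answered while alive, but invalid unless the map is built
        if self.state in ("buildLtm", "buildBoth"):
            return self.longterm
        return MapAnswer(is_valid=False)

pushed = []
mapper = GridMapper()
mapper.subscribers.append(lambda m: pushed.append(m.data))
mapper.on_new_scan("s0")            # neutral: nothing is pushed
mapper.set_state("buildCurr")
mapper.on_new_scan("s1")            # now the current map reaches subscribers
```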

Robotic Software Systems: From Code-Driven to Model-Driven Software Development 491


### **3.4 State-of-the-art and related work**

The historical need in robotics for the same person to create the application logic and at the same time act as the system integrator led to a poor understanding in the robotics community that these two roles ought to be separated. In consequence, most robotics frameworks don't make this distinction and consequently offer no clear guideline to the developer on how to achieve separation of roles.

For example, *ROS* (Quigley et al., 2009) is a currently widely-used framework in robotics providing a huge and valuable codebase. However, it lacks guidance for component developers to ensure system level conformance for composability. Instead, its focus is on side-by-side existence of all kinds of overlapping concepts without an abstract representation of its core features and properties in a way independent of any implementation.

The only approach in line with the presented concepts is the *RTC Specification* (OMG, 2008), which is considered the most advanced concept of *MDSD* in robotics. However, it is strongly influenced by use-cases requiring a data-flow architecture and does not yet adequately take into account requirements imposed by runtime adaptability.

## **4. Reference implementation of the SMARTMDSD TOOLCHAIN**

The reference implementation of the SMARTMDSD TOOLCHAIN implements the SMARTMARS meta-model within a particular MDSD toolchain. It is used in real-world operation to develop components and to compose complex systems out of them. The focus of this section is on the technical details of implementing a meta-model. Another focus is on the role-specific views and the support an MDSD toolchain provides. We illustrate the reference implementation of the toolchain along the different roles of the stakeholders and their views on the toolchain.

### **4.1 Decisions and tools behind the reference implementation - framework builder view**

The reference implementation of our SMARTMDSD TOOLCHAIN is based on the *Eclipse Modeling Project (EMP)* (Eclipse Modeling Project, 2010) and *Papyrus UML* (PAPYRUS UML, 2011).

*Papyrus UML* is used as the graphical modeling tool in our toolchain. To this end, it is customized by the framework builder for the development of SMARTSOFT components (component builder) and deployments of components (application builder). This includes, for example, a customized wizard to create communication objects, components as well as deployments. The modeling view of *Papyrus UML* is enriched with a customized set of meta-elements to create the models. The model transformation and code generation steps are developed with *Xpand* and *Xtend* (Efftinge et al., 2008), which are part of the *EMP*. These internals are not visible to the component builder and the application builder. They just see the graphical modeling tool to create their models and use the *CDT Eclipse Plugin* (Eclipse CDT, 2011) to extend the source code and to compile binaries. The SMARTMARS meta-model is implemented as a *UML Profile* (Fuentes-Fernández & Vallecillo-Moreno, 2004) using *Papyrus UML*.

The decision to use *UML Profiles* and *Papyrus UML* to implement our toolchain is motivated by the reduced effort to come up with a graphical modeling environment customized to the robotics domain and its requirements by reusing available tools from other communities. Although some shortcomings have to be accepted and taken into account, we were not caught in the huge effort related to implementing a full-fledged *GMF*-based development environment. This allowed us to come up with our toolchain early and to gain deeper insights and more experience on the different levels of abstraction. However, the major drawbacks of *UML Profiles* are:

• *UML* is a general purpose modeling language covering aspects of several domains and is thus complex. Using profiles, it is only possible to enrich *UML*, but not to remove elements.

• *UML Profiles* provide just a lightweight extension of *UML*. That means the structure of *UML* itself cannot be modified. The elements can be customized only by stereotypes and tagged values.

• Deployment and instantiations of components are not adequately supported.
To counter the drawbacks of *UML Profiles*, we only support the usage of the stereotyped elements provided by SMARTMARS to create the models of the components and deployments. Directly using pure *UML* elements in the diagrams is not supported. Thus, the models are created using just the meta-elements provided by SMARTMARS. By restricting the usage to SMARTMARS meta-elements, a mapping to another meta-model implementation technology like *eCore* (Gronback, 2009) is straightforward: the stereotyped elements can be mapped onto *eCore* without taking into account *UML* and its structure. In the current implementation of our toolchain, the restriction to only use SMARTMARS meta-elements is enforced with *check* (Efftinge et al., 2008), the *EMP* implementation of *OCL* (Object Management Group, 2010). In the model transformation and code generation steps of our toolchain, pure *UML* elements are ignored. Another approach would be to customize the diagrams by removing the *UML* elements from the palette (see fig. 12) and thus restricting their usage. The latter approach is on the agenda of the *Papyrus UML* project and will be supported by future releases.
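Such a constraint check can be pictured as follows. The snippet is a Python stand-in for the *check*/OCL constraints, with invented stereotype names: every model element must carry a SMARTMARS stereotype, and pure *UML* elements are reported as violations.

```python
from collections import namedtuple

# Each model element carries the stereotype applied to it (None = pure UML).
Element = namedtuple("Element", "name stereotype")

SMARTMARS_STEREOTYPES = {"SmartComponent", "SmartTask",
                         "SmartQueryServer", "SmartQueryClient"}

def check_model(elements):
    """Constraint sketch: every element must carry a SmartMARS stereotype;
    returns the names of violating (pure UML) elements."""
    return [e.name for e in elements
            if e.stereotype not in SMARTMARS_STEREOTYPES]

model = [
    Element("FaceRecognition", "SmartComponent"),
    Element("RecognitionTask", "SmartTask"),
    Element("SomeClass", None),        # pure UML element -> rejected
]
violations = check_model(model)
```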

### **4.2 Development of components – component builder view**

Figure 11 illustrates the roles of the framework builder and the component builder. The component builder creates a model of the component using the Eclipse-based toolchain, focusing on the component hull. Pushing the button, he receives the source files into which to integrate the business logic (algorithms, libraries) of the component. During this process the component builder is supported and guided by the toolchain. The internals of the model transformation and code generation steps implemented by the framework builder are not visible to the component builder.

Fig. 11. The component builder models a component, gets the source code of its overall structure (component hull, tasks, etc.) generated by the toolchain and can then integrate user-code into these structures.

The view of the component builder on the toolchain is depicted in figure 12. It is illustrated by a face recognition component which is a building block in many service robotics scenarios as part of the human-robot interface (detection, identification and memorization of persons). In its active state, the component shall receive camera images, apply face recognition algorithms and report detected and recognized persons. Thus, besides the standard ports for setting states (active, neutral) and parameters, we need to specify a port to receive the latest camera images (based on a push newest client) and another one to report on the results (based on an event server). The component shall run the face recognition based on a commercially available library within one thread and optional visualization mechanisms within a second, separate thread. Thus, we need to specify two tasks within the component.

Fig. 12. Screenshot of our toolchain showing the view of the Component Builder.

To create the model, the component builder uses the SMARTMARS meta-elements offered in the *palette*. The elements of the created model can be accessed either in the outline view or directly in the graphical representation. Several of the meta-element attributes (tagged values) can be customized and modified in the properties tab (e.g. customizing services to ports, specifying properties of tasks, etc.). The model is stored in files specific to *Papyrus UML*. Pushing the button, the component builder starts the workflow and the *PSI* (source) files are generated. The user code files are directly accessible in the *src* folder. The component builder integrates his business logic into these files (in our example, the interaction with the face recognition library). The generated files the component builder must not modify are stored in the *gen* folder. These files are generated and overwritten each time the workflow is executed. For the further processing of the source files, the *Eclipse CDT plugin* is used (*Makefile Project*). The makefile is also generated by the workflow according to the model properties. User modifications to the makefile can be made inside *protected regions* (Gronback, 2009).
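The interplay of regenerated files and user edits around *protected regions* can be sketched as follows; the marker syntax and file contents are invented for illustration and are not the ones used by the toolchain. On each regeneration, everything outside the markers is overwritten, while user text between matching markers is carried over.

```python
import re

def regenerate(template_output, previous_file):
    """Regenerate a file but keep user edits inside protected regions."""
    pattern = re.compile(
        r"// PROTECTED REGION (\w+) BEGIN\n(.*?)// PROTECTED REGION \1 END",
        re.DOTALL)
    # remember what the user wrote inside each region of the old file
    saved = dict(pattern.findall(previous_file or ""))

    def fill(match):
        rid, default = match.group(1), match.group(2)
        body = saved.get(rid, default)
        return (f"// PROTECTED REGION {rid} BEGIN\n"
                f"{body}// PROTECTED REGION {rid} END")

    return pattern.sub(fill, template_output)

template = ("CXXFLAGS = -O2\n"
            "// PROTECTED REGION USER_FLAGS BEGIN\n"
            "// PROTECTED REGION USER_FLAGS END\n")
old = ("CXXFLAGS = -O0\n"
       "// PROTECTED REGION USER_FLAGS BEGIN\n"
       "LDLIBS += -lfacerec\n"
       "// PROTECTED REGION USER_FLAGS END\n")
merged = regenerate(template, old)   # new flags, user libs preserved
```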

### **4.3 Development of components – framework builder view**

Taking a look behind the scenes of the toolchain, the workflow (fig. 13) appears as a two step transformation according to the *OMG MDA* (Object Management Group & Soley, 2000). The *Platform Independent Model (PIM)*, which is created by the component builder using the meta-elements provided by the *PIM UML Profile*, specifies the component independently of the implementation technology. The first step in the workflow is the model-to-model (*M2M*) transformation (encoded with *Xtend*) from the *PIM* into a *Platform Specific Model (PSM)*. In this step the elements of the *PIM* are transformed into corresponding elements of the *PSM* according to the selected target platform. The second step is the model-to-text (*M2T*) transformation (encoded with *Xpand* and *Xtend*) from the *PSM* into a *Platform Specific Implementation (PSI)*. This transformation is based on customizable code templates.

Fig. 13. Two step transformation workflow: Framework Builder view.
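The two-step workflow can be illustrated with a deliberately simplified Python sketch; the element kinds and the template are invented (the real toolchain encodes these steps with *Xtend* and *Xpand*), but the structure is the same: an M2M mapping refines the PIM into a platform-specific PSM, and an M2T step expands the PSM through code templates into text.

```python
# Step 1 (M2M): map each PIM element kind to its PSM counterpart,
# depending on the selected target platform.
PIM_TO_PSM = {
    "corba": {"SmartTask": "CorbaTask", "SmartQueryServer": "CorbaQueryServer"},
    "ace":   {"SmartTask": "AceTask",   "SmartQueryServer": "AceQueryServer"},
}

def m2m(pim, platform):
    return [dict(e, kind=PIM_TO_PSM[platform][e["kind"]]) for e in pim]

# Step 2 (M2T): expand the PSM into text via a customizable code template.
TEMPLATE = "class {name} : public {kind} {{ /* user code hooks */ }};"

def m2t(psm):
    return "\n".join(TEMPLATE.format(**e) for e in psm)

pim = [{"name": "ImageTask", "kind": "SmartTask"}]
psm = m2m(pim, "corba")
code = m2t(psm)
```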

### **4.3.1 The SmartMARS UML profiles (PIM/PSM)**

The abstract SMARTMARS meta-model is implemented by the framework builder as *UML Profile* using *Papyrus UML*. Therefore, standard *UML* elements (e.g. *Component*, *Class*, *Port*) are extended by stereotypes (e.g. SMARTCOMPONENT, SMARTTASK, SMARTQUERYSERVER) to give the meta-elements a new meaning according to the SMARTMARS concept. To distinguish and highlight the new element, it has its own icon attached. Tagged values are used to enrich the meta-element by new attributes which are not provided by the base *UML* element.

In fact there are two *UML Profiles*: one for the *PIM* and one for the *PSM*. The *PIM UML Profile* is visible to the component builder and is used by him to create the models of the components. For each SMARTSOFT implementation (e.g. *CORBA*, *ACE*), a *PSM UML Profile* has to be provided covering the specifics of the implementation. For example, the *CORBA*-based *PSM* supports *RTAI* Linux to provide hard realtime tasks. This is represented by the meta-element *RTAITask*. The *PSM UML Profile* is not visible to the component builder and is only used by the transformation steps inside the toolchain.

Fig. 14. Excerpts of the UML Profiles created with *Papyrus UML*, showing the meta-elements dedicated to the SMARTTASK. *Left: PIM*; *Right: PSM* with the two variants (1) standard task and (2) *RTAI* task.

Robotic Software Systems: From Code-Driven to Model-Driven Software Development 495

An excerpt of the *UML Profiles* is illustrated in figure 14. In the *UML Profile* for the *PIM*, the SMARTTASK extends the *UML class* and enriches it with attributes (tagged values) like *isPeriodic*, *isRealtime*, *period* and *timeUnit*. For the *timeUnit*, an enumeration (*TimeUnitKind*) is used to specify the unit in which time values are annotated. In the *UML Profile* for the *CORBA-based PSM*, an abstract task is specified (it cannot be instantiated) and the two variants (1) standard task and (2) realtime task are derived from it. Both are non-abstract and can thus be instantiated by the component builder to create the model. The standard task adds an optional attribute referencing a SMARTTIMER meta-element. This is used to emulate periodic non-realtime tasks, which are not natively supported by standard tasks of the *CORBA*-based SMARTSOFT implementation.

### **4.3.2 Model transformation and code generation steps**

The *M2M* transformation maps the platform independent elements of the *PIM* onto platform specific elements of the selected target platform. Such a mapping is illustrated by the example of the *SmartTask* (fig. 15 *left*) and the *CORBA*-based *PSM*. The SMARTTASK comprises several elements which are necessary to describe a task behavior and its characteristics.
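The two generation steps can be sketched in a toy form. This is a minimal illustration only: hypothetical C++ structs stand in for the UML models, and a string-based template stands in for the *Xpand* code templates; the actual transformations are written in *Xtend* and *Xpand* on the SMARTMARS models.

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Toy sketch of the two generation steps. The real toolchain operates on
// UML models with Xtend (M2M) and Xpand (M2T); the structs and the
// string-based template below are simplified assumptions for illustration.
struct PimTask {             // platform independent task description
    std::string name;
    bool isRealtime;
    bool isPeriodic;
    int  period;
};

struct PsmTask {             // CORBA-based platform specific element
    std::string name;
    std::string stereotype;  // "RTAITask" or "SmartCorbaTask"
    bool needsTimer;         // periodic non-realtime tasks get a timer element
    int  period;
};

// M2M: map the PIM element onto the element of the selected target platform.
PsmTask toPsm(const PimTask& t) {
    return PsmTask{t.name,
                   t.isRealtime ? "RTAITask" : "SmartCorbaTask",
                   !t.isRealtime && t.isPeriodic,
                   t.period};
}

// M2T: expand a customizable code template into platform specific text (PSI).
std::string toText(const PsmTask& p) {
    std::ostringstream out;
    out << "class " << p.name << "Core : public " << p.stereotype << " {};\n";
    if (p.needsTimer)
        out << "// timer emulating a period of " << p.period << "\n";
    return out.str();
}
```

The branch on *isRealtime* in the M2M step corresponds to the mapping of the SMARTTASK described in this section.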

Fig. 15. Model transformation and code generation steps illustrated by the example of the SMARTTASK. *Left:* Transformation of the PIM into a PSM. *Right:* Code generation and Generation Gap Pattern.


Fig. 16. PIM to PSM model transformation of the SMARTTASK depending on the attribute *isRealtime*.

Figures 15 and 16 depict the mapping in detail: the PIM SMARTTASK with its tagged values (*isRealtime*, *isPeriodic*, *period*, *timeUnit*, *priority*, *wcet*, *schedPolicy*) is mapped onto an RTAITASK if *isRealtime* is true, and otherwise onto a SMARTCORBATASK with an associated SMARTCORBAMUTEX [1], an optional SMARTCORBACONDMUTEX [0..1] and, if *isPeriodic* is true, a SMARTCORBATIMER [0..1]; the enumeration *SchedPolicyKind* comprises FIFO, round-robin and sporadic. On the *PSI* side, the generated *MyTaskCore* class (generated code, produced on top of the task implementations of the SMARTSOFT library and provided by the framework builder) is separated from the *MyTask* class (user code, modifiable by the component builder and generated only once).

Fig. 17. *PSM* to *PSI* transformation of the SMARTTASK. *Left:* Excerpt of the transformation template (*Xpand*) generating the PSI of a standard task. *Right:* The generated code where the user adds the business logic of the task.

Depending on the attribute *isRealtime*, the SMARTTASK is either mapped onto an RTAITASK or a non-realtime SMARTCORBATASK<sup>1</sup>. The *Xtend* transformation rule to transform the *PIM* SMARTTASK into the appropriate *PSM* element is depicted in figure 16.

In case the attributes specify a non-realtime, periodic SMARTTASK, the toolchain extends the *PSM* by the elements needed to emulate periodic tasks (as this feature is not covered by standard tasks). In each case the user integrates his algorithms and libraries into the stable interface provided by the SMARTTASK (component builder view) independent of the hidden internal mapping of the SMARTTASK (generated code). Figure 17 depicts the *Xpand* template to generate the user code file for the task in the *PSI*. The figure shows the template on the left and the generated code on the right.
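The emulation of a periodic, non-realtime task can be sketched as follows. This is a minimal sketch using standard C++ threading facilities; the class name and interface are illustrative assumptions, and the actual generated code relies on the SMARTCORBATIMER infrastructure instead.

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <thread>

// Minimal sketch of emulating a periodic task on top of a plain thread,
// in the spirit of the SmartCorbaTimer mechanism (names hypothetical).
// Waking up at absolute release points avoids drift caused by the
// runtime of the task body itself.
class PeriodicEmulation {
public:
    PeriodicEmulation(std::chrono::milliseconds period,
                      std::function<void()> handler)
        : period_(period), handler_(std::move(handler)) {}

    // Run the handler 'cycles' times with the configured period.
    void run(int cycles) {
        auto release = std::chrono::steady_clock::now();
        for (int i = 0; i < cycles; ++i) {
            release += period_;                 // next absolute release point
            std::this_thread::sleep_until(release);
            handler_();                         // user's task body
        }
    }

private:
    std::chrono::milliseconds period_;
    std::function<void()> handler_;
};
```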

The *PSI* consists of the SMARTSOFT library, the generated code and the user code (fig. 15 *right*). To be able to re-generate parts of the component source code according to modified parameters in the model without affecting the source code parts added by the component builder, the generation gap pattern (Vlissides, 2009) is used. It is based on inheritance – the user code inherits from the generated code<sup>2</sup>. The source files called *generated code* are generated each time the transformation workflow in the toolchain is executed. These files contain the logic which is generated behind the scenes according to the model parameters and must not be modified by the component builder. The source files called *user code* are only generated if they do not already exist. They are intended for the component builder to add the algorithms and libraries. The generation of the user code files mainly serves the convenience of the component builder, providing a code template as a starting point. These files are the sole responsibility of the component builder and are never modified or overwritten by the transformation workflow of the toolchain. In this context *generate once* means that the file is only generated if it does not already exist, which is typically the case when the workflow is executed for the first time. The clear separation of generated code and user code by the generation gap pattern allows, on the one hand, modifications of the model to be reflected in the generated source code without overwriting the user parts. On the other hand, it gives the user the freedom to structure his source code according to his needs and does not restrict the structure as would be the case with, for example, *protected regions*. Consequently, the component builder can modify the *period*, *priority* or even the *isRealtime* attribute of the task in the model, re-generate and compile the code without requiring any modification in the user code files. The modification in the model just affects the generated code part of the *PSI*.

<sup>1</sup> *Corba* in element names indicates that the element belongs to the *CORBA* specific *PSM*.

<sup>2</sup> The pattern could also be used in the opposite inheritance ordering so that the generated code inherits from the user code.
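The generation gap pattern can be sketched in a few lines. The class and method names below are illustrative (they follow the *MyTask*/*MyTaskCore* naming of fig. 15, but the method signatures are assumptions, not the exact SMARTSOFT API).

```cpp
#include <cassert>

// Generation gap pattern (Vlissides, 2009) as used for the task classes:
// the *generated* base class is overwritten on every re-generation, while
// the *user* subclass is generated only once and then owned by the
// component builder. Names are illustrative, not the exact SmartSoft API.

// ---- gen/ : re-generated on every workflow run, must not be edited ----
class MyTaskCore {
public:
    virtual ~MyTaskCore() = default;
    int start() { return svc(); }  // framework glue generated from the model
protected:
    virtual int svc() = 0;         // hook for the business logic
};

// ---- src/ : generated once, then maintained by the component builder ----
class MyTask : public MyTaskCore {
protected:
    int svc() override {
        // business logic goes here (e.g. calls into the face recognition
        // library in the example component)
        return 0;
    }
};
```

Re-generating *MyTaskCore* after a model change (e.g. a new *period*) leaves *MyTask* untouched, which is exactly the property described above.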

### **4.4 Deployment of components – application builder view**

The deployment is used to compose a robotic system out of available components. The application builder imports the desired components and places them onto the target platform. Furthermore, he defines the initial wiring of the components by connecting the ports with the meta-element *Connection*. Figure 18 illustrates the composition of navigation components. In this example, the application builder (system integrator) imports components specific to a particular robot platform (SmartPioneerBaseServer) and specific to a particular sensor (SmartLaserLMS200Server). The navigation components (SmartMapperGridMap, SmartPlannerBreadthFirstSearch, SmartCDLServer) can be used across different mobile robots. The SmartRobotConsole provides a user interface to command the robot.

Fig. 18. Screenshot of our toolchain showing the deployment of components to build a robotic system.

The components are presented to the application builder as black boxes with dedicated variation points. These have to be bound during the deployment step and can be specified according to system level requirements. For example, a laser ranger component might need the coordinates of its mounting point relative to the robot coordinate system. One might also reduce the maximum scanning frequency to save computing resources. Parameters also need to be bound for the target system. For example, in case *RTAI* is used inside of a component, the *RTAI* scheduler parameters (timer model underlying RTAI: periodic, oneshot) of the target


*RTAI* system have to be specified. If the application builder forgets to bind required settings, this absence is reported to him by the toolchain.
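The completeness check for such bindings can be sketched as a simple comparison of required variation points against the bound parameters. The function and parameter names are hypothetical; the toolchain performs this check on the deployment model itself.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Sketch of the check at deployment time: every required variation point of
// a component must be bound; unbound ones are reported to the application
// builder. Parameter names below are illustrative only.
std::vector<std::string> unboundSettings(
        const std::vector<std::string>& required,
        const std::map<std::string, std::string>& bound) {
    std::vector<std::string> missing;
    for (const auto& name : required)
        if (bound.find(name) == bound.end())
            missing.push_back(name);  // report this setting as unbound
    return missing;
}
```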

The application builder can identify the provided and required services of a component via its ports. He can inspect its characteristics by clicking on the port icon which opens a property view. That comprises the communication pattern type, the used communication objects and further characteristics like service name and also port specific information like update frequencies. The initial wiring is done within the graphical representation of the model. In case the application builder wants to connect incompatible ports, the toolchain refuses the connection and gives further hints on the reasons.
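A minimal sketch of such a wiring check follows. The criteria shown (matching communication pattern, matching communication object, complementary client/server roles) summarize what the text describes; the field names and the string encoding are assumptions for illustration.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the port compatibility check: two ports are
// connectable only if they use the same communication pattern, transport
// the same communication object, and play complementary roles.
struct Port {
    std::string pattern;     // e.g. "pushNewest", "event", "query"
    std::string role;        // "client" or "server"
    std::string commObject;  // e.g. "CommImage"
};

bool connectable(const Port& a, const Port& b) {
    return a.pattern == b.pattern
        && a.commObject == b.commObject
        && a.role != b.role;  // exactly one client and one server
}
```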

If the *CORBA*-based implementation of SMARTSOFT is used, the *CORBA* naming service properties *IP*-address and *port*-number have to be set. Furthermore, the deployment type (*local*, *remote*) has to be selected. For a remote deployment, the *IP*-address, *username* and *target folder* of the target computer have to be specified. The deployed system is copied to the target computer and can be executed there. In case of a local deployment, the system is customized to run on the local machine of the application builder. This is, for example, the case if no real robot is used and the deployed system uses simulation components (e.g. *Gazebo*). Depending on the initial wiring, parameter files are generated and also copied into the deployment folder. These parameter files contain application specific adjustments of the components. In addition, a shell script to start the system is generated out of the deployment model.

## **4.5 Deployment of components – framework builder view**

To implement the deployment of components, some meta-elements are added by the framework builder to the *UML Profile* (fig. 19). This section focuses on the *CORBA*-based deployment.

Fig. 19. Meta-elements to support the deployment of components.

The deployment model contains relevant information like the initial wiring between components (*Connection*), naming service properties (*CorbaNamingService*), scheduler properties (*RTAISetup*) and parameters about the deployment itself (*CorbaSmartSoftTarget*). The models of the components are made available to the deployment model using the *UML import* mechanism, which allows the internal structure of the components to be accessed. Out of the deployment model, the parameter files and a start script are generated (*M2T*) using *Xpand* and *Xtend*, in a similar way as these transformation languages are used to generate code for the components. Based on the deployment model, several analysis and simulation models can be generated to get feedback from 3rd-party tools. For example, one can extract the parameters of all realtime tasks mapped onto a specific processor to perform hard realtime schedulability analysis (CHEDDAR (Cheddar, 2010)) (Schlegel et al., 2010).
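The M2T step over the deployment model can be sketched as a small text generator. This is a toy stand-in for the *Xpand* templates: the struct layout and the emitted script lines are assumptions for illustration, not the actual generated artifacts.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Toy M2T step for the deployment model: emit a start script for the
// deployed component instances. The real toolchain uses Xpand templates
// and the CORBA naming service settings; names here are illustrative.
struct Deployment {
    std::string namingServiceIp;
    int namingServicePort;
    std::vector<std::string> components;  // deployed component executables
};

std::string generateStartScript(const Deployment& d) {
    std::ostringstream out;
    out << "#!/bin/sh\n";
    out << "NS=corbaloc::" << d.namingServiceIp << ":"
        << d.namingServicePort << "\n";
    for (const auto& c : d.components)
        out << "./" << c << " -ORBInitRef NameService=$NS &\n";
    return out.str();
}
```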

As deployments and especially instantiations of components are not sufficiently supported by *UML*, a few workarounds are necessary as long as the SMARTMARS meta-model is implemented as a *UML Profile*. For example, a robot with two laser range finders (front, rear) requires two instances of the same component. Each laser instance requires its individual parameters (e.g. serial port, pose on robot). These parameters are assigned to the deployment model by the application builder specifically for each component. In the implementation based on the *UML Profile*, we hence work on copies of components. Individual instances with their own parameter sets are considered in the abstract SMARTMARS meta-model and are also covered in the SMARTSOFT implementation, so switching to a different meta-model implementation technology would allow for instances. This has not yet been done due to the huge manpower needed compared to just reusing *UML* tools.

## **5. Example / scenario**

The work presented has been used to build and run several real-world scenarios, including participation in the RoboCup@Home challenge. Among other tasks, our robot "Kate" can follow persons, deliver drinks, recognize persons and objects and interact with humans by gestures and speech.

In the clean-up scenario<sup>3</sup> (fig. 20) the robot approaches a table, recognizes the objects which are placed on the table and cleans the table by throwing the objects either into the trash bin or into the kitchen sink. There are different objects, like cups, beverage cans and different types of crisp cans. The cups can be stacked into each other and have to be thrown into the kitchen sink. Beverage cans can be stacked into crisp cans and have to be thrown into the trash bin. Depending on the type of crisp can, one or two beverage cans can be stacked into one crisp can. After throwing some of the objects into the correct disposal, the robot has to decide whether to drive back to the table to clean up the remaining objects (if any) or to drive to the operator and announce the result of the cleaning task. The robot reports whether all objects on the table could be cleaned up or, in case any problems occurred, how many objects are still left.

Fig. 20. The clean-up scenario. (1) Kate approaches the table; (2/3) Kate stacks objects into each other; (4) Kate throws cups into the kitchen sink.

Such complex and diverse scenarios can neither be developed from scratch nor can their overall system complexity be handled without appropriate software engineering methods. Due to their overall complexity and richness, they are considered a convincing stress test for the proposed approach. In the following, the development of the clean-up example scenario is illustrated according to the different roles.

The **framework builder** provides the tools to develop SMARTSOFT components as well as to perform deployments of components to build a robotic system. In the described example this includes the *CORBA*-based implementation of the SMARTSOFT framework and the SMARTMDSD toolchain, which are both available on *Sourceforge* (http://smart-robotics.sourceforge.net).

<sup>3</sup> http://www.youtube.com/roboticsathsulm#p/u/0/xtLK-655v7k

The component builder view of the SMARTMDSD toolchain supports **component builders** in developing their components independently of each other, but based on agreed interfaces. These components are independent of the concrete implementation technology of SMARTSOFT. Component builders provide their components in a component shelf. The models of the components include all information to allow a black-box view of the components (e.g. services, properties, resources). The explication of such information about the components is required by the application builder to compose robotic systems in a systematic way. To orchestrate the components at run-time, the task coordination language SMARTTCL (Steck & Schlegel, 2010) is used. Therefore, SMARTTCL is wrapped by a SMARTSOFT component and is also provided in the component shelf. The SMARTTCL component provides reusable action plots which can be composed and extended to form the desired behavior of the robot.

The **application builder** uses the application builder view of the SMARTMDSD toolchain. He composes already existing components to build the complete robotic system. In the above described clean-up scenario, 17 components (e.g. mapping, path planning, collision avoidance, laser ranger, robot base) are reused from the component shelf. It is worth noting that the components were not particularly developed for the clean-up scenario, but can be used in it due to the generic services they provide. The SMARTTCL sequencer component is customized according to the desired behavior of the clean-up scenario; several of the already existing action plots can be reused. Application specific extensions are added by the application builder.

At run-time the SMARTTCL sequencer component coordinates the software components of the **robot** by modifying the configuration and parametrization as well as the wiring between the components. As SMARTTCL can access the information (e.g. parameters, resources) explicated in the models of the components at run-time, this information can be taken into account by the decision making process. That allows the robot to take into account not only the current situation and context, but also the configuration and resource usage of the components. In the described scenario, the sequencer manages the resources of the overall system, for example by switching off components which are not required in the current situation. While the robot is manipulating objects on the table and requires all available computational resources for the trajectory planning of the manipulator, the components for navigation are switched off.

## **6. Conclusion**

The service-oriented component-based software approach allows separation of roles and is an important step towards the overall vision of a robotics software component shelf. The feasibility of the overall approach has been demonstrated by an Eclipse-based toolchain and its application within complex RoboCup@Home scenarios. Next steps towards model-centric robotic systems that comprehensively bridge design-time and run-time model usage now become viable.

## **7. References**

Andrade, L., Fiadeiro, J. L., Gouveia, J. & Koutsoukos, G. (2002). Separating computation, coordination and configuration, *Journal of Software Maintenance* 14(5): 353–369.

Beydeda, S., Book, M. & Gruhn, V. (eds) (2005). *Model-Driven Software Development*, Springer.

<sup>3</sup> http://www.youtube.com/roboticsathsulm#p/u/0/xtLK-655v7k
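The instance-parameterization idea can be pictured with a small sketch. This is plain illustrative code, not the SMARTSOFT or SMARTMDSD API: one reusable component definition is instantiated twice, each instance carrying the parameter set the application builder would assign in the deployment model.

```python
from dataclasses import dataclass

# Illustrative sketch only (names and types invented, not the SmartSoft API):
# one reusable component definition, two deployed instances with
# individual parameter sets (e.g. serial port, mounting pose).
@dataclass(frozen=True)
class LaserParams:
    serial_port: str
    pose: tuple  # (x, y, theta): mounting pose on the robot

class LaserComponent:
    """Reused component code; only the deployment parameters differ."""
    def __init__(self, name: str, params: LaserParams):
        self.name = name
        self.params = params

# Two instances of the same component, parameterized per deployment.
front = LaserComponent("laser-front", LaserParams("/dev/ttyS0", (0.3, 0.0, 0.0)))
rear = LaserComponent("laser-rear", LaserParams("/dev/ttyS1", (-0.3, 0.0, 3.14)))
```

The point of the sketch is that the component class exists once while each instance owns its parameter set, which is exactly what the *UML Profile* workaround emulates by copying components.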

### **6. Conclusion**

The service-oriented, component-based software approach allows a separation of roles and is an important step towards the overall vision of a robotics software component shelf. The feasibility of the overall approach has been demonstrated by an Eclipse-based toolchain and its application within complex RoboCup@Home scenarios. Next steps towards model-centric robotic systems that comprehensively bridge design-time and run-time model usage now become viable.



**24** 

*Germany* 

**Using Ontologies for Configuring Architectures** 

The provision of goods and services accomplishes a transition to greater value-addedoriented logistics processes. The philosophy of logistics is changing to a cross-disciplinary function. Therefore it becomes a critical success factor for competitive companies (Göpfert, 2009). Thus logistics assumes the task of a modern management concept. It provides for the development, design, control and implementation of more effective and efficient flows of goods. Further, on aspects of information, money and financing flows are crucial for for the

According to (Scheid, 2010a) this can be ensuring, by the automation of logistic processes. Based (Granlund, 2008), the necessity for automated logistics processes raises the focus on logistics by existing dominant factors of uncertainty and rapid changes in the business area environment. Therefore, the adoption of flexible automation systems is essential. Here robotics appears very promising due to its universal character as a handling machine. This is how (Suppa & Hofschulte, 2010) characterizes the development of industrial robotics: '[...] increasingly in the direction of flexible systems, which take over new fields with sensors and innovative fiscal and regulatory approaches.' Here, logistics represents a major application field. (Westkämper & Verl, 2009) describe the broad applications for logistics and demonstrate the capability for flexibility with examples from industry and research. Besides the technological feasibility, there is also the existing demand by logistics firms concerning

These representations demonstrate the interaction of robotics-logistics regarding the design of technical systems for the operator strongly driven by the manufacturer (technology push) and the technological standardization of the system. Robotic-logistics concentrates on the development and integration of products. Accordingly, standardization activities of the technical systems focus on components and sub-systems that represent the manufacturer-

The main goal concentrates on the planning, implementation, and monitoring of enterprisewide process chains of technological systems under the consideration of economic criteria. In this context, the interaction of the two domains 'process' and 'technology' are essential. Thus, the configuration design of technological layouts or machines is crucial. The harmonization of the two domains requires a systematic description framework concerning their exchange of information and knowledge. A high-level abstraction of knowledge representation in the description of the relationships and connections is essential. It also

development of enterprise-wide and company-comprehensive success.

**1. Introduction** 

the need for their application.

oriented perspective.

**of Industrial Robotics in Logistic Processes** 

Matthias Burwinkel and Bernd Scholz-Reiter

*BIBA Bremen, University of Bremen* 


## **Using Ontologies for Configuring Architectures of Industrial Robotics in Logistic Processes**

Matthias Burwinkel and Bernd Scholz-Reiter
*BIBA Bremen, University of Bremen, Germany*

## **1. Introduction**



The provision of goods and services accomplishes a transition to more value-added-oriented logistics processes. The philosophy of logistics is changing to a cross-disciplinary function and therefore becomes a critical success factor for competitive companies (Göpfert, 2009). Logistics thus assumes the task of a modern management concept: it provides for the development, design, control and implementation of more effective and efficient flows of goods. Furthermore, the accompanying flows of information, money and financing are crucial for the development of enterprise-wide and company-comprehensive success.

According to (Scheid, 2010a) this can be ensured by the automation of logistic processes. Based on (Granlund, 2008), the necessity for automated logistics processes raises the focus on logistics, given the dominant factors of uncertainty and rapid change in the business environment. Therefore, the adoption of flexible automation systems is essential. Here robotics appears very promising due to its universal character as a handling machine. (Suppa & Hofschulte, 2010) characterize the development of industrial robotics as moving '[...] increasingly in the direction of flexible systems, which take over new fields with sensors and innovative fiscal and regulatory approaches.' Here, logistics represents a major application field. (Westkämper & Verl, 2009) describe the broad applications for logistics and demonstrate the capability for flexibility with examples from industry and research. Besides the technological feasibility, there is also existing demand by logistics firms concerning the need for their application.

These representations demonstrate the interaction of robotics-logistics regarding the design of technical systems for the operator, strongly driven by the manufacturer (technology push), and the technological standardization of the system. Robotics-logistics concentrates on the development and integration of products. Accordingly, standardization activities of the technical systems focus on components and sub-systems that represent the manufacturer-oriented perspective.

The main goal concentrates on the planning, implementation, and monitoring of enterprise-wide process chains of technological systems under the consideration of economic criteria. In this context, the interaction of the two domains 'process' and 'technology' is essential. Thus, the configuration design of technological layouts or machines is crucial. The harmonization of the two domains requires a systematic description framework concerning their exchange of information and knowledge. A high-level abstraction of knowledge representation in the description of the relationships and connections is essential. It also allows the description of implicit relationships such as comparative relationship notations. This applies to both qualitative and quantitative types of relationships. The outcome is a framework that is available to represent an object dependency between process and technology and to serve the described requirements for flexibility regarding logistics cargo, throughput and machine- and process-layout.

Thus, there is the need for a qualitative description of the relationship between process and technology by means of specific parameters and properties on a high level of abstraction.

## **2. Robotics-Logistics: Challenges and potentials**

Since the 1970s, there has been a multifaceted development of the basic understanding of logistics. The origin of 'logistics' refers to the Greek 'logos' (reason, arithmetic) and the Romanesque-French ('providing', 'supporting'). In the past, logistics was understood in delimited functions. Nowadays logistics means global networks, which are necessary to optimize. The understanding of the task itself changed from a pure functional perspective through process chains to value-adding networks:

Fig. 1. Historical development of logistics philosophy [source: authors' illustration following (Baumgarten, 2008)]

Fig. 1 shows the historical development starting in the 1970s. Today's logistics is characterized by its value and integration in the appropriate process chains. The Federal Logistics Association assigns logistic processes to the areas of procurement, production, distribution, disposal and transport logistics. (Arnold, 2006) designates differentiated performance-oriented processes as transport and storage processes. Storage processes are the processes of handling, order picking, and packing. Logistics services are evaluated based on delivery time, delivery reliability, inventory availability, delivery quality, and delivery flexibility. These are the objectives for both intra-logistics and extra-logistics. Logistics institutions, such as logistics service providers, provide value-added benefits to this process. These services are dependent on the collection and the output of their product 'commodity'. Finishing or outer packaging operations are examples here.

The logistics of the future will be essentially determined by the automation of material and information flows. Automation systems in logistics have already existed for several years. Application areas for these systems, such as de-palletizing and palletizing, sorting, and picking, are 'technically feasible and tested for decades' (Scheid, 2010b). The complete automation of so-called intra-logistics is technically feasible. However, this situation is not encountered in practice due to the singular character of isolated applications. In the future, material flow technologies will be more modularized, as (Straube & Rösch, 2008) identified. Modular automation systems maximize flexibility in logistics systems and enable the reutilization of technical components of handling and storage technology. Summarizing the research requirements concerning these technologies, (Straube & Rösch, 2008) ask for new modular constructions, which can combine different techniques based on their standardized modular features. This simplifies the integration into new systems. They describe a weakening tendency in new features for the components of conveying and storage technology. The focus is set on the configuration of system architectures composed of existing commercial components. This approach leads to process-specific integrated systems.

From an industrial point of view, multiple logistics areas display a high potential for the automation of processes (Scheid, 2010b). Thus, a high potential exists for the processes 'transport', 'storage' and 'de-palletizing'. Transport processes will be automated by nearly 30 percent by 2015. The reasons for the limiting borders of the degree of automation lie in the characteristics of the material and information flows. The existing process dynamics and process volatility are a handicap for standardized processes. The continuous automation of specific and individual processes appears to be difficult due to these reasons. Machine application requires great flexibility for adapting to changing parameters.

A fundamental role belongs to robotics. By definition, industrial robot systems are a central success factor in process automation due to their universality. The application of robotic systems in logistics factories should be designed flexibly. The automation of the processes under the customers' existing requirements can allow individually designed systems (Günthner & Hompel, 2009). In their recent study, the European initiative (EUROP, 2009) identified the application of robotics in logistics as a central issue for the future and highlights the broad range of applications and diverse functions in this area. The operation of the systems under limited process standardization, due to the complexity of the processes with their heterogeneous and manifold variables, leads to individual and special solutions in today's logistics factories. The adaptation of the systems to changing process environments is hindered by the process-specific character of the systems.

(Fritsch & Wöltje, 2006) identify the necessity for a paradigm shift away from such individual system configurations and underline the relevance of standardized robot systems. (Elger & Hausser, 2009) establish this necessity by describing the demand for more standardized solutions, which can also serve individual needs. The initiative (EUROP, 2009) characterizes the standardization of components and technical systems as an essential challenge for so-called 'Robotics 2020'. This concerns both hardware and software, and the interfaces among these components. In the authors' view, this requirement essentially influences the system architecture. In the view of (EUROP, 2009), the system architecture accords robotics a central role. In the future, architectures for robotic systems will be designed with regard to both comprehensive configuration conditions and technical subsystems and components. They can be assigned from comparable and very different applications. Therefore, robotic systems will become more modularized in their architecture configurations in the medium term (until 2015). The interconnection between the modules is weakly configured in an overall perspective. On the one hand, this allows a rapid reconfiguration when changes in the process environment appear. On the other hand, besides repeated partial reuse, the standardization of components and systems is the second driver of a so-called 'adaptable configuration status'. The long-term perspective for the year 2020 looks toward the development of architectures up to autonomous self-configurations.

The second crucial development is represented by the compositionality of robotic systems. A robotic system is compositional when the complexity of its system architecture is based on the compilation of subsystems or components and their specific functions. The more subsystems or components are used, the higher is the probability of a complex system architecture. Thus, this configuration status depends on the process environment. This means that robotic systems for self-changing or complicated processes must be explicitly designed to fit these requirements. The robotic system has to be configured in a process-oriented way. The configuration conditions of robotics-logistics appear to diverge from those of production robotics. This can be attributed to the characteristics of logistics processes. The process environment appears to have an essential influence on the technological configuration status of robotic systems. From the perspective of systems theory, the degree of complexity is influenced by the technological configuration status of the robotic systems and the characteristics of the process environment.

Thus, complicated processes often require robotic system architectures which are composed of numerous components and are individually configured. This relationship can result in complicated or complex systems on the process and on the technology level. Additionally, procedural complexity influences technical complexity. The necessary reaction possibilities of technical components to procedurally dynamic events are the main driver here. The individual solutions counteract the intended economic standard solutions. Standardization serves to reduce complexity and has to integrate both the process environment and systems engineering. Robotic systems can be standardized by considering the two perspectives of the configuration.

The description of this relational structure is represented by an approach that works with a qualitative logical description on an abstract level. Current approaches to system modeling appear too formal. Ontological approaches, with their level of abstraction, are an interesting alternative. Despite the standardization of the system architecture, a process-oriented configuration is to be ensured. The necessary flexibility is intended to serve the dynamic and volatile processes. The construction and structure of the architecture have to be monitored and planned in their modular basic approach. To cover the historical, actual and future usage of technical systems, modular robotic systems are essential.

This book chapter describes a basic conceptual procedure for the representation of the relational structure between process and technology through an ontological vocabulary.
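Such a process–technology relational structure can be pictured, purely as an illustrative sketch (the class and relation names below are invented for this example, not taken from the chapter), as a small set of subject–predicate–object statements:

```python
# Illustrative sketch: a process-technology relational structure expressed
# as subject-predicate-object triples. All names are invented examples.
triples = {
    ("Depalletizing", "isA", "LogisticProcess"),
    ("ArticulatedRobot", "isA", "TechnologyComponent"),
    ("ArticulatedRobot", "supports", "Depalletizing"),
    ("Depalletizing", "requires", "HighFlexibility"),
}

def objects_of(subject, predicate):
    """Query: all objects related to `subject` via `predicate`."""
    return {o for (s, p, o) in triples if s == subject and p == predicate}

# Which technology supports the de-palletizing process?
supporters = {s for (s, p, o) in triples
              if p == "supports" and o == "Depalletizing"}
```

The sketch illustrates the intent of the ontological vocabulary: qualitative relationships between process and technology become explicit, queryable statements rather than implicit design knowledge.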

## **3. State of the art - modelling approaches for system representation**

Examples of traditional modeling methods for representing systems in which relationships between entities are described are the 'entity-relationship model', 'Petri nets', and 'event-driven process chains' (Kastens et al., 2008, Seidlmeier, 2002, Siegert, 1996).

The Entity-Relationship Model (ER model) was developed in 1976 by Chen. It allows delimited systems to be represented in a way which is intelligible for all involved. The entities (objects) and the relationships between the objects form the basis of the modeling. Regarding the purpose of the modeling, only objects, relationships, and attributes are described (Chen, 1976). The method of Petri nets represents structural coherence between sets of events (Kiencke, 1997). In general, a Petri net is a graphic description in which the generation of sequences of event-driven networks is represented. It consists of two types of nodes, so-called positions and transitions, which represent conditions and events. A directed edge connects a position with a transition. Petri nets are capable of describing a large class of possible processes (Tabeling, 2006). Event-Driven Process Chain (EPC) modeling is a process-oriented perspective on functions, events, organizational units, and information object systems. A process chain is defined by modeling rules and operators (Staud, 2006).
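The position/transition behaviour just described can be made concrete with a minimal toy sketch (illustrative code only, not taken from the cited sources): a transition is enabled when every input position holds a token, and firing it moves tokens from input to output positions.

```python
# Minimal Petri-net sketch: positions hold token counts; a transition
# fires by consuming one token from each input position and producing
# one token in each output position. Position names are invented examples.
marking = {"pallet_arrived": 1, "robot_idle": 1, "item_sorted": 0}

# transition = (input positions, output positions)
depalletize = (["pallet_arrived", "robot_idle"], ["item_sorted"])

def enabled(marking, transition):
    """A transition is enabled iff every input position holds a token."""
    inputs, _ = transition
    return all(marking[p] > 0 for p in inputs)

def fire(marking, transition):
    """Return the follow-up marking after firing (no in-place change)."""
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] += 1
    return m

if enabled(marking, depalletize):
    marking = fire(marking, depalletize)
# marking is now {"pallet_arrived": 0, "robot_idle": 0, "item_sorted": 1}
```

The event-driven character is visible in the firing rule: the occurrence of an event (transition) is conditioned entirely on the token state of its input positions.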

Systems can also be modeled by using ontologies. The concept of ontology originates from philosophy and describes the 'science of being'. Many authors define ontology from different perspectives. (Gruber, 1993) describes ontologies as the explicit specification of a conceptualization. The abstract level has the advantage that many basic approaches of different research areas are covered. For example, linguistically and mathematically oriented ontologies are combined under this definition. (Stuckenschmidt, 2009) establishes the common reference to this definition by many authors. (Studer et al., 1998) take it as a basis and define ontologies from their formal logic: 'An ontology is a formal, explicit specification of a shared conceptualization.' They emphasize the machine-readable formality

The second crucial development is represented by the compositionality of robotic systems. A robotic system is compositional when the complexity of system architecture is based on compilation of the subsystems or components and their specific functions. The more sub systems or components will be used, the higher is the probability of complex system architecture. Thus, this configuration status is dependent on the process environment. This means that the robotic systems for self-changing or complicated processes must be explicitly designed to fit these requirements. The robotic system has to be configured process orientated. Robotics-logistics configuration conditions appear to diverge in comparison to the configuration condition of production robotics. This can be attributed to the characteristics of logistics processes. The process environment appears to have an essential influence on the technological configuration status of robotic systems. Out of the perspective of system theory, the degree of complexity can be influenced by its technological configuration status of the robotic systems and the characteristics of the process

Thus, complicated processes often require robotic system architectures, which are composed of numerous components and are individually configured. This relationship can result in

is hindered due to the process specific character of the systems.

configurations.

environment.

complicated or complex systems on the process- and on the technology-level. Additionally, procedural complexity influences technical complexity. The necessary reaction possibilities with technical components to procedurally dynamic events are the main driver here. The individual solutions counteract the intended economic standard solutions. Standardization serves to reduce complexity and have to integrate both the process environment and systems engineering. Robotic systems can be standardized by considering the two perspectives of the configuration.

The description of this relational structure is represented by an approach that works with a qualitative, logical description on an abstract level. Current approaches to system modeling appear too formal; ontological approaches, with their level of abstraction, are an interesting alternative. Despite the standardization of the system architecture, a process-oriented configuration must be ensured. The necessary flexibility is intended to serve dynamic and volatile processes. The construction and structure of the architecture have to be monitored and planned in their modular basic approach. To cover the historical, current, and future usage of technical systems, modular robotic systems are essential.

This book chapter describes a basic conceptual procedure for representing the relational structure between process and technology through an ontological vocabulary.

## **3. State of the art - modeling approaches for system representation**

Examples of traditional modeling methods for representing systems, in which relationships between entities are described, are the 'entity-relationship model', 'Petri nets', and 'event-driven process chains' (Kastens et al., 2008; Seidlmeier, 2002; Siegert, 1996).

The Entity Relationship Model (ER model) was developed in 1976 by Chen. It allows delimited systems to be represented in a way that is intelligible for all involved. The entities (objects) and the relationships between the objects form the basis of the modeling. Depending on the purpose of the modeling, only objects, relationships, and attributes are described (Chen, 1976). The method of Petri nets represents structural coherence between sets of events (Kiencke, 1997). In general, a Petri net is a graphic description in which the generation of sequences in event-driven networks is represented. It consists of two types of nodes, so-called positions and transitions, which represent conditions and events. A directed edge connects a position with a transition. Petri nets are capable of describing a large class of possible processes (Tabeling, 2006). Event-Driven Process Chain (EPC) modeling is a process-oriented perspective on functions, events, organizational units, and information object systems. A process chain is defined by modeling rules and operators (Staud, 2006).
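The token-firing semantics of Petri nets can be made concrete with a minimal sketch; the place and transition names below are invented for illustration and do not come from the cited sources:

```python
# Minimal Petri net: places hold tokens; a transition is enabled when
# every input place holds at least one token. Firing consumes one token
# from each input place and produces one token in each output place.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# Illustrative logistics fragment: a waiting item plus a free robot
# enable the 'pick' transition, which yields a handled item.
net = PetriNet({"item_waiting": 1, "robot_free": 1})
net.add_transition("pick", inputs=["item_waiting", "robot_free"],
                   outputs=["item_handled", "robot_free"])
net.fire("pick")
print(net.marking)  # {'item_waiting': 0, 'robot_free': 1, 'item_handled': 1}
```

After firing, 'pick' is no longer enabled because no further item is waiting, which is exactly the condition/event coupling described above.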

Systems can also be modeled by using ontologies. The concept of ontology originates from philosophy and describes the 'science of being'. Many authors define ontology from different perspectives. (Gruber, 1993) describes ontologies as the explicit specification of a conceptualization. This abstract level has the advantage that many basic approaches of different research areas are covered; for example, linguistically and mathematically oriented ontologies are combined under this definition. (Stuckenschmidt, 2009) establishes the common reference of many authors to this definition. (Studer et al., 1998) take it as a basis and define ontologies from their formal logic: 'An ontology is a formal, explicit specification of a shared conceptualization.' They emphasize the machine-readable formality of ontology.

Using Ontologies for Configuring Architectures of Industrial Robotics in Logistic Processes 509

(Neches et al., 1991) specify this idea and describe ontologies as the 'basic terms and relations comprising the vocabulary of a topic area, as well as the rules for combining terms and relations, to define extensions to the vocabulary.' According to this understanding, concepts are defined through basic distinctions of objects and their rule-based relationships to each other. (Bunge, 1977) describes ontology as the only area of science, besides the fields of natural and social sciences, that focuses on concrete objects and concrete reality. Ontologies are to be assigned to philosophy, since they stress basic principles of virtual science, which cannot be proven or refuted by experiments.

Ontologies represent knowledge that is structured and provided with information technologies. They can be a crucial part of knowledge management. According to (Staab, 2002), knowledge management has the goal of optimizing the requirements for employees' performance; the factors 'persons', 'culture', 'organization', and 'basic organization processes' are the major success criteria for knowledge management. According to (Gruber, 1993), ontologies can facilitate the sharing and exchange of knowledge.

There are many kinds and types of ontologies. Depending on their internal structure, ontologies vary in their complexity, as represented in Fig. 2:

Fig. 2. Types of ontologies organized by increasing complexity [source: author's illustration following (Herb, 2006)]

Examples of trivially complex ontologies are simple catalogues or collections of concepts. Maximally complex ontologies contain a number of general and weakly defined axioms. An interesting type is the taxonomy, which can be defined as a hierarchical classification of concepts into categories.

Taxonomies are also considered an attenuated form of ontology. According to (Herb, 2006), they include a series of concepts that are interlinked by hereditary structures. Depending on their nature, ontologies can be applied and re-applied with different levels of intensity (Gómez-Pérez, Fernández-López & Corcho, 2004). Ontologies can be classified into so-called 'lightweight ontologies' and 'heavyweight ontologies'. 'Lightweight ontologies' describe notions (concepts), taxonomies, and relationships and properties between terms. In addition to these properties, 'heavyweight ontologies' also consider axioms and constraints.

The ontological modeling of systems is possible through the application of existing relationships and rules. (Steinmann & Nejdl, 1999) describe the two tasks of ontologies. In the first sense, ontologies describe the nature of the constituents and the principles; they designate these as the 'grammar of reality'. In the second sense, ontologies establish the objects and connections, which (Steinmann & Nejdl, 1999) designate as the 'encyclopedia of reality'. In the first sense, ontologies function as meta-models: abstract modeling concepts are described and provide the framework for the ontology. Specific meta-model-oriented ontologies are designated as representation ontologies; for example, the frame ontology in Ontolingua can be mentioned here, according to (Gruber, 1993). The ontology provides a grammar composed of concepts, relations, and attributes. In the second sense, ontologies describe conceptual models that are based on the structures and correlations of a specific application area. Examples of existing conceptual models are legal texts and the integration of application systems or open systems.

Comparing classic and ontological methods, some differences can be identified. Ontologies describe the composition of reality; traditional modeling approaches assume this information to be known. In this context, (Herb, 2006) ascertains that ontologies are applied for the concept-based structuring of information. In his view, ontologies provide more detailed structured information than conventional sources. (Stuckenschmidt, 2009) describes the existence of objects and items and their form of representation. (Steinmann & Nejdl, 1999) detail this approach and describe it as a central factor for the understanding of items. They conclude that ontologies are always based on a highly abstract level in comparison to model-based approaches.

The authors also indicate the borders of ontological modeling. The crucial difficulties are inconsistencies in the classification of meta-data in ontologies, the application of meta-data, and the distinct classification and structuring of information. These aspects are attributable to the highly abstract level of ontologies: abstract notations lead to such assignment, classification, and structuring issues.

Ontologies can be differentiated in two aspects, of conceptual and of formal logical nature. According to (Swartout & Tate, 1999), the first aspect has the task of depicting and composing structures of reality; the second addresses the creation of the semantic framework with the definition of objects, classes, and contexts. There are many basic approaches to different ontologies in the literature, oriented to the areas of application. (Bateman, 1993) describes the existence of basic types of interconnected entities, the so-called 'design patterns'. These are entities that can be differentiated according to the types 'endurant', 'perdurant/occurrence', 'quality', and 'abstract'. While entities of the 'endurant' type have a continuous and predictable nature, entities of the type 'perdurant/occurrence' describe events that occur unexpectedly and unpredictably. Entities of the types 'quality' and 'abstract' unite properties, attributes, relations, and comparatives.

The ontology DOLCE is an example of the application of these basic types. DOLCE was developed by the Institute of Cognitive Science and Technology in Trento, Italy, and stands for 'Descriptive Ontology for Linguistic and Cognitive Engineering'. DOLCE attempts to impart meanings to things and events. Here, entities deal with the meanings through the use of agents in order to obtain consensus among all entities regarding the meaning. (Gangemi et al., 2002) treat this principle in a plausible way. Further examples of conceptual ontologies are WordNet, the 'Unified Medical Language' ontology, the 'Suggested Upper Merged Ontology', the ontology of 'ε-Connection' (Kutz et al., 2004), the ontology of the 'Process Specification Language', and the ontology 'OntoClean'.

Another key component of conceptual ontologies is ontology engineering. Ontology engineering is concerned with the process design of ontology development, in order to create and to apply ontologies.
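The distinction between a plain taxonomy and a 'lightweight ontology' (concepts plus relations and properties, but no axioms) can be made concrete with a small sketch; all concept and relation names are invented for illustration and do not come from the chapter:

```python
# A taxonomy is just a hierarchy of concepts (child -> parent);
# a lightweight ontology additionally carries typed relations.

taxonomy = {
    "industrial_robot": "technical_system",
    "gripper":          "component",
    "component":        "technical_system",
}

# Relations beyond the hierarchy (subject, predicate, object triples).
relations = [
    ("industrial_robot", "has_part", "gripper"),
    ("industrial_robot", "executes", "handling_process"),
]

def ancestors(concept):
    """Walk the hereditary structure upwards to the root."""
    chain = []
    while concept in taxonomy:
        concept = taxonomy[concept]
        chain.append(concept)
    return chain

print(ancestors("gripper"))  # ['component', 'technical_system']
print([r for r in relations if r[1] == "has_part"])
```

A heavyweight ontology would add axioms on top of such triples, e.g. a constraint that every `industrial_robot` must have at least one `gripper`.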

| **ontology** | **author** | **application area** | **characteristics** | **relevance** |
|---|---|---|---|---|
| DOLCE | Institute of Cognitive Science and Technology, Trento, Italy | semantic Web | cognitive basis; hierarchical structure of knowledge, uses cognitive aspects | partially relevant; wide |
| WordNet | Princeton University | representation of natural languages in IT applications | lexical database | not relevant |
| UMLS | National Library of Health | communicating medical terminology | database for terminology for medical applications | not relevant; especially for biomedicine |
| SUMO | Teknowledge Corporation | providing information in databases and in the internet | combined out of multiple ontologies | relevant; complex |
| PSL | National Institute of Standards and Technology, Gaithersburg | neutral representation of process knowledge | relations between processes | relevant for structuring processes and robotics technologies |
| OntoClean | Laboratory for Applied Ontology | checking inconsistencies automatically | … | relevant for describing … and robotics |
| E-Connection | University of Liverpool | connecting different domains in a formal and logical way | complex correlation of different domains | not relevant, formal approach |
| CASL | Common Framework Initiative | first-order logics for subsuming specific languages | … | not relevant; designed for automated verification |
| F-Logic | Stony Brook University | base for integration of software | object oriented, deductive database | modular concept not relevant; formal approach |
| OIL | Vrije Universiteit, Amsterdam | for the semantic web | web-based language, formal infrastructure | not relevant, formal approach |
| OWL (Ontology Web Language) | World Wide Web Consortium | representation of correlation in the semantic web | … | not relevant, formal approach |

Table 1. Comparison of selected ontologies in the context of Robotic Logistics [source: author's illustration]
There are multiple methods here. (Wiedemann, 2008) lists these as follows:

• Ontology Development
• Ontology Re-Engineering
• Ontology Learning
• Ontology Alignment/Merging
• Collaborative Ontology Construction

Ontology Development deals with the question of the methodological development and composition of ontologies. Ontology Re-Engineering focuses on existing approaches and adapts them to the current task. Ontology Learning focuses on approaches for semi- or fully-automatic knowledge acquisition. Collaborative Ontology Construction issues guidelines for the generation of consensual knowledge. Ontology Merging combines two or more ontologies in order to depict various domains; this method allows handling knowledge that is brought together from different worlds.
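Ontology Merging, as characterized above, can be sketched as taking the union of two concept hierarchies from different domains; the mapping format (concept to set of parents) and all names are illustrative assumptions, not a method from the cited literature:

```python
# Naive ontology merge: union the concept -> parents mappings of two
# source ontologies, keeping every parent named by either source.

def merge(onto_a, onto_b):
    merged = {}
    for onto in (onto_a, onto_b):
        for concept, parents in onto.items():
            merged.setdefault(concept, set()).update(parents)
    return merged

# Two small "worlds": a process view and a technology view that share
# the concept 'depalletizing'.
process_onto = {"depalletizing": {"handling_process"},
                "handling_process": {"process"}}
tech_onto    = {"depalletizing": {"robot_task"},
                "robot_task": {"task"}}

merged = merge(process_onto, tech_onto)
print(sorted(merged["depalletizing"]))  # ['handling_process', 'robot_task']
```

The shared concept ends up with parents from both domains, which is the essence of depicting two worlds in one ontology; a real merge would additionally resolve naming conflicts and check the combined axioms.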

(Gruninger, 2002) describes the purposes of formal logical ontologies as communication, automatic inference, and the representation and re-utilization of knowledge. Formal logical ontologies aim to depict a semantic domain through syntax. The concept of semantics belongs to semiotics, the theory of signs. Semantics can also be defined as the 'theory of the relationships among the signs and the things in the world, which they denote' (Erdmann, 2001). Semantics is relevant for formal logical ontologies for modeling and generating calculations on a mathematical foundation. This basic approach, with its syntax, has key relevance by providing the mathematical grammar and the concretely denotable model. Exemplary syntaxes are algebraic terms, logical formulas, or informational programs. Formal logic provides a language for formalizing the description of the real world and the tool for representing ontologies. It is differentiated into propositional logic and predicate logic. In propositional logic, there exist exactly two possible truth-values: true or false. Predicate logic consists of terms and describes real-world objects in an abstract manner by means of variables and functions. (Stuckenschmidt, 2009) presents methods and techniques of the notation.
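The two truth-values of propositional logic can be made concrete with a tiny evaluator; the tuple-based formula syntax is an assumption chosen for this sketch, not a notation from the cited sources:

```python
# Tiny propositional-logic evaluator: formulas are nested tuples,
# e.g. ('and', 'p', ('not', 'q')); atoms receive their truth-values
# from a model (a dict mapping atom names to True/False).

def evaluate(formula, model):
    if isinstance(formula, str):              # atomic proposition
        return model[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], model)
    if op == "and":
        return all(evaluate(a, model) for a in args)
    if op == "or":
        return any(evaluate(a, model) for a in args)
    if op == "implies":
        return (not evaluate(args[0], model)) or evaluate(args[1], model)
    raise ValueError(f"unknown operator: {op}")

f = ("implies", ("and", "p", "q"), "p")
print(evaluate(f, {"p": True, "q": False}))  # True: p∧q is false, so the implication holds
```

Predicate logic goes beyond this by quantifying over variables and applying functions to objects, which is what makes it suitable for describing real-world objects abstractly.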

Formal logic ontologies do not allow automatic proofs. Only computer-based evidence for sub-problems is possible. Examples of formal logical ontologies are OntoSpace, DiaSpace, OASIS-IP, CASL, OIL, and OWL.

In summary, it can be stated that both ontology types can be classified into different kinds according to (Guarino, 1998): 'top-level ontologies', 'domain ontologies', and 'application ontologies', the last of which already represent known data and class models. 'Top-level ontologies' describe fundamental and generally applicable basic approaches that are independent of a specific real world. Their level of abstraction is so high that it allows a wide range of users.

'Domain ontologies' focus on a specific application area and describe its fundamental events and activities by specifying the syntax of 'top-level' ontologies. 'Application ontologies' make use of known data or class models that apply to a specific application area.

Table 1 summarizes the described ontologies and compares them according to the presented properties and characteristics. Furthermore, the relevance of the ontologies applicable to robotic logistics is specified, and the ontology of the 'Process Specification Language' and the ontology 'OntoClean' are highlighted.


## **4. Logical ontologies for configuration of individual system architectures**

## **4.1 Required ontological framework**

'Robotic-Logistics' formulates the central expectations of the ontology for configuring robotic system architectures. The input and output variables of the environment, due to the reference process, have to be defined. On this basis, the relevant domains 'technology' and 'process' can be described with respect to their contents; classes and variables structure them. On the process side, the reference process is addressed. In this domain, the direct upstream and downstream processes of the reference process are also relevant: the output of the upstream process provides the input of the reference process, and the output of the reference process provides the input of the downstream process. The relevant technical systems and components of robotic-logistics are structured in the technology perspective. The entities are described by means of a regulatory framework. Fig. 3 gives an overview of the hierarchical structure of the domains 'process' and 'technology':
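The chaining of upstream, reference, and downstream processes just described can be sketched as follows; all class and attribute names are hypothetical illustrations, not the chapter's meta-model:

```python
# Sketch of the two domains and the input/output chaining of the
# reference process: the upstream output feeds the reference input,
# and the reference output feeds the downstream input.
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    inputs: list = field(default_factory=list)    # provided by upstream output
    outputs: list = field(default_factory=list)   # feeds downstream input

@dataclass
class TechnicalSystem:
    name: str
    components: list = field(default_factory=list)

def chain(upstream, reference, downstream):
    """Output of upstream becomes input of reference; output of
    reference becomes input of downstream."""
    reference.inputs = list(upstream.outputs)
    downstream.inputs = list(reference.outputs)

unload = Process("unloading", outputs=["parcel_on_belt"])
pick   = Process("depalletizing", outputs=["parcel_sorted"])
store  = Process("storing")
chain(unload, pick, store)
print(pick.inputs, store.inputs)  # ['parcel_on_belt'] ['parcel_sorted']
```

On the technology side, a `TechnicalSystem` such as a robot cell with its components would be related to each `Process` instance by the ontology's relations.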



Fig. 3. Class structure of the 'process' and 'technology' domains [source: author's illustration]

The entities in these two class structures are the main processes of the meta-model presented in this chapter. On this basis, process modules and process elements of the reference process are derived by applying the regulatory framework.

The reference process is situated in its systemic environment, which influences the reference process with input and output variables. Thereby, the input describes the general framework and restrictions that are valid for the robotic system, while the output results directly from the target dimension of the automation task. The parameters cover technical, organizational, and economic aspects. (Kahraman et al., 2007) define a multi-criteria system for the evaluation of robotic systems, which provides multiple key factors for this evaluation.

With the development of the entity structures of the two domains of the reference process and of the inputs and outputs of the environment, the fundamentals of ontology development are set. Based on these structures, the hierarchical ontological taxonomies are created with the aid of the ontology OntoClean. This is necessary in order to be able to describe the relations between the two domains through a second ontology, the Process Specification Language.

### **4.2 Conceptional ontology for descriptive process technology relations**

This section introduces a two-stage approach. In the first phase, the hierarchical structures of the respective domains are composed. The procedure model of (Stuckenschmidt, 2009) offers advantages for the creation of these taxonomies. This approach forms the taxonomies through the OntoClean ontology and analyzes potential sources of error. At the lowest level of the taxonomy, the properties and attributes of the process elements are denoted; they define the reference process. Thus, for example, the process module 'piece goods' with the process element 'bulk' displays the property 'five kilos'. Through this definition, the reference process is individualized and specified.
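The idea of a taxonomy whose lowest level carries the individualizing properties can be sketched in a few lines. This is a minimal illustration (all names are the author's hypothetical choices, not part of the original model), using the 'piece goods' example from the text:

```python
class Node:
    """A taxonomy node: a process module, process element, or similar."""
    def __init__(self, name, kind, children=None, properties=None):
        self.name = name
        self.kind = kind                    # e.g. 'module', 'element'
        self.children = children or []
        self.properties = properties or {}  # only leaves carry properties

# Example from the text: module 'piece goods' -> element 'bulk' -> 'five kilos'.
bulk = Node("bulk", "element", properties={"mass": "five kilos"})
piece_goods = Node("piece goods", "module", children=[bulk])

def leaf_properties(node):
    """Collect the individualizing properties found at the lowest level."""
    if not node.children:
        return {node.name: node.properties}
    result = {}
    for child in node.children:
        result.update(leaf_properties(child))
    return result

print(leaf_properties(piece_goods))
```

Only the leaves carry properties, mirroring the claim that the reference process is individualized exclusively at the lowest taxonomy level.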

The second phase provides the combination of the two domains. The description of these relations is done through the ontology of the 'Process Specification Language'. It is based on the descriptive notation of functions and processes through its manifold concepts and relations at different levels of detail. Each participant in a relationship pair is standardized, and the relationship is depicted jointly. Due to the functional and procedural point of view of the ontology, the relationship can be illustrated well. The representation is done by focusing on process elements of one domain that have an impact on the process elements of the second domain.

For preparation, the conceptual framework is defined as the delimitation of the considered environment to be covered. It is defined according to the procedure model developed by (Figgener & Hompel, 2007). They describe a regulatory framework for reference processes:

Fig. 4. Procedure model for the creation of reference process models [source: author's illustration following (Figgener & Hompel, 2007)]

The aspects of an application area are defined. Thus, as displayed in fig. 4, six phases for the generation of a reference model are described. For the problem at hand, phases 1.2, 2.1, and 2.2 are especially relevant. Phase 1.2 describes the regulatory framework; the process modules and process elements are defined in phases 2.1 and 2.2. This distinction allows the complexity and expenditure of model creation through the ontology to be reduced and controlled.

With these results, both phases of the ontology model can be completed. Fig. 5 shows the interdependence of both ontologies:



Fig. 5. Spheres of action of the ontologies 'OntoClean' and 'Process Specification Language' [source: author's illustration]

In the first sphere, the taxonomies of the domains 'process' and 'technology' are defined with 'OntoClean'. Depending on the reference process, the process taxonomies are characterized by properties, which customize the taxonomies. The figure also displays the sphere of action of the first part of the ontology 'Process Specification Language', namely 'PSL Core'. It identifies the relations that exist among the entities and properties of the process domain and the entities and attributes of the technology domain. In summary, the figure works out the two phases of the ontology model.

The first phase develops the taxonomies using OntoClean. The taxonomies are denoted and structured. Based on the notation of the meta-properties, the accuracy of the taxonomies is analyzed. Inconsistencies regarding the clearness of the hierarchies arise when relations are utilized incorrectly; this leads to incorrect and misleading interpretations of the ontology (Herb, 2006). The OntoClean process examines the subsuming structures existing between classes by using meta-properties.

The second phase depicts relations between the domains by using the ontology of the 'Process Specification Language'. Fixed taxonomies of the domains 'process' and 'technology' for a specific reference process are the basis for representing the interaction between the two domains. The main goal of the ontology is to identify parameters or components of one domain that affect the second domain. Besides the demonstration of the existing relations, the description of the quality of the relationship is an essential aspect. Thereby, both the direction of the relationship and its qualitative description are identified. With the ontology model, the defined requirements are satisfied as follows:


| requirement | ontology contribution |
|---|---|
| structuring domains | OntoClean - identification of relevant terms |
| notation of relations between process and technology | PSL - identification and notation of relations |
| description of relationships | PSL - description of the relations |

Table 2. Handling the requirements through the ontology model [source: author's illustration]

The ontology OntoClean structures the domains: it defines the concepts and composes the taxonomies of the domains 'process' and 'technology'. The 'Process Specification Language' notes the relations between the parameters. The resulting ontology model is the basis for the individual process modularization and the configuration of technical robotic systems.

## **4.2.1 Definition of taxonomies using the ontology 'OntoClean'**

The structuring of the domains is done by defining taxonomies. The usage of taxonomies joins and collects concepts and entities and forms a base frame for these ontologies by structuring them. Here, the relationships are developed in a mutually associative manner, and descent rules work out the taxonomy structures. Due to their qualitative character, taxonomies are often distinguished incorrectly. The process of 'OntoClean' creates taxonomies and checks the consistency and accuracy of the structures.

The procedure involves the definition of taxonomies and their meta-properties. It aims at overcoming the frequent deficit of false descent of entities in the taxonomic structure. Such erroneous subsuming structures are avoided by a philosophy-based distinction of the entities and classes with meta-properties. (Herb, 2006) comprehensively describes the meta-properties 'identity', 'essence and rigidity', 'dependency', and 'unity'. Using these meta-properties, the taxonomies are distinctly defined through the concepts of class, entity, instance, and property. Entities describe the objects of a taxonomy and are collected in classes; entities which share a common property instantiate a class and are defined as instances. This is of great relevance, since the representation is challenging due to the multiple components and parameters displayed in both the domain 'technology' and the domain 'process'. Table 3 provides an overview of the conceptions.

The concepts are the basis of the taxonomies to be created. They are based on the entity structures. For a specific reference process, the entities are reviewed and adapted individually. The procedural taxonomies are developed based on the meta-models of process standardization developed by (Figgener & Hompel, 2007). Depending on the type of process, the main processes, process modules, and process elements are applied. Based on the structured system techniques in the domain 'technology', the technical taxonomies are defined according to the commercial state of the art.

The claim of universality is not maintained. The conception is defined specifically for each reference process. This increases the risk of an erroneous and inconsistent definition and description of the concepts. Due to this circumstance, the analysis and validation of the developed taxonomies is an essential part of ontology development.

| term | definition due to the procedure model of fig. 4 | commentary |
|---|---|---|
| class | main process | structuring of classes |
| entity | process module | entities which are subsumed in one class |
| instance | process module with same attributes | all entities with same attributes in one process module |
| property | process element | characterization and individualization of the reference process |

Table 3. Definition of conceptions in the framework of 'OntoClean' [source: author's illustration]

The OntoClean process provides a procedure of subsuming, shown in fig. 6:

Fig. 6. General construction of taxonomy [source: author's illustration]

A class A subsumes a class B if all instances of class B are always also instances of class A. Fig. 6 presents a class with n entities. Each entity can display further lower-level hierarchical entities, known as sub-entities. Thus, the number of vertical levels is unlimited. On the lowest vertical level, the reference process is individualized by distinct properties. They provide the specific information about the reference process. These may be quantitative or qualitative. As an example here, for a procedural taxonomy, a sub-entity of type 'mass' can be specified with the quantitative property of '22 kg'.
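The subsumption criterion above is purely extensional, so it can be sketched directly: modelling each class by its set of instances (a simplification with hypothetical example data, not taken from the chapter), A subsumes B exactly when B's instance set is contained in A's.

```python
# Sketch: class A subsumes class B iff every instance of B is also an
# instance of A. Classes are modelled simply as sets of instance names.

def subsumes(a_instances, b_instances):
    """True iff all instances of B are also instances of A."""
    return set(b_instances) <= set(a_instances)

# Hypothetical logistics example: a main process and one of its modules.
main_process = {"palletizing", "depalletizing", "order picking"}
process_module = {"palletizing", "depalletizing"}

print(subsumes(main_process, process_module))  # subsumption holds
print(subsumes(process_module, main_process))  # reverse direction fails
```

Because the vertical nesting of entities and sub-entities is unlimited, the same check applies unchanged at every level of the taxonomy.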


A special case is presented by the class structure 'environment'. Here, both the environmental framework conditions and the target dimensions are structured. This taxonomy is independent of the reference process; an example of its basic structure is as follows:

Fig. 7. Taxonomy of the environment variables [source: author's illustration]

The environmental framework conditions define the technical requirements, such as availability and performance data. The listed properties are specifically defined for each reference process; they customize the process. The target dimensions capture technical and economic criteria: on the technical side, for instance, process safety or process velocity is determined; on the economic side, capital value or amortization time is identified. The target dimensions represent the ex-post criteria for the success of the realization of the robotic system.

For each taxonomy and each of its classes K, a notation is defined with respect to a property M. K is denoted as +M when M applies to all instances of K. The notation -M is used if not all instances of K have the property M. If M is not valid for any instance of the class K, this relationship is denoted by ~M. Each of the four meta-properties is reviewed to that effect for each class and entity. The meta-property 'essence and rigidity' is described by fixing essence in general first; second, the specific form of the essence, the rigidity, is described. A property is essential for an entity if it occurs in the entity in every possible situation. In turn, a property is rigid (+R) if it is essential for all instances. Non-rigid properties are denoted with -R; they describe properties which are not required for entities that may nevertheless be instances of the class. Anti-rigidity (~R) applies if the property is essential for no instance of the associated class.
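The +M/-M/~M notation can be derived mechanically once one knows which instances of K carry the property M. A minimal sketch with hypothetical instance data (the function and sample names are the author's own, not from the chapter):

```python
def notation(instances, carriers):
    """+M if M holds for all instances of K, ~M if for none, -M otherwise."""
    instances, carriers = set(instances), set(carriers)
    holds = instances & carriers          # instances of K that carry M
    if holds == instances:
        return "+M"
    if not holds:
        return "~M"
    return "-M"

k = {"robot_a", "robot_b", "robot_c"}     # hypothetical instances of class K
print(notation(k, {"robot_a", "robot_b", "robot_c"}))  # M holds everywhere
print(notation(k, {"robot_a"}))                        # M holds partially
print(notation(k, set()))                              # M holds nowhere
```

The same three-valued scheme is what each of the four meta-properties instantiates with its own choice of M.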


The meta-property 'identity' describes criteria which distinctly identify classes and differentiate their instances from each other. Both a class itself and its upper classes can provide these identity criteria, since upper classes pass their criteria on. In the first case, where the identity criterion has been inherited from an upper class, the class is marked with +I. In the second case, where the criterion of identity is defined for the first time in the class itself, the class is marked with +O. Classes that require a further identity criterion as a restriction for a distinct definition are denoted with -I.

The third meta-property 'unity' is related to the property 'identity' and describes the affiliation of certain entities to a class. A unity criterion defines a unifying relation of all entities, which are interconnected. The corresponding classes are distinguished with +U. ~U denotes those entities of a class that cannot be distinctly described. If there is no unity criterion provided, the class is described with –U.

The fourth meta-property 'dependence' describes the dependence of one class on another. This fact is relevant if an instance of a class cannot be an instance of a second class. Dependent classes are listed with +D, while independent classes are notated with -D.


In summary, the meta-properties are defined as follows, according to (Herb, 2006):

| meta-property | notation | definition |
|---|---|---|
| rigidity | +R | a property is essential for all valid instances |
| | ~R | a property that is essential for no instance of the regarded class |
| identity | +I | classes which differentiate due to criteria inherited from upper classes |
| | +O | identity criteria that are defined for the first time and are not transmitted |
| unity | +U | unity criteria that denote connected entities |
| | ~U | connection of entities which cannot be described definitely |
| dependence | +D | dependent classes |
| | -D | independent classes |

Table 4. Summary definition of the meta-properties [source: author's illustration following (Herb, 2006)]

The review of the meta-properties reveals incorrect taxonomy structures and makes their correction possible. The next step involves reviewing the consistency of the meta-properties with each other, which determines whether there are inadmissible combinations of meta-properties; an example of such a combination is +O and -U. The next step focuses on the removal of all non-rigid classes from the taxonomy.
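These two steps, flagging inadmissible meta-property combinations and pruning non-rigid classes to obtain the backbone taxonomy, can be sketched as follows. This is a deliberate simplification with hypothetical data; only the one inadmissible pair named in the text is encoded.

```python
# Assumption: only the combination mentioned in the text is checked here.
INADMISSIBLE = {("+O", "-U")}

def check(classes):
    """Return the class names whose (identity, unity) pair is inadmissible."""
    return [name for name, meta in classes.items()
            if (meta.get("identity"), meta.get("unity")) in INADMISSIBLE]

def backbone(classes):
    """Keep only rigid (+R) classes; non-rigid classes are removed."""
    return {name: meta for name, meta in classes.items()
            if meta.get("rigidity") == "+R"}

# Hypothetical two-class taxonomy fragment.
taxonomy = {
    "piece goods": {"rigidity": "+R", "identity": "+I", "unity": "+U"},
    "bulk":        {"rigidity": "~R", "identity": "+O", "unity": "-U"},
}
print(check(taxonomy))            # classes with an inadmissible combination
print(sorted(backbone(taxonomy))) # rigid classes forming the backbone
```

A full OntoClean check would cover further constraint pairs, but the control flow stays the same: validate the meta-property assignments first, then prune.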

Fig. 8. Backbone taxonomy [source: author's illustration]


That figure points out, as an exemplary illustration, the removal of the non-rigid 'sub-entity 1.1' and the non-rigid 'entity 2'. This procedure results in the so-called backbone taxonomy. In the next step, the subsuming structure has to be examined for violations of subsuming restrictions. Subsuming is described with the relation 'is-a' and is visualized by arrows; further relations are described, for instance, with the notation type 'has'. To avoid false distinctions, the arrows are labelled with the relation name. The hierarchies can be described in the following ways:

• *Have*: This relationship type connects an attribute with a concept. Thereby, the attribute is a type of the concept.

• *Att*: This relationship type describes properties of elements. A concept can take on several properties simultaneously; they do not all need to be met at the same time.

• *Is a*: This relationship describes traditional subset relations (subsuming relations).


In the last step, the non-rigid classes and entities are added. Within this last step, the taxonomy is completed.

### **4.2.2 Description of the interaction by using the ontology of 'Process Specification Language'**

In this section, the interacting entities and attributes between the two domains have to be identified by using the created taxonomies. In order to give these interactions a qualitative meaning, (Schlenoff et al., 1999) propose an approach to denote interactions of processes in independent worlds. To this end, they developed the 'Process Specification Language' (PSL). PSL is a neutral, standard language for process specification for the integration of multiple process applications within the product life cycle. The language is versatile in application and uses multiple concepts, classes, and functions to describe complex processes. Through its manifold applications and many years of further development, the language has diversified and expanded. PSL consists of several modules. The fundamental concepts are set in the first module, which is called 'PSL-Core'. This module provides four concepts with corresponding functions. According to (Schlenoff et al., 1999), the aim of this module is to fix axioms that describe simple process connections using a set of semantic relations.
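The flavour of the PSL-Core concepts and of the 'participate-in' and 'exists-at' relations discussed later can be sketched as plain records and predicates. This is a loose, simplified rendering for illustration only (timepoints as integers, all names hypothetical), not the axiomatic PSL specification.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Activity:       # a repeatable pattern of behaviour
    name: str

@dataclass(frozen=True)
class Occurrence:     # one concrete execution of an activity
    activity: Activity
    begin: int        # timepoints modelled as plain integers here
    end: int

@dataclass
class Obj:            # anything that takes part in occurrences
    name: str
    participations: list = field(default_factory=list)

def exists_at(occ, t):
    """'exists-at': the occurrence is relevant at timepoint t."""
    return occ.begin <= t <= occ.end

def participates_in(obj, occ, t):
    """'participate-in': the object takes part in occ at timepoint t."""
    return (occ in obj.participations) and exists_at(occ, t)

recognize = Activity("recognize cargo")
run = Occurrence(recognize, begin=0, end=5)
cargo = Obj("general cargo", participations=[run])
print(participates_in(cargo, run, 3))  # inside the occurrence interval
print(participates_in(cargo, run, 9))  # outside the occurrence interval
```

Separating the repeatable activity from its dated occurrences is the core modelling move that lets PSL relate objects to processes at specific timepoints.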

The description of further and more complex processes is carried out with other modules, the so-called extensions. PSL offers three extensions in total: 'outer core', 'generic activities' and 'schedules'. The module 'PSL outer core' deals with generic and broadly based concepts with regard to their applicability. The module 'generic activities' defines a terminology to describe generic activities and their relations. The module 'schedules' describes the application and allocation of resources to activities under the premise of satisfying the temporary restrictions:

Using Ontologies for Configuring Architectures of Industrial Robotics in Logistic Processes 521




| term | definition |
|---|---|
| PSL | short notation of the ontology 'Process Specification Language' |
| PSL module | group of concepts of the PSL |
| concept | first and highest level of a PSL module |
| class | second level of a PSL module |
| function | third and lowest level of a PSL module |
| activity | process or technical entity or sub-activity that is continuous and relates to the second domain |
| ad-hoc activity | process or technical (sub-)entity, including its attribute, whose existence is not calculable |
| relation | interrelation between an entity couple of the 'process domain' and the 'technology domain'; none of the entities is an ad-hoc activity |
| ad-hoc relation | relation of an entity couple of the process and technology domain; at minimum one activity is an ad-hoc activity |

Table 5. Definition of relevant terms of PSL [source: author's illustration]

The definitions are based on the adaptation of the ontology to the current requirements. Based on these terms, the individual concepts of this approach are presented and adapted to the task at hand. With the creation of the taxonomies, the relations between the taxonomy properties of both domains are identified and described.

This section describes the identification of existing relations and their notation using the vocabulary of the first module, 'PSL Core', of the ontology 'Process Specification Language'. The module contains three concepts. The first concept, 'activity', describes general activities which appear to be predictable and manageable. They do not have to be specified in detail; for instance, 'activities' may be standard processes of a recurring nature. The concept exhibits two different types of functions towards further concepts. The first function, 'is-occurring-at', points to further planned activities ('activity'). The second function, 'occurrence-of', describes the connection to unpredictable and unplanned activities.

The second concept, 'activity occurrence', describes a unique activity that proceeds unforeseen and unplanned. This concept can also exhibit two different functions towards other concepts. The function 'occurrence-of' is analogous to the second function of the first concept and describes the initiation of a further unpredictable activity of the type 'activity occurrence'. The second function, 'participates-in', describes the relationship to a concept of the type 'object': the concept 'activity occurrence' imparts a not further defined significance to the second concept, 'object'.

520 Robotic Systems – Applications, Control and Programming


The third concept, 'object', describes all entities which do not correspond to any of the above concepts. The concept has two functions. The first relation, of the type 'participates-in', describes the concept 'object' receiving a not further defined relevance from a concept 'activity'. The second relation, 'exists-at', describes an existing relevance at a particular point of time.

With these concepts, the entities of the taxonomies of the process and technology domains, including their properties, are classified. Procedural entities and properties can exist which are either calculable or definable; these relate to the concept 'activity'. Unpredictable, indefinable or changing conditions can be described as ad-hoc activities and assigned to the concept 'activity occurrence'. Other logistical or technical objects are assigned to the concept 'object'. For example, the entity 'general cargo' with its property 'cubic' is an activity (code 1.1), and the entity 'stock situation' with the property 'chaotic' belongs to the concept of ad-hoc activities (code 1.2). An example of an object (code 1.3) is a technical process such as the process of recognizing the cargo. The following table summarizes the relevant vocabulary:


| code | concept / relation | definition | modification for robotics-logistics |
|---|---|---|---|
| 1 | PSL Core (module) | | |
| 1.1 | activity | a general, not further defined activity | defined and calculable activity that notes an entity or an attribute of a taxonomy |
| 1.1.1 | is-occurring-at | a primary activity generates a secondary activity at a defined time | the concept 1.1 generates a concept 1.1 |
| 1.1.2 | occurrence-of | the primary concept generates a secondary non-expected activity | the concept 1.1 generates a concept 1.2 |
| 1.2 | activity occurrence | a temporary and specific activity that occurs non-recurring | a non-calculable and changing ad-hoc activity that notes an entity or attribute of a taxonomy |
| 1.2.1 | occurrence-of | the primary activity generates a secondary non-expected activity | the concept 1.2 initiates a new concept 1.2 |
| 1.2.2 | participates-in | a primary activity generates a non-definable relevance for an object | the concept 1.2 generates a non-defined relevance for an object at a specific time |
| 1.3 | object | all entities that are not an activity or activity occurrence | entity or attribute of a taxonomy that is not a concept 1.1 or 1.2 |
| 1.3.1 | participates-in | a primary activity assigns a non-defined relevance to an object at a specific time | the concept 1.1 assigns a relevance to a concept 1.3 |
| 1.3.2 | exists-at | an object exists at a specific time | the concept is relevant at a specific time |

Table 6. Vocabulary PSL module 'Core' [source: author's illustration]
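For illustration, the vocabulary of Table 6 can be rendered as a small data model. The concept kinds, relation names and codes follow the table; the Python structure itself and the example instances are assumptions of this sketch.

```python
from dataclasses import dataclass

# Sketch of the PSL module 'Core' vocabulary (Table 6). Concept kinds:
# activity (1.1), activity occurrence (1.2), object (1.3).

@dataclass(frozen=True)
class Concept:
    name: str   # e.g. 'general cargo'
    kind: str   # 'activity' | 'activity occurrence' | 'object'

@dataclass(frozen=True)
class Relation:
    code: str       # e.g. '1.3.1'
    name: str       # PSL relation name, e.g. 'participates-in'
    source: Concept
    target: Concept

# Examples from the text: 'general cargo' as activity, 'stock
# situation' as ad-hoc activity, 'recognize cargo' as object.
cargo = Concept("general cargo", "activity")
stock = Concept("stock situation", "activity occurrence")
recognize = Concept("recognize cargo", "object")

relations = [
    Relation("1.1.2", "occurrence-of", cargo, stock),       # 1.1 generates a 1.2
    Relation("1.3.1", "participates-in", cargo, recognize), # 1.1 assigns relevance to a 1.3
]

for r in relations:
    print(f"{r.source.name} --{r.name} ({r.code})--> {r.target.name}")
```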

In a first step, the implementation of ontologies for a specific reference process associates the concepts with the properties of the valid entities. The second step identifies and denotes the relations between the concepts. A matrix representation is provided, whose general structure is shown in tab. 7. The columns show the entities and properties of the technology domain; the rows depict those of the process domain. The individual hierarchy steps of the taxonomies are presented as indicated by the hierarchic structure, and the coding of the rows and columns indicates the respective levels of that structure. Additionally, the identified concepts of the respective sub-entities and properties are noted on the lowest structural level. In the cells, the interactions from tab. 6 are noted and distinguished by means of the coding. For example, the procedural sub-entity 1.1.1 affects the technical component 1.1.1 through the relationship 'object participates-in' (code 1.3.1).




| code | process taxonomy | PSL concept | T.1 technique 1 | T.1.1 system technique 1.1 | T.1.1.1 component 1.1.1 | … | T.n technique n | T.n.m system technique n.m | T.n.m.o component n.m.o |
|---|---|---|---|---|---|---|---|---|---|
| | | concept | | | concept 1.z | … | | | concept 1.z |
| P.1 | class 1 | | | | | … | | | |
| P.1.1 | entity 1.1 | | | | | … | | | |
| P.1.1.1 | subentity 1.1.1 | concept 1.x | | | 1.3.1 | … | | | 1.x.y |
| … | … | … | … | … | … | … | … | … | … |
| P.n | class n | | | | | … | | | |
| P.n.m | entity n.m | | | | | … | | | |
| P.n.m.o | subentity n.m.o | concept 1.x | | | 1.x.y | … | | | 1.x.y |

Table 7. General matrix representation of the process-technology relations in accordance with PSL module 'core' [source: author's illustration]

The vocabulary allows the description of the relational structure for a dedicated reference process, which describes the relations among the procedural entities and the technical components.
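The matrix of Table 7 can be sketched as a sparse mapping from (process code, technology code) pairs to PSL relation codes. The single entry below is the example given in the text; the helper functions themselves are hypothetical.

```python
# Sparse sketch of the process-technology relation matrix (Table 7):
# (process code, technology code) -> PSL relation code.

matrix = {}

def relate(process_code, tech_code, relation_code):
    """Note one interaction between a procedural and a technical element."""
    matrix[(process_code, tech_code)] = relation_code

def partners(process_code):
    """All technical elements a procedural element interacts with."""
    return {t: rel for (p, t), rel in matrix.items() if p == process_code}

# Example from the text: sub-entity 1.1.1 affects component 1.1.1
# through 'object participates-in' (code 1.3.1).
relate("P.1.1.1", "T.1.1.1", "1.3.1")

print(partners("P.1.1.1"))  # → {'T.1.1.1': '1.3.1'}
```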

## **4.3 Industrial application: Depalletizing plastic boxes with a robotic system**

This section presents the robot-based automation of a simple industrial application using the presented ontological framework. The example focuses on the interaction between 'piece goods' of the 'process domain' and 'gripper' of the 'technology domain'. Online book shops package their goods in plastic boxes, and logistics providers handle these boxes for delivery to the customer. The boxes are sent on pallets in swap bodies by truck. The logistics provider unloads the trucks and brings the pallets into its distribution center, which operates with a high degree of automation. Therefore, the boxes have to be depalletized and placed onto the conveyor system. In general, this separation is done manually. A robotic system was configured and integrated by using the ontological framework to automate this reference process. The following figure displays the process with the implemented robotic system:


[Fig. 9 labels the components: plastic boxes on pallet, upstream conveyor, robot, gripper, downstream conveyor, robot control & communication]

Fig. 9. Industrial application of a robotic system for depalletizing plastic boxes: result of the ontological configuration [source: author's illustration]

The illustration points out that the configuration of the technical system depends on the parameters of the process. The upstream conveyor supplies box pallets and includes a buffer function. The downstream conveyor conveys the single boxes into the distribution cycle. The task of the robot-based automation system focuses on handling single or multiple boxes and laying them down onto the roller conveyor. Using the presented framework for configuring the robotics architecture, the first step of generating the procedural and technical taxonomies has to be executed. The class structure with its entities and attributes of the 'process domain' can be assigned with attributes as follows:


| entity | meta-property | sub-entity | attribute | meta-property |
|---|---|---|---|---|
| geometry | +R, +I, -U, +D | form | cubic | +R, -I, -U, +D |
| | | dimension (min, max) | length = 55 [cm], width = 40 [cm], height = 30 [cm] | +R, +I, -U, +D |
| | | volume | V = 27 [l] | -R, -I, -U, -D |
| | | surface | closed | +R, -I, -U, -D |
| material | +R, -I, -U, -D | art | plastic | +R, +I, +U, +D |
| | | stability | high | -R, -I, -U, -D |
| packaging | +R, +I, -U, +D | strapping | 1 | -R, +I, -U, -D |
| | | type of packaging | single | -R, +I, +U, +D |
| mass | +R, +I, -U, +D | weight | 28 [kg] | +R, +I, -U, +D |

Table 8. Entities and attributes of the class structure 'piece goods' of the 'process domain' [source: author's illustration]



The table displays the various entities with their attributes regarding the class structure 'piece goods'. The meta-properties define the taxonomy of this class, and the hierarchical structure is defined so that, for instance, incorrect assignments of sub-entities are avoided. Additionally, the table assigns relevant attributes of the reference process to the entities. The meta-properties show that the entities 'geometry' and 'mass' are particularly important because they are essential (+R); furthermore, they give identity to their class (+I). Finally, the corresponding taxonomy is presented in figure 10:

Fig. 10. Backbone taxonomy of the class structure "piece goods" of the "Process domain" [source: author's illustration]

It displays the corrected taxonomy of the exemplary class structure of the industrial application. The class "piece goods" consists of four entities (2nd level) and seven sub-entities. These are related to attributes. The relations 'subsumption' (is) and 'attribute' (att) describe the connections to the entities. A subsumption is given if the attribute is part of the sub-entity.
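The reasoning about important entities can be reproduced by querying the meta-property assignments of Table 8. The selection rule below (essential +R and identity-giving +I) follows the argument in the text; the snippet itself is a simplified, illustrative sketch.

```python
# Query the attributed entities of the class 'piece goods' (Table 8)
# for those that are essential (+R) and give identity (+I).

entities = {
    "geometry": ["+R", "+I", "-U", "+D"],
    "material": ["+R", "-I", "-U", "-D"],
    "packaging": ["+R", "+I", "-U", "+D"],
    "mass": ["+R", "+I", "-U", "+D"],
}

important = [name for name, props in entities.items()
             if "+R" in props and "+I" in props]
print(important)  # → ['geometry', 'packaging', 'mass']
```

Under this simple rule, 'packaging' qualifies as well; distinguishing it further would need additional criteria beyond the two meta-properties.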

The following step focuses on the interactions between the "process domain" and the "technology domain"; therefore, the relations are noted. Table 9 exhibits the relationship to the system technology "robotics", which unites the entities "kinematics", "geometry", "load", "accuracy" and "installation", and displays the relation codes between the two domains. There are relations of the types "ad-hoc activity" and "activity": for instance, the strapping has an influence on the accuracy of the robot, and the mass defines the type of the robot. The following table summarizes these relations:


| code | process taxonomy | PSL 'Core' relation codes towards T.1 'robotics' (T.1.1 kinematics, T.1.2 geometry, T.1.3 load, T.1.4 accuracy, T.1.5 installation) |
|---|---|---|
| P.1 | piece goods | |
| P.1.1 | geometry | 1.1.2, 1.1.1 |
| P.1.2 | material | 1.1.1, 1.2.1, 1.1.1 |
| P.1.3 | packaging | 1.1.1 |
| P.1.4 | mass | 1.1.1 |

Table 9. Entities and attributes of the class structure 'piece goods' of the 'process domain' [source: author's illustration]

This example clarifies the potential of the ontological framework. The framework offers general and systematic knowledge for configuring the best technical components and modules for the specific application across the system technologies "robotics", "gripping technology", "pattern recognition" and "robot control and communication".
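As a hypothetical sketch of how a configurator could consume the noted relations, the snippet below flags entities connected to 'robotics' through ad-hoc relations (codes 1.2.x), which typically demand additional sensing effort. The codes follow Table 9; the selection logic is an assumption of this sketch.

```python
# Relation codes of the 'piece goods' entities towards T.1 'robotics'
# (Table 9). Ad-hoc relations carry codes starting with '1.2'.

relations = {
    ("P.1.1", "geometry"): ["1.1.2", "1.1.1"],
    ("P.1.2", "material"): ["1.1.1", "1.2.1", "1.1.1"],
    ("P.1.3", "packaging"): ["1.1.1"],
    ("P.1.4", "mass"): ["1.1.1"],
}

ad_hoc = [(code, name) for (code, name), rels in relations.items()
          if any(r.startswith("1.2") for r in rels)]
print(ad_hoc)  # → [('P.1.2', 'material')]
```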

## **5. Conclusion**



This chapter presents an ontological approach to standardize robotic systems in logistic processes. Ontologies allow the systematic depiction of technical systems in their procedural environment. Through their high level of abstraction, this chapter describes the conceptualization and elaboration of an ontological vocabulary for configuring process-customized robotic architectures. The vocabulary allows the description of the relational structure for a dedicated reference process: it describes the relations among the procedural entities and the technical components. This ontology framework is the basis for the formation of modules and the configuration of the modules in robotics architectures.

The main goal is a descriptive approach to the relationship between process and technology. Here, representations of conceptual ontologies were consulted. Due to the conceptual approach, the notation is on an abstract level, so that automated inference through formal ontologies is realistic. The representation of a dedicated solution space of possible technical configuration states of robotic system architectures is feasible, too.

In further research, the development of formal ontologies within this scope will reduce the level of abstraction and enable the mechanical and automatic generation of ontologies.

In this way, interpretation and manipulation opportunities will be reduced and the interconnections between process and technology detailed. In this connection, formal ontologies allow the development of so-called architecture configurators. These are based on the provided procedural and technical information and the possible ontological interrelationships. With this information, an automated development process, including economic criteria, prioritizes configurations of robotic system architectures for dedicated reference processes.



This approach can also serve as an appreciation of the nature of a 'Rapid Configuration Robotics' approach, which can digitally review prototyping activities such as technical feasibility and economic usefulness. The requirement for this type of IT-based configuration planning is shown by the RoboScan10 survey:

Fig. 11. Study RoboScan10: Answers about necessity for an IT-based system that plans the configuration of robotic systems [source: (Burwinkel, 2011)]

Fig. 11 shows the field of opinions in the context of RoboScan10 about the necessity for IT-based configuration planning of robotic systems. The question asked was: 'From a planning perspective, could you envisage using an IT-based planning tool which enables the configuration of both single robot systems and multi robot systems?' 75% of all respondents could envisage the application of such tools.

## **6. References**

Arnold, D. (2006). *Intralogistik: Potentiale, Perspektiven, Prognosen,* Springer, ISBN 978-3-540-29657-7, Berlin, Germany

Bateman, J. (1993). On the relationship between ontology construction and natural language: a socio-semiotic view. *International Journal of Human-Computer Studies,* Vol.43, No.5/6, pp.929-944, ISSN 1071-5819

Baumgarten, H. (2008). *Das Beste der Logistik: Innovationen, Strategien, Umsetzungen,* Springer, ISBN 978-3-540-78405-0, Berlin, Germany

Bunge, M. (1977). *Treatise on basic philosophy,* Reidel, ISBN 9027728399, Dordrecht, Netherlands

Burwinkel, M. & Pfeffermann, N. (2010). Die Zukunft der Robotik-Logistik liegt in der Modularisierung: Studienergebnisse "RoboScan'10". *Logistik für Unternehmen,* Vol.24, No.10, pp.21-23, ISSN 0930-7834

Chen, P. (1976). The entity-relationship model - toward a unified view of data. *ACM Trans. Database Syst,* Vol.1, No.4, pp.9–36

Elger, J. & Haußener, C. (2009). Entwicklungen in der Automatisierungstechnik. In: *Internet der Dinge in der Intralogistik,* W. Günthner (Ed.), pp.23–27, Springer, ISBN 978-3-642-04895-1, Berlin, Germany

Erdmann, M. (2001). *Ontologien zur konzeptuellen Modellierung der Semantik von XML,* University Karlsruhe, ISBN 3-8311-2635-6, Karlsruhe, Germany

EUROP. (2009). Robotics Visions to 2020 And Beyond: The Strategic Research Agenda For Robotics in Europe, In: *European Robotics Technology Platform,* 20.06.2011, available from: www.robotics-platform.eu

Figgener, O. & Hompel, M. (2007). Beitrag zur Prozessstandardisierung in der Intralogistik. *Logistics Journal,* pp.1–12, ISSN 1860-5923

Fritsch, D. & Wöltje, K. (2006). Roboter in der Intralogistik: Von der Speziallösung zum wirtschaftlichen Standardprodukt. *wt Werkstattstechnik Online,* Vol.96, No.9, pp.623–630, ISSN 1436-4980

Gangemi, A., Guarino, N., Masolo, C., Oltramari, A. & Schneider, L. (2002). Sweetening Ontologies with DOLCE, In: *Knowledge Engineering and Knowledge Management: Ontologies and the Semantic Web,* Gómez-Pérez, A. & Benjamins, V. (Ed.), pp.223–233, Springer, ISBN 978-3-540-44268-4, Berlin, Germany

Gómez-Pérez, A., Fernández-López, M. & Corcho, O. (2004). *Ontological engineering: With examples from the areas of knowledge management, e-commerce and the semantic web,* Springer, ISBN 978-1-852-33551-9, London, England

Göpfert, I. (2009). *Logistik der Zukunft - Logistics for the future,* Gabler, ISBN 978-3-834-91082-0, Wiesbaden, Germany

Gruber, T. (1993). A translation approach to portable ontology specifications. *Knowledge Acquisition,* Vol.5, No.2, pp.199–220, ISSN 1042-8143

Gruninger, M. (2002). Ontology - Applications and Design. *Communications of the ACM,* Vol.45, No.2, pp.39–41, ISSN 0001-0782

Guarino, N. (1998). *Formal ontology in information systems: Proceedings of the first international conference (FOIS '98), June 6-8, Trento, Italy,* FOIS, ISBN 905-1-993-994, Amsterdam, Netherlands

Günthner, W. & Hompel, M. (2009). *Internet der Dinge in der Intralogistik,* Springer, ISBN 978-3-642-04895-1, Berlin, Germany

Herb, M. (2006). Ontology Engineering mit OntoClean. In: *IPD University Karlsruhe,* 10.06.2011, available from: http://www.ipd.uni-karlsruhe.de/~oosem/S2D2/material/1-Herb.pdf

Kahraman, C., Cevik, S., Ates, N.Y. & Gulbay, M. (2007). Fuzzy multi-criteria evaluation of industrial robotic systems. *Computers & Industrial Engineering,* Vol.52, No.4, pp.414-433, ISSN 0360-8352

Kastens, U. & Kleine Büning, H. (2008). *Modellierung: Grundlagen und formale Methoden,* Hanser, ISBN 978-3-446-41537-9, München, Germany

Kiencke, U. (1997). *Ereignisdiskrete Systeme: Modellierung und Steuerung verteilter Systeme,* Oldenbourg, ISBN 348-6-241-508, München, Germany

Kutz, O., Lutz, C., Wolter, F. & Zakharyaschev, M. (2004). ε-connections of abstract description systems. *Artificial Intelligence,* Vol.156, No.1, pp.1–73, ISSN 0004-3702

Neches, R., Fikes, R., Finin, T., Gruber, T., Patil, R., Senator, T. & Swartout, W. (1991). Enabling technology for knowledge sharing. *AI Magazine,* Vol.12, No.3, pp.36–56

Scheid, W. (2010). Perspektiven zur Automatisierung in der Logistik: Teil 1 - Ansätze und Umfeld. *Hebezeuge Fördermittel - Fachzeitschrift für technische Logistik,* Vol.50, No.9, pp.406–409, ISSN 0017-9442


526 Robotic Systems – Applications, Control and Programming

reference processes. This approach can also serve as an appreciation of the nature of a 'Rapid Configuration Robotics' approach, which can digitally review prototyping activities such as technical feasibility and economic usefulness. The requirement for this type of IT-

Fig. 11. Study RoboScan10: Answers about necessity for an IT-based system that plans the

Fig. 11 shows the field of opinions in the context of RoboScan10 about the necessity for ITbased configuration planning of robotic systems. The question asked was: 'from a planning perspective: Could you envisage using an IT-based planning tool, which enables the configuration of both single robot systems and multi robot systems? 75% of all respondents

Arnold, D. (2006). *Intralogistik : Potentiale, Perspektiven, Prognosen,* Springer, ISBN 978-3-540-

Bateman, J. (1993): On the relationship between ontology construction and natural language:

Baumgarten, H. (2008). *Das Beste der Logistik : Innovationen, Strategien, Umsetzungen,* 

Bunge, M. (1977). *Treatise on basic philosophy,* Reidel, ISBN 9027728399, Dordrecht,

Burwinkel, M. & Pfeffermann, N. (2010). Die Zukunft der Robotik-Logistik liegt in der

Chen, P. (1976). The entity-relationship model - toward a unified view of data. *ACM Trans.* 

Elger, J., & Haußener, C. (2009). Entwicklungen in der Automatisierungstechnik. In: *Internet* 

a socio-semiotic view. *International Journal of Human-Computer Studie,* Vol.43,

Modularisierung : Studienergebnisse "RoboScan'10". *Logistik für Unternehmen,* 

*der Dinge in der Intralogistik,* W. Günthner (Ed.), 23–27, Springer, ISBN 978-3-642-

based configuration planning is shown by the RoboScan10 survey:

configuration of robotic systems [source: (Burwinkel, 2011)]

could envisage the application of such tools.

29657-7, Berlin, Germany

Netherlands

No.5/6, pp.929-944, ISSN 1071-5819

Vol.24, No.10, pp.21-23, ISSN 0930-7834

*Database Syst*, Vol. 1, No. 4, pp. 9–36

04895-1, Berlin, Germany

Springer, ISBN 978-3-540-78405-0, Berlin, Germany

**25** 

## **Programming of Intelligent Service Robots with the Process Model "FRIEND::Process" and Configurable Task-Knowledge**

Oliver Prenzel<sup>1</sup>, Uwe Lange<sup>2</sup>, Henning Kampe<sup>2</sup>, Christian Martens<sup>1</sup> and Axel Gräser<sup>2</sup> *<sup>1</sup>Rheinmetall Defence Electronics, <sup>2</sup>University of Bremen, Germany* 

## **1. Introduction**

Scheid, W. (2010). Perspektiven zur Automatisierung in der Logistik : Teil 2 - Praktische Umsetzung. *Hebezeuge Fördermittel - Fachzeitschrift für technische Logistik,* Vol.50, No.12, pp.663–664

Schlenoff, C., Gruninger, M., Tissot, F., Valois, J., Lubell, J. & Lee, J. (1999). The Process Specification Language (PSL) Overview and Version 1.0 Specification, In: *www.mel.nist.gov/psl/,* 12.06.2011, available from: http://www.mel.nist.gov/msidlibrary/doc/nistir6459

Seidlmeier, H. (2002). *Prozessmodellierung mit ARIS® : Eine beispielorientierte Einführung für Studium und Praxis,* Vieweg, ISBN 352-8-058-048, Braunschweig, Germany

Siegert, H. (1996). *Robotik: Programmierung intelligenter Roboter,* Springer, ISBN 3540606653, Berlin, Germany

Staab, S. (2002). Wissensmanagement mit Ontologien und Metadaten. *Informatik-Spektrum,* Vol.25, No.3, pp.194–209, ISSN 0170-6012

Staud, J. (2006). *Geschäftsprozessanalyse : Ereignisgesteuerte Prozessketten und objektorientierte Geschäftsprozessmodellierung für betriebswirtschaftliche Standardsoftware,* ISBN 978-3-540-24510-0, Berlin, Germany

Steinmann, F. & Nejdl, W. (1999). Modellierung und Ontologie, In: Institut für Rechnergestützte Wissensverarbeitung, 25.05.2011, available from: www.kbs.uni-hannover.de/Arbeiten/Publikationen/1999/M%26O.pdf

Straube, F. & Rösch, F. (2008). *Logistik im produzierenden Gewerbe,* TU Berlin, ISBN 978-3-000-24165-9, Berlin, Germany

Stuckenschmidt, H. (2009). *Ontologien : Konzepte, Technologien und Anwendungen,* Springer, ISBN 978-3-540-79330-4, Berlin, Germany

Studer, R., Benjamins, V. & Fensel, D. (1998). Knowledge engineering: Principles and methods. *Data & Knowledge Engineering,* Vol.25, No.1-2, pp.161–197, ISSN 0169-023X

Suppa, M. & Hofschulte, J. (2010). Industrial Robotics. *at – Automatisierungstechnik,* Vol.58, No.10, pp.482–483, ISSN 0017-9442

Swartout, W. & Tate, A. (1999). Ontologies. *IEEE Intelligent Systems and their Applications,* Vol.14, No.1, pp.18–19, ISSN 1094-7167

Tabeling, P. (2006). *Softwaresysteme und ihre Modellierung : Grundlagen, Methoden und Techniken,* Springer, ISBN 978-3-540-25828-5, Berlin, Germany

Westkämper, E. & Verl, A. (2009). *Roboter in der Intralogistik : Aktuelle Trends - Neue Technologien - Moderne Anwendungen,* Verein zur Förderung produktionstechnischer Forschung, Stuttgart, Germany

Wiedemann, G. (2008). Ontologien und Ontology Engineering, In: Seminar 'Semantic Web Technologien', 12.06.2011, available from: http://www.informatik.uni-leipzig.de/~loebe/teaching/2008ss-seweb/08v-ontengineering-gwiedemann.pdf

In Alex Proyas's science fiction movie "I, Robot" (2004) a detective suspects a robot of murder. This robot is a representative of a new generation of personal assistants that help and entertain people during daily life activities. Contrary to public opinion, the detective claims that the robot is able to follow its own will and is not bound by Isaac Asimov's three main rules of robotics (Asimov, 1991). In the end this assumption turns out to be true.

Even though the technological part of this story is still far from realization, the idea of a personal robotic assistant remains in demand. Experts predicted that robotic solutions would be ready to break through in domestic and other non-industrial domains within a few years (Engelberger, 1989). But up to now, only rather simple robotic assistants like lawn mowers and vacuum cleaners are available on the market. As stated in (Gräfe & Bischoff, 2003), all these systems have in common that they show only traces of intelligence and are specialists, designed mostly for one particular task. Robots able to solve more complex tasks have not yet left prototypical status. This is due to the large number of scientific and technical challenges that have to be coped with in the domain of robots acting and interacting in human environments (Kemp et al., 2007).

The focus of this paper is to describe a tool-based process model, called the "FRIEND::Process"<sup>1</sup>, which supports the development of intelligent robots in the domain of personal assistants. The paper concentrates on the interaction and close relation between the FRIEND::Process and configurable task-knowledge, the so-called process-structures. Process-structures are embedded in different layers of abstraction within the layered control architecture MASSiVE<sup>2</sup> (Martens et al., 2007). Even though the usage of layered control architectures for service robots is not a novel idea and has been proposed earlier (Schlegel &

<sup>1</sup> The name FRIEND::Process is related to the FRIEND projects (Martens et al., 2007). It has been developed within the scope of these projects, but is also applicable to other service robots.

<sup>2</sup> MASSiVE – Multilayer Control Architecture for Semi-Autonomous Service Robots with Verified Task Execution

Woerz, 1999; Schreckenghost et al., 1998; Simmons & Apfelbaum, 1998), MASSiVE is tailored for process-structures and thus is the vehicle for the realization of verified intelligent task execution for service robots, as is shown in the following. The advantages of using process-structures shall be anticipated here:

- **Determinism**: Process-structures represent the complete finite sequence of actions that have to be carried out during the execution of a task. Due to the possibility of a bijective transformation from process-structures to Petri-Nets, a-priori verification with respect to deadlocks, reachability and liveness becomes possible. Thus, the task planner and executor, as part of the layered architecture, operate deterministically when using verified task-knowledge.
- **Real-time capability**: Additionally, the complexity of the task planning process satisfies real-time execution requirements, because this process is reduced to a graph search problem within the state-graph of the associated Petri-Net.
- **Fault-Tolerance**: Erroneous execution results are explicitly modeled within process-structures. Additionally, redundant behavior is programmatically foreseen. If an alternative robotic operation, which shall cope with the unexpected result, is not available, the user is included as part of a semi-autonomous task execution process.

To be able to provide a user-friendly configuration of process-structures and to guarantee consistency throughout all abstraction levels of task-knowledge, a tool-based process model – the FRIEND::Process – has been developed. The process model, on the one hand, guides the **development and programming** of intelligent behavior for service robots with process-structures. On the other hand, process-structures can be seen as a **process model for the service robot** itself, which guides the task execution of the robot during runtime. The unique feature of the FRIEND::Process in comparison to other frameworks (Gostai, 2011; Microsoft, 2011; Quigley et al., 2009) and the above mentioned control architectures is to rely completely on configurable process-structures and thus on determinism, real-time capability and fault tolerance.

The FRIEND::Process consists of the following development steps:

- **Analysis of Scenario and Task Sequence:** A scenario is split up into a sequence of tasks.
- **Configuration of Object Templates and Abstract Process-Structures:** The task-participating objects are specified as Object Templates, and pictographic process-structures on the symbolic (abstract) level are configured and verified.
- **Configuration of Elementary Process-Structures:** Process-structures on the level of system resources and sub-symbolic (geometric) information are configured and verified with the help of function block networks.
- **Configuration and Testing of Reactive Process-Structures:** Process-structures on the level of algorithms and closed loop control, operating sensors and actuators, are configured and tested, also with configurable function blocks.
- **Task Testing:** Task planning and execution is applied on all levels of process-structures and a complete and complex task execution is tested.

In the following Section 2, the motivation for the introduction of process-structures is explained in more detail by discussing the complexity of task planning for service robots with the help of an exemplary scenario. The description of the FRIEND::Process development steps is the subject of Section 3. Throughout this description, exemplary process-structures of the sample scenario of Section 2 are introduced for each development step. Finally, Section 4 summarizes and concludes the description of the FRIEND::Process for programming intelligent service robots.

## **2. Task planning on basis of process-structures**

In this section, the complexity of classical task planning approaches is discussed first, before the introduction of process-structures is motivated. The discussion is carried out with the help of task execution examples from the field of rehabilitation robotics and the rehabilitation robot FRIEND III (IAT, 2009; Martens et al., 2007).

### **2.1 The complexity of classical task-planning approaches**

With respect to one exemplary task – a service robot is supporting the preparation and the eating of a meal by a disabled person – the complexity of robotic task execution shall be illustrated. For this purpose the figures Fig. 1 to Fig. 3 are introduced. Fig. 1 shows the rehabilitation robot FRIEND III which is used as exemplary target system. In Fig. 2 snapshots of the task sequence "Meal preparation and eating assistance" are depicted. Finally, Fig. 3 shows the decomposition of this task sequence according to the principles to be presented in detail in this paper.

FRIEND III is a general purpose semi-autonomous rehabilitation robot suitable for the implementation of a wide range of support tasks. As depicted, FRIEND III consists of an electrical wheelchair which is equipped with several sensors and actuators: A stereo camera system mounted on a pan-tilt-head, force torque sensor, robotic arm and gripper with force control. FRIEND III has been developed by an interdisciplinary team of engineers, therapists and designers and has been tested with disabled users within the AMaRob project (IAT, 2009).

Fig. 1. FRIEND III rehabilitation robot
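The sensors and actuators listed above are exactly the system resources that the later process-structure levels have to allocate. As a sketch, such a resource declaration could look as follows (class and identifier names are illustrative, not taken from the FRIEND III software):

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str  # "sensor" or "actuator"

@dataclass
class Platform:
    name: str  # base platform: an electrical wheelchair in the case of FRIEND III
    resources: list[Resource] = field(default_factory=list)

# FRIEND III components as described in the text.
friend3 = Platform("FRIEND III", [
    Resource("stereo camera on pan-tilt head", "sensor"),
    Resource("force-torque sensor", "sensor"),
    Resource("robotic arm", "actuator"),
    Resource("gripper with force control", "actuator"),
])

actuators = [r.name for r in friend3.resources if r.kind == "actuator"]
print(actuators)  # → ['robotic arm', 'gripper with force control']
```

A planner that checks for resource conflicts, as described later, would reason over declarations of this kind.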

To perform "meal preparation and eating assistance", the robot system has to execute the following actions:


- Locate the refrigerator, open the refrigerator door, locate the meal inside the refrigerator, grasp and retrieve the meal from the refrigerator, close the refrigerator door
- Open the microwave-oven, insert the meal, close the oven, start the heating process
- Open the microwave-oven door again, grasp and retrieve the meal, close the microwave-oven door
- Place the meal in front of the user, take away the lid
- In a cycle, take food with the spoon and serve it near the user's mouth, finally put the spoon back to the meal-tray
- Clear the wheelchair tray

Fig. 2. Task sequence for meal preparation and eating assistance

Fig. 3. Decomposition of a scenario on four abstraction levels, illustrated with the sample scenario "Meal preparation and eating assistance"

As shown in Fig. 3, the overall scenario is decomposed into tasks, abstract operators, elementary operators and reactive operators according to the layered control architecture MASSiVE. Abstract process-structures (PSA<sup>3</sup>) model behavior on the task planning level and elementary process-structures (PSE) model behavior on the system planning level. The reactive process-structures (PSR) define reactive operations on the executable algorithmic level. From the viewpoint of task planning, the "meal preparation and eating assistance" scenario is split up into 6 tasks, 19 abstract operators and 43 elementary task planning operators. Additionally, a large set of reactive operators is required within the execution layer.

<sup>3</sup> Find all abbreviations in the glossary at the end of this paper.

In typical human environments, it is impossible to predefine a static sequence of operators beforehand. Many dynamic aspects resulting from dynamic environmental changes have to be considered, caused e. g. by changing lighting conditions, arbitrarily placed and filled objects, changing locations of objects and the robotic platform, various obstacles, and many more. Consequently, a strategy to plan a sequence of actions that fulfills a certain task is mandatory.

Many task planners are based upon deliberative approaches from classical artificial intelligence. Typically, the robotic system models the world with the help of symbolic facts (e. g. first-order predicate logic (Russel & Norvig, 2003)), where each node of a graph represents a state (snapshot) of the world. The planner has to find a sequence of operations which transforms a given initial state into a desired target state. In the worst case this leads to NP-complete problems, as classical search algorithms have exponential complexity (Russel & Norvig, 2003). If we consider breadth-first search as a simple example, a calculation time of hours results at search depth 8, and with a depth of 14, hundreds of years are required for exhaustive search (assuming a branching factor of 10 and a calculation rate of 10,000 nodes/s). The search depth is related to the number of operators required for a certain task, and the branching factor results from the number of operators applicable in one node. Compared to the number of required and available operations shown in Fig. 3 it becomes obvious that only trivial problems can be solved on this basis. Certainly, the mean search time can be improved over breadth-first search, e. g. with heuristic approaches like A\*, with hierarchical planning, with search in the space of plans, or with successive reduction of abstraction (Russel & Norvig, 2003; Weld, 1999), but in the worst case the planning complexity mentioned above has to be faced. Even though the improvements of deliberative task planners are notable, it is still questionable whether they are efficient (real-time capable) and robust (deterministic and fault-tolerant) enough for application in real-world domains (Cao & Sanderson, 1998; Dario et al., 2004; Russel & Norvig, 2003).
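The figures quoted above can be checked with a few lines of arithmetic; the sketch below simply evaluates b^d / rate under the stated assumptions (the function name is illustrative):

```python
# Back-of-the-envelope check of the breadth-first search figures quoted above:
# exhaustive search expands roughly b**d nodes (branching factor b, depth d).

def search_time_seconds(branching: int, depth: int, nodes_per_second: float) -> float:
    """Time to exhaustively expand a uniform search tree of the given depth."""
    return branching ** depth / nodes_per_second

b, rate = 10, 10_000  # branching factor 10, 10,000 nodes/s, as assumed in the text

hours = search_time_seconds(b, 8, rate) / 3600
years = search_time_seconds(b, 14, rate) / (3600 * 24 * 365)

print(f"depth 8:  {hours:.1f} hours")   # ≈ 2.8 hours
print(f"depth 14: {years:.0f} years")   # ≈ 317 years
```

This reproduces the "hours at depth 8" and "hundreds of years at depth 14" orders of magnitude claimed in the text.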

### **2.2 Process-structures as alternative to classical planning approaches**

An alternative to deliberative systems is assembly planning. Cao and Sanderson proposed such an approach for the application to service robotics (Cao & Sanderson, 1998). Based on this idea, Martens developed a software-technical framework (Martens, 2003) that operates on pre-structured task-knowledge, called *process-structures*. Table 1 summarizes the concept of process-structures and the distinction of task level, system level and algorithmic level.

Fig. 4 shows an example of an abstract process-structure that models the fetching of a cup from a container. The object constellations (OC) model the physical contact situation of the involved objects box (B), container (C), gripper (G) and table (T). The object constellations are connected via composed operators (COPs). These are in most cases (i. e. where this is physically meaningful) bi-directional operators. To be able to perform task planning based on an abstract process-structure, a set of OCs defines an initial situation and another set of OCs defines the target situation. Thus, task planning on abstract level means to find a sequence of COPs from initial to target situation. The initial situation is usually dynamically determined at runtime with the help of an initial monitoring procedure (Prenzel, 2005). The target situation is pre-determined for a certain PSA.

A process-structure contains a context-related subset of task-knowledge. The finite size of a process-structure makes planning in real-time with short time intervals as well as a-priori verification possible. The logical correctness of a structure is checked against a set of rules. A positive result of this check guarantees that no system resource conflicts exist. It also guarantees the correct control and data flow. Altogether, the concept of process-structures is the basis for a robust system runtime behavior. Despite pre-structuring, the process-structures are still flexible enough to adapt to diverse objects, so that their re-usability in different scenarios is achieved. Technical details of process-structures beyond this summarized concept description can be found in (Martens et al., 2007).

| Level | Defines | Models | Is configured by |
|---|---|---|---|
| **PSA** (task level) | What happens | e. g. the fetching of an object | Non-technical personnel or the user |
| **PSE** (system level) | How something happens from the system perspective | The usage of system resources and the control and data flow | System programmer |
| **PSR** (algorithmic level) | How something happens from the perspective of reactive algorithms | The combined usage of hardware sensors and actuators | System programmer |

Table 1. Summarized concept of process-structures

Fig. 4. Schematic illustration of an abstract process-structure (PSA) which models the fetching of a cup from a container-like place, e. g. a fridge or a cupboard

The applicability of process-structures for the programming of service robots has been shown in (Martens, 2003) with the help of several representative rehabilitation robotic scenarios. As anticipated in the introduction, this approach has been extended during the AMaRob project (2006 – 2009) and within (Prenzel, 2009) to embed process-structure-based programming into a process model – the FRIEND::Process. From task analysis to final testing of implemented system capabilities, the FRIEND::Process guides through the complete development cycle of a service robot based on a closed chain of user-friendly configuration tools. Enhancements of the FRIEND::Process are a matter of ongoing developments.

## **3. The FRIEND::Process**

Process models structure complex processes in manifold application areas. With respect to system- and software-engineering, a process model shall organize the steps of development, the tools to be used and finally the artifacts to be produced throughout the different development stages. The overall scheme of the FRIEND::Process is depicted in Fig. 5. Central elements of the process, and consequently its specialty in comparison to other process models, are the process-structures. Within the development steps, the building blocks of process-structures are decomposed as shown in Table 2. In the following sections the five development steps of the FRIEND::Process are discussed in detail. Thus, the contents of Table 2, i. e. the composition of process-structures and the decomposition on the next level as well as the abbreviations, will be explained. Also, the application of the FRIEND::Process to the development of the sample task "meal preparation and eating assistance" is shown in each step.

Fig. 5. Scheme of the FRIEND::Process with five development steps and the respective process-structure levels as well as the involved tools for configuration, planning and execution

| Process-Structure | Decomposition | Building Blocks |
|---|---|---|
| Scenario | Task Sequence | Tasks |
| Task | PSA | System, Object Templates (OTs), Object Constellations (OCs), Facts, Composed Operators (COPs) |
| COP | PSE | System, Object Templates (OTs), Facts, Skills |
| Skill | PSR | System, Object Templates (OTs), Reactive Blocks |

Table 2. Decomposition and building blocks of process-structures

Fig. 5. Scheme of the FRIEND::Process with five development steps and the respective process-structure levels as well as the involved tools for configuration, planning and execution


Table 2. Decomposition and building blocks of process-structures


## **3.1 FRIEND::Process step 1: Analysis of scenario and task sequence**

Development according to the FRIEND::Process starts with the "Scenario Analysis" as step 1. Unlike the subsequent steps, this step is not (yet) tool-supported. The scenario analysis splits up a complex scenario like "meal preparation and eating assistance" into a sequence of re-usable tasks. Also, the objects that are in the focus of a certain scenario are collected in a structured way.

## **3.1.1 Description of the process step**

The development step 1 is dedicated to a first analysis of the desired task execution scenario. As shown in Fig. 6 a sequence of re-usable tasks is specified. Besides the strictly sequential concatenation of tasks, cyclic repetitions are also possible, as e. g. required for the eating assistance scenario introduced at the beginning of the paper.

Fig. 6. A complex task sequence consists of several tasks

The FRIEND::Process defines criteria for task splitting:

• **Modularity, low complexity and re-usability:** One task focuses on a set of objects. This set shall be kept as small as possible to limit the task's complexity and to ensure re-usability of a task. It shall be possible to use the tasks independently, but also to concatenate them to more complex action sequences.

• **The typical physical location of the objects:** If movement of the robotic platform is required, this is a clear indicator to switch the task context, e. g. when moving from fridge to microwave oven in the meal preparation scenario. After moving the platform, relative locations between platform and objects have to be re-assessed.

Currently, process step 1 is not yet supported by a dedicated tool. Therefore, to still achieve a certain level of formality, the results of scenario analysis are collected in a UML use case diagram as seen in Fig. 7. For each task a use case with a verbal task description is specified. This includes the objects involved in the task, the so-called task participating objects (TPO).

Fig. 7. Use case diagram with tasks (use cases) of the sample scenario. For each task, a detailed description as well as the set of task participating objects (TPO) is specified
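The outcome of step 1, a task sequence with a set of TPOs per task, can be captured as simple data records. The following is a hypothetical Python sketch; the task names, descriptions and TPO sets are illustrative assumptions, not taken from the chapter:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """A re-usable task identified during scenario analysis (step 1)."""
    name: str
    description: str
    tpos: frozenset  # task participating objects (TPO), as abstract class names

# The "meal preparation" scenario split into a sequence of tasks.
# All names below are illustrative assumptions.
scenario = [
    Task("FetchMealFromFridge", "Fetch the meal tray from the refrigerator",
         frozenset({"Manipulator", "MealTray", "Fridge"})),
    Task("HeatMeal", "Heat the meal in the microwave oven",
         frozenset({"Manipulator", "MealTray", "MicrowaveOven"})),
]

# Splitting criterion "modularity": each task focuses on a small object set.
assert all(len(t.tpos) <= 4 for t in scenario)
```

Keeping the TPO set explicit per task mirrors the use case diagram of Fig. 7, where each use case lists its participating objects.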

The objects involved in task execution are the elements that are relevant in all subsequent development steps. To follow the principle of re-usable task-knowledge, the TPOs are specified as abstract object classes. For example, a task that describes the fetching of an object from a container-like place (see Fig. 4) can be re-used to fetch either a bottle or a meal from the refrigerator. In the FRIEND::Process the re-usable classes of objects are specified as a hierarchical UML ontology. An exemplary ontology for the scenario "meal preparation and assistance to eat" is depicted in Fig. 8. As depicted, the TPOs are constructed from basic geometric bodies (cuboid and cylinder), and more complex objects are created via inheritance and aggregation.

Fig. 8. Ontology of task participating objects (TPO) for the scenario "Meal preparation and assistance to eat"

To embed the TPOs in the tool-chain that covers all further development steps, the concept of "Object Templates" (OT) has been introduced (Kampe & Gräser, 2010). The configuration of Object Templates and their integration into the different levels of process-structure configuration will be discussed in more detail within the following process steps.
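The inheritance and aggregation relations of the TPO ontology (Fig. 8) can be sketched as plain classes. This is a minimal illustrative sketch, assuming class names and dimensions; the actual ontology is modeled in UML:

```python
# Sketch of the TPO ontology idea: TPOs derive from basic geometric
# bodies (inheritance), complex objects are built from parts (aggregation).
# All class names and dimensions are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Cuboid:
    width: float
    height: float
    depth: float

@dataclass
class Cylinder:
    radius: float
    height: float

@dataclass
class Bottle(Cylinder):
    # inheritance: a bottle specializes a cylindrical body
    is_grippable: bool = True

@dataclass
class MealTray:
    # aggregation: a meal tray is composed of simpler objects
    tray: Cuboid
    lid: Cuboid
    spoon: Cylinder

mt = MealTray(tray=Cuboid(0.30, 0.05, 0.20),
              lid=Cuboid(0.15, 0.05, 0.15),
              spoon=Cylinder(0.01, 0.15))
```

In the same spirit, re-using the abstract class (e. g. `Cylinder`) is what allows one task definition to fetch either a bottle or a meal from the refrigerator.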

## **3.2 FRIEND::Process step 2: Configuration of object templates and abstract process-structures**

In this development step the task participating objects are formally specified and configured with the help of Object Templates. Subsequently, an abstract process-structure (PSA) is configured based on pictographic And/Or-Nets. This means that physical object constellations (OC) and physical transitions between the object constellations are specified. Besides configuration of PSA, the logical correctness of the abstract process-structures is guaranteed by the configuration tool. Finally, the pictographic PSA are converted to Petri-Nets according to (Cao & Sanderson, 1998) for the input into the task planner.
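The And/Or-Net view of a PSA, with object constellations as nodes and COPs as transitions, can be sketched as a small graph structure. The container layout and the helper function below are assumptions for illustration; the symbols follow the chapter's notation:

```python
# Sketch of a PSA as a graph: nodes are object constellations (OCs),
# edges are composed operators (COPs). The dictionary layout is assumed.
psa = {
    "ocs": ["MP.1_0", "Fr.1-Mt.1_0", "Fr.1-MP.1-Mt.1_0"],
    "cops": [
        # (name, input OCs, output OCs): an assembly COP joins two OCs
        ("GraspObjectInContainer(MP.1, Mt.1, Fr.1)",
         ["MP.1_0", "Fr.1-Mt.1_0"], ["Fr.1-MP.1-Mt.1_0"]),
        # complementary disassembly COP for re-usage in other scenarios
        ("DepartFromContainer(MP.1, Mt.1, Fr.1)",
         ["Fr.1-MP.1-Mt.1_0"], ["MP.1_0", "Fr.1-Mt.1_0"]),
    ],
    "initial": ["MP.1_0", "Fr.1-Mt.1_0"],
    "target": ["Fr.1-MP.1-Mt.1_0"],
}

def successors(marking: frozenset, psa: dict) -> list:
    """Markings reachable by firing one COP (And/Or-Net style transition)."""
    result = []
    for name, pre, post in psa["cops"]:
        if set(pre) <= marking:  # all input OCs present
            result.append((name, (marking - set(pre)) | set(post)))
    return result

start = frozenset(psa["initial"])
next_markings = successors(start, psa)  # only the grasp COP can fire here
```

A Petri-Net translation in the style of (Cao & Sanderson, 1998) would treat such markings as token distributions, which is what the task planner consumes.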

In the following, the process step is first described in general. Afterwards, the configuration concept for Object Templates is shown. Finally, the configuration of an abstract process-structure is exemplified.

## **3.2.1 Description of the process step**

As shown in Fig. 9, the FRIEND::Process decomposes each task into an abstract process-structure (PSA). A schematic exemplary pictographic PSA for the task "Fetch cup from container" has already been introduced and discussed in Fig. 4. Within the FRIEND::Process, the configuration of a PSA is carried out within a pictographic configuration environment, the so-called PSA-Configurator. Fig. 10 shows the PSA-Configurator with the PSA "Fetch meal from fridge".

Fig. 9. Decomposition of a task as abstract process-structure with object constellations (OC) and composed operators (COP)

The procedure of PSA configuration is as follows:

• Selection of task participating objects (TPOs)
• Composition of object constellations (OCs)
• Connection of OCs via composed operators (COPs)
• Selection of default initial and default target situation

Fig. 10. PSA-Configurator with the pictographic abstract process-structure modeling the task "Fetch meal from fridge"4

4 For better readability, overlays have been added in this illustration.

The pictographic representation of an OC is configured within a sub-dialog of the PSA-Configurator. Within this configuration dialog, the predicate logic facts which are assigned to an OC can be inspected. These facts are the pre- and post-condition facts of the COPs that interconnect the OCs. Within the constraints given by the COP facts, the pictographic appearance of an OC can be adjusted in a 3D scene inside the PSA-Configurator. The rendering of object constellations is based on "Object Templates".

## **3.2.2 Object templates**

Objects play a central role in process-structures. The different levels of process-structures model different aspects of objects. On the abstract level, a symbol is associated with an object for the purpose of task planning (e. g. "Mt.1" for the meal tray in the sample scenario). On the system level, i. e. on the level of elementary process-structures, so-called sub-symbolic (i. e. geometric) object information is processed. With respect to the meal tray this is, for instance, the location to grasp the tray. To model the different aspects of objects and to assure information consistency throughout the different information layers, the concept of Object Templates has been introduced.

Object Templates comprise the following aspects:

• A 3D model of the object, used for pictographic rendering of object constellations on PSA level as well as for motion planning and collision avoidance on PSR level
• Associated sub-symbolic (geometric) data for planning and execution on PSE and PSR level, e. g. the grasping location of an object
• Complex objects can be composed of simpler objects; e. g. a meal tray consists of a tray, a plate, a lid and a spoon
• Object Templates are configured with natural parameters of the composed object, e. g. width, height, depth and wall thickness for a container, instead of separate specification of all geometric primitives
• The 3D appearance of Object Templates is associated with task-knowledge like symbolic facts and characteristics. For example, the fact "IsAccessible(MicrowaveOven)" renders the opening status of the door of the oven's 3D model.

An exemplary Object Template is the meal tray depicted in Fig. 11. It consists of a base tray, a plate with a lid and a spoon. Both the lid and the spoon are detachable from the meal tray. The different stages of separation are depicted in Fig. 12.

Fig. 11. The meal tray of the eating scenario as photo (left) and modeled by means of an Object Template (right)

Fig. 12. The different separation stages (detached lid, detached spoon, both lid and spoon detached) of the meal tray
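The idea of configuring an Object Template with natural parameters instead of separate geometric primitives can be illustrated with a small sketch. The function and field names below are hypothetical, not the OT-Configurator's actual data model:

```python
# Hypothetical sketch of an Object Template: natural parameters of the
# composed object expand into geometric primitives, while characteristics
# and symbolic facts link the 3D model to task-knowledge.
def container_template(width, height, depth, wall):
    """Expand a container's natural parameters into five wall cuboids
    (open front), each given as (name, x-size, y-size, z-size)."""
    return {
        "primitives": [
            ("bottom", width, wall, depth),
            ("top",    width, wall, depth),
            ("left",   wall, height, depth),
            ("right",  wall, height, depth),
            ("back",   width, height, wall),
        ],
        # characteristics with a fixed sub-symbolic association
        "characteristics": {
            "IsContainer": True,   # allows placing objects within
            "IsGrippable": False,  # no grasp coordinates on the body itself
        },
        # symbolic facts rendered in the 3D view, e.g. the door state
        "facts": {"IsAccessible": False},
    }

fridge = container_template(width=0.6, height=1.2, depth=0.5, wall=0.04)
fridge["facts"]["IsAccessible"] = True  # would be rendered as an opened door
```

The point of the single parameter set is consistency: changing `wall` once updates every derived primitive, instead of editing five cuboids by hand.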

The configuration of Object Templates takes place within the Object-Template-Configurator (OT-Configurator), which is part of the PSA-Configurator as shown in Fig. 13.

Fig. 13. Object-Template-Configurator (OT-Configurator) as part of the PSA-Configurator

Within the screenshot in Fig. 13 the Object Template of the refrigerator is modeled. On the left side the parameters and their association to symbolic facts are specified. On the right side the 3D model of the object is rendered according to the current configuration. To render the 3D model of a composed object, the aggregated sub-objects are composed with formulas within the Object-Template-Configurator tool. Frequently required and complex formulas like alignment and rotation of Object Templates are provided with the help of assistive functions.

Certain aspects of the 3D geometry have a fixed association with object characteristics, as given in the following table:

| Characteristic | Associated sub-symbolic element |
|---|---|
| IsGrippable | Coordinates to grasp the object |
| IsPlatform | Limits to place other objects onto this object |
| IsContainer | Limits to place other objects within this object |

Table 3. Relations between characteristics and sub-symbolic elements

## **3.2.3 Exemplary abstract Process-Structure: Fetch meal tray from fridge**

The exemplary PSA that shall be discussed in detail has already been introduced within the PSA-Configurator frontend in Fig. 10. In this PSA the task participating objects are a fridge (symbol "Fr" with instance number "1": "Fr.1"), a meal tray ("Mt.1"), the manipulator ("MP.1") and the abstract symbol for a relative location ("InsertLoc"). In this PSA the initial situation consists of two object constellations. The first one models the manipulator in a free position in the workspace (instance number "0" is assigned to this object constellation: "MP.1\_0"). The second object constellation models the already opened fridge containing the meal tray ("Fr.1-Mt.1\_0"). The two OCs are connected via the assembly COP "GraspObjectInContainer(MP.1, Mt.1, Fr.1)". If physically possible, a complementary disassembly operator is assigned to model the reverse operation for re-usage of the PSA in another scenario context; in this case this is the COP "DepartFromContainer(MP.1, Mt.1, Fr.1)". The assembled object constellation is depicted on the bottom left side and the associated abstract planning symbol is "Fr.1-MP.1-Mt.1\_0". Due to the associated symbolic facts, which are imposed within the object constellation by the post-condition facts of the COP, the pictographic representation is rendered so that the manipulator grasps the meal tray in the fridge.

Besides assembly and disassembly operators, the And/Or-Net syntax provides operators modeling the internal state transition (IST) of object constellations (IST COPs). IST COPs are applied when the physical contact state of the involved objects is not changed. From the viewpoint of planning on the abstract level, objects in close relative locations to each other are considered to be in a physical contact situation. Therefore, the IST COP "GetObjectOutside(MP.1, Mt.1, Fr.1, InsertLoc)" is applied to transform the OC "Fr.1-MP.1-Mt.1\_0" on the left side into the OC "Fr.1-MP.1-Mt.1\_1" on the right side. Finally, the COP "MoveObjectFromRelLoc(MP.1, Mt.1, Fr.1, InsertLoc)" models the disassembly operation and results in two object constellations which model the target situation of this abstract process-structure: "Fr.1\_0" is the empty fridge and "MP.1-Mt.1\_0" is the manipulator with the gripped meal tray in a free position in the workspace.

To be able to develop and verify the three levels of process-structures independently, i. e. in a modular manner, the consistency of task-knowledge on all levels has to be assured. This is achieved with common building blocks of the different process-structures, as shown in the decomposition chain in Table 2. The common elements are the interfaces to the next level of process-structures. The important interfacing elements between PSA and PSE are the pre- and post-condition facts of the COP to be decomposed as PSE in the next process step. For the COP "GraspObjectInContainer" the facts are shown in Table 4.

| Pre-Facts | Post-Facts |
|---|---|
| ContainerAccessible(Container) = True | - |
| IsInsideContainer(Object, Container) = True | - |
| HoldsNothing(Manipulator) = True | HoldsNothing(Manipulator) = False |
| IsInFreePos(Manipulator) = True | IsInFreePos(Manipulator) = False |
| - | IsGripped(Manipulator, Object) = True |

Table 4. Pre- and Post facts of COP "GraspObjectInContainer(Manipulator, Object, Container)"
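The pre- and post-condition facts of Table 4 act like STRIPS-style operator conditions: the pre-facts must hold before the COP fires, and the post-facts overwrite the state afterwards. A minimal sketch, assuming a plain dictionary as the state representation (the helper names are not from the chapter):

```python
# Minimal STRIPS-like sketch of the COP "GraspObjectInContainer":
# pre-facts must hold in the current state; post-facts overwrite it.
PRE = {
    "ContainerAccessible(Container)": True,
    "IsInsideContainer(Object, Container)": True,
    "HoldsNothing(Manipulator)": True,
    "IsInFreePos(Manipulator)": True,
}
POST = {
    "HoldsNothing(Manipulator)": False,
    "IsInFreePos(Manipulator)": False,
    "IsGripped(Manipulator, Object)": True,
}

def applicable(state: dict) -> bool:
    """All pre-condition facts must hold with the required truth value."""
    return all(state.get(fact) == value for fact, value in PRE.items())

def apply_cop(state: dict) -> dict:
    """Fire the COP: merge the post-condition facts into the state."""
    assert applicable(state), "pre-condition facts violated"
    return {**state, **POST}

state = dict(PRE)          # a state in which all pre-facts hold
state = apply_cop(state)
assert state["IsGripped(Manipulator, Object)"] is True
```

Facts left untouched by `POST` (such as `IsInsideContainer`) survive unchanged, which is exactly why these facts can serve as the interface between PSA and the PSE decomposed in the next step.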

## **3.3 FRIEND::Process step 3: Configuration of elementary process-structures**

In the third process step, each composed operator (COP) of an abstract process-structure (PSA) is decomposed into an elementary process-structure (PSE). To achieve user-friendly configuration of PSE, configurable function blocks are assembled to function block networks (FBN). Each function block models a reactive robot system operation, also called skill. A

Programming of Intelligent Service Robots

modeling the COP "GraspObjectInContainer".5

microwave oven after heating of the meal.

5 For better readability, overlays have been added in this illustration.

with the Process Model "FRIEND::Process" and Configurable Task-Knowledge 543

explicitly connected to an abort block to increase the readability of the network structure. The typical construction rule for a semi-autonomous system (like FRIEND) is to provide user interactions as redundant action for autonomous system operations. As shown in Fig. 15, the failure of an autonomous operation (e. g. "AcquireObjectBySCam") is linked to the

user interaction "DetermineObjectBySCam", replacing the failed system action.

Fig. 15. PSE-Configurator with elementary process-structure as function block network,

the "Meal preparation and assistance" task, become manageable in their complexity.

The configuration of PSE on the basis of function block networks does not only achieve a decisive increase of development comfort (configuration instead of programming), but it also decreases the required task-knowledge engineering time significantly. By building the PSE directly in the correct manner, the time-consumption for the construction of one PSE is reduced from hours to 10-15 minutes per network. On this basis, real world problems like

**3.3.2 Exemplary elementary process-structure: Manipulator grasps meal tray in fridge**  The exemplary PSE "GraspObjectInContainer", as shown in Fig. 15, models the grasping of an object in a container-like place in a general way. In the sample scenario "meal preparation" this PSE is applied to fetch the meal tray from the refrigerator and also from the

The objects (Object Templates) "Manipulator", "Object" and "Container", which are involved in this PSE, are the input artifacts handed over as COP parameters from the previous step of the FRIEND::Process. The first skill block that follows the "Start" block is the manipulator skill "OpenGripper". Subsequently, the container (fridge) is located with the help of the vision skill "AcquireObjectBySCam(Object)". This skill calculates the location and size of the given object with the help of a stereo camera (SCam). In the sample scenario the COP parameter "Container" (i. e. the fridge) is inserted at the skill's placeholder

priori verification of task-knowledge on this level takes place with the help of Petri-Nets, which result from automatic conversion of FBNs.

## **3.3.1 Description of the process step**

Fig. 14 depicts the decomposition principle of COPs into elementary process-structures, consisting of skill blocks. An elementary process-structure, as first introduced by (Martens, 2003), is a Petri-Net with enhanced syntax and superordinated construction rules. The advantage of Petri-Nets is their ability to model parallel activities. This is useful for the behavioral modeling on robot system level, for instance, if a manipulator action is guided by input from a camera system or another sensor. Furthermore, Petri-Net-based PSE offer mathematical methods for analysis of the reachability of a certain system state, for verification of the correctness of control and dataflow and for the exclusion of resource conflicts (Martens, 2003).

Fig. 14. Decomposition of a composed operator (COP) as elementary process-structure

Besides these conceptual advantages, from the viewpoint of implementation it turned out that the programming of elementary process-structures with Petri-Nets is a time consuming and error prone procedure. The setup of a correctly verified Petri-net-PSE usually takes several hours. Even with strong modularization of the networks, the large number of places and transitions leads to hardly manageable Petri-Nets in real-life applications. This is the reason why the FRIEND::Process introduces the configuration of PSE on the basis of function block networks (FBN). Similar to the PSA-Configurator, a configuration frontend, called PSE-Configurator, has been created. This tool subsumes all logical and syntactical rules that are required for PSE-configuration. Furthermore, a conversion algorithm has been developed (Prenzel et al., 2008), which converts an FBN into a Petri-Net for automatic execution of verification routines, like a reachability analysis. A screenshot of the PSE-Configurator with the PSE "GraspObjectInContainer" is given in Fig. 15.

With respect to their representative function for Petri-Nets, the control flow within the FBN structures is token-oriented. The execution starts from the "Start" block and ends at the "Target Success" block. In-between, reactive skills are executed, including manipulative operations as well as sensor operations or user interactions. Each function block has one input port and several output ports according to the possible execution results of the skill (see e.g. block "CoarseApproachToObjectInContainer" in Fig. 15 with the output ports "Success", "Failure", "Abort" and "UserTakeOver"). The output port "Abort" is not explicitly connected to an abort block to increase the readability of the network structure. The typical construction rule for a semi-autonomous system (like FRIEND) is to provide user interactions as redundant actions for autonomous system operations. As shown in Fig. 15, the failure of an autonomous operation (e.g. "AcquireObjectBySCam") is linked to the user interaction "DetermineObjectBySCam", replacing the failed system action.
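The token-oriented control flow described above can be sketched as a tiny interpreter in which each skill returns the name of the output port that fired and the edges route the token onward. Block names follow the example in Fig. 15, but the code is a hypothetical illustration, not the MASSiVE implementation:

```python
# Hypothetical sketch of token-oriented FBN execution: each skill returns the
# name of the output port that fired; edges route the token to the next block.

def run_fbn(blocks, edges, start="Start"):
    """Follow the token from `start` until a block with no outgoing edges."""
    sources = {src for src, _ in edges}
    current = start
    trace = [current]
    while current in sources:
        port = blocks[current]()           # execute the skill
        current = edges[(current, port)]   # route the token via the fired port
        trace.append(current)
    return trace

blocks = {
    "Start": lambda: "Success",
    "AcquireObjectBySCam": lambda: "Failure",    # autonomous skill fails ...
    "DetermineObjectBySCam": lambda: "Success",  # ... user interaction replaces it
}
edges = {
    ("Start", "Success"): "AcquireObjectBySCam",
    ("AcquireObjectBySCam", "Success"): "TargetSuccess",
    ("AcquireObjectBySCam", "Failure"): "DetermineObjectBySCam",
    ("DetermineObjectBySCam", "Success"): "TargetSuccess",
}
trace = run_fbn(blocks, edges)
```

The failure edge from the autonomous skill to the user interaction reproduces the redundancy rule for semi-autonomous systems described above.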

Fig. 15. PSE-Configurator with elementary process-structure as function block network, modeling the COP "GraspObjectInContainer".<sup>5</sup>

The configuration of PSE on the basis of function block networks not only achieves a decisive increase in development comfort (configuration instead of programming), but also decreases the required task-knowledge engineering time significantly. By building the PSE directly in the correct manner, the time consumed for the construction of one PSE is reduced from hours to 10-15 minutes per network. On this basis, real-world problems like the "Meal preparation and assistance" task become manageable in their complexity.

## **3.3.2 Exemplary elementary process-structure: Manipulator grasps meal tray in fridge**

The exemplary PSE "GraspObjectInContainer", as shown in Fig. 15, models the grasping of an object in a container-like place in a general way. In the sample scenario "meal preparation" this PSE is applied to fetch the meal tray from the refrigerator and also from the microwave oven after heating of the meal.

The objects (Object Templates) "Manipulator", "Object" and "Container", which are involved in this PSE, are the input artifacts handed over as COP parameters from the previous step of the FRIEND::Process. The first skill block that follows the "Start" block is the manipulator skill "OpenGripper". Subsequently, the container (fridge) is located with the help of the vision skill "AcquireObjectBySCam(Object)". This skill calculates the location and size of the given object with the help of a stereo camera (SCam). In the sample scenario the COP parameter "Container" (i. e. the fridge) is inserted at the skill's placeholder

<sup>5</sup> For better readability, overlays have been added in this illustration.


"Object" according to the principle of type-conform parameter replacement (Martens, 2003). The Object Template of a fridge provides the two according sub-symbolic parameters, location and size. A successful execution of the skill guarantees that the container's location and size are stored in the system's world model and can serve as input parameters for subsequent skills. After verification of the associated Petri-Net of this PSE, the correctness of the data flow between all skill blocks is assured. If the recognition of the fridge is successful and the user does not have to be involved, the skill "AcquireObjectInContainerBySCam" is executed to determine the location of the meal tray in the fridge. Afterwards, a "CoarseApproachToObjectInContainer" follows. This skill roughly directs the manipulator in front of the meal tray in the fridge, based on the location information calculated beforehand. Fig. 15 depicts that this manipulator skill is followed by an enforced user interaction, since all output ports are connected to the Or-block preceding the user interaction. The confirmation by the user is included at this place for testing purposes, to assure a correct execution of the first skill. For real task-execution, a quick reconfiguration of the PSE will change the system behavior and directly execute the next manipulator skill "FineApproachToObjectInContainer". This skill leads to a final grasping of the meal tray handle, while avoiding collisions of the manipulator with the fridge with the help of dedicated methods for collision avoidance and path planning (Ojdanic, 2009). The final action necessary to complete the grasping is to close the gripper. The PSE ends with setting the post-facts of the COP as specified in Table 4.
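Type-conform parameter replacement can be sketched as a type check when binding COP parameters to skill placeholders. The classes and field names below are illustrative assumptions, not the actual Object Template interface:

```python
# Illustrative sketch of type-conform parameter replacement: a COP parameter
# (an Object Template) may replace a skill placeholder only if its type matches.

class ObjectTemplate:
    def __init__(self, name, type_, location=None, size=None):
        self.name, self.type = name, type_
        self.location, self.size = location, size

def bind(skill_placeholders, cop_parameters):
    """Map each placeholder (name, required type) to a type-conform parameter."""
    binding = {}
    for ph_name, required in skill_placeholders:
        match = next((p for p in cop_parameters if p.type == required), None)
        if match is None:
            raise TypeError(f"no type-conform parameter for {ph_name}:{required}")
        binding[ph_name] = match
    return binding

fridge = ObjectTemplate("Fridge", "Container",
                        location=(1.2, 0.4, 0.9), size=(0.6, 0.6, 1.8))
tray = ObjectTemplate("MealTray", "Object")
binding = bind([("Container", "Container"), ("Object", "Object")], [fridge, tray])
```

A placeholder without a matching parameter raises an error at configuration time, which is exactly when the FRIEND::Process wants such mismatches to surface.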

From the viewpoint of the system's task planner, each skill-function-block represents an elementary (executable) operation. Within the execution level of the system, the operations are not seen as atomic units. The execution of one skill means to activate reactive system functionality, for instance the sensor-controlled approach of an object to be grasped in the skill "FineApproachToObjectInContainer". These basic system skills have to couple sensors and actuators on the algorithmic level. To pursue the paradigm of configurable process-structures also on this level, the FRIEND::Process introduces reactive process-structures.

## **3.4 FRIEND::Process step 4: Configuration and testing of reactive process-structures**

Historically, during the elaboration of the FRIEND::Process, the elementary operators (skills) were implemented directly in C++. Subsequently, when appropriate CASE tools became available, the elementary operators were implemented with model-driven development techniques (Schmidt, 2006) as executable UML models. Then, a configuration tool was developed, which makes user-friendly configuration of process-structures possible on this development level as well. With the help of this tool it is assured that the verified interfaces from the PSE layer are respected and the robustness assertion throughout the complete system architecture is maintained.

### **3.4.1 Description of the process step**

Fig. 16 depicts the decomposition of a skill block from the PSE layer into a reactive process-structure (PSR) consisting of algorithmic blocks. Similar to the PSE function blocks, PSR are also based on configurable function block networks. The PSR-Configurator tool results from the Open-Source Image Nets Framework<sup>6</sup>, which originally was developed for configurable image processing algorithms.

<sup>6</sup> http://imagenets.sourceforge.net/

The PSR Configuration Framework consists of the following five parts (see Fig. 17):

• Reactive process-structures (PSR), which are executable function block networks,
• PSR-Configurator,
• Embedding of PSR into any C++ code via PSR-Executor,
• Extensible set of Plug-Ins and
• Configurable function blocks



Fig. 16. Decomposition of a skill into a reactive process-structure

The "PSR-Configurator" is a graphical user interface, which can be used to rapidly create a "function block network", namely a reactive process-structure (PSR). With the PSR-Executor, it is possible to load and execute the previously configured PSR. The PSR itself is a directed graph, connecting configurable "function blocks". Each block can execute code to process its input (image data or other data) and save its outputs. One or more blocks are grouped in a "Plug-In", and an arbitrary number of Plug-Ins can be loaded dynamically by the PSR. In this way, the PSR-Framework can easily be extended by new independent Plug-Ins. This independence of the algorithmic modules allows completely independent development within a team of developers. In addition, the strong modularization leads to a technically manageable amount of code within a single block and reduces the time needed to inspect an erroneous block. The PSR execution library can save a PSR in human-readable XML format. Thus, on the one hand the PSR-Configurator can configure, load and save a PSR, but on the other hand external C++ code can also load a PSR file.
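The human-readable XML round trip can be sketched with Python's standard `xml.etree` module; the element and attribute names are invented for illustration and do not reproduce the actual PSR file format:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a PSR saved as human-readable XML: blocks (with their
# Plug-In) plus directed connections. Element names are illustrative only.

def psr_to_xml(blocks, connections):
    root = ET.Element("PSR")
    for name, plugin in blocks:
        ET.SubElement(root, "Block", name=name, plugin=plugin)
    for src, dst in connections:
        ET.SubElement(root, "Connection", {"from": src, "to": dst})
    return ET.tostring(root, encoding="unicode")

def psr_from_xml(text):
    root = ET.fromstring(text)
    blocks = [(b.get("name"), b.get("plugin")) for b in root.findall("Block")]
    connections = [(c.get("from"), c.get("to")) for c in root.findall("Connection")]
    return blocks, connections

xml_text = psr_to_xml([("CaptureLeft", "Camera"), ("Saturation", "Vision")],
                      [("CaptureLeft", "Saturation")])
```

Because the format is plain XML, both the configurator and external C++ code can parse the same file, which is the interoperability point made above.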

Fig. 17. The UML structure of the PSR-Configuration Framework

Hierarchical modeling is a common method to subdivide algorithms into separate parts - it breaks down the complexity and facilitates reusability. In the PSR-Framework, parts can be constructed as separate PSR and can be combined afterwards to constitute a complete algorithm. The PSR-Executor is in fact also a function block, which can load and process a PSR. The connection between the PSR inside an Executor and the outer net is established by special input and output blocks. For example the PSR "Color2Color3D" shown in Fig. 18 calculates a colored point cloud out of a stereo image pair. On the left side there are two input blocks, which hand over the images from the block in orange. This block only exists in this PSR for testing the net and will be ignored on execution if this PSR is loaded by a PSR-Executor (see Fig. 19, right side).

Fig. 18. The functionality of calculating a colored point cloud out of a stereo image pair is depicted in this PSR, called "Color2Color3D"

Fig. 19. The previously shown PSR can be loaded as one PSR-Executor block

Fig. 20. Left: original image, right: resulting point cloud of the stereo camera images visualized in 3D by the PSR-Configurator

The PSR in Fig. 19 shows the use of a subnet of an image acquisition together with the calculation of the extrinsic matrices of a stereo camera, which describe the relation of the cameras to the robot. These matrices depend on an invariant transformation frame inside the pan-tilt-head (see Fig. 1) and its rotation angles. By combining the two subnets, a live view of the stereo camera's point cloud can be calculated (depicted in Fig. 20; in the center of the images the meal tray can be seen). As a visually guided robot is a real-world object, which moves in three-dimensional Cartesian space, it is useful to display the vision results in the same space. While configuring a PSR with the PSR-Configurator, intermediate results can be visualized in two and three dimensions, depending on the data type; for example, scalar values can only be visualized in 2D, while camera matrices can be visualized in 2D and 3D (using OpenGL (Wright et al., 2010), see Fig. 20, right).
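The dependence of the extrinsic matrices on the pan-tilt-head angles can be sketched as a fixed head frame composed with the current pan and tilt rotations. The choice of axes (pan about z, tilt about x) and the frame values are illustrative assumptions:

```python
import numpy as np

# Sketch: camera-to-robot extrinsic as a fixed head frame composed with the
# current pan (about z) and tilt (about x) rotations. Frames are illustrative.

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def extrinsic(pan, tilt, head_in_robot):
    """4x4 camera-to-robot transform for given pan/tilt angles (radians)."""
    T = np.eye(4)
    T[:3, :3] = rot_z(pan) @ rot_x(tilt)
    return head_in_robot @ T

head_in_robot = np.eye(4)
head_in_robot[:3, 3] = [0.0, 0.2, 1.4]   # fixed frame inside the pan-tilt-head
T = extrinsic(np.deg2rad(30), np.deg2rad(-10), head_in_robot)
```

Recomputing this transform from the current joint angles is what keeps the live point cloud registered to the robot frame while the head moves.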

To be able to execute a PSR as a skill block within the context of the PSE layer and to guarantee that the PSE interfaces are respected, a special type of "Verified PSR-Executor block" is created. During configuration of this kind of block, the PSR-Configurator checks that the used resources as well as the input and output parameters match the specification of the PSE skill to be modeled as a PSR. For example, in the case of the PSR "AcquireObjectBySCam(Object)" the allowed resource is the stereo camera system, the input parameter is the Object Template of the given object, and the output parameters are the return values "Success" and "Failure".
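The interface check performed for a Verified PSR-Executor block can be sketched as a comparison of the PSR's declared resources and parameters against the PSE skill specification; the specification table below is a hypothetical stand-in:

```python
# Sketch of the interface check for a verified executor block: the PSR's
# declared resources, inputs and outputs must match the PSE skill specification.

SKILL_SPECS = {
    "AcquireObjectBySCam": {
        "resources": {"StereoCamera"},
        "inputs": {"ObjectTemplate"},
        "outputs": {"Success", "Failure"},
    },
}

def verify_psr(skill, resources, inputs, outputs):
    """Return a list of interface violations (empty list means verified)."""
    spec = SKILL_SPECS[skill]
    problems = []
    if set(resources) - spec["resources"]:
        problems.append("uses resources outside the specification")
    if set(inputs) != spec["inputs"]:
        problems.append("input parameters do not match")
    if set(outputs) != spec["outputs"]:
        problems.append("output ports do not match")
    return problems

ok = verify_psr("AcquireObjectBySCam", ["StereoCamera"],
                ["ObjectTemplate"], ["Success", "Failure"])
```

A PSR that, say, additionally claims the manipulator as a resource would be rejected at configuration time, before it can violate the PSE-layer guarantees.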

## **3.4.2 Exemplary reactive process-structure: Acquire meal tray by stereo camera**

To show the capabilities of the reactive process-structures, a simplified example is discussed in the following, namely the machine vision skill to acquire an object by the stereo camera with the configuration "Meal Tray". This example of a PSR is non-reactive, as no actor is involved. In a more complex PSR, however, it is possible to combine the camera and the robot in a feedback loop to implement visual servoing and achieve reactive behaviour.

In Fig. 21 several general blocks are used to find the red meal tray handle in an image. The processing chain starts with the detection of highly saturated, red parts. It is followed by a 9x9 closing operation to eliminate noise. Afterwards, contours are detected and filtered according to a priori knowledge of the size of the handle. Then, the minimum rectangles around the contours are determined and the major axes and their end points are calculated. For testing the current PSR, again the orange blocks have been added to visualize intermediate testing results and they are not executed during task execution.

Fig. 21. PSR "MajorAxisPoints" which detects red areas of a certain size and calculates the major axes of these areas. Orange blocks are omitted when this PSR is used in a PSR-Executor
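The final steps of the chain — size filtering and major-axis computation — can be sketched in NumPy alone by taking the principal component of the red-pixel coordinates. This is a simplification: the closing and contour steps of the real PSR are not reproduced here:

```python
import numpy as np

# NumPy-only sketch: given a binary mask of red pixels, filter by area and
# compute the major axis and its end points via the principal component of
# the pixel coordinates (a simplified stand-in for the contour-based PSR).

def major_axis(mask, min_area=50):
    ys, xs = np.nonzero(mask)
    if len(xs) < min_area:                  # a-priori size filter
        return None
    pts = np.stack([xs, ys], axis=1).astype(float)
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    vals, vecs = np.linalg.eigh(cov)
    axis = vecs[:, np.argmax(vals)]         # principal (major-axis) direction
    proj = (pts - mean) @ axis              # project pixels onto the axis
    return mean + proj.min() * axis, mean + proj.max() * axis

# Synthetic "handle": a horizontal bar of red pixels in a 100x100 mask
mask = np.zeros((100, 100), dtype=bool)
mask[50, 20:80] = True
p0, p1 = major_axis(mask)
```

The two returned end points correspond to the major-axis end points that the PSR "MajorAxisPoints" hands on to the triangulation stage.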

To grasp the meal tray handle with the manipulator, the determination of its location in 3D is required; a 2D detection of the meal tray is therefore not sufficient. However, the previously created and tested PSR "MajorAxisPoints" can be used twice, once for each image of the stereo camera. Fig. 22 depicts the usage of the previous net to calculate the 3D line describing the handle of the meal tray. The block *Optimal Stereo Triangulation* computes a 3D contour based on key feature points, extracted from a stereo image. With the known camera matrices and the 2D feature correspondences, the 3D points are found by the intersection of two projection lines in 3D space using optimal stereo triangulation, as described in (Natarajan et al., 2011).
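The triangulation step can be sketched with the standard linear (DLT) method, which intersects the two projection constraints algebraically; it is a simplified stand-in for the optimal triangulation of (Natarajan et al., 2011):

```python
import numpy as np

# Linear (DLT) triangulation sketch: with known 3x4 camera matrices P1, P2
# and a 2D correspondence (x1, x2), the homogeneous 3D point is the null
# vector of the stacked projection constraints.

def triangulate(P1, P2, x1, x2):
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # smallest singular vector
    return X[:3] / X[3]             # de-homogenize

# Two cameras along the x-axis (baseline 0.1 m), identity intrinsics
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.05, 0.02, 1.0])
x1 = (P1 @ np.append(X_true, 1))[:2] / (P1 @ np.append(X_true, 1))[2]
x2 = (P2 @ np.append(X_true, 1))[:2] / (P2 @ np.append(X_true, 1))[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free correspondences the linear and the optimal methods agree; the optimal variant matters when the detected axis end points are perturbed by pixel noise.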

Fig. 22. PSR which detects the meal tray handle in 3D

Next, the 3D line of the 3D handle detection is used to calculate a transformation frame, having the position of the right 3D point and the rotations that point the y-axis in line direction. Using the a priori knowledge that the meal tray should be parallel to the world coordinate system, only the rotation around the z-axis has to be calculated. Fig. 23 displays (top, from left to right) the 3D line, the calculated frame, the meal tray Object Template and the placed meal tray, based on the frame. For the fulfillment of the specification of the calling PSE, the Object Template has to be written to the World Model (a service for reading and writing data) with the "Write to World Model" block. This ensures that the detected object is globally available for later processing steps; it is the actual result of this PSR.

Fig. 23. Meal tray detection and Object Template placement based on 3D handle detection, frame calculation and Object Template movement (UD = user data)
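The frame calculation can be sketched as follows: the frame sits at the right end point of the detected 3D line, and only a rotation about the z-axis is needed to align the y-axis with the line (the numerical values are illustrative):

```python
import numpy as np

# Sketch of the frame calculation: position at the right 3D end point,
# yaw (rotation about z) chosen so that the y-axis points along the line.
# Assumes the line is parallel to the ground plane, as in the chapter.

def handle_frame(p_right, p_left):
    direction = np.asarray(p_left, float) - np.asarray(p_right, float)
    yaw = np.arctan2(direction[1], direction[0]) - np.pi / 2  # y-axis along line
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = p_right
    return T

# Handle end points 0.3 m apart, parallel to the world y-axis
T = handle_frame([0.5, 0.0, 0.8], [0.5, 0.3, 0.8])
```

The resulting 4x4 transform is then used to place the meal tray Object Template relative to the detected handle.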

For simulation and PSR unit testing in the PSR-Configurator, the fridge, the static environment (wheelchair, monitor and user) and the robot with its current configuration can be placed in the same 3D scene with the meal tray. Fig. 24 and Fig. 25 show the real scene and the 3D simulation result in comparison.

Fig. 24. Real scene of this PSR


Fig. 25. Simulated scene of this PSR

## **3.5 FRIEND::Process step 5: Task testing**

After finishing the configuration of process-structures on all three levels, the planning and execution of a task (PSA) has to be tested. The modularly configured, verified and tested process-structures of lower abstraction (PSE and PSR) are involved in this final process step.
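The plan/execute/re-plan cycle tested in this step can be sketched as a loop that follows the planned skill sequence and asks the planner for a new tail whenever a skill returns an unexpected result; the skills and the planner below are illustrative stubs, not the MASSiVE Sequencer:

```python
# Hedged sketch of the plan/execute/re-plan cycle: the sequencer follows the
# planned skill sequence and re-plans when a skill returns an unexpected result.

def execute_task(plan, replan, execute):
    """Run `plan` skill by skill; ask `replan` for a new tail on failure."""
    executed = []
    while plan:
        skill, plan = plan[0], plan[1:]
        result = execute(skill)
        executed.append((skill, result))
        if result != "Success":
            plan = replan(skill, result)   # unexpected result: re-plan
    return executed

outcomes = {"OpenGripper": "Success", "AcquireObjectBySCam": "Failure",
            "DetermineObjectBySCam": "Success", "CloseGripper": "Success"}
trace = execute_task(
    ["OpenGripper", "AcquireObjectBySCam", "CloseGripper"],
    lambda skill, result: ["DetermineObjectBySCam", "CloseGripper"],
    lambda skill: outcomes[skill],
)
```

Here the failed autonomous skill is replaced by the redundant user interaction, mirroring the construction rule of the PSE layer.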

## **3.5.1 Description of the process step**

For the purpose of task testing the "Sequence*r*" is used, which embeds a task planner for process-structures and the PSR-Executor (see Fig. 5). The Sequencer is part of the processstructure-based control architecture MASSiVE mentioned in Section 1. The Sequencer interacts with skill servers, which offer the functionality that has been configured and

Programming of Intelligent Service Robots

COP **C**omposed **Op**erator FBN **F**unction **B**lock **N**etwork

OC **O**bject **C**onstellation OT **O**bject **T**emplate PS **P**rocess-**S**tructure

*and Reviews*, 28(2)

PSA **A**bstract **P**rocess-**S**tructure PSE **E**lementary **P**rocess-**S**tructure PSR **R**eactive **P**rocess-**S**tructure TPO **T**ask **P**articipating **O**bject


verified as reactive process-structures beforehand. The layered system architecture organizes a hardware abstraction via a skill layer, so that there is a unique access point to the sensors and actuators from a certain responsible skill server.

Task tests can be performed in the following execution modes:

• *Probabilistic simulation:* the skill interfaces and the communication infrastructure are tested and skill return values are simulated,
• *Skill simulation:* the skill's functional core is simulated,
• *Motion simulation:* the motion governed by manipulative skills is simulated and visualized within a virtual 3D space as shown in Fig. 25,
• *Hardware simulation:* the sensors and actuators are simulated,
• *Real execution:* the skill is executed with access to sensors and actuators.

Based on the process-structures, a complete task is planned and executed in one of the listed skill execution modes. This means that the Sequencer first plans a sequence of COPs and subsequently decomposes each COP into an elementary process-structure. Planning on this level results in a sequence of skills to be executed. Step by step, and based on the execution result of each skill, the planned skill sequence is pursued, or re-planning takes place if an unexpected result is obtained.
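The plan-execute-replan cycle described above can be sketched as follows. The COP name, skill names, and return values are hypothetical stand-ins for illustration, not the MASSiVE or Sequencer API.

```python
# Hypothetical sketch of the Sequencer's plan-execute-replan cycle.
# COP and skill names are invented examples.

def plan_skills(cop):
    """Decompose a COP into a skill sequence (stand-in for PSE-level planning)."""
    catalog = {"FetchBottle": ["locate_bottle", "grasp_bottle", "move_to_tray"]}
    return list(catalog[cop])

def execute_skill(skill, failing):
    """Stand-in skill-server call returning an execution result."""
    return "error" if skill in failing else "ok"

def run_task(cops, failing=()):
    log = []
    for cop in cops:
        skills = plan_skills(cop)            # COP -> elementary process-structure
        i = 0
        while i < len(skills):
            if execute_skill(skills[i], failing) == "ok":
                log.append((skills[i], "ok"))
                i += 1
            else:
                # unexpected result: re-plan this step, e.g. fall back to a
                # redundant skill or a user-interaction skill
                log.append((skills[i], "replanned"))
                skills[i] = "user_interaction"
    return log

log = run_task(["FetchBottle"], failing=("grasp_bottle",))
```

Here the failed grasp is retried with a substitute skill, mirroring how redundant or user-interaction skills keep the once-planned sequence alive.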

## **4. Conclusion**

As shown in Section 2.1, it is a challenging task to establish intelligent behavior of service robots operating in human environments. Typical operation sequences of support tasks in daily life activities seem simple from a human point of view. However, realizing them with a robotic system involves huge complexity due to the variability and unpredictability of human environments.

In this paper the FRIEND::Process, an engineering approach for programming robust intelligent robotic behavior, has been presented. This approach is an alternative to other existing approaches, since it builds on configurable process-structures as central development elements. Process-structures comprise a finite-sized and context-related set of task knowledge. This allows a priori verification of the programmed system behavior and leads to deterministic, fault-tolerant and real-time capable robotic systems.

The FRIEND::Process organizes the different stages of development and leads to consistent development artifacts. This is achieved with the help of a tool chain for user-friendly configuration of process-structures.

The applicability of the proposed methods has been proven throughout the realization of the AMaRob project (IAT, 2009), where task execution in three complex scenarios for the support of disabled persons in daily life activities has been solved. One of these scenarios is the "Meal preparation and eating assistance" scenario, used for exemplification throughout this paper. The most error-prone and thus challenging action in this scenario is the correct recognition of smaller objects (e.g. the handle of the meal tray) under extreme lighting conditions. However, with the inclusion of redundant skills in the elementary process-structures, the system's robustness has been raised in an evolutionary manner. In cases where even redundant autonomous skills did not execute successfully, the accomplishment of the desired task was achieved via inclusion of the user within a user interaction skill.

Currently, the methods and tools discussed in this paper are continuously developed further and are applied in the project ReIntegraRob (IAT, 2011). The mid-term objective is to integrate the different configuration tools for process-structures into one integrated configuration environment. The PSR Configuration Framework, which is the most elaborated tool, will build the basis for this.

## **5. Glossary**

COP: **C**omposed **Op**erator

FBN: **F**unction **B**lock **N**etwork

FRIEND: **F**unctional **R**obotarm with user-fr**IEN**dly interface for **D**isabled people

MASSiVE: **M**ultilayer Control **A**rchitecture for **S**emi-Autonomous **S**ervice Robots w**i**th **V**erified Task **E**xecution

OC: **O**bject **C**onstellation

OT: **O**bject **T**emplate

PS: **P**rocess-**S**tructure

PSA: **A**bstract **P**rocess-**S**tructure

PSE: **E**lementary **P**rocess-**S**tructure

PSR: **R**eactive **P**rocess-**S**tructure

TPO: **T**ask **P**articipating **O**bject


## **6. References**

Asimov, I. (1991). *Robot Visions*, Roc (Reissue 5th March 1991), ISBN-10: 0451450647

Cao, T. & Sanderson, A. C. (1998). AND/OR net representation for robotic task sequence planning, In: *IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews*, 28(2)

Dario, P., Dillman, R. & Christensen, H. I. (2004). EURON research roadmaps. Key area 1 on 'Research coordination', Available from http://www.euron.org

Engelberger, J. F. (1989). *Robotics in Service*, MIT Press, Cambridge, MA, USA, 1st ed.

Gostai. (2010). Urbi 2.0. Available from http://www.gostai.com

Gräfe, V. & Bischoff, R. (2003). Past, present and future of intelligent robots, *Proceedings of the 2003 IEEE International Symposium on Computational Intelligence in Robotics and Automation* (CIRA 2003), volume 2, ISBN 0-7803-7866-0, Kobe, Japan

IAT (2009). *AMaRob Project*, Institute of Automation, University of Bremen, Germany. Available from http://www.amarob.de

IAT (2011). *ReIntegraRob Project*, Institute of Automation, University of Bremen, Germany. Available from http://www.iat.uni-bremen.de/sixcms/detail.php?id=1268

Kampe, H. & Gräser, A. (2010). Integral modelling of objects for service robotic systems, *Proceedings of the joint conference of ISR 2010 (41st International Symposium on Robotics) and ROBOTIK 2010 (6th German Conference on Robotics)*, ISBN 978-3-8007-3273-9, Munich, Germany

Kemp, C. C., Edsinger, A. & Torres-Jara, E. (2007). Challenges for robot manipulation in human environments, In: *IEEE Robotics and Automation Magazine*, vol. 14, pp. 20-29

Martens, C. (2003). *Teilautonome Aufgabenbearbeitung bei Rehabilitationsrobotern mit Manipulator - Konzeption und Realisierung eines software-technischen und algorithmischen Rahmenwerks*, PhD dissertation, University of Bremen, Faculty of Physics / Electrical Engineering (in German)

Martens, C., Prenzel, O. & Gräser, A. (2007). The rehabilitation robots FRIEND-I & II: Daily life independency through semi-autonomous task-execution, In: *Rehabilitation Robotics* (Sashi S. Kommu, Ed.), pp. 137-162, I-Tech Education and Publishing, Vienna, Austria, Available from http://www.intechopen.com/books/show/title/rehabilitation_robotics

Microsoft. (2011). Microsoft Robotics Studio, Available from http://www.microsoft.com/robotics

Natarajan, S. K., Ristic-Durrant, D., Leu, A. & Gräser, A. (2011). Robust stereo-vision based 3D modeling of real-world objects for assistive robotic applications, In: *Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, San Francisco, USA

Ojdanic, D. (2009). *Using Cartesian space for manipulator motion planning - application in service robotics*, PhD dissertation, University of Bremen, Faculty of Physics and Electrical Engineering

Prenzel, O. (2005). Semi-autonomous object anchoring for service-robots, In: B. Lohmann, A. Gräser (Eds.), *Methods and Applications in Automation*, pp. 57-68, Shaker-Verlag, Aachen, 2005, ISBN 3-8322-4502-2

Prenzel, O., Boit, A. & Kampe, H. (2008). Ergonomic programming of service robot behavior with function block networks, In: *Methods and Applications in Automation*, Shaker-Verlag, pp. 31-42

Prenzel, O. (2009). *Process model for the development of semi-autonomous service robots*, PhD dissertation, University of Bremen, Faculty of Physics and Electrical Engineering

Quigley, M., Conley, K., Gerkey, B. P., Faust, J., Foote, T., Leibs, J., Wheeler, R. & Ng, A. Y. (2009). ROS: an open-source Robot Operating System, In: *Proc. of ICRA Workshop on Open Source Software*

Russell, S. & Norvig, P. (2003). *Artificial Intelligence - A Modern Approach*, Prentice Hall, Upper Saddle River, New Jersey, 2nd ed.

Schlegel, C. & Woerz, R. (1999). The software framework SmartSoft for implementing sensorimotor systems, In: *Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)*, pp. 1610-1616

Schmidt, D. C. (2006). Model-driven engineering, In: *Guest editor's introduction*, pp. 25-31, IEEE Computer

Schreckenghost, D., Bonasso, R., Kortenkamp, D. & Ryan, D. (1998). Three tier architecture for controlling space life support systems, In: *Proc. of IEEE SIS'98*, Washington DC, USA

Simmons, R. & Apfelbaum, D. (1998). A task description language for robot control, In: *Proc. of the Conference on Intelligent Robotics and Systems*

Weld, D. S. (1999). Recent advances in AI planning, In: *AI Magazine*, vol. 20, pp. 93-123

Wright, R. S., Lipchak, B., Haemel, N. & Sellers, G. (2010). *OpenGL SuperBible: Comprehensive Tutorial and Reference* (5th Edition), Addison-Wesley, ISBN 978-0321712615


## **Performance Evaluation of Fault-Tolerant Controllers in Robotic Manipulators**

Claudio Urrea1, John Kern1,2 and Holman Ortiz2

*1Departamento de Ingeniería Eléctrica, DIE, Universidad de Santiago de Chile, USACH, Santiago*
*2Escuela de Ingeniería Electrónica y Computación, Universidad Iberoamericana de Ciencias y Tecnología, UNICIT, Santiago*
*Chile*

## **1. Introduction**

Thanks to the incorporation of robotic systems, industrial processes have seen a great increase in productivity, yield and product quality. Nevertheless, as technological advancement permits a greater automation level, system complexity also increases, with a greater number of components and therefore a higher probability of failures or anomalous operation. This can result in hazards for operators, difficulties for users, economic losses, etc. Robotic automation, even if it has helped to minimize human operation in control and manual intervention tasks, has not been freed from failure occurrences. Although such failures cannot be eliminated, they can be properly managed through an adequate control system, allowing degraded performance in industrial processes to be reduced.

Fig. 1. Performance regions under failure occurrence

In figure 1 we see a scheme showing the different performance regions a given system can adopt when a failure occurs. If the system deviates to a degraded performance region in the presence of a failure, it can recover itself, moving into the optimum performance region or near to it. These systems are called fault-tolerant systems and have become increasingly important for robot manipulators, especially those performing tasks in remote or hazardous environments, like outer space, underwater or nuclear environments.

In this chapter we address the concept of fault tolerance applied to a robotic manipulator. We consider the first three degrees of freedom of a redundant SCARA-type robot, which is intended to follow a Cartesian test trajectory composed of a combination of linear segments. We developed three fault-tolerant controllers using classic control laws: *hyperbolic sine-cosine*, computed torque and adaptive inertia. The tests for these controllers are run in a simulation environment developed with MATLAB/Simulink. As a performance requirement, we consider the application of a failure consisting in the blocking of one of the manipulator's actuators during trajectory execution. Finally, we present a performance evaluation for each of the above-mentioned fault-tolerant controllers, through joint and Cartesian errors, by means of graphics and RMS values.

## **2. Fault tolerant control**

The concept of fault-tolerant control (Zhang & Jiang, 2003) comes first from airplane fault-tolerant control, although at the scientific level it appeared later, as a basic aim of the first IFAC SAFEPROCESS congress in 1991, with especially strong development since the beginning of the 21st century. Fault-tolerant control can be considered under either an active or a passive approach, as seen in figure 2a. Passive tolerant control is based on the ability of feedback systems to compensate perturbations, changes in system dynamics and even system failures (Puig, Quevedo, Escobet, Morcego, & C., 2004). Passive tolerant control considers a robust design of the feedback control system in order to immunize it against some specific failures (Patton, 1997). Active tolerant control is centered on on-line failure handling, that is, the ability to identify the failing component, determine the kind of damage, its magnitude and moment of appearance and, from this information, to activate some mechanism for rearrangement or control reconfiguration, even stopping the whole system, depending on the severity of the problem (Puig, Quevedo, Escobet, Morcego, & C., 2004).

Fig. 2a. Types of fault tolerant control

Fault tolerant control systems (being of hybrid nature) consider the application of a series of techniques like: component and structure analysis; detection, isolation and quantification of failures; physical or virtual redundancy of sensors and/or actuators; integrated real-time supervision of all tasks performed by the fault tolerant control, as we can see in figure 2b (Blanke, Kinnaert, Lunze, & Staroswiecki, 2000).

Fig. 2b. Stages included in the design of a fault tolerant control system
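As a rough illustration of the active branch of figure 2a, the sketch below switches the control law when a fault is detected. The residual test, laws, and gains are invented examples, not the chapter's design.

```python
# Toy active fault-tolerant supervisor: a residual test flags a blocked
# actuator and the supervisor reconfigures to the control law for the
# failed mode. Gains and thresholds are arbitrary examples.

def detect_fault(measured_velocity, commanded_velocity, eps=1e-3):
    # residual test: the joint is commanded to move but does not
    return abs(commanded_velocity) > eps and abs(measured_velocity) < eps

def nominal_law(error):
    return 5.0 * error            # placeholder law for the healthy system

def reconfigured_law(error):
    return 8.0 * error            # placeholder law for the underactuated mode

def supervisor(error, measured_velocity, commanded_velocity):
    faulty = detect_fault(measured_velocity, commanded_velocity)
    law = reconfigured_law if faulty else nominal_law
    return law(error), law.__name__

u, mode = supervisor(0.1, measured_velocity=0.0, commanded_velocity=0.5)
```

The detection, isolation, and reconfiguration stages of figure 2b would each replace one of these toy functions in a real system.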


## **3. SCARA-type redundant manipulator**

For the evaluation of fault-tolerant controllers, we consider the first three degrees of freedom of a redundant SCARA-type robotic manipulator, with a failure occurring in one of its actuators. Such a system can be considered an underactuated system, that is, a system with fewer actuators than joints (El-Salam, El-Haweet, & Pertew, 2005), (Xiujuan & Zhen, 2007). Underactuated systems present a greater degree of complexity compared with the simplicity of conventional robot control, and they have not yet been studied as deeply (Rubí, 2002). Their advantages have been recognized mainly because they are lighter and cheaper, with less energy consumption; consequently, an increasing level of attention is being paid to underactuated robots (Xiujuan & Zhen, 2007).

Figure 3 shows the scheme of a redundant SCARA-type robotic manipulator, and figure 4 a diagram of the first three degrees of freedom of this manipulator, on which the tests will be carried out.

Fig. 3. Scheme of a SCARA-type redundant manipulator

The considered failure is the blocking of the second actuator, which makes this robot an underactuated system.

Given this manipulator, it is necessary to obtain its model; the dynamic model of a manipulator with *n* joints can be expressed through equation (1):

$$\boldsymbol{\tau} = \mathbf{M}(\mathbf{q})\ddot{\mathbf{q}} + \mathbf{C}(\mathbf{q},\dot{\mathbf{q}}) + \mathbf{G}(\mathbf{q}) + \mathbf{F}(\dot{\mathbf{q}}) \tag{1}$$

where:

**τ** : Vector of generalized forces (*n*×1 dimension).

**M** : Inertia matrix (*n*×*n* dimension).

**C** : Centrifugal and Coriolis forces vector (*n*×1 dimension).

**G** : Gravity force vector (*n*×1 dimension).

**F** : Friction forces vector (*n*×1 dimension).

**q** : Joint position vector (*n*×1 dimension).

**q̇** : Joint speed vector (*n*×1 dimension).

**q̈** : Joint acceleration vector (*n*×1 dimension).

Under failure conditions in actuator number 2, that is, its blocking, component 2 of equation (1) becomes a constant.

Fig. 4. Scheme of the three first DOF of a redundant SCARA-type robotic manipulator
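To make the role of equation (1) concrete, the toy evaluation below uses made-up inertia, Coriolis, gravity and friction terms (placeholders, not the chapter's model; only the structure comes from equation (1)) and shows that freezing joint 2 leaves component 2 of the model constant:

```python
# Toy check of tau = M(q)*qdd + C(q, qd) + G(q) + F(qd) for a 3-DOF arm.
# All numeric model terms are invented placeholders.

import numpy as np

def tau(q, qd, qdd):
    M = np.diag([2.0, 1.5, 0.5])      # placeholder inertia matrix (n x n)
    C = 0.1 * qd * np.roll(qd, 1)     # placeholder Coriolis/centrifugal vector
    G = np.zeros(3)                   # placeholder gravity vector
    F = 0.2 * qd                      # placeholder viscous friction vector
    return M @ qdd + C + G + F

# Blocked actuator 2: joint 2 is frozen, so qd[1] = qdd[1] = 0 and the
# second component of the model stays constant (here zero).
qd = np.array([0.5, 0.0, 0.2])
qdd = np.array([0.1, 0.0, 0.0])
t = tau(np.array([0.3, -0.2, 0.1]), qd, qdd)
```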

## **4. Considered controllers**

Considering the hybrid nature of fault-tolerant control, an active fault-tolerant control is proposed, having a different control law according to the status of the robotic manipulator, *i.e.* normal or failing, with on-line sensing of possible failures and, in correspondence with this, reconfiguration of the controller by selecting the most adequate control law (changing inputs and outputs).

Next, we present a summary of the controllers considered for performance evaluation when a failure occurs in the second actuator of the previously described manipulator.

## **5. Fault tolerant controller:** *hyperbolic sine* **and** *cosine*

This controller is based on the classic *hyperbolic sine-cosine* controller presented in (Barahona, Espinosa, & L., 2002), composed of a proportional part based on the hyperbolic sine and hyperbolic cosine functions, a derivative part based on the hyperbolic sine, and gravity compensation, as shown in equation (2). The proposed fault-tolerant control law includes two classic *hyperbolic sine-cosine* controllers that are "switched" to reconfigure the fault-tolerant controller.

$$\boldsymbol{\tau} = \mathbf{K}_p \sinh(\mathbf{q}_e)\cosh(\mathbf{q}_e) - \mathbf{K}_v \sinh(\dot{\mathbf{q}}) + \mathbf{G}(\mathbf{q}) \tag{2}$$

$$\mathbf{q}_e = \mathbf{q}_d - \mathbf{q} \tag{3}$$

According to equations (2) and (3):

**Kp** : Proportional gain, diagonal definite positive matrix (*n*×*n* dimension).

**Kv** : Derivative gain, diagonal definite positive matrix (*n*×*n* dimension).

**qe** : Joint position error vector (*n*×1dimension).

**qd** : Desired joint position vector (*n*×1dimension).

In (Barahona, Espinosa, & L., 2002) it is established that the robotic manipulator's joint position error tends asymptotically to zero as time approaches infinity:

$$\lim_{t \to \infty} \mathbf{q}_e = \mathbf{0} \tag{4}$$

This behavior is proved by analyzing equation (5) and noting that the only equilibrium point of the system is the origin (0,0).

$$\frac{d}{dt}\begin{bmatrix} \mathbf{q}_e \\ \dot{\mathbf{q}} \end{bmatrix} = \begin{bmatrix} -\dot{\mathbf{q}} \\ \mathbf{M}(\mathbf{q})^{-1}\left( \mathbf{K}_p \sinh(\mathbf{q}_e)\cosh(\mathbf{q}_e) - \mathbf{K}_v \sinh(\dot{\mathbf{q}}) - \mathbf{C}(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}} \right) \end{bmatrix} \tag{5}$$
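A direct numerical transcription of control law (2) might look as follows; the gains and the gravity vector are arbitrary example values, not tuned for any robot.

```python
# Sketch of the hyperbolic sine-cosine law, eq. (2), with the error of eq. (3).
# Gains Kp, Kv and the gravity vector G are arbitrary examples.

import numpy as np

def shc_control(q, qd, q_des, Kp, Kv, G):
    qe = q_des - q                                    # position error, eq. (3)
    return Kp @ (np.sinh(qe) * np.cosh(qe)) - Kv @ np.sinh(qd) + G

Kp = np.diag([20.0, 20.0, 15.0])
Kv = np.diag([3.0, 3.0, 2.0])
G = np.zeros(3)

tau = shc_control(np.zeros(3), np.zeros(3), np.array([0.1, 0.0, -0.2]), Kp, Kv, G)
```

At the desired position with zero velocity the law reduces to pure gravity compensation, consistent with the origin being the only equilibrium point in equation (5).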

## **6. Fault tolerant controller: Computed torque**

Another active fault-tolerant controller analyzed here uses a computed-torque control law, consisting in the application of a torque that compensates the centrifugal, Coriolis, gravity and friction effects, as shown in equation (6).

$$\boldsymbol{\tau} = \hat{\mathbf{M}}(\mathbf{q})\left( \ddot{\mathbf{q}}_d + \mathbf{K}_v \dot{\mathbf{q}}_e + \mathbf{K}_p \mathbf{q}_e \right) + \hat{\mathbf{C}}(\mathbf{q},\dot{\mathbf{q}}) + \hat{\mathbf{G}}(\mathbf{q}) + \hat{\mathbf{F}}(\dot{\mathbf{q}}) \tag{6}$$

where:

**M**ˆ : Estimation of inertia matrix (*n*×*n* dimension).


$$\mathbf{K}\_{\rm v} = \begin{bmatrix} \mathbf{K}\_{v1} \\ & & \\ & \mathbf{K}\_{v2} \\ & & \ddots \\ & & & \mathbf{K}\_{vn} \end{bmatrix} \tag{7}$$

**Kv** : Diagonal definite positive matrix (*n*×*n* dimension).

$$\mathbf{K}\_{\rm p} = \begin{bmatrix} \mathbf{K}\_{p1} & & & \\ & \mathbf{K}\_{p2} & & \\ & & \ddots & \\ & & & \mathbf{K}\_{p\rm n} \end{bmatrix} \tag{8}$$


$$
\dot{\mathbf{q}}\_a \equiv \dot{\mathbf{q}}\_d \cdot \dot{\mathbf{q}} \tag{9}
$$

**qe** : Joint speed error vector (*n*×1dimension).

If estimation errors are little, joint errors near to a linear equation, as shown in equation (10).

$$\ddot{\mathbf{q}}\_{\mathbf{e}} + \mathbf{K}\_{\mathbf{v}} \dot{\mathbf{q}}\_{\mathbf{e}} + \mathbf{K}\_{\mathbf{p}} \mathbf{q}\_{\mathbf{e}} = \mathbf{0} \tag{10}$$
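Equation (10) says that, with good model estimates, each joint error behaves like a damped linear second-order system. A minimal numerical sketch (illustrative gains and Euler integration, not the chapter's simulator) shows the exponential decay:

```python
# With exact model estimates, eq. (10) gives linear error dynamics per joint:
#   qe_ddot + Kv*qe_dot + Kp*qe = 0
Kp, Kv = 100.0, 20.0       # illustrative gains (critically damped: Kv = 2*sqrt(Kp))
qe, qe_dot = 0.5, 0.0      # initial joint position error [rad]
dt = 1e-4

for _ in range(int(2.0 / dt)):             # simulate 2 s (Euler)
    qe_ddot = -Kv * qe_dot - Kp * qe
    qe_dot += qe_ddot * dt
    qe += qe_dot * dt

print(abs(qe))   # the error decays toward zero
```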

## **7. Fault tolerant controller: Adaptive inertia**

The third fault tolerant controller under examination is based on an adaptive control law, namely adaptive inertia (Lewis, Dawson, & Abdallah, 2004), (Siciliano & Khatib, 2008), for which it is necessary to consider the manipulator dynamic model in the form expressed in equation (11). The term corresponding to centrifugal and Coriolis forces is expressed through a matrix **Vm**.

$$\boldsymbol{\tau} = \mathbf{M}(q)\ddot{\mathbf{q}} + \mathbf{V}\_{\mathbf{m}}(q,\dot{q})\dot{\mathbf{q}} + \mathbf{G}(q) + \mathbf{F}(\dot{q})\tag{11}$$

In this case, we define an auxiliary error signal **r** and its derivative **ṙ**, as shown in equations (12) and (13), respectively:

$$\mathbf{r} = \Lambda \mathbf{q}\_e + \dot{\mathbf{q}}\_e \tag{12}$$

$$
\dot{\mathbf{r}} = \Lambda \dot{\mathbf{q}}\_e + \ddot{\mathbf{q}}\_e \tag{13}
$$

where:

558 Robotic Systems – Applications, Control and Programming


**Λ** : Diagonal definite positive matrix (*n*×*n* dimension).

Performance Evaluation of Fault-Tolerant Controllers in Robotic Manipulators 561


$$\boldsymbol{\Lambda} = \begin{bmatrix} \lambda\_1 & & & \\ & \lambda\_2 & & \\ & & \ddots & \\ & & & \lambda\_n \end{bmatrix} \tag{14}$$

When replacing equations (3), (9), (12) and (13) into expression (11), we obtain:

$$\boldsymbol{\tau} = \mathbf{M}(q)(\ddot{\mathbf{q}}\_{\rm d} + \boldsymbol{\Lambda}\dot{\mathbf{q}}\_{\rm e}) + \mathbf{V}\_{\rm m}(q, \dot{q})(\dot{\mathbf{q}}\_{\rm d} + \boldsymbol{\Lambda}\mathbf{q}\_{\rm e}) + \mathbf{G}(q) + \mathbf{F}(\dot{q}) - \mathbf{M}(q)\dot{\mathbf{r}} - \mathbf{V}\_{\rm m}(q, \dot{q})\mathbf{r} \tag{15}$$

And making the following identification:

$$\mathbf{Y}(\cdot)\boldsymbol{\varphi} = \mathbf{M}(q)(\ddot{\mathbf{q}}\_d + \boldsymbol{\Lambda}\dot{\mathbf{q}}\_e) + \mathbf{V}\_{\mathbf{m}}(q, \dot{q})(\dot{\mathbf{q}}\_d + \boldsymbol{\Lambda}\mathbf{q}\_e) + \mathbf{G}(q) + \mathbf{F}(\dot{q})\tag{16}$$

where:

$$\mathbf{Y}(\mathbf{q}, \dot{\mathbf{q}}, \mathbf{q}\_d, \dot{\mathbf{q}}\_d, \ddot{\mathbf{q}}\_d) = \begin{bmatrix} Y\_{11} & Y\_{12} & \cdots & Y\_{1n} \\ Y\_{21} & Y\_{22} & \cdots & Y\_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ Y\_{n1} & Y\_{n2} & \cdots & Y\_{nn} \end{bmatrix} \tag{17}$$

**Y**(·) : Regression matrix (*n*×*n* dimension).

**φ** : Parameter vector (*n*×1 dimension).

With these relationships, expression (15) can be rewritten in the following way:

$$\boldsymbol{\tau} = \mathbf{Y}(\cdot)\boldsymbol{\varphi} - \mathbf{M}(q)\dot{\mathbf{r}} - \mathbf{V}\_{\mathbf{m}}(q, \dot{q})\mathbf{r} \tag{18}$$

And the control torque is expressed through equation (19):

$$\boldsymbol{\tau} = \mathbf{Y}(\cdot)\hat{\boldsymbol{\varphi}} + \mathbf{K}\_{\mathbf{v}}\mathbf{r} \tag{19}$$

where:

**φ̂** : Parameter estimation vector (*n*×1 dimension).

**Kv**: Diagonal definite positive matrix (*n*×*n* dimension).

The adaptive control updating rule can be expressed by:

$$
\dot{\hat{\boldsymbol{\Phi}}} = -\dot{\tilde{\boldsymbol{\Phi}}} = \boldsymbol{\Gamma} \mathbf{Y}^{\mathrm{T}}(\cdot) \mathbf{r} \tag{20}
$$

where:

Γ: Diagonal definite positive matrix (*n*×*n* dimension).
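Equations (12), (19) and (20) can be exercised on a toy one-degree-of-freedom plant with a single unknown parameter (the inertia), where the regression matrix **Y**(·) reduces to a scalar. All gains and signals below are illustrative assumptions, not values from the chapter:

```python
import math

# Toy 1-DOF plant m*q_ddot = tau with unknown inertia m.
# Filtered error (eq. 12): r = lam*qe + qe_dot; scalar regressor Y = qd_ddot + lam*qe_dot;
# control (eq. 19): tau = Y*phi_hat + Kv*r; update (eq. 20): phi_hat_dot = gamma*Y*r.
m = 2.0                            # true (unknown) inertia
phi_hat = 0.5                      # initial inertia estimate
lam, Kv, gamma = 5.0, 10.0, 5.0    # illustrative gains
q, q_dot = 0.0, 0.0
dt = 1e-3

for k in range(int(20.0 / dt)):    # track qd(t) = sin(t) for 20 s
    t = k * dt
    qd, qd_dot, qd_ddot = math.sin(t), math.cos(t), -math.sin(t)
    qe, qe_dot = qd - q, qd_dot - q_dot
    r = lam * qe + qe_dot
    Y = qd_ddot + lam * qe_dot
    tau = Y * phi_hat + Kv * r
    phi_hat += gamma * Y * r * dt  # adaptation law, eq. (20)
    q_dot += (tau / m) * dt        # plant integration (Euler)
    q += q_dot * dt

print(abs(math.sin(20.0) - q))     # tracking error becomes small
```

The Lyapunov function V = ½ m r² + φ̃²/(2γ) gives V̇ = −Kv r² for this loop, which is why the filtered error, and hence the tracking error, decays.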

### **8. Fault tolerant control simulator**

The three above mentioned control laws, along with the dynamic model of the redundant SCARA-type manipulator considering the first three degrees of freedom (Addendum A), are run under the simulation structure shown in figure 5, where we can see the hybrid nature of this kind of controller.

In Addendum B we show the set of parameter values employed in the manipulator dynamic model, and the gains considered for each kind of fault tolerant controller.

Fig. 5. Block diagram of the structure of the fault tolerant controller used to test the above mentioned control laws
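The hybrid structure of figure 5 amounts to a supervisor that senses the actuator status on-line and selects the control law accordingly. A deliberately simplified sketch follows (all names are hypothetical, not from the chapter's simulator):

```python
# Hypothetical sketch of the supervisor in figure 5: on-line fault detection
# selects which control law drives the manipulator at each sample.
def nominal_law(state):
    """Stand-in for the controller used while all actuators are healthy."""
    return "tau_nominal"

def reconfigured_law(state):
    """Stand-in for the fault tolerant controller applied after a failure."""
    return "tau_fault_tolerant"

def supervisor(fault_detected):
    """Select the control law according to the sensed actuator status."""
    return reconfigured_law if fault_detected else nominal_law

# Before the failure the nominal law is active; afterwards the
# controller is reconfigured (inputs and outputs are switched).
law = supervisor(fault_detected=False)
assert law("state") == "tau_nominal"
law = supervisor(fault_detected=True)
assert law("state") == "tau_fault_tolerant"
```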

## **9. Results**


After establishing the control laws being utilized, we determine the trajectory to be entered in the control system to carry out the corresponding performance tests of fault tolerant control algorithms. This trajectory is displayed in figure 6.

Fig. 6. Cartesian test trajectory

Figures 7 and 8a show the curves corresponding to the differences between desired and real joint trajectories, and between desired and real Cartesian trajectories, respectively, all this under *hyperbolic sine-cosine* fault tolerant control when there is a failure in actuator 2 at 0.5 sec from initiating movement.

Where:

eq1 : Joint trajectory error, joint 1.

eq2 : Joint trajectory error, joint 2.

eq3 : Joint trajectory error, joint 3.

ex : Cartesian trajectory error, *x* axis.

ey : Cartesian trajectory error, *y* axis.

Fig. 7. Joint trajectory error with fault control using *hyperbolic sine-cosine* controller

Fig. 8a. Cartesian trajectory error with fault control using *hyperbolic sine-cosine* controller

The performance of the fault tolerant controller by computed torque is shown in figures 8b and 9, displaying the curves for joint and Cartesian errors under the same failure conditions as in the previous case.

Fig. 8b. Joint trajectory error with fault control using computed torque controller

Fig. 9. Cartesian trajectory error with fault control using computed torque controller

In figures 10 and 11 we can see charts displaying, respectively, the joint and Cartesian errors corresponding to the performance of the fault tolerant controller by adaptive inertia, under the same failure conditions imposed on the previous controllers.

Fig. 10. Joint trajectory error with fault control using adaptive inertia controller

Fig. 11. Cartesian trajectory error with fault control using adaptive inertia controller

Fig. 12. Performance index corresponding to joint trajectory

Finally, figures 12 and 13 show a performance summary of the analyzed fault tolerant controllers in terms of the joint and Cartesian *root mean square* (rms) errors, according to equation (21):

$$rms = \sqrt{\frac{1}{n} \sum\_{i=1}^{n} e\_i^2} \tag{21}$$

where *ei* represents the joint as well as the Cartesian trajectory errors.

Fig. 13. Performance index corresponding to Cartesian trajectory
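The rms index of equation (21) is straightforward to compute; the sketch below uses made-up error samples, not the chapter's data:

```python
import math

def rms(errors):
    """Root mean square of a sequence of trajectory errors (eq. 21)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Illustrative joint-error samples in degrees (hypothetical values).
print(rms([0.5, -1.0, 0.75, -0.25]))
```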

### **10. Conclusions**

In this work we presented a performance evaluation of three fault tolerant controllers based on classic control techniques: hyperbolic sine-cosine, computed torque and adaptive inertia. These fault tolerant controllers were applied to the first three degrees of freedom of a redundant SCARA-type robotic manipulator. The different system stages were implemented in a simulator developed with MATLAB/Simulink *software*, allowing us to represent the behavior of the robotic manipulator while following a desired trajectory when one of its actuators becomes blocked. In this way we obtained the corresponding simulation curves. From the obtained results, we observed that the adaptive inertia fault tolerant controller has errors with less severe maximums than the other controllers, resulting in more homogeneous manipulator movements. The greatest errors were produced by the computed torque fault tolerant controller, both in maximums and in *rms*. Consequently, the best performance is obtained when using the adaptive inertia controller, as shown in figures 12 and 13. It is also remarkable that the hyperbolic sine-cosine fault tolerant controller has a lower implementation complexity, since it does not require the second derivative of the joint position. This can be a decisive factor when high performance processors are not available.

## **11. Further developments**

Thanks to the development of this work, from the implemented simulation tools and the obtained results, trials of fault tolerant control systems are currently being carried out in order to apply them to actual robotic systems, with and without link redundancy, like the SCARA-type robots shown in figure 14 and figure 15, respectively.

Fig. 14. SCARA-type redundant robot, DIE-USACH

Fig. 15. SCARA-type robot, DIE-USACH

## **12. Addendum A: Manipulator's dynamic model**

The manipulator's dynamic model is given by equations a1 to a14.

$$\mathbf{M} = \begin{bmatrix} M\_{11} & M\_{12} & M\_{13} \\ M\_{21} & M\_{22} & M\_{23} \\ M\_{31} & M\_{32} & M\_{33} \end{bmatrix} \tag{a1}$$


$$M\_{11} = I\_{1zz} + I\_{2zz} + I\_{3zz} + m\_1 l\_{c1}^2 + m\_2 \left(l\_1^2 + l\_{c2}^2 + 2 l\_1 l\_{c2} \cos \theta\_2\right) + m\_3 \left(l\_1^2 + l\_2^2 + l\_{c3}^2 + 2 l\_1 l\_2 \cos \theta\_2 + 2 l\_2 l\_{c3} \cos \theta\_3 + 2 l\_1 l\_{c3} \cos \left(\theta\_2 + \theta\_3\right)\right) \tag{a2}$$

$$M\_{21} = M\_{12} = I\_{2zz} + I\_{3zz} + m\_2 \left(l\_{c2}^2 + l\_1 l\_{c2} \cos \theta\_2\right) + m\_3 \left(l\_2^2 + l\_{c3}^2 + l\_1 l\_2 \cos \theta\_2 + 2 l\_2 l\_{c3} \cos \theta\_3 + l\_1 l\_{c3} \cos \left(\theta\_2 + \theta\_3\right)\right) \tag{a3}$$

$$M\_{31} = M\_{13} = I\_{3zz} + m\_3 \left(l\_{c3}^2 + l\_2 l\_{c3} \cos \theta\_3\right) + m\_3 l\_1 l\_{c3} \cos \left(\theta\_2 + \theta\_3\right) \tag{a4}$$

$$M\_{22} = I\_{2zz} + I\_{3zz} + m\_2 l\_{c2}^2 + m\_3 \left( l\_2^2 + l\_{c3}^2 + 2 l\_2 l\_{c3} \cos \theta\_3 \right) \tag{a5}$$

$$M\_{32} = M\_{23} = I\_{3zz} + m\_3 \left(l\_{c3}^2 + l\_2 l\_{c3} \cos \theta\_3\right) \tag{a6}$$

$$M\_{33} = I\_{3zz} + m\_3 l\_{c3}^2 \tag{a7}$$

$$\mathbf{C} = \begin{bmatrix} \mathbf{C}\_{11} & \mathbf{C}\_{21} & \mathbf{C}\_{31} \end{bmatrix}^{\mathrm{T}} \tag{a8}$$

$$\begin{aligned} \mathbf{C}\_{11} &= -2l\_1 \big( m\_2 l\_{c2} \sin \theta\_2 + m\_3 l\_2 \sin \theta\_2 \big) \dot{\theta}\_1 \dot{\theta}\_2 - 2l\_1 m\_3 l\_{c3} \sin \big( \theta\_2 + \theta\_3 \big) \dot{\theta}\_1 \dot{\theta}\_2 + \dots \\ &- m\_2 l\_1 l\_{c2} \sin \theta\_2 \cdot \dot{\theta}\_2^2 + m\_3 \big( l\_1 l\_2 \sin \theta\_2 + l\_1 l\_{c3} \sin \big( \theta\_2 + \theta\_3 \big) \big) \dot{\theta}\_2^2 + \dots \\ &- 2l\_{c3} m\_3 \big( l\_2 \sin \theta\_3 + l\_1 \sin \big( \theta\_2 + \theta\_3 \big) \big) \dot{\theta}\_1 \dot{\theta}\_3 + \dots \\ &- 2m\_3 l\_{c3} \big( l\_2 \sin \theta\_3 + l\_1 \sin \big( \theta\_2 + \theta\_3 \big) \big) \dot{\theta}\_2 \dot{\theta}\_3 + \dots \\ &m\_3 \big( -l\_2 l\_{c3} \sin \theta\_3 - l\_1 l\_{c3} \sin \big( \theta\_2 + \theta\_3 \big) \big) \dot{\theta}\_3^2 \end{aligned} \tag{a9}$$

$$\begin{aligned} \mathbf{C}\_{21} &= m\_3 \left( l\_1 l\_2 \sin \theta\_2 + l\_1 l\_{c3} \sin \left( \theta\_2 + \theta\_3 \right) \right) \dot{\theta}\_1^2 + m\_2 l\_1 l\_{c2} \sin \theta\_2 \cdot \dot{\theta}\_1^2 + \dots \\ &- 2 m\_3 l\_2 l\_{c3} \sin \theta\_3 \cdot \dot{\theta}\_1 \dot{\theta}\_3 - 2 m\_3 l\_2 l\_{c3} \sin \theta\_3 \cdot \dot{\theta}\_2 \dot{\theta}\_3 - m\_3 l\_2 l\_{c3} \sin \theta\_3 \cdot \dot{\theta}\_3^2 \end{aligned} \tag{a10}$$

$$\begin{aligned} \mathbf{C}\_{31} &= m\_3 \left( l\_2 l\_{c3} \sin \theta\_3 + l\_1 l\_{c3} \sin \left( \theta\_2 + \theta\_3 \right) \right) \dot{\theta}\_1^2 + \dots \\ &\quad 2 m\_3 l\_2 l\_{c3} \sin \theta\_3 \cdot \dot{\theta}\_1 \dot{\theta}\_2 + m\_3 l\_2 l\_{c3} \sin \theta\_3 \cdot \dot{\theta}\_2^2 \end{aligned} \tag{a11}$$

$$\begin{aligned} \mathbf{C}\_{31} &= m\_3 \left( l\_2 l\_{c3} \sin \theta\_3 + l\_1 l\_{c3} \sin \left( \theta\_2 + \theta\_3 \right) \right) \dot{\theta}\_1^2 + 2 m\_3 l\_2 l\_{c3} \sin \theta\_3 \cdot \dot{\theta}\_1 \dot{\theta}\_2 + \dots \\ m\_3 l\_2 l\_{c3} \sin \theta\_3 \cdot \dot{\theta}\_2^2 \end{aligned} \tag{a12}$$

$$\mathbf{G} = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^{T} \tag{a13}$$

$$\mathbf{F} = \begin{bmatrix} F\_{11} & F\_{21} & F\_{31} \end{bmatrix}^T \tag{a14}$$

where:

*m*1 : First link mass.

*m*2 : Second link mass.

*m*3 : Third link mass.

*l*1 : First link length.

*l*2 : Second link length.

*l*3 : Third link length.

*l*c1 : Length from 1st link origin to its centroid.

*l*c2 : Length from 2nd link origin to its centroid.

*l*c3 : Length from 3rd link origin to its centroid.

*I*1zz : 1st link inertial momentum with respect to the *z* axis of its joint.

*I*2zz : 2nd link inertial momentum with respect to the *z* axis of its joint.

*I*3zz : 3rd link inertial momentum with respect to the *z* axis of its joint.
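As a sanity check (not part of the original text), the inertia matrix of equations (a1) to (a7) can be evaluated with the Table B1 parameters and tested for symmetry and positive definiteness at a sample configuration:

```python
import math

# Manipulator parameters from Table B1 (Addendum B).
l1 = l2 = 0.2
lc1 = lc2 = 0.0229; lc3 = 0.0983
m1 = m2 = 2.0458; m3 = 6.5225
I1zz = I2zz = 0.0116; I3zz = 0.1213

def inertia_matrix(th2, th3):
    """Mass matrix M(q) assembled from equations (a2)-(a7)."""
    c2, c3, c23 = math.cos(th2), math.cos(th3), math.cos(th2 + th3)
    M11 = (I1zz + I2zz + I3zz + m1*lc1**2
           + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2)
           + m3*(l1**2 + l2**2 + lc3**2 + 2*l1*l2*c2
                 + 2*l2*lc3*c3 + 2*l1*lc3*c23))
    M12 = (I2zz + I3zz + m2*(lc2**2 + l1*lc2*c2)
           + m3*(l2**2 + lc3**2 + l1*l2*c2 + 2*l2*lc3*c3 + l1*lc3*c23))
    M13 = I3zz + m3*(lc3**2 + l2*lc3*c3) + m3*l1*lc3*c23
    M22 = I2zz + I3zz + m2*lc2**2 + m3*(l2**2 + lc3**2 + 2*l2*lc3*c3)
    M23 = I3zz + m3*(lc3**2 + l2*lc3*c3)
    M33 = I3zz + m3*lc3**2
    return [[M11, M12, M13], [M12, M22, M23], [M13, M23, M33]]

M = inertia_matrix(0.0, 0.0)
# Leading principal minors > 0 implies M is positive definite,
# as a physical inertia matrix must be.
d1 = M[0][0]
d2 = M[0][0]*M[1][1] - M[0][1]*M[1][0]
d3 = (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
      - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
      + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))
print(d1 > 0 and d2 > 0 and d3 > 0)
```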



## **13. Addendum B: Considered parameter values**

Parameter values considered for the manipulator as well as controller gains values are shown in tables B1 and B2, respectively.


| Parameter | Link 1 | Link 2 | Link 3 | Units |
|---|---|---|---|---|
| *l* | *l*1 = 0.2 | *l*2 = 0.2 | *l*3 = 0.2 | [m] |
| *l*c | *l*c1 = 0.0229 | *l*c2 = 0.0229 | *l*c3 = 0.0983 | [m] |
| *m* | *m*1 = 2.0458 | *m*2 = 2.0458 | *m*3 = 6.5225 | [kg] |
| *I*zz | *I*1zz = 0.0116 | *I*2zz = 0.0116 | *I*3zz = 0.1213 | [kg·m²] |
| *F*v | *F*v1 = 0.025 | *F*v2 = 0.025 | *F*v3 = 0.025 | [N·m·s/rad] |
| *F*eca | *F*eca1 = 0.05 | *F*eca2 = 0.05 | *F*eca3 = 0.05 | [N·m] |
| *F*ecb | *F*ecb1 = -0.05 | *F*ecb2 = -0.05 | *F*ecb3 = -0.05 | [N·m] |

Table B1. Considered parameters for the manipulator


| Constants | Hyperbolic Sine-Cosine | Computed Torque | Adaptive Inertia |
|---|---|---|---|
| *K*p1, *K*p2, *K*p3 | 400, 300, 200 | 800, 800, 800 | |
| *K*v1, *K*v2, *K*v3 | 3, 2, 1 | 140, 140, 140 | 20, 20, 20 |
| λ1, λ2, λ3 | | | 8, 8, 8 |
| γ1, γ2, γ3 | | | 0.1, 0.1, 0.1 |

Table B2. Controller gains


## **14. References**




Barahona, J., Espinosa & L., C.F., 2002. Evaluación Experimental de Controladores de Posición tipo Saturados para Robot Manipuladores. In *Congreso Nacional de Electrónica, Centro de Convenciones William O. Jenkins*. Puebla, México, 2002.

Blanke, M., Kinnaert, M., Lunze, J. & Staroswiecki, M., 2000. What is Fault-Tolerant Control. In *IFAC Symposium on Fault Detection, Supervision and Safety for Technical Process - SAFEPROCESS 2000*. Budapest, 2000. Springer-Verlag Berlin Heidelberg.

El-Salam, A., El-Haweet, W. & Pertew, A., 2005. Fault Tolerant Kinematic Controller Design for Underactuated Robot Manipulators. In *The Automatic Control and Systems Engineering Conference*. Cairo, 2005.

Lewis, F., Dawson, D. & Abdallah, C., 2004. *Robot Manipulator Control: Theory and Practice*. New York: Marcel Dekker, Inc.

Patton, R.J., 1997. Fault-Tolerant Control: The 1997 Situation. In *Proc. IFAC Symposium Safeprocess*. Kingston Upon Hull, 1997.

Puig, V. et al., 2004. Control Tolerante a Fallos (Parte I): Fundamentos y Diagnóstico de Fallos. *Revista Iberoamericana de Automática e Informática Industrial*, 1(1), pp. 15-31.

Rubí, J., 2002. *Cinemática, Dinámica y Control de Robots Redundantes y Robots Subactuados*. Tesis Doctoral. San Sebastián, España: Universidad de Navarra.

Siciliano, B. & Khatib, O., 2008. *Handbook of Robotics*. Berlin, Heidelberg: Springer-Verlag.

Xiujuan, D. & Zhen, L., 2007. Underactuated Robot Dynamic Modeling and Control Based on Embedding Model. In *12th IFToMM World Congress*. Besançon, France, 2007.

Zhang, Y. & Jiang, J., 2003. Bibliographical Review on Reconfigurable Fault-Tolerant Control Systems. In *Proceedings IFAC SAFEPROCESS*. Washington, D.C., USA, 2003.

## **An Approach to Distributed Component-Based Software for Robotics**\*

A. C. Domínguez-Brito, J. Cabrera-Gámez, J. D. Hernández-Sosa, J. Isern-González and E. Fernández-Perdomo
*Instituto Universitario SIANI & the Departamento de Informática y Sistemas, Universidad de Las Palmas de Gran Canaria, Spain*

### **1. Introduction**

Programming robotic systems is not an easy task; even developing software for simple systems may be difficult, or at least cumbersome and error prone. Those systems are usually multi-threaded and multi-process, so synchronization problems associated with processes and threads must be faced. In addition, distributed systems in network environments are also very common, and coordination between processes and threads on different machines increases programming complexity, especially if the network environment is not completely reliable, as with a wireless network. Hardware abstraction is another question to take into account: robotic systems contain hardware uncommon for an ordinary computer user, namely sensors and actuators with APIs (Application Programming Interfaces) that are on occasion not very stable from version to version, and many times not well supported on the most common operating systems. Besides, it is not rare that even sensors or actuators with the same functionality (i.e. range sensors, cameras, etc.) are endowed with APIs with very different semantics. Moreover, many robotic systems must operate under hard real time conditions in order to guarantee system and operation integrity, so it is necessary that the software strictly obeys specific response times, deadlines and high frequencies of operation. Software integration and portability are also important problems in those systems, since many times in just one of them we may find a variety of machines, operating systems, drivers and libraries with which we have to cope. Last but not least, we want robotic systems to behave "autonomously" and "intelligently", and to carry out complex tasks like mapping a building, navigating safely in a cluttered, dynamic and crowded environment or driving a car safely on a motorway, to name a few.

Although there is no established standard methodology or solution for the situation described in the previous paragraph, in the last ten years many approaches have blossomed in the robotics community to tackle it. In fact, many software engineering techniques and experiences coming from other areas of computer science are being applied to the specific area of robotic control software. A review of the state of the art of software engineering applied

<sup>\*</sup>This work has been partially supported by the Research Project *TIN2008-06068* funded by the Ministerio de Ciencia y Educación, Gobierno de España, Spain, and by the Research Project *ProID20100062* funded by the Agencia Canaria de Investigación, Innovación y Sociedad de la Información (ACIISI), Gobierno de Canarias, Spain, and by the European Union's FEDER funds.


*An Approach to Distributed Component-Based Software for Robotics*


specifically to robotics can be found in [Brugali (2007)]. Many of the approaches that have come up in these last years, albeit different, are either based completely on, or follow or share to a certain extent, some of the fundamental ideas of the CBSE (Component-Based Software Engineering) paradigm [George T. Heineman & William T. Councill (2001)] as a design principle for robotic software.

Some of the significant approaches freely available within the robotics community based on the CBSE paradigm are G*en*oM/BIP [Mallet et al. (2010)][Basu et al. (2006)][Bensalem et al. (2009)], Smartsoft [Schlegel et al. (2009)], OROCOS [*The Orocos Project* (2011)], project ORCA [Brooks et al. (2005)], OpenRTM-aist [Ando et al. (2008)] and Willow Garage's ROS project [*ROS: Robot Operating System* (2011)]. All these approaches, in general incompatible with one another, use many similar abstractions in order to build robotic software out of a set of software components. Each of them usually solves or deals with many of the aforementioned problems faced when programming robotic systems (hard real-time operation; distributed, multi-thread and multi-process programming; hardware abstraction; portability; etc.), and using any of them implies getting used to its own methodology, abstractions and software tools for developing robotic software. Our group has also developed an approach to tackle the problem of programming robotic systems. For some years now we have been using a CBSE C++ distributed framework designed and developed in our laboratory, termed CoolBOT [Antonio C. Domínguez-Brito et al. (2007)], which is likewise aimed at easing software development for robotic systems. Over several years of use, acquiring experience programming mobile robotic systems, we have ended up integrating into CoolBOT some new developments in order to improve its use and operation. These improvements have focused mainly on two questions, namely transparent distributed computation and "deeper" interface decoupling; both will be presented in depth in the next sections. This chapter is organized as follows. Section 2 briefly introduces an overview of CoolBOT. Next, we focus on each of the mentioned topics in sections 4 and 5, respectively. The last section is devoted to the conclusions of this work.

## **2. CoolBOT. Overview**

CoolBOT [Antonio C. Domínguez-Brito et al. (2007)] is a C++ component-oriented programming framework aimed at robotics, developed at our laboratory some years ago [Domínguez-Brito et al. (2004)], which we normally use to develop the control software for the mobile robots available at our institution. It is a programming framework that follows the CBSE paradigm for software development. The key concept in the CBSE paradigm is that of *software component*, a unit of integration, composition and software reuse [Brugali & Scandurra (2009)][Brugali & Shakhimardanov (2010)]. Complex systems may be composed of several ready-to-use components. Ideally, we can program a complete system by interconnecting available components out of a repository of previously developed components; a graphical interface or similar tool should then be all that is necessary to set up a system. Hence, being CBSE oriented, CoolBOT also makes use of this central concept to build software systems.

Fig. 1 gives a global view of a typical system developed using CoolBOT. As we can observe, there are five CoolBOT *components* and two CoolBOT *views*, all of them forming four CoolBOT *integrations* involving three different machines sharing a computer network. In addition, hosted by one of the machines, there is a non-CoolBOT application which uses a CoolBOT *probe* to interact with one of the components of the system. Thus, in CoolBOT we can find three types of software components: *components*, *views* and *probes*.


Fig. 1. Diagram of the elements and interconnections of a system designed with CoolBOT.

All these three types are software components in the whole sense, since we can compose them indistinctly and arbitrarily, without changing their implementation, to build up a given system. The main difference among them is that *views* and *probes* are "light-weight" software components in relation to CoolBOT *components*. *Views* are software components which implement graphical control and monitoring interfaces for CoolBOT systems, completely decoupled from them. *Probes*, on the other hand, mainly allow us to implement decoupled interfaces for the interoperation of CoolBOT systems with non-CoolBOT applications, as depicted in the figure. Both will be explained in more detail in section 5.

In CoolBOT, systems are made of CoolBOT *components* (components for short). A component is an *active entity*, in the sense that it has its own unit of execution. It also presents a clear separation between its external interface and its internals. Components intercommunicate only through their external interfaces, which are formed by *input and output ports*. When connected, these form *port connections*, as depicted in Fig. 1, and through them components interchange discrete units of information termed *port packets*. *Views* and *probes* have a similar external interface of input and output ports, hence they can also be interconnected among themselves and with components using port connections. The functionality of a whole system emerges from the interaction through port connections among all the components integrating the system, including views and probes.

### **2.1 Port connections, ports and port packets**

CoolBOT components interact using *port connections*. A *port connection* is defined by an output port and an input port. Port connections are established dynamically at runtime, they are unidirectional (from output port to input port), and they follow a *publish/subscribe* paradigm of communication [Brugali & Shakhimardanov (2010)]. In this way, we can have multiple subscribers for the same output port, as shown in Fig. 2, and multiple publishers feeding the same input port, as illustrated in Fig. 3. Note that input and output ports are decoupled, in the sense that component publishers do not necessarily know who is receiving what they publish through their output ports; conversely, component subscribers do not necessarily know who is publishing the data that reach them through their input ports. Data are sent through port connections in discrete units called *port packets*.

Fig. 2. Port connections: one publisher, many subscribers.

Fig. 3. Port connections: many publishers, one subscriber.

To establish a port connection the ports involved should be compatible, i.e., they must have compatible types and should transport the same types of port packets. In particular, when defining an output or input port we have to specify three aspects, namely:

1. **An identifier**: a name for the port, which has to be unique at component (or *view* or *probe*) scope. The port identifier is what we use to refer to a specific port when establishing/de-establishing port connections.
2. **A port type**: there are several typologies available for input and output ports, and depending on how we combine them, we can establish a different model of communication for each connection. The typologies of the input and output ports involved in a connection determine the pattern and semantics of the communication through it, following the same philosophy as the communication patterns of Smartsoft [Schlegel et al. (2009)] and the interfaces for component communication available in OROCOS [*The Orocos Project* (2011)]. Tables 1 and 2 show all the types of connections currently available in CoolBOT; we will elaborate on them later.
3. **Port packet types**: the types of port packets accepted by the output or input port. Most input and output port types accept only one type of port packet, although some accept a set of different port packet types.

Bear in mind that in CoolBOT port connections are established dynamically, but the definition of each input and output port of each software component, whether a component, a view or a probe, is static. Thus, for a given component we statically define its external interface of input and output ports, each with its identifier, port type and accepted port packet types; port connections, in contrast, are established at runtime. Only a compatible pair of output and input ports can form a port connection, and they are compatible when two conditions are fulfilled: first, they have compatible port types (the compatible combinations are shown in Tables 1 and 2); and second, the port packet types the pair of ports accepts also match. Tables 1 and 2 show all the possible types and combinations of output and input ports available in CoolBOT. As we can observe, there are two groups of port connection types, depending on the types of the output and input ports we connect, namely:

• *Active Publisher/Passive Subscriber (AP/PS)* **connections**. In this kind of connection the publisher is the *active* part of the communication, since it is the publisher (the sender) who invests more computational resources in the communication. More specifically, there is a buffer (a cache, a memory) in the input port where incoming port packets get stored when they reach the subscriber (the receiver) end. We say the publisher is active because the copying of port packets into the input port buffers is done by the publisher's threads of execution. Those buffers get signaled so that subscribers can access them at their own pace. Evidently, if the output port has several subscribers, the publisher has to make a copy for each of them, so the computational cost of copying increases, and this cost is afforded by the publisher. Table 1 enumerates all the available types of port connections following this model of communication.

• *Passive Publisher/Active Subscriber (PP/AS)* **connections**. These connections follow a model of communication where the subscriber plays the *active* role, in the sense that, in contrast to the previous ones, the subscriber is the part of the communication which invests more computational resources. In this type of connection there are buffers at both ends of the port connection. When the publisher sends (publishes) a port packet through the connection, it gets stored in a buffer in the output port, and the input port gets signaled in order to notify the subscriber that there are new data on the connection. Note that the publisher does not copy the port packet into the subscriber's input port buffer; it is the subscriber who copies the port packet into its input port buffer when it accesses its input port to get fresh port packets stored at the other end of the port connection. In this way, when we have several subscribers and one publisher, the computational cost of copying is afforded by each subscriber separately.

Apart from the computational cost of using AP/PS connections versus PP/AS connections, there is another important aspect to take into account when using either of them. PP/AS connections are *persistent* in the sense that, as explained in Table 2, the last port packet sent by the publisher through the output port is stored there, in the output port buffer, so subscribers which connect to that output port after the last port packet was sent can still access it afterwards. On the contrary, AP/PS connections are not persistent: port packets are stored on the subscriber end, so packets, once they have been sent, are not available for new subscribers, because port packets only reach those subscribers which are connected to the output port right at the moment of sending.


Another important aspect to take into account with respect to port connections is that they follow an asynchronous model of communication. Note that they are unidirectional: port packets go from the publisher's output port to the subscriber's input port; the publisher sends port

packets and keeps doing something different; the subscriber gets packets at its own pace, and not necessarily at the exact moment they arrive at its input ports.

| Port connection type | Description |
|---|---|
| *tick* | A *tick* output port connected to a *tick* input port. These connections do not transport any port packet; they only communicate the occurrence of an event. |
| *last* | A *generic* output port connected to a *last* input port. The input port always stores the *last* port packet sent through the connection by the publisher (or publishers). Only one type of port packet is accepted through the port connection. |
| *fifo* | At the input port there is a circular FIFO with a specific length, where port packets sent through the port connection by publishers get stored. Only one type of port packet is accepted through the port connection. |
| *unbounded fifo* (*ufifo*) | At the input port there is a FIFO with a specific length, where port packets sent through the port connection by publishers get stored. When the FIFO is full and port packets keep coming, the FIFO grows in order to store them. Only one type of port packet is accepted through the port connection. |
| *multipacket* | A *multipacket* output port connected to a *multipacket* input port. These connections accept a set of port packet types. At the input port there is a different buffer for each accepted port packet type; the last port packet of each type sent through the connection by publishers gets stored in it. |
| *lazymultipacket* | These connections accept a set of port packet types. At the input port there is a different buffer for each accepted port packet type; the last port packet of each type sent through the connection by publishers gets stored in it. At the output port, port packets get stored in a queue of packets to send; they are actually sent to the other end when a *flush* operation is applied by the publisher on the output port. |

Table 1. Available port connection types: *active publisher/passive subscriber (AP/PS)* connections.

| Port connection type | Description |
|---|---|
| *poster* | A *poster* output port connected to a *poster* input port. There is a buffer at the output port where the last packet sent by the publisher gets stored, and another buffer at the input port which is "synchronized" with the output port buffer when the subscriber accesses its input port to get the last port packet sent through the connection; hence, the port packet gets copied to the input port only when a new port packet has been stored at the output port end. Only one type of port packet is accepted through the port connection. |

Table 2. Available port connection types: *passive publisher/active subscriber (PP/AS)* connections.

As to port packets, when defining an output or input port we have to specify which port packet type or types (depending on the port type being defined) the port will accept. In general, port packet types are defined by the user, as we will see in section 3; we may also use the port packet types provided by CoolBOT itself (the available ones are shown in Table 3), port packet types previously developed, or third-party port packet types.

| Port packet type | Description |
|---|---|
| PacketUChar | Transports a C++ unsigned char. |
| PacketInt | Transports a C++ int. |
| PacketLong | Transports a C++ long. |
| PacketDouble | Transports a C++ double. |
| PacketTime | Transports a CoolBOT Time value (a time-stamp). |
| PacketCoordinates2D | Transports a CoolBOT Coordinates2D value (stores a 2D point). |
| PacketFrame2D | Transports a CoolBOT Frame2D value (stores a 2D frame). |
| PacketCoordinates3D | Transports a CoolBOT Coordinates3D value (stores a 3D point). |
| PacketFrame3D | Transports a CoolBOT Frame3D value (stores a 3D frame). |

Table 3. Available port packet types provided by CoolBOT itself.

### **2.2 CoolBOT components**

CoolBOT components are *active objects* [Ellis & Gibbs (1989)]; as [Brugali & Shakhimardanov (2010)] states, "a component is a computation unit that encapsulates data structures and operations to manipulate them". Moreover, in CoolBOT, components can be seen as "data-flow machines", since they process data when data are available at their input ports; otherwise, they stay idle, waiting for incoming port packets. In turn, components send processed data in the form of port packets through their output ports. All in all, the model of computation of CoolBOT systems follows the *Flow-Based Programming* (FBP) paradigm [J. Paul Morrison (2010)], so systems can be built as networks of components interconnected by means of port connections. More formally, CoolBOT components are modeled as *port automata* [Steenstrup et al. (1983)][Stewart et al. (1997)]. Fig. 4 provides a view of the structure of a component in CoolBOT. There is a clear separation between its external interface and its internal structure. Externally, a component can only communicate with other components (and views and probes) through its input and output ports. Hence, a component's external interface comprises all its input and output ports, their types, and the port packets it accepts through them. As we can see in the figure, there are two special ports in any component: the *control port* and the *monitoring port*; the rest of the ports are user defined. The *control port* allows modification of a component's *controllable variables*. Through the *monitoring port*, components publish their *observable variables*. Both ports allow an external supervisor (i.e. another component, a view or a probe) to observe and modify the execution and configuration of a given component.

Internally, a component is organized as an automaton, as illustrated in Fig. 4. All components follow the same automaton structure. The part of this automaton which is common to all components is called the *default automaton*, and comprises several states, namely: *starting*, *ready*, *suspended*, *end*, and four states for component exception handling. This structure allows an external supervisor to control the execution of any component in a system using a standard protocol, much like in an operating system where threads and processes transit among different states during their lifetime. To complete the automaton of a component, the user defines the *user automaton*, which is specific to each component and implements its functionality. This is represented in Fig. 4 with a dotted circle as the meta-state *running*. Transitions among a component's automaton states are triggered by incoming port packets through any of its input ports, and also by internal events (timers, the empty transition, a controllable variable modified by an external supervisor, entering or exiting a state, etc.). The user can associate C++ callbacks to transitions, much like the *codels* of G*en*oM [Mallet et al. (2010)] modules.
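The automaton-with-callbacks idea can be sketched in a few lines of C++. This is an illustrative toy, not CoolBOT's real machinery: the class name, the string-based events and the state set below are our simplifications of the default automaton described above.

```cpp
// Toy sketch of a port automaton with C++ callbacks attached to transitions.
// State names follow the default automaton in the text; everything else is assumed.
#include <functional>
#include <map>
#include <string>
#include <utility>

enum class State { Starting, Ready, Running, Suspended, End, Error };

class PortAutomaton {
public:
  using Callback = std::function<void()>;

  // Register a callback for the transition (from, event) -> to.
  void onTransition(State from, const std::string& event, State to, Callback cb) {
    table_[{from, event}] = {to, std::move(cb)};
  }

  // Deliver an event, e.g. an incoming port packet, a timer, or a
  // controllable-variable change. Returns false if the current state ignores it.
  bool dispatch(const std::string& event) {
    auto it = table_.find({state_, event});
    if (it == table_.end()) return false;
    state_ = it->second.first;
    if (it->second.second) it->second.second();  // user "codel"-like callback
    return true;
  }

  State state() const { return state_; }

private:
  State state_ = State::Starting;
  std::map<std::pair<State, std::string>, std::pair<State, Callback>> table_;
};
```

An external supervisor would drive such an automaton by injecting events through the control port, while user callbacks implement the component's functionality.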


An Approach to Distributed Component-Based Software for Robotics 579


Fig. 5. Abstraction layers in CoolBOT.


Fig. 4. CoolBOT. Component structure.

A key design principle for CoolBOT components is to take advantage of the multithreaded and multicore capabilities of mainstream operating systems, and of the infrastructure they provide for multithreaded programming. Another key design principle for components was to separate functional logic from thread synchronization logic. The user should only be concerned with the functional logic; synchronization issues should be completely transparent to the developer, with CoolBOT responsible for them behind the scenes. As active objects, CoolBOT components can organize their execution using multiple threads of execution, as depicted in Fig. 4. Those threads are mapped onto the underlying operating system (see Fig. 5). Thus, when developing a component, the user assigns threads disjointly to automaton states, and to the input ports and internal events provoking transitions. Those transitions, i.e. their associated callbacks, will be executed by the specific threads assigned to them. The synchronization among them is guaranteed by the underlying framework infrastructure. All components are endowed with at least one thread of execution; the rest, if any, are user defined.
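The "assign ports to threads, let the framework synchronize" idea can be sketched with standard C++ threading primitives. This is our own minimal illustration under stated assumptions (class names, `int` packets, one job queue per thread); it is not CoolBOT's implementation.

```cpp
// Sketch: each component thread owns a job queue; every input port is mapped
// to exactly one thread, so a port's callbacks always run on its assigned
// thread and need no locking of their own. Illustrative only, not CoolBOT's API.
#include <condition_variable>
#include <functional>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <utility>
#include <vector>

class ThreadedDispatcher {
public:
  explicit ThreadedDispatcher(unsigned nthreads) : queues_(nthreads) {
    for (unsigned i = 0; i < nthreads; ++i)
      workers_.emplace_back([this, i] { run(i); });
  }

  ~ThreadedDispatcher() {                 // drain queues, then join workers
    for (auto& q : queues_) {
      std::lock_guard<std::mutex> lk(q.m);
      q.stop = true;
      q.cv.notify_one();
    }
    for (auto& t : workers_) t.join();
  }

  // Assign a port (disjointly) to one of the component's threads.
  void assignPort(const std::string& port, unsigned thread,
                  std::function<void(int)> callback) {
    ports_[port] = {thread, std::move(callback)};
  }

  // Called when a packet arrives at 'port': enqueue on the assigned thread.
  void deliver(const std::string& port, int packet) {
    auto& p = ports_.at(port);
    auto& q = queues_[p.first];
    std::lock_guard<std::mutex> lk(q.m);
    q.jobs.push([cb = p.second, packet] { cb(packet); });
    q.cv.notify_one();
  }

private:
  struct Queue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> jobs;
    bool stop = false;
  };

  void run(unsigned i) {
    auto& q = queues_[i];
    for (;;) {
      std::unique_lock<std::mutex> lk(q.m);
      q.cv.wait(lk, [&] { return q.stop || !q.jobs.empty(); });
      if (q.jobs.empty()) return;         // stopped and drained
      auto job = std::move(q.jobs.front());
      q.jobs.pop();
      lk.unlock();
      job();                              // callback runs on the port's thread
    }
  }

  std::vector<Queue> queues_;             // one queue per component thread
  std::vector<std::thread> workers_;
  std::map<std::string, std::pair<unsigned, std::function<void(int)>>> ports_;
};
```

Since each port is serviced by exactly one thread, callbacks for the same port never run concurrently, which is what frees the user from explicit synchronization.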

As depicted in Fig. 1, CoolBOT provides means for distributed computation. A given system can be mapped onto a set of machines sharing a computer network. Port connections among components, views and probes are transparently multiplexed using TCP/IP connections (see section 4). Furthermore, each machine can host one or several CoolBOT *integrations*. A CoolBOT *integration* is an application (a process) which integrates instances of CoolBOT components, views and probes. Integrations can be instantiated on any machine, and are user defined using a description language, as we will see in the next section.

### **3. CoolBOT development tools**

CoolBOT provides several tools for helping developers. Fig. 6 shows the main ones, namely: coolbot-ske and coolbot-c. The former, coolbot-ske, is used to create a directory structure for the development of CoolBOT components, probes, port packets, views and integrations. It also generates CMake [Kitware, Inc. (2010)] template files for compiling them, description language template files for coolbot-c, and test programs for components. The



latter tool, the CoolBOT compiler coolbot-c, generates C++ skeletons for components, port packets, views and integrations, and for each component it also generates its corresponding probe. Both the probe and the skeletons are C++ classes, and CoolBOT uses a description language as source code to generate those C++ classes. Except for probes, which are complete functional C++ classes, coolbot-c generates incomplete C++ classes which constitute the mentioned C++ skeletons. They are incomplete in the sense that they lack functionality; the user is responsible for completing them. Once completed, and using the CMake templates provided by coolbot-ske, they can be compiled. Components, probes, port packets and views compile into dynamic libraries; integrations compile into executable programs. Moreover, the coolbot-c compiler preserves information when recompiling description files which have been modified, in such a way that all C++ code introduced by the user into the skeletons is preserved.

## **4. Transparent distributed computation**

Transparent distributed computation is the first development we have integrated into CoolBOT in order to improve its use and operation. The main idea was to make network communications as transparent as possible to developers (and components); we wanted CoolBOT to be responsible for them on behalf of components. Thus, at the system level, connecting two component instances instantiated in different CoolBOT integrations should be as easy as connecting them when instantiated in the same integration. In particular, we follow three main principles related to the transparent distributed computation facilities: transparent network inter-component communications, network decoupling of a component's functional logic, and incremental design.


### **4.1 Transparent network inter component communications**

In order to make network communications transparent to components, we have developed a protocol termed *Distributed CoolBOT Component Communication Protocol* (DC3P) to multiplex port connections over TCP connections established among the components involved. In the current version of CoolBOT, only the TCP protocol is supported for network connections; the integration of the UDP protocol is under development, and is expected for the next CoolBOT version. DC3P has been implemented using the TCP/IP socket wrappers and the marshalling facilities provided by the ACE library [Douglas C. Schmidt (2010)], illustrated in Fig. 5. The protocol consists of the following packets:

• *Echo Request & Response*: These DC3P packets are useful to verify that the other end of a network communication is active and responding.

• *Port Type Info* (request & response): For asking type information about input and output ports through network connections. This allows port connection compatibility verification when establishing a port connection through the network.

• *Connect* (request & response): For establishing a port connection over TCP/IP.

• *Disconnect* (request & response): To disconnect a previously established port connection.

• *Data Sending*: Once a port connection is established over TCP/IP, port packets are sent through it using this DC3P packet.

• *Remote Connect* (request & response): For establishing port connections between two remote component instances; permits connecting component instances remotely.

• *Remote Disconnect* (request & response): To disconnect port connections previously established between two remote component instances.

All DC3P packets and port packets sent through port connections are marshalled and demarshalled in order to be correctly sent through the network. We have used the facilities ACE provides for marshalling/demarshalling based on the OMG Common Data Representation (CDR) [Object Management Group (2002b)]. In general, port packets are user defined. In order to make their marshalling/demarshalling as easy and transparent as possible for developers, the description language accepted by the coolbot-c compiler includes sentences for describing port packets (as we can observe in Fig. 6), much like CORBA IDL [Object Management Group (2002a)]. The compiler generates a C++ skeleton class for each port packet where the code for marshalling/demarshalling is generated automatically as part of the skeleton's code. In addition, we have also endowed CoolBOT with a rich set of C++ templates and classes to support marshalling and demarshalling of port packets (or any other arbitrary C++ class).

In future versions of CoolBOT, it is quite possible that the set of DC3P protocol packets will grow with new ones. In order to allow an easy integration of new DC3P packets in CoolBOT, we have applied the *composite* and *prototype* patterns [Gamma et al. (1995)] to their design. Those design patterns, jointly with the C++ templates and classes supporting marshalling and demarshalling, provide a systematic and easy manner of integrating new DC3P packets in future versions of the framework.

Fig. 6. CoolBOT's software development process.

### **4.2 Network decoupling of component's functional logic**

Another important aspect of network communication transparency is the decoupling of the network communication logic from the functional logic of the component. Fig. 4 illustrates how this decoupling has been put into practice. Each component is endowed with a pair of network threads, an *output network thread* and an *input network thread*, which are responsible for network communications using DC3P. CoolBOT transparently guarantees thread synchronization between them and the functional threads of the component. The network threads are mainly idle, waiting either for port packets to send through open network port connections, or for incoming port packets that should be redirected to the corresponding component's input ports. At instantiation time, it is possible to deactivate the network support for a component instance (and also for view and probe instances). In this manner, the component is not reachable from outside the integration where it has been instantiated, and evidently the network threads and the resources associated with them are not allocated.

### **4.3 Incremental design**
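The prototype-pattern approach to extensible packet sets can be sketched as follows. This is an illustrative example under our own names and a simplified little-endian, CDR-like encoding; it is not CoolBOT's actual DC3P class hierarchy.

```cpp
// Sketch of the prototype pattern for an extensible packet set: each packet
// class registers a prototype under a wire id, and incoming packets are built
// by cloning the prototype and demarshalling the payload. Names are assumed.
#include <cstdint>
#include <map>
#include <memory>
#include <vector>

using Buffer = std::vector<std::uint8_t>;

class DC3PPacket {
public:
  virtual ~DC3PPacket() = default;
  virtual std::unique_ptr<DC3PPacket> clone() const = 0;  // prototype pattern
  virtual void marshal(Buffer& out) const = 0;
  virtual void demarshal(const Buffer& in) = 0;
};

class PacketRegistry {
public:
  void add(std::uint16_t id, std::unique_ptr<DC3PPacket> proto) {
    prototypes_[id] = std::move(proto);
  }
  // Adding a new DC3P packet type only requires registering a prototype;
  // the receiving/dispatching code never changes.
  std::unique_ptr<DC3PPacket> make(std::uint16_t id, const Buffer& payload) const {
    auto p = prototypes_.at(id)->clone();
    p->demarshal(payload);
    return p;
  }
private:
  std::map<std::uint16_t, std::unique_ptr<DC3PPacket>> prototypes_;
};

// Example packet: an echo request carrying a sequence number.
class EchoRequest : public DC3PPacket {
public:
  std::uint32_t seq = 0;
  std::unique_ptr<DC3PPacket> clone() const override {
    return std::make_unique<EchoRequest>(*this);
  }
  void marshal(Buffer& out) const override {        // little-endian, CDR-like
    for (int i = 0; i < 4; ++i) out.push_back((seq >> (8 * i)) & 0xFF);
  }
  void demarshal(const Buffer& in) override {
    seq = 0;
    for (int i = 0; i < 4; ++i) seq |= std::uint32_t(in[i]) << (8 * i);
  }
};
```

In a real DC3P-like implementation the marshalling would be delegated to CDR stream helpers rather than hand-rolled byte shifting, but the registration/cloning structure is the point of the pattern.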

## **5. Deeper interface decoupling: Views and probes**

Inspired by one of the "good practices" proposed by the authors of Carnegie Mellon's Navigation Toolkit CARMEN [Montemerlo et al. (2003)]: "one important design principle of CARMEN is the separation of robot control from graphical displays", we have introduced in CoolBOT the concept of a *view* as an integrable, composite and reusable graphical interface available to CoolBOT system integrators and developers. Thus, CoolBOT *views* are graphical interfaces which, as software components, may be interconnected with any other component, view or probe in a CoolBOT system. Fig. 7 depicts the structure of a view in CoolBOT. As shown, CoolBOT views are also endowed with an external interface of input and output ports. Through this interface the view can communicate with the other components, views and probes present in a given system. Just like components, views are provided with the same network thread support, which allows transparent and decoupled network communications through port connections. Internally, a view is a graphical interface; in fact, the views already developed, operational and available in CoolBOT have been implemented using the GTK graphical library [*The GTK+ Project* (2010)]. As shown in Fig. 6, C++ skeletons for views are generated using the coolbot-c compiler. The part which should be completed by the user is precisely the graphical implementation, which can be done using the GTK library directly or GUI design software for window-based interfaces like Glade [*Glade - A User Interface Designer* (2010)].

Fig. 7. CoolBOT. View and probe structures.

As depicted in Fig. 7, a CoolBOT *probe* is provided with an external interface of input and output ports and, like components and views, as software components this allows them to be interconnected with other components, views or probes. Equally, they implement the same network-decoupled support of threads for transparent network communications. In CoolBOT, *probes* are devised as interfaces for interoperability with non-CoolBOT software, as illustrated graphically in Fig. 1. A complete functional C++ class implementing a probe is generated when a component is compiled by coolbot-c. The probe implements the complementary external interface of its corresponding component. Those automatically generated probes can be seen as automatic refactorings of external component interfaces in order to support interoperability of CoolBOT components with non-CoolBOT software. As mentioned in [Makarenko et al. (2007)], this is an important factor in facilitating the integration of different robotic software development approaches.

## **6. A real integration**

In its current operating version, CoolBOT has been mainly used to control mobile robotic systems with the platforms we have available at our laboratory: Pioneer mobile robots, models 3-DX and P3-AT, from Adept Mobile Robots [*Adept Mobile Robots* (2011)] (formerly ActivMedia Robotics).

In this section we will show a real robotic system using one of our Pioneer 3-DX mobile robots, in order to give a glimpse of a real system controlled using CoolBOT. The system is illustrated in Fig. 8. The example shows a secure navigation system for an indoor mobile robot. This is a real application we usually have in operation on the robots at the laboratory. The system implements secure navigation based on the ND+ algorithm for obstacle avoidance [Minguez et al. (2004)], and has been implemented following [Montesano et al. (2006)]. In the figure, input ports, output ports and port connections have been reduced for the sake of clarity; some of them represent several connections and ports in the real system.

The system is organized using two CoolBOT integrations, one formed only by CoolBOT component instances, and the other one containing CoolBOT view instances. The former is really the integration which implements the navigation system. As we can observe, it consists of five component instances, namely: PlayerRobot (a wrapper component for hardware abstraction using the Player/Stage project framework [Vaughan et al. (2003)]), MbICP (a component which implements the MbICP scan matching algorithm based on laser range sensor data [Minguez et al. (2006)]), GridMap (a component which maintains a grid map of the surroundings of the robot built using robot laser range scans; it also generates


periodically a 360° virtual scan for the ND+ algorithm), NDNavigation (which implements the ND+ algorithm) and ShortTermPlanner (a planner which uses the grid map for planning paths in the robot's surroundings using a modification of the numerical navigation function NF2 found in [Jean-Claude Latombe (1991)]). On another machine, another integration is shown hosting four view instances through which we can control and monitor the system remotely. In addition, on another machine, there is a web browser hosting a Java applet using a CoolBOT *probe* to connect to some of the components of the system.

Fig. 8. CoolBOT. Secure Navigation System.

In order to clarify how the integration of Fig. 8 has been built, and also to clarify the development process of each of its components, in the next paragraphs we will look at the description files used to generate some of them, including the whole integration shown in the figure. Thus, in Fig. 9 we can see the description file accepted by coolbot-c for one of the

components of Fig. 8, file player-robot.coolbot-component, corresponding to component PlayerRobot. As to views, in Fig. 10, we can see the description file for one of the view instances of Fig. 8, concretely for the Map view in the figure, which is an instance of view GridGtk. As we can observe, the description file specifies mainly the view's external interface

An Approach to Distributed Component-Based Software for Robotics 585


```
/*
 * File: grid-gtk.coolbot-view
 * Description: description file for GridGtk view
 * Date: 29 April 2011
 * Generated by coolbot-ske
 */
view GridGtk
{
  header
  {
    author "Antonio Carlos Domínguez-Brito <adominguez@iusiani.ulpgc.es>";
    description "GridGtk View";
    institution "IUSIANI - ULPGC (Spain)";
    version "0.1"
  };
  constants
  {
    private DEFAULT_REFRESHING_PERIOD=500; // milliseconds
    ...
  };
  // input ports
  input port ROBOT_CONFIG type poster port packet PlayerRobot::ConfigPacket;
  input port GRID_MAP type poster port packet GridMap::GridMapPacket;
  input port PLANNER_PATH type last port packet ShortTermPlanner::PlannerPathPacket;
  // output ports
  output port PLANNER_COMMANDS type generic port packet ShortTermPlanner::CommandPacket;
  output port ND_COMMANDS type generic port packet NDNavigation::CommandPacket;
};
```

Fig. 10. grid-gtk.coolbot-view: GridGtk's description file.

```
/*
 * File: player-robot.coolbot-component
 * Description: description file for PlayerRobot component
 * Date: 02 June 2010
 * Generated by coolbot-ske
 */
component PlayerRobot
{
  header
  {
    author "Antonio Carlos Domínguez Brito <adominguez@iusiani.ulpgc.es>";
    description "PlayerRobot component";
    institution "IUSIANI - Universidad de Las Palmas de Gran Canaria";
    version "0.1"
  };
  constants
  {
    LASER_MAX_RANGE="LaserPacket::LASER_MAX_RANGE";
    SONAR_MAX_RANGE=5000; // millimeters
    private FIFO_LENGTH=5;
    private ROBOT_DATA_INCOMING_FREQUENCY= 10; // Hz
    private LASER_MIN_ANGLE= -90; // degrees
    ...
  };
  // input ports
  input port Commands type fifo port packet CommandPacket length FIFO_LENGTH;
  input port NavigationCommands type fifo port packet NDNavigation::CommandPacket length FIFO_LENGTH;
  //output ports
  output port RobotConfig type poster port packet ConfigPacket;
  output port Odometry type generic port packet OdometryPacket network buffer FIFO_LENGTH;
  output port OdometryReset type generic port packet PacketTime;
  output port BumpersGeometry type poster port packet "BumperGeometryPacket";
  output port Bumpers type generic port packet BumperPacket;
  output port SonarGeometry type poster port packet "SonarGeometryPacket";
  output port SonarScan type generic port packet "SonarPacket";
  output port LaserGeometry type poster port packet PacketFrame3D;
  output port LaserScan type generic port packet LaserPacket;
  output port Power type generic port packet PacketDouble;
  output port CameraImage type poster port packet CameraImagePacket;
  output port PTZJoints type generic port packet "PacketPTZJoints";
  exception RobotConnection
  {
    description "Robot connection failed.";
  };
  exception NoPositionProxy
  {
    description "Position proxy not available in this robot.";
  };
  exception InternalPlayerException
  {
    description "A Player library exception has been thrown.";
  };
  entry state Main
  {
    transition on Commands,NavigationCommands,Timer;
  };
};
```
Fig. 9. player-robot.coolbot-component: PlayerRobot's description file.
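Each port declaration above attaches a buffering policy to the port: for instance, `input port Commands type fifo port packet CommandPacket length FIFO_LENGTH` declares a bounded FIFO. The C++ sketch below illustrates what such a `fifo` policy implies; it is not CoolBOT's real port class, and all names in it are hypothetical.

```cpp
#include <cstddef>
#include <deque>
#include <optional>

// Hypothetical sketch of a "fifo" input port with a bounded length, as
// declared in the description file. NOT CoolBOT's actual API; it only
// illustrates the buffering policy such a declaration implies.
template <typename Packet>
class FifoPort {
 public:
  explicit FifoPort(std::size_t length) : length_(length) {}

  // A full fifo rejects new packets: the incoming packet is dropped.
  bool push(const Packet& p) {
    if (queue_.size() >= length_) return false;
    queue_.push_back(p);
    return true;
  }

  // The component consumes packets in arrival order.
  std::optional<Packet> pop() {
    if (queue_.empty()) return std::nullopt;
    Packet p = queue_.front();
    queue_.pop_front();
    return p;
  }

 private:
  std::size_t length_;
  std::deque<Packet> queue_;
};
```

A `poster` port, by contrast, would keep only the most recently published packet, overwriting it on every push, which suits slowly changing data such as the robot configuration.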


Thus, in Fig. 9 we can see the description file accepted by coolbot-c for one of the components of Fig. 8, file player-robot.coolbot-component, corresponding to component PlayerRobot. As to views, in Fig. 10 we can see the description file for one of the view instances of Fig. 8, concretely for the Map view in the figure, which is an instance of view GridGtk. As we can observe, the description file mainly specifies the view's external interface, formed by input and output ports.


Fig. 11 shows a snapshot of the view at runtime. Once developed, CoolBOT views are graphical components that we can integrate into a window-based application like the one shown in the figure. In particular, in the application shown, views are plugged in as "pages" of the GTK notebook widget (a container of "pages" with "tabs") that we can observe in the figure. In fact, the GUI application shown integrates the four view instances of Fig. 8; its C++ skeleton has also been generated by coolbot-c from a description file (the .coolbot-integration file in Fig. 6). In Fig. 12 we can see part of this file. Notice that coolbot-c generates C++ skeletons for integrations in which the static instantiation and interconnection of components and views are generated automatically. If we want to build a dynamic integration, in terms of dynamic instantiation of components and views, and also in terms of dynamically establishing port connections among them, we must complete the dynamic part of the skeleton generated by coolbot-c using the C++ runtime services provided by the framework (Fig. 5).
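To give an idea of the kind of runtime service a dynamic integration relies on, the sketch below shows a minimal, hypothetical factory registry through which components could be instantiated by type name at run time. CoolBOT's actual C++ runtime services are richer; every identifier below is illustrative and not part of the framework's API.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Hypothetical sketch: dynamic instantiation via a name-indexed factory.
// A GUI or a script can create components by type name and wire them up
// afterwards, instead of relying on statically generated code.
struct Component {
  virtual ~Component() = default;
  virtual std::string name() const = 0;
};

class Registry {
 public:
  using Factory = std::function<std::unique_ptr<Component>()>;

  void add(const std::string& type, Factory f) {
    factories_[type] = std::move(f);
  }

  // Returns nullptr when the type name is unknown.
  std::unique_ptr<Component> create(const std::string& type) const {
    auto it = factories_.find(type);
    return it == factories_.end() ? nullptr : it->second();
  }

 private:
  std::map<std::string, Factory> factories_;
};

// Illustrative component type, standing in for the real PlayerRobot.
struct PlayerRobot : Component {
  std::string name() const override { return "PlayerRobot"; }
};
```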

With respect to CoolBOT probes, so far we have used them to interoperate with Java applets embedded in a web browser, as shown in Fig. 8. More specifically, we have used SWIG [*SWIG* (2011)] to access the probe C++ classes from Java, with the aim of implementing several Java GUI interfaces equivalent to some of the CoolBOT views we have already developed. In Fig. 13 we can see a snapshot of the Java equivalent of a view used to represent the range sensor information of a mobile robot.


Fig. 11. View GridGtk's snapshot.

Fig. 12. The integration containing the Robot, MbICP, Map and Planner views of Fig. 8.

```
/*
 * File: mbicp-integration.coolbot-integration
 * Description: description file for mbicp-integration integration.
 * Date: 29 April 2011
 * Generated by coolbot-ske
 */
integration mbicp_integration
{
  header
  {
    author "Antonio Carlos Domínguez-Brito <adominguez@iusiani.ulpgc.es>";
    description "MbICP's views integration";
    institution "IUSIANI - ULPGC (Spain)";
    version "0.1"
  };
  machine addresses
  {
    local my_machine: "127.0.0.1";
    the_other_machine: "...";
  };
  listening ports // TCP/IP ports
  {
    ROBOT_PORT: 1950;
    MBICP_PORT: 1965;
    NAVIGATION_MAP_PORT: 1970;
    ND_PORT: 1980;
    NAVIGATION_PLANNER_PORT: 1990;
    ROBOT_VIEW_PORT: 1955;
    MBICP_VIEW_PORT: 1985;
    NAVIGATION_MAP_VIEW_PORT: 1975;
    NAVIGATION_PLANNER_VIEW_PORT: 1995;
  };
  local instances
  {
    view robotView:PlayerRobotGtk listening on ROBOT_VIEW_PORT with description "Robot";
    view mbicpView:MbICPGtk listening on MBICP_VIEW_PORT with description "MbICP";
    view mapView:GridGtk listening on NAVIGATION_MAP_VIEW_PORT with description "Map";
    view navigationPlannerView:PlannerGtk listening on NAVIGATION_PLANNER_VIEW_PORT with description "Planner";
  };
  remote instances on the_other_machine
  {
    component robot:PlayerRobot listening on ROBOT_PORT;
    component mbicpInstance:MbICPCorrector listening on MBICP_PORT;
    component navigationMap:GridMap listening on NAVIGATION_MAP_PORT;
    component nd:NDNavigation listening on ND_PORT;
    component navigationPlanner:ShortTermPlanner listening on NAVIGATION_PLANNER_PORT;
  };
  port connections // static connections
  {
    connect robot:ODOMETRY to robotView:ODOMETRY;
    connect robot:LASERGEOMETRY to robotView:LASER_GEOMETRY;
    connect robot:LASERSCAN to robotView:LASER_SCAN;
    connect robot:POWER to robotView:POWER;
    connect robot:SONARGEOMETRY to robotView:SONAR_GEOMETRY;
    connect robot:SONARSCAN to robotView:SONAR_SCAN;
    connect robot:ODOMETRY to mbicpInstance:ODOMETRY;
    connect robot:LASERGEOMETRY to mbicpInstance:LASER_GEOMETRY;
    connect robot:LASERSCAN to mbicpInstance:LASER_SCAN;
    connect robotView:COMMANDS to robot:COMMANDS;
    ...
  };
};
```
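Each `connect` line in the `port connections` section above names an output endpoint (instance:port) and an input endpoint. A minimal sketch of how such a static connection table could be represented and validated against the declared ports follows; the endpoint identifiers are taken from the listing, but the checking code itself is ours for illustration, not part of coolbot-c.

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

// An endpoint is an (instance, port) pair, e.g. {"robot", "ODOMETRY"}.
using Endpoint = std::pair<std::string, std::string>;

// A static connection table is valid when every connection goes from a
// declared output endpoint to a declared input endpoint.
bool validate(const std::vector<std::pair<Endpoint, Endpoint>>& table,
              const std::set<Endpoint>& outputs,
              const std::set<Endpoint>& inputs) {
  for (const auto& c : table)
    if (!outputs.count(c.first) || !inputs.count(c.second)) return false;
  return true;
}
```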

Fig. 13. Java view implemented using a probe to access range sensor information for a mobile robot. A snapshot.

### **7. Conclusions**

In this document we have presented the latest developments integrated into the current operating version of CoolBOT. These developments have mainly addressed two questions: transparent distributed computation, and "deeper" interface decoupling. In our opinion, the use and operation of CoolBOT have improved considerably as a result. CoolBOT is an open source initiative supported by our laboratory and is freely available via **www.coolbotproject.org**, including the secure navigation system depicted in Fig. 8.

### **8. References**

*Adept Mobile Robots* (2011). http://www.mobilerobots.com/.

Ando, N., Suehiro, T. & Kotoku, T. (2008). A Software Platform for Component Based RT-System Development: OpenRTM-Aist, *in* S. Carpin, I. Noda, E. Pagello, M. Reggiani & O. von Stryk (eds), *Simulation, Modeling, and Programming for Autonomous Robots*, Vol. 5325 of *Lecture Notes in Computer Science*, Springer Berlin / Heidelberg, pp. 87–98.

Antonio C. Domínguez-Brito, Daniel Hernández-Sosa, José Isern-González & Jorge Cabrera-Gámez (2007). *Software Engineering for Experimental Robotics*, Vol. 30 of *Springer Tracts in Advanced Robotics Series*, Springer, chapter CoolBOT: a Component Model and Software Infrastructure for Robotics, pp. 143–168.

Basu, A., Bozga, M. & Sifakis, J. (2006). Modeling heterogeneous real-time components in BIP, In Fourth IEEE International Conference on Software Engineering and Formal Methods, pp. 3–12, Pune (India).

Bensalem, S., Gallien, M., Ingrand, F., Kahloul, I. & Thanh-Hung, N. (2009). Designing autonomous robots, *IEEE Robotics and Automation Magazine* 16(1): 67–77.

Brooks, A., Kaupp, T., Makarenko, A., S. Williams & Oreback, A. (2005). Towards component-based robotics, *In IEEE International Conference on Intelligent Robots and Systems*, Tsukuba, Japan, pp. 163–168.

Brugali, D. (ed.) (2007). *Software Engineering for Experimental Robotics*, Springer Tracts in Advanced Robotics, Springer.

Brugali, D. & Scandurra, P. (2009). Component-based robotic engineering (part i) [tutorial], *Robotics Automation Magazine, IEEE* 16(4): 84–96.

Brugali, D. & Shakhimardanov, A. (2010). Component-based robotic engineering (part ii), *Robotics Automation Magazine, IEEE* 17(1): 100–112.

Domínguez-Brito, A. C., Hernández-Sosa, D., Isern-González, J. & Cabrera-Gámez, J. (2004). Integrating Robotics Software, IEEE International Conference on Robotics and Automation, New Orleans, USA.

Douglas C. Schmidt (2010). The Adaptive Communication Environment (ACE), www.cs.wustl.edu/~schmidt/ACE.html.

Ellis, C. & Gibbs, S. (1989). *Object-Oriented Concepts, Databases, and Applications*, ACM Press, Addison-Wesley, chapter Active Objects: Realities and Possibilities.

Gamma, E., Helm, R., Johnson, R. & Vlissides, J. (1995). *Design Patterns: Elements of Reusable Object-Oriented Software*, Addison-Wesley Professional Computing Series, Addison-Wesley.

George T. Heineman & William T. Councill (2001). *Component-Based Software Engineering*, Addison-Wesley.

*Glade - A User Interface Designer* (2010). glade.gnome.org.

J. Paul Morrison (2010). *Flow-Based Programming, 2nd Edition: A New Approach to Application Development*, CreateSpace.

Jean-Claude Latombe (1991). *Robot-motion planning*, The Kluwer International Series in Engineering and Computer Science, Kluwer Academic.

Kitware, Inc. (2010). The CMake Open Source Build System, www.cmake.org.

Makarenko, A., Brooks, A. & Kaupp, T. (2007). On the benefits of making robotic software frameworks thin, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS'07), San Diego CA, USA.

Mallet, A., Pasteur, C., Herrb, M., Lemaignan, S. & Ingrand, F. (2010). GenoM3: Building middleware-independent robotic components, IEEE International Conference on Robotics and Automation.

Minguez, J., Montesano, L. & Lamiraux, F. (2006). Metric-based iterative closest point scan matching for sensor displacement estimation, *Robotics, IEEE Transactions on* 22(5): 1047–1054.

Minguez, J., Osuna, J. & Montano, L. (2004). A "Divide and Conquer" Strategy based on Situations to achieve Reactive Collision Avoidance in Troublesome Scenarios, IEEE International Conference on Robotics and Automation, New Orleans, USA.

Montemerlo, M., Roy, N. & Thrun, S. (2003). Perspectives on standardization in mobile robot programming: the carnegie mellon navigation (carmen) toolkit, Vol. 3, pp. 2436–2441.

Montesano, L., Minguez, J. & Montano, L. (2006). Lessons Learned in Integration for Sensor-Based Robot Navigation Systems, *International Journal of Advanced Robotic Systems* 3(1): 85–91.

Object Management Group (2002a). OMG IDL: Details, (http://www.omg.org/gettingstarted/omg_idl.htm).

Object Management Group (2002b). The Common Object Request Broker: Architecture and Specification, Ch. 15, Sec. 1-3. (http://www.omg.org/cgi-bin/doc?formal/02-06-01).

*ROS: Robot Operating System* (2011). http://www.ros.org.

Schlegel, C., Haßler, T., Lotz, A. & Steck, A. (2009). Robotic Software Systems: From Code-Driven to Model-Driven Designs, In Proc. 14th Int. Conf. on Advanced Robotics (ICAR), Munich.

Steenstrup, M., Arbib, M. A. & Manes, E. G. (1983). Port automata and the algebra of concurrent processes, *Journal of Computer and System Sciences* 27: 29–50.

Stewart, D. B., Volpe, R. A. & Khosla, P. (1997). Design of Dynamically Reconfigurable Real-Time Software Using Port-Based Objects, *IEEE Transactions on Software Engineering* 23(12): 759–776.

*SWIG* (2011). http://www.swig.org/.

*The GTK+ Project* (2010). www.gtk.org.

*The Orocos Project* (2011). http://www.orocos.org.

Vaughan, R. T., Gerkey, B. & Howard, A. (2003). On Device Abstractions For Portable, Reusable Robot Code, *IEEE/RSJ International Conference on Intelligent Robot Systems (IROS 2003), Las Vegas, USA, October 2003*, pp. 2121–2427.

**28**

## **Sequential and Simultaneous Algorithms to Solve the Collision-Free Trajectory Planning Problem for Industrial Robots – Impact of Interpolation Functions and the Characteristics of the Actuators on Robot Performance**

Francisco J. Rubio, Francisco J. Valero, Antonio J. Besa and Ana M. Pedrosa
*Centro de Investigación de Tecnología de Vehículos, Universitat Politècnica de València, Spain*

## **1. Introduction**

Trajectory planning for robots is a very important issue in those industrial activities which have been automated. The introduction of robots into industry seeks to upgrade not only quality standards but also productivity, since working time is increased and wasted time is reduced. Therefore, trajectory planning has an important role to play in achieving these objectives (the motion of the robot arms will have an influence on the work done).

Formally, the trajectory planning problem aims to find the force inputs (control *u(t)*) to move the actuators so that the robot follows a trajectory *q(t)* that enables it to go from the initial configuration to the final one while avoiding obstacles. This is also known as the complete motion planning problem, as opposed to the path planning problem, in which the temporal evolution of motion is neglected.

An important part of obtaining an efficient trajectory plan lies with both the interpolation function used to help obtain the trajectory and the robot actuators. Ultimately, the actuators generate the robot motion, and it is very important for the robot behavior to be smooth. Therefore, trajectory planning algorithms should take into account the characteristics of the actuators, without forgetting the interpolation functions, which also have an impact on the resulting motion. As well as smooth robot motion, it is also necessary to monitor some working parameters to verify the efficiency of the process, because most of the time the user seeks to optimize certain objective functions. Among the most important working parameters and variables are the time required to complete the trajectory, the input torques, the energy consumed and the power transmitted. The kinematic properties of the robot's links, such as the velocities, accelerations and jerks, are also important.
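On a joint trajectory sampled at a fixed period, the kinematic quantities just mentioned (velocity, acceleration, jerk) can be estimated by repeated finite differences. The sketch below is only illustrative: the algorithms described in this chapter work with analytic derivatives of the interpolation functions, not with numerical differences.

```cpp
#include <cstddef>
#include <vector>

// First-order finite difference of a sampled signal q with sample period dt.
// Applying it once gives velocity, twice acceleration, three times jerk.
std::vector<double> diff(const std::vector<double>& q, double dt) {
  std::vector<double> d;
  for (std::size_t i = 1; i < q.size(); ++i)
    d.push_back((q[i] - q[i - 1]) / dt);
  return d;
}
```

Usage: `velocity = diff(q, dt)`, `acceleration = diff(velocity, dt)`, `jerk = diff(acceleration, dt)`; a planner can then check these sequences against the actuator bounds.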

The trajectory algorithm should also not overlook the presence of possible obstacles in the workspace. Therefore it is very important to model both the workspace and the obstacles efficiently. The quality of the collision avoidance procedure will depend on this modelization.
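A common way to make such a modelization concrete is to reduce link/obstacle interference to distance checks between simple primitives, for example a link segment against a spherical obstacle. The following sketch illustrates that idea; it is an assumption chosen for illustration, not the workspace modelization actually used by the algorithms in this chapter.

```cpp
#include <algorithm>
#include <cmath>

// Minimal collision test between a robot link, modelled as segment AB, and
// a spherical obstacle of radius r centred at C: the link collides when the
// closest point of AB to C is nearer than r.
struct Vec3 { double x, y, z; };

double segmentPointDist(Vec3 a, Vec3 b, Vec3 c) {
  Vec3 ab{b.x - a.x, b.y - a.y, b.z - a.z};
  Vec3 ac{c.x - a.x, c.y - a.y, c.z - a.z};
  double len2 = ab.x * ab.x + ab.y * ab.y + ab.z * ab.z;
  // Parameter of the closest point on AB, clamped to the segment.
  double t = len2 == 0.0 ? 0.0
      : std::clamp((ac.x * ab.x + ac.y * ab.y + ac.z * ab.z) / len2, 0.0, 1.0);
  Vec3 p{a.x + t * ab.x, a.y + t * ab.y, a.z + t * ab.z};
  return std::sqrt((c.x - p.x) * (c.x - p.x) + (c.y - p.y) * (c.y - p.y) +
                   (c.z - p.z) * (c.z - p.z));
}

bool linkCollides(Vec3 a, Vec3 b, Vec3 centre, double radius) {
  return segmentPointDist(a, b, centre) < radius;
}
```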

## **2. A brief look at previous work**

Trajectory planning for industrial robots is a very important topic in the field of robotics and has attracted a great number of researchers, so that there are at the moment a variety of methodologies for its resolution.

By studying the work done by other researchers on this topic, it is easy to deduce that the problem has mainly been tackled with two different approaches: direct and indirect methods. Some authors who have analyzed this topic using indirect methods are Saramago, 2001; Valero et al., 2006; Gasparetto and Zanotto, 2007; du Plessis et al., 2003.

Other authors, on the other hand, have implemented the direct method, such as Chettibi et al., 2002; Macfarlane, 2003; Abdel-Malek et al., 2006. However, in these examples the obstacles have been neglected, which is a drawback.

Over the years, the algorithms have been improved and the study of the robotic system has become more and more realistic. One way of achieving that is to analyze the complete behavior of the robotic system, which in turn leads us to optimize some of the working parameters mentioned earlier by means of the appropriate objective functions. The most widely used optimization criteria can be classified as follows:

1. Minimum time required, which is bounded to productivity.
2. Minimum jerk, which is bounded to the quality of work, accuracy and equipment maintenance.
3. Minimum energy consumed or minimum actuator effort, both bounded to savings.
4. Hybrid criteria, e.g. minimum time and energy.

The early algorithms that solved the trajectory planning problem tried to minimize the time needed for performing the task (see Bobrow et al., 1985; Shin et al., 1985; Chen et al., 1989). In those studies, the authors impose smooth trajectories to be followed, such as spline functions.

Another way of tackling the trajectory planning problem was based on searching for jerk-optimal trajectories. Jerks are essential for working with precision and without vibrations. They also affect the control system and the wearing of joints and bars. Jerk constraints were introduced by Kyriakopoulos (see Kyriakopoulos et al., 1988). Later, Constantinescu introduced a method (Constantinescu et al., 2000) for determining smooth and time-optimal path-constrained trajectories for robotic manipulators, imposing limits on the actuator jerks.

Another different approach to solving the trajectory planning problem is based on minimizing the torque and the energy consumed instead of the execution time or the jerk. An early example is seen in Garg et al., 1992. Similarly, Hirakawa and Kawamura searched for the minimum energy consumed (Hirakawa et al., 1996). In Field and Stepanenko, 1996, the authors plan minimum energy consumption trajectories for robotic manipulators. In Saramago and Steffen, 2000, the authors considered not only the minimum time but also the minimum mechanical energy of the actuators. They built a multi-objective function, and the results obtained depended on the associated weighting factor. The subject of energy minimization continues to be of interest in the field of robotics and automated manufacturing processes (Cho et al., 2006).

Later, new approaches appear for solving the trajectory planning problem. The idea of using a weighted objective function to optimize the operating parameters of the robot arises (Chettibi et al., 2004). Gasparetto and Zanotto also use a weighted objective function (see Gasparetto and Zanotto, 2010). In this chapter we will introduce an indirect method which has been called the "sequential algorithm".

In this chapter we will describe two algorithms for solving the collision-free trajectory planning for industrial robots that we have developed. We have called them the "sequential" and "simultaneous" algorithms. The first is an indirect method, while the second is a direct one. The "sequential" algorithm considers the main properties of the actuators (torque, power, jerk and energy consumed). The "simultaneous" algorithm analyzes which is the best interpolation function to be used to generate the trajectory, considering a simple actuator (only the torque required). The chapter content is based on previous work done by the authors (see Valero et al., 2006, and Rubio et al., 2007). Specifically, the two approaches to solving the trajectory planning problem are explained.

## **3. Robot modelling**

The robot model used henceforth is the wire model corresponding to the PUMA 560 robot shown in Fig. 1. The robot involves rigid links that are joined by the corresponding kinematic joints (revolution). The robot has *F* degrees of freedom, and each robot's configuration *C<sup>j</sup>* can be unequivocally set using the Cartesian coordinates of N points, which are called significant points. These points, defined as α*<sub>ji</sub>* (*x<sup>j</sup><sub>3(i-1)+1</sub>*, *x<sup>j</sup><sub>3(i-1)+2</sub>*, *x<sup>j</sup><sub>3(i-1)+3</sub>*, *i=1..F*, *j*=number of configuration), are chosen systematically. Therefore, ultimately, every configuration will be expressed in Cartesian coordinates by means of the significant points, i.e. *C<sup>j</sup>* = *C<sup>j</sup>*(α*<sub>ji</sub>*), which represent the specifics of the robot under study. It is important to point out that they do not constitute an independent set of coordinates. Besides the significant points, some other points *p<sup>j</sup><sub>k</sub>*, called interesting points, will be used to improve the efficiency of the algorithms, the coordinates of which are obtained from the significant points and the geometric characteristics of the robot.

Fig. 1. Model of robot PUMA 560. Significant and Interesting Points (mobile base)
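To make the idea of significant points concrete, the toy sketch below computes them for a planar 2R arm, a deliberately simplified stand-in (not the PUMA 560 wire model of this chapter): a configuration given by the joint angles maps, through forward kinematics, to the Cartesian coordinates of the elbow and end-effector points.

```cpp
#include <array>
#include <cmath>

// For a planar 2R arm with link lengths l1, l2, the configuration (q1, q2)
// determines two Cartesian "significant points": the elbow and the tip.
// Storing these points instead of the angles is the representation idea
// used (for the full PUMA 560 wire model) in this chapter.
struct Point { double x, y; };

std::array<Point, 2> significantPoints(double l1, double l2, double q1,
                                       double q2) {
  Point elbow{l1 * std::cos(q1), l1 * std::sin(q1)};
  Point tip{elbow.x + l2 * std::cos(q1 + q2),
            elbow.y + l2 * std::sin(q1 + q2)};
  return {elbow, tip};
}
```

Note that, as the text points out, such Cartesian coordinates are not independent: they must satisfy the constant-link-length constraints of the arm.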


3. Minimum energy consumed or minimum actuator effort, both linked to savings.

2001; Valero et al., 2006; Gasparetto and Zanotto, 2007 ; du Plessis et al., 2003.

**2. A brief look at previous work** 

obstacles have been neglected which is a drawback.

4. Hybrid criteria, e.g. minimum time and energy.

manufacturing processes ( Cho et al., 2006 ).

has been called the "sequential algorithm".

widely used optimization criteria can be classified as follows: 1. Minimum time required, which is bounded to productivity.

methodologies for its resolution.

maintenance.

functions.

The robot model used henceforth is the wire model corresponding to the PUMA 560 robot shown in Fig. 1. The robot involves rigid links that are joined by the corresponding kinematic joints (revolute). The robot has *F* degrees of freedom, and each robot configuration *C<sup>j</sup>* can be unequivocally set using the Cartesian coordinates of *N* points, which are called significant points. These points, defined as α*<sub>ji</sub>* = (*x<sup>j</sup><sub>3(i-1)+1</sub>*, *x<sup>j</sup><sub>3(i-1)+2</sub>*, *x<sup>j</sup><sub>3(i-1)+3</sub>*), with *i* = 1..*F* and *j* the number of the configuration, are chosen systematically. Therefore, ultimately, every configuration will be expressed in Cartesian coordinates by means of the significant points, i.e. *C<sup>j</sup>* = *C<sup>j</sup>*(α*<sub>ji</sub>*), which represent the specifics of the robot under study. It is important to point out that they do not constitute an independent set of coordinates. Besides the significant points, some other points *p<sup>j</sup><sub>k</sub>*, called interesting points, will be used to improve the efficiency of the algorithms; their coordinates are obtained from the significant points and the geometric characteristics of the robot.

Fig. 1. Model of robot PUMA 560. Significant and Interesting Points (mobile base)
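As an illustration of this representation (a minimal sketch with names of our own choosing, not the authors' code), a configuration can be stored as an ordered list of significant points, and the constant-length condition of each rigid link becomes one of the constraint equations relating them:

```python
import math

# Hypothetical sketch: a configuration C^j is stored as a list of significant
# points alpha_ji = (x, y, z).  The link-length constraint equations state that
# the distance between the significant points at the two ends of each rigid
# link must equal that link's length.

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def link_length_residuals(points, link_lengths):
    """Residuals of the constant-length constraints for a serial chain.

    points       -- significant points along the chain, in order
    link_lengths -- nominal length of the link between consecutive points
    A feasible configuration makes every residual zero.
    """
    return [dist(points[i], points[i + 1]) - L
            for i, L in enumerate(link_lengths)]

# Example: a two-link chain with unit links, fully stretched along x.
config = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(link_length_residuals(config, [1.0, 1.0]))  # -> [0.0, 0.0]
```

For the mobile-base PUMA 560 described below, the list would hold seven points (twenty-one coordinates) tied together by the fourteen constraint equations mentioned in the text.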

Sequential and Simultaneous Algorithms to Solve the Collision-Free Trajectory Planning

Problem for Industrial Robots – Impact of Interpolation Functions and the Characteristics of … 595

The PUMA 560 robot can be modelled with a movable base or a fixed base. The mobile-based robot is shown in Fig. 1 with the seven significant points used (α*1*, α*2*, α*3*, α*4*, α*5*, α*6* and α*7*), together with another five interesting points *p<sup>j</sup><sub>k</sub>*. As a result, the configuration *C<sup>j</sup>* is determined by twenty-one variables corresponding to the coordinates of the significant points. These variables are connected through fourteen constraint equations relative to the geometric characteristics of the robot (length of links, geometric constraints and range of motion). See Rubio et al., 2009 for more details. It must be noted that any other industrial robot can be modelled in this way by just selecting and choosing appropriately those significant points that best describe it.

This property is very important as far as the effectiveness of the algorithm is concerned.

### **4. Workspace modelling**

The workspace is modelled as a rectangular prism with its edges parallel to the axes of the Cartesian reference system. The work environment is defined by the obstacles bound to produce collisions when the robot moves within the workspace. The obstacles are considered static, i.e. their positions do not vary over time, and they are represented by means of unions of patterned obstacles.

The fact of working with patterned obstacles introduces two fundamental advantages:

1. It allows the modelling of any generic obstacle so that collisions with the robot's links can be avoided.
2. It permits working with a reduced number of patterned obstacles in order to model a complex geometric environment, so that its use is efficient. It means that a small number of constraints are introduced into the optimization problem when obtaining collision-free adjacent configurations.

The patterned obstacles have a geometry based on simple three-dimensional figures, particularly spheres, rectangular prisms and cylinders. Any obstacle present in the workspace could be represented as a combination of these geometric figures.

The definition of a patterned obstacle is made in the following way:

• Spherical obstacle *SO<sub>i</sub>* is defined when the position of its centre and its radius are known. It is characterized by means of the centre of the sphere $c\_i^{SO} = \left( c\_{xi}^{SO}, c\_{yi}^{SO}, c\_{zi}^{SO} \right)$ and its radius $r\_i^{SO}$. Therefore $SO\_i = \left( c\_i^{SO}, r\_i^{SO} \right)$. See Fig. 2.

Fig. 2. Generic Spherical obstacle *SOi*

• Cylindrical obstacle *CO<sub>k</sub>* is defined when the coordinates of the centres of its bases and its radius are known. It is characterized by means of the centres of its bases $c\_{1k}^{CO}$ and $c\_{2k}^{CO}$ (each given by its Cartesian coordinates $\left( c\_{xk}, c\_{yk}, c\_{zk} \right)$) and its radius $r\_k^{CO}$. Therefore $CO\_k = \left( c\_{1k}^{CO}, c\_{2k}^{CO}, r\_k^{CO} \right)$. See Fig. 3.

Fig. 3. Generic Cylindrical obstacle *COk*

• Prismatic obstacle *PO<sub>l</sub>* is defined when four points located in the vertices of the rectangular prism are known so that vectors that are perpendicular to each other can be drawn up. It is characterized by means of the vertices $a\_l^{PO} = \left( a\_{xl}, a\_{yl}, a\_{zl} \right)$ and $q\_{1l}^{PO}$, $q\_{2l}^{PO}$, $q\_{3l}^{PO}$, each of the latter given by its coordinates $\left( q\_{xl}, q\_{yl}, q\_{zl} \right)$. Therefore $PO\_l = \left( a\_l^{PO}, q\_{1l}^{PO}, q\_{2l}^{PO}, q\_{3l}^{PO} \right)$. See Fig. 4.

Fig. 4. Generic Prismatic obstacle *POl*

## **5. Discretizing the workspace**

With the purpose of working with a limited number of configurations, the generation of a discrete workspace that represents the possible positions of the end-effector of the robot is considered. To do this, a rectangular prism with its edges parallel to the axes of the Cartesian reference system is created, whose opposite vertices correspond to the positions of the end-effector of the robot in the initial and final configurations from which the connecting path is calculated. The set of positions that the end-effector of the robot can adopt within the prism is restricted to a finite number of points resulting from the discretization of the prism according to the following increments:

$$
\Delta \mathbf{x} = \frac{\left| \alpha^f\_{nx} - \alpha^i\_{nx} \right|}{N\_x} \quad \Delta y = \frac{\left| \alpha^f\_{ny} - \alpha^i\_{ny} \right|}{N\_y} \quad \Delta z = \frac{\left| \alpha^f\_{nz} - \alpha^i\_{nz} \right|}{N\_z} \tag{1}
$$

where the values of *Δx*, *Δy* and *Δz* are calculated from the number of intervals *N<sub>x</sub>*, *N<sub>y</sub>* and *N<sub>z</sub>* into which the prism is discretized; those increments should be smaller than the smallest dimension of the obstacles modelled in the workspace. Points (α*<sub>nx</sub><sup>f</sup>*, α*<sub>ny</sub><sup>f</sup>*, α*<sub>nz</sub><sup>f</sup>*) and (α*<sub>nx</sub><sup>i</sup>*, α*<sub>ny</sub><sup>i</sup>*, α*<sub>nz</sub><sup>i</sup>*) correspond to the coordinates of the end-effector of the robot for the final and initial configurations, respectively. Fig. 5 shows the way in which the prism that gives rise to the set of nodes that the end-effector of the PUMA 560 robot with a mobile base can adopt is discretized.

Fig. 5. Rectangular prism with edges parallel to the axes of the Cartesian reference system
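The discretization of Eq. (1) can be sketched as follows (an illustrative snippet with names of our own choosing, not the authors' code): the prism spanned by the initial and final end-effector positions is divided into *N<sub>x</sub>*, *N<sub>y</sub>*, *N<sub>z</sub>* intervals per axis, yielding the grid of candidate end-effector nodes.

```python
# Sketch of Eq. (1): step sizes and grid nodes for the discretized prism
# between the end-effector positions of the initial and final configurations.

def grid_nodes(alpha_i, alpha_f, Nx, Ny, Nz):
    """Return the step sizes (dx, dy, dz) and the list of grid nodes."""
    dx = abs(alpha_f[0] - alpha_i[0]) / Nx
    dy = abs(alpha_f[1] - alpha_i[1]) / Ny
    dz = abs(alpha_f[2] - alpha_i[2]) / Nz
    x0 = min(alpha_i[0], alpha_f[0])
    y0 = min(alpha_i[1], alpha_f[1])
    z0 = min(alpha_i[2], alpha_f[2])
    nodes = [(x0 + i * dx, y0 + j * dy, z0 + k * dz)
             for i in range(Nx + 1)
             for j in range(Ny + 1)
             for k in range(Nz + 1)]
    return (dx, dy, dz), nodes

steps, nodes = grid_nodes((0.0, 0.0, 0.0), (1.0, 2.0, 3.0), 2, 2, 3)
print(steps)       # (0.5, 1.0, 1.0)
print(len(nodes))  # 36 candidate end-effector positions
```

In practice *N<sub>x</sub>*, *N<sub>y</sub>* and *N<sub>z</sub>* would be chosen so that each step is smaller than the smallest obstacle dimension, as the text requires.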

### **6. Obstacle avoidance**

By controlling the distance from the different patterned obstacles to the cylinders that cover the robot links, collision avoidance between the robot and the obstacles is possible. Distances are constraints in the optimization problem. They serve to calculate collision-free adjacent configurations (for adjacent configuration see Section 7).

### **6.1 Calculation of distances**

Each robot link is modelled as a cylinder and is characterized as $RC\_i = \left( c\_{1i}^{RC}, c\_{2i}^{RC}, r\_i^{RC} \right)$ (see Section 4). The application of the procedure to calculate distances between link *i* of the robot and the patterned obstacle *j* (which may be a cylinder, a sphere or a prism) can give rise to three different cases to prevent collisions:

### **A) Cylinder-Sphere**

See Fig. 6. Here we compute the distance from a line segment (the cylinder axis) to a point (the centre of the sphere). Let *AB* be a line segment specified by the endpoints *A* and *B*. Given an arbitrary point *C*, the problem is to determine the point *P* on *AB* closest to *C*. Then we calculate the distance between these two points.

Projecting *C* onto the extended line through *AB* provides the solution. If the projection point *P* lies within the segment, then *P* itself is the correct answer.

If *P* lies outside the segment, then the segment endpoint closest to *C* is instead the closest point (A or B).
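The projection-and-clamp step just described can be sketched as follows (an illustrative snippet, not the authors' code): the line parameter *t* of the projection is clamped to [0, 1], which selects endpoint *A* or *B* automatically when the projection falls outside the segment.

```python
import math

# Case A sketch: closest point P on segment AB to a point C, by projecting C
# onto the line through AB and clamping the line parameter t to [0, 1].

def closest_point_on_segment(a, b, c):
    ab = [bi - ai for ai, bi in zip(a, b)]
    ac = [ci - ai for ai, ci in zip(a, c)]
    denom = sum(x * x for x in ab)
    if denom == 0.0:                      # degenerate segment: A == B
        return tuple(a)
    t = sum(x * y for x, y in zip(ab, ac)) / denom
    t = max(0.0, min(1.0, t))             # clamp: endpoint A or B if outside
    return tuple(ai + t * x for ai, x in zip(a, ab))

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Sphere centre above the middle of the segment: P is the projection itself.
p = closest_point_on_segment((-1.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(p, dist(p, (0.0, 2.0, 0.0)))  # (0.0, 0.0, 0.0) 2.0
```

For the cylinder–sphere pair itself, the clearance would then be this distance minus the cylinder radius and the sphere radius.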

### **B) Cylinder-Cylinder**


See Fig. 6. Here we compute the distance between two line segments. The problem of determining the closest points between two line segments *S1* (P1Q1) and *S2* (P2Q2) (and therefore the distance) is more complicated than computing the closest points of the lines *L1* and *L2* of which the segments are a part. Only when the closest points of *L1* and *L2* happen to lie on the segments does the method for closest points between lines apply. For the case in which the closest points between *L1* and *L2* lie outside one or both segments, a common misconception is that it is sufficient to clamp the outside points to the nearest segment endpoint. It can be shown that if just one of the closest points between the lines is outside its corresponding segment, that point can be clamped to the appropriate endpoint of the segment and the point on the other segment closest to the endpoint is computed.

If both points are outside their respective segments, the same clamping procedure must be repeated twice.
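The clamping procedure above can be sketched with the standard segment–segment closest-point computation (an illustrative implementation, not the authors' code): the line parameters *s* and *t* are clamped to [0, 1], and when the clamped *t* leaves the segment, *s* is recomputed and clamped again, which is the "repeated twice" step of the text.

```python
import math

# Case B sketch: closest points between segments P1Q1 and P2Q2.
# Assumes non-degenerate segments (nonzero length).

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

def closest_segment_segment(p1, q1, p2, q2):
    d1 = [b - a for a, b in zip(p1, q1)]      # direction of S1
    d2 = [b - a for a, b in zip(p2, q2)]      # direction of S2
    r = [a - b for a, b in zip(p1, p2)]
    a = sum(x * x for x in d1)
    e = sum(x * x for x in d2)
    f = sum(x * y for x, y in zip(d2, r))
    c = sum(x * y for x, y in zip(d1, r))
    b = sum(x * y for x, y in zip(d1, d2))
    denom = a * e - b * b                     # always >= 0; 0 if parallel
    s = clamp((b * f - c * e) / denom) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    # If t falls outside [0, 1], clamp it and recompute s (the clamping
    # procedure "repeated twice" described in the text).
    if t < 0.0:
        t, s = 0.0, clamp(-c / a)
    elif t > 1.0:
        t, s = 1.0, clamp((b - c) / a)
    c1 = tuple(p + s * d for p, d in zip(p1, d1))
    c2 = tuple(p + t * d for p, d in zip(p2, d2))
    return c1, c2

c1, c2 = closest_segment_segment((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                                 (2.0, 1.0, 0.0), (3.0, 1.0, 0.0))
print(c1, c2)  # (1.0, 0.0, 0.0) (2.0, 1.0, 0.0)
```

The cylinder–cylinder clearance is then the distance between these two points minus both cylinder radii.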

### **C) Cylinder-Prism**

See Fig. 6. The prismatic surfaces are divided into triangles. In this case we compute the distance between a line segment and a triangle. The closest pair of points between a line segment *PQ* and a triangle is not necessarily unique. When the line segment is parallel to the plane of the triangle, there may be an infinite number of equally close pairs. However, regardless of whether the segment is parallel to the plane or not, it is always possible to locate a point such that the minimum distance falls either between the end point of the segment and the interior of the triangle or between the segment and an edge of the triangle. Thus, the closest pair of points (and therefore the distance) can be found by computing the closest pairs of points between the following entities:

• each endpoint of the segment and the interior of the triangle;
• the segment and each of the edges of the triangle.


The number of tests required to calculate the distance can be reduced in some cases.

Fig. 6. Three different cases to calculate distances (and prevent collisions)
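The decomposition for case C can be sketched as follows (an illustrative implementation of our own, not the authors' code): the segment–triangle distance is taken as the minimum over the segment against each triangle edge and each segment endpoint against the triangle interior. Note that if the segment actually pierces the triangle, a separate intersection test is needed, since the true distance is then zero.

```python
import math

def clamp(x):
    return max(0.0, min(1.0, x))

def seg_seg_distance(p1, q1, p2, q2):
    """Distance between two non-degenerate segments (same clamping as case B)."""
    d1 = [b - a for a, b in zip(p1, q1)]
    d2 = [b - a for a, b in zip(p2, q2)]
    r = [a - b for a, b in zip(p1, p2)]
    a = sum(x * x for x in d1); e = sum(x * x for x in d2)
    f = sum(x * y for x, y in zip(d2, r)); c = sum(x * y for x, y in zip(d1, r))
    b = sum(x * y for x, y in zip(d1, d2))
    denom = a * e - b * b
    s = clamp((b * f - c * e) / denom) if denom > 1e-12 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        t, s = 0.0, clamp(-c / a)
    elif t > 1.0:
        t, s = 1.0, clamp((b - c) / a)
    return math.dist([p + s * d for p, d in zip(p1, d1)],
                     [p + t * d for p, d in zip(p2, d2)])

def seg_triangle_distance(p, q, a, b, c):
    # (i) segment PQ against each of the three triangle edges
    best = min(seg_seg_distance(p, q, e0, e1)
               for e0, e1 in [(a, b), (b, c), (c, a)])
    # (ii) each segment endpoint against the triangle interior
    v0 = [y - x for x, y in zip(a, b)]
    v1 = [y - x for x, y in zip(a, c)]
    n = [v0[1] * v1[2] - v0[2] * v1[1],      # triangle normal v0 x v1
         v0[2] * v1[0] - v0[0] * v1[2],
         v0[0] * v1[1] - v0[1] * v1[0]]
    nn = sum(x * x for x in n)
    for ep in (p, q):
        d = sum((e - ai) * ni for e, ai, ni in zip(ep, a, n))
        proj = [e - (d / nn) * ni for e, ni in zip(ep, n)]
        v2 = [x - ai for x, ai in zip(proj, a)]
        d00 = sum(x * y for x, y in zip(v0, v0)); d01 = sum(x * y for x, y in zip(v0, v1))
        d11 = sum(x * y for x, y in zip(v1, v1)); d20 = sum(x * y for x, y in zip(v2, v0))
        d21 = sum(x * y for x, y in zip(v2, v1))
        den = d00 * d11 - d01 * d01
        v = (d11 * d20 - d01 * d21) / den     # barycentric coordinates of the
        w = (d00 * d21 - d01 * d20) / den     # projected endpoint
        if v >= 0.0 and w >= 0.0 and v + w <= 1.0:
            best = min(best, abs(d) / math.sqrt(nn))
    return best

tri = ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0))
print(seg_triangle_distance((0.5, 0.5, 1.0), (0.5, 0.5, 2.0), *tri))  # 1.0
```

The endpoint-versus-edge and endpoint-versus-vertex cases are already covered by the segment–edge tests, so only the interior of the triangle needs the extra projection check.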


## **7. Obtaining adjacent configurations**

The discrete configuration space is obtained by means of generating adjacent configurations. Given a feasible configuration *C k*, it is said that a new configuration *C p* is adjacent to the first if it is also feasible (i.e. it fulfils the characteristics associated to the robot modelling and avoids collisions with the obstacles), and in addition the following three properties are fulfilled:

1. The position of the end-effector that corresponds to a node of the discrete workspace is at a distance of one unit with respect to the position of the end-effector of configuration *C k*. That means that at least one of the following conditions has to be fulfilled:

$$\left| \alpha\_{nx}^{k} - \alpha\_{nx}^{p} \right| = \Delta x \qquad \left| \alpha\_{ny}^{k} - \alpha\_{ny}^{p} \right| = \Delta y \qquad \left| \alpha\_{nz}^{k} - \alpha\_{nz}^{p} \right| = \Delta z \tag{2}$$

*n* being the subscript corresponding to the significant point associated to the end-effector of the robot. For PUMA 560 with mobility in the base, *n*=7, as can be seen in Fig. 1.

What we obtain is a sequence of configurations that is contained in the path, so that by using interpolation we can obtain a collision-free and continuous path.

2. Verification of the absence of obstacles between adjacent configurations *C k* and *C p*. Since the algorithm works in a discrete space it is necessary to verify that there are no obstacles between adjacent configurations, for which the following condition is set out:

$$\left| \overrightarrow{\alpha\_i^k \alpha\_i^p} \right| \le 2 \cdot \min \left( r\_j \right) \tag{3}$$

where *rj* is the characteristic dimension of the smallest patterned obstacle. This condition is necessary to guarantee that the distance for each link between two adjacent configurations is less than the characteristic dimension of the smallest patterned obstacle.

3. Configuration *C p* must be such that it minimizes the following objective function:

$$\left\| C^{p} - C^{f} \right\| = \sum\_{i=1}^{n} \left( \left( \alpha\_{xi}^{p} - \alpha\_{xi}^{f} \right)^{2} + \left( \alpha\_{yi}^{p} - \alpha\_{yi}^{f} \right)^{2} + \left( \alpha\_{zi}^{p} - \alpha\_{zi}^{f} \right)^{2} \right) \tag{4}$$

*n* being the number of significant points of the robot. This third property facilitates reaching the final configuration, even for redundant robots, i.e. the robot's end-effector should not arrive at the final node with a configuration different from the desired one. On the other hand, this property has an influence on the configurations generated, biasing the configurations in the neighbourhood of the goal so that they are compatible with the final configuration.
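Properties 1 and 3 can be sketched directly from Eqs. (2) and (4) (illustrative names of our own, not the authors' code):

```python
# Eq. (2): the end-effector nodes of C^k and C^p differ by exactly one grid
# step on at least one axis.  Eq. (4): squared distance between the significant
# points of candidate C^p and those of the final configuration C^f.

TOL = 1e-9

def fulfils_eq2(alpha_k, alpha_p, dx, dy, dz):
    """At least one end-effector coordinate moves by exactly one grid unit."""
    return (abs(abs(alpha_k[0] - alpha_p[0]) - dx) < TOL or
            abs(abs(alpha_k[1] - alpha_p[1]) - dy) < TOL or
            abs(abs(alpha_k[2] - alpha_p[2]) - dz) < TOL)

def objective_eq4(points_p, points_f):
    """Sum over significant points of the squared Cartesian distance."""
    return sum((xp - xf) ** 2
               for pp, pf in zip(points_p, points_f)
               for xp, xf in zip(pp, pf))

print(fulfils_eq2((0.0, 0.0, 0.0), (0.5, 0.0, 0.0), 0.5, 1.0, 1.0))  # True
print(objective_eq4([(0.0, 0.0, 0.0)], [(1.0, 2.0, 2.0)]))           # 9.0
```

In the actual algorithm, Eq. (4) is the objective handed to the SQP solver, with the collision and geometry conditions entering as constraints.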

An optimization procedure is set up using a sequential quadratic programming (SQP) method. This method serves to minimize a quadratic objective function subject to a set of constraints, which might include simple bounds on the values of the variables, linear constraints and nonlinear continuous constraints. It is an iterative method.

Applying this procedure to the path planning problem, the objective function used is given by Eq. (4). The constraints are associated with the geometry of the robot, the limits of the actuators and the avoidance of collisions, and the configuration *C<sup>k</sup>* is used as an initial estimate for the resolution. The solution of this optimization problem gives the adjacent configuration *C<sup>p</sup>* sought. By repeatedly obtaining adjacent configurations, the discrete configuration space of the robot is generated. These configurations are recorded in a graph.

## **8. "Sequential" algorithm applied to solving the trajectory planning problem. Problem statement**

### **8.1 Introduction**


The "sequential" algorithm is based on an indirect approach to solving the trajectory planning problem. The algorithm takes into account the characteristics of the actuators (torque, power, jerk and consumed energy), the interpolation functions and the obstacles in the workspace. It generates the configuration space. Then, a graph is associated to the previously obtained configuration space, which allows a collision-free path to be obtained between the initial and final configurations. Once the path is available, the dynamic characteristics of the robot are included, setting an optimal trajectory planning problem which aims to obtain the minimum time trajectory that is compatible with the robot features and the actuator capabilities (torque, jerk and consumed energy constraints).

### **8.2 Obtaining a path**

First, the algorithm solves the path planning problem, obtaining the discrete configuration space of the robot (the discrete configuration space is generated by means of adjacent configurations, see Section 7) and then the minimum distance path is calculated. This path (a sequence of *m* configurations) is obtained by associating a weighted graph to the discrete configuration space and looking for the minimum weighted path. In the graph, the nodes correspond to the robot configurations and the arcs are related to joint displacements between adjacent configurations.

The weight corresponding to the arc that goes from node *k* ( *Ck* robot configuration) to node *p* ( *Cp* robot configuration), can be given as:

$$a(k, p) = \sum\_{i=1}^{3(F-1)} \left(\mathbf{x}\_i^p - \mathbf{x}\_i^k\right)^2 \tag{5}$$

provided that *Ck* and *Cp* are adjacent. In addition, *Ck* and *Cp* must satisfy the type (3) constraints that avoid obstacles between configurations, and the angle increment from *Ck* to *Cp* must be smaller than the magnitude of the forbidden zone for that joint, so that large displacements between adjacent configurations are avoided.

If the conditions above are not satisfied, then *a(k,p)* = ∞ is assumed. Finally, a search is performed on the weighted graph for the path that joins the node corresponding to the initial configuration to the node corresponding to the final configuration. Since the arcs satisfy *a(k,p)* ≥ 0, Dijkstra's algorithm is used to obtain the path that minimises the distance between the initial and final configurations. If this path exists, it yields a sequence *S* of *m* robot configurations.
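A minimal sketch of this search (standard Dijkstra over an adjacency dictionary; the node names and weights below are illustrative, not taken from the chapter):

```python
import heapq

def dijkstra(adj, start, goal):
    """Minimum-weight path in a graph {node: [(neighbour, weight), ...]}
    whose arc weights a(k, p) are all >= 0, as required by Eq. (5)."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, k = heapq.heappop(pq)
        if k == goal:
            break
        if d > dist.get(k, float("inf")):
            continue                      # stale queue entry
        for p, w in adj.get(k, []):
            nd = d + w
            if nd < dist.get(p, float("inf")):
                dist[p], prev[p] = nd, k
                heapq.heappush(pq, (nd, p))
    path, node = [goal], goal             # rebuild the path backwards
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

Here each node would be a robot configuration and each weight the squared displacement of Eq. (5); *a(k,p)* = ∞ is encoded simply by omitting the arc.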

### **8.3 Interpolation function**

Once the path has been obtained (up to this point the algorithm uses Cartesian coordinates), we have a sequence of *m* robot configurations, *S* = {*S1*(*qi1*), *S2*(*qi2*), …, *Sm*(*qim*)}. These configurations are now expressed in joint coordinates, and the objective is to look for a minimum time trajectory (*tmin*) that contains them. The path is decomposed into *m*-1 intervals, so that the time needed to reach configuration *Sj+1* from the initial configuration *S1* is *tj*, and the time spent in segment *j* (between configurations *Sj* and *Sj+1*) is *tj*-*tj-1*. Cubic interpolation functions have been used for the joint trajectories. They are defined by means of the joint variables between successive configurations, so that for segment *j*:

Sequential and Simultaneous Algorithms to Solve the Collision-Free Trajectory Planning Problem for Industrial Robots – Impact of Interpolation Functions and the Characteristics of …


$$q\_{ij}(t) = a\_{ij} + b\_{ij} \cdot t + c\_{ij} \cdot t^2 + d\_{ij} \cdot t^3 \quad \forall t \in \left[ t\_{j-1}, t\_j \right]$$

for *i*=1,…,*dof* (*dof* being the number of degrees of freedom of the robot) and *j*=1,…,*m*-1 (*m* being the number of robot configurations).

To ensure motion continuity between configurations, the following conditions associated to the given configurations are considered.

• Position: it gives a total of (*2dof (m-1)*) equations:

$$q\_{ij}\left(t\_{j-1}\right) = a\_{ij} + b\_{ij}t\_{j-1} + c\_{ij}t\_{j-1}^2 + d\_{ij}t\_{j-1}^3 \tag{6}$$

$$q\_{ij}\left(t\_j\right) = a\_{ij} + b\_{ij}t\_j + c\_{ij}t\_j^2 + d\_{ij}t\_j^3 \tag{7}$$

• Velocity: the initial and final velocities of the trajectory are zero, which gives rise to (*2dof*) equations:

$$
\dot{q}\_{i1} \left( t\_0 \right) = 0 \tag{8}
$$

$$
\dot{q}\_{im} \left( t\_m \right) = 0 \tag{9}
$$

When passing through each configuration, the final velocity of the previous configuration should be equal to the initial velocity of the next configuration, leading to (*dof (m-2)*) equations

$$
\dot{q}\_{ij}\left(t\_j\right) = \dot{q}\_{i,j+1}\left(t\_j\right) \tag{10}
$$

• Acceleration: For each intermediate configuration, the final acceleration of the previous configuration should be equal to the initial acceleration of the next configuration, giving rise to (*dof*(*m*-2)) equations:

$$
\ddot{q}\_{ij}\left(t\_j\right) = \ddot{q}\_{i,j+1}\left(t\_j\right) \tag{11}
$$

In addition, the minimum time trajectory must meet the following constraints:

• Maximum torque on the actuators,

$$
\tau\_i^{\min} \le \tau\_i(t) \le \tau\_i^{\max} \quad \forall t \in \left[0, t\_{\min}\right], i = 1...dof \tag{12}
$$

• Maximum power on the actuators,

$$P\_i^{\min} \le P\_i(t) \le P\_i^{\max} \quad \forall t \in \left[0, t\_{\min}\right], \ i = 1 \ldots dof \tag{13}$$

• Maximum jerk on the actuators,

$$
\dddot{q}\_i^{\min} \le \dddot{q}\_i(t) \le \dddot{q}\_i^{\max} \quad \forall t \in \left[0, t\_{\min}\right], \ i = 1 \ldots dof \tag{14}
$$

• Consumed Energy,

$$\sum\_{j=1}^{m-1} \left( \sum\_{i=1}^{dof} \varepsilon\_{ij} \right) \le E \tag{15}$$

*εij* being the energy consumed by actuator *i* between configurations *j* and *j*+1.

Given the large number of iterations required by the process, the technique used for obtaining the coefficients is crucial. The first task is to normalize the polynomials that define the stages (see Suñer et al., 2007). In short, the optimization problem is set up using incremental time variables in each interval, so that in the interval between *Sj* and *Sj+1* the time variable is Δ*tj* = *tj* - *tj-1*, and the objective function is

$$\sum\_{j=1}^{m-1} \Delta t\_j = t\_{\text{min}} \tag{16}$$

The solution is obtained by means of SQP procedures, so that at each iterative step it is necessary to obtain the above-mentioned polynomial coefficients from the estimation of the problem variables.
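For a single joint, the (*4(m-1)*) conditions (6)–(11) can be assembled and solved directly. The sketch below is a simplified one-dof illustration of that linear system (it does not reproduce the chapter's actual implementation, which normalizes the polynomials first):

```python
import numpy as np

def cubic_coefficients(q, t):
    """Coefficients (a_j, b_j, c_j, d_j) of q_j(t) = a + b t + c t^2 + d t^3
    for one joint through configurations q[0..m-1] at knot times t[0..m-1],
    with zero velocity at both trajectory ends and continuity of velocity
    and acceleration at interior knots (Eqs. (6)-(11), single dof)."""
    m = len(q)
    n = 4 * (m - 1)                      # unknowns: 4 per segment
    A, b = np.zeros((n, n)), np.zeros(n)
    pos = lambda tt: np.array([1.0, tt, tt**2, tt**3])
    vel = lambda tt: np.array([0.0, 1.0, 2*tt, 3*tt**2])
    acc = lambda tt: np.array([0.0, 0.0, 2.0, 6*tt])
    row = 0
    for j in range(m - 1):               # position at both ends of segment j
        A[row, 4*j:4*j+4] = pos(t[j]);   b[row] = q[j];   row += 1
        A[row, 4*j:4*j+4] = pos(t[j+1]); b[row] = q[j+1]; row += 1
    for j in range(m - 2):               # velocity/acceleration continuity
        A[row, 4*j:4*j+4] = vel(t[j+1]); A[row, 4*j+4:4*j+8] = -vel(t[j+1]); row += 1
        A[row, 4*j:4*j+4] = acc(t[j+1]); A[row, 4*j+4:4*j+8] = -acc(t[j+1]); row += 1
    A[row, 0:4] = vel(t[0]); row += 1    # zero initial velocity
    A[row, -4:] = vel(t[-1])             # zero final velocity
    return np.linalg.solve(A, b).reshape(m - 1, 4)
```

With *m* configurations this builds exactly 2(*m*-1) + 2(*m*-2) + 2 = 4(*m*-1) equations for the 4(*m*-1) unknown coefficients, matching the count quoted in Section 8.4.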

### **8.4 Obtaining a trajectory**


The trajectory is obtained by solving an optimization problem whose objective function is the total trajectory time and whose constraints are the maximum torques of the robot actuators, the maximum power, the maximum jerk and the consumed energy. The problem is solved by means of an SQP algorithm from the NAG Fortran mathematical library. In each iterative step it is necessary to obtain the coefficients of the previously mentioned polynomials from an estimate of the variables (*tj*). Notice that the conditions above define a system of (*4Ndof(m-1)*) independent linear equations. Since the complete trajectory has (*4Ndof(m-1)*) unknowns corresponding to the coefficients of the polynomials, the linear system can be solved, yielding the complete trajectory; this linear system is solved in each iteration within the optimization problem. The coefficients are needed to calculate the maximum torque, power, jerk and consumed energy for each of the actuators by solving the inverse dynamic problem in each interval.

Finally, when the optimization problem has been solved, we obtain the minimum time trajectory (subject to the mentioned constraints) and also all the kinematic properties of the robotic system.
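The structure of this minimum-time problem can be seen on a deliberately tiny case: one joint, one cubic segment with zero end velocities, whose peak velocity 1.5·|Δq|/T and peak acceleration 6·|Δq|/T² are known in closed form. The limits below are invented, and velocity/acceleration bounds stand in for the chapter's real torque, power, jerk and energy constraints, which are evaluated through the inverse dynamics.

```python
from scipy.optimize import minimize

q0, q1 = 0.0, 1.0                  # one joint moving between two configurations
v_lim, a_lim = 2.0, 4.0            # hypothetical actuator limits

# Cubic with zero end velocities: q(t) = q0 + (3 s^2 - 2 s^3)(q1 - q0), s = t/T.
def peak_vel(T): return 1.5 * abs(q1 - q0) / T
def peak_acc(T): return 6.0 * abs(q1 - q0) / T**2

# Minimize the segment time T subject to the kinematic limits (SQP, as in 8.4).
res = minimize(lambda x: x[0], x0=[1.0], method="SLSQP", bounds=[(1e-3, None)],
               constraints=[{"type": "ineq", "fun": lambda x: v_lim - peak_vel(x[0])},
                            {"type": "ineq", "fun": lambda x: a_lim - peak_acc(x[0])}])
T_min = res.x[0]
```

With these numbers the acceleration limit is the active constraint, so the optimum is T_min = (6/a_lim)^0.5.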

### **8.5 Impact of interpolation function**

The interpolation function has a strong impact on the robot's performance. Polynomial interpolation functions have been used in the "sequential" algorithm. It has been observed during the resolution of the examples that they extract the maximum dynamic capabilities of the robot's actuators, so that the robot moves faster than with any other interpolation function tried (harmonic functions, etc.). Therefore, when polynomial interpolation functions are used, the algorithm gives the best results from the point of view of the time required to do the tasks.

### **8.6 Application and examples solved**

Different examples have been solved for a PUMA 560 robot, with sequences of different initial and final configurations. The calculated trajectories meet constraints on torque, power, jerk and consumed energy, and the goal is to analyze the impact of these constraints on the generation of minimum time collision-free trajectories for industrial robots. The results obtained show that the constraints on consumed energy must allow the manipulator to exceed the requirements associated with the potential energy, since the algorithm works on the assumption that energy can be dissipated but not recovered. Also, an increase in the severity of the energy constraints results in longer-time trajectories with softer power requirements. When the constraints are not very severe, efficient trajectories can be obtained without high penalties on the working cycle time. An increase in the severity of the jerk constraints involves longer-time trajectories with softer power requirements and lower consumed energy. When the constraints are very severe, working times are also severely penalized. To obtain competitive results in the balance between cycle time and consumed energy, the actuators should work with the maximum admissible value of the jerk that lets the robot work with the desired accuracy.

## **9. "Simultaneous" algorithm applied to solving the trajectory planning problem. Problem statement**

### **9.1 Introduction**

The "simultaneous" algorithm is based on a direct approach to solving the trajectory planning problem, in which the path planning problem and the problem of determining the time history of motion are treated as one, instead of being treated separately as in the indirect methods. The algorithm is called "simultaneous" because of the simultaneous generation of the discrete configuration space and the minimum distance path, making use of the information that the objective function generates when new configurations are obtained. The algorithm works on a discretized configuration space which is generated gradually as the direct procedure solution evolves. It uses Cartesian coordinates (to specify the motion of the end-effector) and joint coordinates (to solve the inverse dynamic problem). An important role is played by the generation of adjacent configurations, using techniques described by Valero et al. (2006). The inverse dynamic problem is solved using Gibbs-Appell's equations, as proposed by Provenzano (2001). Any obstacle can be modelled using simple obstacle patterns (sphere, cylinder and prism), which helps calculate distances and avoid collisions.

The algorithm takes into account the torque required by the actuators, analyses the best interpolation function and considers the obstacles in the workspace. To obtain a new adjacent configuration *C k*, a first optimization problem has to be solved, which can be stated as follows:

$$\text{Find } C^k \text{, minimizing } \operatorname{Min}\left(\left\|\mathbf{C}^k - \mathbf{C}^f\right\|\right) = \operatorname{Min}\left(\sqrt{\sum\_{i=1}^n \left(\mathbf{x}\_i^k - \mathbf{x}\_i^f\right)^2}\right) \tag{17}$$

and subject to:

a. Geometrical constraints of the robot structure;

b. Constraints on the mobility of robot joints;

c. Collision avoidance within the robot workspace;

(where *xik* and *xif* are the Cartesian coordinates of the intermediate and final configurations *C k* and *C f*, respectively).

The process for calculating the whole trajectory between initial and final configurations ( *C 1* and *C f* ) is based on a second and different optimization problem which can be stated as:

$$\text{Find } q(t),\ \tau(t),\ t\_f \ \text{between each two configurations} \tag{18}$$

$$\text{Minimizing } \min\_{\tau \in \Omega} J = \int\_{0}^{t\_f} 1 \cdot dt \tag{19}$$

Subject to the robot dynamics


$$M(q(t))\ddot{q}(t) + C(q(t), \dot{q}(t))\dot{q}(t) + \mathbf{g}(q(t)) = \mathbf{\tau}(t) \tag{20}$$

Unknown boundary conditions for intermediate configurations a priori

$$\begin{aligned} q\left(t\_{int-1}\right) &= q\_{int-1} \; ; \; q\left(t\_{int}\right) = q\_{int}\\ \dot{q}\left(t\_{int-1}\right) &= \dot{q}\_{int-1} \; ; \; \dot{q}\left(t\_{int}\right) = \dot{q}\_{int} \\ \ddot{q}\left(t\_{int-1}\right) &= \ddot{q}\_{int-1} \; ; \; \ddot{q}\left(t\_{int}\right) = \ddot{q}\_{int} \end{aligned} \tag{21}$$

Boundary conditions for initial and final configuration (used to solve the first and final step)

$$\begin{aligned} q(0) &= q\_o \; ; \; q\left(t\_f\right) = q\_f\\ \dot{q}\left(0\right) &= 0 \; ; \dot{q}\left(t\_f\right) = 0 \end{aligned} \tag{22}$$

Actuator torque rate limits

$$
\tau\_{\min} \le \tau\left(t\right) \le \tau\_{\max} \tag{23}
$$

Collision avoidance within the robot workspace

$$d\_{ij} \ge r\_j + w\_j \tag{24}$$

where *dij* is the distance from any obstacle pattern *j* (sphere, cylinder or prism) to link *i*, *rj* is the characteristic radius of the obstacle pattern, and *wj* is the radius of the smallest cylinder that contains link *i*.
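For a spherical obstacle pattern, this check reduces to a point-to-segment distance. A small sketch (the link is modelled as the segment p0-p1, and all numbers in the test case are hypothetical):

```python
import numpy as np

def link_obstacle_clearance(p0, p1, centre, r_obstacle, w_link):
    """Eq. (24)-style test for a spherical obstacle pattern: the distance
    from the obstacle centre to the link segment p0-p1 must exceed the
    obstacle radius plus the radius of the cylinder wrapping the link."""
    seg = p1 - p0
    # Parameter of the closest point on the segment, clamped to [0, 1]
    s = np.clip(np.dot(centre - p0, seg) / np.dot(seg, seg), 0.0, 1.0)
    d_ij = np.linalg.norm(centre - (p0 + s * seg))
    return d_ij, d_ij >= r_obstacle + w_link
```

Cylinder and prism patterns reduce to analogous distance computations between the link segment and the pattern's characteristic geometry.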

As well, *q(t)* ∈ *Rn* is the vector of joint positions (*n* being the number of degrees of freedom of the robot), *τ(t)* ∈ *Rn* is the vector of actuator torques, *M(q(t))* ∈ *Rn×n* is the inertia matrix of the robot, *C(q(t), q̇(t))* ∈ *Rn×n* is a third-order tensor representing the coefficients of the centrifugal and Coriolis forces, *g(q(t))* ∈ *Rn* is the vector of gravity terms, and Ω is the state space in which the vector of actuator torques is feasible. Each time a new adjacent configuration *C k* is generated, an uncertainty to be overcome lies in the fact that at this stage we do not know its kinematic characteristics (particularly velocity and acceleration), although we know they should be compatible with the dynamic characteristics of the robot.

It should also be noted, when calculating the minimum time between two adjacent configurations, that each step starts from a configuration whose kinematic properties are known, and obtains the time and the kinematic properties at the end configuration. Hence, if the dynamic capabilities of the actuators had been exhausted at the end configuration (*τi(tint)* ≅ *τmin* or *τi(tint)* ≅ *τmax*, owing to the kinematic properties *q(tint)*, *q̇(tint)* and *q̈(tint)* generated there), it would have been impossible to observe the constraints on the next generation step of the trajectory, *τmin* ≤ *τ(tint+1)* ≤ *τmax*.

The process explained is applied repeatedly to generate adjacent configurations until reaching the final configuration. Finally, by connecting adjacent configurations, the whole trajectory is generated.

### **9.2 Interpolation function**

It must be noticed that three types of trajectory spans should be distinguished because of their different boundary conditions: the initial (which contains the initial configuration *C 1*), the final (which contains the final configuration C *<sup>f</sup>*), and the intermediate (which does not contain either the initial or the final configuration).

Each pair of adjacent configurations is interpolated using harmonic functions in order to limit the kinematic characteristics of the goal configuration, so that progression to the following step is admissible without violating the dynamic properties. In that way, it is not necessary to impose kinematic constraints on the process beforehand. Starting from the initial configuration, the harmonic function gives the kinematic characteristics of the configurations adjacent to it, and so on; the process of obtaining adjacent configurations can therefore continue until reaching the end. The results are influenced by the interpolation function used between adjacent configurations. Harmonic functions are used here because they limit the maximum values of velocity and acceleration required from the actuators. This important trait follows from the properties of Fourier series: the harmonic interpolation functions can be expressed by means of their Fourier series, which can ultimately be written as

$$f(t) = \mathbb{C}\_0 + \mathbb{C}\_1 \cos(t + \theta\_1) \tag{25}$$

whose coefficient *C1* is the amplitude of the fundamental component and *θ1* is the phase angle. It can be demonstrated that the values of the function are limited by the amplitude *C1* of the cosine term, i.e. to the interval [*C0*-*C1*, *C0*+*C1*]. Analyzing the harmonic function on the basis of the type of trajectory span, three cases are distinguished, with a different interpolation function used in each to determine its impact on the characteristics of the solution generated. The cases analyzed and the interpolation functions for each are as follows.

a. Initial span: Cases A, B and C

In all three cases we have used the same interpolation function for the first span, therefore the procedure to calculate the constants is identical

$$q\_{i1} = a\_{i1} \cdot \sin(\ t \ ) - b\_{i1} \cdot \cos(\ t \ ) + c\_{i1} \tag{26}$$

with *i* = 1..*Ndof*, where the subscript 1 denotes the first span and *Ndof* is the number of the robot's degrees of freedom. For this type of interpolation function, the velocity and acceleration values are limited by the coefficients *ai1* and *bi1*. The known boundary conditions are three: the initial and final configurations of the interval and the initial velocity. They allow the set of coefficients *ai1*, *bi1* and *ci1* to be obtained, which are dependent on time.
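The three conditions form a small linear system in (*ai1*, *bi1*, *ci1*). A sketch for one joint (the times and boundary values in the test case are invented):

```python
import numpy as np

def first_span_coefficients(q0, q1, v0, t0, t1):
    """Solve a, b, c of q(t) = a sin(t) - b cos(t) + c  (Eq. (26)) from the
    three known boundary conditions of the first span: q(t0) = q0,
    q(t1) = q1 and initial velocity dq(t0) = v0."""
    A = np.array([[np.sin(t0), -np.cos(t0), 1.0],
                  [np.sin(t1), -np.cos(t1), 1.0],
                  [np.cos(t0),  np.sin(t0), 0.0]])   # dq/dt = a cos t + b sin t
    return np.linalg.solve(A, np.array([q0, q1, v0]))
```

The same pattern, with one extra condition, yields the four coefficients of the intermediate-span functions in cases A, B and C.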

b. Intermediate span.

Three different interpolation functions corresponding to cases A, B and C have been used. To calculate the constants in each case we have proceeded as follows:

### b1) Case A


The interpolation function is

$$q_{ij} = a_{ij} \cdot \sin(t) - b_{ij} \cdot \cos(2 \cdot t) + c_{ij} \cdot \sin(3 \cdot t) - d_{ij} \cdot \cos(4 \cdot t) \tag{27}$$

with *i* = 1..*Ndof* and *j* = 1..*Nspan*, where *Nspan* is the number of the span being analyzed. From experience in the resolution of a great number of cases, a polynomial term has been added to ensure the boundary conditions of velocity and acceleration along the trajectory in this span. The velocity and acceleration equations are

$$\dot{q}_{ij} = a_{ij} \cdot \cos(t) + 2 \cdot b_{ij} \cdot \sin(2 \cdot t) + 3 \cdot c_{ij} \cdot \cos(3 \cdot t) + 4 \cdot d_{ij} \cdot \sin(4 \cdot t) \tag{28}$$

$$\ddot{q}_{ij} = -a_{ij} \cdot \sin(t) + 4 \cdot b_{ij} \cdot \cos(2 \cdot t) - 9 \cdot c_{ij} \cdot \sin(3 \cdot t) + 16 \cdot d_{ij} \cdot \cos(4 \cdot t) \tag{29}$$

Their values are limited by the coefficients *aij*, *bij*, *cij* and *dij*. The known boundary conditions are four: the initial and final configurations of the span and the velocities and accelerations at the beginning, and they allow the expressions for the constants *aij, bij, cij* and *dij* to be determined which, as in the previous case, are dependent on time.
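Because Eqs. (27)–(29) are linear in the coefficients, the four boundary conditions reduce to a 4×4 linear system for each joint and span. The following sketch is our illustration, not the chapter's code; the function name, variable names and sample values are assumptions:

```python
import numpy as np

def case_a_coeffs(q0, qT, v0, acc0, T):
    """Coefficients (a, b, c, d) of Eq. (27) for one joint and one span of
    duration T, from the boundary conditions q(0), q(T), dq(0), ddq(0)."""
    M = np.array([
        [0.0,       -1.0,           0.0,           -1.0],            # q(0),   Eq. (27) at t = 0
        [np.sin(T), -np.cos(2 * T), np.sin(3 * T), -np.cos(4 * T)],  # q(T),   Eq. (27) at t = T
        [1.0,        0.0,           3.0,            0.0],            # dq(0),  Eq. (28) at t = 0
        [0.0,        4.0,           0.0,           16.0],            # ddq(0), Eq. (29) at t = 0
    ])
    return np.linalg.solve(M, np.array([q0, qT, v0, acc0]))

a, b, c, d = case_a_coeffs(q0=0.2, qT=1.3, v0=0.0, acc0=0.0, T=1.5)
```

A singular matrix would signal a span duration for which this basis cannot meet the boundary conditions; since the chapter's iterative procedure treats the coefficients as time functions, they are recomputed whenever the span time changes.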

### b2) Case B

The interpolation function is

$$q_{ij} = \cos(t) \cdot \left( \sin(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + c_{ij} \right) + d_{ij} \tag{30}$$

Velocity and acceleration equations are

$$\dot{q}_{ij} = \cos(t) \cdot \left( \cos(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + a_{ij} \cdot \cos(t) \cdot \sin(t) \right) - \sin(t) \cdot \left( \sin(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + c_{ij} \right) \tag{31}$$

$$\ddot{q}_{ij} = \cos(t) \cdot \left( -a_{ij} \cdot \sin^2(t) - \sin(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + 2 \cdot a_{ij} \cdot \cos^2(t) \right) - \cos(t) \cdot \left( \sin(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + c_{ij} \right) - 2 \cdot \sin(t) \cdot \left( \cos(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + a_{ij} \cdot \cos(t) \cdot \sin(t) \right) \tag{32}$$

Their values are limited by the new coefficients *aij*, *bij*, *cij* and *dij*. The known boundary conditions are also four: the initial and final configurations of the span and the velocities and accelerations at the beginning. Therefore the constants *aij, bij, cij* and *dij* can be determined which, as in the previous case, are dependent on time.
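Printed velocity and acceleration formulas like Eqs. (31)–(32) are easily mangled in typesetting, so a numerical derivative check is cheap insurance. This sketch (ours, not the chapter's code; the coefficient values are arbitrary) verifies the Case B expressions against central finite differences:

```python
import math

# Case B interpolation function for one joint, Eq. (30), with arbitrary
# illustrative coefficients.
a, b, c, d = 0.7, -0.4, 1.2, 0.3

def q(t):
    return math.cos(t) * (math.sin(t) * (a * math.sin(t) + b) + c) + d

def dq(t):  # velocity, Eq. (31)
    return (math.cos(t) * (math.cos(t) * (a * math.sin(t) + b)
                           + a * math.cos(t) * math.sin(t))
            - math.sin(t) * (math.sin(t) * (a * math.sin(t) + b) + c))

def ddq(t):  # acceleration, Eq. (32)
    s, co = math.sin(t), math.cos(t)
    return (co * (-a * s * s - s * (a * s + b) + 2 * a * co * co)
            - co * (s * (a * s + b) + c)
            - 2 * s * (co * (a * s + b) + a * co * s))

# Central finite differences agree with the analytic expressions.
h = 1e-6
for t in (0.0, 0.5, 1.3):
    assert abs((q(t + h) - q(t - h)) / (2 * h) - dq(t)) < 1e-6
h2 = 1e-4
for t in (0.0, 0.5, 1.3):
    assert abs((q(t + h2) - 2 * q(t) + q(t - h2)) / h2 ** 2 - ddq(t)) < 1e-5
```

The same check applies verbatim to Cases A and C by swapping in the corresponding expressions.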

### b3) Case C

The interpolation function is

$$q\_{ij} = \sin(t) \cdot (\cos(t) \cdot (a\_{ij} \cdot \sin(t) + b\_{ij}) + c\_{ij}) + d\_{ij} \tag{33}$$

Velocity and acceleration equations are

$$\dot{q}_{ij} = \sin(t) \cdot \left( a_{ij} \cdot \cos^2(t) - \sin(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) \right) + \cos(t) \cdot \left( \cos(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + c_{ij} \right) \tag{34}$$

$$\ddot{q}_{ij} = 2 \cdot \cos(t) \cdot \left( a_{ij} \cdot \cos^2(t) - \sin(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) \right) - \sin(t) \cdot \left( \cos(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) + c_{ij} \right) + \sin(t) \cdot \left( -\cos(t) \cdot (a_{ij} \cdot \sin(t) + b_{ij}) - 3 \cdot a_{ij} \cdot \cos(t) \cdot \sin(t) \right) \tag{35}$$

Sequential and Simultaneous Algorithms to Solve the Collision-Free Trajectory Planning Problem for Industrial Robots – Impact of Interpolation Functions and the Characteristics of …

Their values are limited by the new coefficients *aij*, *bij*, *cij* and *dij*. The known boundary conditions are also four: the initial and final configurations of the span and the velocities and accelerations at the beginning. Therefore the constants *aij, bij, cij* and *dij* can be determined which, as in the previous case, are dependent on time.

c. Final span: Cases A, B and C

In all three cases we used the same interpolation function for the last span and therefore the procedure to calculate the constants is identical

$$q_{iF} = a_{iF} \cdot \sin(t) + b_{iF} \cdot \cos(t) + c_{iF} \cdot \sin^2(t) + d_{iF} \cdot t + e_{iF} \cdot t^2 \tag{36}$$

with *i*=1..*Ndof* and *F* is for the final trajectory span.

In this type of span a polynomial term, in this case of degree 2, has been introduced to ensure the continuity of velocity and acceleration. The velocity and acceleration equations are

$$\dot{q}_{iF} = a_{iF} \cdot \cos(t) - b_{iF} \cdot \sin(t) + 2 \cdot c_{iF} \cdot \sin(t) \cdot \cos(t) + d_{iF} + 2 \cdot e_{iF} \cdot t \tag{37}$$

$$\ddot{q}_{iF} = -a_{iF} \cdot \sin(t) - b_{iF} \cdot \cos(t) + 2 \cdot c_{iF} \cdot \left( \cos^2(t) - \sin^2(t) \right) + 2 \cdot e_{iF} \tag{38}$$

Their values are limited by the coefficients *aiF*, *biF*, *ciF*, *diF* and *eiF*. The known boundary conditions are five: the initial and final configurations of the last span, the velocity and acceleration at the beginning of the interval, and the velocity at the end. These boundary conditions enable the coefficients *aiF*, *biF*, *ciF*, *diF* and *eiF* to be obtained.
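Eq. (36) is likewise linear in its five coefficients, so the five boundary conditions give a 5×5 linear system. A minimal sketch (ours; the function name and the sample values are assumptions):

```python
import numpy as np

def final_span_coeffs(q0, qT, v0, acc0, vT, T):
    """Coefficients (a, b, c, d, e) of Eq. (36) for one joint in the final
    span, from the five boundary conditions listed above."""
    s, co = np.sin(T), np.cos(T)
    M = np.array([
        [0.0,  1.0, 0.0,        0.0, 0.0],      # q(0),   Eq. (36) at t = 0
        [s,    co,  s * s,      T,   T * T],    # q(T),   Eq. (36) at t = T
        [1.0,  0.0, 0.0,        1.0, 0.0],      # dq(0),  Eq. (37) at t = 0
        [0.0, -1.0, 2.0,        0.0, 2.0],      # ddq(0), Eq. (38) at t = 0
        [co,  -s,   2 * s * co, 1.0, 2 * T],    # dq(T),  Eq. (37) at t = T
    ])
    return np.linalg.solve(M, np.array([q0, qT, v0, acc0, vT]))

a, b, c, d, e = final_span_coeffs(q0=0.1, qT=0.9, v0=0.2, acc0=0.0, vT=0.0, T=1.2)
```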

Whenever a new adjacent configuration is generated by solving Eq. (4), a new trajectory span is also created (by solving the second optimization problem, Eq. (17)), and the necessary time *tj* to perform the span is then obtained. The joint positions are adjusted using the corresponding harmonic interpolation function again. The solution of the equations is obtained by iteration using sequential quadratic programming (SQP) techniques through the commercial mathematical software NAG (Numerical Algorithms Group). At each step of the iterative process it is necessary to recalculate the coefficients of the harmonic interpolation functions used, since they are time functions. To facilitate calculations, each span has been discretized into ten subintervals, so that the kinematic and dynamic characteristics are calculated at this discrete set of points. The solution of the optimization process provides the minimum time *tj* to go from one configuration to its adjacent one and consequently the joint positions $q(t)$ that must be followed between these two configurations, as well as the necessary torques in the actuators $\tau(t)$ and the corresponding kinematic characteristics $\dot{q}(t)$ and $\ddot{q}(t)$.
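The per-span minimum-time subproblem can be illustrated on a deliberately simplified model. The sketch below is our toy stand-in, not the chapter's formulation: one joint of inertia *I* follows a rest-to-rest harmonic profile and the only constraint is a torque bound, checked at the ten-subinterval grid described above; the chapter itself runs SQP on the full robot dynamics. Bisecting on the span duration then yields the minimum admissible time:

```python
import math

# Toy 1-DOF stand-in: q(t) = q0 + (qf - q0) * (1 - cos(pi*t/T)) / 2 driven by
# a pure inertia, with the torque bound |I * ddq(t)| <= tau_max.
I, tau_max = 2.0, 5.0
q0, qf = 0.0, 1.0

def feasible(T, n=10):
    """Torque bound checked at the n+1 points of a ten-subinterval grid."""
    for k in range(n + 1):
        t = T * k / n
        ddq = (qf - q0) * (math.pi / T) ** 2 * math.cos(math.pi * t / T) / 2
        if abs(I * ddq) > tau_max:
            return False
    return True

# Bisect on the span duration: long spans are feasible, very short ones not.
lo, hi = 1e-3, 10.0          # lo infeasible, hi feasible
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if feasible(mid):
        hi = mid
    else:
        lo = mid
t_min = hi                    # minimum admissible span time
```

For this profile the peak acceleration occurs at the span endpoints, so the analytical optimum is T = π·sqrt(I·(qf − q0)/(2·τmax)) ≈ 1.405, which the bisection reproduces.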

### **9.3 Impact of interpolation function**

As was said earlier, the impact of the interpolation function is very important from the point of view of the robot's performance. Three types of interpolation functions (A, B and C, all harmonic) have been used for the computation of intermediate configurations when using the "simultaneous" algorithm. Pure polynomial interpolation functions have been excluded because they exceeded the dynamic capabilities of the actuators and therefore the algorithm failed to reach any solution. Therefore, after having analysed all kinds of interpolation functions, we state that the best of them all is C (notice that each actuator has been characterized by the maximum and minimum torque it can provide, see Eq. (23)). Nonetheless, both the computational and execution times are very high compared with the results obtained using the "sequential" algorithm.

## **9.4 Cost function**


An important point is to understand the process by which the algorithm gradually creates the trajectory. The algorithm works in a discretised workspace (see Rubio et al., 2009), looking for a trajectory that joins the initial and final configurations: starting from the initial configuration, it generates adjacent configurations and branches out from the most promising one, obtaining new configurations until reaching the final one. Therefore, the trajectory contains a discrete set of intermediate configurations. To ensure that the process moves from one configuration to another, that is, that the algorithm branches out from a general intermediate configuration to generate more new adjacent configurations, the uniform cost function is used. The discrete configuration space is analysed as a graph, where the configurations generated are the nodes and the arc between nodes, arc(*i*, *j*) = *time*(*i*, *j*), is the time necessary to perform the motion between adjacent configurations. It is desirable that the number of configurations generated is not high and, in addition, that these configurations enable efficient trajectories to be obtained. The process followed to achieve the growth of the configuration space in the search for the final configuration is as follows:

Let $CC = \{C^1, C^2, \ldots, C^k\}$ be the set of existing configurations at a given instant, and *CR* the subgroup of *CC* that contains *r* (*r* < *k*) configurations that have still not been used to branch out. Now, it is necessary to follow what is called a branching strategy or searching strategy to select a configuration $C^p$ belonging to *CR*, from which the algorithm tries to generate another six new adjacent configurations $C^{p+1}$, $C^{p+2}$, $C^{p+3}$, $C^{p+4}$, $C^{p+5}$ and $C^{p+6}$ (according to the technique explained in Valero (2006)), which are new configurations belonging to *CR*, while $C^p$ is taken out of this subgroup. The process finishes when the final configuration is reached. The cost function *c(p)* used to select a new configuration to branch out from is defined as follows

• Uniform Cost: the time function *c(j)* associated with the configuration $C^j$ is defined as the minimum sum of arcs that permit the node *j* to be reached from the initial node

$$c(j) = time(1, j) \tag{39}$$

And the new branching is started from the configuration $C^p$ which meets

$$c(p) = \min\left[c(j)\right], \forall \ j \in \mathbb{CR} \tag{40}$$

When a set of adjacent configurations has been created, Eq. (40) is used to select the one from which the process is expected to branch out again. Given two adjacent configurations, the minimum time between them is calculated as explained in Section 4. This time is used to select the new configuration, as explained in Section 5, and the branching process is repeated from it until the final configuration is reached.
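Eqs. (39)–(40) describe a uniform-cost (Dijkstra-like) expansion of the configuration graph. A compact sketch (ours, not the chapter's code): the hypothetical `neighbours` callback stands in for the generation of up to six adjacent configurations together with the minimum span times from the optimization subproblem.

```python
import heapq

def uniform_cost_search(start, goal, neighbours):
    """Expand the configuration with the smallest accumulated time c(j),
    as in Eqs. (39)-(40). `neighbours(conf)` yields (adjacent_configuration,
    span_time) pairs. Returns the minimum total time to reach `goal`,
    or None if it is unreachable."""
    frontier = [(0.0, start)]            # (c(j), configuration) -- the CR subset
    best = {start: 0.0}                  # best known c(j) per configuration
    while frontier:
        cost, conf = heapq.heappop(frontier)
        if conf == goal:
            return cost
        if cost > best.get(conf, float("inf")):
            continue                     # stale entry; already branched cheaper
        for nxt, span_time in neighbours(conf):
            new_cost = cost + span_time
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return None
```

Because arc times are non-negative, the first time the goal configuration is popped its accumulated time is minimal on the generated graph, matching the claim in Section 9.5.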

### **9.5 Obtaining the trajectory**

When the final configuration is reached we know not only the robot configuration through the joint positions $q(t)$ but also the necessary torques $\tau(t)$ and the kinematic

characteristics of the motion $\dot{q}(t)$ and $\ddot{q}(t)$. The trajectory obtained is of minimum time on the graph generated. To obtain the global minimum time, the process should be repeated with different discretization sizes; the global minimum time is the smallest of all the times calculated.

### **9.6 Application and examples solved**

This algorithm has been applied to the PUMA 560 robot type, and a great number of examples have been analysed. Four important operational parameters have been monitored: the computational time used in generating a solution, the execution time, the distance travelled (which corresponds to the sum of the whole distance travelled by each significant point throughout the path from the initial to the final configuration, measured in meters) and the number of configurations generated. Through the examples, the behaviour of these four operational parameters when the simultaneous algorithm and the different interpolation functions are used can be analysed. The results obtained show that the worst computational time is achieved when using the interpolation function of case A; cases B and C yield similar results. The results also show that the smallest execution time, the smallest distance travelled and the smallest number of configurations generated are all achieved when using the interpolation function of case C.

## **10. Conclusion**

In this paper, two algorithms that solve the trajectory planning problem for industrial robots in an environment with obstacles have been introduced and summarized. They have been called the "sequential" and "simultaneous" algorithms respectively. Both are off-line algorithms. The first one is based on an indirect methodology because it solves the trajectory planning problem in two sequential steps (first a path is generated and, once the path is known, a trajectory is adjusted to it). Polynomial interpolation functions have been used in this algorithm because they yield the best results. Besides, the trajectories calculated meet constraints on torque, power, jerk and energy consumed. The second algorithm is a direct method, which solves the equations in the state space of the robot. Unlike other direct methods, it does not use previously defined paths, which enables working with mobile obstacles, although the obstacles used in this chapter are static. Three types of interpolation functions (harmonic functions) have been used for the computation of intermediate configurations. Polynomial interpolation functions have been excluded from this algorithm because convergence problems in the optimization problem came up during the resolution phase of the examples.

The main conclusions are summarized as follows:

a. The algorithms solve the trajectory planning problem for industrial robots in environments with obstacles, therefore avoiding collisions.

b. It can be applied to any industrial robot.

c. "Sequential" algorithm:

	- c.1. Constraints on the energy consumed must be compatible with the robot's demanded potential energy, as energy recovery is not considered: the algorithm works on the assumption that energy can be dissipated but not recovered.
	- c.2. To obtain competitive results in the balance between cycle time and energy consumed, the actuators should work with the maximum admissible value of the jerk so that the robot can work with the desired accuracy.
	- c.3. The cubic interpolation function gives the best computational and execution time.

d. "Simultaneous" algorithm: as for the peculiarities of the interpolation functions in relation to the four monitored operating parameters (computational time, execution time, distance travelled and number of configurations generated), the main point is that the best results are obtained when using the interpolation function of case C (taking into account that each actuator has been characterized by the maximum and minimum torque it can provide). With this algorithm the cubic interpolation function does not work because, during the resolution phase of the examples, it exceeded the dynamic capabilities of the actuators and therefore the algorithm failed to reach any solution.

## **11. Acknowledgment**

This paper has been possible thanks to the funding of the Science and Innovation Ministry of the Spanish Government by means of the Research and Technological Development Project DPI2010-20814-C02-01 (IDEMOV).

## **12. References**

Abdel-Malek, K., Mi, Z., Yang, J.Z. & Nebel, K. (2006), Optimization-based trajectory planning of the human upper body, *Robotica*, Vol. 24, nº 6, pp. (683-696).

Bobrow, J.E., Dubowsky, S. & Gibson, J.S. (1985), Time-Optimal Control of Robotic Manipulators Along Specified Paths, *International Journal of Robotics Research*, Vol. 4, nº 3, pp. (3-17).

Chen, Y. & Desrochers, A.A. (1989), Structure of minimum time control law for robotic manipulators with constrained paths, *IEEE Int Conf Robot Automat*, ISBN: 0-8186-1938-4, pp. (971-976), Scottsdale, USA, 1989.

Chettibi, T., Lehtihet, H.E., Haddad, M. & Hanchi, S. (2002), Optimal pose trajectory planning for robot manipulators, *Mechanism and Machine Theory*, Vol. 37, nº 10, pp. (1063-1086).

Chettibi, T., Lehtihet, H.E., Haddad, M. & Hanchi, S. (2004), Minimum cost trajectory planning for industrial robots, *European Journal of Mechanics A-Solids*, Vol. 23, nº 4, pp. (703-715).

Cho, B. H., Choi, B. S. & Lee, J. M. (2006), Time-optimal trajectory planning for a robot system under torque and impulse constraints, *International Journal of Control, Automation, and Systems*, Vol. 4, nº 1, pp. (10-16).

Constantinescu, D. & Croft, E.A. (2000), Smooth and time-optimal trajectory planning for industrial manipulators along specified paths, *Journal of Robotic Systems*, Vol. 17, nº 5, pp. (233-249).

du Plessis, L. J. & Snyman, J. A. (2003), Trajectory-planning through interpolation by overlapping cubic arcs and cubic splines, *International Journal for Numerical Methods in Engineering*, Vol. 57, nº 11, pp. (1615-1641).

Field, G. & Stepanenko, Y. (1996), Iterative dynamic programming: an approach to minimum energy trajectory planning for robotic manipulators, *Proc. of the IEEE*


**29**


## **Methodology for System Adaptation Based on Characteristic Patterns**

Eva Volná1, Michal Janošek1, Václav Kocian1, Martin Kotyrba1 and Zuzana Oplatková2
*1University of Ostrava, Czech Republic*
*2Tomas Bata University in Zlín, Czech Republic*

## **1. Introduction**


This paper describes the methodology for system description and application so that the system can be managed using real-time system adaptation. The term system here can represent any structure regardless of its size or complexity (industrial robots, mobile robot navigation, stock markets, production systems, control systems, etc.). The methodology describes the whole development process, from the system requirements to a software tool that will be able to execute a specific system adaptation.

In this work, we propose approaches relying on machine learning methods (Bishop, 2006), which make it possible to characterize key patterns and to detect them in real time, even in altered form. Then, based on the pattern recognized, a suitable intervention can be applied to the system inputs so that the system responds in the desired way. Our aim is to develop and apply a hybrid approach based on machine learning methods, particularly soft-computing methods, to identify patterns successfully and to subsequently adapt the system. The main goal of the paper is to recognize important patterns and to adapt the system's behaviour, based on the pattern recognized, in the desired way.

The paper is arranged as follows: Section 1 introduces the topic of the article. Section 2 details the feature extraction process used to optimize the patterns that serve as inputs to the experiments. The pattern recognition algorithms using machine learning methods are discussed in Section 3. Section 4 describes the data sets used and covers the experimental results, and a conclusion is given in Section 5. We focus on the reliability of the recognition performed by the described algorithms with optimized patterns, with the aim of reducing the calculation costs. All results are compared with each other.

### **1.1 The methodology for system description**

Gershenson (Gershenson, 2007) proposed a methodology, called *The General Methodology*, for the system description necessary to manage a system. It presents a conceptual framework for describing systems as self-organizing and consists of five steps: representation, modelling, simulation, application and evaluation. Our goal is to use and adapt this methodology for our specific needs. Basically, we would like to describe a methodology that the designer can use to describe his system, find key patterns in its behaviour based on the



observation, prepare a suitable response to the patterns that emerge from time to time, and adapt to any deviation in the system's behaviour.

As we are using Gershenson's methodology, we are not going to describe it in detail here; the details can be found in his book (Gershenson, 2007). Let's mention the parts of his methodology that are crucial to our work. The methodology is useful for designing and controlling complex systems. Basically, a complex system consists of two or more interconnected components that react together and are very complicated to separate, so the system's behaviour cannot be deduced from the behaviour of its individual components. This deduction becomes more complicated the more components #*E* and the more interactions #*I* the system has (*C*sys corresponds to system complexity, *C*e to element complexity and *C*i to interaction complexity).

$$C_{sys} \sim \left\{\; \#E,\ \#I,\ \sum_{j=0}^{\#E} C_{e_j},\ \sum_{k=0}^{\#I} C_{i_k} \;\right\} \tag{1}$$
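Eq. (1) only states that system complexity scales with the number of elements, the number of interactions, and their summed complexities; it does not fix how these quantities combine. A minimal sketch, assuming a simple additive aggregate (our assumption, not Gershenson's definition):

```python
def system_complexity(element_complexities, interaction_complexities):
    """Crude aggregate in the spirit of Eq. (1): the result grows with the
    number of elements #E, the number of interactions #I, and the summed
    complexities of both. The additive combination is an illustrative choice."""
    E = len(element_complexities)       # #E
    I = len(interaction_complexities)   # #I
    return (E + I
            + sum(element_complexities)
            + sum(interaction_complexities))
```

Any monotone combination of the four quantities would be equally consistent with the proportionality in Eq. (1); the additive form merely makes the scaling behaviour concrete.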

Imagine a manufacturing factory. We can describe the manufacturing factory as a complex system. Now it is important to realize that we can have several levels of abstraction, starting from a single manufacturing line up to the whole factory complex. The manufacturing line can consist of many components. There can be robots, which perform the main job, and conveyor belts, roller beds, jigs, hangers and other equipment responsible for product or material transport. All the interactions are in some way related to the material or product. Although it is in our best interest to run all the processes smoothly, there will always be incidents we cannot predict exactly. The supply of material can be interrupted or delayed, any piece of equipment can have a malfunction, and it is hard to predict when that will happen and how long it will take. Because there are interactions among many of these components, we can call the manufacturing factory a complex system.

If we want to characterize a system, we should create its model. Gershenson (Gershenson, 2002) proposes two types of models, absolute and relative. The absolute model (abs-model) refers to what the thing actually is, independently of the observer. The relative model (rel-model) refers to the properties of the thing as distinguished by an observer within a context. We can say that the rel-model is a model, while the abs-model is what is modelled. Since we are all limited observers, it becomes clear that we can speak about reality only with rel-beings/models (Gershenson, 2007).

So how can we model a complex system? Any complex system can be modelled using a multi-agent system (MAS), where each of the system's components is represented by an agent and any interactions among the system's components are represented as interactions among agents. Following *The General Methodology*, any system can thus be modelled as a group of agents trying to satisfy their goals. A question arises: can we describe a system modelled as a group of agents as self-organizing? We think that we can say *Yes*. The agents in the MAS try to satisfy their goals, just as components in self-organizing systems interact with


each other to achieve a desired state or behaviour. If we determine that state to be a self-organizing state, we can call the system self-organizing and so define our complex self-organizing system. In our example with the manufacturing line, the self-organizing state will be a state where production runs smoothly, without any production delays. But how can we achieve that? Still using Gershenson's General Methodology, we can label the fulfilment of an agent's goal as its satisfaction σ ∈ [0,1]. The system's satisfaction σ*sys* (2) can then be represented as a function *f* of the satisfactions of its individual components, with values in [0,1].

$$\sigma_{sys} = f\left(\sigma_1, \sigma_2, \dots, \sigma_n, w_0, w_1, w_2, \dots, w_n\right) \tag{2}$$

*w0* represents a bias and the other weights *wi* represent the importance given to each σ*i*. Components which decrease σ*sys* while increasing their own σ*<sup>i</sup>* shouldn't be considered part of the system. Of course, it is hard to say whether increasing the satisfaction of each individual component is sufficient for a higher system satisfaction, because some components can use others to fulfil their goals. To maximize σ*sys* we should minimize the friction among components and increase their synergy. A mediator arbitrates among the elements of a system to minimize conflict, interference and friction, and to maximize cooperation and synergy. So we have two types of agents in the MAS: regular agents fulfil their goals, and mediator agents streamline their behaviour. Using this simple division of agents, we can build quite an adaptive system.
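Eq. (2) leaves *f* unspecified. A minimal sketch, assuming a weighted sum clipped to [0,1] (the clipped-linear form is our assumption, not the authors' definition):

```python
def system_satisfaction(sigmas, weights):
    """Concrete (assumed) form of Eq. (2): weights = [w0, w1, ..., wn],
    where w0 is a bias and w_i weights the satisfaction sigma_i of
    component i. The result is clipped to [0, 1], since satisfactions
    live in that interval."""
    w0, ws = weights[0], weights[1:]
    if len(ws) != len(sigmas):
        raise ValueError("need one weight per component plus a bias")
    raw = w0 + sum(w * s for w, s in zip(ws, sigmas))
    return max(0.0, min(1.0, raw))
```

A component with a large weight dominates σ_sys, which matches the text's reading of *wi* as the importance given to each σ*i*.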

### **1.2 Patterns as a system's behaviour description**

Every system has unique characteristics that can be described as patterns. Using patterns, we would like to characterize a particular system and its key characteristics. Generally, a system can sense a lot of data using its sensors. If we put the sensors' data into some form, a set or a graph, then many patterns can be recognized and further processed. When every system component has a sensor, the system can produce patterns in its behaviour. Some sensors read data about the environment, so we can also find patterns of the environment where the system is located. If we combine data from several sensors, we are able to recognise patterns in the whole system's behaviour. It is important to realize that everything we observe is relative to our point of view. When we search for a pattern, we want to choose one that represents the system reliably and defines its important properties. Every pattern we find is always distorted by our point of view.

We can imagine a pattern as an object with the same or similar properties. There are many ways to recognize and sort patterns. When we perform pattern recognition, we assign a pre-defined output value to an input value. For this purpose, we can use a particular pattern recognition algorithm, such as the one introduced in (Ciskowski & Zaton, 2010). In this case we try to assign each input value to one of the output sets of values. The input can be any data regardless of its origin: text, audio, images or any other data. When patterns repeat in the same or altered forms, they can be classified into predefined classes of patterns. Since we are working on computers, the input data and all patterns can be represented in binary form without loss of generality. Such an approach can work with nearly any system we would like to describe. But that is a very broad scope.
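Assigning a binary input pattern to one of several predefined classes can be illustrated with a nearest-prototype rule under Hamming distance. This toy rule and the example prototypes are our own illustration, not the algorithm of Ciskowski & Zaton:

```python
def hamming(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def classify(pattern, prototypes):
    """Assign a binary pattern to the class of its nearest prototype
    (smallest Hamming distance), so a pattern repeating in a slightly
    altered form still maps to the intended class.
    `prototypes` maps class label -> prototype bit string."""
    return min(prototypes, key=lambda c: hamming(pattern, prototypes[c]))
```

For example, an input that differs from the class-A prototype in a single bit is still recognized as class A, which is the "same or altered form" behaviour described above.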

Although the theory of regulation and control (Armstrong & Porter, 2006) is mainly focused on methods of automatic control, it also includes methods for adaptive and fuzzy control. In general, through control or regulation we guide the system's behaviour in the desired direction. For our purposes, it suffices to regulate the system's behaviour based on a predefined target and to compensate any deviation in the desired direction. So we search for key

Methodology for System Adaptation Based on Characteristic Patterns 615

domains that involve such data. The recognition of structural shapes plays a central role in distinguishing particular system behaviour. Sometimes just one structural form (a bump, an abrupt peak or a sinusoidal component), is enough to identify a specific phenomenon. There is not a general rule to describe the structure – or structure combinations – of various phenomena, so specific knowledge about their characteristics has to be taken into account. In other words, signal structural shape may be not enough for a complete description of system properties. Therefore, domain knowledge has to be added to the structural information. However, the goal of our approach is not knowledge extraction but to provide users with an easy tool to perform a first data screening. In this sense, the interest is focused on searching for specific patterns within waveforms (Dormido-Canto et al., 2006). The algorithms used in pattern recognition systems are commonly divided into two tasks, as shown in Fig. 1. The description task transforms data collected from the environment into features (primitives).

The classification task arrives at an identification of patterns based on the features provided by the description task. There is no general solution for extracting structural features from data. The selection of primitives by which the patterns of interest are going to be described depends upon the type of data and the associated application. The features are generally

*data Identification*

*Pattern Recognition Algorithms* 

*Description Classification*

*features*

The input data can be presented to the system in various forms. In principle we can

Figures 2 and 3 show an image and a numerical expression of one particular section of OHLC data. The image expression contains only information from the third to the sixth column of the table (Fig.3). In spite of the fact, the pattern size (number of pixels) equals to 7440. In contrast

to it, a table expression with 15 rows and 7 columns of 16-bit numbers takes only.

Fig. 1. Tasks in the pattern recognition systems

distinguish two basic possibilities:

Fig. 2. Visual representations of pattern

designed making use of the experience and intuition of the designer.

• The numeric representation of monitored parameters • Image data - using the methods of machine vision

patterns in system's behaviour a try to adapt to any changes. However, in order to react quickly and appropriately, it is good to have at least an expectation of what may happen and which reaction would be appropriate, i.e. what to anticipate. Expectations are subjective probabilities that we learn from experience: the more often pattern B appears after pattern A, or the more successful action B is in solving problem A, the stronger the association A → B becomes. The next time we encounter A (or a pattern similar to A), we will be prepared, and more likely to react adequately. The simple ordering of options according to the probability that they would be relevant immensely decreases the complexity of decisionmaking (Heylighen, 1994).

Agents are appropriate for defining, creating, maintaining, and operating the software of distributed systems in a flexible manner, independent of service location and technology. Systems of agents are complex in part because both the structural form and the behaviour patterns of the system change over time, with changing circumstances. By structural form, we mean the set of active agents and inter-agent relationships at a particular time. This form changes over time as a result of inter-agent negotiations that determine how to deal with new circumstances or events. We call such changing structural form morphing, by analogy with morphing in computer animation. By behaviour patterns, we mean the collaborative behaviour of a set of active agents in achieving some overall purpose. In this sense, behaviour patterns are properties of the whole system, above the level of the internal agent detail or of pair wise, inter-agent interactions. Descriptions of whole system behaviour patterns need to be above this level of detail to avoid becoming lost in the detail, because agents are, in general, large grained system components with lots of internal detail, and because agents may engage in detailed sequences of interactions that easily obscure the big picture. In agent systems, behaviour patterns and morphing are inseparable, because they both occur on the same time scale, as part of normal operation. Use case maps (UCMs) (Burth & Hubbard, 1997) are descriptions of large grained behaviour patterns in systems of collaborating large grained components.

### **1.3 System adaptation vs. prediction**

Let's say we have built pattern recognition system and it is working properly to meet our requirements. We are able to recognize certain patterns reliably. What can we do next? Basically, we can predict systems behaviour or we can adapt to any change that emerge.

It is possible to try to predict what will happen, but more or less it is a lottery. We will never be able to predict such systems' behaviour completely. This doesn't mean it is not possible to build a system based on prediction (Gershenson, 2007). But there is another approach that tries to adapt to any change by reflecting current situation. To adapt on any change (expected or unexpected) it should be sufficient to compensate any deviation from desired course. In case that response to a deviation comes quickly enough that way of regulation can be very effective. It does not matter how complicated system is (how many factors and interactions has) in case we have efficient means of control (Armstrong & Porter, 2006). To respond quickly and flexible it is desirable to have some expectation what can happen and what kind of response will be appropriate. We can learn such expectation through experiences.

### **2. Feature extraction process in order to optimize the patterns**

Identification problems involving time-series data (or waveforms) constitute a subset of pattern recognition applications that is of particular interest because of the large number of 614 Robotic Systems – Applications, Control and Programming

patterns in system's behaviour a try to adapt to any changes. However, in order to react quickly and appropriately, it is good to have at least an expectation of what may happen and which reaction would be appropriate, i.e. what to anticipate. Expectations are subjective probabilities that we learn from experience: the more often pattern B appears after pattern A, or the more successful action B is in solving problem A, the stronger the association A → B becomes. The next time we encounter A (or a pattern similar to A), we will be prepared, and more likely to react adequately. The simple ordering of options according to the probability that they would be relevant immensely decreases the complexity of decision-

Agents are appropriate for defining, creating, maintaining, and operating the software of distributed systems in a flexible manner, independent of service location and technology. Systems of agents are complex in part because both the structural form and the behaviour patterns of the system change over time, with changing circumstances. By structural form, we mean the set of active agents and inter-agent relationships at a particular time. This form changes over time as a result of inter-agent negotiations that determine how to deal with new circumstances or events. We call such changing structural form morphing, by analogy with morphing in computer animation. By behaviour patterns, we mean the collaborative behaviour of a set of active agents in achieving some overall purpose. In this sense, behaviour patterns are properties of the whole system, above the level of the internal agent detail or of pairwise, inter-agent interactions. Descriptions of whole-system behaviour patterns need to be above this level of detail to avoid becoming lost in the detail, because agents are, in general, large grained system components with lots of internal detail, and because agents may engage in detailed sequences of interactions that easily obscure the big picture. In agent systems, behaviour patterns and morphing are inseparable, because they both occur on the same time scale, as part of normal operation. Use case maps (UCMs) (Burth & Hubbard, 1997) are descriptions of large grained behaviour patterns in systems of collaborating large grained components.

**1.3 System adaptation vs. prediction**

Let us say we have built a pattern recognition system and it works properly, meeting our requirements: we are able to recognize certain patterns reliably. What can we do next? Basically, we can predict the system's behaviour, or we can adapt to any change that emerges. It is possible to try to predict what will happen, but this is more or less a lottery; we will never be able to predict such a system's behaviour completely. This does not mean it is impossible to build a system based on prediction (Gershenson, 2007). But there is another approach, which tries to adapt to any change by reflecting the current situation. To adapt to any change (expected or unexpected), it should be sufficient to compensate for any deviation from the desired course. If the response to a deviation comes quickly enough, this way of regulation can be very effective. It does not matter how complicated the system is (how many factors and interactions it has) as long as we have efficient means of control (Armstrong & Porter, 2006). To respond quickly and flexibly, it is desirable to have some expectation of what can happen and what kind of response will be appropriate. We can learn such expectations through experience, which forms a basis for decision making (Heylighen, 1994).

**2. Feature extraction process in order to optimize the patterns**

Identification problems involving time-series data (or waveforms) constitute a subset of pattern recognition applications that is of particular interest because of the large number of domains that involve such data. The recognition of structural shapes plays a central role in distinguishing particular system behaviour. Sometimes just one structural form (a bump, an abrupt peak or a sinusoidal component) is enough to identify a specific phenomenon. There is no general rule to describe the structure, or structure combinations, of various phenomena, so specific knowledge about their characteristics has to be taken into account. In other words, the structural shape of a signal may not be enough for a complete description of system properties. Therefore, domain knowledge has to be added to the structural information.

However, the goal of our approach is not knowledge extraction but to provide users with an easy tool to perform a first data screening. In this sense, the interest is focused on searching for specific patterns within waveforms (Dormido-Canto et al., 2006). The algorithms used in pattern recognition systems are commonly divided into two tasks, as shown in Fig. 1. The description task transforms data collected from the environment into features (primitives).

Fig. 1. Tasks in the pattern recognition systems

The classification task arrives at an identification of patterns based on the features provided by the description task. There is no general solution for extracting structural features from data. The selection of primitives by which the patterns of interest are going to be described depends upon the type of data and the associated application. The features are generally designed making use of the experience and intuition of the designer.
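To make the description task concrete, the following is a small illustrative sketch (not from the chapter; the segment length, tolerance and symbol names are invented) that encodes a waveform into slope primitives, which a later classification step could match against:

```python
# Illustrative sketch: the description task encodes a waveform into
# structural primitives by segmenting it and labelling each segment
# by its slope: 'u' (up), 'd' (down), 'f' (flat).
def encode(series, seg_len=5, tol=0.1):
    symbols = []
    for start in range(0, len(series) - seg_len + 1, seg_len):
        seg = series[start:start + seg_len]
        slope = (seg[-1] - seg[0]) / (seg_len - 1)   # average slope of segment
        if slope > tol:
            symbols.append('u')
        elif slope < -tol:
            symbols.append('d')
        else:
            symbols.append('f')
    return ''.join(symbols)

# an abrupt peak shows up as 'u' followed by 'd'
wave = [0, 0, 0, 0, 0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0, 0, 0, 0, 0, 0]
print(encode(wave))  # -> 'fudf'
```

A classifier can then search the symbol string for sub-patterns such as `'ud'` instead of working on raw samples.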

The input data can be presented to the system in various forms. In principle, we can distinguish two basic possibilities:

• an image (graphical) form
• a table (numerical) form


Figures 2 and 3 show an image and a numerical expression of one particular section of OHLC data. The image expression contains only information from the third to the sixth column of the table (Fig. 3). In spite of that, the pattern size (number of pixels) equals 7440. In contrast, a table expression with 15 rows and 7 columns of 16-bit numbers takes only 210 bytes (15 × 7 × 2 bytes).

Fig. 2. Visual representations of pattern

Methodology for System Adaptation Based on Characteristic Patterns 617


Fig. 3. Tabular expression of pattern

The image data better correspond to an intuitive human idea of pattern recognition, which is their main advantage. We also have to remember that even table data must be transferred into binary (image) form before processing.

Image data are always two-dimensional. In general, tabular patterns can have more dimensions. The graphical representation of OHLC data (Lai, 2005) in Fig. 2 is a good example of the projection of multidimensional data to two-dimensional space. Fig. 2 shows a visual representation of a 4-dimensional vector in time, which corresponds to 5 dimensions. In this article, we consider experiments only over two-dimensional data (time series). Extending the principles to multidimensional vectors (random processes) will be the subject of our future projects.

The intuitive concept of a "pattern" corresponds to two-dimensional shapes. This form allows showing the progress of a scalar variable. In the case that a system has more than one parameter, the graphic representation is not trivial anymore.

## **3. Pattern recognition algorithms**

Classification is one of the most frequently encountered decision making tasks of human activity. A classification problem occurs when an object needs to be assigned into a predefined group or class based on a number of observed attributes related to that object. Pattern recognition is concerned with making decisions from complex patterns of information. The goal has always been to tackle those tasks presently undertaken by humans, for instance to recognize faces, to buy or sell stocks, or to decide on the next move in a chess game. We have considered rather simpler tasks: we have defined a set of classes to which we plan to assign patterns, and the task is to classify a future pattern as one of these classes. Such tasks are called classification or supervised pattern recognition. Clearly, someone had to determine the classes in the first phase. Seeking the groupings of patterns is called cluster analysis or unsupervised pattern recognition. Patterns are made up of features, which are measurements used as inputs to the classification system. When the patterns are images, the major part of the design of a pattern recognition system is to select suitable features; choosing the right features can be even more important than what is done with them subsequently.

### **3.1 Artificial neural networks**

Neural networks that allow a so-called supervised learning process (i.e. an approach in which the neural network is familiarized with prototypes of the patterns) tend to be regarded as the best choice for pattern recognition tasks. After adaptation, the network is expected to recognise learned (known) or similar patterns in input vectors. Generally, the more training patterns (prototypes), the better the network's ability to solve the problem. On the other hand, too many training patterns could exceed the memory capacity of the network. We used typical representatives of neural networks, namely:

• Hebb network


• Backpropagation network

Our aim was to test two networks with extreme qualities. In other words, we chose such neural networks, which promised the greatest possible differences among achieved results.

### **3.1.1 Hebb network**

The Hebb network is the simplest and also the "cheapest" neural network, whose adaptation runs in one cycle. Both the adaptive and the inactive mode work with integer numbers. These properties allow very easy modification of the training set, namely in applications that work with very large input vectors (e.g. image data).

Hebbian learning in its simplest form (Fausett, 1994) is given by the weight update rule (3):

$$
\Delta w_{ij} = \eta \, a_i a_j \tag{3}
$$

where Δ*wij* is the change in the strength of the connection from unit *j* to unit *i*, *ai* and *aj* are the activations of units *i* and *j* respectively, and η is a learning rate. When training a network to classify patterns with this rule, it is necessary to have some method of forcing a unit to respond strongly to a particular pattern. Consider a set of data divided into classes *C1, C2, ..., Cm*.

Each data point *x* is represented by the vector of inputs (*x1, x2, …, xn*). A possible network for learning is given in Figure 4. All units are linear. During training the class inputs *c1, c2, …,cm* for a point *x* are set as follows (4):

$$
c_i = \begin{cases} 1 & x \in C_i \\ 0 & x \notin C_i \end{cases} \tag{4}
$$

Each of the class inputs is connected to just one corresponding output unit, i.e. *ci* connects to *oi* only for *i* = 1, 2, …,*m*. There is full interconnection from the data inputs *x1, x2, …, xn* to each of these outputs.

Fig. 4. Hebb network. Weights of connections w11-wij are modified in accordance with the Hebbian learning rule
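A minimal NumPy sketch of rules (3) and (4) follows. This is illustrative only, not the authors' implementation: the layout of *m* output units fully connected to the *n* data inputs follows Fig. 4, while the function names and toy patterns are our own.

```python
import numpy as np

# Hebbian classifier sketch: n data inputs, m output units. During
# training, output unit o_i is forced to the class input c_i (eq. 4),
# and each weight grows by eta * a_i * a_j (eq. 3).
def hebb_train(patterns, classes, n_classes, eta=1.0):
    n = len(patterns[0])
    w = np.zeros((n_classes, n))
    for x, c in zip(patterns, classes):
        t = np.zeros(n_classes)
        t[c] = 1.0                 # class inputs set per eq. (4)
        w += eta * np.outer(t, x)  # delta w_ij = eta * a_i * a_j, eq. (3)
    return w

def hebb_classify(w, x):
    return int(np.argmax(w @ x))   # strongest responding output unit

# toy bipolar patterns (+1/-1, as in the tables later in the chapter)
patterns = [np.array([1, 1, -1, -1]), np.array([-1, -1, 1, 1])]
classes = [0, 1]
w = hebb_train(patterns, classes, 2)
print(hebb_classify(w, patterns[0]))  # -> 0
```

The one-cycle adaptation mentioned above is visible here: each training pair updates the weights exactly once.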



### **3.1.2 Backpropagation network**

The backpropagation network is one of the most complex neural networks for supervised learning. Its learning and recognition abilities are much higher than those of the Hebb network, but its disadvantage is the relatively lengthy process of adaptation, which may in some cases (complex input vectors) significantly prolong the network's adaptation to new training sets. The backpropagation network is a multilayer feedforward neural network. As shown in Fig. 5, a fully connected variant is usually used, so that each neuron from the *n-th* layer is connected to all neurons in the *(n+1)-th* layer; this is not necessary, and in general some connections may be missing (see dashed lines), but there are no connections between neurons of the same layer. A subset of input units has no input connections from other units; their states are fixed by the problem. Another subset of units is designated as output units; their states are considered the result of the computation. Units that are neither input nor output are known as hidden units.

Fig. 5. A general three-layer neural network

The backpropagation algorithm belongs to a group called "gradient descent methods". An intuitive definition is that such an algorithm searches for the global minimum of the weight landscape by descending downhill in the most precipitous direction. The initial position is set by randomly selecting the weights of the network from some range (typically from -1 to 1 or from 0 to 1). Considering the different starting points, it is clear that backpropagation using a fully connected neural network is not a deterministic algorithm. The basic backpropagation algorithm can be summed up in the following equation (the *delta rule*) for the change to the weight *wji* from node *i* to node *j* (5):

$$
\underbrace{\Delta w_{ji}}_{\text{weight change}} = \underbrace{\eta}_{\text{learning rate}} \times \underbrace{\delta_j}_{\text{local gradient}} \times \underbrace{y_i}_{\text{input signal to node } j}
\tag{5}
$$

where the local gradient δ*j* is defined as follows (Seung, 2002):

1. If node *j* is an output node, then δ*j* is the product of φ'(*vj*) and the error signal *ej*, where φ(\_) is the logistic function and *vj* is the total input to node *j* (i.e. Σ*<sup>i</sup> wjiyi*), and *ej* is the error signal for node *j* (i.e. the difference between the desired output and the actual output);

2. If node *j* is a hidden node, then δ*j* is the product of φ'(*vj*) and the weighted sum of the δ's computed for the nodes in the next hidden or output layer that are connected to node *j*.

[The actual formula is δ*j* = φ'(*vj*) Σ*k* δ*k* *wkj*, where *k* ranges over those nodes for which *wkj* is non-zero (i.e. nodes *k* that actually have connections from node *j*). The δ*k* values have already been computed, as they are in the output layer (or a layer closer to the output layer than node *j*).]
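The two local-gradient cases and the delta rule (5) can be sketched in NumPy as follows. This is an illustrative 2-3-1 logistic network; the sizes, data, learning rate, and number of iterations are our assumptions, not values from the chapter.

```python
import numpy as np

# Delta rule (5) with the two local-gradient cases for a 2-3-1 network.
rng = np.random.default_rng(0)

def phi(v):                          # logistic function
    return 1.0 / (1.0 + np.exp(-v))

x = np.array([0.5, -0.2])            # input pattern
d = np.array([0.8])                  # desired output
W1 = rng.uniform(-1, 1, (3, 2))      # input -> hidden weights
W2 = rng.uniform(-1, 1, (1, 3))      # hidden -> output weights
eta = 0.5

for _ in range(300):
    v1 = W1 @ x;  y1 = phi(v1)       # forward pass, hidden layer
    v2 = W2 @ y1; y2 = phi(v2)       # forward pass, output layer

    e = d - y2                       # error signal at the output node
    # rule 1 (output node): delta_j = phi'(v_j) * e_j
    delta2 = phi(v2) * (1 - phi(v2)) * e
    # rule 2 (hidden node): delta_j = phi'(v_j) * sum_k delta_k * w_kj
    delta1 = phi(v1) * (1 - phi(v1)) * (W2.T @ delta2)

    # delta rule (5): weight change = eta * local gradient * input signal
    W2 += eta * np.outer(delta2, y1)
    W1 += eta * np.outer(delta1, x)
```

Note how the hidden-layer gradient reuses the already computed output-layer deltas, which is exactly the propagation order described in the bracketed remark above.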

### **3.2 Analytic programming**


Basic principles of analytic programming (AP) were developed in 2001 (Zelinka, 2002). Until that time, only genetic programming (GP) and grammatical evolution (GE) existed. GP uses genetic algorithms, while AP can be used with any evolutionary algorithm, independently of the individual representation. To avoid any confusion arising from naming methods after the algorithm used, the name Analytic Programming was chosen, since AP represents the synthesis of an analytical solution by means of evolutionary algorithms.

The core of AP is based on a special set of mathematical objects and operations. The set of mathematical objects is a set of functions, operators and so-called terminals (as in GP), which are usually constants or independent variables. This set is usually mixed together and consists of functions with different numbers of arguments. Because of the variability of the content of this set, it is called a "general functional set" (GFS). The structure of the GFS is created by subsets of functions according to the number of their arguments. For example, *GFSall* is the set of all functions, operators and terminals, *GFS3arg* is a subset containing functions with only three arguments, *GFS0arg* represents only terminals, etc. The subset structure of the GFS is vitally important for AP. It is used to avoid the synthesis of pathological programs, i.e. programs containing functions without arguments, etc. The content of the GFS depends only on the user. Various functions and terminals can be mixed together (Zelinka, 2002; Oplatková, 2009).

The second part of the AP core is a sequence of mathematical operations used for program synthesis. These operations transform an individual of a population into a suitable program. Mathematically stated, it is a mapping from an individual domain into a program domain. This mapping consists of two main parts. The first part is called discrete set handling (DSH), see Fig. 6 (Zelinka, 2002), and the second stands for security procedures which do not allow the synthesis of pathological programs. The method of DSH, when used, allows handling arbitrary objects, including nonnumeric objects like linguistic terms {hot, cold, dark…}, logic terms (True, False) or other user-defined functions. In AP, DSH is used to map an individual into the GFS and, together with the security procedures, creates the above-mentioned mapping which transforms an arbitrary individual into a program.

AP needs some evolutionary algorithm (Zelinka, 2004) that consists of population of individuals for its run. Individuals in the population consist of integer parameters, i.e. an individual is an integer index pointing into GFS. The creation of the program can be schematically observed in Fig. 7. The individual contains numbers which are indices into GFS. The detailed description is represented in (Zelinka, 2002; Oplatková, 2009).
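As an illustration of how an integer individual can be mapped through a GFS into a program, here is a hypothetical sketch. The GFS content, the modulo indexing scheme and the terminal-forcing security rule are our assumptions for illustration, not the authors' implementation:

```python
import math

# Hypothetical DSH sketch: an individual is a vector of integer indices
# into a general functional set (GFS). Grouping entries by argument count
# (GFS0arg = terminals) lets a security rule avoid pathological programs:
# when too few genes remain, only terminals may be picked.
GFS = [
    ("+", 2), ("*", 2), ("sin", 1), ("cos", 1),  # functions and operators
    ("x", 0), ("1.0", 0),                        # terminals (0 arguments)
]

def synthesize(individual):
    """Map integer indices to a nested prefix expression."""
    pos = 0
    def build():
        nonlocal pos
        remaining = len(individual) - pos - 1
        name, arity = GFS[individual[pos] % len(GFS)]
        if arity > remaining:  # security procedure: force a terminal
            terminals = [g for g in GFS if g[1] == 0]
            name, arity = terminals[individual[pos] % len(terminals)]
        pos += 1
        return (name, [build() for _ in range(arity)])
    return build()

def evaluate(expr, x):
    name, args = expr
    vals = [evaluate(a, x) for a in args]
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b,
           "sin": math.sin, "cos": math.cos}
    if name == "x": return x
    if name == "1.0": return 1.0
    return ops[name](*vals)

expr = synthesize([0, 2, 4, 5])  # e.g. synthesizes sin(x) + 1.0
```

In AP proper, the fitness of the synthesized program would then drive the evolutionary algorithm that produces the integer individuals.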

AP exists in three versions: basic, without constant estimation; APnf, with estimation by means of a nonlinear fitting package in the *Mathematica* environment; and APmeta, with constant estimation by means of another evolutionary algorithm (meta means metaevolution).



This approach allows a search for structural shapes (patterns) inside time series. Patterns are composed of simpler sub-patterns. The most elementary ones are known as primitives. Feature extraction is carried out by dividing the initial waveform into segments, which are encoded. The search for patterns is a process which is performed manually by the user.

Fig. 6. Discrete set handling

Fig. 7. Main principles of AP

## **4. Experimental results**

### **4.1 Used datasets**

In order to test the efficiency of pattern recognition, we applied a database downloaded from (Google finance, 2010). We used a time series which shows the development of the market value of the U.S. company Google and represents the minute time series from 29 October 2010, see Fig. 8.

The algorithms used need training sets for their adaptation. In all experimental works, the training set consists of 100 samples (i.e. training pairs of input and corresponding output vectors); it is made from the time series and contains three peaks, which are indicated by vertical lines and shown in Figure 8. Samples obtained in this way are always adjusted to the needs of the specific algorithm. The data tested in our experimental works contain only one peak, which is indicated by vertical lines and shown in Fig. 9.

Fig. 8. The training set with three marked peaks

Fig. 9. The test set with one marked peak, which is searched

### **4.2 Pattern recognition via artificial neural networks**

The aim of this experiment was to adapt neural network so that it could find one kind of pattern (peak) in the test data. We have used two sets of values, which are graphically depicted in Figure 10 (training patterns) and Figure 11 (test patterns) in our experiments. Training set always contained all define peaks, which were completed by four randomly

Methodology for System Adaptation Based on Characteristic Patterns 623

Fig. 11. Graphic representation of test patterns (**S** vectors) that have been made by selection

**No. S T**

++++++++++|++++++++++|++++++++++|++++++++++|++++++++++ -+





from the test data set. The first pattern represents the peak. Next four patterns are

0. ---+------|--+++-----|--+++---+-|-++++++++-|-+++++++++|

1. ----------|----------|----------|----------|----------|

2. ----------|----------|----------|----------|----------|

3. ----------|----------|----------|----------|----------|

4. ----------|----------|----------|----------|----------|

Table 2. Vectors **T** and **S** from the test pattern set. Values of '-1' are written using the character '-' and values of '+1' are written using the character '+' because of better clarity

"uninteresting" data segments too. Other experiments gave similar results too.

**Backpropagation network configuration:** 

Number of input neurons: 100

Number of output neurons: 2

Number of hidden layers: 1

Two types of classifiers: Backpropagation and classifier based on Hebb learning were used in our experimental part. Both used networks classified input patterns into two classes. Backpropagation network was adapted according the training set (Fig.10, Tab. 1) in 7 cycles. After its adaptation, the network was able to also correctly classify all five patterns from the test set (Fig. 11, Tab. 2), e.g. the network was able to correctly identify the peak and

representatives of non-peak "not-interested" segments of values

selected parts out of peaks. These randomly selected parts were used to network can learn to recognize what is or what is not a search pattern (peak). All patterns were normalized to the square of a bitmap of the edge of size *a* = 10. The effort is always to choose the size of training set as small as possible, because especially backpropagation networks increases their computational complexity with the size of a training set.

Fig. 10. Graphic representation of learning patterns (**S** vectors) that have been made by selection from training data set. The first three patterns represent peaks. Next four patterns are representatives of non-peak "not-interested" segments of values


Table 1. Vectors **T** and **S** from the learning pattern set. Values of '-1' are written using the character '-' and values of '+1' are written using the character '+' because of better clarity

622 Robotic Systems – Applications, Control and Programming

selected parts out of peaks. These randomly selected parts were used to network can learn to recognize what is or what is not a search pattern (peak). All patterns were normalized to the square of a bitmap of the edge of size *a* = 10. The effort is always to choose the size of training set as small as possible, because especially backpropagation networks increases

Fig. 10. Graphic representation of learning patterns (**S** vectors) that have been made by selection from training data set. The first three patterns represent peaks. Next four patterns

**No. S T**








0. --------+-|-------++-|-------+++|------++++|------++++|

1. ----------|----------|--------+-|-------++-|-------+++|

2. ----------|----------|-------++-|-----++++-|----++++++|

3. ----------|----------|----------|----------|----------|

4. ----------|----------|----------|----------|----------|

5. ----------|----------|----------|----------|----------|

6. ----------|----------|----------|----------|----------|

Table 1. Vectors **T** and **S** from the learning pattern set. Values of '-1' are written using the character '-' and values of '+1' are written using the character '+' because of better clarity

their computational complexity with the size of a training set.

are representatives of non-peak "not-interested" segments of values

Fig. 11. Graphic representation of test patterns (**S** vectors) that have been made by selection from the test data set. The first pattern represents the peak. Next four patterns are representatives of non-peak "not-interested" segments of values


Table 2. Vectors **T** and **S** from the test pattern set. Values of '-1' are written using the character '-' and values of '+1' using the character '+' for better clarity

Two types of classifiers were used in our experiments: a backpropagation network and a classifier based on Hebb learning. Both networks classified input patterns into two classes. The backpropagation network was adapted to the training set (Fig. 10, Tab. 1) in 7 cycles. After adaptation, the network also correctly classified all five patterns from the test set (Fig. 11, Tab. 2), i.e. it correctly identified both the peak and the "uninteresting" data segments. Other experiments gave similar results.

**Backpropagation network configuration:**

Number of hidden neurons: 3
α – learning parameter: 0.4
Weight initialization algorithm: Nguyen-Widrow
Weight initialization range: (-0.5; +0.5)
Type of I/O values: bipolar

The Hebb network in its basic configuration was not able to adapt to the given training set (Fig. 10, Tab. 1); therefore we used a modified version of the network that removes useless components from the input vectors (Kocian, Volná, Janošek & Kotyrba, 2011). The modified Hebb network was then able to adapt to all training patterns (Fig. 12) and, in addition, correctly classified all the patterns from the test set (Fig. 11, Tab. 2), i.e. it correctly identified both the peak and the "uninteresting" data segments. Other experiments gave similar results.

**Hebbian-learning-based classifier configuration:**

Number of input neurons: 100
Number of output neurons: 2
Type of I/O values: bipolar

Fig. 12. Learning patterns from Fig. 10 with uncovered redundant components (gray colour). The redundant components prevented the Hebbian-learning-based classifier in its default variant from learning the patterns properly, so the modified variant had to be used

## **4.2 Pattern recognition via analytic programming**

Differential evolution (DE) was used as the evolutionary algorithm in our experimental work. DE is a population-based optimization method that works on real-number-coded individuals (Price, 1999). For each individual $x_{i,G}$ in the current generation $G$, DE generates a new trial individual $x'_{i,G}$ by adding the weighted difference between two randomly selected individuals $x_{r1,G}$ and $x_{r2,G}$ to a third randomly selected individual $x_{r3,G}$. The resulting individual $x'_{i,G}$ is crossed over with the original individual $x_{i,G}$. The fitness of the resulting individual, referred to as the perturbed vector $u_{i,G+1}$, is then compared with the fitness of $x_{i,G}$. If the fitness of $u_{i,G+1}$ is greater than the fitness of $x_{i,G}$, then $x_{i,G}$ is replaced with $u_{i,G+1}$; otherwise, $x_{i,G}$ remains in the population as $x_{i,G+1}$. DE is quite robust, fast, and

effective, with global optimization ability. It does not require the objective function to be differentiable, and it works well even with noisy and time-dependent objective functions. The technique for solving this problem by means of analytic programming was inspired by neural networks: the method in this case study used input values and future output values, similarly to a training set for a neural network, and the whole structure that transfers input to output was synthesized by analytic programming. The final solution of analytic programming is based on an evolutionary process which selects only the required components from the basic sets of operators (Fig. 6 and Fig. 7). Fig. 13 shows the analytic programming experimental result for exact modelling during the training phase.
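The idea of pairing input values with future output values can be sketched with a trivial helper that turns a time series into (lagged inputs → next value) training pairs. The helper itself (`make_training_pairs`) is not from the chapter; the lag depth of 4 matches the $x_{n-1}$ to $x_{n-4}$ operands used in the GFS.

```python
# Sketch (not the authors' code): build input -> output pairs from a time
# series so that an AP/GP-style regressor can synthesize x_n from lagged
# values x_{n-1}..x_{n-4}, analogously to a neural-network training set.
def make_training_pairs(series, lags=4):
    return [(tuple(series[i - lags:i]), series[i])
            for i in range(lags, len(series))]

pairs = make_training_pairs([1, 2, 3, 5, 8, 13, 21])
print(pairs[0])  # ((1, 2, 3, 5), 8)
```

Each pair supplies one fitness evaluation point: the synthesized formula is scored on how closely it maps the lagged inputs to the recorded next value.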

Fig. 13. Analytic programming experimental result for exact modelling during the training phase. Red colour represents original data from the training set (Fig. 8), while green colour represents the data modelled using formula (6)

The resulting formula, which calculates the output value $x_n$, was developed using AP (6):

$$x_n = 85.999 \cdot e^{-0.010009\,\left(17.1502 - x_{n-3}\right)^{2}} \tag{6}$$

Analytic programming experimental results are shown in Fig. 14. Equation (6) also represents the behaviour of the training set, so the given pattern was also successfully identified in the test set (Fig. 9). Other experiments gave similar results.

The operators used in the GFS were (see Fig. 7): +, -, /, \*, Sin, Cos, K, $x_{n-1}$ to $x_{n-4}$, exp, power. Differential evolution was used as the main algorithm for AP and also for constants estimation in the meta-evolutionary process. The final solution of analytic programming is based on an evolutionary process which selects only the required components from the basic sets of operators; not all components have to be selected, as can be seen in the solution presented in (6).
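The DE update described in Section 4.2 can be sketched as a minimal DE/rand/1/bin loop after Price (1999): mutant $= x_{r3} + F(x_{r1} - x_{r2})$, binomial crossover with $x_i$, and greedy replacement. The text phrases selection in terms of maximizing fitness; the sketch below casts it as minimizing a cost. The parameter values and the quadratic test function are assumptions for this demo.

```python
import numpy as np

# Minimal DE/rand/1/bin sketch (after Price, 1999), matching the update in
# the text: mutant = x_r3 + F*(x_r1 - x_r2), binomial crossover with x_i,
# greedy selection. Cast here as cost minimization; parameters are assumed.
rng = np.random.default_rng(1)

def de_minimize(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=100):
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], size=3, replace=False)
            mutant = pop[r3] + F * (pop[r1] - pop[r2])   # weighted difference
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True              # keep >= 1 mutant gene
            u = np.where(cross, mutant, pop[i])          # perturbed vector
            fu = f(u)
            if fu < fit[i]:                              # greedy replacement
                pop[i], fit[i] = u, fu
    best = int(fit.argmin())
    return pop[best], fit[best]

x_best, f_best = de_minimize(lambda v: ((v - 3.0) ** 2).sum(), [(-10, 10)] * 2)
print(x_best, f_best)   # x_best should end up near [3, 3]
```

In the meta-evolutionary setting described above, the same loop would evolve the constants of a candidate AP formula, with `f` measuring the formula's error over the training pairs.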

Fig. 14. Analytic programming experimental result. Red colour represents original data from the test set (Fig. 9), while green colour represents the data modelled using formula (6)

## **5. Conclusion**

In this chapter, a short introduction into the field of pattern recognition using system adaptation, represented via time series, has been given. Two approaches from the framework of softcomputing methods were used: the first was based on analytic programming and the second on artificial neural networks. Both types of neural networks used (Hebb and backpropagation networks), as well as analytic programming, demonstrated the ability to learn and recognize given patterns in time series representing our system behaviour. Our experimental results suggest that simple classifiers can be acceptable for the given class of tasks (we tested the simplest type of Hebb learning). The advantage of simple neural networks is very easy implementation and quick adaptation. Easy implementation allows them to be realized on low-performance computers (PLCs), and their fast adaptation facilitates the process of testing and finding the appropriate type of network for the given application.

The method of analytic programming described here is universal (from the point of view of the evolutionary algorithm used), relatively simple, easy to implement and easy to use. Analytic programming can be regarded as an equivalent of genetic programming in program synthesis and as a new universal method which can be driven by an arbitrary evolutionary algorithm. AP is also independent of computer platform (PC, Apple, …) and operating system (Windows, Linux, Mac OS, …), because analytic programming can be realized, for example, in the Mathematica® environment or in other computer languages. It allows manipulation with symbolic terms, and final programs are synthesised by AP through mapping; the main benefit of analytic programming is therefore that symbolic regression can be done by an arbitrary evolutionary algorithm, as was proved by a comparative study.

According to the results of the experimental studies, it can be stated that pattern recognition in our system behaviour was successful using all presented methods. It is not possible to say with certainty which of them reaches better results, neural networks or analytic programming. Both approaches have an important role in pattern recognition tasks.

In the future, we would like to apply pattern recognition tasks with the followed system adaptation methods in the SIMATIC environment. SIMATIC (SIMATIC, 2010) is an appropriate application environment for industrial control and automation. The SIMATIC platform can be applied at the operational, management and the lowest, physical level. At the operational level, it particularly works as a control of the running processes and monitoring of the production. On the management and physical levels it can be used to receive production instructions from the MES system (Manufacturing Execution System - the corporate ERP system set between customers' orders and manufacturing systems, lines and robots). At the physical level it is mainly used as the link among various sensors and actuators which are physically involved in the production process (Janošek, 2010). The core consists of SIMATIC programmable logic controllers with sensors and actuators. This system collects information about its surroundings through sensors. Data from the sensors can be provided (e.g. via Ethernet) to the proposed and created software tools for pattern recognition, which run in real time on a powerful computer.

## **6. Acknowledgment**

The research described here has been financially supported by University of Ostrava grant SGS23/PRF/2011. It was also supported by grant No. MSM 7088352101 of the Ministry of Education of the Czech Republic, by grant GACR 102/09/1680 of the Grant Agency of the Czech Republic, and by the European Regional Development Fund under Project CEBIA-Tech No. CZ.1.05/2.1.00/03.0089. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.

## **7. References**

Armstrong, M. and Porter, R. (eds.) (2006) *Handbook of Industrial Organization, vol. III*. New York and Amsterdam: North-Holland.

Bishop, C. (2006) *Pattern Recognition and Machine Learning*. Springer.

Buhr, R.J.A. and Hubbard, A. (1997) Use Case Maps for Engineering Real Time and Distributed Computer Systems: A Case Study of an ACE-Framework Application. In *Hawaii International Conference on System Sciences*, Jan 7-10, 1997, Wailea, Hawaii. Available from http://www.sce.carletonca/ftp/pub/UseCaseMaps/hicss-final-public.ps

Ciskowski, P. and Zaton, M. (2010) Neural Pattern Recognition with Self-organizing Maps for Efficient Processing of Forex Market Data Streams. In *Artificial Intelligence and Soft Computing*, Volume 6113/2010, pp. 307-314. DOI: 10.1007/978-3-642-13208-7_39

Dormido-Canto, S., Farias, G., Vega, J., Dormido, R., Sánchez, J., Duro, N. *et al.* (2006) *Rev. Sci. Instrum.* 77 (10), p. F514.

Fausett, L.V. (1994) *Fundamentals of neural networks: architectures, algorithms and applications*, first edition. Prentice Hall. ISBN 978-953-7619-24-4


Gershenson, C. (2002) Complex philosophy. In: *Proceedings of the 1st Biennial Seminar on Philosophical, Methodological & Epistemological Implications of Complexity Theory*, La Habana, Cuba. Available from http://uk.arXiv.org/abs/nlin.AO/0108001 (accessed 14.02.2011)

Gershenson, C. (2007) *Design and Control of Self-organizing Systems*. Mexico: CopIt ArXives. ISBN: 978-0-9831172-3-0

Google finance [online], http://www.google.com/finance?q=NASDAQ:GOOG, 10.8.2010

Heylighen, F. (1994) Fitness as default: the evolutionary basis for cognitive complexity reduction. In R. Trappl (ed.) *Proceedings of Cybernetics and Systems '94*. Singapore: World Science, pp. 1595-1602.

Janošek, M. (2010) Systémy Simatic a jejich využití ve výzkumu [Simatic systems and their use in research]. In: *Studentská vědecká konference 2010*. Ostrava: Ostravská univerzita, pp. 177-180. ISBN 978-80-7368-719-9

Kocian, V., Volná, E., Janošek, M. and Kotyrba, M. (2011) Optimization of training sets for Hebbian-learning-based classifiers. In R. Matoušek (ed.): *Proceedings of the 17th International Conference on Soft Computing, Mendel 2011*, Brno, Czech Republic, pp. 185-190. ISBN 978-80-214-4302-0, ISSN 1803-3814

Lai, K.K., Yu, L. and Wang, S. (2005) A Neural Network and Web-Based Decision Support System for Forex Forecasting and Trading. In *Data Mining and Knowledge Management*, Volume 3327/2005, pp. 243-253. DOI: 10.1007/978-3-540-30537-8_27

Oplatkova, Z. (2009) *Metaevolution - Synthesis of Optimization Algorithms by means of Symbolic Regression and Evolutionary Algorithms*. Lambert-Publishing. ISBN 978-8383-1808-0

Price, K. (1999) An Introduction to Differential Evolution. In: D. Corne, M. Dorigo and F. Glover (eds.) *New Ideas in Optimization*, pp. 79-108. London: McGraw-Hill.

Seung, S. (2002) Multilayer perceptrons and backpropagation learning. 9.641 Lecture 4, pp. 1-6. Available from http://hebb.mit.edu/courses/9.641/2002/lectures/lecture04.pdf

SIMATIC (2010) [online]. SIMATIC Controller. Available from http://www.automation.siemens.com/salesmaterialas/brochure/en/brochure_simatic-controller_en.pdf

Zelinka, I. (2002) Analytic programming by Means of Soma Algorithm. In: *Proc. Mendel'02*, Brno, Czech Republic, pp. 93-101. ISBN 80-214-2135-5

Zelinka, I. (2004) SOMA - Self Organizing Migrating Algorithm. In: B.V. Babu, G. Onwubolu (eds.) *New Optimization Techniques in Engineering*. Springer-Verlag. ISBN 3-540-20167-X

## *Edited by Ashish Dutta*

This book brings together some of the latest research in robot applications, control, modeling, sensors and algorithms. Consisting of three main sections, the first section of the book has a focus on robotic surgery, rehabilitation, self-assembly, while the second section offers an insight into the area of control with discussions on exoskeleton control and robot learning among others. The third section is on vision and ultrasonic sensors which is followed by a series of chapters which include a focus on the programming of intelligent service robots and systems adaptations.
