#### **3. Autonomy and related concepts**

The work described in this chapter addresses the use of ontologies to augment the autonomy of robots [18]. As ontologies foster formality in conceptualizations, it seems natural to try to provide a definition of the adjective *autonomous*.

The term "autonomous" is a buzzword these days and has received different meanings in different contexts<sup>3</sup> . In the analysis of the use of the term "autonomous" in automatic control and robotics, there are two major generalized uses of the term "autonomous":


In our position as autonomous *systems* engineers, it is the second interpretation that we focus on. In systems engineering, the task to be performed by the system is always something of value to the final user. A *useful* mobile robot shall not just wander around but perform some task of value during this wandering (find an object, move an object, detect intruders, *etc.*). When we say that a robot is autonomous we mean that it is capable of performing its assigned activities—*e.g.* generating a map—without the need for external intervention [19]. This also applies to the capability of movement—including the whole robot navigation infrastructure—and to all the other functions that the robot may perform subsidiarily to the main task [20].

<sup>3</sup> It has indeed a long tradition of use in the domains of healthcare and political science.


#### **3.1 Autonomy and disruption**

A second aspect concerning task execution that is of maximal importance is the distinction between (i) being able to perform certain tasks alone (*e.g.* moving to a pose or building a map); (ii) doing so while handling some degree of disturbance; and (iii) being able to perform these tasks *alone* in the presence of *severe disturbances*<sup>4</sup>. In the first case, a simple automaton can do the job. In the second case, a feedback control system can do the job. In the third case, a perception-thought-action loop is necessary to provide feedback, adaptation, and anticipation. Some people use the term "automatic" for the first two cases, keeping "autonomous" for the third. In the automatic control domain, some authors may use "open-loop" and "closed-loop" to make this distinction, but for us, the second case also includes closed-loop controllers for operational set-points.
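To make the three regimes concrete, here is a minimal, purely illustrative Python sketch (all names and dynamics are invented for the illustration) of what each capability level amounts to:

```python
from dataclasses import dataclass

def automaton_step(waypoints: list, i: int):
    # (i) Simple automaton: replays a fixed plan; no feedback at all.
    return waypoints[i]

def feedback_step(setpoint: float, measurement: float, gain: float = 0.5) -> float:
    # (ii) Feedback control: rejects moderate disturbances by acting
    # on the error between the set-point and the measurement.
    return gain * (setpoint - measurement)

@dataclass
class WorldModel:
    # (iii) Perception-thought-action: maintain a model, re-plan, anticipate.
    estimate: float = 0.0

    def perceive(self, observation: float) -> None:
        # Fuse the new observation into the internal state estimate.
        self.estimate = 0.8 * self.estimate + 0.2 * observation

    def think_and_act(self, goal: float) -> float:
        # Decide the next action from the model, not just from the raw error.
        return goal - self.estimate
```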

A more thorough distinction can be made concerning the nature of the disturbances, especially when severe. In the case of anticipated, well-known severe disturbances, the system can be built in accordance with them so as to respond adequately and predictably. If the disturbances are not predictable—or we do not want to bother anticipating them—the system can be built to respond reactively to them. In the design of the system, however, we shall define a set of bounds on the system's operational environment so that the system can be designed to behave robustly in this region.

In the work described in this chapter, we address situations where the system finds itself outside the boundaries set for its operation at design time—its normal operational profile. In these circumstances, the only possibility for keeping the mission going is for the robot to adapt to the new situation: it shall change its very design/realization so that it can still achieve the mission objectives.

#### **3.2 Autonomy and trustworthiness**

Trustworthiness is a necessary but not sufficient condition to carry out tasks in open environments [23]. In real operation, autonomous systems are deployed in complex environments plagued with uncertainty. This affects the system capability to complete the mission assigned to it by the user. For a user to confidently rely on an autonomous system, the system shall be trustworthy.

Trust and trustworthiness may seem similar but they must be distinguished, especially in an autonomous system, where behavior assurance is considerably more complex. Trust is a human-system relational property, *i.e.* something that the human user perceives or *feels about* the robot. On the contrary, trustworthiness is a property of the system itself, *i.e.* that the system is robust and resilient in relation to its mission and hence deserves the trust of the human user [24]. This implies that a human user may not trust a trustworthy system [25], because user perception is affected by limited knowledge and observation capability, and biased by previous experiences.

<sup>4</sup> A severe disturbance is a disturbance that violates the system design assumptions for normal operational conditions. An example of severe external disturbance is a slippery floor for an unmanned ground vehicle (UGV) when designed to operate on a non-slippery floor. An example of severe internal disturbance is the failure of a laser range sensor used in robot navigation. See [21] for a discussion of types of system change under the Klir general systems framework [22].



The problem we are addressing here is achieving trustworthiness, specifically dependability and mission assurance. The framework discussed here provides engineering tools in terms of system and mission concepts and relationships to define system design alternatives to deal with abnormal scenarios and unpredictable environments.

The underlying idea is to break the design/operation barrier. Using ontologies, we can make the engineering design knowledge available at run-time to allow system self-reconfiguration based on self-knowledge. With this approach, the scope of the ontologies spans from system conceptualization to system deployment. The use of ontologies at run-time provides an information-driven adaptation capability that enhances system autonomy [18].

#### **4. Ontologies in the life cycle of autonomous systems**

In systems engineering, the life cycle of an artifact usually includes eight stages: (i) identify the needs, (ii) define the system concept, (iii) specify system requirements, (iv) design the system, (v) implement the system, (vi) verify the system, (vii) deploy the system and (viii) operate it<sup>5</sup>. In the first six stages, the work is typically iterative until the deployment phase, when requirements and design decisions are frozen and remain implicit in the final artifact.

In fault-tolerant systems, a set of methods and algorithms intervene at run-time to keep the functional activity of the system, *i.e.* to maintain the operation as it was designed. The fault-tolerance mechanism is predefined, blind, and triggered by certain events. There is no system knowledge to reason over, only its reification in rigid adaptation mechanisms. The idea we pursue in this work is to use ontologies to include the engineering knowledge as part of the run-time system, endowing the system with a *flexible reconfiguration capability based on system knowledge*. With this approach, the design phase and the deployment phase maintain an explicit link through the system knowledge, because the system ontology provides a metamodel that spans the whole system life cycle. This link can be exploited to combine other subsystems and create new designs at run-time that are more suitable for addressing certain contingencies.
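The contrast can be sketched in a few lines of Python. This is a hedged illustration; the names, the reaction table, and the selection criterion are invented for the sketch, not taken from the chapter's implementation:

```python
from dataclasses import dataclass

# (a) Classic fault tolerance: predefined, blind, event-triggered.
REACTIONS = {"laser_failure": "switch_to_backup", "low_battery": "return_to_dock"}

def rigid_handler(event: str) -> str:
    # No reasoning: the event either has a canned reaction or none at all.
    return REACTIONS.get(event, "emergency_stop")

# (b) Knowledge-based reconfiguration: design knowledge queried at run-time.
@dataclass
class Design:
    name: str
    needs: set       # components the design requires
    quality: float   # engineer's estimated quality attribute

def reconfigure(designs: list[Design], healthy: set) -> str | None:
    # Pick the best design alternative whose required components are all
    # healthy, instead of firing a canned reaction to a known event.
    usable = [d for d in designs if d.needs <= healthy]
    return max(usable, key=lambda d: d.quality).name if usable else None

designs = [Design("fast_nav", {"laser"}, 0.9), Design("safe_nav", {"sonar"}, 0.6)]
print(reconfigure(designs, healthy={"sonar"}))  # -> safe_nav after a laser fault
```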

Ideally, the system knowledge base should include all the system concepts developed in the early phases of the system life cycle, for example the *user needs*, as the artifact is produced to satisfy the needs defined in the first stage. With this information, the system would be able to ensure the mission and reason about it at any stage.

In adaptive systems, with component or functional redundancy, the early stages of the life cycle are not addressed. The reconfiguration in this case aims to comply with the initial system design or a few designs for possible known contingencies.

However, by providing the system with the capability to trace back to the needs that justify its existence, as well as to the requirements that justify its design, the system can augment its autonomy in search of trustworthiness. If a requirement is imposed by a component that is not functioning and is going to be substituted by another element, that requirement is no longer applicable to the system. Therefore, besides changing the component in use, other adjustments can be made in the system for better performance.

<sup>5</sup> A final decommissioning stage is also of importance, esp. in terms of sustainability, for real-world systems. We do not address this stage here.

An example of this case is the use of different navigation sensors in a mobile robot. Suppose we have an autonomous robot with laser and ultrasound sensors to navigate. An initial objective may be to reach a point as fast as possible. According to the final design of the robot, that requirement would be specified as a target velocity value.

The laser is a device with a high refresh rate, so the robot can navigate safely at higher velocities. If the robot enters a room with glass walls, however, the laser is not reliable. If the robot detects through reasoning that the environmental conditions are not suitable for the laser and triggers a reconfiguration to use the ultrasound sensor, the robot can keep operating to fulfill the mission. However, as the design has changed, the requirements that can be fulfilled are not the same. In this case, as the ultrasound sensor has a shorter range, the maximum velocity of the robot shall be significantly lower to keep a safe operational profile. Once the robot has traversed the glass room, the laser can be re-activated, so the requirements must change again to achieve the maximum performance available.

This is a naive example of how a systems engineering knowledge base can improve a navigation task. However, real-world missions are composed of complex orchestrated tasks, for instance the operation of a waiter-robot that must serve a drink, or a miner-robot that must obtain a certain mineral. In such cases, that knowledge can be further exploited with deep reasoning to perform adaptation across different tasks and several stages of the system life cycle.
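The reconfiguration logic of this naive example can be sketched in a few lines; the sensor capabilities and velocity caps below are invented, as the chapter gives no concrete figures:

```python
from dataclasses import dataclass

@dataclass
class NavDesign:
    sensor: str
    max_safe_velocity: float  # m/s, bounded by the sensor's range and refresh rate

LASER_NAV = NavDesign("laser", max_safe_velocity=1.0)      # illustrative values
SONAR_NAV = NavDesign("ultrasound", max_safe_velocity=0.3)

def select_design(laser_reliable: bool) -> NavDesign:
    # e.g. glass walls make the laser unreliable -> fall back to ultrasound
    return LASER_NAV if laser_reliable else SONAR_NAV

def velocity_requirement(design: NavDesign, requested: float) -> float:
    # The requirement follows the design: cap the target velocity so the
    # operational profile stays safe for the sensor actually in use.
    return min(requested, design.max_safe_velocity)

print(velocity_requirement(select_design(laser_reliable=False), requested=1.0))  # 0.3
```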

These concepts are captured in the TOMASys metamodel, which structures the system knowledge as follows:

• The static knowledge is captured with Functions and with Function Designs, which represent the design alternatives the system engineer has conceived as possible ways to fulfill a certain Function.

• The instantaneous state is captured with Objectives, which define a set of operational requirements pursued at run-time when executing a Function; Function Groundings, which are used at run-time to specify which Function Design is in use; and Components, used to describe the structural modules at that instant. Lastly, Quality Attributes affect both static and run-time knowledge. They are used to make explicit the operational requirements of the mission.

• Each Objective has an associated Quality Attribute to meet operational requirements such as safety, performance, and energy consumption. Likewise, each Function Design has a Quality Attribute value estimation to select the best design alternative to meet the mission requirements. Additionally, the Function Grounding measures the real Quality Attribute value to monitor if those requirements are being fulfilled.
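Rendered as plain data structures, these elements could look like the sketch below; the attribute names are illustrative, since the authoritative definitions live in the OWL ontology:

```python
from dataclasses import dataclass, field

@dataclass
class Component:            # structural module, e.g. laser, battery
    name: str
    ok: bool = True

@dataclass
class FunctionDesign:       # a design alternative for a Function
    name: str
    function: str                                    # the Function it solves
    requires: list = field(default_factory=list)     # Components it requires
    qa_estimate: dict = field(default_factory=dict)  # e.g. {"safety": 0.9}

    def available(self) -> bool:
        # A design is usable only if all its required components are healthy.
        return all(c.ok for c in self.requires)

@dataclass
class Objective:            # operational requirements pursued at run-time
    function: str
    nfrs: dict                                       # required QA levels

@dataclass
class FunctionGrounding:    # which FunctionDesign is in use right now
    design: FunctionDesign
    qa_measured: dict = field(default_factory=dict)  # monitored real values
```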


As previously mentioned, the knowledge base is completed with two sets of individuals. The navigation-domain file contains instances of widely-used navigation sensors such as ultrasound, laser, and RGB-D cameras, as well as other important elements in autonomous robots such as the battery. These elements are instances of the TOMASys class Component. Besides, popular Quality Attributes such as energy, safety, and performance are defined.
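For illustration, such individuals could be declared with the owlready2 library as sketched below; the IRI and the exact class names are assumptions, as the actual individuals are defined in the project's OWL files:

```python
from owlready2 import Thing, get_ontology

onto = get_ontology("http://example.org/navigation_domain.owl")  # placeholder IRI

with onto:
    class Component(Thing):             # TOMASys class (name assumed)
        pass
    class QualityAttributeType(Thing):  # name assumed
        pass

    # Navigation-domain individuals: widely-used sensors plus the battery.
    laser = Component("laser")
    ultrasound = Component("ultrasound")
    rgbd_camera = Component("rgbd_camera")
    battery = Component("battery")

    # Popular quality attributes.
    energy = QualityAttributeType("energy")
    safety = QualityAttributeType("safety")
    performance = QualityAttributeType("performance")
```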

The application-specific knowledge base is made of all the Function Designs, that is, the design alternatives to perform navigation. Other elements are the instance of an Objective, the instance of a Function Grounding (the Function Design in use), and the Quality Attributes relative to them. Each Function Design has a Quality Attribute estimation for safety and energy, while the actual value is calculated for the Function Grounding. This calculated Quality Attribute value is compared with the non-functional requirements (NFR) defined for the Objective. The NFRs are the Quality Attributes required for the specific mission.
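The comparison itself is simple. A sketch, assuming that higher Quality Attribute values are better and using invented threshold numbers:

```python
def objective_in_error(nfrs: dict, qa_measured: dict) -> bool:
    # An Objective is violated when any measured Quality Attribute falls
    # below the level demanded by its non-functional requirement.
    return any(qa_measured.get(qa, 0.0) < required for qa, required in nfrs.items())

nfrs = {"safety": 0.8, "energy": 0.5}                            # mission NFRs (invented)
print(objective_in_error(nfrs, {"safety": 0.6, "energy": 0.7}))  # True: safety unmet
```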

#### **5.1 Run-time reconfiguration for fault-tolerance**

To use the knowledge base at run-time, it is written in a machine-readable format using the Web Ontology Language (OWL). A description logic (DL) reasoner uses it during system operation to evaluate the robot's functioning. Once an Objective is defined and linked to the Function that solves it, a Function Grounding is selected according to the mission requirements and the Component availability. In the MROS proof-of-concept, two possible classes of contingencies are addressed: component faults and non-fulfillment of mission requirements.
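As a sketch of this run-time loop with owlready2 (the file name, the Objective class, and the o_status property are assumptions, not necessarily the names used in MROS):

```python
from owlready2 import get_ontology, sync_reasoner

# Load the OWL knowledge base produced at design time (path assumed).
onto = get_ontology("file://./mros_kb.owl").load()

with onto:
    # Run the HermiT DL reasoner; entailed class memberships and property
    # values (including those derived from SWRL rules) are asserted back.
    sync_reasoner(infer_property_values=True)

# Query the evaluated state of the robot's objectives after reasoning.
for obj in onto.search(type=onto.Objective):         # 'Objective' class assumed
    print(obj.name, getattr(obj, "o_status", None))  # 'o_status' assumed
```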

Each Component has a *required by* relationship with the Function Designs that make use of it. If a Component is malfunctioning, the Function Designs that use it become unavailable. **Figure 1** depicts the main relationships contained in the knowledge base. The two Components considered, laser and battery, are required by all Function Designs except one. In case of laser failure, the Function Design *degraded mode* should be selected. Likewise, in the case of a low battery, the Function Design *energy saving mode* should be selected. This is implicitly shown in the figure, as there are no links between those Function Design individuals and the corresponding Component individual.

The ontology also includes some rules using the Semantic Web Rule Language (SWRL) to perform functional diagnosis. This is done by asserting the information
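A hypothetical rule of this kind, written through owlready2's SWRL support; the property names (c_status, requiresComponent, fd_realisability) are stand-ins for the actual TOMASys vocabulary:

```python
from owlready2 import Imp, get_ontology

onto = get_ontology("file://./mros_kb.owl").load()   # path assumed

with onto:
    rule = Imp()
    # Functional diagnosis: a Function Design requiring a failed Component
    # is inferred to be non-realisable (names are illustrative).
    rule.set_as_rule(
        "Component(?c), c_status(?c, false), requiresComponent(?fd, ?c)"
        " -> fd_realisability(?fd, false)"
    )
```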

