

et al., 2009). This is now providing adaptive planning capabilities to oceanographers for maximising the science return of their UUV missions (McGann et al., 2007; Rajan et al., 2007). Using deliberative reactors for the concurrent integration of execution and planning (McGann, Py, Rajan & Henthorn, 2008), live sensor data can be analysed during the mission to adapt the control of the platform in order to measure dynamic and episodic phenomena, such as chemical plumes (McGann, Py, Rajan, Henthorn & McEwen, 2008; McGann et al., 2009). Alternative approaches to adaptive plume tracing can also be found in the works of Farrell et al. (2005) and Jakuba (2007). The research goals of all these approaches have been motivated by scientific applications and do not consider the needs of the human operators or the maritime industry.

However, autonomy cannot be achieved without humans, as it must ultimately be accepted by an operator. Our research is geared towards improving human access to UUVs in order to address the maritime industry's primary requirement of improving platform operability (Patrón et al., 2007). We propose a goal-based approach to adaptive mission planning. The advantage of this approach is that it provides high levels of mission abstraction. This makes the human interface simple, powerful and platform independent, which greatly eases the operator's task of designing and deploying missions (Patrón, 2009). Ultimately, operators will not need specialist training for a specific UUV platform; instead, missions will be described purely in terms of their goals. Apart from ease of use, we have also demonstrated using a novel metric (Patrón & Birch, 2009) that adaptive mission planners can produce solutions close to those a human planner would produce (Patrón et al., 2009a). This means that our solutions can be trusted by an operator.

Another advantage of our research over other state-of-the-art UUV implementations is that it is industry focussed. Our service-oriented approach provides goal-based mission planning with discoverable capabilities, which meets industry's need for platform independence (Patrón et al., 2009b). Finally, our plan repair approach optimises the resources required for adaptability and maximises consistency with the original plan, which improves human acceptance of autonomy. Resource optimisation and consistency are very important properties for real-world implementations, as we demonstrate in our sea trials (Patrón, Miguelanez, Petillot & Lane, 2008).

Section 3 describes how we link together orientation and observation. Section 4 presents an approach to the continuous iteration of decision and action.

**3. Semantic knowledge-based situation awareness**

Unmanned vehicle situation awareness (*SAV*) consists in enabling the vehicle to autonomously understand the '*big picture*' (Adams, 2007). This picture is composed of the experience gained from previous missions (orientation) and the information obtained from the sensors while on mission (observation). Ontologies allow the representation of knowledge of both of these components.

Ontologies are models of entities and interactions, either generic or specific to some particular field of knowledge (Gruber, 1995). The main components of an ontology are concepts and axioms. A concept represents a set or class of entities within a domain (e.g., a *fault* is a concept within the domain of diagnostics). Axioms are used to constrain the range and domain of the concepts (e.g., a *driver* is a piece of *software* that has an associated piece of *hardware*). The finite set of concept and axiom definitions is called the *Terminology Box TBox* of the ontology.
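As a minimal illustration of these definitions, a *TBox* can be sketched as a set of concept names plus subsumption ('is-a') axioms. The representation below and all concept names in it (*Fault*, *SensorFault*, *DriverFault*) are our own hypothetical sketch, not tied to any particular ontology language:

```python
# Minimal TBox sketch: concept names plus subsumption ("is-a") axioms.
# Concept names are illustrative only.

class TBox:
    def __init__(self):
        self.concepts = set()   # terminology: the concept names
        self.parents = {}       # axioms: concept -> set of superconcepts

    def add_concept(self, name, parents=()):
        self.concepts.add(name)
        self.parents[name] = set(parents)

    def superconcepts(self, name):
        """All concepts subsuming `name`, via the transitive closure of the axioms."""
        result = set()
        stack = [name]
        while stack:
            for parent in self.parents.get(stack.pop(), ()):
                if parent not in result:
                    result.add(parent)
                    stack.append(parent)
        return result

tbox = TBox()
tbox.add_concept("Fault")
tbox.add_concept("SensorFault", parents=["Fault"])
tbox.add_concept("DriverFault", parents=["Fault"])

print(tbox.superconcepts("SensorFault"))  # {'Fault'}
```

A real ontology language such as OWL adds far richer axioms (property ranges, cardinalities), but the terminological role is the same: a finite set of concept definitions and constraints.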



Fig. 4. Knowledge Base representation system including the *TBox*, *ABox*, the description language and the reasoning components. Its interface is made of orientation rules and agent queries.

Instances are the individual entities represented by a concept of the ontology (e.g., a *remus* is an instance of the concept *UUV*). Relations are used to describe the interactions between individuals (e.g., the relation *isComponentOf* might link the individual *SensorX* to the individual *PlatformY*). This finite set of instances and relations about individuals is called the *Assertion Box ABox*. The combination of *TBox* and *ABox* is what is known as a *Knowledge Base*. *TBox* aligns naturally to the orientation component of *SAV* while *ABox* aligns to the observation component.
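The *ABox* side can be sketched in the same spirit: a set of instance assertions and a set of relation triples between individuals. This is a self-contained, hypothetical sketch; the individuals (*remus*, *SensorX*, *PlatformY*) and the relation *isComponentOf* follow the examples in the text, while the data-structure layout is purely illustrative:

```python
# ABox sketch: instance assertions plus binary relations between individuals.
# Individual and relation names follow the examples in the text; the
# representation itself is illustrative.

class ABox:
    def __init__(self):
        self.instance_of = {}   # individual -> asserted concept
        self.relations = set()  # (subject, relation, object) triples

    def assert_instance(self, individual, concept):
        self.instance_of[individual] = concept

    def assert_relation(self, subject, relation, obj):
        self.relations.add((subject, relation, obj))

abox = ABox()
abox.assert_instance("remus", "UUV")
abox.assert_instance("SensorX", "Sensor")
abox.assert_relation("SensorX", "isComponentOf", "PlatformY")

# A Knowledge Base is simply the pairing of terminology and assertions:
# the TBox captures orientation, the ABox captures observation.
knowledge_base = {"TBox": {"UUV", "Sensor", "Platform"}, "ABox": abox}
```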

In the past, authors such as Matheus et al. (2003) and Kokar et al. (2009) have used ontologies for situation awareness in order to assist humans during information fusion and situation analysis processes. Our work extends these efforts by using ontologies to provide unmanned situation awareness, assisting autonomous decision-making algorithms in underwater vehicles. One of the main advantages of using a knowledge base over a classical database schema to represent *SAV* is the extended querying that it provides, even across heterogeneous data systems. The meta-knowledge within an ontology can assist an intelligent agent (e.g., status monitor, mission planner, etc.) with processing a query. Part of this intelligent processing is due to the capability of reasoning. This enables the publication of machine-understandable meta-data, opening opportunities for automated information processing and analysis.
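One concrete sense in which this querying is 'extended' relative to a flat database table is subsumption: a query for a general concept also returns individuals asserted only under its subconcepts. The sketch below is self-contained and all concept and individual names in it are hypothetical:

```python
# Querying with subsumption: asking for "Sensor" also returns individuals
# asserted under subconcepts such as "SonarSensor". Names are illustrative.

tbox_parents = {"SonarSensor": "Sensor", "CameraSensor": "Sensor", "Sensor": None}
abox_instances = {"SonarA": "SonarSensor", "CameraB": "CameraSensor", "GPS1": "Receiver"}

def subsumed_by(concept, query_concept):
    """True if `concept` equals `query_concept` or descends from it in the TBox."""
    while concept is not None:
        if concept == query_concept:
            return True
        concept = tbox_parents.get(concept)
    return False

def query(concept):
    """All individuals whose asserted concept is subsumed by `concept`."""
    return {ind for ind, c in abox_instances.items() if subsumed_by(c, concept)}

print(query("Sensor"))  # {'SonarA', 'CameraB'}
```

A table keyed on the literal concept name would return nothing for `Sensor` here, since no individual is asserted directly under it; the terminological axioms are what make the broader answer available.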

For instance, a status monitor agent using meta-data about sensor location could automatically infer the location of an event based on observations from nearby sensors (Miguelanez et al., 2008). Inferences over the ontology are made by reasoners. A reasoner enables the domain's logic to be specified with respect to the context model and applied to the corresponding knowledge, i.e., the instances of the model (see Fig. 4). A detailed description of how a reasoner works is outside the scope of this article. For the implementation of our approach, we use the open-source reasoner called Pellet (Sirin et al., 2007).
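The flavour of the status-monitor inference can be sketched as a single rule applied to the knowledge: if sensors with known locations report the same event, place the event near those sensors. The rule, sensor names, event label and coordinates below are all illustrative; a reasoner such as Pellet operates over full description logic rather than over Python dictionaries:

```python
# Rule sketch: if sensors report an event and the knowledge base records their
# locations, infer an approximate event location by averaging the positions of
# the reporting sensors. All names and coordinates are illustrative.

sensor_location = {"SensorX": (10.0, 4.0), "SensorY": (14.0, 8.0)}
observations = [("SensorX", "leak_detected"), ("SensorY", "leak_detected")]

def infer_event_location(event):
    """Centroid of the known positions of sensors reporting `event`."""
    reporting = [s for s, e in observations if e == event]
    positions = [sensor_location[s] for s in reporting if s in sensor_location]
    if not positions:
        return None  # no located sensor observed the event
    xs, ys = zip(*positions)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(infer_event_location("leak_detected"))  # (12.0, 6.0)
```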
