**3. Ontology design in IMNET**

In this section, we describe our ontology design for virtual objects and avatars in IMNET, which supports the development and sharing of animation procedures in the system. The ontology described here serves as a demonstrative example for potential applications; the design is therefore by no means complete.

#### **3.1 Ontology design of virtual environment**

The objective of the ontology design for the virtual environment in this work is two-fold. First, we would like to retain the information that exists in the original IMNET, such as object geometry and transformation. Second, we use an example to show that richer semantic information about the virtual objects can facilitate the computation of advanced reasoning procedures, such as a path planner designed by the users.

Fig. 2. Ontology design for virtual world

Our ontology design of the virtual environment is shown in Fig. 2. The root of the world document is the IMWorld node, which contains world information (WorldInfo) and all the virtual objects (WorldObject) in the world. In order to retain the semantic information of the virtual objects existing in the original IMNET, we have designed the GeometryInfo and Transform nodes. Each object also has some additional attributes such as name, tag, baseLevel, and height. The tag attribute allows the user to denote application-specific properties of virtual objects. For example, in the path planning example, one can tag certain objects as sidewalk or crosswalk so that these regions can be treated appropriately by the path planner according to their meanings in the world. Each object may also have an Approximation2D attribute, a polygon that defines a 2D approximation of an obstacle in the environment for the path planner. If a 2D approximation is oversimplified for the application, one can also use the baseLevel and height attributes to define 3D approximation regions where the obstacles are located. If these attributes are not available for some objects, they can still be computed from the given 3D geometry and transformation of the objects. Some objects may also serve as the ground of the world, through the Ground node, to define the boundary of the world. In addition, some objects can be treated as a HotPosition when they are foci of interest in the application.
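
To make the design concrete, the sketch below mirrors the WorldObject branch of Fig. 2 in plain Java. The attribute names (name, tag, baseLevel, height, geometryInfo, Approximation2D) come from the ontology; the class layout and the helper method are illustrative assumptions, not the generated classes used in IMNET.

```
// A hypothetical plain-Java mirror of the WorldObject ontology in Fig. 2.
// Field names follow the ontology; the class layout itself is illustrative.
import java.util.List;

public class WorldObject {
    String name;                    // human-readable object name
    String tag;                     // application-specific label, e.g. "sidewalk"
    double baseLevel;               // bottom of the 3D approximation region
    double height;                  // vertical extent of the approximation region
    GeometryInfo geometryInfo;      // reference to the object's VRML file
    Transform transform;            // translation, rotation, and scale
    List<double[]> approximation2D; // optional obstacle polygon, as (x, y) pairs

    /** True if the path planner may skip convex-hull computation. */
    public boolean has2DApproximation() {
        return approximation2D != null && !approximation2D.isEmpty();
    }
}

class GeometryInfo { String file; }
class Transform { double[] translation, rotation, scale; }
```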

#### **3.2 Ontology design for avatars**

In MUVEs, an avatar can be controlled by a real user or, if the system provides such a function, by a computer program (called a virtual user). Virtual users can be used by the designer of the virtual world to perform simple tasks such as watching a gate or offering guided tours. In this section, we describe the basic ontology classes and attributes (shown in Fig. 3) that we have designed for avatar interactions. Although an avatar is also an object in the virtual world, avatars play more active and complicated roles. For example, a user may choose to use his/her own animation for a specific behaviour by using the hasBehaviour property to connect to the Behaviour class. This Behaviour class defines the procedure (with name, package, and codebase) that generates the desired animation. In addition, an avatar may contain some basic attributes such as name, geometryInfo, and status. We also use the hasFriend and hasPosition properties to obtain the friendship and current position information of the avatars.
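
As with the world ontology, a minimal Java mirror of Fig. 3 may clarify the structure. The attribute and property names follow the figure; the map-based behaviour lookup is an illustrative assumption.

```
// A hypothetical sketch of the avatar ontology in Fig. 3: an avatar may bind
// named behaviours to downloadable procedures (package + codebase).
import java.util.HashMap;
import java.util.Map;

public class Avatar {
    String name;
    String status;
    double[] position = new double[3];                         // hasPosition: x, y, z
    final Map<String, Behaviour> behaviours = new HashMap<>(); // hasBehaviour*

    /** Look up the user-supplied procedure for a behaviour such as "walk". */
    public Behaviour behaviourFor(String behaviourName) {
        return behaviours.get(behaviourName);
    }
}

class Behaviour {
    String name;      // e.g. "walk"
    String pkg;       // Java package of the pluggable procedure
    String codeBase;  // URL of the bundle to download if not installed
}
```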

Fig. 3. Ontology design for avatars


#### **3.3 Using ontology to load the virtual world**

In the two subsections above, we have defined an ontology for the objects and avatars in the virtual world. However, in the original IMNET system, the geometry of the virtual world is loaded from a single VRML file. The geometry is parsed and converted into the underlying format for 3D display by a module called VRMLModelFactory, as shown in Fig. 4. In order to augment the system with semantic information, we have split the geometry into several VRML files, one for each object. This file is specified in the geometryInfo attribute of every WorldObject. We have adopted the Web Ontology Language (OWL), established by the W3C, as the file format for the ontology of the virtual world. As shown in Fig. 5, the system first loads and parses the OWL file into an object format through the automatically generated Java classes and the Protégé API. The geometry file for each object is then retrieved from the ontology and loaded into the system by the VRMLModelFactory module.
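
A minimal sketch of this loading step is shown below, assuming the Protégé-OWL 3.x API (ProtegeOWL, JenaOWLModel); the actual IMNET loader may differ. The class and property names, WorldObject and geometryInfo, come from Fig. 2, and the hand-off to VRMLModelFactory is indicated as a comment.

```
// A sketch of the loading step in Fig. 5, assuming the Protégé-OWL 3.x API.
import edu.stanford.smi.protegex.owl.ProtegeOWL;
import edu.stanford.smi.protegex.owl.jena.JenaOWLModel;
import edu.stanford.smi.protegex.owl.model.OWLNamedClass;
import edu.stanford.smi.protegex.owl.model.RDFIndividual;
import edu.stanford.smi.protegex.owl.model.RDFProperty;

public class WorldLoader {
    public void load(String owlUri) throws Exception {
        JenaOWLModel owl = ProtegeOWL.createJenaOWLModelFromURI(owlUri);
        OWLNamedClass worldObject = owl.getOWLNamedClass("WorldObject");
        RDFProperty geometryInfo = owl.getRDFProperty("geometryInfo");

        // Hand each object's VRML file to the existing geometry pipeline.
        for (Object o : worldObject.getInstances(false)) {
            RDFIndividual obj = (RDFIndividual) o;
            Object vrmlFile = obj.getPropertyValue(geometryInfo);
            // VRMLModelFactory.load((String) vrmlFile);  // existing IMNET module
        }
    }
}
```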

Fig. 4. Processing a single VRML file to generate the virtual world

Fig. 5. Processing an OWL file and loading multiple VRML files to generate the virtual world

#### **4. Communication protocol for information query**

In a semantic virtual environment, we believe that semantic information should not only be used by internal modules but should also be accessible to other clients through user-defined pluggable modules. In the previous section, we described the ontology of the world and how it is loaded into the IMNET system. However, the clients are not required to specify all attributes defined in the ontology of the avatar. In addition, not all information described in the ontology will be broadcast to all clients. Therefore, we need a flexible way for the avatars to communicate semantic information with each other. In this section, we describe how we modify the current communication protocol of IMNET to take information query into account.

```
<IMNET from="user1" to="user2"> 
 <Chat> Hello, How are you? </Chat> 
</IMNET>
```
Fig. 6. An example of IMNET message

The application protocol in the original IMNET is similar to that of other MUVEs in that it only encapsulates predefined message types in the XML format. The underlying animation scripting language, XAML, is an example of such a message type (Li et al., 2004). Another example is the message for textual information used in the chat module. For instance, in Fig. 6, we show an example where user1 sends a <Chat> message to user2. However, in the original design there is no way for the clients to query information about other avatars that may be defined by the avatar designers instead of the system. This function is crucial for the avatars to exchange information for richer interactions in a semantic virtual environment.


Fig. 7. Client architecture for the processing of three types of information

In this work, we have enhanced the communication protocol of IMNET to incorporate a broader range of message types. We distinguish three types of information exchange between avatars and have designed their processing flow in IMNET as shown in Fig. 7. The first is static information, such as the id and name properties, which is delivered only once when a user logs in. The second type is update information, such as the position of an avatar, which is pushed to all clients at a higher frequency. An example of update information is shown in Fig. 8. The third type is query information, such as optional attributes or questions, which is sent to the inquirer only upon request. We have integrated these three types of messages under the tag <Info> and distinguish their types with an internal tag. For example, in Fig. 9, we show a scenario where user1 asks user2 whether user2 wants to be a friend of user1. The message is sent with a <queryInfo> internal tag and a timestamp property, which serves as the id of the query. The query processing component in Fig. 7 may prompt user2 with a dialog box (if user2 is a real user) or invoke an auto-responding component (if user2 is a virtual user) for a response. User2 can then use this id (askId) to refer to the query in his/her reply message to user1, as shown in Fig. 10.
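
The client-side routing in Fig. 7 can be sketched as follows, assuming messages arrive as XML strings; the three case bodies stand in for the actual IMNET components, and the DOM handling is simplified.

```
// A sketch of the client-side dispatch in Fig. 7: route an incoming <Info>
// message by its internal tag. The component hooks are hypothetical.
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;

public class InfoDispatcher {
    public void dispatch(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        Element info = doc.getDocumentElement();   // the <Info ...> element
        Node n = info.getFirstChild();             // skip any whitespace nodes
        while (n != null && n.getNodeType() != Node.ELEMENT_NODE)
            n = n.getNextSibling();
        if (n == null) return;                     // empty message

        switch (((Element) n).getTagName()) {
            case "staticInfo": break;  // store id/name once at login
            case "updateInfo": break;  // refresh avatar position, etc.
            case "queryInfo":  break;  // prompt the user or auto-respond,
                                       // echoing timestamp back as askId
        }
    }
}
```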

```
<Info from="user2" to="all" timestamp="1215476355"> 
 <updateInfo position="20 34"/> 
</Info>
```
Fig. 8. An example of a message for update information


<Info from="user1" to="user2" timestamp="1215476323"> <queryInfo ask="make friend"/> </Info>

Fig. 9. An example of a query message

```
<Info from="user2" to="user1" timestamp="1215476330"> 
 <queryInfo askId="1215476323" answer="yes"/> 
</Info>
```
Fig. 10. An example of a response message

#### **5. Demonstrative examples**

In this section, we will give two examples of using semantic information in the virtual world to enhance the functions and behaviours of the avatars.

#### **5.1 Example 1: Motion planning for avatars**

A common way for a user to navigate in a virtual environment is to control his/her avatar with input devices such as a keyboard or mouse. However, this is a low-level task, and it may not be easy for a novice user to steer an avatar to a destination. Much research has proposed using motion planning techniques to generate collision-free navigation paths for the avatar to follow (Salomon et al., 2003). However, in order to define a motion planning problem, we need to obtain the geometric information of the objects in the environment so that we know where the boundary of the world is and which objects need to be treated as obstacles.

In this subsection, we use a motion planning component as an example to illustrate how a user-defined animation procedure can be installed dynamically and retrieve the world information needed by the application. A user first prepares the procedure as a software bundle according to the OSGi specification. Then he/she can use an XAML script, such as the one shown in Fig. 11, to indicate where he/she wants to move to. In this script, he/she needs to specify the name of the package, the initial (optional) and goal locations, and the URL for downloading the bundle if it is not already installed. In this case, the bundle will be installed dynamically and invoked through the OSGi mechanism (Chu et al., 2008).

```
<MoPlan package='imlab.osgi.bundle.interfaces' 
  codebase='http://imlab.cs.nccu.edu.tw/plan.jar'> 
 <param name="-s" value="1.1 2.3"/> 
 <param name="-g" value="5.2 3.8"/> 
</MoPlan>
```
Fig. 11. An example of specifying a motion planning problem in a user-defined component, MoPlan
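
The dynamic installation step can be sketched with the standard OSGi framework API as below; the look-up-by-location convention is an assumption about how IMNET tracks installed bundles.

```
// A minimal sketch of dynamic installation, assuming a standard OSGi
// BundleContext; the codebase URL comes from the <MoPlan> script.
import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;
import org.osgi.framework.BundleException;

public class BundleInstaller {
    private final BundleContext context;

    public BundleInstaller(BundleContext context) { this.context = context; }

    /** Install the planner bundle from its codebase URL if necessary. */
    public Bundle ensureInstalled(String codebase) throws BundleException {
        for (Bundle b : context.getBundles()) {
            if (codebase.equals(b.getLocation())) return b;  // already installed
        }
        Bundle bundle = context.installBundle(codebase);     // download and install
        bundle.start();                                      // activate the service
        return bundle;
    }
}
```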

<Info from="user1" to="user2" timestamp="1215476323">

<Info from="user2" to="user1" timestamp="1215476330">

<queryInfo askId="1215476323" answer="yes"/>

<queryInfo ask="make friend"/>

In this section, we will give two examples of using semantic information in the virtual world

A common way for a user to navigate in a virtual environment is by controlling his/her avatar by input devices such as keyboard or mouse. However, it is a low-level task that may not be easy for a novice user to control his/her avatar to reach a destination. There has been much research that proposed to use motion planning techniques to generate collision-free navigation paths for the avatar to follow (Salomon et al., 2003). However, in order to define a motion planning problem, we need to obtain the geometric information of the objects in the environment such that we know where the boundary of the world is and which of the

In this subsection, we will use a motion planning component as an example to illustrate how a user-defined animation procedure can be installed dynamically and retrieve necessary world information for the needs of the application. A user first prepares the procedure as a software bundle according to the specification of OSGi . Then he/she can use a XAML script, such as the one shown in Fig. 11, to indicate where he/she wants to move to. In this script, he/she needs to specify the name of the package, the initial (optional) and goal locations, and the URL for downloading the bundle if it is not already installed. In this case, the bundle will be installed dynamically and evoked through the OSGi mechanism (Chu et

Fig. 11. An example of specifying a motion planning problem in a user-defined component,

<MoPlan package='imlab.osgi.bundle.interfaces' codebase='http://imlab.cs.nccu.edu.tw/plan.jar'>

 <param name="-s" value="1.1 2.3"/> <param name="-g" value="5.2 3.8"/>

Fig. 9. An example of query message

</Info>

Fig. 10. An example of responding message

</Info>

**5.1 Example 1: Motion planning for avatars** 

objects needs to be treated as an obstacle.

</MoPlan>

al., 2008).

MoPlan

to enhance the functions and behaviours of the avatars.

**5. Demonstrative examples** 

In order to generate a collision-free path, a motion planner needs to acquire obstacle information from the world. In our system, the motion planning component obtains this semantic information through the ontology of the virtual world defined in Section 3. The planner first converts the obtained obstacle information into a 2D bitmap and then computes a potential field, a representation commonly used in motion planning. The planner then performs a best-first search to find a feasible path to the goal according to the potential field. Finally, the planner component translates the path into an XAML script and assigns it to the avatar to generate the walking animation, as depicted in Fig. 12.
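
The search step can be sketched as follows: a best-first expansion over the potential grid, always growing the frontier at the cell with the lowest potential. This is an illustrative reconstruction, not the IMNET planner itself; cells with infinite potential mark obstacles.

```
// An illustrative best-first search over a potential grid.
import java.util.Comparator;
import java.util.LinkedList;
import java.util.List;
import java.util.PriorityQueue;

public class BestFirstPlanner {
    private static final int[][] DIRS = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

    /** Returns a list of grid cells from start to goal, or null if none exists. */
    public static List<int[]> plan(double[][] potential, int[] start, int[] goal) {
        int w = potential.length, h = potential[0].length;
        int[][] parent = new int[w * h][];
        boolean[] visited = new boolean[w * h];
        // Always expand the frontier cell with the lowest potential value.
        PriorityQueue<int[]> open = new PriorityQueue<>(
                Comparator.comparingDouble((int[] c) -> potential[c[0]][c[1]]));
        open.add(start);
        visited[start[0] * h + start[1]] = true;
        while (!open.isEmpty()) {
            int[] c = open.poll();
            if (c[0] == goal[0] && c[1] == goal[1]) {        // goal reached:
                LinkedList<int[]> path = new LinkedList<>(); // walk parents back
                for (int[] p = c; p != null; p = parent[p[0] * h + p[1]])
                    path.addFirst(p);
                return path;
            }
            for (int[] d : DIRS) {
                int x = c[0] + d[0], y = c[1] + d[1];
                if (x < 0 || y < 0 || x >= w || y >= h) continue;
                if (visited[x * h + y] || Double.isInfinite(potential[x][y])) continue;
                visited[x * h + y] = true;
                parent[x * h + y] = c;
                open.add(new int[]{x, y});
            }
        }
        return null;  // goal unreachable
    }
}
```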

Fig. 12. The process of generating animations through the motion planning component

Obstacle information can not only be inferred from low-level geometry but can also be given as an approximation by the scene designer. In Section 3, we designed an optional attribute called Approximation2D in the ontology of a virtual object. In Fig. 13, we show an example of a collision-free path generated by the planner using the 2D approximations of the objects in the world. If the planner can find this 2D approximation for an object, it uses it to build the 2D bitmap needed by the planner. If not, it can still build the convex hull of the 3D geometry and project it onto the ground to form a 2D approximation. In other words, semantic information can be designed to facilitate automatic reasoning, but it is not mandatory. The designers of virtual objects are not obligated to define all attributes in an ontology, which could be large in a collaboratively created world. In addition, user-defined animation procedures do not easily break down in such a loosely coupled distributed environment, since they can take this into account at the design stage.
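
The fallback mentioned above, projecting the object's 3D vertices onto the ground plane and taking their 2D convex hull, can be sketched as below (Andrew's monotone chain); extracting the vertex list from the VRML geometry is assumed to happen elsewhere. VRML is y-up, so the y coordinate is dropped in the projection.

```
// A sketch of the convex-hull fallback for objects without Approximation2D.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class HullFallback {
    /** vertices are (x, y, z) triples; the returned hull is counter-clockwise. */
    public static List<double[]> groundHull(List<double[]> vertices) {
        List<double[]> pts = new ArrayList<>();
        for (double[] v : vertices) pts.add(new double[]{v[0], v[2]});  // drop height
        pts.sort((a, b) -> a[0] != b[0] ? Double.compare(a[0], b[0])
                                        : Double.compare(a[1], b[1]));
        List<double[]> hull = new ArrayList<>();
        for (int pass = 0; pass < 2; pass++) {       // lower chain, then upper chain
            int base = hull.size();
            for (double[] p : pts) {
                while (hull.size() - base >= 2 && cross(hull.get(hull.size() - 2),
                        hull.get(hull.size() - 1), p) <= 0)
                    hull.remove(hull.size() - 1);    // drop non-left turns
                hull.add(p);
            }
            hull.remove(hull.size() - 1);            // endpoint repeats in next chain
            Collections.reverse(pts);
        }
        return hull;
    }

    static double cross(double[] o, double[] a, double[] b) {
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0]);
    }
}
```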

However, some semantic information cannot be inferred directly from geometry. For example, the virtual environment may contain crosswalk or sidewalk regions that should be used whenever possible. A person can recognize this kind of object from its appearance, but it would be difficult for the machine to infer its function from geometry alone. In this case, the planner has to acquire this information through the semantics defined in the ontology of the virtual world. In the example shown in Fig. 14, the planner knows where the sidewalk and crosswalk regions are through object tagging in the ontology and gives the regions occupied by these objects a higher priority when planning the path for the avatar. The potential values in these regions are lowered to increase their priority during the search for a feasible path. Consequently, a path passing through these regions is generated in the example shown in Fig. 14. In addition, according to this semantic information, appropriate animations, such as looking around before moving onto the crosswalk region, can be inserted into the motion sequence of walking to the goal, as shown in Fig. 14(c).
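
A small sketch of this tag handling: cells covered by a sidewalk or crosswalk region get their potential scaled down so that the best-first search prefers them. The weighting factor and the pre-rasterised cell list are illustrative assumptions.

```
// A sketch of lowering potential values in tagged regions; how region cells
// are rasterised from Approximation2D is left out.
import java.util.List;

public class TagWeights {
    static final double PREFERRED_FACTOR = 0.2;  // assumed weight, tunable

    /** Lower the potential of every cell belonging to a sidewalk/crosswalk. */
    public static void prefer(double[][] potential, String tag, List<int[]> cells) {
        if (!"sidewalk".equals(tag) && !"crosswalk".equals(tag)) return;
        for (int[] c : cells) {
            if (!Double.isInfinite(potential[c[0]][c[1]]))   // keep obstacles intact
                potential[c[0]][c[1]] *= PREFERRED_FACTOR;   // lower = preferred
        }
    }
}
```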


Fig. 13. An example path generated by the path planner: (a) avoiding the obstacles described by the 2D approximation in semantics; (b) a snapshot of the scene from the avatar's view.

Fig. 14. Another example path generated by the path planner: (a) the path generated by taking crosswalk and sidewalk into account; (b) a snapshot of the scene from the avatar's view; (c) special animation can be inserted before crossing the crosswalk.

#### **5.2 Example 2: Interaction between avatars**

An objective of this work is to allow animation components owned by different clients to interact with each other. The users can communicate through customized tags to acquire each other's avatar ontology and use this information to perform specific interactions. The user behind an avatar can be a real user or a virtual user controlled by a computer program. In this subsection, we use two scenarios to illustrate these two types of interactions. The first scenario demonstrates the interaction between two real users, and the second the interaction between a real user and a virtual user.


To facilitate the interaction between avatars, we have designed a component called SocialService. There are three steps for initiating an interaction between avatars, as shown in Fig. 15. A user who would like to initiate the interaction first sends the customized XAML script shown in Fig. 16 to the other avatar (step 1) for it to install the social interaction component (step 2). Once the component has been installed, interaction queries related to social activities can be delivered through the communication protocol described in Section 4 and processed by the SocialService component (step 3).

Fig. 15. The steps for initiating an interaction between avatars

```
<SocialService package='imlab.osgi.bundle.interfaces' 
  codebase='http://imlab.cs.nccu.edu.tw/social.jar'/>
```
Fig. 16. Starting the interaction mechanism between avatars through an XAML script

In the first scenario, both users are real users. First, user1 invites user2 to be his friend (Fig. 17(a)). A query message, "'User1' added you to his friend list. Do you want to invite 'user1' to be your friend as well?", then appears in user2's interface (Fig. 17(b)). If user2 chooses 'yes', user1 is added to her friend list and a confirmation message is sent back to user1 (Fig. 17(c)). Through the interaction between the two real users, the friend information is updated in the ontologies of both avatars.
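
The reply half of this dialogue follows the message formats of Figs. 9 and 10: the timestamp of the incoming query is echoed back as askId. A sketch, with the string assembly and timestamp convention as assumptions:

```
// A sketch of building the Fig. 10-style response to a "make friend" query;
// actual delivery through the IMNET communication module is assumed.
public class FriendQuery {
    /** Build the <Info> response for a "make friend" query. */
    public static String reply(String from, String to,
                               String queryTimestamp, boolean accepted) {
        long now = System.currentTimeMillis() / 1000;  // seconds, as in Fig. 10
        return "<Info from=\"" + from + "\" to=\"" + to + "\" timestamp=\"" + now + "\">"
             + "<queryInfo askId=\"" + queryTimestamp + "\" answer=\""
             + (accepted ? "yes" : "no") + "\"/></Info>";
    }
}
```

For instance, `FriendQuery.reply("user2", "user1", "1215476323", true)` yields a message equivalent to the one in Fig. 10.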

Fig. 17. An example of interaction between two real users

In the second scenario, user1 arranged a virtual user called doorkeeper to watch the door and provide information to potential guests (Fig. 18(1~2)). When user2 entered a designated region, the doorkeeper would turn to face user2 and ask: "May I help you?" At the first encounter, user2 had entered the area by accident and therefore chose the answer: "Just look around." The doorkeeper replied: "Have a good day!" (Fig. 18(3~5)) The state of the doorkeeper in this interaction was then set to FINISH. After user2 left the area, the state was restored to IDLE (Fig. 18(6)). Assume that after some period of time, user2 approached the doorkeeper a second time. This time user2 chose: "I'm looking for my friend." The doorkeeper replied: "Who's your friend?" User2 answered: "Jerry." At this moment, the doorkeeper queried the avatar ontology of user1 (named Jerry) to see whether user2 was in his friend list. If so, the doorkeeper would inform user2 of the current position of user1. Otherwise, the doorkeeper would answer: "Sorry, Jerry does not seem to know you." If there were no user called Jerry, the doorkeeper would answer: "Jerry is not in this world." (Fig. 18(7~9))

Fig. 18. An example of interaction between a real user and a virtual user (doorkeeper)
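
The doorkeeper's dialogue logic can be modelled as a small state machine, sketched below. The IDLE and FINISH states come from the scenario above; the World interface is a hypothetical stand-in for queries against the avatar ontologies of Section 3.2.

```
// A hypothetical sketch of the doorkeeper's friend-lookup dialogue.
public class Doorkeeper {
    enum State { IDLE, FINISH }
    State state = State.IDLE;

    interface World {  // assumed hooks into the avatar ontology
        boolean isFriendOf(String owner, String visitor); // hasFriend lookup
        String positionOf(String user);  // null if the user is not in this world
    }

    /** Handle "I'm looking for my friend <friendName>" from a visitor. */
    public String onFriendLookup(World w, String visitor, String friendName) {
        state = State.FINISH;                   // dialogue completed
        String pos = w.positionOf(friendName);
        if (pos == null) return friendName + " is not in this world.";
        if (!w.isFriendOf(friendName, visitor))
            return "Sorry, " + friendName + " does not seem to know you.";
        return friendName + " is at " + pos + ".";
    }

    /** Called when the visitor leaves the designated region. */
    public void onVisitorLeft() { state = State.IDLE; }
}
```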

#### **6. Conclusions**

In this work, we have extended the MUVE system to allow the semantics of the objects and avatars in the virtual environment to be described in the form of an ontology. This provides a standard way for software components to acquire semantic information about the world for further reasoning. We have used two types of examples, path planning and social interaction, to show how users can design their own code to facilitate richer or autonomous behaviours for their (possibly virtual) avatars. We hope that these examples will shed some light on the further development of object ontology and more sophisticated applications.

#### **7. Acknowledgement**

This research was funded in part by the National Science Council of Taiwan, R.O.C., under contract No. NSC96-2221-E-004-008. This chapter is extended from a conference paper published in the International Conference on Virtual Reality Continuum and Its Applications in Industry (VRCAI2008).

#### **8. References**

Abaci, T.; Cíger, J. & Thalmann, D. (2005). Action semantics in Smart Objects. *Proc. of Workshop towards Semantic Virtual Environments*.

Aylett, R. & Cavazza, M. (2001). Intelligent Virtual Environments - A State-of-the-art Report. *Proc. of Eurographics*.

Chu, Y.L.; Li, T.Y. & Chen, C.C. (2008). User Pluggable Animation Components in Multi-user Virtual Environment. *Proc. of the Intl. Conf. on Intelligent Virtual Environments and Virtual Agents*, China.

Garcia-Rojas, A.; Vexo, F.; Thalmann, D.; Raouzaiou, A.; Karpouzis, K. & Kollias, S. (2006). Emotional Body Expression Parameters in Virtual Human Ontology. *Proc. of the 1st Intl. Workshop on Shapes and Semantics*, pp. 63-70, Matsushima, Japan.

Gutiérrez, M.; Garcia-Rojas, A.; Thalmann, D.; Vexo, F.; Moccozet, L.; Magnenat-Thalmann, N.; Mortara, M. & Spagnuolo, M. (2005). An Ontology of Virtual Humans: incorporating semantics into human shapes. *Proc. of the Workshop towards Semantic Virtual Environments*.

Kleinermann, F.; Troyer, O.D.; Creelle, C. & Pellens, B. (2007). Adding Semantic Annotations, Navigation Paths and Tour Guides to Existing Virtual Environments. *Proc. of the 13th Intl. Conf. on Virtual Systems and Multimedia (VSMM'07)*, Brisbane, Australia.

Li, T.Y.; Liao, M.Y. & Liao, J.F. (2004). An Extensible Scripting Language for Interactive Animation in a Speech-Enabled Virtual Environment. *Proc. of the IEEE Intl. Conf. on Multimedia and Expo (ICME2004)*, Taipei, Taiwan.

Li, T.Y.; Liao, M.Y. & Tao, P.C. (2005). IMNET: An Experimental Testbed for Extensible Multi-user Virtual Environment Systems. *Proc. of the Intl. Conf. on Computational Science and its Applications, LNCS 3480*, O. Gervasi et al. (Eds.), Springer-Verlag Berlin Heidelberg, pp. 957-966.

Otto, K.A. (2005). The Semantics of Multi-user Virtual Environments. *Proc. of the Workshop towards Semantic Virtual Environments*.

