**Personalization of Virtual Environments Navigation and Tasks for Neurorehabilitation**

Dani Tost, Sergi Grau and Sergio Moya *CREB-Polytechnical University of Catalonia Spain*

#### **1. Introduction**

134 Virtual Reality and Environments

The use of "serious" computer games, designed for purposes other than pure leisure, is becoming a recurrent research topic in diverse areas, from professional training and education to psychiatric and neuropsychological rehabilitation. In particular, 3D computer games have been introduced in the neuropsychological rehabilitation of cognitive functions to train daily activities. In general, these games are based on the first-person paradigm. Patients control an avatar who moves around in a 3D virtual scenario. They manipulate virtual objects in order to perform daily activities such as cooking or tidying up a room. The underlying hypothesis of Cognitive Neuropsychological Virtual Rehabilitation (CNVR) systems is that 3D Interactive Virtual Environments (VE) can provide good simulations of the real world, yielding an effective transfer of virtual skills to real capacities (Rose et al., 2005). Other potential advantages of CNVR are that it is highly motivating, safe and controlled, and that it can recreate a diversity of scenarios (Guo et al., 2004). Virtual tasks are easy to document automatically. Moreover, they are reproducible, which is useful for accurate analyses of the patients' behavior.
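As a concrete illustration of this automatic documentation, a session can be reduced to a timestamped event log that is stored for later replay and analysis. The sketch below is a minimal Python example; the class and field names are our own illustration, not taken from any particular CNVR system.

```python
import json
import time

class TaskLogger:
    """Records every patient action with a timestamp so a session
    can be replayed and analyzed offline by the therapist."""

    def __init__(self):
        self.events = []

    def log(self, action, target, success):
        self.events.append({
            "t": time.time(),   # wall-clock timestamp
            "action": action,   # e.g. "pick", "place", "navigate"
            "target": target,   # object or location involved
            "success": success, # whether the step matched the task plan
        })

    def error_rate(self):
        """Fraction of logged actions that failed, a simple outcome measure."""
        if not self.events:
            return 0.0
        failed = sum(1 for e in self.events if not e["success"])
        return failed / len(self.events)

    def export(self):
        """Serialize the session for storage or later replay."""
        return json.dumps(self.events)

log = TaskLogger()
log.log("pick", "kettle", True)
log.log("place", "stove", False)
print(log.error_rate())  # 0.5
```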

Several studies have shown that leaving tasks unfinished can be counterproductive in a rehabilitation process (Prigatano, 1997). Therefore, CNVR systems tend to provide free-of-error rehabilitation tasks. To this end, in contrast to traditional leisure games, they integrate different intervention mechanisms to guide patients towards the fulfillment of their goals, ranging from instruction messages to the automatic realization of part or even all of the task.
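Such guidance can be organized as an escalation ladder: each failed attempt triggers a stronger cue, with automatic completion as the last resort so the task always ends in success. A minimal sketch, with intervention names that are illustrative rather than taken from a specific system:

```python
# Escalating intervention: each failed attempt triggers a stronger cue,
# ending with automatic completion so the task is always finished.
INTERVENTIONS = [
    "oral_instruction",     # remind the patient of the goal aloud
    "written_instruction",  # display the goal on screen
    "highlight_target",     # visually outline the target object
    "auto_complete",        # the system performs the step itself
]

def next_intervention(failed_attempts):
    """Map the number of failed attempts to an intervention level,
    never going past the strongest one."""
    level = min(failed_attempts, len(INTERVENTIONS) - 1)
    return INTERVENTIONS[level]

print(next_intervention(0))  # oral_instruction
print(next_intervention(9))  # auto_complete
```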

The main drawback of CNVR is that the use of technology introduces a complexity factor alien to the rehabilitation process. In particular, it requires patients to acquire spatial abilities (Satalich, 1995) and navigational awareness (Chen & Stanney, 1999) in order to find their way through the VE and perceive distances relative to their avatar. In addition, interacting with virtual objects involves recognizing their shape and semantics (Nesbitt et al., 2009). Moreover, pointing at, picking and placing virtual objects is difficult, especially if the environment is complex and the objects are small (Elmqvist & Fekete, 2008). Finally, steering virtual objects through VEs requires spatio-temporal skills to update the perception of the relative distance between objects through motion (Liu et al., 2011). These skills vary from one individual to another, and they can be strongly affected by patients' neuropsychological impairments. Therefore, it is necessary to design strategies that make the technology more usable and minimize its side effects on the rehabilitation process. These strategies, combined with customized intervention strategies, can contribute to the success of CNVR.

In this paper, we propose and discuss several strategies to ease navigation and interaction in VEs. Our aim is to remove technological barriers as well as to facilitate therapeutic interventions in order to help patients reach their goals. Specifically, we present mechanisms that help object picking and placement and that ease navigation. We present the results of these strategies on users without cognitive impairments.

#### **2. Related work**

Several studies have shown that virtual object manipulation improves the performance of visual, attention, memory and executive skills (Boot et al., 2008). This is why typical 3D CNVR systems aimed at training these functions reproduce daily-life scenarios. The patients' exercises consist mainly in performing domestic tasks virtually. The principal activity of patients in the virtual environments is the manipulation of virtual objects, specifically picking, dragging and placing them (Rose et al., 2005). These actions can be performed through a simple user interaction, in general a click with the cursor on the target. More complex actions can be performed, such as breaking, cutting or folding, but in general all of them can be implemented as pre-recorded animations that are also launched with a simple click (Tost et al., 2009).

The intervention strategies that help patients realize these activities on their own consist in reminding them of the goal through oral and written instructions and attracting their attention to the target through visual mechanisms. Difficulties arising from the use of the technology are related to the users' ability to recognize target objects, to understand the rules and limitations of virtual manipulation in comparison to real manipulation, and to manage the scale of the VE. In particular, the selection of small objects in a cluttered VE can be difficult. To overcome this problem, a variety of mechanisms have been proposed (Balakrishnan, 2004), mainly based on scaling the target as the cursor passes over it. Another interesting question is the convenience of decoupling selection from vision by using the relative position of the hand to make selections, or of applying the hand-eye metaphor and computing the selected objects as those intersected by the viewing ray (Argelaguet & Andújar, 2009).

To be able to reach the objects and manipulate them, patients must navigate in the environment. However, navigation constitutes another type of activity that involves by itself many cognitive skills, some of them different from those needed for object manipulation. It requires spatial abilities, namely spatial orientation, visualization and relations (Satalich, 1995), and temporal skills to perceive the direction of movement and the relative velocity of moving objects. Moreover, navigation is closely related to way-finding. It requires not only good navigational awareness, but also logic to perform a selective search of the target in semantically related locations, visual memory to remember the places already explored, and strategy to design an efficient search. In fact, although some controversy exists on whether virtual navigation enhances real navigation abilities (Richardson et al., 2011), virtual navigation by itself is being used for the rehabilitation of spatial skills after brain damage (Koenig et al., 2009).

Fig. 1. The six cursors tested. From left to right: opaque hand, transparent hand + spy-hole, spy-hole, opaque hand + spy-hole, arrow, pointing finger.

From a technological point of view, navigating interactively is far more complex than clicking. It involves various degrees of freedom and a non-trivial translation of the input device data into 3D motion. Therefore, in object manipulation tasks, navigation interferes with the foreseen development of the activity: it can hinder it, delay it, and even make it impossible. In order to decouple navigation from object manipulation, automatic camera placement methods must be designed. This topic has been largely addressed by the computer graphics community (Christie & Olivier, 2008) in order to automatically compute the best camera placement for the exploration of virtual environments (Argelaguet & Andújar, 2010) and for the visualization and animation of scientific data (Bordoloi & Shen, 2005). Two main approaches have been proposed: reactive and indirect methods. Reactive methods apply autonomous robotics strategies to drive the camera from one point to another through the shortest possible path while avoiding obstacles. They apply to the camera the navigation models used for the animation of autonomous non-player characters (Reese & Stout, 1999). Indirect approaches translate users' needs into constraints on the camera parameters, which they then try to solve (Driel & Bidarra, 2009).

In leisure video games, camera positioning cannot be totally automatic, because camera control is usually an essential part of the game. The camera is placed automatically at the beginning of the game or at the transition between scenarios. In this case, automatic positioning must preserve the continuity of the game-play while providing the best view of the environment. An especially challenging problem is the computation of the camera position in third-person games, in which the camera tracks the user's avatar. In this case, it is necessary to avoid camera collisions that may produce disturbing occlusions of parts of the environment (Liu et al., 2011).

The aim of our work is to design methods that reduce the technological barriers of VEs for memory, attention and executive skills rehabilitation. We extend existing techniques to ease object manipulation, and we explore their use as intervention strategies. Moreover, in order to decouple navigation from object manipulation, we provide automatic and semi-automatic camera placement. We show that automatic camera control provides mechanisms of intervention in the task development that ease free-of-error training.

#### **3. Objects manipulation**

**3.1 Technological assistance**

We apply the eye-hand metaphor. We propose several strategies to ease pointing, picking and putting: cursor enrichment, objects outlining and free surfaces highlighting.
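The hand-eye metaphor discussed above reduces selection to a ray test: the selected object is the one intersected by the viewing ray closest to the eye. A minimal sketch using bounding spheres; the scene and function names are our own illustration, not the chapter's actual implementation:

```python
import math

def pick(eye, direction, objects):
    """Hand-eye selection: return the object whose bounding sphere is
    intersected by the viewing ray closest to the eye, or None.
    `objects` maps a name to (center, radius); `direction` is unit length."""
    best, best_t = None, math.inf
    for name, (center, radius) in objects.items():
        # Vector from the eye to the sphere center
        oc = [c - e for c, e in zip(center, eye)]
        # Distance along the ray of the point nearest the center
        t = sum(o * d for o, d in zip(oc, direction))
        if t < 0:
            continue  # sphere is behind the viewer
        # Squared distance from the sphere center to the ray
        closest = [e + t * d for e, d in zip(eye, direction)]
        dist2 = sum((c - p) ** 2 for c, p in zip(center, closest))
        if dist2 <= radius ** 2 and t < best_t:
            best, best_t = name, t
    return best

scene = {"cup": ((0.0, 0.0, -2.0), 0.3), "plate": ((0.0, 0.0, -5.0), 0.5)}
print(pick((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), scene))  # cup
```

Enlarging a target's bounding radius is one simple way to ease the selection of small objects, analogous to the target-scaling mechanisms cited earlier.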

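The reactive third-person camera behavior discussed above can be sketched as follows: if an obstacle blocks the line of sight to the avatar, the camera slides toward the avatar until the view is clear. All names are illustrative, and a real engine would use its built-in ray casts rather than this point sampling:

```python
def clear_view(avatar, camera, obstacles, samples=20):
    """True if no obstacle sphere blocks the segment avatar -> camera.
    `obstacles` is a list of (center, radius) spheres."""
    for i in range(1, samples + 1):
        s = i / samples
        p = tuple(a + s * (c - a) for a, c in zip(avatar, camera))
        for center, radius in obstacles:
            if sum((pc - cc) ** 2 for pc, cc in zip(p, center)) < radius ** 2:
                return False
    return True

def place_camera(avatar, offset, obstacles, steps=10):
    """Shrink the desired offset until the avatar is visible from the
    camera, avoiding occlusions of the tracked avatar."""
    for k in range(steps, 0, -1):
        scale = k / steps
        camera = tuple(a + scale * o for a, o in zip(avatar, offset))
        if clear_view(avatar, camera, obstacles):
            return camera
    return avatar  # worst case: collapse onto the avatar

# A wall segment at z = 8 forces the camera in from z = 10 to about z = 7.
cam = place_camera((0.0, 0.0, 0.0), (0.0, 0.0, 10.0), [((0.0, 0.0, 8.0), 0.5)])
print(cam)
```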