**3. Object manipulation**


and minimize its side effects in the rehabilitation process. These strategies, combined with customized intervention strategies, can contribute to the success of CNVR.

In this paper, we propose and discuss several strategies to ease navigation and interaction in VEs. Our aim is to remove technological barriers as well as to facilitate therapeutic interventions in order to help patients reach their goals. Specifically, we present mechanisms to assist in picking and placing objects and to ease navigation. We present the results of these strategies on users without cognitive impairments.

Several studies have shown that the manipulation of virtual objects improves the performance of visual, attention, memory and executive skills (Boot et al., 2008). This is why typical 3D CNVR systems aimed at training these functions reproduce daily-life scenarios. The patients' exercises consist mainly of performing domestic tasks virtually. The principal activity of patients in the virtual environments is the manipulation of virtual objects, specifically picking, dragging and placing them (Rose et al., 2005). These actions can be performed through a simple user interaction, in general a click with the cursor placed over the target. More complex actions, such as breaking, cutting or folding, can also be performed, but in general all of them can be implemented as pre-recorded animations that are likewise launched with a simple user click (Tost et al., 2009). The intervention strategies that help patients carry out these activities on their own consist of reminding them of the goal through oral and written instructions and attracting their attention to the target through visual mechanisms. Difficulties arising from the use of the technology are related to the users' ability to recognize target objects, to understand the rules and limitations of virtual manipulation compared to real manipulation, and to manage the scale of the VE. In particular, the selection of small objects in cluttered VEs can be difficult. To overcome this problem, a variety of mechanisms have been proposed (Balakrishnan, 2004), mainly based on scaling the target as the cursor passes in front of it. Another interesting question is the convenience of decoupling selection from vision by using the relative position of the hand to make selections, or by applying the hand-eye metaphor and computing the selected objects as those intersected by the viewing ray (Argelaguet & Andújar, 2009).

To be able to reach and manipulate the objects, patients must navigate in the environment. However, navigation constitutes another type of activity that itself involves many cognitive skills, some of them different from those needed for object manipulation. It requires spatial abilities, namely spatial orientation, visualization and relations (Satalich, 1995), and temporal skills to perceive the direction of movement and the relative velocity of moving objects. Moreover, navigation is closely related to way-finding. It requires not only good navigational awareness, but also logic to perform a selective search for the target in semantically related locations, visual memory to remember the places already explored, and strategy to design efficient searches. In fact, although some controversy exists on whether virtual navigation enhances real navigation abilities (Richardson et al., 2011), virtual navigation by itself is being used for the rehabilitation of spatial skills after brain damage (Koenig et al., 2009).



**2. Related work**



#### **3.1 Technological assistance**

We apply the eye-hand metaphor. We propose several strategies to ease pointing, picking and putting: cursor enrichment, object outlining and free-surface highlighting.


Personalization of Virtual Environments Navigation and Tasks for Neurorehabilitation 139










Fig. 2. The four cursor strategies to help pointing; A to A' changing shape; B to B' changing size; C to C' changing color; D to D' animating.
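The four cursor strategies of Figure 2 amount to a small hover-feedback rule. The sketch below illustrates one possible implementation; the `Cursor` structure, the `on_hover` function and the strategy labels are illustrative names, not taken from the paper.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Cursor:
    shape: str = "hand"       # neutral widget
    scale: float = 1.0
    color: str = "white"
    animating: bool = False

def on_hover(cursor: Cursor, over_selectable: bool, strategy: str) -> Cursor:
    """Signal that the hovered object is selectable by changing the cursor's
    shape (i), size (ii), color (iii), or by animating it (iv)."""
    if not over_selectable:
        return Cursor()                          # revert to the neutral cursor
    if strategy == "shape":
        return replace(cursor, shape="pointing_finger")
    if strategy == "size":
        return replace(cursor, scale=1.5)        # enlarge over selectable objects
    if strategy == "color":
        return replace(cursor, color="yellow")
    if strategy == "animation":
        return replace(cursor, animating=True)
    raise ValueError(f"unknown strategy: {strategy!r}")
```

Keeping the cursor immutable and rebuilding it from a neutral default when the pointer leaves a selectable object avoids feedback lingering after the hover ends.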

We have tested six different types of widgets for the cursor (see Figure 1): an opaque hand, a spy-hole, an arrow, an opaque hand with a spy-hole superimposed, a transparent hand with a spy-hole superimposed, and a pointing finger. The hand and the finger have the advantage of helping users understand that they are able to interact with the environment. The main drawback of the hand is that it can occlude objects; this can be corrected by making it transparent. Another inconvenience is its lack of precision, which can be corrected by superimposing a spy-hole on it. The pointing finger also solves this problem. The arrow has the advantage of being small and precise, but it has a low symbolic value. The spy-hole is precise and barely occlusive, but it gives an undesirable aggressive look to the task.

We have proposed two different mechanisms to help pointing: cursor-based mechanisms and object-based mechanisms. The aim of these techniques is to signal, in one way or another, that the object under the cursor is selectable. For the cursor, we have analyzed four different possibilities (see Figure 2): (i) changing its shape when it is in front of a selectable object, (ii) enlarging it, (iii) changing its color, and (iv) launching a small animation.

Fig. 3. The six target object strategies to help pointing; A: normal object; B: with rectangular halo; C: with complementary colors; D: enlarged; E: with increased luminance; F: with silhouette edges; G: with spherical halo.

For the selectable objects, we have tested six different ways of highlighting them (see Figure 3): (i) with a bounding-box halo, (ii) with a circular halo, (iii) enlarging their size, (iv) changing their color to its complement, (v) drawing silhouette edges, and (vi) increasing their luminance.
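Two of the appearance-based highlights, the complementary-color swap (iv) and the luminance increase (vi), reduce to simple per-channel color arithmetic. The sketch below assumes 8-bit RGB channels; the function names and the brightening factor are illustrative.

```python
def complementary(rgb):
    """Highlight strategy (iv): replace a color with its complement
    by inverting each 8-bit channel."""
    r, g, b = rgb
    return (255 - r, 255 - g, 255 - b)

def brighten(rgb, factor=1.3):
    """Highlight strategy (vi): increase luminance by scaling each
    channel, clamped to the 8-bit maximum."""
    return tuple(min(255, round(c * factor)) for c in rgb)
```

In practice such changes would be applied to the object's material or texture in the rendering engine; the arithmetic, however, is the same.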

In a first-person game with the eye-hand metaphor, dragging objects at their real scale in the VE can cause collisions with the other elements of the scenario and occlusions in the view frustum. Therefore, instead of moving the geometric model of the object, we actually move a scaled version of the object projected onto the image plane. We have tested three different strategies: (i) substituting the cursor with the object while it is dragged, or keeping the cursor and showing the dragged object (ii) centered under the cursor or (iii) separated from the cursor, to its lower right. We have tested two variants of the three strategies, with and without transparency. Figure 4 illustrates these modes.
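The placement of the dragged object's screen-space proxy under the three strategies can be sketched as follows. The function returns the proxy's top-left corner in pixels and whether the cursor itself should still be drawn; the mode names and the default offset are illustrative assumptions.

```python
def drag_feedback(cursor_xy, icon_size, mode, offset=8):
    """Position the scaled, image-plane proxy of a dragged object:
    (i) it replaces the cursor, (ii) it sits centered under the cursor,
    or (iii) it is offset to the cursor's lower right."""
    x, y = cursor_xy
    w, h = icon_size
    if mode == "replace_cursor":              # (i) object replaces the cursor
        return (x - w // 2, y - h // 2), False
    if mode == "under_cursor":                # (ii) centered, cursor kept on top
        return (x - w // 2, y - h // 2), True
    if mode == "offset":                      # (iii) lower right of the cursor
        return (x + offset, y + offset), True
    raise ValueError(f"unknown mode: {mode!r}")
```

The transparent variants of each mode would only change how the proxy is blended, not where it is placed.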

A drawback of not moving the actual object is that, when users must put it down on a surface, they do not have a good spatial perception of the free space left. Therefore, we highlight the free space large enough to lodge the held object as users move the cursor. We have tested different ways of highlighting the surface, applying different colors and drawing either the 3D bounding box that the object would occupy on the surface or only its projected area (see Figure 5).
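The free-space test behind this highlight can be sketched with a grid-discretized surface: a hovered spot is highlighted only if the held object's rectangular footprint fits entirely in free cells. The grid representation and all names below are illustrative, not the paper's implementation.

```python
def footprint_fits(occupied, surface_w, surface_h, spot, footprint):
    """Return True if a rectangular footprint placed at `spot` stays on the
    surface and overlaps no occupied cell. `occupied` is a set of (x, y)
    cells; `surface_w` x `surface_h` is the surface size in cells."""
    sx, sy = spot          # candidate cell under the cursor
    fw, fh = footprint     # held object's footprint in cells
    if sx < 0 or sy < 0 or sx + fw > surface_w or sy + fh > surface_h:
        return False       # the footprint would overhang the surface
    return all((sx + i, sy + j) not in occupied
               for i in range(fw) for j in range(fh))
```

The same predicate serves both highlight styles: the 3D bounding box and the projected area differ only in how a passing spot is drawn.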

#### **3.2 Cognitive assistance**







Technological aids can also be used to assist patients at the cognitive level. In particular, to assist patients in picking a specific object, instead of outlining all the pickable objects we can outline only those related to the task goal. In this case, outlining fulfills two different functions: technological aid and cognitive assistance. The number of visual stimuli is reduced to only those that are related to the task. As a consequence, the range of strategies to outline objects is larger: in addition to appearance changes, we can apply sounds and animations that are not suitable when there is a large number of objects to be outlined.
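The two outlining modes differ only in how the set of outlined objects is filtered. A minimal sketch, assuming a dictionary-based scene representation and tag-based task goals (both illustrative):

```python
def objects_to_outline(scene_objects, task_goal, cognitive_mode):
    """Technological aid: outline every pickable object.
    Cognitive assistance: outline only the pickable objects whose tags
    relate them to the current task goal."""
    pickable = [o for o in scene_objects if o["pickable"]]
    if not cognitive_mode:
        return pickable
    return [o for o in pickable if task_goal in o["tags"]]
```

Because the cognitive mode yields far fewer objects, richer signals such as sounds or animations become practical for the filtered set.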

Fig. 5. Two different strategies for highlighting the space that a dragged object would occupy on a surface: on the right, the 3D bounding box; on the left, the projected area.


Free navigation follows the classical first-person, four-degrees-of-freedom model. Users control the orientation of the viewing vector through the yaw and pitch angles. As usual in computer games, rolling is not allowed. The pitch angle is restricted within a parameterized range, between −50 and 50 degrees, to allow looking at the floor and the ceiling while forbidding complete turns. In addition, users control the camera position by moving it in a plane parallel to the floor at a fixed height. Jumping and crouching are not allowed. The movement follows the direction of the projection of the viewing vector onto that plane; therefore, it is not possible to move backwards. Users can also stop and restart the camera movement. The movement has constant speed, except for a short acceleration at its beginning and a deceleration at its end. The camera is controlled using the mouse to specify the viewing vector and camera-path orientation, and the space bar to start and stop the motion. This system has the advantage of requiring only two types of input (mouse movement and the space bar), which is suitable for patients with neuropsychological impairments.

The aim of the assisted navigation mode is to provide means for users to indicate where they want to go, and then automatically drive them to this location. This way, the focus is put on the destination and not on the path towards it. Therefore, navigation is decoupled from interaction.

This assistance can be implemented in two ways: by computing only the final camera position, or by calculating the whole camera path towards this position. In the first case, the transition from one view to the next is very abrupt; therefore, we reserve it for the transition from one scenario to another. In this work, we focus on the second mode, computing the whole camera path and orientation.

To indicate the target location, users click on it. If the target location is reachable from the avatar's position, i.e. if it is at a smaller distance than the estimated length of the avatar's arm, the system interprets the user click as a request for interaction (to open, pick, put or transform),


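The free-navigation camera model (pitch clamped to ±50°, no roll, movement in a floor-parallel plane along the projected view direction, toggled by the space bar) can be sketched as a per-frame update. The function signature and all names are illustrative; the constant-speed ramp at start and stop is omitted for brevity.

```python
import math

PITCH_LIMIT = math.radians(50)  # parameterized pitch range from the text

def update_camera(pos, yaw, pitch, d_yaw, d_pitch, moving, speed=1.5, dt=0.016):
    """One frame of the free-navigation model: mouse deltas update yaw and
    (clamped) pitch; if movement is active, the camera advances in the
    floor-parallel plane along the projected viewing direction."""
    yaw += d_yaw
    pitch = max(-PITCH_LIMIT, min(PITCH_LIMIT, pitch + d_pitch))  # no full turns
    x, y, z = pos
    if moving:  # started/stopped with the space bar
        # projecting the viewing vector onto the floor plane discards pitch
        x += math.cos(yaw) * speed * dt
        z += math.sin(yaw) * speed * dt   # height y is fixed: no jump/crouch
    return (x, y, z), yaw, pitch
```

Because the displacement always follows the projected viewing direction, backward motion is impossible by construction, exactly as the model prescribes.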

**4.1 Free navigation**

**4.2 Assisted navigation**


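The decision that separates interaction from navigation in the assisted mode, a click within the avatar's estimated arm reach is an interaction request, a farther click a navigation target, can be sketched as a single distance test. The function name and the default reach of 0.7 m are illustrative assumptions.

```python
import math

def interpret_click(avatar_pos, target_pos, arm_length=0.7):
    """Classify a user click: an interaction request if the clicked
    location is within the avatar's estimated arm reach, otherwise a
    navigation request towards that location."""
    distance = math.dist(avatar_pos, target_pos)
    return "interact" if distance <= arm_length else "navigate"
```

In the navigation case, the system would then compute the camera path and orientation towards the clicked location and drive the avatar there automatically.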

Fig. 4. Different strategies to give feedback on the dragged object: left column, transparent feedback; right column, opaque feedback. First row: the cursor is substituted by the object; second row: the cursor is centered on the object; third row: the object is at the bottom right of the cursor.

Cognitive assistance can also be provided through game-master actions. The game-master is a component of the game logic that simulates the intervention of an external observer. It helps the patient by emitting instructions and feedback messages, removing objects from the scenario to simplify it, demonstrating the required action, or performing it automatically, partially or totally. This way, if the focus of the task is on picking objects and not on placing them anywhere else, the picking action can be implemented as a pick-and-place: users pick the objects and the game-master places them.
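The pick-and-place delegation described above might be sketched as follows; the `GameMaster` class, its goal-location table and the placement dictionary are all illustrative, not the chapter's actual implementation.

```python
class GameMaster:
    """Completes the placement step on the patient's behalf: the user
    picks an object and the game-master places it at its goal location."""

    def __init__(self, goal_locations):
        self.goal_locations = goal_locations   # object name -> goal surface

    def on_user_pick(self, obj_name, placements):
        """Turn a simple pick into a pick-and-place and return a feedback
        message, as an external therapist-observer would."""
        dest = self.goal_locations[obj_name]
        placements[obj_name] = dest            # game-master does the placing
        return f"'{obj_name}' placed on '{dest}'"
```

The same component could host the other interventions the text lists, such as emitting instructions, removing distractor objects, or demonstrating an action, each as an additional method driven by the game logic.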
