Fig. 4. Different strategies to give feedback on the dragged object: left column, transparent feedback; right column, opaque feedback. First row, the cursor is substituted by the object; second row, the cursor is centered on the object; third row, the object is at the bottom right of the cursor.

Fig. 5. Two different strategies of highlighting the space that a dragged object would occupy on a surface: at right, the 3D bounding box; at left, the projected area.

Cognitive assistance can also be provided through game-master actions. The game-master is a component of the game logic that simulates the intervention of an external observer. It helps the patient by emitting instructions and feedback messages, removing objects from the scenario to simplify it, demonstrating the required action, or performing it automatically, partially or totally. This way, if the focus of the task is put on picking objects and not on placing them anywhere else, the picking action can be implemented as a pick-and-place: users pick and the game-master places them.

#### **4. Navigation methods**

We propose four different modes of navigation: free navigation, two user-assisted navigation modes and a fully automatic navigation.

#### **4.1 Free navigation**

Free navigation follows the classical first-person, four-degrees-of-freedom model. Users control the orientation of the viewing vector through the yaw and pitch angles. As usual in computer games, rolling is not allowed. The pitch angle is restricted to a parameterized range, from −50 to 50 degrees, that allows looking at the floor and ceiling but forbids complete turns. In addition, users control the camera position, which moves in a plane parallel to the floor at a fixed height. Jumping and crouching are not allowed. The movement follows the direction of the projection of the viewing vector onto that plane; therefore, it is not possible to move backwards. Users can stop and restart the camera movement at will. The movement has constant speed, except for a short acceleration at its beginning and a deceleration at its end. Camera control is done with the mouse, which specifies the viewing vector and the path orientation, and the space bar, which starts and stops the motion. This scheme has the advantage of requiring only two types of input (mouse movement and the space bar), which is suitable for patients with neuropsychological impairments.
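This control model can be summarized in a few lines of code. The sketch below is ours, not taken from the system's implementation; the class, its parameter names and the default speed are illustrative, and the short acceleration/deceleration ramp is omitted for brevity.

```python
import math

PITCH_MIN, PITCH_MAX = math.radians(-50), math.radians(50)

class FreeCamera:
    """Four-DOF first-person camera: yaw, pitch, and motion in a plane
    parallel to the floor at a fixed height (no roll, no jumping)."""

    def __init__(self, x, y, height, speed=1.5):
        self.x, self.y, self.z = x, y, height
        self.yaw, self.pitch = 0.0, 0.0
        self.speed = speed       # cruise speed, in scene units per second
        self.moving = False      # toggled with the space bar

    def orient(self, d_yaw, d_pitch):
        # Mouse input: yaw turns freely; pitch is clamped so users can
        # look at the floor and ceiling but cannot make complete turns.
        self.yaw = (self.yaw + d_yaw) % (2.0 * math.pi)
        self.pitch = max(PITCH_MIN, min(PITCH_MAX, self.pitch + d_pitch))

    def toggle_motion(self):     # bound to the space bar
        self.moving = not self.moving

    def update(self, dt):
        if self.moving:
            # Advance along the projection of the viewing vector onto
            # the floor plane, so the camera height never changes.
            self.x += math.cos(self.yaw) * self.speed * dt
            self.y += math.sin(self.yaw) * self.speed * dt
```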

#### **4.2 Assisted navigation**

The aim of the assisted navigation mode is to provide means for users to indicate where they want to go, and then automatically drive them to this location. This way, the focus is put on the destination and not on the path towards it. Therefore, navigation is decoupled from interaction.

This assistance can be implemented in two ways: by computing only the final camera position, or by computing the whole camera path towards that position. In the first case, the transition from one view to the next is very abrupt; therefore, we reserve it for the transition from one scenario to another. In this work, we focus on the second mode: we compute the whole camera path and its orientation.

To indicate the target location, users click onto it. If the target location is reachable from the avatar's position, i.e. if it is at a smaller distance than the avatar's estimated arm length, the system interprets the click as a request for interaction (to open, pick, put or transform) and carries it out according to the task logic. However, if the object is not reachable, the system interprets that the user wants to go towards it; it computes the corresponding path and follows it automatically. To give users more control over the navigation, the system allows them to stop and restart it at any time. Observe that the target location may not be the object that the user needs to manipulate directly, but a container in which the object is hidden. For instance, if the goal of the task is to take a chicken out of the oven, the target is the oven's door, which must be opened first. Depending on the current difficulty level of the task, users will have more or less precise instructions about their goal. This is precisely one of the objectives of the rehabilitation.


The main difficulty with this strategy is that clicking onto the target requires having it in the view frustum and placing the cursor onto it. However, both things require a previous navigation or camera-orientation process. We distinguish two cases: (i) the target can be seen without modifying the camera position, only its orientation, and (ii) the camera position must be modified. In the former case, we propose two strategies: *free camera orientation* and *restricted camera orientation*. The *free camera orientation* mode has two degrees of freedom: yaw and pitch. The camera position is fixed, and users move the viewing direction until the target is in the center of the view frustum. The *restricted camera orientation* mode has one degree of freedom: the system performs an automatic rotation of the yaw angle, and users only modify the pitch angle. To select the desired orientation, users stop the rotation with a mouse click.

When the target location is invisible from the current camera position, users indicate the movement in steps: they give a first path direction, stop the movement to reorient the camera, and click again to specify a new direction. In this case, although navigation control is removed, way-finding cannot be eliminated, and therapists must be aware of that when designing their tasks. To mitigate this problem, we design the scenarios avoiding the presence of occluders inside rooms. When the scenario is composed of several rooms, we avoid corridors, connect the rooms directly through doors, and put the name of each room on its door. This way, to indicate the direction to another room, users click onto the corresponding door.

Given the user's target, the system is able to compute the best position from which to reach it, the best path to arrive there, and the best camera orientation. When the user makes a mouse click, the system detects the first object that intersects the view vector (the hit-point). Figure 6 shows an example where the hit-point is a microwave. The object is reachable if the distance between its position and the avatar is smaller than a fixed value. In that case, the interaction with the object is performed. When the distance is greater, the system interprets that the user wants to perform the interaction, and it moves the avatar to a position that allows interacting with the object.
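The click-routing rule reduces to a small decision function. The following is a hypothetical sketch of ours: `ARM_REACH`, the callbacks and all names are illustrative, not the system's actual interface.

```python
import math

ARM_REACH = 1.0  # estimated avatar arm length (illustrative value)

def handle_click(avatar_pos, hit_point, hit_obj, interact, navigate_to):
    """Route a mouse click: direct interaction if the hit-point lies
    within arm's reach, assisted navigation towards it otherwise.
    `interact` and `navigate_to` are callbacks supplied by the game logic."""
    if math.dist(avatar_pos, hit_point) <= ARM_REACH:
        interact(hit_obj)      # open, pick, put or transform
    else:
        navigate_to(hit_obj)   # best cell + path computation (see Fig. 7)

# Example: the microwave of Figure 6 is hit 5 units away, so the
# system walks the avatar towards it instead of interacting.
handle_click((0, 0), (3, 4), "microwave",
             interact=lambda o: print("interact with", o),
             navigate_to=lambda o: print("walk towards", o))
```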

Fig. 6. VE example that shows the avatar's position and the first intersection of the view vector with the scene. In this case the hit-point is the microwave, and it is not reachable for interaction from the current position.

The system uses the object's position to determine the best avatar location from which to reach it. This location lies on the grid of the VE's floor. Figure 7 shows a grid example for a kitchen. The cells of the floor are classified as occupied (red), unreachable (orange) or reachable (green). The unreachable cells are free cells where the avatar cannot go because it would collide with other elements of the VE; thus, the avatar is only allowed to stand on reachable cells. The system uses the grid to determine which of the reachable cells is the best one to interact with the target object. The naive strategy consists of choosing the cell closest to the target, but it does not take possible occlusions into account. Therefore, we choose the closest cell from which the target is visible. We compute these cells in a pre-process, casting rays from the surface cells to virtual camera positions centered at the floor grid cells. Our scenario model stores the floor cells associated with each surface cell. Then, when the system needs to determine the best destination for a target, it selects the closest cell belonging to the set of the target's surface cell. Since objects can change position during the task, it is possible that all the cells become occupied and the avatar cannot reach the target. In those cases, the system's logic is responsible for asking the user to move some objects so that the target becomes reachable.
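A sketch of the cell classification and the destination query, under our own assumptions (the floor grid as a dictionary from cell coordinates to states, and the pre-processed visibility sets; every name here is illustrative):

```python
import math
from enum import Enum

class Cell(Enum):
    OCCUPIED = "red"        # an object stands on the cell
    UNREACHABLE = "orange"  # free, but the avatar would collide there
    REACHABLE = "green"     # the avatar may stand here

def best_destination(floor_grid, visible_from, target):
    """Closest reachable floor cell from which `target` is visible.

    `floor_grid` maps (row, col) -> Cell; `visible_from[target]` is the
    set of floor cells computed in the ray-casting pre-process."""
    candidates = [c for c in visible_from.get(target, ())
                  if floor_grid.get(c) is Cell.REACHABLE]
    if not candidates:
        return None  # everything blocked: ask the user to move objects
    return min(candidates, key=lambda c: math.dist(c, target))
```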

Fig. 7. Example of the navigation strategy. The system uses the floor's grid to determine the best path to reach the hit-point. The destination cell is drawn in white, and the cells are classified as occupied (red), unreachable (orange) and reachable (green).

Once the system has the avatar's current position and the destination, it computes a path that lets the avatar move inside the VE without colliding with any object. We use an implementation of the A* path-finding method that minimizes the Euclidean distance and uses the floor's grid to compute a discrete path. After this process, the system fits a Bézier path that follows the discrete one. This new path allows the system to perform smoother movements and to keep a constant speed.
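Below is a minimal sketch of both steps, assuming a 4-connected grid with unit step cost and a truthy-when-reachable lookup; the Euclidean heuristic is admissible under these assumptions. Keeping a truly constant speed along the Bézier segments would additionally require arc-length reparameterization, which we omit.

```python
import heapq
import math

def a_star(reachable, start, goal):
    """A* over the floor grid; `reachable[cell]` is truthy for cells the
    avatar may stand on. Returns the discrete path as a list of cells."""
    frontier = [(0.0, start)]
    came_from, g = {start: None}, {start: 0.0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not reachable.get(nxt):
                continue
            cost = g[cur] + 1.0
            if cost < g.get(nxt, math.inf):
                g[nxt], came_from[nxt] = cost, cur
                # Euclidean heuristic: never overestimates on this grid.
                heapq.heappush(frontier, (cost + math.dist(nxt, goal), nxt))
    return None  # no path: the destination is not connected to `start`

def bezier(p0, p1, p2, t):
    """Point on a quadratic Bézier segment; chaining such segments over
    consecutive path cells smooths the corners of the discrete path."""
    u = 1.0 - t
    return tuple(u * u * a + 2 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))
```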

#### **4.3 Automatic navigation**

The automatic navigation method removes camera control from users. The system computes the target destination according to the task logic and puts the camera in front of the next object with which users must interact. For instance, if users are asked to pick a tomato, the application places the camera in front of one.

With this mode, the system intervenes in the development of the task: it takes the decisions on where to go, and therefore eases the task. It is one of the intervention strategies that therapists can design to help their patients.

#### **4.4 Implementation**

In order to manage the described alternatives, the system needs to know the position of all interactive objects. The position of static objects is part of the scene model. Objects that can be moved can be on top of or inside other objects. We use a system of grids that allows us to track the exact position of every object at any time during play.
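One way to realize this bookkeeping is a grid per supporting surface (the floor, shelves, the inside of containers), plus a registry from each object to its current surface and cell. This is our illustrative sketch, not the system's actual data structure:

```python
class SurfaceGrid:
    """Occupancy grid of one supporting surface (a shelf, a counter,
    the inside of the fridge, and so on)."""

    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        self.cells = {}  # (row, col) -> object id

    def place(self, obj_id, cell):
        r, c = cell
        if not (0 <= r < self.rows and 0 <= c < self.cols):
            raise ValueError("cell out of range")
        if cell in self.cells:
            raise ValueError("cell already occupied")
        self.cells[cell] = obj_id

    def remove(self, cell):
        return self.cells.pop(cell, None)

class ObjectTracker:
    """Registry answering 'where is object X now?' in one lookup."""

    def __init__(self):
        self.location = {}  # object id -> (surface grid, cell)

    def move(self, obj_id, surface, cell):
        # Free the old cell first, then occupy the new one, so moving
        # an object within the same surface also works.
        if obj_id in self.location:
            old_surface, old_cell = self.location[obj_id]
            old_surface.remove(old_cell)
        surface.place(obj_id, cell)
        self.location[obj_id] = (surface, cell)
```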


#### **5. Results and discussion**

In order to test the suitability of the proposed technological assistance strategies, we created a set of specific tasks in a virtual scenario representing a kitchen. We asked 30 volunteers without cognitive impairments to perform these tasks, recorded their results, and asked them to fill in a questionnaire about different aspects of these assistance strategies. The users' profiles can be categorized according to three criteria: gender, age (below and over 30), and whether or not they play 3D games. The groups were approximately the same size, except for women over 30 who play games and men below 30 who do not play 3D games, which were smaller (only 1 and 2 users, respectively).


#### **5.1 Object manipulation tests**


To test the strategies that help picking, we put the camera in front of an open virtual fridge full of objects. The camera movement was inhibited: users could only rotate the camera, not change its position. All the objects were accessible from the camera position. The task consisted of clicking onto objects to remove them. Figure 8 shows an example of this task. We measured the number of objects that users could remove during a fixed time interval. Users answered questions about their preferred mechanism according to three criteria: visibility, precision and visual appearance. We tested the different cursors as well as the object-highlighting mechanisms. The results showed that the cursor that allowed the largest number of selections was the spy-hole. However, users preferred the pointing finger because they found it more meaningful. Their second preference was the arrow. In addition, users preferred the cursor animation that indicates the nature of the interaction that objects support, for instance, a rotation of the hand on doors, and opening and closing the hand for grasping objects.

Fig. 8. A task example to test the suitability of the different types of cursors.

To test object grabbing, we set a task in which users had to move objects from the fridge to the kitchen marble at one side. The marble was also reachable, so it was not necessary to move the camera, only to rotate it. We measured the number of objects that users were able to move during a fixed time interval. Users answered questions about visibility, precision and visual appearance. The results showed that the most effective mechanism was to show the grasped object at a side of the cursor. Surprisingly, the opaque mode was preferred to the semi-transparent one, because it was found more natural.


Fig. 9. The three cursor modes: at left, an arrow on unreachable objects; in the middle, an animation of the hand on reachable objects; and, for dragging, the arrow and the held object conveniently scaled (here, a knife).


Taking these results into account, we finally chose three cursor modes (see Figure 9): when the object under the cursor is not reachable, we use an arrow; when an object is being held, we use the arrow together with the scaled held object; and when the object is reachable, we use the animated hand.
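The selection logic is tiny; this sketch names the three modes of Figure 9 (the identifiers are ours):

```python
from enum import Enum, auto

class CursorMode(Enum):
    ARROW = auto()              # object under the cursor not reachable
    ANIMATED_HAND = auto()      # reachable: the animation hints the action
    ARROW_WITH_OBJECT = auto()  # dragging: arrow plus scaled held object

def cursor_mode(obj_reachable: bool, holding_object: bool) -> CursorMode:
    if holding_object:
        return CursorMode.ARROW_WITH_OBJECT
    return CursorMode.ANIMATED_HAND if obj_reachable else CursorMode.ARROW
```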

Finally, to test the feedback mechanisms that help putting objects down, we used a task consisting of placing objects on the kitchen marble. Users did not need to pick them: as soon as they put one down, another was automatically grasped. We measured the number of objects that users were able to leave on the marble, and users answered a questionnaire about precision and comfort. The preferred mechanism was to color the free 2D space nearest to the cursor.

#### **5.2 Navigation methods test**

To test the navigation methods, we proposed a task consisting of touching two objects strategically placed in the scenario (see Figure 10): an orange (dashed in blue) and a dish (dashed in pink). The task is segmented into three stages. The first stage is to reach the orange; here it is necessary to avoid the table, an obstacle that does not block the view of the target object. The next stage is to go to the cabinet's door, which consists of walking in a straight line near the marble without obstacles. The last stage is to open the cabinet door and walk around to reach the dish, an object placed inside a container; thus, the cabinet door is an obstacle that does block the view of the target object. Figure 10 represents the paths performed by the users in free navigation mode using a temperature color encoding, with a blue-to-red scale: the most transited places have the highest temperature. In other words, places where users stayed for a long time are colored in red.

Clearly, the areas containing obstacles (near the table in the first stage and near the door in the third stage) are critical for users. The area corresponding to the second stage is also colored in red because, when there are no obstacles, all users choose the same option: to walk in a straight line near the marble. The black line represents the automatic path computed by the system. Our method keeps a constant speed along the path and avoids the obstacles automatically, without errors. Note that some users chose the other side of the table, which is the longer way. These users had, in general, more difficulties navigating, as can be seen from their sparser and more erratic paths.


Fig. 10. Results of the paths used by the users to perform the task.


The temperature diagram in Figure 11 represents the head deviation in the pitch angle in free navigation mode. The figure shows that users tend to look down to see where they are going.

Users ranked the navigation methods by easiness in the following order: automatic, assisted, assisted rotation and free. As expected, all users (100%) found the automatic method easy or very easy. The assisted mode was also considered easy or very easy by 95% of users. The assisted rotation mode was less valued (only 80% scored it easy or very easy): users reported that not being able to rotate the camera was disturbing, and the sensation was the same in all groups. The free navigation mode was found easy or very easy by 67% of users overall, but only by 20% of non-3D-gamers (who were, however, 2D-gamers). We conclude, as expected, that free navigation is difficult for non-gamers. Concerning the preferred navigation mode, all groups of users chose the assisted mode, including 60% of game players.


Fig. 11. Results of the camera movements performed by the users while carrying out the task.

Regarding the quality of the paths, 90% of users rated them good or very good. About 75% of users described the camera orientation during movement as good or very good, whereas 25% found it average or bad. The bad scores came from users who reported being disturbed by the fact that the camera did not look at the target while it was avoiding an obstacle; they affirmed that the target object should always be in view. For this reason, we modified the system to let users control the camera rotation, if they wish, during the automatic camera movement. In this way, the system is more flexible and satisfies passive users as well as more active ones.
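A minimal sketch of this modification, under our own assumption that mouse input contributes a yaw offset on top of the path-following orientation (the names and the decay behaviour are illustrative):

```python
class PathCamera:
    """Camera that looks along the automatic path by default, but lets
    the user add a yaw offset with the mouse during the movement."""

    def __init__(self):
        self.user_offset = 0.0

    def on_mouse(self, d_yaw):
        self.user_offset += d_yaw  # active users rotate freely

    def yaw(self, path_yaw, decay=0.95):
        # Without input, drift smoothly back towards the path direction,
        # so passive users keep the default look-along-the-path behaviour.
        self.user_offset *= decay
        return path_yaw + self.user_offset
```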

#### **6. Conclusions**

Serious games in 3D virtual environments can contribute greatly to rehabilitation. However, to be usable by patients with cognitive impairments, many technological barriers must be removed, barriers that are in fact disturbing for all kinds of users. In particular, it is important to separate object manipulation from navigation. In this paper, we have proposed and analyzed several strategies to ease manipulation in 3D. We have designed visual mechanisms that enhance the perception of objects, with the aim of helping users pick, drag and put down objects. In addition, we have proposed a mechanism for semi-automatic navigation inside VEs, whose goal is to let users move in the VE without the technological difficulties of controlling a virtual camera. We have tested these strategies with volunteer users and found them effective and well accepted. Our next step is to use them with patients.



**8. Vision for Motor Performance in Virtual Environments Across the Lifespan**

Patrick Grabowski and Andrea Mason

*University of Wisconsin, Madison, USA*

**1. Introduction**

Current trends in neuroscience research are heavily focused on new technologies to study and interact with the human brain. Specifically, three-dimensional (3D) virtual environment (VE) systems have been identified as technology with good potential to serve in both research and applied settings. For the purpose of this chapter, a virtual environment is defined as a computer with displays and controls configured to immerse the operator in a predominantly graphical environment containing 3D objects in 3D space. The operator can manipulate virtually displayed objects in real time using a variety of motor output channels or input devices. The use of VEs has almost exclusively been limited to experimental processes, utilizing cumbersome equipment well suited for the laboratory but unrealistic for use in everyday applications. As the evolution of computer technology continues, the possibility of creating an affordable system capable of producing a high-quality 3D virtual experience for home or office applications comes nearer to fruition. However, in order to improve the success and the cost-to-benefit ratio of such a system, more precise information regarding the use of VEs by a broad population of users is needed. The goal of this chapter is to review knowledge relating to the use of visual feedback for human performance in virtual environments, and how this changes across the lifespan. Further, we will discuss future experiments we believe will contribute to this area of research by examining the role of luminance contrast for upper extremity performance in a virtual environment.

**2. Background**

**2.1 Changes to the human sensorimotor system across the lifespan**

The following sections identify the well-known physiologic changes that occur in the sensorimotor system as part of the natural human aging process. Further, we discuss some of the limited work that has been done to understand the implications of these changes for the design of VEs.

The human body is a constantly changing entity throughout the lifespan. Most physiologic processes begin to decline at a rate of 1% per year beginning around age 30, and the sensorimotor system is no exception (Schut, 1998). There is a general indication from the research that both the processing of afferent information and the production of efferent
