most problematic. Difficulty with cursor control is also named as a top complaint among older individuals (Hawthorn, 2001; Hawthorn, 2007). It has also been shown that performance within a standard computer interface is slower and results in a greater number of errors with increased age of the operator. These specific limitations point to the need to develop new interfaces that capitalize on natural manipulation, thereby eliminating difficulties with the functional abstraction of input devices.

In contrast to standard computer interface systems, little is known about age-related variance in HCI within three-dimensional virtual environments; the literature in this subject area is nearly non-existent. There is some evidence of age-related differences in performance between children and adults, as well as between young adults and older participants. This research indicates relevant disparities in reactions to environmental immersion, usage of various input devices, size estimation ability, navigational skills, and completion time for gross motor tasks (Allen et al., 2000). According to these authors, "these results highlight the importance of considering age differences when designing for the population at large." Currently, the International Encyclopedia of Ergonomics and Human Factors (Karwowski, 2006) limits its treatment of age-related differences in virtual environments to a short, two-sentence description recommending that equipment be tailored to physically fit the smaller frames of children, and that designers take into consideration the changes in sensory and motor functions of the elderly. Beyond these works, very little specific knowledge regarding age and motor control in virtual environments has been elicited through research, especially as it relates to precision movements with the upper extremities. This fact has led us to begin a series of experiments investigating the use of vision for precise sensorimotor control of the upper extremity in virtual environments, and how that usage changes as a function of age.

**3. Research methods, design, and results**

In the next sections, we describe the specific methodology used in our lab, followed by a brief review of the most recent findings.

#### **3.1 Physical apparatus**

For our research, we utilize a tabletop virtual environment located in the Human Motor Behavior laboratory at the University of Wisconsin-Madison (Figure 1). This system has been used in a number of studies investigating the role of visual feedback for upper extremity movement in young adults, as well as in the first phase of data collection on subjects across the lifespan. This single-user VE is specifically designed to permit detailed and highly accurate kinematic measurements of human performance. Paradigms from the human motor control and biomechanics disciplines are used to provide detailed descriptions of human movement and to make inferences about the cognitive processes controlling those movements. More recently, our research has focused on how these processes change throughout the lifespan. Our virtual environment has been designed to focus on natural manipulation, allowing users to employ their hands to manipulate and explore augmented objects located within the desktop environment (i.e. a tangible user interface, or TUI) (MacLean, 1996; Mason & MacKenzie, 2002; Mason et al., 2001; Sturman & Zeltzer, 1993; Y. Wang & MacKenzie, 2000).

Fig. 1. Wisconsin virtual environment (WiscVE). Panel A shows the apparatus, with the downward facing monitor projecting to the mirror. Images are reflected up to the user, who wears stereoscopic LCD shutter goggles, so that the images appear at the level of the actual work surface below. Panels B and C demonstrate a reach-to-grasp task commonly utilized in this environment. The hand and physical cube are instrumented with light emitting diodes (LEDs) that are tracked by the VisualEyez (PTI Phoenix, Inc.) system, not shown.
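The data flow of the apparatus in Figure 1 — marker capture, broadcast to a rendering machine, and a mirrored display of simple fingertip graphics — can be illustrated with a minimal sketch. All function and variable names below are invented for illustration; this is not the laboratory's actual software.

```python
# Hypothetical sketch of one cycle of a capture -> broadcast -> render
# pipeline like the one in Figure 1 (all names invented for illustration).

def capture_markers():
    """Stand-in for the motion tracker: returns LED positions in mm."""
    return {"thumb": (10.0, 0.0, 0.0), "index": (10.0, 60.0, 0.0),
            "wrist": (0.0, 30.0, -50.0)}

def broadcast(markers):
    """Stand-in for sending marker data over the subnetwork to the
    scene-rendering PC; here it simply passes the data through."""
    return dict(markers)

def render_scene(markers):
    """Represent the tracked fingertips as simple spheres in the scene.

    The real display reflects the monitor image in a half-silvered
    mirror so the graphics appear at the level of the work surface."""
    return {name + "_sphere": pos for name, pos in markers.items()
            if name in ("thumb", "index")}

scene = render_scene(broadcast(capture_markers()))
print(sorted(scene))  # ['index_sphere', 'thumb_sphere']
```

In a real implementation each stage would run continuously at the tracker's sampling rate; the sketch only shows how fingertip markers map to the crude sphere representation used in the experiments below.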

This type of interface gives investigators complete control over the three-dimensional visual scene (important for generalizability to natural environments), and makes maximal use of the naturalness, dexterity and adaptability of the human hand for the control of computer-mediated tasks (Sturman & Zeltzer, 1993). The use of such a tangible user interface removes many of the implicit difficulties encountered with standard computer input devices due to natural aging processes (Smith et al., 1999). The exploitation of these abilities in computer-generated environments is believed to lead to better overall performance and increased richness of interaction for a variety of applications (Hendrix & Barfield, 1996; Ishii & Ullmer, 1997; Slater, Usoh, & Steed, 1995). Furthermore, this type of direct-manipulation environment capitalizes on the user's pre-existing abilities and expectations, as the human hand provides the most familiar means of interacting with one's environment (Schmidt & Lee, 1999; Shneiderman, 1983). Such an environment is suitable for applications in simulation, gaming/entertainment, training, visualization of complex data structures, rehabilitation and learning (measurement and presentation of data regarding movement disorders). This allows for ease of translation of our data to marketable applications.

The VE provides a head-coupled, stereoscopic experience to a single user, allowing the user to grasp and manipulate augmented objects. The system is configured as follows (Figure 1):

- 3-D motion information (e.g. movement of the subject's hand, head, and physical objects within the environment) is monitored by a VisualEyez (PTI Phoenix, Inc.) motion analysis system connected to a Windows PC workstation. The VisualEyez system monitors the 3-D positions of small infrared light emitting diodes (LEDs) located on landmarks of interest. We typically utilize the tips of the thumb and index finger, along with the radial styloid at the wrist, to demarcate the hand. Objects in the environment are also equipped with three LEDs for motion tracking.
- Once the motion information is obtained by the VisualEyez system, it is broadcast on a subnetwork to a scene-rendering Linux-based PC.
- Using the motion capture information, the scene is calculated and then rendered on a downward facing CRT monitor placed parallel to a work surface. A half-silvered mirror is placed parallel to the computer monitor, midway between the screen and the workspace. The image on the computer monitor is reflected in the mirror and is perceived by subjects, wearing stereoscopic goggles, as if it were a three-dimensional object located in the workspace below the mirror.

Vision for Motor Performance in Virtual Environments Across the Lifespan 157

#### **3.2 Human performance measurement**

Human motor control, biomechanics and neuroscience research has provided a comprehensive description of how humans reach to grasp and manipulate objects in natural environments under a variety of sensory and environmental conditions (MacKenzie & Iberall, 1994). By using the same measurement techniques as those employed to monitor human performance in natural environments, we can compare movement in virtual environments to decades of existing human performance literature. These comparisons allow the development of comprehensive cognitive models of human performance under various sensory feedback conditions. Simple timing measures such as reaction time and movement time provide a general description of upper limb movements. However, in motor control studies, more complex three-dimensional kinematic measures such as displacement profiles, movement velocity, deceleration time, and the formation of the grasp aperture (the distance between the index finger and thumb for a precision pinch grip) have also been used to characterize object acquisition movements (MacKenzie & Iberall, 1994). By observing regularities in the 3D kinematic information, inferences can be made regarding how movements are planned and performed by the neuromotor control system. This detailed movement information essentially provides a window into the motor control system and allows the determination of what sensory feedback characteristics are important for movement planning and production.

#### **3.3 Preliminary experiments: Understanding vision for motor performance in virtual environments across the lifespan**

In a study investigating the role of visual feedback about the hand for the control of reach-to-grasp movements, Mason and Bernardin (2009) demonstrated that young healthy adults could utilize very simple visual feedback of their fingertips to improve motor performance when compared to a condition in which no visual feedback of self was present. The crude visual representation consisted of two 10 mm yellow spheres representing the thumb and index finger tips (see Figure 2B for an example). The visual representation of the hand was always provided with moderate contrast. Mason and Bernardin (2009) also noted that vision of the hand was not necessary throughout the movement, but only up to movement onset. If vision of the hand was completely removed, performance deteriorated. The results of this work shed light on ways to minimize the amount of visual feedback necessary for successful control of precision upper extremity movements in virtual environments. While the rendering of complex images capable of simulating the full human hand is now possible in VEs, it remains problematic in two ways. First, the motion capture and computing technology needed to render a realistic hand can result in significant increases in equipment costs. More importantly, the complexity of the rendering process generally results in significant latency problems, with time lags on the scale of 150+ ms from the movement of the real hand to its represented movement within the virtual environment (Wang & Popović, 2009). Latencies of this magnitude can have a significant negative impact on human performance (Ellis et al., 1997). Further, time delays in the range of 16-33 ms become noticeable to subjects when performing simple visual tasks in virtual reality (Mania et al., 2004). As a result of these problems, a key area of research in the development of successful, cost-effective VEs must relate to simulator validity; that is, the degree of realism the environment provides in approximating a real situation. Simulator validity has been identified as a key parameter for the effectiveness of learning in training simulations (Issenberg et al., 2005). This is extremely important in applications such as neurologic rehabilitation, where the ultimate goal is to ensure that practice in the virtual environment will carry over to function in activities of daily living. We must identify the minimal features of sensory feedback required for valid simulations so that humans can interact in ways sufficiently similar to movements in natural environments. In their initial study, Mason and Bernardin (2009) identified some sufficient visual feedback parameters for young adults. We conducted a follow-up study using a similar paradigm to see if these results generalize to older and younger user groups.<sup>1</sup>

In our follow-up study, participants were asked to reach from a designated start position to grasp and lift a target cube. We manipulated three variables. The first was age group membership: children (7-12 years), young adults (18-30 years), middle age adults (40-50 years) and senior adults (60+ years). Each of these groups included 12 participants. Second, we manipulated target distance by placing the target object at either 7.5 cm or 15 cm from the start mark. Finally, we varied visual feedback of the hand by providing the subject with one of five visual feedback conditions (Figure 2). In the no vision (NV) condition, the subject was not given any graphical feedback about the position of the hand. In the full vision crude (FVC) condition, graphical feedback about hand position (10 mm spheres at the fingertips) was provided throughout the entire reach-to-grasp movement. For the vision up to peak velocity (VPV) condition, graphical feedback about hand position was extinguished once peak velocity of the wrist was reached. In the vision until movement onset (VMO) condition, graphical feedback of the hand was extinguished at the start of movement. For these conditions, subjects were prevented from seeing the real workspace below the mirror, so that vision of the actual limb and surrounding environment was absent. For the final condition (full vision or FV), subjects were given full vision of the real hand, as in a natural environment. Computer-rendered graphical information about the target size and location was always available. All visual feedback was presented with visual stimuli of moderate contrast in relation to the background.

<sup>1</sup> A preliminary version of these results was published elsewhere (Grabowski & Mason, 2011).
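As a concrete illustration of the measures described in Section 3.2, the sketch below computes peak velocity, percentage of movement time spent in deceleration, and peak grasp aperture from evenly sampled 3-D marker positions. It is an illustrative reconstruction under an assumed sampling rate, not the laboratory's analysis code, and the trajectory data are toy values.

```python
# Illustrative kinematic measures (assumed, not the lab's code) from
# evenly sampled 3-D positions in millimetres.
import math

RATE_HZ = 200.0      # assumed sampling rate of the motion tracker
DT = 1.0 / RATE_HZ   # seconds between samples

def speeds(positions):
    """Frame-to-frame speed (mm/s) from 3-D positions sampled at RATE_HZ."""
    return [math.dist(a, b) / DT for a, b in zip(positions, positions[1:])]

def peak_velocity(positions):
    """Maximum instantaneous speed over the reach (mm/s)."""
    return max(speeds(positions))

def percent_deceleration(positions):
    """Share of movement time spent after the peak-velocity sample."""
    v = speeds(positions)
    i_peak = v.index(max(v))
    return 100.0 * (len(v) - 1 - i_peak) / (len(v) - 1)

def peak_aperture(thumb, index):
    """Largest thumb-to-index-tip distance (mm) over the reach."""
    return max(math.dist(t, i) for t, i in zip(thumb, index))

# Toy wrist trajectory: speed rises to a peak, then falls off.
wrist = [(0, 0, 0), (5, 0, 0), (15, 0, 0), (22, 0, 0), (26, 0, 0), (28, 0, 0)]
print(peak_velocity(wrist))         # 2000.0 mm/s (10 mm per 5 ms step)
print(percent_deceleration(wrist))  # 75.0

# Toy fingertip data: aperture opens then closes; its peak is 55 mm.
thumb = [(0.0, 0.0, 0.0)] * 3
index = [(0.0, 30.0, 0.0), (0.0, 55.0, 0.0), (0.0, 40.0, 0.0)]
print(peak_aperture(thumb, index))  # 55.0
```

An early peak-velocity sample yields a high percentage of time in deceleration, the signature of extended closed-loop feedback processing discussed in the results below.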

Vision for Motor Performance in Virtual Environments Across the Lifespan 159

Fig. 3. Effect of visual condition manipulations on average movement times (error bars show ±SE) and mean age ±SE is shown in parentheses: A) Children F4,44=10.09, p<0.01 B) Young adult F4,44=7.24, p<0.01 C) Middle age adults F4,44= 4.23, p<0.01 D) Senior adult F4,44= 1.86, p=0.19. In young adults, FV was significantly faster than VMO and NV, but not different from the other simple feedback conditions (FVC and VPV). Also of note is the lack of any discernable condition effect in the group of older adults. Adapted from (Grabowski & Mason, 2011).

Fig. 4. Peak velocity results for children, F4,44=5.32, p<0.01. All conditions show a slowing

compared to the natural viewing in the FV condition.

Fig. 2. Visual conditions. A) No vision-NV B) Vision to movement onset-VMO C) Vision to peak velocity – VPV D) Full vision crude-FVC E) Full vision-FV.

*Results - Transport Component:* Movement time results for young adults showed that crude feedback of the hand (both FVC and VPV) resulted in performance that was not different from performance under natural viewing conditions (FV) (Figure 3B). Conditions where this feedback was provided only to movement onset or not at all (VMO and NV) showed distinct performance deterioration. This pattern of results has been replicated several times in our lab (Mason, 2007; Mason & Bernardin, 2008; Mason & Bernardin, 2009). Older adults did not show any differences between visual conditions, indicating that they used a transport strategy that was independent of visual feedback of self (Figure 3D). While this strategy was effective for performance of the current experimental task, it could result in significant limitations with more complex and continuous tasks. For middle age adults and children (Figure 3C and 3A respectively), results indicated that they make use of full visual feedback of their moving limb to improve performance, but use of any crude feedback failed to provide significant performance enhancements.

The peak velocity of the transport varied with visual condition for all age groups, but children showed the most distinct effect (Figure 4). All conditions with altered feedback (FVC, VPV, VMO, and NV) had significantly lower peak velocities when compared to the natural viewing conditions (FV). Peak-velocity is determined by feed-forward motor planning mechanisms. Therefore, since slower movements are more accurate, it appears that children used a pre-planned strategy of slowing their reach to enhance the accuracy of their transport when they were only provided with crude visual feedback or no visual feedback.

Finally the results from the limb deceleration data, which give an indication of the portion of movement allotted for closed-loop sensory feedback processing, shed light on the same phenomenon mentioned previously in older adults: this age group did not alter their movement patterns based on the visual feedback conditions provided. This finding is

Fig. 2. Visual conditions. A) No vision-NV B) Vision to movement onset-VMO C) Vision to

*Results - Transport Component:* Movement time results for young adults showed that crude feedback of the hand (both FVC and VPV) resulted in performance that was not different from performance under natural viewing conditions (FV) (Figure 3B). Conditions where this feedback was provided only to movement onset or not at all (VMO and NV) showed distinct performance deterioration. This pattern of results has been replicated several times in our lab (Mason, 2007; Mason & Bernardin, 2008; Mason & Bernardin, 2009). Older adults did not show any differences between visual conditions, indicating that they used a transport strategy that was independent of visual feedback of self (Figure 3D). While this strategy was effective for performance of the current experimental task, it could result in significant limitations with more complex and continuous tasks. For middle age adults and children (Figure 3C and 3A respectively), results indicated that they make use of full visual feedback of their moving limb to improve performance, but use of any crude feedback failed to

The peak velocity of the transport varied with visual condition for all age groups, but children showed the most distinct effect (Figure 4). All conditions with altered feedback (FVC, VPV, VMO, and NV) had significantly lower peak velocities when compared to the natural viewing conditions (FV). Peak-velocity is determined by feed-forward motor planning mechanisms. Therefore, since slower movements are more accurate, it appears that children used a pre-planned strategy of slowing their reach to enhance the accuracy of their transport when they were only provided with crude visual feedback or no visual feedback. Finally the results from the limb deceleration data, which give an indication of the portion of movement allotted for closed-loop sensory feedback processing, shed light on the same phenomenon mentioned previously in older adults: this age group did not alter their movement patterns based on the visual feedback conditions provided. This finding is

peak velocity – VPV D) Full vision crude-FVC E) Full vision-FV.

provide significant performance enhancements.

Fig. 3. Effect of visual condition manipulations on average movement times (error bars show ±SE) and mean age ±SE is shown in parentheses: A) Children F4,44=10.09, p<0.01 B) Young adult F4,44=7.24, p<0.01 C) Middle age adults F4,44= 4.23, p<0.01 D) Senior adult F4,44= 1.86, p=0.19. In young adults, FV was significantly faster than VMO and NV, but not different from the other simple feedback conditions (FVC and VPV). Also of note is the lack of any discernable condition effect in the group of older adults. Adapted from (Grabowski & Mason, 2011).

Fig. 4. Peak velocity results for children, F4,44=5.32, p<0.01. All conditions show a slowing compared to the natural viewing in the FV condition.

Vision for Motor Performance in Virtual Environments Across the Lifespan 161

representation of the fingertips provided greater luminance contrast. All other conditions either had a low contrast representation (FV) or no representation at all (VPV, VMO, NV). Therefore, the quality of the visual representation, specifically the luminance contrast, may

Fig. 6. Effect of visual condition on average peak grasp apertures (error bars show ±SE): A) Children F4,44=3.061,p=0.102 B) Young adults F4,44=10.011,p<0.001 C) Middle age adults F4,44=4.144 p=0.022 D) Senior adults F4,44=1.207,p= 0.320. For young and middle age adults, aperture in FVC was significantly smaller than in NV. For children and seniors FVC also had the smallest mean aperture, although this did not reach significance at the p<0.05 level. Also of note is that in young adults, FV and VPV also had significantly smaller apertures than NV. Note the general lack of visual condition effect among seniors. Adapted from

To summarize, there are a few key findings from these two studies. First, young adults were quite adept at utilizing limited visual feedback for the control of precision grasp tasks in virtual environments. In contrast, senior adults could not make use of limited visual feedback and tended to rely on a feed-forward strategy. While this strategy allowed the older adults to be successful with the experimental task, it may limit the nature of their interactions in such environments when tasks become more complex. Children and senior adults both appeared to make compensatory adjustments in their motor planning for the demands of the experimental task, however they involved different components of the movement. Children altered the transport of their hand in space by using lower peak velocities. Senior adults used a very large grasp configuration to compensate for task uncertainty. Finally, aperture results indicated that there could potentially be some enhancement in performance when augmented feedback about the hand (i.e. the crude finger representation) contrasts at least moderately with the background environment. This

(Grabowski & Mason, 2011).

serve as a means to further reduce task demands and uncertainty.

consistent with previous work on age and motor control showing that for faster movements, older adults rely on modes of control that are minimally dependent on sensory feedback (Chaput & Proteau, 1996). These results can be interpreted as a manifestation of slowed central processing of sensory information.

Fig. 5. Time spent in deceleration as a percentage of total movement time. A) Senior adult, F4,44=1.84, p=0.15 and B) Young adult , F4,44=13.70, p<0.01.

*Results – Grasp component:* To quantify formation of the grasp component we analyzed peak grasp apertures. Grasp aperture gives an indication of the precision requirements of a task, with larger apertures considered a compensatory strategy present in more demanding tasks with higher levels of uncertainty. In young adults apertures were significantly smaller in the FV, FVC, and VPV when compared to the condition where no visual feedback of the hand was provided (NV). This replicates the results found for movement time and indicates that young adults were able to use some limited visual feedback to reduce uncertainty in planning their grasp. In contrast, older adults used a markedly larger grasp aperture than the rest of the cohorts, and showed a minimal condition effect. Middle age adults did show an effect on grasp measures when provided with crude visual feedback of the hand throughout the movement (FVC). This condition resulted in apertures that were significantly smaller than in the no feedback condition (NV). No other condition resulted in smaller apertures than NV (Figure 5). Therefore, it appears that for middle age adults, the condition with crude feedback available throughout the movement simplified the sensorimotor requirements even more than when participants were provided with natural viewing conditions (FV).

consistent with previous work on age and motor control showing that for faster movements, older adults rely on modes of control that are minimally dependent on sensory feedback (Chaput & Proteau, 1996). These results can be interpreted as a manifestation of slowed central processing of sensory information.

Fig. 5. Time spent in deceleration as a percentage of total movement time: A) Senior adults, F4,44=1.84, p=0.15 and B) Young adults, F4,44=13.70, p<0.01.

*Results – Grasp component:* To quantify formation of the grasp component we analyzed peak grasp apertures. Grasp aperture gives an indication of the precision requirements of a task, with larger apertures considered a compensatory strategy in more demanding tasks with higher levels of uncertainty. In young adults, apertures were significantly smaller in the FV, FVC, and VPV conditions than in the condition where no visual feedback of the hand was provided (NV). This replicates the results found for movement time and indicates that young adults were able to use even limited visual feedback to reduce uncertainty in planning their grasp. In contrast, older adults used a markedly larger grasp aperture than the other cohorts and showed a minimal condition effect. Middle age adults did show an effect on grasp measures when provided with crude visual feedback of the hand throughout the movement (FVC): this condition resulted in apertures that were significantly smaller than in the no-feedback condition (NV). No other condition resulted in smaller apertures than NV (Figure 6). Therefore, it appears that for middle age adults, the condition with crude feedback available throughout the movement simplified the sensorimotor requirements even more than natural viewing conditions (FV).

Fig. 6. Effect of visual condition on average peak grasp apertures (error bars show ±SE): A) Children, F4,44=3.061, p=0.102; B) Young adults, F4,44=10.011, p<0.001; C) Middle age adults, F4,44=4.144, p=0.022; D) Senior adults, F4,44=1.207, p=0.320. For young and middle age adults, aperture in FVC was significantly smaller than in NV. For children and seniors, FVC also had the smallest mean aperture, although this did not reach significance at the p<0.05 level. Also of note is that in young adults, FV and VPV also produced significantly smaller apertures than NV. Note the general lack of a visual condition effect among seniors. Adapted from (Grabowski & Mason, 2011).

Further inspection of the grasp aperture results shows that, although not statistically significant, the FVC condition resulted in the smallest average grasp apertures for all four age groups. These results provide preliminary evidence that luminance contrast may be an important variable for reducing movement complexity when grasping objects in a virtual environment. In our experiment, due to room lighting, the luminance contrast between the real limb and the background was low in the FV condition. Therefore, aperture planning in this condition may have been more difficult than in the FVC condition, where the graphical representation of the fingertips provided greater luminance contrast. All other conditions either had a low-contrast representation (FV) or no representation at all (VPV, VMO, NV). Therefore, the quality of the visual representation, specifically its luminance contrast, may serve as a means to further reduce task demands and uncertainty.

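The kinematic measures discussed above, peak grasp aperture and time spent in deceleration, can be computed directly from sampled marker trajectories. The sketch below is illustrative only: the function names, sampling rate, and synthetic minimum-jerk-style trial data are our assumptions, not the laboratory's actual analysis code.

```python
import numpy as np

def peak_grasp_aperture(thumb_xyz, index_xyz):
    # Aperture at each sample: 3D distance between thumb and index
    # fingertip markers; peak aperture is the trial maximum.
    apertures = np.linalg.norm(thumb_xyz - index_xyz, axis=1)
    return float(apertures.max())

def percent_time_in_deceleration(wrist_xyz, dt):
    # Tangential wrist speed from frame-to-frame displacement.
    speed = np.linalg.norm(np.diff(wrist_xyz, axis=0), axis=1) / dt
    peak = int(np.argmax(speed))
    # Deceleration phase: from peak speed to movement end,
    # expressed as a percentage of total movement time.
    return 100.0 * (len(speed) - 1 - peak) / (len(speed) - 1)

# Synthetic reach-to-grasp trial: 101 samples over 1 s.
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
# Wrist follows a smooth 0.3 m reach (smoothstep position profile).
wrist = np.column_stack([0.3 * t**2 * (3 - 2 * t),
                         np.zeros_like(t), np.zeros_like(t)])
# Aperture opens to 8 cm mid-reach, then closes on the object.
gap = 0.02 + 0.06 * np.sin(np.pi * t)
thumb = wrist
index = wrist + np.column_stack([gap, np.zeros_like(t), np.zeros_like(t)])

print(peak_grasp_aperture(thumb, index))          # ~0.08 m
print(percent_time_in_deceleration(wrist, dt))    # ~50 (symmetric speed profile)
```

A less symmetric, more realistic speed profile would shift the deceleration percentage well above 50%, which is the pattern typically reported for precision grasping.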
To summarize, there are a few key findings from these two studies. First, young adults were quite adept at utilizing limited visual feedback for the control of precision grasp tasks in virtual environments. In contrast, senior adults could not make use of limited visual feedback and tended to rely on a feed-forward strategy. While this strategy allowed the older adults to be successful with the experimental task, it may limit the nature of their interactions in such environments when tasks become more complex. Children and senior adults both appeared to make compensatory adjustments in their motor planning for the demands of the experimental task; however, these adjustments involved different components of the movement. Children altered the transport of their hand in space by using lower peak velocities. Senior adults used a very large grasp configuration to compensate for task uncertainty. Finally, the aperture results indicated that there could be some enhancement in performance when augmented feedback about the hand (i.e. the crude finger representation) contrasts at least moderately with the background environment. This was most pronounced in middle age adults, but weakly present in all age groups. While there are many possible directions for future research, the possibility of performance enhancement through the manipulation of luminance contrast is one particular area of interest.

Properties of visual feedback are used both in the planning and online control of movement. The specific role of luminance contrast in these processes has not been clearly identified, and previous work on this topic is sparse. Recently, Braun et al. (2008) investigated whether initiation of eye movements differed when tracking two types of targets: one with luminance contrast relative to the background and one isoluminant with the background (i.e. defined by color only). They showed a strong and significant effect of target contrast on the speed of eye movement initiation, with tracking of isoluminant targets delayed by 50 ms. They also showed lower eye accelerations to these no-contrast targets. For upper extremity control, studies have shown mixed results. White, Kerzel, and Gegenfurtner (2006) found no difference in accuracy or response latency when comparing simple rapid aiming movements to targets of high luminance contrast versus isoluminant targets. In a more complex task, Kleinholdermann et al. (2009) examined the influence of the target object's luminance contrast as subjects performed reach-to-grasp movements within a desktop augmented (physical object with graphical overlay) environment. Participants were not provided with a head-coupled stereoscopic view, nor were they given any visual representation of the hand; their view of the environment included only a virtual image overlaying the actual target disk. The independent variables controlled by the experimenters were the chromatic and luminance contrast between the target object and the environment background. The results showed only a minimal effect of luminance contrast on the formation of grasp aperture. The authors concluded that isoluminant targets were as suitable for the motor planning of grasp as targets defined by a luminance contrast or a luminance plus chromatic contrast.

However, because current theories of motor control rest on the premise that object location can be precisely identified in relation to limb location (Wolpert, Miall, & Kawato, 1998), we contend that the lack of visual feedback about the limb likely resulted in a ceiling effect for a number of the performance measures used by Kleinholdermann et al. Given that neuronal tuning properties make the visual system particularly sensitive to change, it is logical that some property involving a change in visual stimulus may be especially useful in the quick, precise identification of object and limb spatial location. Luminance contrast is such a property. Future experiments should expand upon the work of Kleinholdermann et al. by examining the role of the luminance contrast of both the target object and the effector limb for upper extremity performance. Further, the Kleinholdermann et al. paper focused predominantly

two visual stream hypothesis may be overly rigid. Recent investigation has shown that for simple eye movements and pointing tasks, color information can be used to guide movement (White, Kerzel, & Gegenfurtner, 2006). Pisella, Arzi, and Rossetti (1998) studied the ability of humans to utilize color information to quickly update their movements in a perturbation paradigm. While movement reorganization was possible using only color information, the results showed a distinct slowing of movement reorganization. Brenner and Smeets (2004) studied a similar paradigm and found that color could in fact be utilized rather quickly for task reorganization; however, they still observed a minor slowing compared with movement reorganization based on luminance information. Luminance contrast, while also important in perception, may have more direct implications for motor output. Motion sensitivity depends on contrast sensitivity, and sensitivity to motion is a hallmark of the neuronal structure of the dorsal stream (Born & Bradley, 2005). Therefore, luminance contrast may be an important source of visual sensory feedback for motor output.

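Because luminance contrast recurs here as a candidate design variable, it is worth stating how it would be quantified. Below is a minimal sketch using the standard Michelson and Weber contrast definitions; the luminance values are invented for illustration and are not measurements from the study described above.

```python
def michelson_contrast(l_a, l_b):
    # Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin), in [0, 1].
    lmax, lmin = max(l_a, l_b), min(l_a, l_b)
    return 0.0 if lmax == 0 else (lmax - lmin) / (lmax + lmin)

def weber_contrast(l_target, l_background):
    # Weber contrast of a target against a uniform background.
    return (l_target - l_background) / l_background

# Hypothetical luminances (cd/m^2): under dim room lighting a real
# limb barely differs from the background, while a bright graphical
# fingertip representation stands out strongly.
background = 20.0
real_limb = 24.0       # low contrast, analogous to the FV condition
graphic_tip = 120.0    # high contrast, analogous to the FVC condition

print(michelson_contrast(real_limb, background))    # ~0.09
print(michelson_contrast(graphic_tip, background))  # ~0.71
```

Reporting the measured contrast between limb (or limb representation) and background in these terms would make the FV/FVC comparison reproducible across display setups and room-lighting conditions.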