…same orientation – perhaps because of the additional forces used to maintain the tilted orientation of the palm board.

In summary, haptic matching tasks have proven very difficult to interpret, for at least three reasons. First, evidence of surprising accuracy between palm board matches and hills has turned out to be spurious: palm boards simply feel much steeper than they are because they require flexing the wrist more than is customary in normal circumstances. Second, the perceptual gain of haptic surface perception is generally unknown. If people see a visual surface as 45° when it is only 34°, and also feel a haptic surface as 45° when it is only 34°, they will appear to be accurately matching a 34° surface while believing they are matching a 45° one. Finally, the fact that passive contact with a rigid surface dissociates from active rotation of a palm board suggests that haptic measures can be contaminated by active control of the surface's orientation.

**4.1 Calibration between proprioception and the visual experience of slant**

The function shown in Figure 6 for free-hand manual gestures was obtained with the same set of surfaces used to obtain the function in Figure 1 for verbal estimates of slant. Nonetheless, the verbal estimates show a great deal of bias (exaggeration of the deviation from horizontal), whereas the manual gestures appear to be well calibrated. One possibility is that proprioception is calibrated to vision; that is, the perceived orientation of the hand ought to match the perceived orientation of the surface. In support of this view, Li and Durgin (submitted) have found that perceived hand orientation during free-hand gestures follows the same function as the verbal pattern shown in Figure 1.

**4.2 An apparent discrepancy in the calibration account**

So far we have suggested that hand gestures are calibrated to near surfaces, but are not calibrated for far surfaces (which seem steeper). We have argued that palm board measures, which some have described as calibrated for hills, aren't really calibrated to visual surfaces at all. And we have suggested that the haptic experience of slants underfoot may be partly calibrated to the visual experience of hills, but needn't be. The guiding rule might be that calibration occurs when there is some real possibility for action with immediate spatial feedback from more than one modality. Underfoot surface calibration may be unnecessary because the foot is biomechanically adaptive and people tend not to look at their own feet when walking; touching surfaces manually, by contrast, provides both haptic and visual feedback.

However, there is one apparent exception to this proposed guiding rule. Hajnal et al. (2011) used a force-feedback robotic arm (Phantom) to allow participants to feel a virtual surface, and with this (carefully calibrated) device they collected verbal estimates of perceived slant. Given the calibration account (based on the potential for shared visual and haptic experience in normal life), we should expect that haptic exploration of a surface by dynamic touch would reveal the same kind of spatial bias that is evident in vision (Figure 1) and in static haptic contact (Figure 2). Although Hajnal et al. did observe overestimation of slant in their procedure, they applied a linear fit to their data, which they interpreted as indicating a fairly constant overestimation across all orientations. Their fit line is shown in the left panel of Figure 7. In fact, most of their data can be captured by a curved bias function such as we have seen in other data: as shown in the right panel of Figure 7, only the lowest three plot points deviate from the typical curvature we have observed elsewhere. We therefore sought to replicate their dynamic touch result, choosing to use a real physical surface rather than a virtual one.

Fig. 7. Hajnal et al.'s (2011) dynamic touch data with two different fit lines. In the left panel, the linear fit line originally plotted by Hajnal et al. is shown. In the right panel, a cubic function with an intercept of (0, 0) was fit to the data.
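The contrast between the two panels of Figure 7 can be made concrete with a short least-squares sketch. The numbers below are synthetic stand-ins, not Hajnal et al.'s (2011) measurements; the point is only the methodological difference between a free-intercept linear fit and a cubic bias function constrained through (0, 0):

```python
import numpy as np

# Illustrative (synthetic) dynamic-touch data: actual surface slants and
# judged slants in degrees. These are NOT Hajnal et al.'s (2011) values.
actual = np.array([9.0, 18.0, 27.0, 36.0, 45.0, 54.0, 63.0, 72.0, 81.0])
judged = actual + 12.0  # a roughly constant overestimation, which a linear fit captures

# Linear fit with a free intercept, as in the left panel of Figure 7.
slope, intercept = np.polyfit(actual, judged, 1)

# Cubic fit constrained through the origin, as in the right panel:
# judged ≈ b1*x + b2*x^2 + b3*x^3 (no constant term), via ordinary least squares.
X = np.column_stack([actual, actual**2, actual**3])
coefs, *_ = np.linalg.lstsq(X, judged, rcond=None)

def cubic_through_origin(x):
    # Evaluate the zero-intercept cubic bias function at slant x (degrees).
    return coefs[0] * x + coefs[1] * x**2 + coefs[2] * x**3

print(f"linear fit: slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"cubic fit at 0°: {cubic_through_origin(0.0):.2f}")
```

Forcing the intercept to (0, 0) encodes the assumption that a horizontal surface is felt as horizontal, so any overestimation must grow nonlinearly with actual slant; a free-intercept linear fit, by contrast, can absorb the bias into a constant offset.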
