Wilkie, & Lopez, 1990) involve properties reflecting the various modalities. More direct support has been obtained from motor control studies involving olfaction and vision (Castiello, Zucco, Parma, Ansuini, & Tirindelli, 2006), in which the odor of an object has been shown to influence maximum hand aperture for object grasping, and from studies of mental rotation of objects presented haptically and visually (Volcic, Wijnjes, Kool, & Kappers, 2010). Each of these studies suggests that our modalities must share a common representation.

**1.1 Summary of proposed studies**

In the present chapter, we report the results of experiments, including recent findings from our laboratory, that explore whether concepts learned haptically or visually can transfer their information to the alternate modality. We also ask whether categorical information, perceived simultaneously by the two modalities, can be learned when the modalities are put into conflict. Specifically, the objects explored visually and haptically belonged to the same category but were, unbeknownst to the subject, different objects. In the latter situation, we are especially interested in whether intermodal conflict retards or even precludes learning, or whether the disparities between touch and vision are readily overcome. We also report the results of a preliminary study that addresses whether concepts can be learned when only partial information is provided. Finally, we explore whether the representations of categories acquired haptically or visually differ minimally or dramatically, and whether those structures are modified in similar ways following category learning.

The lack of research into multi-modal concepts should not imply that little is known about haptic processing. The classic Woodworth and Schlosberg (1954) text devoted a chapter to touch and the cutaneous senses, and a recent textbook on haptics (Hatwell, Streri, & Gentaz, 2003) lists 17 subareas of research with over 1000 references. There is now an electronic journal devoted to haptics (Haptics-e), the IEEE Transactions on Haptics was established in 2009, and numerous laboratories dedicated to haptics and haptic interfaces have been founded both nationally and internationally. A brief summary of pertinent research on haptic processing is presented first.

**1.2 Brief summary of haptic processing**

Haptic perception requires active exploratory movements guided by proprioceptive information. Unlike vision, which provides useful information from a single glance and at a distance (e.g., Biederman, 1972; Luck & Vogel, 1997), haptic perception relies on sequential examination in which tactile-kinesthetic reafferences can be generated only by direct contact with the stimulus. The absence of vision, however, does not preclude the coding of reference and spatial information (Golledge, 1992; Golledge, Ruggles, Pellegrino, & Gale, 1993; Kitchin, Blades, & Golledge, 1997). Haptic perception enables the blind to identify novel stimuli (Klatzky & Lederman, 2003), detect material properties of objects (Kitchin *et al.*, 1997; Gentaz & Hatwell, 1995, 2003), and acquire abstract categories (Homa, Kanav, Priyamvada, Bratton, & Panchanathan, 2009). For example, we (Homa et al., 2009) demonstrated that students who are blind can learn concepts whose members vary in size, shape, and texture as rapidly as sighted subjects who were permitted to both touch and view the same stimuli. Interestingly, the blind subjects exhibited lower false alarm rates than normally sighted subjects who were permitted to view and handle the stimuli, or who were blindfolded and relied on touch alone, rarely calling 'new' stimuli 'old', but with one curious exception – they invariably false alarmed to the category prototypes, and at a much higher rate than any other subjects.

Numerous studies have explored how shape (Gliner, Pick, Pick, & Hales, 1969; Moll & Erdmann, 2003; Streri, 1987), texture (Catherwood, 1993; Lederman, Klatzky, Tong, & Hamilton, 2006; Salada, Colgate, Vishton, & Frankel, 2004), and material (Bergmann-Tiest & Kappers, 2006; Stevens & Harris, 1962) are coded following haptic exploration. Researchers have embraced the possibility that learning and transfer are mediated by an integration of information from multiple sensory modalities (Millar & Al-Attar, 2005; Ernst & Bulthoff, 2004), and that visual and tactile shape processing share common neurological sites (Amedi, Jacobson, Hendler, Malach, & Zohary, 2001). Ernst and Banks (2002) concluded that the lateral occipital complex is activated in similar ways by objects that are viewed or handled. More recently, Ernst (2007) has shown that luminance and pressure resistance can be integrated into a single perception "if the value of one variable was informative about the value of the other". Specifically, participants had a lower threshold for discriminating stimuli when the two dimensions were correlated, but not when they were uncorrelated.
