**1. Introduction**

A concept may be defined as a collection of objects grouped together under a common name, whose members are usually, but not always, generated by a plan or algorithm. All words are concepts, as are natural categories, esthetic styles, the various diseases, and social stereotypes. In virtually all cases, an endless number of discriminably different examples of a concept have been rendered equivalent. A striking example was provided by Bruner, Goodnow, and Austin (1956), who noted that humans can make 7 million color discriminations and yet rely on a relative handful of color names. We categorize, according to Bruner et al., for a number of reasons: it is cognitively adaptive to segment the world into manageable categories; categories, once acquired, permit inference to novel instances; and concepts, once identified, provide direction for instrumental activity. For example, we avoid poisonous plants, fight or flee when encountering threat, and make decisions following a diagnosis. With rare exceptions, concepts are acquired through experiences that are enormously complex and always unique.

However, the substantial and growing literature on formal models of concepts (e.g., Busemeyer & Pleskac, 2009) and on the variables that shape concepts (e.g., Homa, 1984) derives, almost exclusively, from studies that investigate the appearance of objects, i.e., stimuli that are apprehended visually. Yet a moment's reflection reveals that our common concepts are associated with inputs from multiple modalities. The taste, texture, odor, and appearance of food might inform us that the food is spoiled and not fresh; a distinctive shape, gait, and sound might mark a stray dog as probably lost and not dangerous; and the sounds, odors, and handling might tell us that the family car needs a tune-up. Little is known about haptic or auditory concepts, and virtually nothing is known about cross-modal transfer of categorical information between modalities, at least not from formal, experimental studies.

In contrast to the dearth of studies involving multimodal input and cross-modal transfer in category formation, there exists ample, albeit indirect, support for the role of multimodal properties from other cognitive paradigms, ranging from feature and associative listing of words and category instances to the solution of analogies and logical decision making. When asked to list attributes of category members (e.g., Garrard, Lambon Ralph, Hodges, & Patterson, 2001; Rosch & Mervis, 1975), subjects typically include properties drawn from vision, audition, touch, olfaction, and taste. Similarly, the solution of analogies (e.g., Rumelhart & Abrahamson, 1973) and category-based induction (e.g., Osherson, Smith, Wilkie, López, & Shafir, 1990) involve properties reflecting the various modalities. More direct support has been obtained from motor-control studies involving olfaction and vision (Castiello, Zucco, Parma, Ansuini, & Tirindelli, 2006), in which the odor of an object influenced maximum hand aperture during grasping, and from studies of the mental rotation of objects presented haptically and visually (Volcic, Wijntjes, Kool, & Kappers, 2010). Each of these studies suggests that the different modalities share a common representation.

Numerous studies have explored how shape (Gliner, Pick, Pick, & Hales, 1969; Moll & Erdmann, 2003; Streri, 1987), texture (Catherwood, 1993; Lederman, Klatzky, Tong, & Hamilton, 2006; Salada, Colgate, Vishton, & Frankel, 2004), and material (Bergmann Tiest & Kappers, 2006; Stevens & Harris, 1962) are coded following haptic exploration. Researchers have embraced the possibility that learning and transfer are mediated by an integration of information from multiple sensory modalities (Millar & Al-Attar, 2005; Ernst & Bülthoff, 2004), and that visual and tactile shape processing share common neurological sites: Amedi, Jacobson, Hendler, Malach, and Zohary (2001) found that the lateral occipital complex is activated in similar ways by objects that are viewed or handled, and Ernst and Banks (2002) showed that visual and haptic estimates of object size are integrated in a statistically optimal fashion. More recently, Ernst (2007) showed that luminance and pressure resistance can be integrated into a single percept "if the value of one variable was informative about the value of the other": participants had a lower threshold for discriminating stimuli when the two dimensions were correlated but not when they were uncorrelated.
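
Though the passage does not develop it, this threshold result is what the standard maximum-likelihood cue-combination model (the account tested by Ernst and Banks, 2002) predicts; the summary below is supplied for context and is not part of the original argument. If vision and touch provide independent estimates \(\hat{S}_V\) and \(\hat{S}_H\) of the same property, with variances \(\sigma_V^2\) and \(\sigma_H^2\), the reliability-weighted combination

$$
\hat{S}_{VH} = w_V\,\hat{S}_V + w_H\,\hat{S}_H,
\qquad
w_V = \frac{\sigma_H^{2}}{\sigma_V^{2}+\sigma_H^{2}},
\qquad
w_H = \frac{\sigma_V^{2}}{\sigma_V^{2}+\sigma_H^{2}},
$$

$$
\sigma_{VH}^{2} = \frac{\sigma_V^{2}\,\sigma_H^{2}}{\sigma_V^{2}+\sigma_H^{2}} \;\le\; \min\bigl(\sigma_V^{2},\,\sigma_H^{2}\bigr)
$$

has lower variance than either single-modality estimate. Since discrimination thresholds grow with \(\sigma\), thresholds should fall whenever the two signals can be treated as redundant estimates of one property, which is precisely the condition that correlated dimensions create.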

**1.3 Stimuli for Experiments 1-3**

The initial studies used complex 3D shapes, shown in Figure 1, that were composed of three abstract prototypical shapes and systematic distortions of each. The objects were originally modeled in Maya, 3D modeling software produced by Autodesk. Initially, 30-40 three-dimensional virtual forms were generated using a shape growth tool within the Maya suite, and 20 were chosen for multidimensional scaling. Three forms were then selected from the multidimensional scaling (MDS) solution that were moderately separated and appeared to be equidistant from each other in three dimensions. These 3 forms became the prototypical forms for the three categories. The surface of each prototype was then subdivided into a very small polygon mesh, which gives the objects a more organic appearance.
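
To make the selection step concrete, the sketch below illustrates one way to embed candidate forms and pick three nearly equidistant prototypes. It is an illustration only, not the authors' pipeline: the dissimilarity matrix `dissim` is hypothetical stand-in data, and the triple-scoring rule is our own assumption.

```python
# Illustrative sketch (not the authors' Maya/MDS pipeline): embed 20 candidate
# forms with metric MDS and choose three that are far apart and nearly
# equidistant. `dissim` is a hypothetical 20 x 20 pairwise dissimilarity matrix.
from itertools import combinations

import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
points = rng.random((20, 3))            # stand-in for real shape measurements
dissim = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

# Embed the 20 candidate forms in a 3-dimensional MDS space.
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

def triangle_score(i, j, k):
    """Score a triple: prefer large, nearly equal pairwise distances."""
    d = [np.linalg.norm(coords[a] - coords[b])
         for a, b in ((i, j), (j, k), (i, k))]
    return min(d) - np.std(d)           # far apart, with low spread

best = max(combinations(range(len(coords)), 3),
           key=lambda t: triangle_score(*t))
print("Prototype candidates (indices):", best)
```

Any scoring rule that rewards large, nearly equal pairwise distances would serve here; the one above simply penalizes spread among the three distances.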

Maya's shape blend tool was then used to generate forms that were incremental blends between all pairs of the 3 prototype forms. This resulted in a final category population of 24 three-dimensional objects, where each prototype was transformed, along two paths, into the other two prototypes. The distortion setting in the shape blend tool was set to .14, which allowed 7 forms to be generated between each prototype pair. The forms were then converted from Maya's file format into one that could be stereolithographically printed on a ZCorporation ZPrinter. Each of the objects was smooth to the touch and of the same approximate weight and overall size.
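
The counts are easy to verify: 3 prototypes plus 7 blends along each of the 3 prototype pairs gives 3 + 3 × 7 = 24 objects. The sketch below treats Maya's shape blend as simple linear interpolation of corresponding mesh vertices, an assumption adopted purely for illustration; the prototype meshes here are random stand-ins.

```python
# Illustrative sketch: generate the 24-object category population as linear
# vertex blends between prototype pairs. Maya's shape blend tool is treated
# here as plain linear interpolation, which is an assumption.
import numpy as np

N_VERTICES = 512                      # hypothetical mesh resolution
rng = np.random.default_rng(1)
prototypes = {name: rng.random((N_VERTICES, 3)) for name in "ABC"}

STEP = 0.14                           # distortion setting from the text
weights = [round(STEP * i, 2) for i in range(1, 8)]   # 7 blends per pair

population = dict(prototypes)         # the 3 prototypes themselves
for a, b in (("A", "B"), ("B", "C"), ("A", "C")):
    for w in weights:
        # Vertex-wise interpolation: w = 0 is prototype a, w = 1 would be b.
        population[f"{a}->{b}@{w}"] = (1 - w) * prototypes[a] + w * prototypes[b]

print(len(population))                # 3 prototypes + 3 pairs * 7 blends = 24
```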

**1.4 Theoretical issues**

This structure was selected to address a number of additional issues. First, each prototype occupied the endpoint of two transformational paths and was the only form capable of readily generating its distortions. However, unlike the vast majority of categorization studies, each prototype was not otherwise central to its learning (or transfer) patterns but instead lay at the ends of its two transformational paths. We were interested in whether these prototypical objects would, nonetheless, exhibit characteristics typically found in recognition and classification. For example, the prototype is often falsely recognized. […] curious exception – they invariably false alarmed to the category prototypes and at a much higher rate than any other subjects.
