**3.3 Shape vs. texture**

324 Neuroimaging – Cognitive and Clinical Neuroscience

Streri and Gentaz (2003; see also Streri and Gentaz, 2004) conducted an experiment on crossmodal transfer of shape information from the right hand to the eyes in 24 human newborns (mean age: 62 hours). They used an intersensory paired-preference procedure comprising two phases: a haptic familiarization phase, in which newborns were given an object to explore manually without seeing it, followed by a visual test phase, in which infants were shown the familiar object paired with a novel one. The tactile objects were a small cylinder (10 mm in diameter) and a small prism (10 mm triangle base). Because newborns' vision is immature and their visual acuity low, the visual objects were the same 3D shapes but much larger (45 mm triangle base and 100 mm length for the prism; 30 mm diameter and 100 mm length for the cylinder). An experimental group (12 newborns) underwent the two phases successively (haptic then visual), whereas a baseline group (12 newborns) underwent only the visual test phase, with the same objects as the experimental group but without haptic familiarization. Comparison of looking times between the two groups provided evidence of crossmodal recognition: shapes explored by the hands of the experimental group were recognized by the eyes. The newborns in the experimental group looked at the novel object for longer than at the familiar one, whereas the newborns in the baseline group looked equally at both objects. Moreover, infants in the experimental group made more gaze shifts toward the novel object than toward the familiar object; in the baseline group this was not the case. Thus, the recognition observed in the experimental group stems from the haptic familiarization phase. These results suggest that newborns recognized the familiar object through a visual comparison process as well as a comparison between the haptic and visual modalities.
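The logic of the paired-preference comparison can be sketched in a few lines of code. The looking times below are hypothetical illustrative values, not the published data; the point is only how a novelty-preference score is computed and compared against the 50% chance level for each group.

```python
# Sketch of the intersensory paired-preference logic.
# Looking times (seconds) are HYPOTHETICAL illustrative values,
# not the data reported by Streri and Gentaz (2003).

def novelty_preference(novel_s: float, familiar_s: float) -> float:
    """Proportion of total looking time spent on the novel object."""
    return novel_s / (novel_s + familiar_s)

# Experimental group: haptic familiarization, then visual test.
experimental = novelty_preference(novel_s=40.0, familiar_s=24.0)

# Baseline group: visual test only, no haptic familiarization.
baseline = novelty_preference(novel_s=31.0, familiar_s=33.0)

CHANCE = 0.5  # equal looking at both objects

print(f"experimental: {experimental:.2f}")  # above chance -> novelty preference
print(f"baseline:     {baseline:.2f}")      # near chance -> no preference
```

A score reliably above 0.5 in the experimental group, with the baseline group at chance, is what licenses the crossmodal-recognition inference.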
Moreover, the discrepancy between the sizes of the visual and tactile objects was apparently not relevant for crossmodal recognition. Shape alone seems to have been considered by newborns.

Sann and Streri (2007) tested transfer from eyes to hand and from hand to eyes in order to ascertain whether this would demonstrate a complete, primitive 'unity of the senses.' After haptic habituation to an object (cylinder or prism), the infants were shown the familiar and the novel shape in alternation. After visual habituation with either the cylinder or the prism, the familiar and the novel shape were put in the infant's right hand, presented sequentially in an alternating manner. Again, visual recognition was observed following haptic habituation, but the reverse was not the case: no haptic recognition was found following visual habituation. Evidence of visual recognition of shape also depended on which hand was stimulated during the familiarization phase: no crossmodal recognition was found when the left hand was stimulated (Streri and Gentaz, 2004). Thus, cross-modal transfer seems not to be a general property of the newborn human; instead it is specific to certain parts of the body.

To understand this lack of bi-directional crossmodal transfer we must examine the differences between the ways that the two modalities process object shape. Vision processes shapes in a global manner, whereas touch processes information sequentially. Moreover, infants do not use efficient tactile exploratory procedures such as "*contour following*" to establish good representations of shapes (Lederman and Klatzky, 1987). Earlier research using a bi-directional crossmodal shape transfer task (Streri, 1987) revealed that 2-month-old infants visually recognize an object that they have previously held, but do not manifest tactile recognition of an already-seen object.


**3.2 Limits of cross-modal shape transfer** 


The comparison between shape and texture, both amodal properties, makes it possible to test the hypothesis of amodal perception in newborns and to shed light on the processes involved in information-gathering by the two sensory modalities. However, shape is best processed by vision, whereas texture is thought to be best detected by touch (see Bushnell and Boudreau, 1998; Klatzky *et al.*, 1987). According to Guest and Spence (2003), texture is "more ecologically suited" to touch than to vision. In many studies on shape (a macrogeometric property), transfer from haptics to vision has been found to be easier than transfer from vision to haptics in both children and adults (Connolly and Jones, 1970; Jones and Connolly, 1970; Juurmaa and Lehtinen-Railo, 1988; Newham and MacKenzie, 1993; cf. Hatwell, 1994). In contrast, when the transfer concerns texture (a microgeometric property), for which touch is as efficient as (if not better than) vision, this asymmetry does not appear.

Sann and Streri (2007) undertook a comparison between shape and texture in bi-directional crossmodal transfer tasks. They sought to examine how information is gathered and processed by the visual and tactile modalities and, as a consequence, to shed light on the perceptual mechanisms of newborns. If the perceptual mechanisms involved in gathering information on object properties are equivalent in both modalities at birth, then reverse crossmodal transfer would be expected. In contrast, if the perceptual mechanisms differ in the two modalities, then non-reversible transfer should be found. Thirty-two newborns participated in two experiments (16 in crossmodal transfer from vision to touch, and 16 in the reverse transfer). The stimuli were one smooth cylinder and one granular cylinder (a cylinder with pearls stuck on it).

Intermanual and Intermodal Transfer in Human Newborns: Neonatal Behavioral Evidence and Neurocognitive Approach




**4. Conclusions** 




We recognize, understand, and interact with objects through both vision and touch (cf. Hatwell, Streri and Gentaz, 2003; Gentaz, 2009). In infancy, despite the various discrepancies between the haptic and visual modalities—such as asynchrony in the maturation and development of the different senses, distal vs. proximal inputs, and the contrast between the parallel character of vision and the sequential nature of the haptic modality—both systems detect regularities and irregularities when they come into contact with different objects, from birth onward. Conceivably, these two sensory systems may encode object properties such as shape and texture in similar ways. Behavioral evidence in newborns has revealed the involvement of different levels of abstraction in different types of transfer. Intermanual transfer of shape and texture seems to be bi-directional from birth: when newborns hold an object in one hand, left or right, its shape and texture are recognized by the other hand despite the immaturity of the corpus callosum. The maturity of the haptic sense is sufficient for gathering and processing information in a way that makes symmetrical correspondences between hands possible. This intermanual transfer may involve a low level of abstraction, because it does not require a change of representational format: the steps involved, habituation and recognition, occur entirely within one modality, even though the transmission runs through the corpus callosum, which is immature at birth. Cross-modal transfers between vision and touch, by contrast, require a change of format and seem to be more difficult for newborns because of the higher level of abstraction involved.

Studies on crossmodal transfer tasks have revealed some links between the haptic and visual modalities at birth. Newborns are able to visually recognize a held object (Streri and Gentaz, 2003).

The results revealed crossmodal recognition of texture in both directions.

The findings suggest that for the property of texture, exchanges between the sensory modalities are bi-directional: complete cross-modal transfer occurs with texture but not with shape. However, this holds only if the object is volumetric rather than flat, because newborns do not use the *"lateral motion"* exploratory procedure to detect differences between the textures of flat objects (Sann and Streri, 2008b). Cross-modal transfer between hands also reveals differences between the shape and texture properties, and suggests that establishing representations of object shape is difficult for newborns. How should these results be explained? Human infants are particularly immature at birth, and brain maturation continues into adulthood. Almost no neuroimaging data are available for newborns because non-invasive techniques are difficult to apply in healthy infants; for example, newborns and young infants are often asleep (however, see Fransson et al., 2010 for a review of the functional architecture of the infant brain). Adult neuroimaging data, in contrast, offer some insights into how the brain processes cross-modal tasks.
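The newborn transfer results discussed so far can be recapped as a small lookup table. The entries simply re-express the findings summarized in the text for volumetric objects (the function and key names are ours, chosen for illustration):

```python
# Schematic recap of the newborn crossmodal-transfer findings
# reported in the text (volumetric objects; identifiers are ours).

TRANSFER = {
    ("shape", "touch->vision"): True,    # right-hand haptic familiarization
    ("shape", "vision->touch"): False,   # no haptic recognition after visual habituation
    ("texture", "touch->vision"): True,
    ("texture", "vision->touch"): True,  # bi-directional for texture
}

def transfer_found(prop: str, direction: str) -> bool:
    """Was crossmodal recognition observed for this property and direction?"""
    return TRANSFER[(prop, direction)]

# Texture transfer is complete (both directions); shape transfer is not.
texture_complete = all(
    transfer_found("texture", d) for d in ("touch->vision", "vision->touch")
)
shape_complete = all(
    transfer_found("shape", d) for d in ("touch->vision", "vision->touch")
)
print(texture_complete, shape_complete)  # True False
```

The asymmetry in the `shape` rows is exactly what the next section's neuroimaging evidence is asked to explain.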

### **3.4 Neuroimaging data**

On the basis of these findings, two main questions emerge: First, why is bi-directional intermodal transfer observed for texture and not for shape? Second, how is haptic input translated into a visual format in newborns, i.e. by an organism that has never both seen and felt a 3D object?

On the basis of animal and human studies, Hsiao (2008) claimed that 3D shape processing involves the integration of both proprioceptive and cutaneous inputs from the hand. As the hand explores an object, different combinations of neurons are activated, and object recognition occurs as these 3D spatial views of the object are integrated. Cutaneous inputs related to 2D form and texture do not need such integration and may be processed differently from 3D shape in cortex: they are processed in area 3b of SI, whereas the sensitivity of neurons in area 2 to cutaneous inputs depends on hand conformation and its changes. Moreover, according to Hsiao (1998), the mechanisms underlying the early stages of 2D form processing are similar for vision and touch. Newborns' exploration of objects is very limited, and they may not be able to establish the 3D representations needed to perform tactile recognition after visual exploration of an object. Since texture and 2D form are processed similarly in vision and touch, this could explain why 2-month-olds show intermodal transfer from visual 2D objects to haptic 3D objects, but not from visual 3D objects to haptic 3D objects. Similarly, it could explain the bi-directional transfer of texture between touch and vision observed in newborns.

Moreover, neuroimaging data from human adults suggests a functional separation in the cortical processing of micro- and macrogeometric cues (Roland et al. 1988). In this study, adults had to discriminate the length, shape, and roughness of objects with their right hand. Discrimination of object roughness activated lateral parietal opercular cortex significantly more than length or shape discrimination. Shape and length discrimination activated the anterior part of the intraparietal sulcus (IPA) more than roughness discrimination. More recently, Merabet *et al.* (2004) confirmed the existence of this functional separation and suggested that occipital (visual) cortex is functionally involved in tactile tasks requiring fine spatial judgments in normally sighted individuals. More specifically, a transient disruption of visual cortical areas using rTMS (repetitive Transcranial Magnetic Stimulation) did not hinder texture judgments, but impaired subjects' ability to judge the distance between dots in a raised dot pattern. Conversely, transient disruption of somatosensory cortex impaired texture judgments, while interdot distance judgments remained intact. In short, detection of shape and texture properties requires different exploratory procedures, and takes place in two different pathways in adult brains.
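The rTMS result is a double dissociation, which can be stated compactly in code. The mapping below merely re-expresses the outcome described above (the identifiers are ours):

```python
# Double dissociation under transient rTMS disruption (Merabet et al., 2004):
# which judgment is impaired depends on which cortical site is disrupted.

IMPAIRED = {
    ("visual_cortex", "interdot_distance"): True,   # fine spatial judgment impaired
    ("visual_cortex", "texture"): False,            # texture judgment spared
    ("somatosensory_cortex", "interdot_distance"): False,
    ("somatosensory_cortex", "texture"): True,
}

def impaired(site: str, task: str) -> bool:
    """Did disrupting this site impair this judgment?"""
    return IMPAIRED[(site, task)]

# Each site selectively impairs exactly one of the two tasks:
for site in ("visual_cortex", "somatosensory_cortex"):
    assert impaired(site, "texture") != impaired(site, "interdot_distance")
```

The complementary pattern across the two sites is what supports the claim of separate cortical pathways for micro- and macrogeometric cues.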

A second important question is how haptic input is translated into a visual format, given that the sensory impressions are so different and that newborns have no experience with tactile and visual object inputs. To date, there is substantial neuroimaging evidence from adults showing that vision and touch are intimately connected, although views on this interconnectedness vary (see Amedi *et al.*, 2001; Sathian, 2005, for reviews). Cerebral cortical areas that were previously considered exclusively visual, notably the lateral occipital complex (LOC), are activated during haptic perception of shape (Lacey *et al.*, 2007). Crucially, LOC is activated in tactile recognition without mediation by visual recognition. Allen and Humphreys (2009) tested a patient with visual agnosia due to bilateral lesions of the ventral occipito-temporal cortex that had spared dorsal LOC. This patient's visual object recognition was impaired, but his tactile recognition was preserved. Dorsal LOC can therefore be activated directly by tactile input, and visual experience is unnecessary for LOC regions to be active in tactile object recognition. It seems plausible that visual imagery does not exist in newborns because they have little or no experience of the visual world of objects. It is possible that the LOC is activated in newborns' brains when they explore an object haptically, and that the visual recognition of felt shape in cross-modal transfer tasks is not due to any visual imagery.
