**4. The influence of color information on the recognition of color diagnostic and non-color diagnostic objects**

In this section we present the results from two studies that aimed to clarify the conflicting results found in the literature and to test the hypothesis that the color effects in object recognition depend on the color diagnosticity status of the specific objects. More specifically, we hypothesize that color information influences the recognition of both color and non-color diagnostic objects at a low level of vision (e.g., by improving the segmentation and organization of the perceptual input). However, color is expected to play an additional role in the recognition of color diagnostic objects at higher levels of visual processing.

In a first study, participants performed three object recognition tasks with different cognitive demands at the perceptual, semantic, and phonological levels: an object verification task, a category verification task, and a name verification task. Humphreys and colleagues argued that performing these tasks poses different challenges to the cognitive system (Humphreys, Price, & Riddoch, 1999; Humphreys & Riddoch, 2006; Humphreys, Riddoch, & Quinlan, 1988; Riddoch & Humphreys, 1987). In the name verification task, participants were instructed to verify the name of visually presented objects. A number of processing stages must be completed before the name representation can be accessed. First, the early visual processes must encode the object's shape and other perceptually available information. The encoded information must then be matched against the structural descriptions stored in long-term memory. The stored semantic and conceptual information about the object must be activated, and only then is the name representation accessed. During this process, different forms of stored memory must be accessed: knowledge about the object's shape (structural description), its functional and other meaning-related properties (semantic representation), and its name (lexical representation). In the category verification task, participants were instructed to verify the object's semantic category (natural or artifact). In contrast to name verification, category verification depends only on access to the stored structural description and the semantic representation. In the object verification task, participants were instructed to verify whether the presented object was a known object; this requires access only to the structural description (Humphreys, Price, & Riddoch, 1999; Humphreys & Riddoch, 2006). By comparing performance on these tasks, using both colored and black-and-white images, we attempted to determine the processing level at which color information facilitates the recognition of color and non-color diagnostic objects (**Figure 5**). If color information improves the recognition of color diagnostic objects at both the early visual and the semantic levels, then we expect to find a perceptual color effect for these objects when the task requires access to the structural description (i.e., in object verification).

Furthermore, a larger effect of color information is to be expected for color diagnostic objects when the task requires access to both structural descriptions and semantic representations (i.e., in category verification). In the name verification task, we predicted color effects similar to those in the category verification task, given that no specific role of color is expected for accessing the lexical representation (i.e., the name) of an object *per se*. However, if color only modulates non-color diagnostic object recognition at the early visual processing stages, then we expect to find a perceptual color effect only when the task requires access to the structural descriptions (i.e., in object verification). Moreover, we predicted that the perceptual color effect would remain constant for these objects across the remaining tasks, suggesting that only the early visual processing stages are affected by color information for these objects (Bramão, Inácio, Faísca, Reis, & Petersson, 2011).
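The predictions above can be summarized as a toy lookup over the task hierarchy. This is purely illustrative (the function and data-structure names are ours, not from the studies): each task is mapped to the stored representations it must access, and a color advantage at the semantic level is predicted only for color diagnostic objects.

```python
# Illustrative sketch of the hypothesis, not code from the studies.
# Representations each verification task must access, following the
# access hierarchy of Humphreys et al. (1999) summarized in the text.
TASK_ACCESS = {
    "object":   ["structural"],
    "category": ["structural", "semantic"],
    "name":     ["structural", "semantic", "lexical"],
}

def predicted_color_effect(task: str, color_diagnostic: bool) -> str:
    """Return the predicted locus of the color advantage for a task."""
    levels = TASK_ACCESS[task]
    # Early visual facilitation (segmentation/organization) applies to
    # every object type, regardless of color diagnosticity.
    effect = "early-visual"
    # Color diagnostic objects gain an additional semantic-level advantage
    # whenever the task requires access to semantic representations.
    if color_diagnostic and "semantic" in levels:
        effect = "early-visual + semantic"
    return effect
```

On this sketch, `predicted_color_effect("name", False)` stays at the early-visual level, capturing the prediction that the color effect remains constant across tasks for non-color diagnostic objects.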

In another study, we used ERPs to investigate this question. In contrast to behavioral measures, ERPs permit the analysis of cognitive processes with a temporal resolution of milliseconds and thus represent an optimal approach for studying the level at which the visual processing of color information modulates object recognition. In a recognition task, subjects were presented with color and black-and-white versions of color and non-color diagnostic objects (**Figure 5**). Color effects were investigated in two early visual ERP components, the P1 and N1, and in two visual ERP components modulated by higher visual processes, the N350 and N400 (Bramão et al., Submitted). The P1 is an early scalp-recordable response to visual stimuli, which peaks at approximately 100 ms after stimulus onset and is best represented over the occipital sensors. This component has been associated with low-level visual processing but is also sensitive to attention (Mangun & Hillyard, 1991). The P1 is followed by a negative deflection peaking approximately 150 ms after stimulus onset, termed the N1, which has been observed primarily over the occipito-temporal region and is an electrophysiological index of perceptual processing, with increased visual processing demands reflected in more negative values (Johnson & Olshausen, 2003; Kiefer, 2001; Rossion et al., 2000; Tanaka, Luu, Weisbrod, & Kiefer, 1999; Wang & Kameda, 2005; Wang & Suemitsu, 2007). Based on our previous research, we predicted that black-and-white stimuli would elicit a more positive P1 response and a more negative N1 response over occipital sites compared to color stimuli for both color and non-color diagnostic object recognition.

The late visual N300 and N400 components are ERPs related to semantic processing. The N300 is a negative-going component that peaks at approximately 300 ms after stimulus presentation and has an anterior topographic distribution (Barrett & Rugg, 1990; McPherson & Holcomb, 1999; Pratarelli, 1994). The N300 appears to be specific to visual stimuli and reflects a neural system that supports object model selection and generic memory. The N300 is the earliest marker of successful object categorization, with increased negative magnitude over frontal regions for unidentified objects compared to correctly categorized stimuli (Hamm, Johnson, & Kirk, 2002; McPherson & Holcomb, 1999; Schendan & Kutas, 2002, 2007). The N300 is followed by the N400 component, a negative deflection over centro-parietal regions peaking at approximately 400 ms after stimulus onset. The N400 is widely used as an index of semantic processing, with an increase in negative magnitude for semantically unrelated compared to semantically related material (Kutas & Hillyard, 1980a, 1980b). Both the N300 and N400 components are related to late visual processing, with the N300 reflecting early object categorization (e.g., activation of the object structural features that lead to a categorical representation) and the N400 being sensitive to information extracted after initial categorization (Hamm, Johnson, & Kirk, 2002). In line with our hypothesis, we predicted that color effects in these two components would be restricted to color diagnostic objects.
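As an aside, the window-based component measurements described above can be sketched in a few lines of analysis code. This is a hedged illustration only: the sampling rate, window bounds, and the simulated occipital waveform are our assumptions for demonstration, not the chapter's actual analysis pipeline.

```python
# Illustrative ERP peak measurement in a priori time windows
# (simulated data; parameters are assumptions, not from the studies).
import numpy as np

FS = 500                            # assumed sampling rate in Hz
t = np.arange(-0.1, 0.6, 1 / FS)    # epoch from -100 ms to 600 ms

def window_peak(erp, times, lo, hi, polarity):
    """Peak amplitude within [lo, hi] s; polarity +1 -> max, -1 -> min."""
    segment = erp[(times >= lo) & (times <= hi)]
    return segment.max() if polarity > 0 else segment.min()

# Simulated occipital waveform: a positive P1 near 100 ms followed by a
# negative N1 near 150 ms, each modeled as a Gaussian bump (microvolts).
erp = (2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))
       - 3.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.015 ** 2)))

p1 = window_peak(erp, t, 0.08, 0.12, +1)   # most positive value, 80-120 ms
n1 = window_peak(erp, t, 0.13, 0.18, -1)   # most negative value, 130-180 ms
```

Mean amplitudes over the N300/N400 windows (roughly 250-350 ms and 350-450 ms) can be computed the same way by averaging the masked segment instead of taking its extremum.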


Fig. 5. Example of the stimuli used in our experiments.

Our behavioral results showed that, during non-color diagnostic object recognition, the role of color was restricted to tasks with high visual perceptual demands. During color diagnostic object recognition, however, color was found to play a role in tasks that required high semantic processing (**Figure 6**; Bramão, Inácio, Faísca, Reis, & Petersson, 2011).

Fig. 6. Three-way interaction between the factors task, object color diagnosticity, and presentation mode on verification times. A – Object verification task, B – Category verification task, C – Name verification task. Bars represent the standard error (Bramão, Inácio, Faísca, Reis, & Petersson, 2011).


The electrophysiological results corroborate our behavioral results. Independent of color diagnosticity status, an early color effect was found (~100 ms after stimulus onset), suggesting that color aids image segmentation and thus lowers the demands of the early visual processing stages. For color diagnostic objects, additional color effects occurred later (~350 ms after stimulus onset). These later color effects indicate that color is involved in the later stages of the recognition process for color diagnostic objects (**Figure 7**; Bramão et al., Submitted).

Fig. 7. Topographic distribution of the black-and-white *vs.* color objects in the time windows of interest for the color diagnostic and non-color diagnostic objects (Bramão et al., Submitted).

Altogether, these results suggest that color information contributes to the recognition of both color and non-color diagnostic objects, but at different stages of visual processing. Color information proved to be an important cue for solving the early perceptual demands at the initial stages of visual processing for both types of objects. Moreover, for color diagnostic objects, color information also contributes to the later stages of visual processing.
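The two-window pattern of results can be sketched numerically as a black-and-white minus color difference wave summarized in an early (~100 ms) and a late (~350 ms) window. The data below are simulated and the window bounds are our assumptions; the sketch only illustrates the logic that a late difference is expected for color diagnostic objects alone.

```python
# Illustrative difference-wave summary (simulated data, assumed windows).
import numpy as np

FS = 500                          # assumed sampling rate in Hz
t = np.arange(0.0, 0.6, 1 / FS)   # 0-600 ms post stimulus onset

def bump(center, amp):
    """Gaussian bump standing in for a localized ERP color effect."""
    return amp * np.exp(-((t - center) ** 2) / (2 * 0.02 ** 2))

def mean_window(diff_wave, times, lo, hi):
    """Mean amplitude of a difference wave within [lo, hi] seconds."""
    return float(diff_wave[(times >= lo) & (times <= hi)].mean())

# Simulated BW-minus-color difference waves (arbitrary units):
diff_color_diag = bump(0.10, 1.0) + bump(0.35, 1.2)   # early + late effect
diff_non_diag   = bump(0.10, 1.0)                     # early effect only

early_cd = mean_window(diff_color_diag, t, 0.08, 0.12)
late_cd  = mean_window(diff_color_diag, t, 0.30, 0.40)
late_nd  = mean_window(diff_non_diag, t, 0.30, 0.40)
```

Here `early_cd` and `late_cd` are both clearly positive, while `late_nd` is near zero, mirroring the reported dissociation between the two object types.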

**5. Conclusions**

The major outcome of these studies is that the influence of color on object recognition depends on the object's color diagnosticity status. Tanaka and Presnell (1999) proposed that color information contributes to object recognition only when objects are color diagnostic (see also Nagai & Yokosawa, 2003; Oliva & Schyns, 2000). However, recent studies have reported results suggesting that color contributes to the recognition of both color and non-color diagnostic objects (Rossion & Pourtois, 2004; Uttl, Graf, & Santacruz, 2006). We have provided data that may clarify these apparently contradictory results. Our studies suggest that color information affects different levels of visual processing during the recognition of color and non-color diagnostic objects. For the recognition of non-color diagnostic objects, color information is an important cue for the initial image segmentation and the organization of the visual input, making the selection of a structural description stored in long-term visual memory easier and faster, and thus resulting in faster object verification. Moreover, our results also show an absence of color effects for non-color diagnostic objects in the later stages of visual processing. For color diagnostic objects, however, we observed an additional role for color information. Beyond the facilitation that color confers on the initial visual stages, our results showed a strong color effect in the later stages of object recognition. Color appears to affect the later stages of recognition of color diagnostic objects in two different ways. First, color information triggers the selection of the structural object description from long-term visual memory. When we see an object, color and shape are likely processed in parallel. Some studies suggest that the same neural circuits, in early visual cortical regions, process information about color, shape, and luminance (Gegenfurtner, 2003). At some point, this information must be combined to achieve a unitary representation of the visual world. One possibility is that this combination occurs during the selection of the structural description, where color might act as a cue that limits the range of candidate structural descriptions. The results also suggest that the templates corresponding to color diagnostic objects are stored in our visual memory system in a typical color format. Second, color information contributes to the activation and retrieval of the semantic network associated with these objects.

In summary, our work shows that the role of color in object recognition depends on the correlation between color and shape. When the correlation between color and shape is high, as in the case of color diagnostic objects, color information is especially important at the semantic representation level, whereas when the correlation is low, as in the case of non-color diagnostic objects, color information improves object recognition only at the early stages of visual processing. These results suggest that color improves object recognition in the early stages of visual processing for all objects; however, because non-color diagnostic objects are not strongly associated with a color, no further color advantage is expected at the higher processing levels.

The results reviewed in this contribution advance our current understanding of the role of color information during object recognition and its relationship with the object's color diagnosticity status. Together, our results show that color modulates the recognition of color and non-color diagnostic objects at different levels of visual processing: for color diagnostic objects, color plays an important role at the semantic level; for non-color diagnostic objects, color plays a role at the pre-semantic recognition level.

**6. Acknowledgements** 

This work was supported by Fundação para a Ciência e Tecnologia (REEQ/879/PSI/2005; PTDC/PSI-PCO/110734/2009; IBB/CBME, LA, FEDER/POCI 2010), the Max Planck Institute for Psycholinguistics, the Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Vetenskapsrådet (8276), Hedlunds Stiftelse, and Stockholm County Council (ALF, FoUU). Inês Bramão was supported by a PhD fellowship (FCT/SFRH/BD/27381/2006).

**7. References** 

Allen, G. (1879). *The Colour-sense: Its Origin and Development*. London: Trubner & Co.

Barrett, S., & Rugg, M. (1990). Event-related potentials and the semantic matching of pictures. *Brain and Cognition, 14*, 201-212.

Bartels, A., & Zeki, S. (2000). The architecture of the colour centre in the human visual brain: new results and a review. *European Journal of Neuroscience, 12*, 172-193.

Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. *Psychological Review, 94*, 115-147.

Biederman, I., & Ju, G. (1988). Surface versus edge-based determinants of visual recognition. *Cognitive Psychology, 20*, 38-64.

Bramão, I., Faísca, L., Forkstam, C., Reis, A., & Petersson, K. M. (2010). Cortical brain regions associated with color processing: An FMRI study. *The Open Neuroimaging Journal, 4*, 164-173.

Bramão, I., Francisco, A., Inácio, F., Faísca, L., Reis, A., & Petersson, K. M. (Submitted). Electrophysiological evidence for color effects on the recognition of color diagnostic and non-color diagnostic objects.

Bramão, I., Inácio, F., Faísca, L., Reis, A., & Petersson, K. M. (2011). The influence of color information on the recognition of color diagnostic and non-color diagnostic objects. *The Journal of General Psychology, 138*, 1-17.
