ORIGINAL RESEARCH article

Front. Psychol., 01 November 2010
Sec. Developmental Psychology

Visual Exploration Strategies and the Development of Infants’ Facial Emotion Discrimination

  • 1 Department of Cognitive, Linguistic, & Psychological Sciences, Brown University, Providence, RI, USA
  • 2 Department of Neurology and Neuroscience, Weill-Cornell Graduate School of Medical Sciences, NY, USA
  • 3 Department of Psychology, Columbia University, New York City, NY, USA
  • 4 Clinical Psychology, St. John’s University, New York City, NY, USA
  • 5 Department of Psychology, University of California at Los Angeles, Los Angeles, CA, USA

We examined the role of visual exploration strategies in infants’ discrimination between facial emotion expressions. Twenty-eight 6- to 11-month-olds were habituated to alternating models posing the same expression (happy, N = 14; fearful, N = 14) while eye gaze data were collected with a corneal reflection eye tracker. Gaze behavior analyses indicated that durations of gaze to the eyes and mouth were similar, consistent with what would be expected from the area subtended by those regions, and negatively correlated. This pattern did not differ as a function of age, sex, or habituation condition. There were no posthabituation performance differences as a function of age group (6- to 8-month-olds versus 9- to 11-month-olds). Only infants habituated to happy faces showed longer looking at the novel emotion (fear) when the model was held constant from habituation to test. We found no reliable correlation between this performance and the proportion of gaze directed at any one facial region. Consistent with previous work, the group habituated to fearful faces showed no reliable posthabituation novelty preference. Individual differences in gaze behavior shed light on this finding: a greater proportion of gaze directed at the eyes correlated positively with preference for the novel emotion (happy). These data suggest that, as with other object classes, visual exploration strategies are an important agent of change in infants’ capacity to learn about emotion expressions.

Introduction

The ability to identify emotional expressions in others’ faces is key to typical social interaction. A first step in this process is the ability to distinguish between emotional expressions and to generalize the encoded information across individuals. There are two putative mechanisms underlying face processing in general, and facial emotion processing more specifically. The first is that understanding the meaning of facial expressions is universal (Ekman, 1984) and may be represented in a domain-specific module. Alternatively, the relevant neural substrates exist but become specialized as a function of experience with faces (Adolphs et al., 1995; Davis and Whalen, 2001; Nelson, 2001; Tottenham et al., 2009). One piece of evidence for the second hypothesis is that visual processing strategies change across the first postnatal year to become more adult-like (Cohen and Cashon, 2001; LeGrand et al., 2001). For example, Kestenbaum and Nelson (1990) showed that 7-month-old infants process happy faces in a holistic rather than piecemeal manner, while Younger and Cohen (1986) showed that 4-month-olds were not yet capable of this type of processing. Subsequent research has shown infants as young as 5 months to be sensitive to second-order relations between facial features (Hayden et al., 2007). It remains unclear what kind of visual experience is sufficient to produce such processing changes. We test the hypothesis that, as with other object classes, targeted visual exploration strategies play a role in the discrimination and generalization of facial emotion expressions.

As in the face processing literature, a substantial body of evidence suggests that there is progress over the first several postnatal months from a disorganized and fragmented perception of objects to a more mature perception of objects as continuing across space and time (Slater et al., 1990; Johnson and Aslin, 1995, 1996). When learning about simple objects, individual differences in where infants look on a display have been shown to correlate with what they perceive. Johnson et al. (2004) and Amso and Johnson (2006) combined looking time methodology with corneal reflection eye tracking to examine the relations between active exploration (infants’ oculomotor scans and fixations) during habituation and object unity perception. In the rod-and-box paradigm (Kellman and Spelke, 1983), infants are habituated to a partly occluded rod display followed by two test displays, one designed to match a percept of unity and the other to match a percept of disjoint surfaces. Johnson (2004) showed that 2-month-olds provide evidence of unity perception only under limited conditions, and that it is not until 4 months that unity perception becomes reliably robust. Therefore, Johnson et al. (2004) reasoned that if the development of visual exploration skills is an important mechanism of change in veridical object perception, 3-month-old infants whose novelty preference indicates unity perception on this task should target scans and fixations to the relevant, informative areas of the habituation display: the top and bottom of the rod parts, the four background quadrants, and the occluder. Infants who looked longer at the broken rod display at test, indicating unity perception, provided evidence of more effective visual exploration strategies during habituation. As a group, they fixated the rod more frequently during habituation and scanned more often across the rod parts as they translated back and forth. These data suggest an association between object perception and infants’ visual experience, behavior, and interactions with the environment, and provide evidence for real-time visual information gathering during habituation. We suggest that a similar mechanism may be at play in face processing. We hypothesize that targeted visual exploration strategies, scans and fixations to the emotion-relevant eye and mouth regions, are likely to play a role in the discrimination and generalization of facial expressions in others.

Infants as young as 3 months have shown evidence of discriminating happy from surprised faces, and sad from surprised faces (Young-Browne et al., 1977). Older infants have shown evidence of discriminating anger, fear, and surprise (Serrano et al., 1992), fearful and happy faces (Nelson et al., 1979; Nelson and Dolgin, 1985), and happy and surprised faces (Caron et al., 1982). Categorical perception is evident in 7-month-old infants, suggesting that perception of at least some facial expressions is robust to slight changes in an individual’s appearance (Kotsoni et al., 2001), including changes of identity (Nelson et al., 1979; Nelson and Dolgin, 1985). However, others find that infants’ capacity to discriminate facial expressions is robust only when the identity of the model is held constant across expressions (Caron et al., 1985). A common thread across several of these studies is that particular emotions (happy, fearful) are processed differently by infants of the same age. There is evidence that infants habituated to happy faces can subsequently identify a fearful face as novel, but the reverse is not true (e.g., Nelson et al., 1979; Nelson and Dolgin, 1985; Kotsoni et al., 2001). Similarly, others have shown that infants can generalize emotion across models when habituated to happy but not fearful faces (Caron et al., 1982; Kotsoni et al., 2001). The mechanisms for this difference in processing happy vs. fearful expressions remain elusive.

We argue that, as with other object classes, visual exploration may be an agent of change in facial emotion discrimination. We use the happy/fear habituation effect described above to test this hypothesis. Early in life, scanning is limited to external facial features; after 2 months of age, scanning shifts to internal features (Maurer and Salapatek, 1976). When measured at the group level, infants look longer at the mouth region than at the eyes until they are at least 4 months old (Hunnius and Geuze, 2004), and at 6 months looking is, on average, evenly distributed across the eyes and mouth (Merin et al., 2006). This evidence leads us to ask whether happy faces may be visually explored differently from fearful faces, thereby supporting differences in emotion expression discrimination after habituation to happy but not fearful faces. We tested a cohort of infants (6–11 months) on a combined eye tracking/habituation paradigm modeled after Kestenbaum and Nelson (1990). We focused on the latter part of the first postnatal year first to control for individual differences in oculomotor control and visual attention known to influence visual orienting (Amso and Johnson, 2006). Specifically, we aimed to ensure that infants in our sample were beyond the simple reflexive orienting that may hypothetically drive looking only at the salient mouth region in very young infants (e.g., Hunnius and Geuze, 2004). We specifically aimed to understand whether individual differences in looking at emotion-relevant regions (eyes/mouth) are relevant to facial emotion discrimination. Experience with facial emotions may influence where infants look on a face or which emotions they find familiar. Reasoning that older infants have more experience with facial emotions, we divided our sample into 6- to 8- and 9- to 11-month-old groups to consider the contribution of this variance to posthabituation test performance and gaze patterns.

Materials and Methods

Participants

A total of 28 infants participated in this experiment (M age = 244.64 days, SD = 52.06; 14 girls). Age group and sex were evenly distributed across the happy (6- to 8-month-olds, M = 6.57 months, SD = 0.79, N = 7; 9- to 11-month-olds, M = 9.3, SD = 0.49, N = 7; eight girls in total) and fearful (6- to 8-month-olds, M = 6.9 months, SD = 0.38, N = 7; 9- to 11-month-olds, M = 9.6, SD = 0.79, N = 7; six girls in total) habituation conditions. Sixteen additional infants were observed but excluded from the final sample for excessive movement resulting in insufficient point of gaze (POG) data (5), failure to habituate (4), fussiness (5), and experimenter error (2). Infants were full term with no known developmental disabilities. Families were recruited via advertisements and/or a letter and a follow-up phone call. Families were compensated for travel expenses, and infants received a certificate of completion as a thank-you gift. Infants roughly represented the racial and socioeconomic makeup of the New York City area: 54% Caucasian, 11% Hispanic, 4% African American, 7% Asian, and 18% of mixed racial background; 7% declined to provide information. All participants were screened for evidence of maternal depression, family history of psychiatric disorders, neurological delays or impairments, and visual impairments; none of these is represented in this sample.

Apparatus

Infants were seated on a parent’s lap approximately 100 cm from a 50-cm stimulus-presentation monitor. Eye movement data were collected with a remote optics corneal reflection eye tracker (Applied Science Laboratories Model 6000). Each infant’s POG was calibrated with an attention-getter that contracted and expanded in synchrony with a rhythmic sound at the top left and bottom right corners of the screen. Infants then viewed the attention-getter at several random locations on the screen. If the POG was not within 0.5° of the center of the attention-getter at all locations (minimum of six), the calibration procedure was repeated. The experiment began only once the calibration criterion had been reached. Eye tracking data were collected at 60 Hz, with the software averaging across five samples.
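For concreteness, the calibration criterion can be expressed as a simple accept/repeat check. The sketch below is illustrative only, not the ASL software’s actual logic; the function name and data structure are our assumptions, and coordinates are assumed to be in degrees of visual angle.

```python
import math

# Illustrative sketch (not the ASL software): calibration passes only if the
# measured POG falls within 0.5 deg of the attention-getter's center at every
# tested location, with a minimum of six locations. Each entry pairs a
# measured (x, y) with the target (x, y), both in degrees of visual angle.

TOLERANCE_DEG = 0.5
MIN_LOCATIONS = 6

def calibration_passed(points):
    """points: list of (measured_xy, target_xy) tuple pairs."""
    if len(points) < MIN_LOCATIONS:
        return False  # too few locations tested; repeat the procedure
    return all(math.dist(measured, target) <= TOLERANCE_DEG
               for measured, target in points)
```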

Stimuli and Procedure

We used an infant-controlled habituation procedure. One experimenter controlled the ASL eye tracker while another collected looking time data using Habit (Cohen et al., 2004), blind to both habituation and test displays. Each trial began with presentation of the attention-getter in the center of the screen. The habituation experimenter ended the attention-getter and began the display for each trial when the infant was judged to have oriented to the attention-getter. A trial ended when the infant looked away for 2 s or when 30 s had elapsed; the stimulus was then replaced by the attention-getter to begin the next trial. The habituation stimulus was presented until looking times declined across three consecutive trials whose sum was less than half the total looking during the first three trials. The minimum number of habituation trials was four and the maximum was 10. We presented infants with two alternating female models posing the same expression (either happy or fearful) during habituation. Face stimuli (color) were taken from the NimStim Set of Facial Expressions (Tottenham et al., 2009) and positioned on the screen such that the nose was at display center. We used models #03, #08, and #09, who provide expressions of fear and happiness that are identified at high rates by adult raters (#03 Happy = 100%, Fear = 78%; #08 Happy = 99%, Fear = 84%; #09 Happy = 99%, Fear = 81%). The set is designed such that faces are approximately the same size. Stimuli were chosen such that, minor natural variability notwithstanding, regions subtended approximately the same visual angle and area across emotions and areas of interest (AOIs). Specifically, all faces subtended 9.6–10.2° visual angle and were approximately 110 cm² in area (including all AOIs but excluding hair). The forehead AOI subtended 2° visual angle (25.55 cm², 23% of face area), the eyes AOI 2° (33.25 cm², 30% of face area), the cheek/nose AOI 0.8° at its narrowest and 2.7° at its widest (22.76 cm², 21% of face area), and the mouth AOI 3° at its widest (28.5 cm², 26% of face area; see Figure 1). Emotional expression during habituation was counterbalanced across subjects. We presented three types of test trials, twice each, for a total of six alternating test trials: a familiar model with the novel emotion (FMNE), a novel model with the novel emotion (NMNE), and a novel model with the familiar emotion (NMFE). Test order was counterbalanced across subjects. Each face was divided into four AOIs (eyes, mouth, cheek/nose, and forehead; see Figure 1) for quantification of gaze behavior. Individual fixations were defined as portions of the data during which the x–y coordinates of the POG did not vary by more than 0.5° for a minimum of 100 ms. A scan between regions was defined as beginning and ending in a fixation.
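To make the two decision rules above concrete, the sketch below gives a minimal, hypothetical implementation: the habituation criterion under one common reading (a sliding three-trial window, which may overlap trials two and three, summing to less than half the first three trials), and a dispersion-based fixation detector matching the 0.5°/100-ms definition at the 60-Hz sampling rate. The function names and data structures are ours, not part of Habit or the ASL software.

```python
MIN_TRIALS, MAX_TRIALS = 4, 10

def habituation_ends(looking_times):
    """looking_times: per-trial looking durations (s), in trial order.
    True once any three consecutive trials sum to less than half the
    total looking during the first three trials."""
    n = len(looking_times)
    if n >= MAX_TRIALS:
        return True   # presentation stops unconditionally at 10 trials
    if n < MIN_TRIALS:
        return False  # at least four trials are always presented
    criterion = sum(looking_times[:3]) / 2.0
    return any(sum(looking_times[i:i + 3]) < criterion
               for i in range(1, n - 2))

def detect_fixations(samples, hz=60, max_dispersion=0.5, min_dur_ms=100):
    """samples: (x, y) POG coordinates in degrees, one per frame at `hz`.
    Returns (start, end) sample indices of fixations, defined as runs in
    which neither x nor y varies by more than `max_dispersion` degrees
    for at least `min_dur_ms` (6 frames at 60 Hz)."""
    min_len = int(min_dur_ms * hz / 1000)
    fixations, start = [], 0
    for i in range(1, len(samples) + 1):
        xs = [x for x, _ in samples[start:i]]
        ys = [y for _, y in samples[start:i]]
        if max(xs) - min(xs) > max_dispersion or max(ys) - min(ys) > max_dispersion:
            if (i - 1) - start >= min_len:
                fixations.append((start, i - 1))
            start = i - 1  # begin a new candidate window at the breaking sample
    if len(samples) - start >= min_len:
        fixations.append((start, len(samples)))
    return fixations
```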

Figure 1. Illustrates the areas of interest (AOIs) used to determine gaze location.

Results

Preliminary Analyses of Habituation Data

We first examined general patterns of habituation in the entire sample. A multivariate ANOVA examined total time to habituation (M = 81.42 s, SD = 34.67 s) and number of trials to habituation (M = 6.96 trials, SD = 2.01) as a function of age group, sex, and habituation condition. The analysis revealed no differences as a function of age group in total time to habituate, F(1,20) = 0.000, ns, or number of trials to habituation, F(1,20) = 1.51, ns. There were also no differences as a function of habituation condition (fearful vs. happy faces) in number of trials to habituation, F(1,20) = 0.06, ns, or total time to habituate, F(1,20) = 0.45, ns. Likewise, no differences in total time to habituate, F(1,20) = 0.40, ns, or trials to habituation, F(1,20) = 0.002, ns, obtained as a function of sex. Finally, we found no interactions among age, sex, and habituation condition on total time or number of trials to habituation (all ps > 0.05) (Figure 2).

Figure 2. Illustrates average total time to habituate and number of trials to habituation as a function of habituation condition.

Preliminary Analyses of Gaze Patterns

We examined general gaze patterns in the entire sample. Proportion of looking to the eye, mouth, and other regions was calculated as total gaze duration per region relative to total duration of looking at all regions (eyes, mouth, cheek/nose, and forehead) during habituation. We calculated the proportion of looking directed at the eyes (M = 0.31, SD = 0.21), mouth (M = 0.35, SD = 0.29), and other regions combined (cheek/nose and forehead, M = 0.34, SD = 0.15).1 Infants’ proportional gaze duration to the forehead and cheek/nose areas combined was reliably less than would be expected from area alone (34% observed vs. 44% of the display subtended), t(26) = 3.45, p < 0.005. Looking to the emotion-relevant eye (31% observed vs. 30% of the display subtended, ns) and mouth (35% observed vs. 26% of the display subtended, ns) regions roughly approximated what would be expected from area size alone. Gaze proportion to the mouth was inversely correlated with gaze proportion to the eyes, r(27) = −0.86, p < 0.001, as well as with gaze proportion to the other regions combined, r(27) = −0.70, p < 0.001. That is, infants who looked at the mouth tended not to look at other regions of the face (Figure 3). We found no reliable correlation between gaze proportion to the eyes and to the other regions.
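As an illustration of these measures, the sketch below computes per-infant gaze proportions and compares them against the proportions expected from AOI area alone, with SciPy’s one-sample t-test standing in for the reported comparison. The data structures are assumptions for the example; the expected values come from the AOI percentages in the Stimuli section.

```python
from scipy import stats

# Proportions of face area subtended by each AOI, from the Stimuli section:
# eyes 30%, mouth 26%, forehead + cheek/nose ("other") 44%.
EXPECTED = {"eyes": 0.30, "mouth": 0.26, "other": 0.44}

def gaze_proportions(durations):
    """durations: dict mapping AOI name -> total gaze duration (s) during
    habituation. Returns each AOI's share of total looking."""
    total = sum(durations.values())
    return {aoi: d / total for aoi, d in durations.items()}

def observed_vs_expected(per_infant_durations, aoi):
    """One-sample t-test of observed gaze proportions against the
    proportion expected from area subtended alone."""
    observed = [gaze_proportions(d)[aoi] for d in per_infant_durations]
    return stats.ttest_1samp(observed, EXPECTED[aoi])
```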

Figure 3. Depicts individual infants’ proportion of gaze to the mouth relative to the eye and separately to other (cheek/nose and forehead) regions.

We next considered whether this gaze distribution differed as a function of habituation condition, sex, and age group. A GLM repeated measures ANOVA examining differences in gaze proportions across AOIs (eye, mouth, other) as a function of habituation condition (happy vs. fearful faces), sex, and age group (6- to 8- vs. 9- to 11-month-old infants) revealed no reliable main effects or interactions (ps > 0.05). We then tested whether looking distributions differed from what would be expected based on the area subtended by each AOI as a function of age group, sex, and habituation condition. We calculated Observed − Expected (based on face area subtended) proportions for each subject for all three AOIs and repeated the ANOVA described above. We found only a main effect of AOI, F(2,38) = 3.52, p < 0.05, driven largely by difference scores being greater for the forehead and cheek/nose areas combined than for the mouth, t(26) = 2.45, p < 0.05, and eye, t(26) = 2.47, p < 0.05, regions. This indicates that proportional looking to the irrelevant regions deviated more from chance (less than expected) than did proportional looking to the relevant eye and mouth areas. Finally, the same analysis considering the proportion of scans between the eyes and mouth vs. scans between any other possible combination of facial AOIs, as a function of age group and habituation condition, revealed only a reliable main effect of scan type, F(1,19) = 18.5, p < 0.001, ηp² = 0.49. Infants made proportionally more scans between other AOI combinations (M = 0.75, SD = 0.23) than scans specifically between the emotion-relevant eye and mouth AOIs (M = 0.25, SD = 0.28). This is consistent with infants being either eye or mouth lookers (see Figure 3). We conducted the same analyses on the gaze and scan data separately for the first and last habituation trials; these revealed no reliable effects or interactions in gaze proportion or scan type as a function of age group or habituation condition.
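A hypothetical sketch of the scan tally above: each scan connects two successive fixations in different AOIs and is classified as eye–mouth when it links the two emotion-relevant regions, in either direction. The function name, AOI labels, and data structure are ours for illustration.

```python
def scan_proportions(fixation_aois):
    """fixation_aois: AOI labels ("eyes", "mouth", "cheek/nose",
    "forehead") of successive fixations, in temporal order. A scan begins
    and ends in a fixation; consecutive fixations within one AOI are not
    counted as scans. Returns (eye-mouth share, other share)."""
    eye_mouth = other = 0
    for a, b in zip(fixation_aois, fixation_aois[1:]):
        if a == b:
            continue  # movement within one AOI is not a between-region scan
        if {a, b} == {"eyes", "mouth"}:
            eye_mouth += 1
        else:
            other += 1
    total = eye_mouth + other
    if total == 0:
        return 0.0, 0.0
    return eye_mouth / total, other / total
```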

Posthabituation Test Performance

We compared duration of looking to all test displays in a 2 (Habituation Condition: happy vs. fearful faces) × 3 (Test Trial Type: FMNE, NMFE, NMNE) × 2 (Age Group) × 2 (Sex) GLM repeated measures ANOVA. This yielded a significant Age Group × Sex interaction, F(1,20) = 4.49, p < 0.05: overall looking times at test were longer for males in the 6- to 8-month group than in the 9- to 11-month group, and longer for females in the 9- to 11-month group than in the 6- to 8-month group. The analysis also yielded a main effect of Condition, F(1,20) = 5.22, p < 0.05, ηp² = 0.21, as well as a reliable Condition × Test Trial Type interaction, F(2,40) = 4.54, p < 0.05, ηp² = 0.19. Planned comparisons showed this to be driven by longer looking to the FMNE display relative to both the NMNE, t(13) = 2.083, p = 0.05, and NMFE, t(13) = 2.42, p < 0.05, test displays only in the group habituated to happy faces (Figure 4). There was no looking time difference between the NMNE and NMFE test displays. We found no looking differences between any of the test trial types in the group habituated to fearful faces. Furthermore, a direct comparison of duration of looking at test across habituation conditions revealed longer looking at the novel emotion, only when presented on the familiar model (FMNE), in the group habituated to happy relative to the group habituated to fearful faces, F(1,26) = 4.66, p < 0.05. Taken together, these data indicate longer looking at the novel emotion when the model was held constant (FMNE), and thus evidence for discrimination of facial emotions, only for infants habituated to happy faces.

Figure 4. Illustrates test display performance as a function of habituation condition (happy vs. fearful faces).

Gaze Patterns and Posthabituation Test Performance

In adults, the eyes have consistently been shown to be relevant for identifying a fearful face as such (e.g., Vinette et al., 2004). The overall distribution of gaze patterns in our sample is such that gaze to the emotion-relevant eye and mouth regions is consistent with expectation based on regional size, whereas the irrelevant cheek/nose and forehead regions receive less attention than would be expected from the area they subtend on the face. This pattern was consistent across infants habituated to happy and those habituated to fearful faces. Furthermore, there was a negative correlation between looking to the mouth and looking to the eyes, suggesting that infants were either eye or mouth lookers. The question is whether these looking patterns were related to the differences observed in relative posthabituation preferences at test. To address this, we considered the relation between gaze patterns and preference for the FMNE test display as a function of habituation condition, examining specifically whether individual differences in gaze patterns may shed light on the observed differences in FMNE performance. Relative novelty preference scores for emotion, holding model constant, were calculated as the summed duration of looking to the FMNE display divided by the total duration of looking at all displays at test. We correlated these scores (partial correlations correcting for age) with the proportion of looking at each of our previously defined AOIs. When infants were habituated to happy faces, there were no reliable correlations between novelty preference scores and duration of gaze at any one region. In the group habituated to fearful faces, gaze to the eyes was positively correlated with FMNE preference, r(13) = 0.57, p < 0.05 (Figure 5). No other correlations obtained. These data are consistent with the idea that mechanisms of information gathering, visual exploration in this case, may be most relevant when the perceiver confronts something unfamiliar or ambiguous. They also suggest that learning to discriminate facial emotions may involve the same mechanisms used for learning about other object classes (e.g., Johnson et al., 2004; Amso and Johnson, 2006).
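The preference score and the age-corrected correlation can be sketched as follows. This is a minimal illustration using the residualization approach to partial correlation with NumPy and SciPy; the function names and exact correction procedure are our assumptions, not the authors’ analysis code.

```python
import numpy as np
from scipy import stats

def fmne_preference(fmne_looking, total_test_looking):
    """Summed looking (s) to the FMNE displays divided by total looking
    at all test displays."""
    return fmne_looking / total_test_looking

def partial_corr(x, y, covariate):
    """Pearson correlation between x and y after linearly regressing the
    covariate (here, age) out of both. Returns (r, p)."""
    x, y, c = (np.asarray(v, dtype=float) for v in (x, y, covariate))
    def residualize(v):
        slope, intercept = np.polyfit(c, v, 1)  # least-squares line on age
        return v - (slope * c + intercept)
    return stats.pearsonr(residualize(x), residualize(y))
```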

Figure 5. Illustrates the correlation between proportion of gaze directed at the eyes during habituation (to both fearful and happy faces) and the relative preference for the familiar model wearing a novel emotion at test.

Discussion

We examined mechanisms that influence infants’ abilities to discriminate emotional expressions on the same model face and to generalize affective information across models. Our data provide no evidence for discrimination of emotion expressions across models (e.g., Caron et al., 1985). Consistent with previous work, we show that infants can make the simpler emotion expression discrimination, i.e., when the model is held constant, after habituation to happy but not fearful faces. Individual differences in gaze patterns provide some mechanistic insight into these findings. Specifically, duration of looking at emotion-relevant regions (eyes and mouth) on a face did not reliably correlate with preference for the novel emotion in the group of infants habituated to happy faces. In contrast, looking at emotion-relevant regions was key to identifying a happy face as novel after habituation to a fearful face: the proportion of gaze directed at the eyes of the fearful face correlated positively with this performance.

We suggest that different visual exploration strategies are relevant for discrimination when infants are gathering information about fearful and happy faces. Fearful faces may require more online information gathering resources. Some have argued that fearful faces are biologically relevant and uniquely capture infants’ attention (Peltola et al., 2009). Disambiguating the information in a fearful face may be best accomplished when infants focus attention on the emotion-relevant eye region. These data are consistent with adult work showing that the mouth and eye regions (Shepherd et al., 1981), and specifically the eyes (Vinette et al., 2004), are particularly informative with respect to face processing. Infants habituated to happy faces may have needed only to target emotion-relevant regions in a cursory manner, rather than making prolonged fixations, to retrieve some sort of stored, stable representation. These data do not suggest that mouths are irrelevant for the discrimination of happy faces, but rather that active visual exploration, more or less looking at the mouth, was not the prominent force driving differences in posthabituation preference in this age group. We can speculate that the observed pattern of results may in part be due to the greater frequency of happy relative to fearful faces, and/or the greater ambiguity of fearful relative to happy faces, in an infant’s daily experience. We had included two age groups to account for the role of experience with emotions, but were perhaps underpowered to observe such an effect (see discussion below). Regardless, further research is necessary to investigate these possibilities directly.

These data suggest an association between face processing and infants’ visual behavior, and provide evidence for real-time visual information gathering during habituation. Notably, the emotions presented on the faces did not drive looking behavior: overall analyses of gaze and scan patterns, as well as analyses of the first and last habituation trials, showed no differences as a function of habituation condition. The findings are consistent with previous work on other object classes, supporting the argument that, as with other non-face objects, successful processing of facial emotion expressions depends in part on an infant’s ability to gather the appropriate information about the object. For example, Johnson et al. (2003) and Amso and Johnson (2006) found visual exploration skills to be an important mechanism of change in object perception. Infants who showed a reliable novelty preference on an object unity task targeted scans and fixations to the relevant, informative features of the habituation display. Importantly, these relations were evident in infants who, as a group, showed no evidence of mature object perception. As in our fearful face habituation condition, individual differences in the data provided an opportunity to examine developmental process.

It remains unclear what drives some infants to target their gaze to particular facial regions. While age and habituation condition played no role, differences in visual attention skill may provide some insight in future work. Amso and Johnson (2006) found that infants with a better ability to select relevant regions of a complex visual scene, and to suppress irrelevant ones, were better able to extract the percept. Similarly, perhaps infants with better visual attention skills are more likely to override orienting to the mouth, a salient region that does not confer maximal information in the case of fearful faces. While not statistically reliable, we have some indication that infants on average directed proportionally more looks to the mouth region (35%) than would be expected from its relative size (26% of the display), and that those who directed the majority of their gaze at the mouth looked very little at other regions, including the eyes (Figure 3). It may be that attention is difficult to disengage from the mouth, given its relative salience. Previous work has shown that very young infants tend to predominantly target gaze to the mouth region (e.g., Hunnius and Geuze, 2004). Therefore, infants targeting only the mouth in our sample may have been relying on a simple visual exploration strategy. Such a strategy may suffice for gathering information about happy but not fearful faces. Notably, our data do not indicate a preference for the eyes over the mouth in infancy, as has been suggested previously (Gliga and Csibra, 2007; Peltola et al., 2009). If prolonged looking at the eyes is necessary for detecting a happy expression after habituation to fearful faces, then variability in gaze behavior (predominantly directed either at the eyes or at the mouth) in infancy may contribute to the consistent null finding that, as a group, infants cannot discriminate emotion expression after habituation to fearful faces (e.g., Nelson and Dolgin, 1985; Kestenbaum and Nelson, 1990; Kotsoni et al., 2001).

A limitation of this study is the lack of a no-change control test trial. One of our intended goals, as outlined in the Introduction, was to use gaze patterns to shed light on the fear vs. happy habituation findings. To best do this, we modeled the design after the early papers that established this effect (e.g., Nelson and Dolgin, 1985). The data, however, leave little room for the possibility that our effects are based on spontaneous recovery. First, we found no dishabituation (recovery) when infants were habituated to fearful faces. Second, we found differences in relative looking times to the three test conditions that were both interpretable and consistent with the previous literature.

A second limitation is the sample size. Although it is small, we replicated the habituation effects obtained in multiple studies finding that, when tested before their first birthday, infants habituated to fearful faces will not dishabituate to happy faces (e.g., Nelson et al., 1979; Nelson and Dolgin, 1985; Kotsoni et al., 2001). The small sample size combined with the large age range may also have resulted in insufficient power to detect age-related effects. We note, however, that our study was intended only to examine whether, as with other object classes, where infants look on a face relates to what they perceive about emotion; age was included only to constrain the influence of experience with happy and fearful emotions on this relationship. The lack of age effects in our data can thus only be cautiously interpreted to mean that any difference in experience with fearful relative to happy faces between 6- to 8-month-olds and 9- to 11-month-olds is not significantly driving either the gaze behavior or the posthabituation test performance results in this experiment.

We conclude that while infants can discriminate between expressions after exposure to happy faces, targeting the eyes during visual exploration supports the ability to do so after exposure to a fearful face. Although these data speak only to mechanisms that support discrimination of facial emotion expressions at a perceptual level, they provide insight into the low-level visual learning processes that may ultimately support understanding of visual displays of affect.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnote

  1. One infant’s eye gaze data were excluded for being >3 SDs from the mean of the sample.

References

Adolphs, R., Tranel, D., Damasio, H., and Damasio, A. R. (1995). Fear and the human amygdala. J. Neurosci. 15, 5879–5891.

Amso, D., and Johnson, S. P. (2006). Learning by selection: visual search and object perception in young infants. Dev. Psychol. 42, 1236–1245.

Caron, R. F., Caron, A. J., and Meyers, R. S. (1982). Abstraction of invariant face expressions in infancy. Child Dev. 53, 1009–1015.

Caron, R. F., Caron, A. J., and Meyers, R. S. (1985). Do infants see emotional expressions in static faces? Child Dev. 56, 1552–1560.

Cohen, L. B., Atkinson, D. J., and Chaput, H. H. (2004). Habit X: A New Program for Obtaining and Organizing Data in Infant Cognition Studies (Version 1.0). Austin: University of Texas.

Cohen, L. B., and Cashon, C. H. (2001). Do 7-month-old infants process independent features or facial configurations? Infant Child Dev. 10, 83–92.

Davis, M., and Whalen, P. J. (2001). The amygdala: vigilance and emotion. Mol. Psychiatry 6, 13–34.

Ekman, P. (1984). “Expression and the nature of emotion,” in Approaches to Emotion, eds K. Scherer and P. Ekman (Hillsdale, N.J.: Lawrence Erlbaum), 319–343.

Gliga, T., and Csibra, G. (2007). Seeing the face through the eyes: a developmental perspective on face expertise. Prog. Brain Res. 164, 323–339.

Hayden, A., Bhatt, R. S., Reed, A., Corby, C. R., and Joseph, J. E. (2007). The development of expert face processing: are infants sensitive to normal differences in second-order relational information? J. Exp. Child Psychol. 97, 85–98.

Hunnius, S., and Geuze, R. H. (2004). Developmental changes in visual scanning of dynamic faces and abstract stimuli in infants: a longitudinal study. Infancy 6, 231–255.

Johnson, S. P. (2004). Development of perceptual completion in infancy. Psychol. Sci. 15, 769–775.

Johnson, S. P., Amso, D., and Slemmer, J. A. (2003). Development of object concepts in infancy: evidence for early learning in an eye tracking paradigm. Proc. Natl. Acad. Sci. U.S.A. 100, 10568–10573.

Johnson, S. P., and Aslin, R. N. (1995). Perception of object unity in 2-month-old infants. Dev. Psychol. 31, 739–745.

Johnson, S. P., and Aslin, R. N. (1996). Perception of object unity in young infants: the roles of motion, depth, and orientation. Cogn. Dev. 11, 161–180.

Johnson, S. P., Slemmer, J. A., and Amso, D. (2004). Where infants look determines how they see: eye movements and object perception performance in 3-month-olds. Infancy 6, 185–201.

Kellman, P. J., and Spelke, E. S. (1983). Perception of partly occluded objects in infancy. Cogn. Psychol. 15, 483–524.

Kestenbaum, R., and Nelson, C. A. (1990). The recognition and categorization of upright and inverted emotional expressions by 7-month-old infants. Infant Behav. Dev. 13, 497–511.

Kotsoni, E., de Haan, M., and Johnson, M. H. (2001). Categorical perception of facial expressions by 7-month-old infants. Perception 30, 1115–1125.

LeGrand, R., Mondloch, C. J., Maurer, D., and Brent, H. P. (2001). Early visual experience and face processing. Nature 410, 890.

Maurer, D., and Salapatek, P. (1976). Developmental change in the scanning of faces by young infants. Child Dev. 47, 523–527.

Merin, N., Young, G. S., Ozonoff, S., and Rogers, S. J. (2006). Visual fixation patterns during reciprocal social interaction distinguish a subgroup of 6-month-old infants at risk for autism from comparison infants. J. Autism Dev. Disord. 37, 108–121.

Nelson, C. A. (2001). The development and neural bases of face recognition. Infant Child Dev. 10, 3–18.

Nelson, C. A., and Dolgin, K. (1985). The generalized discrimination of facial expressions by 7-month-old infants. Child Dev. 56, 58–61.

Nelson, C. A., Morse, P. A., and Leavitt, L. A. (1979). Recognition of facial expressions by 7-month-old infants. Child Dev. 50, 1239–1242.

Peltola, M. J., Leppanen, J. M., Vogel-Farley, V. K., Hietanen, J. K., and Nelson, C. A. (2009). Fearful faces but not fearful eyes alone delay attention disengagement in 7-month-old infants. Emotion 9, 560–565.

Serrano, J. M., Iglesias, J., and Loeches, A. (1992). Visual discrimination and recognition of facial expressions of anger, fear and surprise in 4- to 6-month-old infants. Dev. Psychobiol. 25, 411–425.

Shepherd, J. W., Davies, G. M., and Ellis, H. D. (1981). “Studies of cue saliency,” in Perceiving and Remembering Faces, eds G. Davies, H. Ellis, and J. Shepherd (London: Academic Press), 105–131.

Slater, A., Morison, V., Somers, M., Mattock, A., Brown, E., and Taylor, D. (1990). Newborn and older infants’ perception of partly occluded objects. Infant Behav. Dev. 13, 33–49.

Tottenham, N., Hare, T. A., and Casey, B. J. (2009). “A developmental perspective on human amygdala function,” in The Human Amygdala, eds E. Phelps and P. Whalen (New York: Guilford Press), 107–117.

Tottenham, N., Tanaka, J., Leon, A. C., McCarry, T., Nurse, M., Marcus, D. J., Westerlund, A., Casey, B. J., and Nelson, C. A. (2009). The NimStim set of facial expressions: judgments from untrained research participants. Psychiatry Res. 168, 242–249.

Vinette, C., Gosselin, F., and Schyns, P. G. (2004). Spatio-temporal dynamics of face recognition in a flash: it’s in the eyes! Cogn. Sci. 28, 289–301.

Young-Browne, G., Rosenfeld, H. M., and Horowitz, F. D. (1977). Infant discrimination of facial expressions. Child Dev. 48, 555–562.

Younger, B. A., and Cohen, L. B. (1986). Developmental change in infants’ perception of correlation among attributes. Child Dev. 57, 803–815.

Keywords: infancy, visual exploration, face perception, emotion expression

Citation: Amso D, Fitzgerald M, Davidow J, Gilhooly T and Tottenham N (2010). Visual exploration strategies and the development of infants’ facial emotion discrimination. Front. Psychology 1:180. doi: 10.3389/fpsyg.2010.00180

Received: 02 June 2010; Paper pending published: 07 June 2010;
Accepted: 08 October 2010; Published online: 01 November 2010.

Edited by:

Susan M. Rivera, University of California, USA

Reviewed by:

Lisa Oakes, University of California, Davis, USA
Sara Jane Webb, University of Washington, USA

Copyright: © 2010 Amso, Fitzgerald, Davidow, Gilhooly and Tottenham. This is an open-access article subject to an exclusive license agreement between the authors and the Frontiers Research Foundation, which permits unrestricted use, distribution, and reproduction in any medium, provided the original authors and source are credited.

*Correspondence: Dima Amso, Cognitive, Linguistic, & Psychological Sciences, Brown University, 229 Waterman Street, Providence, RI 02912, USA. e-mail: dima_amso@brown.edu
