Brain activation evoked by perception of gaze shifts: the influence of context
Introduction
From the earliest stages of postnatal development, faces are salient to typically developing individuals [20], [37]. Faces derive their significance, in part, from the wealth of social information they provide. This information includes the bearer’s identity [9], emotional state [6], [16], intentions [4], [5], and focus of attention [41], [42]. The capacity to extract socially relevant information from faces is fundamental to normal reciprocal social interactions and interpersonal communication. Of the core internal facial features (i.e. eyes, nose, and mouth), the eyes are thought to provide the most critical information and preferentially draw a viewer’s attention [17], [42]. Adult viewers devote 70% or more of their fixations to the eyes [43], [49], [65]. This pattern of face scanning emerges as early as the second month of postnatal life [25], [45], and is disturbed in schizophrenia [53] and autism [50]. Information regarding direction of gaze is thought to be particularly important in guiding social interactions [4], [40]. Gaze can provide information concerning the mental states of others, facilitate social control, regulate turn taking, direct attention, and communicate intimacy [3], [17], [40]. Sensitivity to gaze direction emerges early in ontogeny. For example, infants detect direction of perceived gaze, and modulate their own attention accordingly [18], [32], [60], [63].
Recent neurofunctional models of the human face processing system distinguish cortical regions involved in processing invariant (i.e. those that carry information about identity) characteristics of faces from those regions involved in processing dynamic (i.e. those that facilitate communication) aspects of faces [27], [28], [46], [55]. For example, McCarthy [46] identified four nodes of the human face processing system. Two of these nodes, the lateral posterior fusiform gyrus (FFG) and anterior ventral temporal cortex, are involved, respectively, in structural encoding and face memory. A third node, centered in the superior temporal sulcus (STS), is involved in the analysis of face motion, such as eye and mouth movements. The remaining node, located in the amygdala, is involved in the analysis of facial expression.
Allison et al. [1] used the term “STS region” to refer to cortex within the STS, to adjacent cortex on the surface of the superior temporal gyrus and middle temporal gyrus (near the straight segment of the STS), and to adjoining cortex on the surface of the angular gyrus (near the ascending limb of the STS). Several sources of evidence have converged to indicate that the STS region is involved in the perception of gaze direction. This role was suggested initially by experimental studies of nonhuman primates [11], [26], [30], [51], [52], [67] and neuropsychological studies of human lesion patients [11]. More recently, functional neuroimaging and electrophysiology studies have started to enhance our knowledge of the STS region’s involvement in processing gaze direction.
Using functional magnetic resonance imaging (fMRI), Puce et al. [55] first identified a bilateral region of activation centered in the posterior STS in response to observed eye or mouth movements, but not in response to an inwardly moving radial pattern presented to control for effects related to movement per se. With event-related potential (ERP) recordings, Bentin et al. [7] demonstrated that an N170 ERP recorded from scalp electrodes overlying the STS was larger when evoked by isolated eyes than by whole faces or other face components. Puce et al. [56] further demonstrated that the N170 ERP was larger in response to eyes averting their gaze away from the viewer than to eyes returning to gaze at the observer. In a positron emission tomography (PET) study, Wicker et al. [66] identified several regions of activation in response to mutual and averted gaze, including portions of the STS. Finally, using fMRI, Hoffman and Haxby [31] demonstrated that attention to gaze elicited a stronger response in STS than did attention to identity. Note that some of these studies used static stimuli that varied in direction of gaze [10], [31], [39], [66], while others used dynamic stimuli in which the eyes moved [55], [56].
Research concerning the role of the STS region in processing eye movements is fundamental to our understanding of the neurofunctional organization of the human face processing system. However, this line of inquiry is equally significant for its potential to provide information about the neuroanatomical systems underlying social perception and social cognition [8], [19]. Allison et al. [1] defined social perception as the initial stages of evaluating the intentions of others by analysis of gaze direction, body movement, and other types of biological motion, and stressed the role of the STS region in a larger social perception system involved in processing the emotional value and social significance of biological stimuli.
Baron-Cohen [4] has defined a four-component neuropsychological model of a “mindreading” system, whereby we acquire information from the face of another person during shared attention, use this information to attribute a mental state to the person, and then predict that individual’s behavior from his or her inferred mental state. In Baron-Cohen’s model, the “intentionality detector” (ID) detects self-propelled moving stimuli and allows us to interpret their movement in terms of simple volitional mental states (e.g. goals and desires). An “eye-direction detector” (EDD) perceives the presence of eyes and the direction of their gaze and attributes the mental state of seeing to the owner of those eyes. These two components are linked together by a “shared attention module” (SAM), which supports the identification of occasions when the self and another agent are attending to the same stimulus. Lastly, by integrating data from the three previous components, the “theory-of-mind mechanism” (ToMM) allows us to ascribe a mental state to another individual on the basis of information gathered during shared attention, and then to explain or predict that individual’s behavior in terms of the inferred mental state. Information concerning the role of the STS region in processing eye movements is particularly significant for Baron-Cohen’s model, because two components of the mindreading system, the EDD and the SAM, rely on normal gaze shift perception, and the ToMM, in turn, relies upon data from these two components.
In prior functional neuroimaging studies concerned with gaze perception, the stimulus face gazed towards empty space [31], [55], [66]. Thus, it is not clear whether the identified brain regions participated merely in simple gaze detection or in a more complex analysis related to the context in which the gaze shift occurred. Here, by providing a target for the gaze shift, we investigated whether regions activated by the perception of gaze are modulated by the context of the observed gaze shift. Participants observed an animated female character as a visual target appeared within the character’s visual field at regular intervals. The character either made no gaze shift to the target, shifted gaze to the target after a 1 or 3 s delay, or shifted gaze to an empty location in space after the same delay. This allowed us to determine whether activity within the face processing system is influenced by the perceived intention or goal of the action, and whether a gaze shift toward an object produces a different pattern of activity than an identical gaze shift toward empty space. That is, we were interested in determining whether elements of the face processing system are sensitive to the social relevance of a biological motion: whether the action is intentional and goal-directed within the established context.
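The design described above implies five stimulus conditions, which can be enumerated in a short sketch (the condition labels and variable names are ours, chosen for illustration; the paper does not name them):

```python
from itertools import product

def build_conditions():
    """Enumerate the five trial types described above.

    A visual target always appears in the character's visual field; the
    character then either makes no gaze shift, shifts gaze to the target,
    or shifts gaze to an empty location, after a 1 s or 3 s delay.
    Labels are illustrative, not the authors' own.
    """
    conditions = [{"label": "no_shift", "goal": None, "delay_s": None}]
    for goal, delay in product(("target", "empty_space"), (1.0, 3.0)):
        conditions.append({"label": f"shift_to_{goal}_{delay:g}s",
                           "goal": goal, "delay_s": delay})
    return conditions

CONDITIONS = build_conditions()
```

Enumerating the conditions this way makes explicit that target appearance is common to all trials, so any condition differences must arise from the gaze shift and its context rather than from the target itself.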
In addition to investigating context-dependent activity within the STS, we also wished to examine the effects of perceived gaze in other brain regions. In a similarly designed pilot study with eight subjects, we observed, in addition to STS activity, activations related to gaze perception in the intraparietal sulcus (IPS) and FFG. However, that study employed a constant 1 s delay between the appearance of the target and the gaze shift, so it was uncertain whether the observed activity reflected processing of the visual target or of the gaze shift. We predicted that varying the delay interval between target and gaze shift would result in a systematic change in the latency to peak amplitude of the gaze-related hemodynamic response (HDR).
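The logic of this prediction can be illustrated with a minimal simulation. Assuming a canonical double-gamma hemodynamic impulse response (our modeling assumption for illustration; the paper does not specify a response model), a gaze shift occurring 2 s later should shift the peak of the gaze-evoked response by the same 2 s, whereas a target-locked response would peak at a fixed latency:

```python
import numpy as np
from scipy.stats import gamma

dt = 0.1                      # sampling step in seconds
t = np.arange(0, 30, dt)      # peristimulus time, target onset at t = 0

def hrf(x):
    """Canonical double-gamma hemodynamic impulse response
    (SPM-style shape parameters 6 and 16; zero for x < 0)."""
    return gamma.pdf(x, 6) - gamma.pdf(x, 16) / 6.0

def peak_latency(gaze_onset_s):
    """Latency (s) from target onset to the peak of the response
    evoked by a gaze shift occurring `gaze_onset_s` after the target."""
    response = hrf(t - gaze_onset_s)
    return t[np.argmax(response)]

latency_1s = peak_latency(1.0)   # gaze shift 1 s after target onset
latency_3s = peak_latency(3.0)   # gaze shift 3 s after target onset
# A 2 s difference in gaze-shift delay reproduces as a 2 s shift in
# the latency of the peak response, distinguishing gaze-locked from
# target-locked activity.
```

If an identified region were driven by the target rather than the gaze shift, `peak_latency` would be invariant across the two delay conditions; this is exactly the dissociation the varied delay was designed to test.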
Finally, we were concerned that any observed differences in activity might not result from true differences in gaze processing, but rather from differences in the way that participants viewed the stimuli. For example, participants might move their own eyes more in one condition than another, and this differential eye movement might be related to a participant’s experience with the task. We therefore conducted a parallel study outside of the scanner in which we recorded the visual scanpaths of naïve and experienced volunteers in response to the stimuli used in the current fMRI study. The point-of-regard (POR) recordings allowed us to address these two potential confounds.
Participants
Fifteen healthy right-handed volunteers participated in the fMRI experiment (8 male, 7 female; age range 19–30 years; mean age 24.2 years). Eight healthy right-handed volunteers (4 female; age range 22–33 years; mean age 27.1 years) participated in the eye movement monitoring study. Of the eight subjects in this latter study, four (2 female) also participated in the fMRI experiment. All participants had normal or corrected-to-normal visual acuity. The studies were approved by the Institutional Review Board.
Eye movement results
The results of a 2 (experience) × 5 (stimulus condition) mixed ANOVA confirmed that the average amount of eye movements did not differ by stimulus condition or by experience with the stimuli, and the interaction between these two factors was not significant. These results indicate that differences in functional activation across stimulus conditions cannot be explained by disparities in the total amount of eye movements made by the subjects during stimulus viewing. The absence of a difference in the amount of eye movements between naïve and experienced viewers likewise indicates that familiarity with the stimuli did not alter viewing behavior.
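A 2 × 5 mixed ANOVA of this form can be sketched as follows on simulated data (the sums-of-squares decomposition is the textbook one for one between-subjects and one within-subjects factor with equal group sizes; the data, group sizes, and built-in effect are invented for illustration):

```python
import numpy as np
from scipy.stats import f as f_dist

def mixed_anova(y):
    """2 (between: experience) x k (within: condition) mixed ANOVA.

    `y` has shape (groups, subjects_per_group, conditions); equal group
    sizes are assumed. Returns {effect: (F, p)} for the between factor,
    the within factor, and their interaction.
    """
    g, n, k = y.shape
    N = g * n
    gm = y.mean()
    subj_means = y.mean(axis=2)            # per-subject means, (g, n)
    group_means = y.mean(axis=(1, 2))      # (g,)
    cond_means = y.mean(axis=(0, 1))       # (k,)
    cell_means = y.mean(axis=1)            # group x condition, (g, k)

    ss_total = ((y - gm) ** 2).sum()
    ss_between_subj = k * ((subj_means - gm) ** 2).sum()
    ss_group = n * k * ((group_means - gm) ** 2).sum()
    ss_subj_within = ss_between_subj - ss_group      # between-factor error
    ss_cond = N * ((cond_means - gm) ** 2).sum()
    ss_cells = n * ((cell_means - gm) ** 2).sum()
    ss_inter = ss_cells - ss_group - ss_cond
    ss_error = ss_total - ss_between_subj - ss_cond - ss_inter

    df_group, df_subj = g - 1, N - g
    df_cond, df_inter, df_err = k - 1, (g - 1) * (k - 1), (N - g) * (k - 1)

    F_group = (ss_group / df_group) / (ss_subj_within / df_subj)
    F_cond = (ss_cond / df_cond) / (ss_error / df_err)
    F_inter = (ss_inter / df_inter) / (ss_error / df_err)
    p = lambda F, d1, d2: float(f_dist.sf(F, d1, d2))
    return {"experience": (F_group, p(F_group, df_group, df_subj)),
            "condition": (F_cond, p(F_cond, df_cond, df_err)),
            "interaction": (F_inter, p(F_inter, df_inter, df_err))}

# Invented data: 2 groups x 4 subjects x 5 conditions, with a built-in
# condition effect so only the within-subjects factor should be significant.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=(2, 4, 5)) + np.arange(5)
results = mixed_anova(y)
```

Note the two error terms: the between-subjects factor is tested against subjects-within-groups variance, while the within-subjects factor and the interaction are tested against the subject-by-condition residual.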
Discussion
The current findings confirm the results of previous studies that reported activation in discrete brain regions elicited by the perception of a gaze shift. The present research extends prior work by demonstrating that the context in which an eye movement occurs modulates activity in brain regions associated with gaze shift perception. The pattern of results observed in portions of the STS, IPS, and FFG was similar, and thus a single explanation may account for activation in all of these regions.
Acknowledgements
We are grateful to Jeremy Goldstein, Lilly Kinross-Wright, Karen Emberger, Sarah Hart, and Ronald Viola for assistance in data acquisition and analysis and manuscript preparation. We thank Dr. Martin J. McKeown for developing the software used in aligning each subject’s anatomical images to a common space. We thank Dr. Gary Glover of Stanford University for providing source code for the spiral pulse sequence. We thank Drs. Allen Song and James Voyvodic for assistance with several aspects of this study.
References (67)
- et al. Social perception from visual cues: role of the STS region. Trends in Cognitive Sciences (2000)
- et al. Event-related potentials, lexical decision and semantic priming. Electroencephalography and Clinical Neurophysiology (1985)
- et al. Reading the mind from eye gaze. Neuropsychologia (2002)
- et al. Sensitivity to eye gaze in prosopagnosic patients and monkeys with superior temporal sulcus ablation. Neuropsychologia (1990)
- et al. A common network of functional areas for attention and eye movements. Neuron (1998)
- The eyes have it: the neuroethology, function and evolution of social gaze. Neuroscience and Biobehavioral Reviews (2000)
- et al. The distributed human neural system for face perception. Trends in Cognitive Sciences (2000)
- et al. Distinguishing the functional roles of multiple regions in distributed neural systems for visual working memory. NeuroImage (2000)
- et al. The effect of face inversion on activity in human neural systems for face and object perception. Neuron (1999)
- et al. Dissociating top–down attentional control from selective perception and action. Neuropsychologia (2001)