Research Article | New Research | Cognition and Behavior

Idiosyncratic, Retinotopic Bias in Face Identification Modulated by Familiarity

Matteo Visconti di Oleggio Castello, Morgan Taylor, Patrick Cavanagh and M. Ida Gobbini
eNeuro 1 October 2018, 5 (5) ENEURO.0054-18.2018; DOI: https://doi.org/10.1523/ENEURO.0054-18.2018
Matteo Visconti di Oleggio Castello
1Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755
Morgan Taylor
1Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755
Patrick Cavanagh
1Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755
2Department of Psychology, Glendon College, Toronto, Ontario M4N 3M6, Canada
M. Ida Gobbini
1Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755
3Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale, University of Bologna, Bologna 40100, Italy

Abstract

The perception of gender and age of unfamiliar faces is reported to vary idiosyncratically across retinal locations such that, for example, the same androgynous face may appear to be male at one location but female at another. Here, we test spatial heterogeneity for the recognition of the identity of personally familiar faces in human participants. We found idiosyncratic biases that were stable within participants and that varied more across locations for less familiar as compared to highly familiar faces. These data suggest that, like face gender and age, face identity is processed, in part, by independent populations of neurons monitoring restricted spatial regions and that the recognition responses vary for the same face across these different locations. Moreover, repeated and varied social interactions appear to lead to adjustments of these independent face recognition neurons so that the same familiar face is eventually more likely to elicit the same recognition response across widely separated visual field locations. We provide a mechanistic account of this reduced retinotopic bias based on computational simulations.

  • face processing
  • familiar faces
  • familiarity
  • retinotopy
  • social
  • vision

Significance Statement

In this work, we tested spatial heterogeneity for the recognition of personally familiar faces. We found retinotopic biases that varied more across locations for less familiar as compared to highly familiar faces. The retinotopic biases were idiosyncratic and stable within participants. Our data suggest that, like face gender and age, face identity is processed by independent populations of neurons monitoring restricted spatial regions and that recognition may vary for the same face at these different locations. Unlike previous findings, our data and computational simulation address the effects of learning and show how increased familiarity modifies the representation of face identity in face-responsive cortical areas. This new perspective has broader implications for understanding how learning optimizes visual processes for socially salient stimuli.

Introduction

We spend most of our days interacting with acquaintances, family, and close friends. Because of these repeated and protracted interactions, the representation of personally familiar faces is rich and complex, as reflected by stronger and more widespread neural activation in the distributed face processing network, as compared to responses to unfamiliar faces (Gobbini and Haxby, 2007; Taylor et al., 2009; Gobbini, 2010; Natu and O’Toole, 2011; Bobes et al., 2013; Sugiura, 2014; Ramon and Gobbini, 2018; Visconti di Oleggio Castello et al., 2017a). Differences in representations are also reflected in faster detection and more robust recognition of familiar faces (Burton et al., 1999; Gobbini et al., 2013; Ramon et al., 2015; Visconti di Oleggio Castello and Gobbini, 2015; Guntupalli and Gobbini, 2017; Visconti di Oleggio Castello et al., 2017b).

The advantage for familiar faces could originate at different stages of the face processing system. The classic psychological model by Bruce and Young (1986) posits that recognition of familiar faces occurs when the structural encoding of a perceived face matches stored representations (Bruce and Young, 1986). In this model, the stored representations of familiar faces consist of “an interlinked set of expression-independent structural codes for distinct head angles, with some codes reflecting the global configuration at each angle and others representing particular distinctive features” (Bruce and Young, 1986, p 309). Behavioral evidence supports the hypothesis that local features are processed differentially for personally familiar faces. For example, in a study of perception of gaze direction and head angle, changes in eye gaze were detected around 100 ms faster in familiar than in unfamiliar faces (Visconti di Oleggio Castello and Gobbini, 2015). In another study, the advantage for personally familiar faces was maintained after face inversion, a manipulation that is generally thought to reduce holistic processing in favor of local processing (Visconti di Oleggio Castello et al., 2017b).

Taken together, these results suggest that optimized processing of personally familiar faces could rely on local features. This could be sufficient to initially drive a differential response to personally familiar faces. In a study measuring saccadic reaction time, correct and reliable saccades to familiar faces were recorded as fast as 180 ms when unfamiliar faces were distractors (Visconti di Oleggio Castello and Gobbini, 2015). In an EEG study using multivariate analyses, significant decoding of familiarity could be detected at around 140 ms from stimulus onset (Barragan-Jason et al., 2015). At such short latencies, it is unlikely that a viewpoint-invariant representation of an individual face’s identity drives these differential responses. To account for facilitated, rapid detection of familiarity, we have previously hypothesized that personally familiar faces may be recognized quickly based on diagnostic, idiosyncratic features, which become highly learned through extensive personal interactions (Visconti di Oleggio Castello and Gobbini, 2015; Visconti di Oleggio Castello et al., 2017b). Detection of these features may occur early in the face-processing system, allowing an initial, fast differential processing for personally familiar faces.

Processes occurring at early stages of the visual system can show idiosyncratic retinotopic biases (Greenwood et al., 2017). Afraz et al. (2010) reported retinotopic biases for perceiving face gender and age that varied depending on stimulus location in the visual field and were specific to each subject. These results suggest that diagnostic facial features for gender and age are encoded in visual areas with limited position invariance. Neuroimaging studies have shown that face-processing areas such as OFA, pFus, and mFus have spatially restricted population receptive fields (pRFs) that could result in retinotopic differences (Kay et al., 2015; Silson et al., 2016; Grill-Spector et al., 2017b). In addition, local facial features activate the OFA (and the putative monkey homolog PL; Issa and DiCarlo, 2012): responses to face parts are stronger when they are presented in typical locations (de Haas et al., 2016), and population activity in the OFA codes the position and relationship between face parts (Henriksson et al., 2015).

Here, we hypothesized that detectors of diagnostic visual features that play a role in identification of familiar faces may also show idiosyncratic retinotopic biases and that these biases may be tuned by repeated interactions with personally familiar faces. Such biases may affect recognition of the identities presented in different parts of the visual field and may be modulated by the familiarity of those identities. We tested this hypothesis by presenting participants with morphed stimuli of personally familiar individuals that were briefly shown at different retinal locations. In two separate experiments we found that participants showed idiosyncratic biases for specific identities in different visual field locations, and these biases were stable on retesting after weeks. Importantly, the range of the retinal biases was inversely correlated with the reported familiarity of each target identity, suggesting that prolonged personal interactions with the target individuals reduced retinal biases.

We hypothesized that these biases could arise because neurons in face-processing areas have restricted receptive fields centered around the fovea (Afraz et al., 2010; Kay et al., 2015; Silson et al., 2016), resulting in an incomplete coverage of the visual field. Thus, identifying a particular face at different peripheral locations would rely on independent populations tuned to that face that cover a limited portion of the visual field biased toward the foveal region, leading to variations in identification across locations. To test this mechanism, we created a computational simulation in which increased familiarity with a specific identity resulted in changes of neural properties of the units responsive to that particular face. By either increasing the number of units responsive to a face or by increasing the receptive field size of those units, this simple learning mechanism accounted for the reduced biases reported in the two experiments, providing testable hypotheses for future work.

These findings support the hypothesis that asymmetries in the processing of personally familiar faces can arise at stages of the face-processing system where there is reduced position invariance and where local features are being processed, such as in OFA or perhaps even earlier. Our behavioral results show that prolonged, personal interactions can modify the neural representation of faces at this early level of processing, and our computational simulation provides a simple account of how this learning process can be implemented at the neural level.

Materials and Methods

Stimuli

Pictures of the faces of individuals who were personally familiar to the participants (graduate students in the same department) were taken in a photo studio room with the same lighting condition and the same camera. Images of two individuals were used for experiment 1, and images of three individuals were used for experiment 2. All individuals portrayed in the stimuli signed written informed consent for the use of their pictures for research and in publications.

The images were converted to grayscale, resized, and centered so that the eyes were aligned in the same position across identities, and the background was manually removed. These operations were performed using ImageMagick and Adobe Photoshop CS4. The resulting images were matched in luminance (average pixel intensity) using the SHINE toolbox (function lumMatch; Willenbockel et al., 2010) after applying an oval mask, so that only pixels belonging to the face were modified. The luminance-matched images were then used to create morph continua (between two identities in experiment 1 and among three identities in experiment 2) using Abrosoft Fantamorph (v. 5.4.7) with seven percentages of morphing: 0, 17, 33, 50, 67, 83, and 100.
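These preprocessing steps can be sketched as follows. This is a simplified illustration on synthetic arrays, using an additive mean-luminance match inside an oval mask and a pixelwise cross-dissolve; the SHINE toolbox and Fantamorph implement more sophisticated histogram matching and geometric warping.

```python
import numpy as np

def lum_match(images, mask):
    """Additively equalize mean pixel intensity inside a mask (in the spirit of SHINE's lumMatch)."""
    means = [img[mask].mean() for img in images]
    target = float(np.mean(means))
    out = []
    for img, m in zip(images, means):
        res = img.astype(float).copy()
        res[mask] += target - m          # shift only the masked (face) pixels
        out.append(np.clip(res, 0, 255))
    return out

def cross_dissolve(a, b, m):
    """Pixelwise morph at fraction m toward b (Fantamorph additionally warps geometry)."""
    return (1 - m) * a + m * b

# synthetic stand-ins for two grayscale face photographs
rng = np.random.default_rng(0)
face_a, face_b = rng.integers(60, 200, (2, 64, 64)).astype(float)
yy, xx = np.mgrid[:64, :64]
oval = ((yy - 32) / 30) ** 2 + ((xx - 32) / 22) ** 2 <= 1
a_m, b_m = lum_match([face_a, face_b], oval)
morphs = [cross_dissolve(a_m, b_m, m) for m in (0, .17, .33, .50, .67, .83, 1.0)]
```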

Experiment 1

Paradigm

The experimental paradigm was similar to that by Afraz et al. (2010). In every trial, participants would see a briefly flashed image in one of eight locations at the periphery of their visual field (Fig. 1). Each image was shown for 50 ms at a distance of 7° of visual angle from the fixation point, and subtended ∼4° × 4° of visual angle. The images could appear in one of eight locations evenly spaced by 45 angular degrees around fixation. For experiment 1, only the morph ab was used (Fig. 1). Participants were required to maintain fixation on a central red dot subtending ∼1° of visual angle.

Figure 1.

Experimental paradigm. The left panel shows the experimental paradigm, while the right panel shows the locations used in experiment 1 (eight locations, top panel) and in experiment 2 (four locations, bottom panel).

After the image disappeared, participants reported which identity they saw using the left (identity a) and right (identity b) arrow keys. There was no time limit for responding, and participants were asked to be as accurate as possible. After responding, participants had to press the spacebar key to continue to the next trial.

Participants performed five blocks containing 112 trials each, for a total of 560 trials. In each block, all the images appeared twice for every angular location (eight angular locations × seven morph percentages × 2 = 112). This provided ten data points for each percentage morphing at each location, for a total of 70 trials at each angular location.
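This design arithmetic can be checked with a short enumeration (a sketch; the condition labels are placeholders, not the actual stimuli):

```python
from itertools import product

n_locations, n_morph_levels, repeats_per_block, n_blocks = 8, 7, 2, 5
# one block: every (location, morph level) pair appears twice
block = list(product(range(n_locations), range(n_morph_levels))) * repeats_per_block
trials_per_block = len(block)                           # 8 x 7 x 2 = 112
total_trials = trials_per_block * n_blocks              # 560
points_per_cell = repeats_per_block * n_blocks          # 10 data points per morph level at each location
trials_per_location = points_per_cell * n_morph_levels  # 70 trials at each angular location
```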

Before the experimental session participants were shown the identities used in the experiment (corresponding to 0% and 100% morphing; Fig. 2), and practiced the task with 20 trials. These data were discarded from the analyses. Participants performed two identical experimental sessions at least four weeks apart.

Figure 2.

Stable and idiosyncratic biases in identification in experiment 1. A, Psychometric fit for two subjects from both sessions. Colors indicate location (see colors in bottom left corner); actual data (points) are shown only for the extreme locations to avoid visual clutter. B, The parameter estimates across sessions (at least 33 d apart) were stable (r = 0.71 [0.47, 0.84]; Table 1). Dots represent individual parameter estimates for each location, color coded according to each subject. Correlations were performed on the data shown in this panel. C, Example morphs used in the experiment. Note that the morphs depicted here are shown for illustration only, and participants saw morphs of identities that were personally familiar to them.

Participants sat at a distance of ∼50 cm from the screen, with their chin positioned on a chin-rest. The experiment was run using Psychtoolbox (Kleiner et al., 2007; version 3.0.12) in MATLAB (R2014b). The screen operated at a resolution of 1920 × 1200 and a 60-Hz refresh rate.

Subjects

We recruited six subjects for this experiment (three males, including one of the authors, M.V.d.O.C.). The sample size for experiment 1 was not determined by formal estimates of power and was limited by the availability of participants familiar with the stimulus identities. After the first experimental session, two participants (one male, one female) performed at chance level in the task; thus, only data from four subjects (two males, mean age 27.50 ± 2.08 SD) were used for the final analyses.

All subjects had normal or corrected-to-normal vision, and provided written informed consent to participate in the experiment. The study was approved by the Dartmouth College Committee for the Protection of Human Subjects.

Experiment 2

Paradigm

Experiment 2 differed from experiment 1 in the following parameters (Figs. 1, 3): (1) three morph continua (ab, ac, bc) instead of one; (2) images appeared in four locations (45°, 135°, 225°, 315°) instead of eight; (3) images were shown for 100 ms instead of 50 ms to make the task easier.

Figure 3.

Stable and idiosyncratic biases in identification in experiment 2. A, Psychometric fit for one subject from both sessions for each of the morphs. Colors indicate location (see colors in bottom left corner); actual data (points) are shown only for the extreme locations to avoid visual clutter. B, The parameter estimates across sessions (at least 28 d apart) were stable (r = 0.64 [0.5, 0.75]; Table 1). Dots represent individual parameter estimates for each location, color coded according to each participant. Correlations were performed on the data shown in this panel. C, Example morphs used in the experiment. Note that the morphs depicted here are shown only for illustration (participants saw morphs of identities who were personally familiar).

All other parameters were the same as in experiment 1. Participants had to indicate which of the three identities they saw by pressing the left (identity a), right (identity b), or down (identity c) arrow keys.

Participants performed 10 blocks containing 84 trials each, for a total of 840 trials. In each block, all the images appeared once for every angular location (four angular locations × seven morph percentages × three morphs = 84). We used 70 data points at every angular location to fit the model for each pair of identities. Thus, we used the responses to different unmorphed images for each pair of identities, ensuring independence of the models.

Before the experimental session participants were shown the identities used in the experiment (corresponding to 0% and 100% morphing; Fig. 3), and practiced the task with 20 trials. These data were discarded from the analyses. Participants performed two experimental sessions at least four weeks apart.

Subjects

Ten participants (five males, mean age 27.30 ± 1.34 SD) participated in experiment 2, five of whom had also been recruited for experiment 1. No authors participated in experiment 2. The sample size (n = 10) was determined using G*Power3 (Faul et al., 2007, 2009) to obtain 80% power at α = 0.05, based on the correlation of the PSE estimates across sessions in experiment 1, using a bivariate normal model (one-tailed).

All subjects had normal or corrected-to-normal vision, and provided written informed consent to participate in the experiment. The study was approved by the Dartmouth College Committee for the Protection of Human Subjects.

Familiarity and contact scales

After the two experimental sessions, participants completed a questionnaire designed to assess how familiar each participant was with the identities shown in the experiment. Participants saw each target identity and were asked to complete various scales for that identity. The questionnaire comprised the Inclusion of the Other in the Self (IOS) scale (Aron et al., 1992; Gächter et al., 2015), the Subjective Closeness Inventory (SCI; Berscheid et al., 1989), and the We-scale (Cialdini et al., 1997). The IOS scale showed two increasingly overlapping circles labeled “you” and “X,” and participants were given the following instructions: “Using the figure below, select which pair of circles best describes your relationship with this person. In the figure, X serves as a placeholder for the person shown in the image at the beginning of this section, and you should think of X being that person. By selecting the appropriate number please indicate to what extent you and this person are connected” (Aron et al., 1992; Gächter et al., 2015). The SCI comprised the following two questions: “Relative to all your other relationships (both same and opposite sex), how would you characterize your relationship with the person shown at the beginning of this section?” and “Relative to what you know about other people’s close relationships, how would you characterize your relationship with the person shown at the beginning of this section?” Participants responded with a number between one (not close at all) and seven (very close; Berscheid et al., 1989). The We-scale comprised the following question: “Please select the appropriate number below to indicate to what extent you would use the term ‘WE’ to characterize you and the person shown at the beginning of this section.” Participants responded with a number between one (not at all) and seven (very much so). For each participant and each identity, we created a composite “familiarity score” by averaging the scores on the three scales.

We also introduced a scale aimed at estimating the amount of interaction or contact between the participant and the target identity. The scale was based on the work by Idson and Mischel (2001), and participants were asked to respond yes/no to the following six questions: “Have you ever seen him during a departmental event?”; “Have you ever seen him during a party?”; “Have you ever had a group lunch/dinner/drinks with him?”; “Have you ever had a one-on-one lunch/dinner/drinks with him?”; “Have you ever texted him personally (not a group message)?”; and “Have you ever emailed him personally (not a group email)?” The responses were converted to 0/1, and for each participant and each identity we created a “contact score” by summing all the responses.

For each subject separately, to obtain a measure of familiarity and contact related to each morph, we averaged the familiarity and contact scores of each pair of identities (e.g., the familiarity score of morph ab was the average of the scores for identity a and identity b).
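As an illustration, the composite scores can be computed as follows; all ratings below are hypothetical, not data from the study:

```python
import numpy as np

# hypothetical questionnaire responses for one participant, per identity
ios = {"a": 5, "b": 3, "c": 2}                         # IOS: 1-7 circle overlap
sci = {"a": (6 + 5) / 2, "b": (3 + 4) / 2, "c": 2.0}   # SCI: mean of the two questions
we = {"a": 6, "b": 3, "c": 1}                          # We-scale: 1-7

# composite familiarity score: average of the three scales
familiarity = {k: np.mean([ios[k], sci[k], we[k]]) for k in "abc"}

# contact scale: six yes/no questions converted to 0/1 and summed
contact_answers = {"a": [1, 1, 1, 1, 1, 0], "b": [1, 1, 1, 0, 0, 0], "c": [1, 0, 0, 0, 0, 0]}
contact = {k: sum(v) for k, v in contact_answers.items()}

# per-morph scores: average of the two identities entering each morph
morphs = ("ab", "ac", "bc")
morph_familiarity = {p: (familiarity[p[0]] + familiarity[p[1]]) / 2 for p in morphs}
morph_contact = {p: (contact[p[0]] + contact[p[1]]) / 2 for p in morphs}
```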

Psychometric fit

For both experiments, we fitted a group-level psychometric curve using logit mixed-effects models (Moscatelli et al., 2012) as implemented in lme4 (Bates et al., 2015). For each experiment and each session, we fitted a model of the form

logit[p_k(x)] = \sum_{i=1}^{n} I_i (\beta_{0i} + \beta_{1i} x) + z_k

where k indexes the subject, x is the percentage of morphing, n is the number of angular locations (n = 8 for the first experiment and n = 4 for the second experiment), I_i is an indicator variable for the angular location, \beta_{0i} and \beta_{1i} are the model fixed effects, and z_k is the subject-level random effect (random intercept). From this model, we defined for each subject the point of subjective equality (PSE) as the point x at which p_k(x) = 0.5, that is, for each angular location i,

PSE_{ki} = -(\beta_{0i} + z_k) / \beta_{1i}

Thus, the PSE for subject k at angular location i can be decomposed into a population-level PSE and a subject-specific deviation from that population level, PSE_{ki} = PSE_{pi} + \Delta PSE_{ki}.

In experiment 2, we fitted a separate model for each of the three morph continua. In addition, before fitting we removed all trials in which subjects mistakenly reported a third identity. For example, if an image belonging to morph ab was presented and subjects responded with c, the trial was removed.

To quantify the bias across locations, we computed a variance score by squaring the \Delta PSE_{ki} and summing them across locations, that is, \sum_{i=1}^{n} \Delta PSE_{ki}^{2}. Because this quantity is proportional to the variance of the \Delta PSE_{ki} around 0, throughout the manuscript we refer to it as the ΔPSE variance.
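A simplified sketch of the fitting procedure: for clarity it fits an ordinary per-location logistic curve via iteratively reweighted least squares to simulated responses, rather than the full lme4 mixed-effects model, so random effects are omitted and the per-location biases (`true_pse`) are hypothetical.

```python
import numpy as np

def fit_logistic(x, y, iters=30):
    """IRLS fit of logit(p) = b0 + b1 * x (no random effects in this sketch)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        H = X.T @ (W[:, None] * X) + 1e-6 * np.eye(2)   # small ridge for stability
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    return beta

rng = np.random.default_rng(1)
morph = np.repeat([0, 17, 33, 50, 67, 83, 100], 10) / 100.0   # 70 trials per location
true_pse = {45: 0.45, 135: 0.55, 225: 0.50, 315: 0.60}        # hypothetical per-location biases
pse = {}
for loc, t in true_pse.items():
    p_true = 1 / (1 + np.exp(-10 * (morph - t)))              # simulated observer
    y = (rng.random(morph.size) < p_true).astype(float)
    b0, b1 = fit_logistic(morph, y)
    pse[loc] = -b0 / b1                                       # point where p = 0.5
deltas = np.array(list(pse.values())) - np.mean(list(pse.values()))
dpse_variance = float(np.sum(deltas ** 2))                    # the ΔPSE variance score
```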

Computational modeling

To account for the retinotopic biases, we simulated a population of neural units activated according to the compressive spatial summation (CSS) model (Kay et al., 2013, 2015) and performed a model-based decoding analysis. This model was originally developed as an encoding model (Naselaris et al., 2011) to predict BOLD responses and estimate pRFs in visual areas and face-responsive areas such as OFA, pFus, and mFus (Kay et al., 2015). We refer to activations of neural units, which can be thought of as voxels, small populations of neurons, or individual neurons.

The CSS model posits that the response of a neural unit is equal to

r = g \left( \sum_{x,y} S(x,y)\, G(x,y) \right)^{n}

where G(x,y) is a 2D Gaussian centered at (x_0, y_0) with covariance \sigma^{2} I, and S(x,y) is the stimulus converted into a contrast map. The term g represents the gain of the response, while the power exponent n accounts for subadditive responses (Kay et al., 2013).
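A minimal implementation of this response model, with assumed pRF parameters and a synthetic contrast map (an illustrative sketch, not the authors' fitting code):

```python
import numpy as np

def css_response(stim, x0, y0, sigma, g=1.0, n=0.2, extent=10.0):
    """CSS model: r = g * (sum_xy S(x,y) * G(x,y))^n (Kay et al., 2013)."""
    h, w = stim.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xd = (xs - w / 2) * (2 * extent / w)     # pixel -> degrees of visual angle
    yd = (ys - h / 2) * (2 * extent / h)
    G = np.exp(-((xd - x0) ** 2 + (yd - y0) ** 2) / (2 * sigma ** 2))
    G = G / G.sum()                          # normalize the pRF to unit mass
    return g * float((stim * G).sum()) ** n

# hypothetical unit with a pRF centered at (5, 0) deg, sigma = 2 deg
h = w = 128
ys, xs = np.mgrid[0:h, 0:w]
xd = (xs - w / 2) * (20 / w)
yd = (ys - h / 2) * (20 / h)
stim_near = ((xd - 5) ** 2 + yd ** 2 <= 4).astype(float)   # 4-deg-diameter disk on the pRF
stim_far = ((xd + 5) ** 2 + yd ** 2 <= 4).astype(float)    # same disk in the opposite hemifield
r_near = css_response(stim_near, 5, 0, 2)
r_far = css_response(stim_far, 5, 0, 2)
```

As expected from the spatially restricted pRF, the same stimulus yields a much larger response at the pRF center than in the opposite hemifield.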

We reanalyzed the data from the fMRI experiments in Kay et al. (2015; pRF-estimation experiment and face-task experiment) using the publicly available data (http://kendrickkay.net/vtcdata) and code (http://kendrickkay.net/socmodel/) to obtain parameter estimates for three ROIs [inferior occipital gyrus (IOG), also termed OFA; mFus; and pFus]. The simulation results were similar using parameter estimates from both experiments; thus, we describe the procedure for the face-task experiment only, because of its similarities with the behavioral experiments reported here. We refer the reader to their paper for more details on the experiments and data preprocessing. In the face-task experiment, three participants saw medium-sized faces (3.2°) in 25 visual field locations (5 × 5 grid with 1.5° spacing) and were asked to perform a 1-back repetition detection task on face identity while fixating at the center of the screen. The resulting 25 βs were used to fit the models. As in the original paper, negative β estimates were rectified (set to 0), and the power exponent was set to n = 0.2 and not optimized because of the reduced number of stimuli. Model fitting was performed with cross-validation: stimuli were randomly split into ten groups, and each group was left out in turn for testing. The parameter estimates were aggregated across cross-validation runs by taking the median value.

We simulated a population of N = Na + Nb neural units, where Na indicates the number of units selective to identity a, and Nb the number of units selective to identity b. For simplicity, we set Nb = 1 and varied Na, effectively changing the ratio of units selective to one of the two identities. Additional simulations with a larger total number of units gave consistent results, but here we report the simulation with Nb = 1 for simplicity and for consistency with the hypothesis of small neural populations responsive to specific identities. The stimuli consisted of contrast circles of 4° diameter centered at 7° of eccentricity and placed at angles of 45°, 135°, 225°, and 315°, simulating experiment 2. We simulated the activation of the units by adding independent, identically distributed Gaussian noise with mean 0 and standard deviation 0.1.

Each experiment consisted of a learning phase in which we simulated the (noisy) response to the full identities a and b in each of the four locations, with 10 trials for each identity and location. We used these responses to train a support vector machine (Cortes and Vapnik, 1995) with a linear kernel to differentiate between the two identities based on the pattern of population responses. Then, we simulated the actual experiment by generating responses to morphed faces. For simplicity, we assumed a linear relationship between the amount of morphing and the population response; that is, if a morph with m percentage morphing toward b was presented, the population response was a combination of the responses to a and b, weighted by (1 − m, m). The amounts of morphing paralleled those used in the two experiments (0, 17, 33, 50, 67, 83, 100). We simulated 10 trials for each angular location and each amount of morphing, and recorded the responses of the trained decoder. These responses were used to fit a logit model similar to that used in the main analyses (without random effects) and to estimate the PSE for each angular location. The sum of the squared deviations of these PSE estimates from 50% was then computed and stored.
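The core of this procedure can be sketched as follows. The sketch draws pRF parameters from assumed distributions rather than the estimates of Kay et al. (2015), substitutes a simple class-mean linear decoder for the linear SVM, and implements selectivity as a weaker response to the non-preferred identity; it is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

def unit_response(cx, cy, sigma, sx, sy, n=0.2):
    """CSS-style response of a unit with pRF (cx, cy, sigma) to a stimulus at (sx, sy)."""
    return np.exp(-((cx - sx) ** 2 + (cy - sy) ** 2) / (2 * sigma ** 2)) ** n

Na, Nb, noise_sd = 4, 1, 0.1
centers = rng.normal(0, 3, (Na + Nb, 2))            # foveally biased pRF centers (deg)
sigmas = rng.uniform(1.5, 4.0, Na + Nb)
pref = np.array([0] * Na + [1] * Nb)                # preferred identity per unit (0 = a, 1 = b)

locs = [(7 * np.cos(np.deg2rad(t)), 7 * np.sin(np.deg2rad(t))) for t in (45, 135, 225, 315)]

def pop_response(identity, sx, sy):
    base = np.array([unit_response(cx, cy, s, sx, sy)
                     for (cx, cy), s in zip(centers, sigmas)])
    tuned = np.where(pref == identity, base, 0.2 * base)   # weaker for non-preferred face
    return tuned + rng.normal(0, noise_sd, tuned.size)

# learning phase: class means define a linear decoder (stand-in for the linear SVM)
mean_a = np.mean([pop_response(0, *l) for l in locs for _ in range(10)], axis=0)
mean_b = np.mean([pop_response(1, *l) for l in locs for _ in range(10)], axis=0)
w = mean_b - mean_a
bias = -w @ (mean_a + mean_b) / 2

# test phase: morphs as linear mixtures of the two population responses
levels = np.array([0, .17, .33, .50, .67, .83, 1.0])
pse = {}
for angle, (sx, sy) in zip((45, 135, 225, 315), locs):
    prop_b = []
    for m in levels:
        trials = [(1 - m) * pop_response(0, sx, sy) + m * pop_response(1, sx, sy)
                  for _ in range(10)]
        prop_b.append(np.mean([float(w @ r + bias > 0) for r in trials]))
    # crude PSE: morph level whose proportion of "b" responses is closest to 0.5
    pse[angle] = float(levels[np.argmin(np.abs(np.array(prop_b) - 0.5))])
```

The spread of the four per-location PSEs around their mean plays the role of the ΔPSE variance in the simulation.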

We systematically varied the ratio Na/Nb of units responsive to identity a from 1 to 9 and repeated 500 simulated experiments for each ratio. For each experiment, parameter values (pRF location and size) were randomly sampled without replacement from the population of parameters previously estimated from the face-task experiment of Kay et al. (2015). We simulated attentional modulations by varying the gain for the units responsive to identity a between 1 and 4 in steps of 0.5, fixing the gain for identity b at 1. As an alternative, we simulated the effect of increased receptive field size for the units responsive to identity a by enlarging their receptive fields from 0% to 50% in 10% steps, while keeping the gain fixed at 1. We ran these simulations with receptive fields drawn from each of the three face-responsive ROIs (IOG, mFus, and pFus).

Code and data availability

Code for the analyses, raw data for both experiments, single subject results, and simulations are available at https://osf.io/wdaxs, as well as Extended Data.

Extended Data

The archive contains data from both experiments, as well as the analysis scripts. Download Extended Data 1, ZIP file.

Results

Experiment 1

In this experiment, participants performed a two-alternative forced-choice (2AFC) task on identity discrimination. In each trial, they saw a face presented for 50 ms and were asked to indicate which of the two identities they just saw. Each face could appear in one of eight stimulus locations. Participants performed the same experiment with the same task a second time, at least 33 d after the first session (average 35 ± 4 d SD).

Participants showed stable and idiosyncratic retinal heterogeneity for identification. The PSE estimates for the two sessions were significantly correlated (Table 1; Fig. 2B), showing stable estimates, and the within-subject correlation of ΔPSEs (see Materials and Methods) was significantly higher than the between-subject correlation (correlation difference: 0.87 [0.64, 1.10], 95% BCa confidence intervals; Efron, 1987; Table 2), showing that the biases were idiosyncratic (for example fits for two different subjects, see Fig. 2A).

Table 1.

Correlation of parameter estimates across sessions for the two experiments

Table 2.

Comparison of within-subjects correlations of parameter estimates across sessions with between-subjects correlations

Experiment 2

In experiment 1, participants exhibited stable, retinotopic biases for face identification that were specific to each participant. Experiment 1, however, used only two target identities and thus could not determine whether the biases were specific to the target identities or reflected general variations in face recognition that would be the same for all target faces. For this reason, we conducted a second experiment in which we increased the number of target identities. In experiment 2, participants performed a task similar to that of experiment 1, with the following differences. First, each face was presented for 100 ms instead of 50 ms to make the task easier, since some participants could not perform the task in experiment 1; second, each face could belong to one of three morphs, and participants were required to indicate which of three identities the face belonged to; third, each face could appear in four retinal locations instead of eight (Fig. 1) to maintain an appropriate duration of the experiment. Each participant performed another experimental session at least 28 d after the first session (average 33 ± 8 d SD).

We found that participants exhibited stable biases across sessions for the three morphs (Table 1; Fig. 3). Interestingly, within-subjects correlations were higher than between-subjects correlations for the two morphs that included identity c (morphs ac and bc), but not for morph ab (Table 2), suggesting that identity c drove stronger spatial heterogeneity. To test this further, we performed a two-way ANOVA on the PSE estimates across sessions with participants and angular locations as factors. The ANOVA was run for each pair of morphs containing the same identity (e.g., for identity a, the ANOVA was run on data from morphs ab and ac), and the PSE estimates were transformed to be with respect to the same identity (e.g., for identity b, we considered PSEbc and 100 - PSEab). We found significant interactions between participants and angular locations for identity b (F(27,120) = 1.77, p = 0.01947) and identity c (F(27,120) = 3.34, p = 3.229e-06), but not for identity a (F(27,120) = 1.17, p = 0.2807), confirming that participants showed increased spatial heterogeneity for identities b and c. This increased spatial heterogeneity for identities b and c, but not a, can be appreciated by inspecting the ΔPSE estimates for each participant. Figure 4A shows lower bias across retinal locations for morph ab than for the other two morphs, suggesting more similar performance across locations for morph ab. To investigate factors explaining the difference in performance across spatial locations between the three identities, we compared the ΔPSE estimates with the reported familiarity of the identities.
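The participant × location interaction test can be sketched as a balanced two-way ANOVA computed by hand on hypothetical PSE data (factor sizes, replicate count, and noise levels here are illustrative assumptions, not the study's design):

```python
import numpy as np
from scipy.stats import f as f_dist

def interaction_F(data):
    """F test for the subject x location interaction in a balanced
    two-way design; data has shape (subjects, locations, replicates)."""
    ns, nl, nr = data.shape
    grand = data.mean()
    cell = data.mean(axis=2)            # subject x location cell means
    subj = data.mean(axis=(1, 2))       # subject marginal means
    loc = data.mean(axis=(0, 2))        # location marginal means
    ss_inter = nr * np.sum(
        (cell - subj[:, None] - loc[None, :] + grand) ** 2)
    ss_error = np.sum((data - cell[:, :, None]) ** 2)
    df_inter = (ns - 1) * (nl - 1)
    df_error = ns * nl * (nr - 1)
    F = (ss_inter / df_inter) / (ss_error / df_error)
    p = f_dist.sf(F, df_inter, df_error)
    return F, p

rng = np.random.default_rng(1)
# Hypothetical PSEs: 10 subjects x 4 locations x 2 sessions, with a
# subject-specific spatial pattern (a true interaction)
pattern = rng.normal(0, 5, size=(10, 4))
data = 50 + pattern[:, :, None] + rng.normal(0, 2, size=(10, 4, 2))
F, p = interaction_F(data)
```

A significant interaction indicates that the spatial profile of the bias differs across participants, i.e., that the heterogeneity is idiosyncratic rather than shared.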

Figure 4.

The strength of idiosyncratic biases was modulated by personal familiarity. A, Individual subjects' ΔPSE for each morph, averaged across sessions. Note the difference in variance across locations for the three different morphs (left to right). B, The variance across locations of the ΔPSE estimates was inversely correlated with the reported familiarity of the identities (left panel; r = -0.56 [-0.71, -0.30]), even when adjusting for the contact score (middle panel; rp = -0.42 [-0.61, -0.16]). The right panel shows the scatterplot between the contact score and the ΔPSE variance, adjusted for the familiarity score, which were also significantly correlated (rp = -0.44 [-0.62, -0.17]). See Materials and Methods for definitions of the familiarity score and the contact score. Dots represent individual participants' data, color coded according to morph type. Correlations were performed on the data shown in these panels.

The variance of the average ΔPSE estimates across sessions for each subject was significantly correlated with the reported familiarity of the identities (r = -0.56 [-0.71, -0.30], t(28) = -3.59, p = 0.001248), showing that the strength of the retinal bias for identities was inversely modulated by personal familiarity (Fig. 4B). We estimated personal familiarity by averaging participants’ ratings of the identities on three scales (IOS, the We-scale, and the SCI; for details, see Materials and Methods). The three scales were highly correlated (min correlation r = 0.89, max correlation r = 0.96).

Because the amount of personal familiarity was correlated with the amount of contact with a target identity (r = 0.45 [0.17, 0.68], t(28) = 2.65, p = 0.01304), we tested whether a linear model predicting the variance of the ΔPSE estimates with both contact and familiarity as predictors fit the data better. Both models were significant, but the model with two predictors provided a significantly better fit (χ2(1) = 6.30, p = 0.0121, log-likelihood ratio test) and explained more variance, as indicated by a higher R²: R² = 0.45, adjusted R² = 0.40 for the model with both the familiarity and contact scores (F(2,27) = 10.82, p = 0.0003539), versus R² = 0.32, adjusted R² = 0.29 for the model with the familiarity score only (F(1,28) = 12.88, p = 0.001248). Importantly, both predictors were significant (Table 3), indicating that familiarity modulated the variance of the ΔPSE estimates over and above the amount of contact with a person. After adjusting for the contact score, the variance of the ΔPSE estimates and the familiarity score were still significantly correlated (rp = -0.42 [-0.61, -0.16], t(28) = -2.42, p = 0.02235).
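The nested-model comparison can be sketched with ordinary least squares and a log-likelihood ratio test on hypothetical data (variable names and effect sizes are illustrative assumptions):

```python
import numpy as np
from scipy.stats import chi2

def ols_loglik(X, y):
    """Gaussian log-likelihood of an OLS fit (MLE variance estimate)."""
    X = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    sigma2 = resid @ resid / n
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

rng = np.random.default_rng(2)
# Hypothetical scores: familiarity and contact both reduce bias variance
familiarity = rng.uniform(1, 7, 30)
contact = 0.5 * familiarity + rng.normal(0, 1, 30)
bias_var = 10 - 0.8 * familiarity - 0.5 * contact + rng.normal(0, 1, 30)

ll_small = ols_loglik(familiarity[:, None], bias_var)
ll_full = ols_loglik(np.column_stack([familiarity, contact]), bias_var)
lr = 2 * (ll_full - ll_small)   # likelihood-ratio statistic
p = chi2.sf(lr, 1)              # one extra parameter -> df = 1
```

Because the one-predictor model is nested in the two-predictor model, twice the difference in log-likelihoods is asymptotically χ² distributed with df equal to the number of added parameters.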

Table 3.

Models predicting variance of the ΔPSE estimates across angular locations in experiment 2

Model simulation

In two behavioral experiments we found a stable, idiosyncratic bias toward specific identities that varied according to the location in which the morphed face stimuli appeared. The bias was reduced with more familiar identities, showing effects of learning. To account for this effect, we hypothesized that small populations of neurons selective to specific identities sample a limited portion of the visual field (Afraz et al., 2010). We also hypothesized that with extended interactions with a person, more neural units become selective to the facial appearance of the identity. In turn, this increases the spatial extent of the field covered by the population and thus reduces the retinotopic bias.

To quantitatively test this hypothesis, we simulated a population of neural units in IOG (OFA), pFus, and mFus activated according to the compressive spatial summation (CSS) model (Kay et al., 2013, 2015). The parameters of this model were estimated from the publicly available data of Kay et al. (2015). We simulated learning effects by progressively increasing the number of units selective to one of the two identities and measuring the response of a linear decoder trained to distinguish between the two identities. As can be seen in Figure 5A, increasing the number of units reduced the overall bias (expressed as the variance of the PSE estimates around 0.5; for details, see Materials and Methods) by increasing the spatial coverage (Fig. 5B).
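The full simulation (CSS-model parameters fit to the Kay et al., 2015 data, plus a linear decoder) is described in Materials and Methods. The core intuition, that averaging over more identity-selective units with scattered receptive fields flattens the spatial profile of the decision variable, can be sketched with a toy population (all parameters here are illustrative assumptions, not the fitted values):

```python
import numpy as np

rng = np.random.default_rng(3)

def css_response(centers, sigma, n_exp, stim):
    """Gaussian pRF response to a point stimulus, passed through a
    compressive output exponent n, in the spirit of the CSS model."""
    d2 = np.sum((centers - stim) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)) ** n_exp

def pse_bias_variance(n_a, n_b=50, sigma=3.0, n_exp=0.2, ecc=7.0):
    """Variance across stimulus locations of a simple PSE proxy: the
    fraction of total population drive contributed by identity-b units."""
    ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    stims = ecc * np.column_stack([np.cos(ang), np.sin(ang)])
    cen_a = rng.normal(0, 4, size=(n_a, 2))   # RF centers, identity a
    cen_b = rng.normal(0, 4, size=(n_b, 2))   # RF centers, identity b
    pse = []
    for s in stims:
        ra = css_response(cen_a, sigma, n_exp, s).mean()
        rb = css_response(cen_b, sigma, n_exp, s).mean()
        pse.append(rb / (ra + rb))   # proxy for the PSE at this location
    return np.var(pse)

# More units selective to an identity -> more complete spatial coverage
# -> smaller variance of the PSE proxy across locations
few = np.median([pse_bias_variance(n_a=10) for _ in range(200)])
many = np.median([pse_bias_variance(n_a=200) for _ in range(200)])
```

With few identity-a units, chance gaps in their coverage make the relative drive (and hence the PSE proxy) swing across locations; with many units these gaps average out.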

Figure 5.

Simulating retinotopic biases and learning effects in face-responsive ROIs. We hypothesized that neural units (voxels, small populations of neurons, or individual neurons) cover a limited portion of the visual field, and that learning increases the number of neural units selective to a particular identity. A, Increasing the number of units selective to one identity reduces the retinotopic bias. Results of simulating 500 experiments by varying the ratio of neural units selective to one of two identities and fixing the gain to 1 for both identities. Dots represent median values with 95% bootstrapped CIs (1000 replicates; note that for some points the CIs are too small to be seen). In all simulated ROIs, the variance of the PSE around 50% decreases with an increasing number of units selective to a, but remains larger in IOG because of its receptive field size. B, Population coverage of the units in each ROI estimated from the face-task data in Kay et al. (2015) and used in the simulations. Circles at the periphery show the simulated stimulus locations. Each image is normalized to the number of units in each ROI. Receptive fields are computed with radius σ/√n, following the convention in Kay et al. (2015). Percentages below each image show the average proportion of units whose receptive fields cover the stimulus locations. Compared to pFus and mFus, fewer units cover the stimuli in IOG, resulting in a larger bias across locations. C, Increasing the gain of the response to one identity fails to reduce the retinotopic bias. D, Increasing the receptive field size of the units responsive to one identity reduces the retinotopic bias. In both C and D, each dot represents the median PSE variance over 500 simulated experiments. CIs are not shown to reduce visual clutter.

Interestingly, the largest bias was found in the simulated IOG. Inspecting the pRF coverage of the three ROIs revealed that stimuli shown at 7° of eccentricity were at the border of the receptive field coverage in IOG (Fig. 5B) because of its smaller RF sizes (median value across voxels of 2.98° [2.85°, 3.10°], 95% bootstrapped confidence intervals), compared to those in pFus and mFus (3.87° [3.65°, 4.05°] and 3.55° [3.35°, 3.75°], respectively). To quantify this difference, we computed the average proportion of units covering the stimulus locations in each ROI. As predicted from the smaller RF sizes, fewer units in IOG covered the area where the stimuli were presented (31.61%) compared to pFus (47.04%) and mFus (45.83%). These results suggest that a larger retinotopic bias would be expected to originate from units in IOG.
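The coverage computation can be sketched as follows, using the reported median pRF sizes but hypothetical pRF centers (the real centers were estimated from the Kay et al., 2015 data):

```python
import numpy as np

rng = np.random.default_rng(4)

def mean_coverage(centers, size, stims):
    """Average over stimulus locations of the fraction of units whose
    pRF (a circle of radius sigma/sqrt(n), i.e., the pRF 'size')
    contains the stimulus."""
    d = np.linalg.norm(centers[:, None, :] - stims[None, :, :], axis=2)
    return (d < size).mean()

# Eight stimulus positions at 7 deg eccentricity
ang = np.linspace(0, 2 * np.pi, 8, endpoint=False)
stims = 7.0 * np.column_stack([np.cos(ang), np.sin(ang)])

# Hypothetical pRF centers; only the pRF sizes match the reported medians
centers = rng.normal(0, 4, size=(500, 2))
cov_iog = mean_coverage(centers, size=2.98, stims=stims)   # IOG median
cov_pfus = mean_coverage(centers, size=3.87, stims=stims)  # pFus median
```

With the same center distribution, smaller pRFs necessarily cover a smaller fraction of peripheral stimulus locations, which is the geometry behind the larger simulated bias in IOG.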

As alternative explanations, we tested whether differences in gain or increases in RF size could reduce the bias to a similar extent as increasing the number of units. Figure 5C shows that modulating the gain failed to reduce the retinotopic bias in all simulated ROIs, while Figure 5D shows that increasing RF size of the units responsive to the more familiar identity can also reduce the retinotopic bias.

Discussion

Afraz et al. (2010) reported spatial heterogeneity for recognition of facial attributes such as gender and age, suggesting that relatively independent neural populations tuned to facial features might sample different regions of the visual field. Prolonged social interactions with personally familiar faces lead to facilitated, prioritized processing of those faces. Here we investigated whether this learning of face identity through repeated social interactions also affects these local visual processes by measuring the spatial heterogeneity of identity recognition. We measured whether face identification performance for personally familiar faces differed according to the location in the visual field where face images were presented. We found that participants exhibited idiosyncratic, retinotopic biases for different face identities that were stable across experimental sessions. Importantly, the variability of the retinotopic bias was reduced with increased familiarity with the target identities. These data support the hypothesis that familiarity modulates processes in visual areas with limited position invariance (Visconti di Oleggio Castello et al., 2017a).

These results extend the reports of spatial heterogeneity in visual processing to face identification. Similar biases exist for high-level judgments such as face gender and age (Afraz et al., 2010), as well as shape discrimination (Afraz et al., 2010), crowding, and saccadic precision (Greenwood et al., 2017). Afraz et al. (2010) suggested that neurons in IT exhibit biases that are dependent on retinal location because their receptive field sizes are not large enough to provide complete translational invariance, and stimuli in different locations will activate a limited group of neurons. In this work, we show that these perceptual biases for face processing exist not only for gender and age judgments (Afraz et al., 2010), but also for face identification, and that these biases are affected by learning.

Location-dependent coding in face-responsive areas

Neurons in temporal cortex involved in object recognition are widely thought to be invariant to object translation; that is, their response to an object is not modulated by the location of the object in the visual field (Riesenhuber and Poggio, 1999; Hung et al., 2005). However, evidence suggests that location information is preserved in the activity of neurons throughout temporal cortex (Kravitz et al., 2008; Hong et al., 2016). Location information can be encoded as a retinotopic map, as in early visual cortex, where neighboring neurons are selective to neighboring locations in the visual field. In the absence of a clear cortical retinotopic map, location information can still be preserved at the level of population responses (Schwarzlose et al., 2008; Rajimehr et al., 2014; Henriksson et al., 2015; Kay et al., 2015).

Areas of occipital and temporal cortices show responses to objects that are modulated by position (Kravitz et al., 2008, 2010; Sayres and Grill-Spector, 2008). In particular, face-responsive areas of the ventral core system (Haxby et al., 2000; Guntupalli et al., 2017; Visconti di Oleggio Castello et al., 2017a), such as OFA, pFus, and mFus, also show responses that are modulated by the position in which a face appears. Responses to a face are stronger in these areas when faces are presented foveally rather than peripherally (Levy et al., 2001; Hasson et al., 2002; Malach et al., 2002). In addition, early face-processing areas such as PL in monkeys and OFA in humans code specific features of faces in typical locations. Neurons in PL are tuned to eyes in the contralateral hemifield, with receptive fields covering the typical location of the eyes at fixation (Issa and DiCarlo, 2012). Similarly, OFA responses to face parts are stronger when they are presented in typical locations (de Haas et al., 2016), and OFA activity codes the position of face parts and the relationships between them (Henriksson et al., 2015).

The modulation of responses by object location in these areas seems to be driven by differences in receptive field sizes. In humans, pRFs can be estimated with fMRI by modeling voxel-wise BOLD responses (Dumoulin and Wandell, 2008; Wandell and Winawer, 2011, 2015; Kay et al., 2013). These studies have shown that pRF centers are mostly located in the contralateral hemifield (Kay et al., 2015; Grill-Spector et al., 2017b), corresponding to the reported preference of these areas for faces presented contralaterally (Hemond et al., 2007). In addition, pRF sizes increase at higher levels of the face-processing hierarchy, favoring perifoveal regions (Kay et al., 2015; Silson et al., 2016). The location-dependent coding of faces in these face-processing areas might be based on population activity, since these areas do not overlap with retinotopic maps in humans (for example, OFA does not seem to overlap with estimated retinotopic maps; Silson et al., 2016; but see Janssens et al., 2014; Rajimehr et al., 2014; Arcaro and Livingstone, 2017; Arcaro et al., 2017 for work in monkeys showing partial overlap between retinotopic maps and face patches).

Cortical origin of idiosyncratic biases and effects of familiarity

Populations of neurons in visual areas and in temporal cortex cover limited portions of the visual field, with progressively larger receptive fields centered around perifoveal regions (Grill-Spector et al., 2017b). This property suggests that biases in high-level judgments of gender, age, and identity may be due to the variability of feature detectors that cover limited portions of the visual field (Afraz et al., 2010). While the results from our behavioral study cannot point to a precise cortical origin of these biases, our computational simulation suggests that a larger bias could arise from responses in the OFA, given the estimates of receptive field size and eccentricity in this area (Kay et al., 2015; Grill-Spector et al., 2017b). We cannot exclude the possibility that this bias originates in earlier areas of the visual processing stream.

In this work, we showed that the extent of variation in biases across retinal locations was inversely correlated with the reported familiarity with individuals, suggesting that a history of repeated interactions with a person may tune the responses of neurons to that individual across retinal locations, generating more homogeneous responses. Repeated exposure to the faces of familiar individuals during real-life social interactions results in a detailed representation of the visual appearance of a personally familiar face. Our computational simulation suggests a simple process for augmenting and strengthening the representation of a face: learning through social interactions might cause a greater number of neural units to become responsive to a specific identity, thus covering a larger area of the visual field and reducing the retinotopic biases. Our results showed that both ratings of familiarity and ratings of amount of contact were strong predictors of reduced retinotopic bias; however, familiarity still predicted the reduced bias when accounting for amount of contact. While additional experiments are needed to test whether pure perceptual learning is sufficient to reduce the retinotopic biases to the same extent as personal familiarity, these results suggest that repeated personal interactions can strengthen neural representations more than mere increased frequency of exposure to a face. This idea is consistent with neuroimaging studies showing stronger and more widespread activation for personally familiar faces compared to unfamiliar or experimentally learned faces (Leibenluft et al., 2004; Gobbini and Haxby, 2006, 2007; Cloutier et al., 2011; Natu and O’Toole, 2011; Bobes et al., 2013; Ramon and Gobbini, 2018; Visconti di Oleggio Castello et al., 2017a).

Effects of attention

Could differences in attention explain the modulation of retinotopic biases reported here? Faces, and personally familiar faces in particular, are important social stimuli whose correct detection and processing affect social behavior (Brothers, 2002; Gobbini and Haxby, 2007). Behavioral experiments from our lab have shown that personally familiar faces break through faster in a continuous flash suppression paradigm (Gobbini et al., 2013) and hold attention more strongly than unfamiliar faces do in a Posner cueing paradigm (Chauhan et al., 2017). These results show that familiar faces differ not only at the level of representations, but also in the allocation of attention. At the neural level, changes in attention might be implemented as increased gain for salient stimuli or increased receptive field size (Kay et al., 2015). In an fMRI experiment, Kay et al. (2015) reported that pRF estimates were modulated by the type of task: gain, eccentricity, and size of the pRFs increased during a 1-back repetition detection task on facial identity as compared to a 1-back task on digits presented foveally.

To address differences in gain in our computational simulation, we modified the relative gain of units responsive to one of the two identities and found that it did not influence the PSE bias across locations. This bias was more strongly modulated by the number of units responsive to one of the identities. On the other hand, simulating increases in receptive field size reduced the retinotopic bias almost as much as increasing the number of units. These simulations suggest two alternative, and possibly interacting, mechanisms that can reduce retinotopic biases in identification: recruitment of additional units selective to an identity or changes in RF properties. Additional experiments are needed to further characterize the differences in attention and representations that contribute to the facilitated processing of personally familiar faces.

Implications for computational models of vision

Many computational models of biological vision posit translational invariance: neurons in IT are assumed to respond to the same extent regardless of object position (Riesenhuber and Poggio, 1999; Serre et al., 2007; Kravitz et al., 2008). Even the models that currently provide the best fits to neural activity in IT, such as hierarchical convolutional neural networks (Yamins et al., 2014; Kriegeskorte, 2015; Yamins and DiCarlo, 2016), use weight sharing in convolutional layers to achieve position invariance (LeCun et al., 2015; Schmidhuber, 2015; Goodfellow et al., 2016). While this reduces complexity by limiting the number of parameters to be fitted, neuroimaging and behavioral experiments have shown that translational invariance in IT is preserved only for small displacements (DiCarlo and Maunsell, 2003; Kay et al., 2015; Silson et al., 2016; for review, see Kravitz et al., 2008), with varying receptive field sizes and eccentricities (Grill-Spector et al., 2017a). Our results highlight the limited position invariance for high-level judgments such as identity and add to the known spatial heterogeneity for gender and age judgments (Afraz et al., 2010). Our results also show that a higher degree of invariance can be achieved through learning, as shown by the reduced bias for highly familiar faces. These findings suggest that, to increase the biological plausibility of models of vision, differences in eccentricity and receptive field size should be taken into account (Poggio et al., 2014), as well as more dynamic effects such as changes induced by learning and attention (Grill-Spector et al., 2017a).
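The weight sharing at issue can be made concrete in a few lines: one shared kernel applied at every position makes the feature map shift together with the input (translation equivariance), which pooling can then convert into approximate invariance. A toy numpy sketch:

```python
import numpy as np

def conv1d(x, w):
    """'Valid' 1-D correlation: the same weights w (weight sharing) are
    applied at every position of the input."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

rng = np.random.default_rng(5)
x = rng.normal(size=20)
w = rng.normal(size=3)

out = conv1d(x, w)
out_shifted = conv1d(np.roll(x, 4), w)
# Equivariance: shifting the input shifts the feature map (away from
# the wrap-around border); pooling over positions then yields a
# response that is approximately invariant to the shift
```

By construction, every position is processed identically, which is precisely the assumption that the measured heterogeneity of receptive field sizes and eccentricities violates.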

Conclusions

Taken together, the results reported here support our hypothesis that facilitated processing for personally familiar faces might be mediated by the development or tuning of detectors for personally familiar faces in the visual pathway in areas that still have localized analyses (Gobbini et al., 2013; Visconti di Oleggio Castello et al., 2014, 2017b; Visconti di Oleggio Castello and Gobbini, 2015). The OFA might be a candidate for the cortical origin of these biases as well as for the development of detectors for diagnostic fragments. Patterns of responses in OFA (and neurons in the monkey putative homolog PL; Issa and DiCarlo, 2012) are tuned to typical locations of face fragments (Henriksson et al., 2015; de Haas et al., 2016). pRFs of voxels in this region cover an area of the visual field that is large enough to integrate features of intermediate complexity at an average conversational distance (Kay et al., 2015; Grill-Spector et al., 2017b), such as combinations of eyes and eyebrows, which have been shown to be theoretically optimal and highly informative for object classification (Ullman et al., 2001, 2002; Ullman, 2007).

Future research is needed to further disambiguate differences in representations or attention that generate these biases and how learning reduces them. Nonetheless, our results suggest that prioritized processing for personally familiar faces may exist at relatively early stages of the face processing hierarchy, as shown by the local biases reported here. Learning associated with repeated personal interactions modifies the representation of these faces, suggesting that personal familiarity affects face-processing areas well after developmental critical periods (Arcaro et al., 2017; Livingstone et al., 2017). We hypothesize that these differences may be one of the mechanisms that underlies the known behavioral advantages for perception of personally familiar faces (Burton et al., 1999; Gobbini and Haxby, 2007; Gobbini, 2010; Gobbini et al., 2013; Visconti di Oleggio Castello et al., 2014, 2017b; Ramon et al., 2015; Visconti di Oleggio Castello and Gobbini, 2015; Chauhan et al., 2017; Ramon and Gobbini, 2018).

Acknowledgments

Acknowledgements: We thank Carlo Cipolli for helpful discussions.

Footnotes

  • The authors declare no competing financial interests.

  • This work was supported by the Martens Family Fund and Dartmouth College.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Afraz A, Pashkam MV, Cavanagh P (2010) Spatial heterogeneity in the perception of face and form attributes. Curr Biol 20:2112–2116. doi:10.1016/j.cub.2010.11.017
  2. Arcaro MJ, Livingstone MS (2017) A hierarchical, retinotopic proto-organization of the primate visual system at birth. Elife 6:e26196. doi:10.7554/eLife.26196
  3. Arcaro MJ, Schade PF, Vincent JL, Ponce CR, Livingstone MS (2017) Seeing faces is necessary for face-domain formation. Nat Neurosci 20:1404–1412. doi:10.1038/nn.4635
  4. Aron A, Aron EN, Smollan D (1992) Inclusion of other in the self scale and the structure of interpersonal closeness. J Pers Soc Psychol 63:596. doi:10.1037/0022-3514.63.4.596
  5. Barragan-Jason G, Cauchoix M, Barbeau EJ (2015) The neural speed of familiar face recognition. Neuropsychologia 75:390–401. doi:10.1016/j.neuropsychologia.2015.06.017
  6. Bates D, Mächler M, Bolker B, Walker S (2015) Fitting linear mixed-effects models using lme4. J Stat Softw 67:1–48.
  7. Berscheid E, Snyder M, Omoto AM (1989) The relationship closeness inventory: assessing the closeness of interpersonal relationships. J Pers Soc Psychol 57:792. doi:10.1037/0022-3514.57.5.792
  8. Bobes MA, Lage Castellanos A, Quiñones I, García L, Valdes-Sosa M (2013) Timing and tuning for familiarity of cortical responses to faces. PLoS One 8:e76100. doi:10.1371/journal.pone.0076100
  9. Brothers L (2002) The social brain: a project for integrating primate behavior and neurophysiology in a new domain. In: Foundations in social neuroscience, pp 367–385. Cambridge: MIT Press.
  10. Bruce V, Young A (1986) Understanding face recognition. Br J Psychol 77:305–327. doi:10.1111/j.2044-8295.1986.tb02199.x
  11. Burton AM, Wilson S, Cowan M, Bruce V (1999) Face recognition in poor-quality video: evidence from security surveillance. Psychol Sci 10:243–248. doi:10.1111/1467-9280.00144
  12. Chauhan V, Visconti di Oleggio Castello M, Soltani A, Gobbini MI (2017) Social saliency of the cue slows attention shifts. Front Psychol 8:738. doi:10.3389/fpsyg.2017.00738
  13. Cialdini RB, Brown SL, Lewis BP, Luce C, Neuberg SL (1997) Reinterpreting the empathy–altruism relationship: when one into one equals oneness. J Pers Soc Psychol 73:481.
  14. Cloutier J, Kelley WM, Heatherton TF (2011) The influence of perceptual and knowledge-based familiarity on the neural substrates of face perception. Soc Neurosci 6:63–75. doi:10.1080/17470911003693622
  15. Cortes C, Vapnik V (1995) Support-vector networks. Mach Learn 20:273–297. doi:10.1007/BF00994018
  16. de Haas B, Schwarzkopf DS, Alvarez I, Lawson RP, Henriksson L, Kriegeskorte N, Rees G (2016) Perception and processing of faces in the human brain is tuned to typical feature locations. J Neurosci 36:9289–9302. doi:10.1523/JNEUROSCI.4131-14.2016
  17. DiCarlo JJ, Maunsell JHR (2003) Anterior inferotemporal neurons of monkeys engaged in object recognition can be highly sensitive to object retinal position. J Neurophysiol 89:3264–3278. doi:10.1152/jn.00358.2002
  18. Dumoulin SO, Wandell BA (2008) Population receptive field estimates in human visual cortex. Neuroimage 39:647–660. doi:10.1016/j.neuroimage.2007.09.034
  19. Efron B (1987) Better bootstrap confidence intervals. J Am Stat Assoc 82:171–185. doi:10.1080/01621459.1987.10478410
  20. Faul F, Erdfelder E, Lang AG, Buchner A (2007) G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods 39:175–191.
  21. Faul F, Erdfelder E, Buchner A, Lang AG (2009) Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods 41:1149–1160. doi:10.3758/BRM.41.4.1149
  22. Gächter S, Starmer C, Tufano F (2015) Measuring the closeness of relationships: a comprehensive evaluation of the “inclusion of the other in the self” scale. PLoS One 10:e0129478. doi:10.1371/journal.pone.0129478
  23. Gobbini MI (2010) Distributed process for retrieval of person knowledge. In: Social neuroscience: toward understanding the underpinnings of the social mind, pp 40–53. New York: Oxford University Press.
  24. Gobbini MI, Haxby JV (2006) Neural response to the visual familiarity of faces. Brain Res Bull 71:76–82. doi:10.1016/j.brainresbull.2006.08.003
  25. Gobbini MI, Haxby JV (2007) Neural systems for recognition of familiar faces. Neuropsychologia 45:32–41. doi:10.1016/j.neuropsychologia.2006.04.015
  26. Gobbini MI, Gors JD, Halchenko YO, Rogers C, Guntupalli JS, Hughes H, Cipolli C (2013) Prioritized detection of personally familiar faces. PLoS One 8:e66620. doi:10.1371/journal.pone.0066620
  27. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. Cambridge: MIT Press.
  28. Greenwood JA, Szinte M, Sayim B, Cavanagh P (2017) Variations in crowding, saccadic precision, and spatial localization reveal the shared topology of spatial vision. Proc Natl Acad Sci USA 114:E3573–E3582. doi:10.1073/pnas.1615504114
  29. Grill-Spector K, Kay K, Weiner KS (2017a) The functional neuroanatomy of face processing: insights from neuroimaging and implications for deep learning. In: Deep learning for biometrics (Bhanu B, Kumar A, eds), pp 3–31. Cham: Springer International Publishing.
  30. Grill-Spector K, Weiner KS, Kay K, Gomez J (2017b) The functional neuroanatomy of human face perception. Annu Rev Vis Sci 3:167–196. doi:10.1146/annurev-vision-102016-061214
  31. Guntupalli JS, Gobbini MI (2017) Reading faces: from features to recognition. Trends Cogn Sci 21:915–916. doi:10.1016/j.tics.2017.09.007
  32. Guntupalli JS, Wheeler KG, Gobbini MI (2017) Disentangling the representation of identity from head view along the human face processing pathway. Cereb Cortex 27:46–53. doi:10.1093/cercor/bhw344
  33. Hasson U, Levy I, Behrmann M, Hendler T, Malach R (2002) Eccentricity bias as an organizing principle for human high-order object areas. Neuron 34:479–490.
  34. Haxby JV, Hoffman EA, Gobbini MI (2000) The distributed human neural system for face perception. Trends Cogn Sci 4:223–233.
  35. Hemond CC, Kanwisher NG, Op de Beeck HP (2007) A preference for contralateral stimuli in human object- and face-selective cortex. PLoS One 2:e574. doi:10.1371/journal.pone.0000574
  36. Henriksson L, Mur M, Kriegeskorte N (2015) Faciotopy—a face-feature map with face-like topology in the human occipital face area. Cortex 72:156–167. doi:10.1016/j.cortex.2015.06.030
  37. Hong H, Yamins DLK, Majaj NJ, DiCarlo JJ (2016) Explicit information for category-orthogonal object properties increases along the ventral stream. Nat Neurosci 19:613–622. doi:10.1038/nn.4247
  38. Hung CP, Kreiman G, Poggio T, DiCarlo JJ (2005) Fast readout of object identity from macaque inferior temporal cortex. Science 310:863–866. doi:10.1126/science.1117593
  39. Idson LC, Mischel W (2001) The personality of familiar and significant people: the lay perceiver as a social-cognitive theorist. J Pers Soc Psychol 80:585–596.
  40. Issa EB, DiCarlo JJ (2012) Precedence of the eye region in neural processing of faces. J Neurosci 32:16666–16682. doi:10.1523/JNEUROSCI.2391-12.2012
  41. Janssens T, Zhu Q, Popivanov ID, Vanduffel W (2014) Probabilistic and single-subject retinotopic maps reveal the topographic organization of face patches in the macaque cortex. J Neurosci 34:10156–10167. doi:10.1523/JNEUROSCI.2914-13.2013
  42. Kay KN, Winawer J, Mezer A, Wandell BA (2013) Compressive spatial summation in human visual cortex. J Neurophysiol 110:481–494. doi:10.1152/jn.00105.2013
  43. Kay KN, Weiner KS, Grill-Spector K (2015) Attention reduces spatial uncertainty in human ventral temporal cortex. Curr Biol 25:595–600. doi:10.1016/j.cub.2014.12.050
  44. Kleiner M, Brainard D, Pelli D, Ingling A, Murray R (2007) What’s new in Psychtoolbox. Perception 36:1–16.
  45. Kravitz DJ, Vinson LD, Baker CI (2008) How position dependent is visual object recognition? Trends Cogn Sci 12:114–122. doi:10.1016/j.tics.2007.12.006
  46. Kravitz DJ, Kriegeskorte N, Baker CI (2010) High-level visual object representations are constrained by position. Cereb Cortex 20:2916–2925. doi:10.1093/cercor/bhq042
  47. Kriegeskorte N (2015) Deep neural networks: a new framework for modeling biological vision and brain information processing. Annu Rev Vis Sci 1:417–446. doi:10.1146/annurev-vision-082114-035447
  48. LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521:436–444. doi:10.1038/nature14539
  49. Leibenluft E, Gobbini MI, Harrison T, Haxby JV (2004) Mothers’ neural activation in response to pictures of their children and other children. Biol Psychiatry 56:225–232. doi:10.1016/j.biopsych.2004.05.017
  50. Levy I, Hasson U, Avidan G, Hendler T, Malach R (2001) Center–periphery organization of human object areas. Nat Neurosci 4:533–539. doi:10.1038/87490
  51. Livingstone MS, Vincent JL, Arcaro MJ, Srihasam K, Schade PF, Savage T (2017) Development of the macaque face-patch system. Nat Commun 8:14897. doi:10.1038/ncomms14897
    OpenUrlCrossRefPubMed
  52. ↵
    Malach R, Levy I, Hasson U (2002) The topography of high-order human object areas. Trends Cogn Sci 6:176–184. pmid:11912041
    OpenUrlCrossRefPubMed
  53. ↵
    Moscatelli A, Mezzetti M, Lacquaniti F (2012) Modeling psychophysical data at the population-level: the generalized linear mixed model. J Vis 12.
  54. ↵
    Naselaris T, Kay KN, Nishimoto S, Gallant JL (2011) Encoding and decoding in fMRI. Neuroimage 56:400–410. doi:10.1016/j.neuroimage.2010.07.073 pmid:20691790
    OpenUrlCrossRefPubMed
  55. ↵
    Natu V, O’Toole AJ (2011) The neural processing of familiar and unfamiliar faces: a review and synopsis. Br J Psychol 102:726–747. doi:10.1111/j.2044-8295.2011.02053.x pmid:21988381
    OpenUrlCrossRefPubMed
  56. ↵
    Poggio T, Mutch J, Isik L (2014) Computational role of eccentricity dependent cortical magnification. arXiv [csLG]
  57. ↵
    Rajimehr R, Bilenko NY, Vanduffel W, Tootell RBH (2014) Retinotopy versus face selectivity in macaque visual cortex. J Cogn Neurosci 22:1–10.
    OpenUrl
  58. ↵
    Ramon M, Gobbini MI (2018) Familiarity matters: a review on prioritized processing of personally familiar faces. Vis Cogn 26:179–195.
    OpenUrl
  59. ↵
    Ramon M, Vizioli L, Liu-Shuang J, Rossion B (2015) Neural microgenesis of personally familiar face recognition. Proc Natl Acad Sci USA 112:E4835–E4844. doi:10.1073/pnas.1414929112 pmid:26283361
    OpenUrlAbstract/FREE Full Text
  60. ↵
    Riesenhuber M, Poggio T (1999) Hierarchical models of object recognition in cortex. Nat Neurosci 2:1019–1025. doi:10.1038/14819 pmid:10526343
    OpenUrlCrossRefPubMed
  61. ↵
    Sayres R, Grill-Spector K (2008) Relating retinotopic and object-selective responses in human lateral occipital cortex. J Neurophysiol 100:249–267. doi:10.1152/jn.01383.2007 pmid:18463186
    OpenUrlCrossRefPubMed
  62. ↵
    Schmidhuber J (2015) Deep learning in neural networks: an overview. Neural Netw 61:85–117. doi:10.1016/j.neunet.2014.09.003 pmid:25462637
    OpenUrlCrossRefPubMed
  63. ↵
    Schwarzlose RF, Swisher JD, Dang S, Kanwisher N (2008) The distribution of category and location information across object-selective regions in human visual cortex. Proc Natl Acad Sci USA 105:4447–4452. doi:10.1073/pnas.0800431105 pmid:18326624
    OpenUrlAbstract/FREE Full Text
  64. ↵
    Serre T, Oliva A, Poggio T (2007) A feedforward architecture accounts for rapid categorization. Proc Natl Acad Sci USA 104:6424–6429. doi:10.1073/pnas.0700622104 pmid:17404214
    OpenUrlAbstract/FREE Full Text
  65. ↵
    Silson EH, Groen IIA, Kravitz DJ, Baker CI (2016) Evaluating the correspondence between face-, scene-, and object-selectivity and retinotopic organization within lateral occipitotemporal cortex. J Vis 16:14. doi:10.1167/16.6.14
    OpenUrlCrossRefPubMed
  66. ↵
    Sugiura M (2014) Neuroimaging studies on recognition of personally familiar people. Front Biosci 19:672–686. doi:10.2741/4235
    OpenUrlCrossRef
  67. ↵
    Taylor MJ, Arsalidou M, Bayless SJ, Morris D, Evans JW, Barbeau EJ (2009) Neural correlates of personally familiar faces: parents, partner and own faces. Hum Brain Mapp 30:2008–2020. doi:10.1002/hbm.20646 pmid:18726910
    OpenUrlCrossRefPubMed
  68. ↵
    Ullman S (2007) Object recognition and segmentation by a fragment-based hierarchy. Trends Cogn Sci 11:58–64. doi:10.1016/j.tics.2006.11.009 pmid:17188555
    OpenUrlCrossRefPubMed
  69. ↵
    Ullman S, Sali E, Vidal-Naquet M (2001) A fragment-based approach to object representation and classification. In: Visual form 2001, pp 85–100. Berlin; Heidelberg: Springer.
  70. ↵
    Ullman S, Vidal-Naquet M, Sali E (2002) Visual features of intermediate complexity and their use in classification. Nat Neurosci 5:682–687.
    OpenUrlCrossRefPubMed
  71. ↵
    Visconti di Oleggio Castello M, Gobbini MI (2015) Familiar face detection in 180ms. PLoS One 10:e0136548. doi:10.1371/journal.pone.0136548 pmid:26305788
    OpenUrlCrossRefPubMed
  72. ↵
    Visconti di Oleggio Castello M, Guntupalli JS, Yang H, Gobbini MI (2014) Facilitated detection of social cues conveyed by familiar faces. Front Hum Neurosci 8:678. doi:10.3389/fnhum.2014.00678 pmid:25228873
    OpenUrlCrossRefPubMed
  73. ↵
    Visconti di Oleggio Castello M, Halchenko YO, Guntupalli JS, Gors JD, Gobbini MI (2017a) The neural representation of personally familiar and unfamiliar faces in the distributed system for face perception. Sci Rep 7:12237.
    OpenUrl
  74. ↵
    Visconti di Oleggio Castello M, Wheeler KG, Cipolli C, Gobbini MI (2017b) Familiarity facilitates feature-based face processing. PLoS One 12:e0178895. doi:10.1371/journal.pone.0178895 pmid:28582439
    OpenUrlCrossRefPubMed
  75. ↵
    Wandell BA, Winawer J (2011) Imaging retinotopic maps in the human brain. Vision Res 51:718–737. doi:10.1016/j.visres.2010.08.004 pmid:20692278
    OpenUrlCrossRefPubMed
  76. ↵
    Wandell BA, Winawer J (2015) Computational neuroimaging and population receptive fields. Trends Cogn Sci 19:349–357. doi:10.1016/j.tics.2015.03.009
    OpenUrlCrossRefPubMed
  77. ↵
    Willenbockel V, Sadr J, Fiset D, Horne GO, Gosselin F, Tanaka JW (2010) Controlling low-level image properties: the SHINE toolbox. Behav Res Methods 42:671–684. doi:10.3758/BRM.42.3.671 pmid:20805589
    OpenUrlCrossRefPubMed
  78. ↵
    Yamins DLK, DiCarlo JJ (2016) Using goal-driven deep learning models to understand sensory cortex. Nat Neurosci 19:356–365. doi:10.1038/nn.4244 pmid:26906502
    OpenUrlCrossRefPubMed
  79. ↵
    Yamins DLK, Hong H, Cadieu CF, Solomon EA, Seibert D, DiCarlo JJ (2014) Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proc Natl Acad Sci USA 111:8619–8624. doi:10.1073/pnas.1403112111
    OpenUrlAbstract/FREE Full Text

Synthesis

Reviewing Editor: Tatiana Pasternak, University of Rochester

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Pawan Sinha, Sonia Poltoratski. Note: If this manuscript was transferred from JNeurosci and a decision was made to accept the manuscript without peer review, a brief statement to this effect will instead be what is listed below.

The authors showed spatial heterogeneities in face identification and that this heterogeneity is greater for unfamiliar than for familiar faces. While both reviewers found the question addressed in the paper interesting and the results compelling, and were supportive of the approach, they raised a number of issues that must be addressed before the paper can be considered for publication in eNeuro. These are summarized below.

Generally, both reviewers felt that the observed retinotopic biases should be linked to what is known about the neural substrates of face processing.

1. The interpretation that the observed spatial biases are the result of processing in early retinotopic areas conflicts with the literature on spatial biases in late visual areas linked to face processing. The revision should include a discussion of the existing neurophysiological and imaging studies relevant to face perception (Reviewer 2).

2. Please address the question of whether perceptual learning of face identities would produce similar biases. Reviewer 2 suggests strengthening the study with an additional experiment that would help address the variability between identities that is not controlled in the study.

3. Please discuss whether attention to a familiar face, rather than a reshaped representation of familiar faces, could explain the results.

4. Please discuss how you would validate the hypothesis that frequency of appearance plays a role in the observed differences between more and less familiar faces (see Reviewer 1).

5. Please address all the questions and comments from Reviewer 2, listed under “Finer points on the manuscript”. Some of these are highlighted below:

- provide explicit logic behind the two experiments

- provide a schematic illustration of stimulus locations

- a number of questions about, and fixes to, Figures 2 and 4

- provide justification for using several measures of familiarity

6. Please address the comment about the use of a nested model comparison in your statistical analysis

------------------------------

REVIEWER 1

The authors examined whether identification performance with familiar face images exhibited spatial heterogeneity as a function of their location on observers' retinas. They found that such differences do exist, with low-familiarity faces showing greater heterogeneity than high-familiarity ones. This result has implications for the mechanisms underlying face identification. I believe that the work is well done and likely to be of interest to many vision researchers. Hence, I am supportive of publication, but would like to suggest a few points that the authors might consider addressing in the discussion section of a revised manuscript.

The most obvious question that emerges from these results concerns the genesis of the observed heterogeneities. What factors drive the location biases? Given the differences between high- and low-familiar faces, the authors suggest that frequency of appearance might play a role. But, how can this proposal be empirically validated? Can a study be undertaken of the statistics of face locations in naturalistic first-person point of view video? Are there any other ways to test the hypothesis?

Related to the above, if frequency of past exposure is the primary factor driving location biases, then shouldn't one expect that there would be a generalized bias favoring the upper visual field for all faces (whether low- or high-familiarity)? This is because it intuitively seems that when not looking directly at a person's face, we are less likely to fixate locations above their heads than below. 'Below the head' fixations will place the face images in the upper visual field.

Finally, I believe that it might be interesting to consider the implications of these results for computational systems. Specifically, deep neural networks employ the approach of weight sharing, whereby the pattern of weights in one location gets replicated across the entire visual field. Do the results reported here suggest that there might be reasons to spatially restrict such weight sharing? More generally, what might be the pros and cons, as evident in the computational simulations, of uniform or restricted weight-sharing?
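The contrast the reviewer draws can be made concrete with a toy 1-D sketch (a generic illustration of shared versus position-specific weights, not code from the manuscript; all names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)  # toy 1-D "visual field" input

# Full weight sharing (convolution): one 3-tap filter is replicated
# across every position, so the learned response profile is
# position-invariant by construction.
w_shared = rng.normal(size=3)
conv_out = np.array([x[i:i + 3] @ w_shared for i in range(6)])

# Restricted sharing (locally connected layer): each position has its
# own 3-tap filter, so position-specific biases of the kind reported
# here could in principle be learned.
w_local = rng.normal(size=(6, 3))
local_out = np.array([x[i:i + 3] @ w_local[i] for i in range(6)])
```

A locally connected layer has many more parameters than a convolutional one, which is one of the trade-offs such simulations would need to weigh.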

REVIEWER 2

STATISTICS

I suggest a nested model comparison (e.g., a likelihood ratio test) as a more appropriate way to compare the one- and two-factor models. It is also not apparent from the legends what data the depicted correlations are computed on.

COMMENTS TO THE AUTHORS

I think that the manuscript tackles an interesting question about the spatial aspect of face processing. The real-life familiarity manipulation - that is, testing participants in the same cohort who had varying degrees of social contact with the test faces - is clever, and the dataset is interesting. However, some of the interpretations that the authors make fall beyond the scope of the data, and need to be strengthened with additional connections made to the existing literature. I would also like to see clarification in the statistics, modeling, and plotting of the results.

The authors claim to test the hypothesis that 'asymmetries in the processing of personally familiar faces can arise at stages of the visual processing hierarchy where there is still retinotopic coding.' (ln 116). However, this interpretation does not seem to consider the extensive literature in both primate (e.g. work by the groups of Tanaka & Tootell) and human (e.g. groups of Malach, Grill-Spector, Kanwisher, Kastner, Kriegeskorte, and Rees) that has reported spatially restrictive receptive fields (/pRFs), spatial biases, and spatial information in high-level face areas and patches. This work seems highly relevant to both the current experiment and the readership of this journal.

Because of the prevalence of spatially-restricted processing in late stages of face processing, I do not think this data can tell us where in the hierarchy (early/late, retinotopic cortex/not) familiar face processing is occurring, which seems to be the primary framework of the manuscript.

[To this point, the authors introduce the findings of the Afraz et al. (2010) as illustrating that 'neurons in higher-order face areas have restricted receptive fields or, equivalently, that diagnostic facial features for gender and age are encoded in retinotopic visual cortices.' (ln 100). These two claims are not equivalent - nor do I think they capture the (quite elegant) logic of the cited work. Afraz and colleagues argue that since faces are processed at a larger spatial scale than, say, orientation (commensurate with increasing RF size from V1 to IT), making the two judgments on stimuli of the same small size will lead to a stronger representation of orientation than of the face, which will be covered by proportionally fewer receptive fields. Thus, they report greater perceptual variability for faces than for stimuli that are primarily processed at earlier stages of the visual hierarchy.]

I do, however, find the finding that learning or social contact can alter the retinotopic biases to be quite compelling. In the Afraz et al. framework, this would suggest that familiarity might increase the proportion of face-selective neurons that are recruited to process a given face. To further explore this, the authors may consider testing if perceptual learning of face identities can have a similar effect on the bias patterns. This would ameliorate some of the difficulties of using real-life faces with uncontrolled variability between identities (which seems to come up in Experiment 2), and nicely complement the current work: e.g. we see that this change happens from naturalistic interaction, and clarify exactly how it happens via controlled experiment.

A finer point, but I think that attentional effects should be discussed in the context of these findings. Could increased attention to a familiar face explain this pattern of results? Is familiarity a property of visual processing that is contingent on attention, or do the authors believe that familiarity reshapes the underlying representation of these faces?

Finer points on the manuscript:

- The paragraph starting with line 78 should be fleshed out with references beyond the authors' own papers to support and extend their claims on the timecourse of face processing, identity-diagnostic features, etc.

- The authors introduce the hypothesis that 'facial features that are diagnostic for identity are processed more efficiently for familiar as compared to unfamiliar faces' (ln 91), but this feature-based processing framework is not really brought up again, and/or I am not following how it connects to the rest of the paper.

- Figure 1/2. It would be helpful to have the schematics illustrating the stimulus locations presented alongside the results graphs, in the same color scheme as the psychometric fit plots.

- Can the authors lay out more concretely the logic behind performing the two experiments? I can appreciate the differences in their design, but what, conceptually, do we learn in Experiment 1 that we do not also learn in Experiment 2?

- It is somewhat surprising that 2/6 participants in Experiment 1 were at chance in this task, which used morphs up to 100% of the face identities. Can the authors explain this?

- Many measures are used to quantify familiarity; what was the justification for using these (vs. a single reliable measure), and was there agreement/reliability between them? This data would be useful in a supplement.

- Given the relatively small number of subjects in this dataset and the nature of the question (intra vs. intersubject variability), it would be helpful to see data from (a) the same subject across sessions in Figure 2, and (b) data from all subjects somewhere, either in the main text or a supplement.

- In Figure 2, it is not obvious what each of the dots for each subject corresponds to (positions? morph levels?). Can the authors (a) clarify this in the legend, (b) add a subject average across these data points, and (c) clarify whether the depicted correlation is between subject averages or these depicted points?

- I am not sure what to make of the increased heterogeneity reported for some identities in Experiment 2 (paragraph ending in ln 318). Can the authors unpack this finding?

- The authors should perform a nested model comparison (e.g. likelihood ratio test) to more accurately compare the predictive value of the single- or two-factor model of familiarity (ln 328). A comparison of explained variance is insufficient here, as more factors almost necessarily explain more variance.
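For concreteness, the suggested comparison could look as follows (a generic likelihood-ratio test sketch with hypothetical log-likelihood values, not the manuscript's actual analysis):

```python
from scipy import stats

def likelihood_ratio_test(ll_reduced, ll_full, df_diff):
    """Compare two nested models via a likelihood-ratio test.

    ll_reduced / ll_full: maximized log-likelihoods of the single-
    and two-factor models; df_diff: difference in parameter count.
    """
    lr = 2.0 * (ll_full - ll_reduced)          # LR statistic
    p = stats.chi2.sf(lr, df_diff)             # asymptotic chi-square p-value
    return lr, p

# Hypothetical fitted log-likelihoods for the two nested models:
lr, p = likelihood_ratio_test(ll_reduced=-120.3, ll_full=-115.9, df_diff=1)
```

Unlike a raw comparison of explained variance, this test penalizes the larger model through the chi-square degrees of freedom.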

- Figure 3: the data points plotted to underlie these psychometric curves seem to not be particularly well-fit (for example, the green dots of the middle panel). Can the authors comment on this?
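Psychometric fits of this kind are typically obtained by fitting a cumulative-logistic function to response proportions across morph levels; a minimal sketch (generic, with hypothetical data, not the authors' fitting code):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import expit

def psychometric(x, mu, sigma):
    # Cumulative logistic: probability of a "second identity"
    # response as a function of morph level (0-100%).
    return expit((x - mu) / sigma)

morph = np.array([0, 17, 33, 50, 67, 83, 100], dtype=float)
# Hypothetical response proportions at each morph level:
p_resp = np.array([0.02, 0.08, 0.25, 0.55, 0.80, 0.94, 0.99])

# mu is the point of subjective equality, sigma the slope parameter.
(mu, sigma), _ = curve_fit(psychometric, morph, p_resp, p0=[50.0, 10.0])
```

Plotting the raw proportions against such a fitted curve makes any lack of fit (as noted for the green points) directly visible.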

- As in Figure 2, a comparison of a subject's fits across sessions seems appropriate here, as do subject averages in panel B and a clearer legend of panel B, including whether the correlation depicted is run on a subject average or these individual points.

- Figure 4: in B, do these dots now depict individual subjects?

Citation
Idiosyncratic, Retinotopic Bias in Face Identification Modulated by Familiarity
Matteo Visconti di Oleggio Castello, Morgan Taylor, Patrick Cavanagh, M. Ida Gobbini
eNeuro 1 October 2018, 5 (5) ENEURO.0054-18.2018; DOI: 10.1523/ENEURO.0054-18.2018

Keywords

  • face processing
  • familiar faces
  • familiarity
  • retinotopy
  • social
  • vision

Copyright © 2023 by the Society for Neuroscience.
eNeuro eISSN: 2373-2822
