
  • Brief Communication

Neural pattern similarity reveals the inherent intersection of social categories

Abstract

We provide evidence that neural representations of ostensibly unrelated social categories become bound together by their overlapping stereotype associations. While viewing faces, multi-voxel representations of gender, race, and emotion categories in the fusiform and orbitofrontal cortices were stereotypically biased and correlated with subjective perceptions. The findings suggest that social-conceptual knowledge can systematically alter the representational structure of social categories at multiple levels of cortical processing, reflecting bias in visual perceptions.


Figure 1: Task designs for study 1 (n = 17) and study 2 (n = 26).
Figure 2: Study 1 results (n = 17).
Figure 3: Study 2 fMRI results (n = 26).



Acknowledgements

We thank D. Wagner, J. Haxby, A. Connolly, Z. Ingbretsen, and Y. Xie for insights and assistance in data analysis and D. Amodio and L. Hackel for comments. This work was supported in part by research grant NSF BCS-1423708 (J.B.F.).

Author information


Contributions

R.M.S. and J.B.F. conceived the studies and contributed to study design, analytic strategy, and interpretation of the data. R.M.S. analyzed the data, with the advice of J.B.F. R.M.S. and J.B.F. wrote the paper.

Corresponding authors

Correspondence to Ryan M Stolier or Jonathan B Freeman.

Ethics declarations

Competing interests

The authors declare no competing financial interests.

Integrated supplementary information

Supplementary Figure 1 Task design

Task design for Study 1 (N = 17) and Study 2 (N = 26) in expanded form (see also Fig. 1 in main text). (a) Event-related fMRI task. Each encoding event included 3 exemplars from 1 condition. Additional probe events ensured participants’ visual attention (see Online Methods). (b) Mouse-tracking behavioral task. On each trial, subjects click a ‘Start’ button, after which a face appears and they make a categorization response. The trajectory of their hand movement is recorded en route to the selected response. A trajectory’s maximum deviation toward the opposite category response (on the opposite side of the screen) indexes the degree to which that category was activated during perception. In this hypothetical example, a Black female face elicits a trajectory that initially deviates toward the ‘Male’ response, because shared stereotypes between Black and male categories bias perceptions of Black faces toward male categorization. For this example, in our RSA framework, a hypothetical bias to perceive Black faces more similarly to male faces would correspond to a greater similarity between Black and Male categories in the subjective DM.
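The maximum-deviation index described above can be computed in a few lines. The sketch below is illustrative only (the study used dedicated mouse-tracking software, and `max_deviation` is our hypothetical helper name): it measures the signed perpendicular distance of each sample of a cursor trajectory from the straight line joining its start and end points, and returns the largest one.

```python
import numpy as np

def max_deviation(trajectory):
    """Signed maximum perpendicular deviation of a 2-D cursor trajectory
    from the straight line joining its first and last points. Larger
    absolute values index greater attraction toward the unselected
    response. Illustrative sketch, not the study's exact pipeline."""
    traj = np.asarray(trajectory, dtype=float)
    start, end = traj[0], traj[-1]
    direction = (end - start) / np.linalg.norm(end - start)
    rel = traj - start
    # signed perpendicular distance via the 2-D cross product
    perp = rel[:, 0] * direction[1] - rel[:, 1] * direction[0]
    return perp[np.argmax(np.abs(perp))]

# A trajectory that bows away from the direct start-to-response path,
# and a perfectly straight one for comparison
curved = [(0.0, 0.0), (0.1, 0.6), (0.5, 0.9), (1.0, 1.0)]
straight = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]
print(max_deviation(curved), max_deviation(straight))
```

A straight movement yields zero deviation; the sign distinguishes which side of the direct path the bow falls on, so deviation toward the opposite category response can be scored consistently across trials.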

Supplementary Figure 2 Creation and summary of subjective DM in study 1

Study 1 subjective DM creation and summary (N = 17). (a) Mouse trajectories were used to compute similarity vectors of each condition to the 7 main categories (e.g., trajectory deviation toward Angry vs. Happy measures similarity to each category). Dissimilarity between all 12 conditions was computed as correlational distance (1 – Pearson r) between each of their vectors (see Online Methods). An example computation finding Happy-Asian-Females to be more similar to Happy-Asian-Males than Angry-Asian-Males is depicted. For ease of understanding, similarity (Pearson r) rather than dissimilarity (1 – Pearson r) values are depicted. (b) The resulting 12×12 subjective DM. (c) Means and (d) variability (SD) of the subjective 7×7 DM showing similarity of the 7 main categories underlying the subjective 12×12 DM.
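The correlational-distance step described above amounts to a single correlation-matrix call. A minimal sketch (the function name is ours, not the authors'):

```python
import numpy as np

def correlational_distance_dm(similarity_vectors):
    """Build a dissimilarity matrix from one similarity vector per
    condition, using correlational distance (1 - Pearson r) between
    rows. Minimal sketch of the computation described in the caption."""
    v = np.asarray(similarity_vectors, dtype=float)
    return 1.0 - np.corrcoef(v)

# Toy example: 12 conditions, each described by a similarity vector
# to the 7 main categories
rng = np.random.default_rng(0)
dm = correlational_distance_dm(rng.normal(size=(12, 7)))
print(dm.shape)  # (12, 12)
```

The result is symmetric with a zero diagonal, as a dissimilarity matrix should be, since each condition correlates perfectly with itself.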

Supplementary Figure 3 Study 1 behavioral RSA results

Study 1 behavioral RSA results (N = 17). To visually illustrate the positive linear relationship, a scatterplot of the Pearson correlation from Fig. 2a is depicted (identical to the Spearman correlation; r(19) = .474, p = .023). For ease of understanding, we depict similarity (Pearson r) rather than dissimilarity (1 – Pearson r) values.
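The DM-to-DM comparison behind these results is a rank correlation over the unique off-diagonal cells of the two matrices. Below is a self-contained sketch of that standard RSA step; it omits tie correction (reasonable for continuous distances) and may differ from the authors' exact pipeline:

```python
import numpy as np

def _ranks(x):
    # rank transform (no tie correction; distances here are continuous)
    r = np.empty(len(x))
    r[np.argsort(x)] = np.arange(len(x))
    return r

def rsa_spearman(dm_a, dm_b):
    """Spearman correlation between the lower-triangle cells of two
    dissimilarity matrices -- the standard RSA model comparison.
    Sketch under the stated no-ties assumption."""
    a, b = np.asarray(dm_a, float), np.asarray(dm_b, float)
    tri = np.tril_indices_from(a, k=-1)
    ra, rb = _ranks(a[tri]), _ranks(b[tri])
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# A monotone transform of a DM preserves the Spearman correlation
rng = np.random.default_rng(0)
m = np.abs(rng.normal(size=(7, 7)))
dm = m + m.T
np.fill_diagonal(dm, 0.0)
print(rsa_spearman(dm, dm ** 2))
```

Using ranks rather than raw values is why the caption can note that the Pearson and Spearman correlations coincide here: monotone relationships are all that Spearman is sensitive to.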

Supplementary Figure 4 Study 2 group-level normative subjective DM

Study 2 group-level normative subjective DM (N = 26). Average of 26 subjects' idiosyncratic subjective DMs. Idiosyncratic subjective DMs (Supplementary Fig. 6) were used to predict neural pattern similarity in Study 2 over and above visual DMs and this group-level normative DM to assess their unique contributions. This figure also shows the stimuli from each condition in Study 2. To reduce the confound of visual similarity between social categories, all stimuli were gray-scaled and then matched on luminance and contrast. For each category (Race × Sex × Emotion), 32 exemplars were used as stimuli in the scanner. Analyses additionally controlled for visual similarity through visual model DMs (see Online Methods and Supplementary Fig. 7).
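One simple way to match gray-scale images on luminance and contrast, as described above, is to z-score each image's pixels and rescale to a common target mean and SD. This is a simplified sketch of that idea, not the authors' exact procedure (production pipelines typically use more careful histogram-matching methods):

```python
import numpy as np

def match_luminance_contrast(images):
    """Rescale each gray-scale image to the set's average mean
    luminance and contrast (pixel SD). Simplified illustration of
    the matching step described in the caption."""
    imgs = [np.asarray(im, dtype=float) for im in images]
    target_mean = np.mean([im.mean() for im in imgs])
    target_sd = np.mean([im.std() for im in imgs])
    return [(im - im.mean()) / im.std() * target_sd + target_mean
            for im in imgs]

# Toy example: three images with different brightness and contrast
rng = np.random.default_rng(0)
raw = [rng.uniform(size=(8, 8)) * k for k in (1.0, 2.0, 3.0)]
matched = match_luminance_contrast(raw)
print([round(im.mean(), 3) for im in matched])
```

After matching, every image shares the same mean luminance and pixel SD, so low-level brightness and contrast differences cannot masquerade as category information.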

Supplementary Figure 5 Variability in inter-category similarities (Pearson r) in subjective and stereotype DMs in study 2

Variability in inter-category similarities (Pearson r) in subjective and stereotype DMs in Study 2 (N = 26), where subjects’ unique stereotype DMs were used to predict their subjective DMs. The subjective DMs, in turn, were used to predict neural-pattern similarity structure. (a) Variability in inter-category similarity in subjective perceptions as measured by mouse-tracking. (b) Variability in inter-category similarity in stereotype contents as measured by stereotype ratings.

Supplementary Figure 6 Study 2 subjective and stereotype DMs depicting inter-category similarities

Study 2 subjective DMs (12×12 DMs, left; collapsed 7×7 DMs, middle) and stereotype DMs (7×7, right), depicting inter-category similarities (Pearson r). The normative group-level subjective and stereotype DMs (top row) are the average of these DMs across all subjects. The bottom three rows provide examples of individual DMs from several representative subjects (s1, s2, s3). The central tendency and variance of subjective and stereotype DMs in Study 2 are summarized in Supplementary Fig. 5.

Supplementary Figure 7 Study 2 visual model DMs depicting inter-category similarities

Study 2 visual model DMs (12×12 DMs, left; collapsed 7×7 DMs, right), depicting inter-category similarities (Pearson r). These DMs model the inherent visual similarities between stimulus conditions, with the image silhouette DM modeling similarities in retinotopic outlines across conditions, the pixel-intensity map DM modeling pixel intensity similarities across conditions, and the HMAX C-2 output DM modeling high-level ventral-temporal representation of stimuli across conditions.

Supplementary Figure 8 Study 2 behavioral RSA results

Study 2 behavioral RSA results (N = 26). A positive relationship between idiosyncratic 7×7 stereotype and subjective DMs was obtained while controlling for three visual models (visual DMs) and the normative group-level stereotype DM (b = .28, SE = .11, z = 2.47, p = .014; tested with a multi-level regression model; see Online Methods). For illustrative purposes, each subject's stereotype and subjective DM data (inter-category similarities) are plotted alongside their linear slope (using ordinary least-squares), with subjects' intercepts equated to permit visual inspection of variability in slopes. For ease of understanding, similarity (Pearson r) rather than dissimilarity (1 – Pearson r) values are depicted.
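A stripped-down stand-in for this analysis fits each subject's slope by ordinary least squares and then examines the slopes across subjects. The actual model was a multi-level regression with visual-DM and normative-DM covariates, which this sketch omits; `per_subject_slopes` is a hypothetical helper:

```python
import numpy as np

def per_subject_slopes(stereotype_sims, subjective_sims):
    """Per-subject OLS slope of subjective on stereotype inter-category
    similarities. Simplified stand-in for the paper's multi-level
    regression; covariates (visual DMs, normative DM) are omitted."""
    slopes = []
    for x, y in zip(stereotype_sims, subjective_sims):
        xc = np.asarray(x, float).ravel()
        yc = np.asarray(y, float).ravel()
        xc = xc - xc.mean()
        yc = yc - yc.mean()
        slopes.append(float((xc @ yc) / (xc @ xc)))
    return np.array(slopes)

# Toy check: if every subject's subjective similarities are a linear
# function of their stereotype similarities, the slopes recover it
rng = np.random.default_rng(0)
xs = [rng.normal(size=21) for _ in range(26)]  # 21 unique 7x7 cells
ys = [0.28 * x + 1.0 for x in xs]
print(per_subject_slopes(xs, ys).round(2))
```

Equating intercepts before plotting, as the caption describes, corresponds to subtracting each subject's own mean so that only the slopes (the quantity of interest) vary across the displayed lines.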

Supplementary information

Supplementary Text and Figures

Supplementary Figures 1–8 and Supplementary Tables 1 and 2 (PDF 1092 kb)

Supplementary Methods Checklist (PDF 501 kb)


About this article

Cite this article

Stolier, R., Freeman, J. Neural pattern similarity reveals the inherent intersection of social categories. Nat Neurosci 19, 795–797 (2016). https://doi.org/10.1038/nn.4296
