Neural encoding of relative position

J Exp Psychol Hum Percept Perform. 2011 Aug;37(4):1032-50. doi: 10.1037/a0022338.

Abstract

Late ventral visual areas consist largely of cells with a significant degree of translation invariance. Such a "bag of features" representation is useful for recognizing individual objects, but it seems unable to explain our ability to parse a scene into multiple objects and to understand their spatial relationships. We review several schemes (e.g., global features and serial attention) for reconciling a bag-of-features representation with our ability to understand relationships, and contrast them with structural description theories, which instead propose that a neural binding mechanism assigns the features of each object in a scene to a separate "slot" to which relational information for that object is explicitly bound. Four functional magnetic resonance imaging adaptation experiments assessed how ventral stream regions respond to rearrangements of two objects in a minimal scene, where the rearrangements depicted either scene translations or changes in the objects' relative positions. Changes of relative position (e.g., elephant above bus changing to bus above elephant) produced larger releases of adaptation in the anterior lateral occipital complex (LOC) than physically equivalent translations, providing evidence that spatial relations are explicitly encoded in the anterior LOC, in agreement with structural description theories.
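
The following minimal sketch illustrates the logic of the adaptation contrast described in the abstract; it is not the authors' analysis pipeline. It assumes per-subject percent-signal-change estimates for an anterior LOC region of interest have already been extracted for three second-presentation conditions; the variable names and numerical values are hypothetical placeholders.

```python
import numpy as np

# Hypothetical per-subject ROI amplitudes (percent signal change) for the
# second presentation in each condition:
#   "repeat"      - identical scene repeated (maximal adaptation expected)
#   "translation" - same object pair shifted as a whole in the visual field
#   "relational"  - relative position swapped (e.g., elephant-above-bus
#                   becomes bus-above-elephant)
roi_psc = {
    "repeat":      np.array([0.21, 0.18, 0.25, 0.20, 0.23]),
    "translation": np.array([0.30, 0.27, 0.33, 0.29, 0.31]),
    "relational":  np.array([0.45, 0.41, 0.48, 0.44, 0.47]),
}

def release_of_adaptation(changed, repeated):
    """Release of adaptation: response to a changed scene minus the
    (adapted) response to an exact repetition."""
    return changed - repeated

rel_release = release_of_adaptation(roi_psc["relational"], roi_psc["repeat"])
trans_release = release_of_adaptation(roi_psc["translation"], roi_psc["repeat"])

# The key comparison: does a change of relative position release adaptation
# more than a physically equivalent translation of the same objects?
diff = rel_release - trans_release
print(f"mean release (relational):  {rel_release.mean():.3f}")
print(f"mean release (translation): {trans_release.mean():.3f}")
print(f"mean difference:            {diff.mean():.3f}")
```

A reliably positive difference in such a contrast would be the kind of result the abstract reports for the anterior LOC, interpreted there as evidence for explicit encoding of relative position.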

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Attention / physiology*
  • Brain Mapping*
  • Cerebral Cortex / physiology*
  • Discrimination, Psychological / physiology*
  • Exploratory Behavior
  • Humans
  • Magnetic Resonance Imaging
  • Photic Stimulation
  • Recognition, Psychology
  • Reference Values
  • Space Perception / physiology*
  • Visual Pathways / physiology