NeuroImage
Volume 147, 15 February 2017, Pages 507-516
Differential patterns of 2D location versus depth decoding along the visual hierarchy

https://doi.org/10.1016/j.neuroimage.2016.12.039

Abstract

Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants maintained fixation at the center of the screen while passively viewing the peripheral stimuli through red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location became increasingly tolerant to changes in depth in later areas, where depth information was likewise tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy.

Introduction

We live in a three-dimensional (3D) world, yet visual input is initially recorded in two dimensions (2D) on the retinas. How does our visual system transform this 2D retinal input into the cohesive 3D representation of space that we effortlessly perceive? A large body of research has provided insight into how our visual systems use different cues, such as binocular disparity, perspective, shading, and motion parallax, to perceive depth (Howard, 2012). What is less well understood is how position-in-depth information (hereafter referred to as depth location information) is integrated with 2D location to form a 3D perception of space.

Past research has demonstrated that 2D spatial information is represented throughout visual cortex and beyond. Both neurophysiology and functional neuroimaging studies have revealed a large number of regions in the brain sensitive to 2D visuo-spatial information: visual cortex is organized into topographic maps of 2D spatial location (Engel et al., 1994, Grill-Spector and Malach, 2004, Maunsell and Newsome, 1987, Sereno et al., 1995, Silver and Kastner, 2009, Wandell et al., 2007), and 2D location information can be decoded from fMRI response patterns in early, ventral, and dorsal visual areas (Carlson et al., 2011, Fischer et al., 2011, Golomb and Kanwisher, 2012, Kravitz et al., 2010, Schwarzlose et al., 2008).

Although depth perception is often treated as a separate field, many studies have also explored how and where depth information is represented in visual cortex. Binocular disparity and/or depth-sensitive responses have been reported in several visual regions in macaques (DeAngelis and Newsome, 1999, Hubel et al., 2015, Tsao et al., 2003) and humans (Backus et al., 2001, Ban et al., 2012, Dekker et al., 2015, Durand et al., 2009, Neri et al., 2004, Preston et al., 2008, Tsao et al., 2003, Welchman et al., 2005). Interestingly, while binocular disparity signals are found as early as V1, these signals are not thought to correspond to the perception of depth until later visual areas (Barendregt et al., 2015, Cumming and Parker, 1997, Cumming and Parker, 1999, Preston et al., 2008). These later visual areas (including V3A, V3B, V7, IPS, MT+, LO) have been shown to be sensitive to 3D object structure (Backus et al., 2001, Durand et al., 2009), differences in perceived depth (Neri et al., 2004, Preston et al., 2008), and the integration of different depth cues (Ban et al., 2012, Dekker et al., 2015, Murphy et al., 2013, Welchman et al., 2005). However, the nature of position-in-depth (spatial) representations remains less explored. Specifically, none of these studies has explored depth in the context of an integrated 3D representation of space, which requires combining and comparing information about position in depth with 2D location.

To our knowledge, our study is the first to combine and quantify both 2D and depth location information to investigate the visual representations and interactions of all three spatial dimensions. We used human functional MRI (fMRI) and multivariate pattern analysis (MVPA) to investigate how 3D spatial information is decoded throughout visual cortex. By "information", we mean explicit, large-scale differences in neural response patterns that can be detected with fMRI MVPA. Across two experiments, we explored 3D spatial representations throughout human visual cortex by comparing the amount of MVPA information about horizontal, vertical, and depth position, as well as the dependence or tolerance between these dimensions. The first experiment presented stimuli across the whole visual field and was more exploratory in nature. The second experiment presented stimuli within one quadrant of the visual field, to control for possible hemifield- or quadrant-based effects and to provide a replication test for the effects found in Experiment 1.
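To make the decoding and tolerance analyses concrete, here is a minimal sketch in Python of the general MVPA logic: cross-validated classification of position along one dimension, plus a cross-decoding test of tolerance across another. All data, sizes, and labels below are simulated assumptions for illustration, not the authors' actual pipeline.

```python
# A minimal sketch (not the authors' pipeline): cross-validated MVPA
# decoding of horizontal (X) position from simulated voxel patterns,
# plus a cross-decoding test of tolerance across depth (Z).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_blocks, n_voxels = 80, 200                 # hypothetical sizes
patterns = rng.standard_normal((n_blocks, n_voxels))
x_pos = rng.integers(0, 2, n_blocks)         # 0 = left, 1 = right
z_pos = rng.integers(0, 2, n_blocks)         # 0 = near, 1 = far

# Within-dimension decoding: classify X position from voxel patterns.
# Chance is 50% for two locations; above-chance accuracy would count
# as "X location information" in the MVPA sense used here.
acc = cross_val_score(LinearSVC(), patterns, x_pos, cv=5).mean()
print(f"X decoding accuracy: {acc:.2f}")

# Tolerance (cross-decoding): train the X classifier at one depth and
# test it at the other. Above-chance transfer would indicate X
# information that is tolerant to changes in depth position.
clf = LinearSVC().fit(patterns[z_pos == 0], x_pos[z_pos == 0])
transfer = clf.score(patterns[z_pos == 1], x_pos[z_pos == 1])
print(f"X decoding across depth: {transfer:.2f}")
```

In this scheme, within-dimension accuracy indexes how much information a region carries about each dimension, while above-chance cross-decoding transfer indexes the tolerance between dimensions.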

Section snippets

Overview

Our approach used human fMRI to investigate how 3D spatial information is decoded in visual cortex. By 3D spatial information, we mean information about both 2D and depth location. Specifically, we refer to stimulus locations that can be defined spatially in horizontal (X), vertical (Y), and depth (Z) coordinates. We focus on the simplest case where the observer’s eyes, head, and body remain stationary, and spatial position in each dimension can be expressed in terms of position relative to…
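As an illustration of this coordinate framing, the sketch below enumerates stimulus locations as (X, Y, Z) labels and lists the pairs of locations that differ along exactly one dimension; comparing response patterns for such pairs isolates information about that dimension. The two-levels-per-dimension grid is an assumption made for the example, not necessarily the experimental design.

```python
# Illustrative condition structure: a grid of 3D stimulus locations,
# each defined by horizontal (X), vertical (Y), and depth (Z) labels.
# Two levels per dimension is an assumption for the example.
from itertools import combinations, product

X = ("left", "right")    # horizontal position
Y = ("upper", "lower")   # vertical position
Z = ("near", "far")      # position in depth (e.g., disparity-defined)

locations = list(product(X, Y, Z))  # 8 locations, each an (X, Y, Z) tuple

# Pairs of locations differing along exactly one dimension isolate
# information about that dimension (the other two are held constant).
for a, b in combinations(locations, 2):
    diff = [i for i, (u, v) in enumerate(zip(a, b)) if u != v]
    if len(diff) == 1:
        print(f"{'XYZ'[diff[0]]} comparison: {a} vs {b}")
```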

Whole-brain comparison of X, Y, and Z location information

We first conducted an MVPA “searchlight” analysis (Kriegeskorte et al., 2006) to test where in the brain horizontal (X), vertical (Y), and depth (Z) information could be decoded (Fig. 2A). Searchlight maps were largely consistent across Experiments 1 and 2, with the exception that X information was more widespread in Experiment 1, likely reflecting regions that exhibit broad contralateral, hemisphere-based information.

As expected, most of the location information for all three dimensions was…
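For readers unfamiliar with the searchlight method referenced above, the following schematic sketch shows its core logic on a toy volume: a small sphere is swept across the brain, a cross-validated classifier is trained on the voxels inside each sphere, and the resulting accuracy is written to the sphere's center voxel, yielding a whole-brain information map. Sizes, labels, and the pure-NumPy implementation are illustrative assumptions; real analyses operate on anatomically masked fMRI data.

```python
# Schematic searchlight (after Kriegeskorte et al., 2006) on a toy
# volume; all sizes and labels are illustrative assumptions. Real
# analyses run on masked fMRI data (see, e.g., nilearn's SearchLight).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
shape, n_blocks, radius = (10, 10, 10), 40, 2
volumes = rng.standard_normal((n_blocks, *shape))  # one volume per block
labels = rng.integers(0, 2, n_blocks)              # e.g., near vs. far

# Precompute voxel offsets inside a sphere of the given radius.
r = range(-radius, radius + 1)
offsets = [(i, j, k) for i in r for j in r for k in r
           if i * i + j * j + k * k <= radius * radius]

accuracy_map = np.zeros(shape)
for cx, cy, cz in np.ndindex(*shape):
    # Collect in-bounds voxels inside the sphere centered here.
    voxels = [(cx + i, cy + j, cz + k) for i, j, k in offsets
              if 0 <= cx + i < shape[0]
              and 0 <= cy + j < shape[1]
              and 0 <= cz + k < shape[2]]
    sphere = np.stack([volumes[:, x, y, z] for x, y, z in voxels], axis=1)
    # Cross-validated decoding accuracy is assigned to the center voxel.
    accuracy_map[cx, cy, cz] = cross_val_score(
        LinearSVC(), sphere, labels, cv=4).mean()
```

The result is an accuracy map in which each voxel summarizes how well its local neighborhood distinguishes the conditions; maps of this kind underlie the whole-brain comparisons described above.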

General Discussion

Our study provides the first direct investigation of the interactions between 2D location and position-in-depth information in human visual cortex. While many previous studies have explored the decoding of 2D location information in different visual areas, here our focus was on how the decoding of depth location varies along the visual hierarchy, particularly with respect to how it compares to (and interacts with) 2D location information. We found that depth location information was not…

Acknowledgements

This work was supported by research grants from the National Institutes of Health (R01-EY025648 to J.G.) and Alfred P. Sloan Foundation (BR-2014-098 to J.G.). We thank C. Kupitz and A. Shafer-Skelton for assistance with programming and subject testing; J. Mattingley, J. Todd, B. Harvey, J. Fischer, and A. Leber for helpful discussion; and the OSU Center for Cognitive and Behavioral Brain Imaging for research support.

References (48)

  • D.J. Aks et al. Visual search for size is influenced by a background texture gradient. Journal of Experimental Psychology: Human Perception and Performance (1996)
  • B.T. Backus et al. Human cortical activity correlates with stereoscopic depth perception. Journal of Neurophysiology (2001)
  • H. Ban et al. The integration of motion and disparity cues to depth in dorsal visual cortex. Nature Neuroscience (2012)
  • M. Barendregt et al. Transformation from a retinal to a cyclopean representation in human visual cortex. Current Biology (2015)
  • D.H. Brainard. The Psychophysics Toolbox. Spatial Vision (1997)
  • B.G. Cumming et al. Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature (1997)
  • B.G. Cumming et al. Binocular neurons in V1 of awake monkeys are selective for absolute, not relative, disparity. The Journal of Neuroscience (1999)
  • G.C. DeAngelis et al. Organization of disparity-selective neurons in macaque area MT. The Journal of Neuroscience (1999)
  • R.C. deCharms et al. Neural representation and the cortical code. Annual Review of Neuroscience (2000)
  • S.A. Engel et al. fMRI of human visual cortex. Nature (1994)
  • D.J. Felleman et al. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex (1991)
  • S. Ferraina et al. Disparity sensitivity of frontal eye field neurons. Journal of Neurophysiology (2000)
  • N.J. Finlayson et al. Visual search is influenced by 3D spatial layout. Attention, Perception, & Psychophysics (2015)
  • J. Fischer et al. The emergence of perceived position in the visual system. Journal of Cognitive Neuroscience (2011)