Abstract
When the eyes rotate during translational self-motion, the focus of expansion in optic flow no longer indicates heading, yet heading judgements are largely unbiased. Much emphasis has been placed on the role of extraretinal signals in compensating for the visual consequences of eye rotation. However, recent studies also support a purely visual mechanism of rotation compensation in heading-selective neurons. Computational theories support a visual compensatory strategy but differ in the visual depth cues they require. We examined the rotation tolerance of heading tuning in macaque area MSTd using two different virtual environments, a frontoparallel (2D) wall and a three-dimensional (3D) cloud of random dots. Both environments contained rotational optic flow cues (i.e., dynamic perspective), but only the 3D cloud stimulus contained local motion parallax cues, which are required by some models. The 3D cloud environment did not enhance the rotation tolerance of heading tuning for individual MSTd neurons, nor the accuracy of heading estimates decoded from population activity, suggesting a key role for dynamic perspective cues. We also added vestibular translation signals to optic flow, to test whether rotation tolerance is enhanced by non-visual cues to heading. We found no benefit of vestibular signals overall, but a modest effect for some neurons with significant vestibular heading tuning. We also find that neurons with more rotation-tolerant heading tuning are typically less selective to pure visual rotation cues. Together, our findings help to clarify the types of information that are used to construct heading representations that are tolerant to eye rotations.
SIGNIFICANCE STATEMENT To estimate one’s direction of translation (or heading) from optic flow, the brain must compensate for the effects of eye rotations on the optic flow field. We examined how visual depth cues and vestibular translation signals contribute to the rotation tolerance of heading tuning in macaque area MSTd. Contrary to the predictions of some computational models, we find that motion parallax cues in a 3D environment have little effect on the rotation tolerance of MSTd neurons. We also find that vestibular translation signals do not substantially enhance tolerance to rotation. Our findings support a dominant role for visual rotation (i.e., dynamic perspective) cues in constructing a rotation-tolerant representation of heading in MSTd.
Footnotes
The authors report no conflict of interest.
This work was supported by NIH grants EY01618 (to GCD) and DC014678 (to DEA), and an NEI CORE grant (EY001319).
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.