Antagonistic comparison of temporal frequency filter outputs as a basis for speed perception

Vision Res. 1994 Jan;34(2):253-65. doi: 10.1016/0042-6989(94)90337-9.

Abstract

The prevailing view of motion detection in human vision is that the retinal image is convolved with each of a set of spatiotemporal filters and that perceived speed emerges from a process of pooling the outputs of these filters. Such a system can operate only if multiple filters exist; ideally, the filters should also be fairly narrowly tuned in both spatial and temporal frequency. These constraints are met in the case of spatial frequency. However, several studies suggest that multiple, finely tuned temporal filters do not exist; instead, only two (perhaps three) broadly tuned temporal mechanisms can be identified. We report experiments concerning the effects of adaptation to motion on perceived speed. Perceived speed is shown to be increased by adaptation in some circumstances and decreased in others. We then present a computational model in which a temporal frequency code, on which perceived speed is presumed to be based, is derived by antagonistic comparison of the responses of two psychophysically plausible, broadly tuned temporal mechanisms. The model, which incorporates the effects of motion adaptation on the sensitivities of the filters and the subsequent comparison of their outputs, is shown to give a good fit to the empirical data.
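
The opponent scheme described in the abstract lends itself to a compact numerical illustration. The sketch below is a minimal Python rendering of the idea, assuming one broadly tuned low-pass and one broadly tuned band-pass temporal mechanism with illustrative (not fitted) tuning curves, a simple gain-reduction rule for adaptation, and a bounded ratio of the two responses as the temporal-frequency (speed) code. All function names and parameter values are assumptions for illustration, not the model's published parameters.

    import numpy as np

    # Hypothetical temporal-frequency tuning of the two broadly tuned mechanisms
    # (shapes and constants are illustrative assumptions, not fitted values).
    def lowpass_sensitivity(tf_hz):
        """Low-pass mechanism: sensitivity falls off with temporal frequency."""
        return 1.0 / (1.0 + (tf_hz / 8.0) ** 2)

    def bandpass_sensitivity(tf_hz):
        """Band-pass mechanism peaking near 8 Hz (assumed log-Gaussian shape)."""
        log_tf = np.log2(np.maximum(tf_hz, 1e-6))
        return np.exp(-0.5 * ((log_tf - 3.0) / 1.2) ** 2)

    def speed_code(tf_hz, gain_lp=1.0, gain_bp=1.0):
        """Antagonistic (ratio) comparison of the two filter outputs.
        Larger values correspond to a higher temporal-frequency / speed code."""
        r_lp = gain_lp * lowpass_sensitivity(tf_hz)
        r_bp = gain_bp * bandpass_sensitivity(tf_hz)
        return r_bp / (r_bp + r_lp)  # opponent ratio, bounded in [0, 1]

    def adapted_gain(baseline_gain, response_to_adaptor, k=0.5):
        """Adaptation lowers a filter's sensitivity in proportion to its own
        response to the adapting stimulus (an illustrative rule only)."""
        return baseline_gain / (1.0 + k * response_to_adaptor)

    # Example: adapt to a 2 Hz drifting pattern, then test at 8 Hz.
    adapt_tf, test_tf = 2.0, 8.0
    g_lp = adapted_gain(1.0, lowpass_sensitivity(adapt_tf))
    g_bp = adapted_gain(1.0, bandpass_sensitivity(adapt_tf))

    before = speed_code(test_tf)
    after = speed_code(test_tf, gain_lp=g_lp, gain_bp=g_bp)
    print(f"speed code before adaptation: {before:.3f}, after: {after:.3f}")

In this toy parameterization, a low-temporal-frequency adaptor depresses the low-pass mechanism more than the band-pass one, so the ratio code for the test pattern rises (perceived speed increases), whereas adapting at a high temporal frequency shifts the ratio the other way, qualitatively matching the pattern of increases and decreases summarized in the abstract.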

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adaptation, Ocular / physiology
  • Contrast Sensitivity / physiology
  • Humans
  • Models, Neurological
  • Motion Perception / physiology*
  • Pattern Recognition, Visual / physiology
  • Time Factors