Research Article: New Research, Sensory and Motor Systems

Speed Estimation for Visual Tracking Emerges Dynamically from Nonlinear Frequency Interactions

Andrew Isaac Meso, Nikos Gekas, Pascal Mamassian and Guillaume S. Masson
eNeuro 25 April 2022, 9 (3) ENEURO.0511-21.2022; https://doi.org/10.1523/ENEURO.0511-21.2022
Andrew Isaac Meso
1 Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King’s College, London SE5 8AF, United Kingdom
4 Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique and Aix-Marseille Université, Marseille 13005, France
Nikos Gekas
2 Department of Psychology, Edinburgh Napier University, Edinburgh EH11 4BN, United Kingdom
Pascal Mamassian
3 Laboratoire des Systèmes Perceptifs, Département d’Études Cognitives, École Normale Supérieure, Paris Sciences et Lettres University, Centre National de la Recherche Scientifique, Paris 75005, France
Guillaume S. Masson
4 Institut de Neurosciences de la Timone, Centre National de la Recherche Scientifique and Aix-Marseille Université, Marseille 13005, France

Figures & Data

Figures

Figure 1.

Rationale for probing motion integration properties for ocular tracking. a, Motion stimuli are defined by their mean spatial and temporal frequencies in Fourier space. A set of 15 component stimuli was generated, tiling the spatiotemporal frequency space (1 to 15 correspond to c1 to c15, detailed in Table 1). The component motion inputs were selected to cover three ranges of spatiotemporal frequency scales (low, medium, high; red-green-blue color code). They were also aligned, in groups of three, along five different speed axes (from 11°/s to 53°/s; hue saturation code). b, Ocular tracking responses to either a component Motion Cloud (MC, top) or a component Drifting Grating (DG, bottom) of the same mean spatiotemporal frequencies and speed (stimulus c1 in panel a). Gray curves are individual trials. Green curves are the average across ∼150 valid trials for participant S01. c, Pattern stimuli were generated by summing several (two or three) of these components. For instance, a triplet can be oriented along the iso-velocity line or along the scale axis. Patterns were built from either component DGs or component MCs. Notice that both pattern stimuli have the same mean spatial and temporal frequency as well as the same mean speed (here 24°/s). d, Individual and mean eye velocity profiles of tracking responses to either pattern stimulus. Continuous blue/orange lines are the observed mean eye velocity profiles. Broken lines correspond to the linear prediction, computed as the mean of the responses to each component.
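As a minimal illustration of the linear prediction described above (the mean of the responses to each component), the following Python sketch computes it from component mean traces. The variable names and the placeholder traces are hypothetical; this is not the authors' analysis code.

```python
import numpy as np

def mean_trace(trials):
    """Mean eye-velocity trace across valid trials (trials: n_trials x n_samples)."""
    return np.nanmean(np.asarray(trials, dtype=float), axis=0)

def linear_prediction(component_mean_traces):
    """Linear prediction for a pattern: the average of the component mean traces."""
    return np.mean(np.stack(component_mean_traces), axis=0)

# Hypothetical usage with two placeholder component traces (1 kHz sampling, 400 ms).
t = np.arange(0.0, 0.4, 0.001)
comp_a = 5.0 * (1.0 - np.exp(-np.clip(t - 0.1, 0.0, None) / 0.05))
comp_b = 3.0 * (1.0 - np.exp(-np.clip(t - 0.1, 0.0, None) / 0.05))
predicted = linear_prediction([comp_a, comp_b])
```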

Figure 2.

Dynamics of spatiotemporal frequency tuning of ocular tracking responses. a, Mean eye velocity profiles of tracking responses to each of the component MCs (continuous lines) or DGs (dotted lines). At each location in the spatiotemporal frequency space, grating and MC stimuli have identical mean spatial frequency, temporal frequency, and speed, but different spreads around these means. b, Early (left) and late (right) temporal frequency (TF) tuning of ocular tracking responses to either component MCs (top) or DGs (bottom). Data are mean (±SD) eye velocities across participants. c, Same plots, but for spatial frequency (SF) tuning.
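For readers reproducing tuning curves like those in panels b and c, the sketch below extracts a mean eye velocity within a time window; the specific window bounds shown in the usage comment are illustrative assumptions, not values taken from this legend.

```python
import numpy as np

def window_mean(trace, t, t_start, t_stop):
    """Mean eye velocity within the time window [t_start, t_stop), in seconds."""
    trace = np.asarray(trace, dtype=float)
    t = np.asarray(t, dtype=float)
    mask = (t >= t_start) & (t < t_stop)
    return np.nanmean(trace[mask])

# Hypothetical usage: one early and one late window per component stimulus.
# early = window_mean(trace, t, 0.14, 0.19)
# late  = window_mean(trace, t, 0.24, 0.29)
```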

Figure 3.

a, Distribution of the 15 component DGs and MCs relative to both the scale and speed axes (same as Fig. 1a, with added labels for speed and scale). b, Mean eye velocity profiles for component MCs (top) and DGs (bottom) with mean speed increasing from 11°/s to 56°/s but sharing the same medium scale range. c, Mean (±SD across participants) eye velocity, as a function of stimulus speed, for three different scale ranges. d, Mean eye velocity profiles for component MCs (top) and DGs (bottom) of identical mean speed but increasing scale range. e, Scale tuning of ocular tracking responses to either component DGs or MCs (same color hue/saturation code).

Figure 4.

Temporal dynamics of ocular tracking tuning. a, Spatiotemporal tuning of response amplitude computed across the 15 component DGs for a given temporal window (here [250–300 ms]). The spatiotemporal tuning was approximated by a quadric surface (surface plot on the left, contour plot on the right). From these quadratic fits, we extracted the main axis (blue curve), which tracks the pairs of spatiotemporal frequency coordinates along which the amplitude changes the least for each scale within the fitted range. We also estimated the maximum scale-speed axis (red curve) by collecting the scale corresponding to the maximum eye velocity for each stimulus speed. b, Best-fitting spatiotemporal tuning surfaces for component MC stimuli, computed across six participants and for four successive time windows, from [100–150 ms] to [250–300 ms] after stimulus onset. c, Same as in b, but for component DG stimuli. d, For each participant, the angle Θ of the maximum scale-speed axis is plotted for five time windows, for both DG (pink) and MC (green) stimuli. Larger dots with error bars show mean ± SD (across participants). The first time window, before response onset, serves as baseline. e, For each participant, the angle Θ of the main axis of the spatiotemporal tuning surface is plotted for five time windows.
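The quadric-surface approximation can be reproduced with an ordinary least-squares fit over the 15 (log sf, log tf, amplitude) points per time window. The parameterization below is a generic quadric; the authors' exact parameterization and choice of log base are not specified in the legend, so this is a sketch rather than their implementation.

```python
import numpy as np

def fit_quadric(log_sf, log_tf, amplitude):
    """Least-squares fit of a quadric surface
    A(x, y) = a + b*x + c*y + d*x**2 + e*y**2 + f*x*y,
    with x = log spatial frequency and y = log temporal frequency."""
    x = np.asarray(log_sf, dtype=float)
    y = np.asarray(log_tf, dtype=float)
    design = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    coeffs, *_ = np.linalg.lstsq(design, np.asarray(amplitude, dtype=float), rcond=None)
    return coeffs

def eval_quadric(coeffs, x, y):
    """Evaluate the fitted surface at (x, y)."""
    a, b, c, d, e, f = coeffs
    return a + b * x + c * y + d * x**2 + e * y**2 + f * x * y
```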

Figure 5.

a, b, Best-fit 2D polynomial functions describing the relationship between ocular following amplitude and both spatial and temporal frequencies. Data are mean response amplitudes across six participants. a, Component DGs. b, Component MCs. c, The Q index can be derived from the best-fit 2D polynomial functions, as indicated by the two equations. Individual Q indices were computed for each participant and each time window. Individual and mean (±SD across participants) values are plotted. Note that one outlier (MC condition) with a Q value of ∼22 at time 150 ms is not plotted, to allow comparison between the two conditions.

Figure 6.

Variability across trials of tracking responses. a, Distributions of early eye velocities for participant S02. Data are shown for three mean speeds (from left to right) and, for each speed, three different spatiotemporal frequencies and therefore three different scale ranges (color code). Motion stimuli were component MCs (top) or DGs (bottom). Continuous curves are best-fit Gaussian functions. b, Relationship between the best-fit parameters σ and μ. Data are mean (±SD) across six participants for each of the 15 motion components, for either MCs (top) or DGs (bottom). The color code indicates the scale range of each data point. From left to right, plots illustrate results obtained for the five successive time windows. The first time window ([50–100 ms], gray shaded) provides an estimate of baseline variability, as most of this measurement window covered preresponse time. Continuous black lines indicate the theoretical relationship where σ increases with the square root of μ.
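A minimal sketch of the per-condition Gaussian fit and of the square-root reference line described above; the scaling constant k is a free parameter of this sketch, not a value from the paper.

```python
import numpy as np
from scipy.stats import norm

def gaussian_fit(velocities):
    """Maximum-likelihood Gaussian fit (mu, sigma) to single-trial eye velocities
    measured within one time window."""
    mu, sigma = norm.fit(np.asarray(velocities, dtype=float))
    return mu, sigma

def sqrt_reference(mu, k=1.0):
    """Reference relationship where sigma grows with the square root of mu."""
    return k * np.sqrt(mu)
```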

Figure 7.

Speed or scale tuning of tracking response variability. CV values estimated for ocular responses to component MCs or DGs are plotted against each other, for six time windows starting at 50 ms (left column) and spanning 50 ms each. The first time window ([50–100 ms]) estimates the relative variabilities over a baseline, preresponse epoch (gray shaded). a, Each dot corresponds to a given component stimulus pair, for all participants. Colors correspond to the scale range. b, CV values are averaged across speeds and participants for each of the three scale ranges. Continuous contour lines enclose the variance across participants. c, CV values are averaged across scales and participants for each of the stimulus speeds.
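The CV is taken here as the standard coefficient of variation, σ/μ, of single-trial eye velocities within a time window (the usual definition; the legend does not restate it). A one-function sketch:

```python
import numpy as np

def coefficient_of_variation(velocities):
    """CV = sigma / mu of single-trial eye velocities within one time window."""
    v = np.asarray(velocities, dtype=float)
    return np.nanstd(v) / np.nanmean(v)
```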

Figure 8.

Tracking responses to pattern DGs (pDGs) and MCs (pMCs). a, Top, Three examples of triplets, aligned along the scale axis (pattern d), along the speed axis (pattern i), or in between (pattern e). The next two rows illustrate mean eye velocity profiles for these pattern MCs (middle, continuous lines) or DGs (bottom, dotted lines). Colored lines show the observed response profiles for participant S04. Gray lines show the linear prediction computed by averaging the responses to each component of the triplet. b, Top and bottom rows, Component distributions of the nine patterns (a to i) in the spatiotemporal space, for pMCs and pDGs, respectively. Different orientations and distances between components were used. c, The two rows plot, for each pattern type, the ratio between observed and predicted mean eye velocities over time. A ratio of 1 indicates that observed responses are equal to the linear prediction. Values above or below 1 indicate that the observed responses are larger or smaller than predicted, respectively. Data are mean (±SD) across six participants.
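A sketch of the observed-to-predicted ratio in panel c, using the linear prediction defined in Figure 1. The guard against the near-zero preresponse baseline is an implementation detail of this sketch, not a step described in the paper.

```python
import numpy as np

def observed_to_predicted_ratio(pattern_trace, component_mean_traces, eps=1e-6):
    """Sample-by-sample ratio between the observed pattern response and the
    linear prediction (mean of the component responses)."""
    prediction = np.mean(np.stack(component_mean_traces), axis=0)
    # Mask samples where the prediction is near zero (baseline) to avoid blow-ups.
    safe = np.where(np.abs(prediction) < eps, np.nan, prediction)
    return np.asarray(pattern_trace, dtype=float) / safe
```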

Figure 9.

Latency of nonlinear effects. a, Two examples of tracking responses for one representative participant, for pattern e composed of three components, either MCs (upper plot) or DGs (lower plot). Gray lines plot the theoretical eye velocity profiles obtained by averaging the tracking responses to each component independently. Vertical dotted lines indicate the first point in time (relative to target motion onset) at which observed and predicted tracking responses are significantly different. b, The time of separation between predicted and observed eye velocity profiles is plotted for each participant. For comparison, similar pattern MC (square) and DG (circle) conditions are shown together. Larger symbols with error bars are mean (±SD) across six participants for a given condition. The gray shaded area indicates the mean open-loop period of tracking responses. c, Distributions of separation times across the nine patterns and six participants, for both pattern DG (red) and MC (green) conditions. Closed red circles and green squares plot the median values (±IQR, the interquartile range) of separation times.
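A sketch of one way to estimate the separation time in panel b. The independent-samples t test and the consecutive-samples criterion are illustrative assumptions; the legend does not state the exact statistical test used.

```python
import numpy as np
from scipy.stats import ttest_ind

def separation_time(observed, predicted, t, alpha=0.05, n_consecutive=10):
    """First time at which observed and predicted single-trial eye velocities
    differ significantly for a run of consecutive samples.
    observed, predicted: arrays of shape (n_trials, n_samples); t: sample times."""
    n_samples = observed.shape[1]
    pvals = np.array([ttest_ind(observed[:, i], predicted[:, i]).pvalue
                      for i in range(n_samples)])
    run = 0
    for i, significant in enumerate(pvals < alpha):
        run = run + 1 if significant else 0
        if run >= n_consecutive:
            return t[i - n_consecutive + 1]
    return np.nan
```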

Figure 10.

Model. a, Tiling of spatiotemporal channels. Each channel is a bivariate normal function with its main axis oriented along a line of constant speed. All channels coding for the same speed are shown in the same color: blue for slow speeds and pink for fast speeds. The colored dots represent the centers of the channels. The black dots represent the coordinates of the component stimuli. b, Receptive field of a channel. The contour lines show the receptive field, in log-log Fourier space, of a channel centered at 0.5 c/° and 11.31 Hz. The white dots represent the coordinates of the component stimuli. c, Speed tuning of a channel. Each curve shows the speed tuning of the channel in b at one spatial frequency; different colors indicate different spatial frequencies, and the colors of the curves match the vertical lines in b. d–g, Representation of different stimulus types in the log-log Fourier space: component DG 1 (d), pDG d (e), component MC 1 (f), and pMC d (g). h–k, Normalized responses of all channels of the network to the respective stimuli of d–g.
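A sketch of a single channel's response, following the bivariate normal description above and using the σx, σt, and ρ values listed in Table 3; the positive correlation tilts the receptive field along a line of constant speed. The choice of log2 (octave) units is an assumption, not stated in the legend.

```python
import numpy as np

def channel_response(sf, tf, sf0, tf0, sigma_x=0.5, sigma_t=0.5, rho=0.6):
    """Response of one spatiotemporal channel, modelled as a bivariate normal
    in log-log Fourier space centred on (sf0, tf0)."""
    x = np.log2(np.asarray(sf, dtype=float) / sf0)
    y = np.log2(np.asarray(tf, dtype=float) / tf0)
    q = (x / sigma_x) ** 2 - 2 * rho * (x / sigma_x) * (y / sigma_t) + (y / sigma_t) ** 2
    return np.exp(-q / (2 * (1 - rho ** 2)))

# The channel illustrated in panel b, centred at 0.5 c/deg and 11.31 Hz:
resp = channel_response(sf=0.7, tf=15.0, sf0=0.5, tf0=11.31)
```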

Figure 11.

Model performance. a, Distribution of channel weights across the network. b, Best-fit interaction pattern. c, Comparison of response probabilities across time windows between experimental and model data, for one participant (S03). d, Ratios between observed and predicted mean eye velocities (plotted on a log scale) for pattern stimuli over time, shown for the experimental data, the model with interactions, and the model without interactions. Colors and labels are identical to Figure 6.

Figure 12.

Comparison between data and model for component stimuli. a, Distribution of observed (data) and predicted (model) eye velocities of ocular responses to a component MC moving at 24°/s, for participant S04. Color indicates scale range. The first and second rows illustrate early ([150–200 ms]) and late ([250–300 ms]) time windows. b, Same plots, but for a component DG of the same mean spatial and temporal frequency and mean speed (24°/s). c, d, Relationship between predicted and observed CV (coefficient of variation) across the three scales (color) and five speeds (saturation), for three time windows. Insets plot the same relationships between model and data, but for mean eye velocities. Oblique lines indicate a slope of 1 for the linear regression.

Figure 13.

Comparison between linear and nonlinear models of motion integration for speed estimation. AIC values were computed for each model, each component or pattern condition, and each participant. The mean (±SD across participants) difference between AICwith and AICwithout is plotted for all 15 components (c1 to c15) and nine patterns (a to i). Positive and negative values indicate that the nonlinear interaction model is worse or better, respectively, than a model without interactions. a, Color codes for either component (left) or pattern (right) stimuli. b, Component and pattern MCs. c, Component and pattern DGs.
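The AIC difference plotted here follows the standard definition AIC = 2k − 2 ln L (the legend does not restate it); a minimal sketch:

```python
def aic(log_likelihood, n_free_params):
    """Akaike information criterion: AIC = 2k - 2 ln L."""
    return 2 * n_free_params - 2 * log_likelihood

def delta_aic(loglik_with, k_with, loglik_without, k_without):
    """AIC(with interactions) - AIC(without interactions); as in the figure,
    negative values favour the nonlinear interaction model and positive values
    favour the model without interactions."""
    return aic(loglik_with, k_with) - aic(loglik_without, k_without)
```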

Tables

Table 1

    Speed (v0), spatial (sf0), and temporal (tf0) frequency parameters of each of the MCs

    MC #   Speed (°/s)   sf (c/°)   tf (Hz)   Relative distance (log)   Angle (°)
    c1     24.000        0.500      12.000    0.00                        0.0
    c2     35.522        0.379      13.459    1.00                      157.5
    c3     35.522        0.660      23.438    2.41                       67.5
    c4     16.215        1.149      18.626    3.13                       27.9
    c5     16.215        0.660      10.699    1.00                      −22.5
    c6     16.215        0.379       6.144    2.41                     −112.5
    c7     35.522        0.218       7.730    3.14                     −152.1
    c8     52.576        0.287      15.100    2.00                      157.5
    c9     52.576        0.660      34.690    3.66                       75.4
    c10    24.000        1.149      27.569    3.92                       45.0
    c11    10.956        2.000      21.911    5.04                       23.5
    c12    10.956        0.871       9.538    2.00                      −22.5
    c13    10.956        0.379       4.151    3.66                     −104.6
    c14    24.000        0.218       5.222    3.92                     −135.0
    c15    52.576        0.125       6.572    5.04                     −156.5
    • The first MC, c1, lies at the center of the stimulus space and is used as the reference stimulus. Also included are the relative distances (in log units) and the polar angles from the reference c1.
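A small consistency check on the table (three rows reproduced): each component's temporal frequency should equal its speed times its spatial frequency, and the polar angle from c1 can be recomputed from the frequency ratios (the angle does not depend on the log base). The normalization of the listed relative distances is an assumption on my part and is not recomputed here.

```python
import numpy as np

# Values copied from Table 1 above: (speed deg/s, sf c/deg, tf Hz).
components = {"c1": (24.000, 0.500, 12.000),
              "c2": (35.522, 0.379, 13.459),
              "c11": (10.956, 2.000, 21.911)}
_, sf_ref, tf_ref = components["c1"]
for name, (v0, sf0, tf0) in components.items():
    assert abs(tf0 - v0 * sf0) < 0.2              # speed = tf / sf
    dx, dy = np.log2(sf0 / sf_ref), np.log2(tf0 / tf_ref)
    angle = np.degrees(np.arctan2(dy, dx))
    print(f"{name}: angle {angle:.1f} deg")       # c2 -> 157.5, c11 -> 23.5
```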

    Table 2

    Stimulus parameters for the pattern MCs (pMCs) or pattern gratings (pDGs)

    Pattern   No. of MCs (P)   Component MCs   Mean v0 (°/s)   Rel. span   Orientation (°)
    a         3                2, 1, 5         24.000           2.00       157.52
    b         3                3, 1, 6         24.000           4.83        67.50
    c         3                4, 1, 7         24.000           6.27        27.86
    d         3                8, 1, 12        24.000           4.00       157.49
    e         3                9, 1, 13        24.000           7.31        75.36
    f         3                11, 1, 15       24.000          10.07        23.48
    g         2                1, 5            20.108           1.00       157.52
    h         2                2, 1            29.761           1.00       157.52
    i         3                10, 1, 14       24.000           7.84        45.00
    • These five stimulus characteristics are: the total number (P) and identities of the individual components (numbered as in Table 1 and Figure 1); the mean speed v0, which is 24°/s for all stimuli except g and h; the relative Euclidean span covered by the stimulus across the frequency space, in log units; and the orientation angle (°) of the stimulus in the frequency space, relative to the horizontal.

    Table 3

    Parameters of the model. Bolded parameters are allowed to vary to match the experimental data

    Symbol    Value                                        Description
    σx        0.5                                          Channel variance over spatial frequency
    σt        0.5                                          Channel variance over temporal frequency
    ρ         0.6                                          Correlation coefficient
    g         199 ± 58.7                                   Channel gain multiplier (unique for each participant)
    wi        From polynomial                              Weight of channel φi
    b0–5      [−23.88, −3.63, 1.95, −0.72, 0.61, −0.45]    Polynomial parameters of channel weights
    α         2.16                                         Interaction weight
    σe1       0.2542                                       Excitation SD over x-axis
    σe2       0.7718                                       Excitation SD over y-axis
    σi1       0.2501                                       Inhibition SD over x-axis
    σi2       0.7711                                       Inhibition SD over y-axis
    θ         3π/4                                         Rotation of the interaction profile
    μprior    2.51 ± 1.35                                  Mean of prior distribution (unique for each participant)
    σprior    0.86 ± 0.36                                  SD of prior distribution (unique for each participant)
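One plausible reading of the interaction parameters above (α, σe1/σe2, σi1/σi2, θ) is a rotated excitation-minus-inhibition kernel acting between channels. The exact functional form used in the paper is not specified in this table, so the following is a hedged sketch rather than the authors' implementation.

```python
import numpy as np

def interaction_kernel(dx, dy, alpha=2.16, theta=3 * np.pi / 4,
                       sigma_e=(0.2542, 0.7718), sigma_i=(0.2501, 0.7711)):
    """Difference of two elongated Gaussians (excitation minus inhibition),
    rotated by theta and scaled by alpha. dx, dy are channel-to-channel
    offsets in log-log Fourier space."""
    u = np.cos(theta) * dx + np.sin(theta) * dy     # offset along the kernel's main axis
    v = -np.sin(theta) * dx + np.cos(theta) * dy    # offset along the orthogonal axis
    excitation = np.exp(-0.5 * ((u / sigma_e[0]) ** 2 + (v / sigma_e[1]) ** 2))
    inhibition = np.exp(-0.5 * ((u / sigma_i[0]) ** 2 + (v / sigma_i[1]) ** 2))
    return alpha * (excitation - inhibition)
```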
Keywords

  • dynamic nonlinearities
  • motion clouds
  • naturalistic stimulation
  • ocular following
  • probabilistic modelling
  • speed estimation

The ideas and opinions expressed in eNeuro do not necessarily reflect those of SfN or the eNeuro Editorial Board. Publication of an advertisement or other product mention in eNeuro should not be construed as an endorsement of the manufacturer’s claims. SfN does not assume any responsibility for any injury and/or damage to persons or property arising from or related to any use of any material contained in eNeuro.