Research Article | New Research, Sensory and Motor Systems

Transition from Target to Gaze Coding in Primate Frontal Eye Field during Memory Delay and Memory–Motor Transformation

Amirsaman Sajad, Morteza Sadeh, Xiaogang Yan, Hongying Wang and John Douglas Crawford
eNeuro 4 April 2016, 3 (2) ENEURO.0040-16.2016; https://doi.org/10.1523/ENEURO.0040-16.2016
Amirsaman Sajad
1Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
2Neuroscience Graduate Diploma Program, York University, Toronto, Ontario M3J 1P3, Canada
3Department of Biology, York University, Toronto, Ontario M3J 1P3, Canada
Morteza Sadeh
1Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
3Department of Biology, York University, Toronto, Ontario M3J 1P3, Canada
4School of Kinesiology and Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
Xiaogang Yan
1Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
Hongying Wang
1Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
John Douglas Crawford
1Centre for Vision Research, York University, Toronto, Ontario M3J 1P3, Canada
2Neuroscience Graduate Diploma Program, York University, Toronto, Ontario M3J 1P3, Canada
3Department of Biology, York University, Toronto, Ontario M3J 1P3, Canada
4School of Kinesiology and Health Sciences, York University, Toronto, Ontario M3J 1P3, Canada
5Department of Psychology, York University, Toronto, Ontario M3J 1P3, Canada

Figures

Figure 1.

    An overview of the experimental paradigm and a conceptual schematic of the possible coding schemes in the FEF. A, Activity was recorded from single neurons in the FEF while monkeys performed a memory-guided gaze task with the head free to move. Monkeys initially fixated a visual stimulus (black dot labeled F) for 400-500 ms. A visual stimulus (black dot labeled T) was then briefly flashed on the screen for 80-100 ms (left). After an instructed delay (variable in duration; 450-850 or 700-1500 ms), the animal made a gaze shift to the remembered location of the target (gray dot labeled T) upon the presentation of the Go-signal. The Go-signal was the disappearance of the initial fixation target (gray dot labeled F). Inaccuracies in behavior were tolerated such that if the final gaze landed within a window around the target, a juice reward was provided. B, Five gaze trajectories to a single target (black circle) within a wide array of targets (5 × 7 for this example session; gray dots) within the approximate RF location of the neuron are shown. Initial fixation positions (tail of the trajectory) were randomly varied within a central zone (large gray circle) on a trial-by-trial basis. Final gaze positions (white circles) fell at variable positions around the target. Variability in initial and final positions (relative to different frames of reference) of target, gaze (i.e., eye in space), eye (in head), and head was used to spatially differentiate sensory and various motor parameters in various frames of reference. We exploited the variability in behavioral errors to differentiate between spatial models based on target position (T) and final gaze position (G). C, Additionally, a continuum of intermediary spatial models spanning T and G were constructed to treat the spatial code as a continuous variable; this allowed us to trace changes in the spatial code as activity evolved from vision to memory delay, during memory delay, and from memory delay to motor. 
D shows plausible schemes for the spatiotemporal evolution of a neuronal code, based on the following proposed theories: (1) the target code could be transformed into a gaze code early on, with this gaze code maintained throughout the memory delay (motor theory; light gray line); (2) the target code could be maintained in memory (sensory theory; black line) and transformed into a gaze code only just before movement initiation; or (3) the spatial code could change gradually from a target code to a gaze code (dark gray line).
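The T–G continuum of intermediary spatial models described in C can be sketched computationally. The construction below is a minimal, hypothetical illustration (not taken from the paper's code): intermediate models are built by linearly interpolating, trial by trial, between the eye-centered target position (T) and the eye-centered final gaze position (G).

```python
import numpy as np

def tg_continuum(target_pos, gaze_pos, n_steps=11):
    """Construct intermediate spatial models between the target model (T)
    and the gaze model (G) by linear interpolation.

    target_pos, gaze_pos : (n_trials, 2) arrays of eye-centered positions.
    Returns a list of n_steps position arrays; index 0 is the pure T model,
    index -1 is the pure G model. Illustrative sketch only -- the paper's
    exact construction of intermediary models may differ.
    """
    target_pos = np.asarray(target_pos, dtype=float)
    gaze_pos = np.asarray(gaze_pos, dtype=float)
    alphas = np.linspace(0.0, 1.0, n_steps)  # 0 = pure T, 1 = pure G
    return [(1 - a) * target_pos + a * gaze_pos for a in alphas]
```

Each candidate model along the continuum is then treated as a possible set of positions over which the neuron's response field is plotted and fit.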

Figure 2.

Approximate location of the FEF and the recorded sites in the two monkeys. A shows the anatomical location of the FEF, at the anterior bank of the arcuate sulcus. B, Sites within the FEF from which neurons were recorded in each animal are plotted (circles) in the coordinates of the recording chamber, with the center (0,0) approximately located at the stereotaxic coordinates corresponding to the FEF (see Materials and Methods). The black semicircle represents the edge of the recording chamber. The color code represents the neuron type recorded from each site. Low-threshold microstimulation at these sites evoked saccades ranging from 2° (at the most lateral sites) to 25° (at the most medial sites) in head-restrained conditions (Bruce and Goldberg, 1985).

Figure 3.

An overview of the analysis methods for identifying the spatial code and sampling neuronal activity from a time-normalized activity profile. A shows an example analysis for identifying the spatial code. A-1, Here, activity from the early visual response (80-180 ms after target onset) was sampled for analysis. A-2 shows the T–G continuum, and three example RF plots are shown for the visual response corresponding to the demarcated models (arrows) along the T–G continuum. T is the eye-centered target model and G is the eye-centered gaze model. In the RF plots, each circle represents firing rate data (diameter) for a single trial plotted over position data corresponding to the tested model (in this study, models spanning the target model, T, and the gaze model, G). The PRESS residuals are shown at the bottom of each RF plot. In each RF plot, the color code (blue–red scale corresponding to low-to-high) represents the nonparametric fit made to all data points. A-3 shows the mean PRESS (y-axis) as a function of tested spatial model along the T–G continuum (x-axis). For this example visual response, the best-fit model or spatial code (lowest PRESS residuals) is the intermediate model one step away from T (toward G). Although A shows the analysis for only a single sampling window, for the main analyses reported in this study we sampled activity at 16 half-overlapping time windows, with the first starting at visual response onset and the last starting at gaze onset. For this, we normalized the time from visual response onset to movement onset so that we could collapse all trials together for analysis. B shows the raster and spike density plots corresponding to the classic visually aligned (B-1) and movement-aligned (B-2) neuronal responses, as well as the time-normalized spike density (B-3), and illustrates activity sampling based on each of these schemes.
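The PRESS-based model comparison in A can be illustrated with a small sketch: for each candidate model along the continuum, fit the response field nonparametrically with each trial left out in turn, and score the model by its mean squared leave-one-out (PRESS) residual. The helper names (`press_residuals`, `best_model`), the Gaussian-kernel fit, and the bandwidth are assumptions for illustration; the paper's exact fitting procedure may differ.

```python
import numpy as np

def press_residuals(positions, rates, bandwidth=3.0):
    """Leave-one-out PRESS residuals for a nonparametric (Gaussian-kernel)
    fit of firing rate over position. Lower mean squared PRESS = better
    spatial model. Sketch under assumptions; not the paper's exact fit.
    """
    positions = np.asarray(positions, dtype=float)  # (n_trials, 2)
    rates = np.asarray(rates, dtype=float)          # (n_trials,)
    n = len(rates)
    resid = np.empty(n)
    for i in range(n):
        d2 = np.sum((positions - positions[i]) ** 2, axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        w[i] = 0.0                                  # leave trial i out
        resid[i] = rates[i] - np.sum(w * rates) / np.sum(w)
    return resid

def best_model(model_positions, rates):
    """Pick the continuum model with the lowest mean squared PRESS residual."""
    scores = [np.mean(press_residuals(p, rates) ** 2) for p in model_positions]
    return int(np.argmin(scores)), scores
```

For example, if firing rates are a smooth function of gaze position while gaze is scattered around the target by behavioral error, the gaze model yields lower PRESS residuals than the target model, and `best_model` selects it.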

Figure 4.

A representative neuron with visual, delay, and movement responses, and results for the overall population. A shows the visually aligned (left) and movement-aligned (right) raster and spike density plots for a VM neuron with sustained delay activity. The visual response of this neuron extends from 65 to 300 ms after target onset, and the movement response begins 30 ms before gaze onset. B shows the time-normalized activity profile corresponding to A, with the period between visual response (VR) onset and gaze movement onset normalized for all trials. C shows the RF maps for four time steps (C1-C4) sampled from the time-normalized activity profile (B, pink shades), with the blue-to-red color gradient representing low-to-high neuronal activity levels. The best-fit model (i.e., spatial code) at each of these time steps is depicted by a red triangle placed on the T–G continuum (panels above the RF plots). For this neuron, there was a progressive but partial shift (3 of 10 steps) in spatial code toward G. D depicts the time-normalized spike density for the entire population (n = 74), including neurons with a visual or movement response, or both. Neurons with movement-related activity beginning at or after gaze onset were excluded. E shows the mean (±SEM) of spatially tuned best-fits at 16 half-overlapping time steps from an early visual period (visual response onset for visually responsive neurons, and 80 ms after target onset if the neuron was not visually responsive) until gaze movement-onset time. The solid line shows the mean of the fits made to individual neuron data, highlighting the change in the population spatial code along the T–G continuum as activity progresses from vision to movement. The histogram in the bottom panel shows the percentage of neurons that exhibited spatial tuning (y-axis) at a given time step (x-axis).

Figure 5.

Single-neuron example and population results for V neurons. A shows the time-normalized spike density profile for an example V neuron (top) and the data points corresponding to the spatially tuned time steps across 16 half-overlapping time steps (bottom). The RF plot corresponding to the highlighted time step (bottom panel, light red circle with green borders; the first time step here) is shown, with the spatial code highlighted above the plot. B shows the population time-normalized post-stimulus time histogram (mean ± SEM) and the mean (±SEM) of the spatially tuned data points at these time steps across the V population. Colored data points (bottom) correspond to time steps at which the population spatial coherence was significantly higher than the pretarget baseline; gray shades correspond to eliminated time steps, with spatial coherence indistinguishable from the pretarget baseline. The histogram shows the percentage of neurons at each time step that exhibited spatial tuning. The baseline firing rate, calculated as the average firing rate in the 100 ms pretarget period, is shown by the solid horizontal lines in the spike density plots (A, B, top). For reference, the approximate visual, delay, and motor epochs are depicted at the top of the panels.

Figure 6.

    Single-neuron example and population results for VM neurons. A, B, Same conventions as in Figure 5. C, The RF plots corresponding to time steps with highlighted data points (circles with a green border) in A (bottom) are shown, with the spatial code along the T–G continuum highlighted above each plot.

Figure 7.

Distribution of best-fit models across the T–G continuum for the VM population at five time steps spanning the visual, delay, and movement responses. A shows the distribution of best-fits for VM neurons for early-visual (1st time step from the time-normalized activity profile), early-delay (4th time step), mid-delay (9th time step), late-delay (13th time step), and perimovement (15th time step) intervals. Only neurons with significant spatial tuning are considered. The number of neurons contributing to each distribution is indicated on each panel (the number in brackets also includes best-fits outside of the presented range). B plots the spatial code (i.e., the value of the fit along the T–G continuum) at each of the delay intervals (y-axis) versus the spatial code at the perimovement period (red dots). Here, only the 21 neurons that contributed to all five panels in A were plotted. Note the trend (from the early- to mid- to late-delay periods) for the data points to migrate toward the line of unity (i.e., toward their movement fits).

Figure 8.

Spatiotemporal progression of the neuronal code in VM neurons with sustained delay activity. A shows the results with time-normalized activity sampling, including the visual and movement responses, using the same conventions as in Figure 5B (bottom). B shows the results for only the delay period, with the visual and movement responses excluded. Specifically, activity was sampled from 12 half-overlapping steps from the end of the visual response (on average, 266 ms after target onset) until the beginning of the movement response (on average, 85 ms before gaze onset). This duration was on average 635 ms. C shows the spatial code at fixed time intervals relative to the following specific task events: target onset (left); the Go-signal (middle); and gaze onset (right). For the target-aligned analysis (C, left), the time from 80 ms after target onset to the earliest Go-signal was divided into eight half-overlapping steps, resulting in a sampling window size fixed for any session but ranging between 80 and 150 ms, depending on whether the earliest Go-signal appeared at 450 or 700 ms relative to target onset for that session. The Go-signal-aligned analysis (C, middle) was performed using 100 ms half-overlapping windows starting at 150 ms before and extending to 150 ms after the Go-signal. The movement-aligned analysis (C, right) was performed using half-overlapping 100 ms sampling windows starting from 150 ms before and extending to 150 ms after gaze onset. Notice that, although there is no change in spatial code triggered by specific task events, there is a progressive change in spatial code from T toward G as we move away from the time of target presentation (left) to the time of gaze onset (right), which is in agreement with the trend seen in A and B.
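The half-overlapping sampling scheme used here (and in the 12- and 16-step analyses) can be sketched as follows. With 50% overlap, the window width w over a span [t_start, t_end] satisfies t_start + (n − 1)·w/2 + w = t_end. The helper name `half_overlapping_windows` is hypothetical, introduced only for illustration.

```python
def half_overlapping_windows(t_start, t_end, n_windows):
    """Return (start, end) tuples for n_windows half-overlapping windows
    tiling [t_start, t_end]: each window starts halfway through the
    previous one. Illustrative helper, not the paper's code.
    """
    span = t_end - t_start
    width = 2.0 * span / (n_windows + 1)  # solves the 50%-overlap equation
    step = width / 2.0
    return [(t_start + i * step, t_start + i * step + width)
            for i in range(n_windows)]
```

For instance, 16 such windows over a normalized interval begin at visual response onset and end exactly at gaze onset, with each consecutive pair sharing half their duration.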

Figure 9.

Single-neuron example and population results for DM neurons. A and B follow the same conventions as in Figure 5. C follows the same conventions as in Figure 6C. Since these neurons lacked a visual response, neuronal activity sampling started 80 ms after target onset.

Figure 10.

Single-neuron example and population results for M neurons. The same conventions as those in Figure 5 are used. Since these neurons lacked a visual response, neuronal activity sampling started 80 ms after target onset.

Figure 11.

Summary of the data for different neuron types and a proposed model of the flow of spatial information within the FEF. A shows the relationship among the spatiotemporal codes of V (green), VM (red), DM (blue), and M (magenta) neurons. Asterisks (*) denote significant differences between neuron subtypes. B shows a schematic of the possible flow of information. Target location information enters the FEF (but may already have undergone some spatial processing in VM neurons). The spatial code is maintained in WM, but monotonically changes toward G due to memory-related (mem) processes. Upon the presentation of the Go-signal, the most recent memory of the target location (i.e., the movement goal) is relayed to the motor (mot) circuitry (composed of M neurons), which in turn encodes the metrics of the imminent gaze shift (G).

Tables

    Table 1.

    Statistical table

    | Row | Analysis | Data structure | Statistical test | Power |
    |---|---|---|---|---|
    | a | Monotonicity test for spatiotemporal code: entire population | y = spatial code, x = time step | Spearman's ρ correlation | r_s = 0.90, p = 2.44 × 10^−6 |
    | b | V population (1st time step) code vs T code | Normality in V code distribution not assumed, n = 10 | One-sample Wilcoxon signed rank test | p > 0.05 |
    | c | VM population (1st time step) code vs T code | Normality in VM code distribution not assumed, n = 41 | One-sample Wilcoxon signed rank test | p = 3.2 × 10^−5 |
    | d | Monotonicity test for spatiotemporal code: VM population | y = spatial code, x = time step | Spearman's ρ correlation | r_s = 0.91, p = 9.08 × 10^−7 |
    | e | VM population (final time step) code vs G code | Normality in VM code distribution not assumed, n = 40 | One-sample Wilcoxon signed rank test | p = 3.51 × 10^−7 |
    | f | Early-delay (time step 4) code vs visual response (time step 1) code | Normality in VM code distribution not assumed, n = 21 | Paired-sample Wilcoxon signed rank test | p = 0.302 |
    | g | Late-delay (time step 13) code vs visual response (time step 1) code | Normality in VM code distribution not assumed, n = 21 | Paired-sample Wilcoxon signed rank test | p = 0.0190 |
    | h | Figure 7B: early-, mid-, and late-delay (time steps 4, 9, 13) code vs movement response (time step 15) code | Normality in VM code distribution not assumed, n = 21 | Bonferroni-corrected Wilcoxon test | p < 0.05 (see Fig. 7B) |
    | i | Monotonicity test for spatiotemporal code: VM neurons with sustained delay | y = spatial code, x = time step | Spearman's ρ correlation | r_s = 0.86, p = 2.40 × 10^−5 |
    | j | Monotonicity test for spatiotemporal code (delay-only period): VM neurons with sustained delay | y = spatial code, x = time step | Spearman's ρ correlation | r_s = 0.76, p = 0.0038 |
    | k | DM population (final time step) code vs T code | Normality in DM code distribution not assumed | One-sample Wilcoxon signed rank test | p = 4.88 × 10^−4 |
    | l | DM population (final time step) code vs G code | Normality in DM code distribution not assumed | One-sample Wilcoxon signed rank test | p = 0.0015 |
    | m | Monotonicity test for spatiotemporal code: DM population | y = spatial code, x = time step | Spearman's ρ correlation | r_s = 0.47, p = 0.20 |
    | n | M population (final time steps) code vs G code | Normality in M code distribution not assumed, n ≤ 10 | One-sample Wilcoxon signed rank test | p > 0.20 |
    | o | DM population vs VM population code | Normality in neither population distribution assumed | Mann–Whitney U test | p > 0.25 for each time step |
    | p | DM population vs VM population spatiotemporal progression | Two slopes obtained from y = spatial code, x = time step | Linear regression comparison | p = 0.87 |
    | q1 | VM population (motor epoch) vs M population (motor epoch) code | Normality in neither population distribution assumed | Bonferroni-corrected Mann–Whitney U test | p = 6.16 × 10^−5 |
    | q2 | DM population (motor epoch) vs M population (motor epoch) code | Normality in neither population distribution assumed | Bonferroni-corrected Mann–Whitney U test | p = 3.49 × 10^−5 |
    | r | VM population (15th time step) code vs M neurons (15th time step), only neurons with preference for G-like codes | Normality in neither population distribution assumed | Mann–Whitney U test | p = 0.0127 |
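The monotonicity tests in rows a, d, i, j, and m use Spearman's rank correlation between spatial code and time step. As an illustration, here is a minimal reimplementation of the statistic, assuming no tied values; in practice one would use a library routine such as `scipy.stats.spearmanr`, which also returns the p-value.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation coefficient (assumes no ties):
    Pearson correlation computed on the ranks of x and y.
    Illustrative reimplementation, not the analysis code of the study.
    """
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.sum(rx * ry) / np.sqrt(np.sum(rx ** 2) * np.sum(ry ** 2)))
```

A spatial code that increases monotonically across the 16 time steps yields ρ near +1, as in rows a and d of the table.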

Keywords

  • monkey
  • prefrontal
  • response field
  • sensorimotor
  • single-unit
  • spatial working memory


Copyright © 2025 by the Society for Neuroscience.
eNeuro eISSN: 2373-2822
