Research Article: New Research, Cognition and Behavior

Decoding Visual Spatial Attention Control

Sreenivasan Meyyappan, Abhijit Rajan, Qiang Yang, George R. Mangun and Mingzhou Ding
eNeuro 13 February 2025, 12 (3) ENEURO.0512-24.2025; https://doi.org/10.1523/ENEURO.0512-24.2025
Author affiliations: Sreenivasan Meyyappan (1, 2), Abhijit Rajan (2), Qiang Yang (2), George R. Mangun (1, 3), and Mingzhou Ding (2)

1. Center for Mind and Brain, University of California, Davis, California 95618
2. J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611
3. Departments of Psychology and Neurology, University of California, Davis, California 95616

Figures

Figure 1.

Experimental paradigm. Each trial started with one of three visual cues (200 ms). Two of the cues instructed subjects to covertly attend either the left (A) or the right (B) hemifield, indicated by the two white dots. A third cue, the choice cue (C), prompted participants to choose which visual field (left or right) to attend. Following a variable cue-to-target delay (2,000–8,000 ms), a target in the form of a grating patch appeared for 100 ms with equal probability in the left or the right hemifield. Participants were asked to discriminate the spatial frequency of the grating displayed in the cued or chosen location and to ignore the uncued or unchosen side. After a second variable ISI (2,000–8,000 ms), participants were presented with a “?SIDE?” screen and asked to report the hemifield they had attended on that trial. An intertrial interval (ITI), varying randomly from 2,000 to 8,000 ms, elapsed before the start of the next trial.
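For readers implementing a comparable paradigm, the trial structure above can be summarized programmatically. The sketch below is purely illustrative: the event names are ours, and the uniform-jitter assumption is a guess, since the caption says only that the intervals "varied randomly" between 2,000 and 8,000 ms.

```python
import random

# Timing constants taken from the Figure 1 caption (ms).
CUE_DURATION = 200
TARGET_DURATION = 100
JITTER_RANGE = (2000, 8000)  # cue-target delay, post-target ISI, and ITI

def make_trial(cue_type):
    """Build one trial's event list; cue_type is 'left', 'right', or 'choice'.

    Assumes the variable intervals are drawn uniformly from JITTER_RANGE;
    the caption does not specify the distribution.
    """
    jitter = lambda: random.uniform(*JITTER_RANGE)
    return [
        ("cue_" + cue_type, CUE_DURATION),
        ("cue_target_delay", jitter()),
        ("target_grating", TARGET_DURATION),  # left or right hemifield, p = 0.5
        ("post_target_isi", jitter()),
        ("side_report", None),                # '?SIDE?' prompt, response-terminated
        ("iti", jitter()),
    ]
```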

Figure 2.

Behavioral results from instructed and choice trials. A, C, Comparison of accuracy (ratio of correct trials to total valid trials) across attention conditions for the UF (A) and UCD (C) datasets. B, D, Comparison of reaction time (RT) across attention conditions for the UF (B) and UCD (D) datasets.

Figure 3.

Whole-brain GLM analysis of cue-evoked BOLD activity. Attention cues (left and right combined) evoked significant activation in the DAN (FEF, IPS/SPL) and the extrastriate visual cortex (p < 0.05, FDR corrected) for both the UF (A, B) and UCD (C, D) datasets. However, contrasting the attention-directing cues (cue-left vs cue-right) revealed no activation when thresholded at p < 0.05 with FDR correction (data not shown). Higher-order visual regions began to appear only after lowering the threshold to p < 0.01 uncorrected: cue-left > cue-right (E, F) and cue-right > cue-left (G, H).
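The caption reports voxelwise thresholds with and without FDR correction. As a point of reference for how such a threshold is typically applied, here is a minimal Benjamini–Hochberg sketch; the authors' actual correction procedure (and software implementation) is not specified in the caption, so this is a generic illustration rather than their method.

```python
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Benjamini-Hochberg FDR: return a boolean mask of significant tests.

    A generic sketch of the standard step-up procedure, not the authors' code.
    """
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    # Find the largest k with p_(k) <= (k/m) * q; all ranks up to k survive.
    thresh = q * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresh
    mask = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])
        mask[order[: k + 1]] = True
    return mask
```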

Figure 4.

ROI-level analysis of cue-evoked BOLD activity. BOLD activity differences between attention to the contralateral and ipsilateral visual fields were computed and averaged across hemispheres for instructed (A–D) and choice trials (E–H) for both datasets: UF (A, E) and UCD (B, F). C, G, Activation differences from the UF and UCD datasets were averaged for instructed (C) and choice (G) trials, with the respective p values obtained via meta-analysis. D, H, Correlation between behavior and activation differences across ROIs for instructed (D) and choice (H) trials. Here the correlation coefficients are averaged across the datasets, and the p values from the two datasets were combined using meta-analysis. The error bars denote the standard error of the mean (SEM). *p < 0.05 FDR.
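The caption says the two datasets' p values were combined by meta-analysis without naming the method. One common choice is Fisher's combined probability test, sketched below with SciPy; whether the authors used Fisher's method, Stouffer's method, or another scheme cannot be determined from the caption, so treat the choice here as an assumption.

```python
from scipy.stats import combine_pvalues

# Hypothetical per-ROI p values from the two datasets (illustrative numbers only).
p_uf, p_ucd = 0.04, 0.09

# Fisher's method: -2 * sum(log p) follows a chi-square distribution
# with 2k degrees of freedom under the joint null.
stat, p_combined = combine_pvalues([p_uf, p_ucd], method="fisher")
print(f"combined p = {p_combined:.4f}")
```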

Figure 5.

    MVPA of cue-left versus cue-right: instructed trials. A, B, Decoding accuracies in different visual ROIs for UF dataset (A) and UCD dataset (B). C, D, Posterior view of color-coded decoding accuracies for UF dataset (C) and UCD dataset (D). E, Scatterplot comparing UCD versus UF decoding accuracies where each dot represents a visual ROI. The error bars denote the SEM. *p < 0.05 FDR.
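For orientation, an MVPA analysis of this kind trains a classifier on trial-wise voxel patterns of cue-evoked activity within an ROI and estimates accuracy by cross-validation. The sketch below is a minimal version assuming a linear SVM with stratified k-fold cross-validation; the authors' actual classifier, feature preparation, and cross-validation scheme are not given in the caption, so every choice here is an assumption.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_roi(patterns, labels, n_folds=5):
    """Decode cue-left vs cue-right from single-trial voxel patterns in one ROI.

    patterns: (n_trials, n_voxels) array of cue-evoked activity estimates.
    labels:   (n_trials,) array, e.g., 0 = cue-left, 1 = cue-right.
    Returns mean cross-validated accuracy (chance = 0.5).
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    return cross_val_score(clf, patterns, labels, cv=cv).mean()

# Illustrative call on random data (no real structure, so accuracy ~ 0.5):
rng = np.random.default_rng(0)
acc = decode_roi(rng.standard_normal((80, 200)), rng.integers(0, 2, 80))
```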

Figure 6.

Decoding accuracy versus behavior: instructed trials. A, B, The percentage of variance explained by the principal components of decoding accuracies across ROIs for the UF (A) and UCD (B) datasets. C, D, Decoding accuracy, represented by the score on the first principal component, versus behavioral efficiency (z score) for the UF (C) and UCD (D) datasets. E, Correlation between decoding accuracy and behavior across ROIs. Here the correlation coefficients are averaged across the two datasets, and the p values from the two datasets were combined using meta-analysis; the correlation is significant for all ROIs at p < 0.05 FDR.
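Panels A–D summarize ROI decoding accuracies with PCA and relate the first-component score to behavior. Below is a minimal sketch of that analysis, assuming a subjects-by-ROIs accuracy matrix and a per-subject behavioral efficiency z score as inputs; both inputs and the exact correlation statistic are assumptions on our part.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA

def pc1_vs_behavior(acc_matrix, behavior_z):
    """Correlate the first PC of ROI decoding accuracies with behavior.

    acc_matrix: (n_subjects, n_rois) decoding accuracies.
    behavior_z: (n_subjects,) behavioral efficiency z scores.
    """
    pca = PCA()
    scores = pca.fit_transform(acc_matrix)         # subjects projected onto PCs
    var_explained = pca.explained_variance_ratio_  # cf. panels A and B
    r, p = pearsonr(scores[:, 0], behavior_z)      # cf. panels C and D
    return var_explained, r, p
```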

Figure 7.

Decoding accuracy for attend-left versus attend-right: choice trials. A, Decoding accuracies averaged across the UF and UCD datasets for different visual ROIs; p values were determined via meta-analysis. B, Scatterplot comparing UCD versus UF decoding accuracies, where each dot represents a visual ROI. C, D, Decoding accuracy versus behavioral efficiency for the UF (C) and UCD (D) datasets. E, Correlation between decoding accuracy and behavior across visuotopic ROIs. The error bars denote the SEM. *p < 0.05 FDR.

Figure 8.

    Comparison between choice trials and instructed trials (UF and UCD datasets combined). A, Choice-left versus choice-right decoding accuracies plotted against cue-left versus cue-right decoding accuracies; each dot represents a visual ROI. B, Cross-decoding accuracy comparison across ROIs where classifiers were trained on instructed trials and tested on choice trials. C, Difference between cross-decoding accuracy (classifiers trained on instructed trials and tested on choice trials) and self-decoding accuracy (classifiers trained on choice trials and tested on choice trials) for different ROIs. The error bars denote the SEM. *p < 0.05 FDR.
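Panels B and C reflect a cross-decoding analysis: classifiers trained on instructed trials and tested on choice trials, compared against self-decoding within choice trials. Assuming trial-wise patterns and labels for both trial types (and, as above, a linear SVM that the caption does not actually specify), a sketch of panel C's comparison could look like this:

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_vs_self(X_instructed, y_instructed, X_choice, y_choice):
    """Compare cross-decoding with self-decoding in one ROI.

    Cross: train on instructed trials, test on choice trials.
    Self:  train and test on choice trials (cross-validated; the
           estimator is cloned internally, so the earlier fit is unused).
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    cross_acc = clf.fit(X_instructed, y_instructed).score(X_choice, y_choice)
    self_acc = cross_val_score(clf, X_choice, y_choice, cv=5).mean()
    return cross_acc - self_acc  # cf. Figure 8C
```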


Keywords

  • fMRI
  • MVPA
  • MVPA-behavior
  • spatial attention
  • top–down control
  • visual hierarchy


Copyright © 2025 by the Society for Neuroscience.
eNeuro eISSN: 2373-2822
