Research Article: New Research, Sensory and Motor Systems

A Standardized Nonvisual Behavioral Event Is Broadcasted Homogeneously across Cortical Visual Areas without Modulating Visual Responses

Mahdi Ramadan, Eric Kenji Lee, Saskia de Vries, Shiella Caldejon, India Kato, Kate Roll, Fiona Griffin, Thuyanh V. Nguyen, Josh Larkin, Paul Rhoads, Kyla Mace, Ali Kriedberg, Robert Howard, Nathan Berbesque and Jérôme Lecoq
eNeuro 7 September 2022, 9 (5) ENEURO.0491-21.2022; https://doi.org/10.1523/ENEURO.0491-21.2022
Author affiliations:
1. Allen Institute for Brain Science, Seattle, WA 98103
2. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139
3. Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215

Mahdi Ramadan (1,2); Eric Kenji Lee (1,3); Saskia de Vries, Shiella Caldejon, India Kato, Kate Roll, Fiona Griffin, Thuyanh V. Nguyen, Josh Larkin, Paul Rhoads, Kyla Mace, Ali Kriedberg, Robert Howard, Nathan Berbesque, and Jérôme Lecoq (1)

Figures

Figure 1.

Detection of behavioral fidget during two-photon imaging in freely viewing mice. a, Computer design of our apparatus for monitoring the behavior of mice during head fixation and two-photon imaging. b, Example two-photon imaging field of view (400 × 400 μm) showcasing neurons labeled with GCaMP6f. c, Example video frame of a mouse captured by the body camera at 30 Hz. d, A pair of video frames showing the progression of a prototypical fidget behavior in time. i, First, the mouse is stationary. ii, Then, during the initiation of the startle response, the mouse stereotypically pushes its body up using its bottom paws while arching the back and contracting the abdomen. e, Computational strategy for detecting fidgets automatically. f, Cumulative probability of labeled fidget movement magnitudes, showcasing the consistency of the stereotyped behavior across experiments and mice. Fidget magnitude is calculated as the sum of 2D optical flow vectors throughout the fidget duration (n = 20 sessions; for optical flow calculation, see Materials and Methods).
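The fidget-magnitude metric in panel f (sum of 2D optical-flow vectors over the fidget duration) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `fidget_magnitude` helper and the array layout are assumptions, and the flow fields are taken as precomputed (the paper computes them from body-camera video; see Materials and Methods).

```python
import numpy as np

def fidget_magnitude(flow_fields):
    """Sum of 2D optical-flow vector magnitudes over the fidget duration.

    flow_fields: array of shape (n_frames, H, W, 2) holding the (dx, dy)
    optical-flow field for each video frame in the fidget window.
    """
    # Per-pixel vector magnitude, summed over pixels and frames.
    mags = np.linalg.norm(flow_fields, axis=-1)  # (n_frames, H, W)
    return float(mags.sum())

# Toy check: two frames of uniform (3, 4) flow on a 2x2 pixel grid.
flow = np.zeros((2, 2, 2, 2))
flow[..., 0] = 3.0  # dx component
flow[..., 1] = 4.0  # dy component
print(fidget_magnitude(flow))  # each pixel has magnitude 5, 8 pixels total -> 40.0
```

Summing magnitudes this way yields one scalar per fidget event, which is what the cumulative probability curve in panel f is built from.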

Figure 2.

Fidget rate correlates with visual stimulus type but is independent of mouse driver line and session number. a, Left, Standardized experimental design of sensory visual stimuli. Six blocks of different stimuli were presented to mice, distributed into three separate protocols identified as session A, session B, and session C. Right, Normalized fidget rate (black dots) for all three session types across all mice. Gray bars indicate 95% confidence intervals of the mean. The color-coded stimulus protocol, with durations indicated in minutes, is aligned with the time axis for all three session types. N indicates the number of sessions. b, Average normalized fidget rate across all mice during the presentation of drifting grating visual stimuli compared with all other stimuli (p = 0.023). N indicates the number of sessions. c, Comparison of the fidget rate ratio between different session types across Cre-lines. No significant effect was found (ANOVA, p = 0.14). N indicates the number of sessions. d, Average percentage of video frames labeled as fidget versus the number of experimental sessions a mouse had been exposed to; no significant learning effect was found (ANOVA, p = 0.21). N indicates the number of sessions. Mice displayed high variance in the frequency of behaviors across sessions, but this variance was not explained by mouse Cre-line (Extended Data Fig. 2-1; ANOVA, p = 0.14, n = 144). Error bars in b-d are standard deviations. The asterisk in b indicates statistical significance.
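The comparisons in panels b-d use ANOVA on per-session fidget rates. A minimal sketch of that kind of test with `scipy.stats.f_oneway` is below; the rate values are invented for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical per-session normalized fidget rates (invented numbers):
# an elevated rate during drifting gratings versus all other stimuli,
# loosely mirroring the contrast tested in panel b.
gratings = rng.normal(1.5, 0.2, 12)
other = rng.normal(1.0, 0.2, 12)

# One-way ANOVA; with two groups this is equivalent to a two-sample t-test.
f_stat, p = stats.f_oneway(gratings, other)
print(p < 0.05)  # a clear rate difference comes out significant
```

The same call extends to more than two groups, which is how a comparison across session types or Cre-lines (panels c, d) would be set up.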

Figure 3.

Neuronal fidget response types are distributed equally across visual cortical layers and areas. a, Example two-photon imaging field of view (Cux2, 400 × 400 μm) showcasing all neurons recorded in one session. Four unique neuronal response types are displayed by the four neurons identified by colored circles. Scale bar: 100 μm. b, Trial-averaged z-scored activity of all neurons from one experiment, aligned to the time of fidget initiation (0 s). c, Activity profiles of exemplary neurons showcasing the four types of neuronal responses identified using clustering (see Materials and Methods). Top left, in gray, Neutral. Top right, in green, Active. Bottom left, in red, Phasic. Bottom right, in blue, Depressed. Colored points indicate the trial-averaged z-scored activity; shaded regions indicate 1 SD across trials for each time point. d, All layer 2/3 neurons from all experiments clustered into the four neuronal response types outlined by color-coded borders (neutral, gray; depressed, blue; phasic, red; active, green). e, Decoding of neural response type using the UMAP feature vector across cortical depths (see Materials and Methods). Results suggest that the neural response types cannot be differentiated across cortical depth (black bars, F1 score based on neural data; orange dashed line, mean F1 score based on shuffled neural data). f, Decoding of neural response type using the UMAP feature vector across cortical areas. Results suggest that the neural response types cannot be differentiated across cortical area (black bars, F1 score based on neural data; orange dashed line, F1 score based on shuffled neural data). g, Percentage distribution of neuronal response types per cortical layer for all cortical areas. N indicates the number of neurons. h, Percentage distribution of neuronal response types per cortical area for all cortical layers. N indicates the number of neurons.
Fidgets did not induce additional translational motion of the cortex compared with when the mouse was at rest (Extended Data Fig. 3-1; n = 20 sessions). The equal distribution of clustered neuronal response types across area and layer, and the clustering homogeneity, are robust to threshold criteria (Extended Data Fig. 3-2). UMAP embeddings of postfidget single-neuron responses by area, layer, and mouse Cre-line (Extended Data Fig. 3-3). The lack of neuronal activity differences between Cre-line, area, and depth is not because of the method of dimensionality reduction or hyperparameter choice (Extended Data Fig. 3-4).
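The decoding logic in panels e-f (classify response type from embedded features, then compare the F1 score against a shuffled-label baseline) can be sketched in numpy alone. This is a hedged illustration: it substitutes a nearest-centroid rule for the paper's gradient boosted classifier, hand-rolls macro-F1, and uses synthetic 2D "embedding" coordinates rather than real UMAP output.

```python
import numpy as np

rng = np.random.default_rng(0)

def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    scores = []
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return float(np.mean(scores))

def nearest_centroid_f1(X, y, labels):
    """Decode labels with a nearest-centroid rule (illustrative stand-in)."""
    centroids = np.stack([X[y == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    y_pred = np.asarray(labels)[dists.argmin(axis=1)]
    return macro_f1(y, y_pred, labels)

# Synthetic 2D "embedding": two well-separated response-type clusters.
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
               rng.normal(3.0, 0.1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

real_f1 = nearest_centroid_f1(X, y, [0, 1])
shuffled_f1 = nearest_centroid_f1(X, rng.permutation(y), [0, 1])
print(real_f1 > shuffled_f1)  # decodable structure beats the shuffled baseline
```

In the paper the reverse comparison carries the result: when decoding area or depth (rather than response type), the real-label F1 stays at the shuffled baseline, which is the evidence that the fidget signal is broadcast homogeneously.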

Figure 4.

Contrary to running responses, neuronal fidget responses do not modulate visually evoked activity. a, Example visually evoked responses of a cell to drifting gratings during running, fidget, and resting conditions; shaded error bars indicate ±SEM. b, Normalized histograms of the direction selectivity of the four cell types during the drifting gratings stimulus. c, Same as b but for the cells' preferred drifting grating orientation. d, Same as b but for the cells' preferred temporal frequency of drifting gratings. e, Mean trial ΔF/F during fidget plotted against nonfidget behavior for each cell at its preferred direction and temporal frequency of the drifting grating stimulus. The dotted line indicates unity. This metric is plotted separately for all four cell types. f, Same as e but for running versus stationary behaviors. g, Cohen's d for fidget plotted against Cohen's d for running behavior for each cell, across all four cell types.
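The effect-size metric in panel g is Cohen's d, comparing each cell's responses in the two behavioral conditions. A minimal sketch using the standard pooled-standard-deviation formula is below; the `cohens_d` helper and the toy ΔF/F numbers are illustrative, not the authors' code or data.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation (Bessel-corrected variances)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = a.size, b.size
    pooled_var = ((na - 1) * a.var(ddof=1)
                  + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Toy example: per-trial dF/F during fidget vs rest (invented numbers).
d = cohens_d([2.0, 4.0], [0.0, 2.0])
print(round(d, 4))  # 1.4142
```

Computing this once for fidget-versus-nonfidget and once for running-versus-stationary gives each cell a point in the scatter of panel g.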

Movies

Movie 1.

Example live calcium imaging of neuronal activity during fidget (fidget initiation indicated). The size of each calcium imaging frame is 400 × 400 μm. Videos are trial-averaged calcium responses during fidget (−3 s to +6 s relative to fidget initiation) across an imaging session. Video frames were converted to grayscale, played at 2× speed, and compressed as an .mp4 file.

Extended Data

  • Extended Data Figure 2-1

    Fidget rate frequency across sessions. a, Left, Bar plot visualizing the number of sessions (total of 144) with their percentage of video frames classified as one of three behaviors (movement, resting, or fidget). Download Figure 2-1, TIF file.

  • Extended Data Figure 3-1

    Fidgets do not induce translational movement of the cortex. Violin plots of the Euclidean distance of 2P image motion in the x and y dimensions (translational movements) during mouse fidget versus mouse resting state (no movement). Download Figure 3-1, TIF file.

  • Extended Data Figure 3-2

Equal distribution of clustered neuronal response types across area and layer is robust to threshold criteria. Distributions of clustered neuronal responses after applying three different threshold criteria to neural data. The distributions are conditioned on area or layer, and neural responses that did not meet the threshold criteria are labeled in green as criteria neutral. Depending on the strictness of the criteria, the percentage of neutral neurons changes, but the distribution of neural response types conditioned on area or layer remains fairly consistent. Download Figure 3-2, TIF file.

  • Extended Data Figure 3-3

    UMAP embeddings of postfidget single neuron responses by area, layer, and mouse Cre-line. Visualization of the projection of postfidget neural responses from averaged single neurons into a 2D-embedded space identified by unsupervised nonlinear dimensionality reduction (UMAP), labeled by area (top subplot), layer (middle subplot), and mouse Cre-line (bottom subplot). Download Figure 3-3, TIF file.

  • Extended Data Figure 3-4

Lack of neuronal activity differences between Cre-line, area, and depth is not due to the method of dimensionality reduction or hyperparameter choice. A gradient boosted decision tree classifier with five-fold cross-validation (as in Fig. 3e,f) was trained to differentiate cells according to Cre-line, visual area, and laminar depth based on their dimensionality-reduced, fidget-onset-aligned neuronal calcium imaging traces. The dimensionality-reduced traces were compared using four methods: PCA using the first 10 principal components, UMAP in three dimensions (3D UMAP), UMAP with n_neighbors = 5, and UMAP with n_neighbors = 200. Chance performance is shown as a dashed horizontal line and was obtained by training a classifier on randomly permuted labels. a, F1 scores for a single classifier trained to distinguish whether the neuronal activity of a single cell came from one of four Cre-lines: Cux2, Rbp4, Rorb, and Scnn1a. b, The same procedure as in a but for another single classifier trained to distinguish single neurons by the visual area (VISp, VISpm, VISal, and VISl) from which they were collected. c, The same procedure as in a, b but for a classifier trained on the laminar depth of a neuron. Download Figure 3-4, TIF file.
