
Research Article | New Research | Sensory and Motor Systems

Organization of Neural Population Code in Mouse Visual System

Kathleen Esfahany, Isabel Siergiej, Yuan Zhao and Il Memming Park
eNeuro 6 July 2018, 5 (4) ENEURO.0414-17.2018; DOI: https://doi.org/10.1523/ENEURO.0414-17.2018
Kathleen Esfahany (1, 3), Isabel Siergiej (2, 3, 4), Yuan Zhao (3), Il Memming Park (3, 4)

1. Ward Melville High School, East Setauket, NY
2. Department of Computer Science, Cornell University, Ithaca, NY
3. Department of Neurobiology and Behavior, Stony Brook University, Stony Brook, NY
4. Institute for Advanced Computational Science, Stony Brook University, Stony Brook, NY

Figures & Data

Figures

  • Figure 1.

    Overview of the population decoding analysis. We analyzed the neural code for either one of six visual stimulus classes (A) or one of eight directions of a drifting grating stimulus (B), carried by excitatory neurons in the mouse visual system (C). A specific subpopulation (visual area, cell type, depth) was targeted and observed while the mice viewed the visual stimuli. From the normalized fluorescence signals of the subpopulation (D), we decoded the identity of the stimulus class (E). Successful decoding provides evidence for an instantaneous representation of the spatiotemporal signatures of stimuli within the population activity.

  • Figure 2.

    Population decoding performance by visual area for six stimulus classes. A, Stimulus decoding accuracy for individual randomly subsampled populations of 1, 2, 4, 8, 16, 32, 64, and 128 neurons (black dots, jittered for visual clarity) with curve fits (solid lines). The number of populations per area (n) is listed in each title. In all visual areas, the majority of small populations (one to four neurons) outperformed chance level (gray line at 16.67% accuracy); however, small-population performance in VISrl was concentrated closer to chance level than in all other areas. Individual populations of 128 neurons achieved near-perfect accuracy in all visual areas except VISrl. B, Population-averaged accuracy by visual area (solid lines) with standard error (shaded regions). Line colors correspond to the visual areas in A. C, Statistically significant (p < 0.05) pairwise comparisons of decoding accuracy at 128 neurons between the six visual areas using Tukey’s test. VISrl underperforms all other visual areas.

  • Figure 3.

    Stimulus-specific population decoding. A, Stimulus-specific decoding accuracy averaged by visual area (conventions similar to Figure 2). Line colors correspond to the stimuli listed in the legend in C. High-performing areas with similar overall decoding accuracy show differential accuracy in predicting specific stimuli. B, Stimulus-specific decoding accuracy averaged across all populations. Decoding accuracy differs across stimulus classes, with some classes harder to decode than others. C, Statistical significance map (same conventions as Fig. 2C). The natural movies are significantly harder to decode than all other stimuli, and the locally sparse noise is significantly harder to decode than all stimuli except the natural movies.

  • Figure 4.

    Population decoding of directions in the drifting grating epoch. A, Direction decoding accuracy (same conventions as Fig. 2). Note that in VISrl, small populations (one to two neurons) performed closer to chance level (gray line at 12.5% accuracy) than equally sized populations in other areas. B, Population-averaged accuracy by visual area (solid lines) with SE (shaded regions). Line colors correspond to the visual areas in A. C, Statistical significance map (same conventions as Fig. 2C). The three high-performing areas (VISp, VISal, VISl), which show similar performance, are anatomically adjacent. Similarly, two of the three low-performing areas (VISpm and VISam) are anatomically adjacent. D, E, Mean orientation (D) and direction (E) selectivity index (with SEM) per area (see Materials and Methods).

  • Figure 5.

    Neural populations synergistically encode directional information. The accuracy of a correlation-blind decoder (gray bars; independent decoder) is compared to that of the joint decoder (same values as in Fig. 4B) for populations of 128 neurons. Statistical significance was assessed by paired t test.

  • Figure 6.

    Population decoding performance by recording depth for six stimulus classes (same conventions as Fig. 2). On average, small populations (one to two neurons) performed above chance level (gray line at 16.67% accuracy). The 325- to 350-µm group significantly underperforms the two shallower groups (175 and 265–300 µm).

  • Figure 7.

    Population decoding performance by imaging depth for eight drifting grating directions (same conventions as Fig. 4). On average, small populations (one to two neurons) in the three high-performing depth groups (175, 265–300, and 365–435 µm) outperformed chance level (gray line at 12.5% accuracy), while small populations in the low-performing 325- to 350-µm group performed at chance level.

  • Figure 8.

    Comparison of linear SVM and MLR classification results. Each point represents the SVM and MLR classification accuracy for a subsampled population of 1, 2, 4, 8, 16, 32, 64, or 128 neurons. Both classifiers achieved similar results.
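As a rough illustration of the SVM-versus-MLR comparison in this figure, the sketch below trains both linear classifiers on synthetic "population responses" and scores them by cross-validation. The data, population size, and class structure here are illustrative stand-ins, not the recordings analyzed in the paper:

```python
# Minimal sketch: compare a linear SVM and multinomial logistic regression (MLR)
# at decoding a 6-way stimulus class from a synthetic 32-neuron population.
# All numbers below are assumptions for illustration only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, n_trials, n_neurons = 6, 60, 32

# Each stimulus class shifts the population mean; trials add Gaussian noise.
means = rng.normal(0.0, 1.0, size=(n_classes, n_neurons))
X = np.vstack([means[c] + rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_trials)

svm = LinearSVC()
mlr = LogisticRegression(max_iter=1000)  # softmax over the 6 classes

svm_acc = cross_val_score(svm, X, y, cv=5).mean()
mlr_acc = cross_val_score(mlr, X, y, cv=5).mean()
print(f"SVM accuracy: {svm_acc:.2f}, MLR accuracy: {mlr_acc:.2f}")
```

With well-separated class means both decoders land far above the 16.67% chance level and close to each other, mirroring the figure's conclusion that the two classifiers behave similarly.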

  • Figure 9.

    Stimulus-specific decoding performance by imaging depth group. The highest performing depth (175 μm) and a lower performing group (365–435 μm) show uniform accuracy in decoding all six stimuli. In the 265- to 300- and 325- to 350-μm groups, natural movies are significantly harder to decode than other stimuli.

Tables

    Table 1.

    Mean population size with SD by visual area for the stimulus classification

    Area                  VISal  VISam  VISl   VISp   VISpm  VISrl
    Mean population size  65.16  38.12  69.71  82.91  42.42  92.00
    SD                    40.89  18.53  43.45  48.96  27.08  37.64
    • Populations are composed of neurons common across the three imaging sessions A, B, and C.

    Table 2.

    Mean population size with SD by imaging depth group for the stimulus classification

    Imaging depth (µm)    175    265–300  325–350  365–435
    Mean population size  70.82  84.96    46.82    38.50
    SD                    33.74  50.14    34.20    22.62
    • Populations are composed of neurons common across the three imaging sessions A, B, and C.

    Table 3.

    Mean population size with SD by visual area for the direction classification

    Area                  VISal   VISam  VISl    VISp    VISpm  VISrl
    Mean population size  139.13  89.12  143.97  169.87  99.87  178.13
    SD                    79.40   56.77  85.16   84.83   62.29  68.83
    • Population sizes are larger for the direction classification than the stimulus classification because the population includes all neurons imaged in session A.

    Table 4.

    Mean population size with SD by imaging depth group for the direction classification

    Imaging depth (µm)    175     265–300  325–350  365–435
    Mean population size  153.49  178.15   93.27    80.13
    SD                    56.76   88.79    66.22    52.75
    • Population sizes are larger for the direction classification than the stimulus classification because the population includes all neurons imaged in session A.

    Table 5.

    Cre line populations in each visual area

    Area            VISal  VISam  VISl  VISp  VISpm  VISrl
    Cux2-CreERT2    11     5      13    16    13     3
    Emx1-IRES-Cre   7      2      6     7     4      6
    Nr5a1-Cre       4      1      5     9     6      4
    Rbp4-Cre_KL100  3      4      5     5     5      1
    Rorb-IRES2-Cre  6      5      6     8     5      2
    Scnn1a-Tg3-Cre  0      2      0     9     0      0
    Table 6.

    Cre line populations in each depth group

    Imaging depth (µm)  175  265–300  325–350  365–435
    Cux2-CreERT2        34   27       0        0
    Emx1-IRES-Cre       15   10       0        7
    Nr5a1-Cre           0    3        26       0
    Rbp4-Cre_KL100      0    0        0        23
    Rorb-IRES2-Cre      0    32       0        0
    Scnn1a-Tg3-Cre      0    2        7        0
Keywords

  • decoding
  • population
  • visual cortex

