Research Article: Methods/New Tools, Novel Tools and Methods

Machine Learning for Neural Decoding

Joshua I. Glaser, Ari S. Benjamin, Raeed H. Chowdhury, Matthew G. Perich, Lee E. Miller and Konrad P. Kording
eNeuro 31 July 2020, 7 (4) ENEURO.0506-19.2020; DOI: https://doi.org/10.1523/ENEURO.0506-19.2020
Joshua I. Glaser
1 Interdepartmental Neuroscience Program, Northwestern University, Chicago, Illinois 60611
2 Department of Physical Medicine & Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611
3 Shirley Ryan AbilityLab, Chicago, Illinois 60611
7 Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104
9 Department of Statistics, Columbia University, New York, New York 10027
10 Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York 10027

Ari S. Benjamin
7 Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104

Raeed H. Chowdhury
4 Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611
5 Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, Illinois 60208

Matthew G. Perich
4 Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611
5 Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, Illinois 60208

Lee E. Miller
2 Department of Physical Medicine & Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611
3 Shirley Ryan AbilityLab, Chicago, Illinois 60611
4 Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611
5 Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, Illinois 60208

Konrad P. Kording
2 Department of Physical Medicine & Rehabilitation, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611
3 Shirley Ryan AbilityLab, Chicago, Illinois 60611
4 Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611
5 Department of Biomedical Engineering, McCormick School of Engineering, Northwestern University, Evanston, Illinois 60208
6 Department of Engineering Sciences & Applied Mathematics, McCormick School of Engineering, Northwestern University, Evanston, Illinois 60208
7 Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, Pennsylvania 19104
8 Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104
Figures & Data

Figures

Figure 1.

    Decoding schematic. a, To decode (predict) the output in a given time bin, we used the firing rates of all N neurons in B time bins. In this schematic, N = 4 and B = 4 (3 bins preceding the output and 1 concurrent bin). As an example, preceding bins of neural activity could be useful for predicting upcoming movements, and following bins of neural activity could be useful for predicting preceding sensory information. Here, we show a single output being predicted. b, For nonrecurrent decoders (Wiener filter, Wiener Cascade, Support Vector Regression, XGBoost, and Feedforward Neural Network in our subsequent demonstration), this is a standard machine learning regression problem where N × B features (the firing rates of each neuron in each relevant time bin) are used to predict the output. c, To predict outputs with recurrent decoders (simple recurrent neural network, GRUs, LSTMs in our subsequent demonstration), we used N features, with temporal connections across B bins. A schematic of a recurrent neural network predicting a single output is on the right. Note that an alternative way to view this model is that the hidden state feeds back on itself (across time points).
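
The two panels b and c correspond to two input array shapes. As a rough, hypothetical illustration (not the authors' released code package), the sketch below arranges binned firing rates into a (samples, B, N) array for recurrent decoders and a flattened N × B feature matrix for nonrecurrent decoders; the function name, arguments, and toy data are illustrative only.

```python
import numpy as np

def make_decoder_inputs(rates, bins_before=3, bins_current=1):
    """rates: array of shape (T, N), the firing rate of each of N neurons in each of T time bins.
    Returns inputs for recurrent decoders, shape (samples, B, N), and for
    nonrecurrent decoders, shape (samples, N * B), plus the predicted output's bin indices."""
    T, N = rates.shape
    B = bins_before + bins_current
    # One sample per output bin that has a full history of B bins of neural activity
    out_idx = np.arange(bins_before, T)
    X_seq = np.stack([rates[t - bins_before:t + bins_current] for t in out_idx])  # (samples, B, N)
    X_flat = X_seq.reshape(len(out_idx), B * N)                                   # (samples, N * B)
    return X_seq, X_flat, out_idx

# Example with the schematic's dimensions: N = 4 neurons, B = 4 bins (3 preceding + 1 concurrent)
rates = np.random.rand(1000, 4)
X_seq, X_flat, out_idx = make_decoder_inputs(rates)
print(X_seq.shape, X_flat.shape)  # (997, 4, 4) (997, 16)
```

Bins following the output (useful when decoding preceding sensory variables) could be appended to the window in the same way.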

Figure 2.

    Schematic of cross-validation. a, After training a decoder on some data (green), we would like to know how well it performs on held-out data that we do not have access to (red). b, Left, By splitting the data we have into test (orange) and train (green) segments, we can approximate the performance on held-out data. In k-fold cross-validation, we retrain the decoder k times, and each time rotate which parts of the data are the test or train data. The average test set performance approximates the performance on held-out data. Right, If we want to select among many models, we cannot select them by maximizing performance on the same data we will use to report the final score. (This is very similar to "p-hacking" statistical significance.) Instead, we maximize performance on "validation" data (blue), and again rotate through the available data. c, All failure modes are ways in which a researcher lets information from the test set "leak" into the training algorithm. This happens if you explicitly train on the test data (top), use any statistics of the test data to modify the training data before fitting (middle), or select models or hyperparameters based on their performance on the test data (bottom).
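
To make the scheme in b concrete, below is a minimal sketch of k-fold cross-validation with an inner validation split for hyperparameter selection, so the test fold never influences fitting or model choice. It uses scikit-learn with ridge regression as a stand-in decoder; the toy data, candidate hyperparameters, and split sizes are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

X, y = np.random.rand(500, 40), np.random.rand(500, 2)  # toy features (neural) and outputs (kinematics)
alphas = [0.1, 1.0, 10.0]                                # candidate hyperparameters

test_scores = []
for train_idx, test_idx in KFold(n_splits=10).split(X):
    # Hyperparameters are chosen on a validation split carved out of the training fold only
    X_tr, X_val, y_tr, y_val = train_test_split(X[train_idx], y[train_idx],
                                                test_size=0.2, random_state=0)
    best_alpha = max(alphas, key=lambda a: r2_score(
        y_val, Ridge(alpha=a).fit(X_tr, y_tr).predict(X_val)))
    # Refit on the full training fold with the chosen hyperparameter; score once on the test fold
    model = Ridge(alpha=best_alpha).fit(X[train_idx], y[train_idx])
    test_scores.append(r2_score(y[test_idx], model.predict(X[test_idx])))

# The average test-fold performance approximates performance on truly held-out data
print(np.mean(test_scores))
```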

Figure 3.

    Example decoder results. Example decoding results from motor cortex (left), somatosensory cortex (middle), and hippocampus (right), for all 11 methods (top to bottom). Ground truth traces are in black, while decoder results are in various colors.

Figure 4.

    Decoder result summary. R2 values are reported for all decoders (different colors) for each brain area (top to bottom). Error bars represent the mean ± SEM across cross-validation folds. Xs represent the R2 values of each cross-validation fold. The NB (Naive Bayes) decoder had mean R2 values of 0.26 and 0.36 (below the minimum y-axis value) for the motor and somatosensory cortex datasets, respectively. Note the different y-axis limits for the hippocampus dataset in this and all subsequent figures. In Extended Data, we include the accuracy for multiple versions of the Kalman filter (Extended Data Fig. 4-1), accuracy for multiple bin sizes (Extended Data Fig. 4-2), and a table with further details of all these methods (Extended Data Fig. 4-3).
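
The reported R2 is the coefficient of determination on each test fold, summarized as mean ± SEM across folds. A small sketch of that summary is below; the pooled-across-outputs definition of R2 and the placeholder fold data are assumptions, not necessarily the paper's exact convention.

```python
import numpy as np

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - (residual sum of squares) / (total sum of squares)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# One (ground truth, prediction) pair per cross-validation fold; placeholder arrays here
fold_results = [(np.random.rand(200, 2), np.random.rand(200, 2)) for _ in range(10)]
fold_r2 = np.array([r2(y_true, y_pred) for y_true, y_pred in fold_results])
sem = fold_r2.std(ddof=1) / np.sqrt(len(fold_r2))
print(f"R2 = {fold_r2.mean():.3f} +/- {sem:.3f} (mean +/- SEM across folds)")
```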

Figure 5.

    Decoder results with limited data. a, Testing the effects of limited training data. Using varying amounts of training data, we trained two traditional methods (Wiener filter and Kalman filter), and two modern methods (feedforward neural network and LSTM). R2 values are reported for these decoders (different colors) for each brain area (top to bottom). Error bars are 68% confidence intervals (meant to approximate the SEM) produced via bootstrapping, as we used a single test set. Values with negative R2s were not shown. b, Testing the effects of few neurons. Using only 10 neurons, we trained two traditional methods (Wiener filter and Kalman filter), and two modern methods (feedforward neural network and LSTM). We used the same testing set as in a, and the largest training set from a. R2 values are reported for these decoders for each brain area. Error bars represent the mean ± SEM of multiple repetitions with different subsets of 10 neurons. Xs represent the R2 values of each repetition. Note that the y-axis limits are different in a and b. In Extended Data, we provide examples of the decoder predictions for each of these methods (Extended Data Fig. 5-1).
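
Because a single test set is used in a, the error bars come from bootstrapping the test-set predictions. A minimal sketch of a 68% bootstrap confidence interval for R2, resampling test-set time points with replacement, is below; the function, resample count, and placeholder arrays are illustrative, and scikit-learn's r2_score stands in for the paper's metric.

```python
import numpy as np
from sklearn.metrics import r2_score

def bootstrap_r2_ci(y_true, y_pred, n_boot=1000, ci=68, seed=0):
    """Bootstrap a confidence interval for R2 by resampling test-set samples with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)              # resample time points with replacement
        scores.append(r2_score(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(scores, [(100 - ci) / 2, 100 - (100 - ci) / 2])
    return lo, hi

y_true, y_pred = np.random.rand(300, 2), np.random.rand(300, 2)  # placeholder test-set data
print(bootstrap_r2_ci(y_true, y_pred))  # (lower, upper) bounds of the 68% interval
```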

Figure 6.

    Sensitivity of neural network results to hyperparameter selection. In a feedforward neural network, we varied the number of hidden units per layer (in increments of 100) and the proportion of dropout (in increments of 0.1), and evaluated the performance of the decoder on all three datasets (top to bottom). The neural network had two hidden layers, each with the same number of hidden units. The number of training epochs was kept constant at 10. The colors show the R2 on the test set, and the color scale of each panel spans the range [maximum R2 – 0.2, maximum R2]. a, We used a large amount of training data (the maximum amount used in Fig. 5a), which was 10, 20, and 37.5 min of data for the motor cortex, somatosensory cortex, and hippocampus datasets, respectively. b, Same results for a limited amount of training data: 1, 1, and 15 min of data for the motor cortex, somatosensory cortex, and hippocampus datasets, respectively.
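
The grid evaluated in this figure could be sketched as below, using Keras with a mean-squared-error loss as a plausible stand-in for the feedforward decoder; the ranges of hidden units and dropout, the optimizer, and the placeholder data are assumptions rather than the authors' exact settings.

```python
import numpy as np
from sklearn.metrics import r2_score
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

def build_net(n_inputs, n_outputs, n_units, dropout):
    """Two hidden layers with the same number of units, dropout after each."""
    model = Sequential([
        Input(shape=(n_inputs,)),
        Dense(n_units, activation='relu'),
        Dropout(dropout),
        Dense(n_units, activation='relu'),
        Dropout(dropout),
        Dense(n_outputs),
    ])
    model.compile(loss='mse', optimizer='adam')
    return model

# Placeholder train/test splits: flattened N x B features, 2-D kinematic outputs
X_train, y_train = np.random.rand(2000, 160), np.random.rand(2000, 2)
X_test, y_test = np.random.rand(500, 160), np.random.rand(500, 2)

results = {}
for n_units in range(100, 801, 100):          # hidden units per layer, increments of 100
    for dropout in np.arange(0.0, 0.6, 0.1):  # proportion of dropout, increments of 0.1
        model = build_net(X_train.shape[1], y_train.shape[1], n_units, dropout)
        model.fit(X_train, y_train, epochs=10, verbose=0)   # training epochs fixed at 10
        results[(n_units, round(float(dropout), 1))] = r2_score(
            y_test, model.predict(X_test, verbose=0))
```

As in the figure, this maps test-set R2 across the grid as a sensitivity analysis; for actual model selection, hyperparameters would be chosen on validation data, per the caution in Figure 2.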

Extended Data

Extended Data 1

    Supplementary Code Package. Download Extended Data 1, ZIP file.

Figure 4-1

    Kalman filter versions. R2 values are reported for different versions of the Kalman filter for each brain area (top to bottom). On the left (in bluish gray), the Kalman filter is implemented as in the study by Wu et al. (2003). On the right (in cyan), the Kalman filter is implemented with an extra parameter that scales the noise matrix associated with the transition in kinematic states (see Demonstration methods). This version with the extra parameter is the one used in the main text. Error bars represent the mean ± SEM across cross-validation folds. Xs represent the R2 values of each cross-validation fold. Note the different y-axis limits for the hippocampus dataset. Download Figure 4-1, EPS file.
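
A minimal sketch of this style of Kalman filter decoder (in the spirit of Wu et al., 2003, not the authors' exact implementation): kinematic variables form the hidden state, binned firing rates are the observations, the model matrices are fit by least squares on training data, and a single extra hyperparameter scales the state-transition noise covariance as described above. Variable names and the usage lines are illustrative.

```python
import numpy as np

def fit_kalman(kin_train, rates_train, noise_scale=1.0):
    """Fit A (state transition), W (its noise covariance), H (observation matrix), Q (its noise
    covariance) by least squares. noise_scale is the extra hyperparameter multiplying W."""
    X0, X1 = kin_train[:-1], kin_train[1:]
    A = np.linalg.lstsq(X0, X1, rcond=None)[0].T                  # x_t ~ A @ x_{t-1}
    W = noise_scale * np.cov((X1 - X0 @ A.T).T)                   # scaled kinematic-transition noise
    H = np.linalg.lstsq(kin_train, rates_train, rcond=None)[0].T  # z_t ~ H @ x_t
    Q = np.cov((rates_train - kin_train @ H.T).T)
    return A, W, H, Q

def kalman_decode(rates_test, A, W, H, Q, x0):
    """Standard predict/update recursion; returns decoded kinematics for each time bin."""
    d = len(x0)
    x, P = x0.copy(), np.eye(d)
    decoded = []
    for z in rates_test:
        x, P = A @ x, A @ P @ A.T + W                         # predict
        K = P @ H.T @ np.linalg.pinv(H @ P @ H.T + Q)         # Kalman gain
        x, P = x + K @ (z - H @ x), (np.eye(d) - K @ H) @ P   # update with observed firing rates
        decoded.append(x)
    return np.array(decoded)

# Hypothetical usage: kin_train (T, d), rates_train (T, n), rates_test (T2, n)
# A, W, H, Q = fit_kalman(kin_train, rates_train, noise_scale=5.0)
# decoded = kalman_decode(rates_test, A, W, H, Q, x0=kin_train[-1])
```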

Figure 4-2

    Decoder results with different bin sizes. As different decoding applications may require different temporal resolutions, we tested a subset of methods with varying bin sizes. We trained two traditional methods (Wiener filter and Kalman filter), and two modern methods (feedforward neural network and LSTM). We used the same testing set as in Figure 5, and the largest training set from Figure 5. R2 values are reported for these decoders (different colors) for each brain area (top to bottom). Error bars are 68% confidence intervals (meant to approximate the SEM) produced via bootstrapping, as we used a single test set. Modern machine learning methods remained advantageous regardless of the temporal resolution. Note that for this figure, we used a slightly different amount of neural data than in other analyses in order to have a quantity that was divisible by many bin sizes. In this case, for motor cortex, we used 600 ms of neural activity prior to and including the current bin. For somatosensory cortex, we used 600 ms of neural activity centered on the current bin. For hippocampus, we used 2 s of neural activity centered on the current bin. Download Figure 4-2, EPS file.
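
Re-binning spike trains at different temporal resolutions is a simple preprocessing step; a short illustrative helper (not taken from the released code package) might look like the following, with toy spike times standing in for real recordings.

```python
import numpy as np

def bin_spikes(spike_times, t_start, t_end, bin_size):
    """Count each neuron's spikes in non-overlapping bins of width bin_size (seconds).
    spike_times: list of 1-D arrays, one array of spike times per neuron.
    Returns an array of shape (n_bins, n_neurons)."""
    edges = np.arange(t_start, t_end + bin_size, bin_size)
    counts = [np.histogram(st, bins=edges)[0] for st in spike_times]
    return np.stack(counts, axis=1)

# Example: the same 10-neuron toy recording binned at 50, 100, and 200 ms
spike_times = [np.random.uniform(0, 600, size=3000) for _ in range(10)]
for bin_size in (0.05, 0.1, 0.2):
    print(bin_size, bin_spikes(spike_times, 0, 600, bin_size).shape)
```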

Figure 4-3

    Additional decoder details, including equations and hyperparameters. These details are for the decoder implementations that we use in our demonstrations and have in our code package. Download Figure 4-3, PDF file.

Figure 5-1

    Example results with limited training data. Using only 2 min of training data for motor cortex and somatosensory cortex, and 15 min of training data for hippocampus, we trained two traditional methods (Wiener filter and Kalman filter) and two modern methods (feedforward neural network and LSTM). Example decoding results are shown from motor cortex (left), somatosensory cortex (middle), and hippocampus (right) for these methods (top to bottom). Ground truth traces are in black, while decoder results are in the same colors as previous figures. Download Figure 5-1, EPS file.

Keywords

  • Neural decoding
  • machine learning
  • Neural data analysis
  • deep learning
  • motor cortex
  • somatosensory cortex
  • hippocampus
