Research Article | Open Source Tools and Methods, Novel Tools and Methods

DeepCINAC: A Deep-Learning-Based Python Toolbox for Inferring Calcium Imaging Neuronal Activity Based on Movie Visualization

Julien Denis, Robin F. Dard, Eleonora Quiroli, Rosa Cossart and Michel A. Picardo
eNeuro 22 July 2020, 7 (4) ENEURO.0038-20.2020; https://doi.org/10.1523/ENEURO.0038-20.2020
All authors: Aix Marseille Univ, INSERM, INMED, Marseille 13273, France

Figures & Data

Figures

  • Figure 1.

    Experimental paradigm. A, Experimental timeline. B, Intraventricular injection of GCaMP6s in pups at P0 (drawing). C, Schematic representing the cranial window surgery. D, top left, Imaged field of view. Scale bar: 100 µm. Top right, Activity of five random neurons in the field of view (variation of fluorescence is expressed as Δf/f). Scale bar: 50 s. Bottom, Drawing of a head-fixed pup under the microscope.

  • Figure 2.

    Examples of different uses of the GUI. The GUI can be used for data exploration (A1, A2), to establish the ground truth (B), and to evaluate DeepCINAC predictions (C). A, The GUI can be used to explore the activity inference from any method. The spikes inferred from CaImAn are represented by the green marks at the bottom. The GUI allows the user to play the movie at the time of the selected transient and to visualize the transient and source profiles of the cell of interest. A1, Movie visualization and the correlation between transient and source profiles allow the classification of the first selected transient as a true positive (TP) and the second selected transient as a false positive (FP). A2, Movie visualization and the correlation between transient and source profiles allow the classification of the selected transient as a false negative (FN). B, The GUI can be used to establish a ground truth. In this condition, it offers the user the possibility to manually annotate the onset and peak of each calcium transient. Onsets are represented by vertical dashed blue lines, peaks by green dots. C, When the activity inference is done using DeepCINAC, the GUI can display the classifier predictions. The prediction is represented by the red line. The dashed horizontal red line marks a probability of one. The blue area represents time periods during which the probability is above a given threshold, in this example 0.5. T: transient profile, S: source profile, Corr: correlation, FOV: field of view.
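    The thresholding step described in C (frames whose predicted probability exceeds a threshold are considered active) can be sketched in Python. This is an illustrative helper, not part of the DeepCINAC API; the function name and threshold default are assumptions.

    ```python
    import numpy as np

    def active_periods(probs, threshold=0.5):
        """Return (start, end) frame index pairs (end exclusive) during
        which the per-frame activity probability exceeds `threshold`."""
        above = np.asarray(probs) > threshold
        # Rising (+1) and falling (-1) edges of the boolean trace.
        edges = np.diff(above.astype(int))
        starts = list(np.where(edges == 1)[0] + 1)
        ends = list(np.where(edges == -1)[0] + 1)
        # Handle traces that begin or end above threshold.
        if above[0]:
            starts.insert(0, 0)
        if above[-1]:
            ends.append(len(above))
        return list(zip(starts, ends))

    print(active_periods([0.1, 0.7, 0.9, 0.2, 0.6, 0.4]))  # → [(1, 3), (4, 5)]
    ```

    With a 0.5 threshold, the example trace yields two active periods, matching the blue areas shown in the GUI.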

  • Figure 3.

    Workflow to establish the ground truth. First, a cell was randomly chosen in the imaged field of view. 1, All putative transients of the segment to label were identified, from the onset to the peak of each calcium event. 2, Three human experts [“expert” (A), “expert” (B), “expert” (C)] independently annotated the segment. Among all putative transients, each human expert had to decide whether, in his opinion, it was a true transient. 3, The combination of the labeling led to “consensual transients” (i.e., true transients for each human expert; black square) and to “non-consensual transients” (i.e., true transients for at least one human expert but not all of them; open square). 4, All non-consensual transients were discussed and the ground truth was established.
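    The combination step (3) can be illustrated with a small sketch. The arrays and names are hypothetical: each expert's annotation is modeled as one boolean entry per putative transient (True = judged a real transient).

    ```python
    import numpy as np

    # Hypothetical annotations from the three experts for four putative transients.
    expert_a = np.array([True, True, False, True])
    expert_b = np.array([True, False, False, True])
    expert_c = np.array([True, True, False, True])

    votes = expert_a.astype(int) + expert_b.astype(int) + expert_c.astype(int)
    consensual = votes == 3                      # true transient for every expert
    non_consensual = (votes > 0) & (votes < 3)   # flagged for joint discussion

    print(np.where(consensual)[0])       # transients agreed on by all experts
    print(np.where(non_consensual)[0])   # transients needing discussion
    ```

    Here transients 0 and 3 are consensual, while transient 1 would be discussed before being added to (or excluded from) the ground truth.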

  • Figure 4.

    Architecture of the DeepCINAC neural network. As a first step, for each set of inputs of the same cell, we extract CNN features of the video frames, pass them to an attention mechanism, and feed the outputs into a forward pass network (FU, green units) and a backward pass network (BU, orange units), representing a bidirectional LSTM. Another bidirectional LSTM is fed from the attention mechanism and the previous bidirectional LSTM outputs. An LSTM (MU, blue units) then integrates the outputs from the processing of the three types of inputs to generate a final video representation. A sigmoid activation function is finally used to produce a probability for the cell to be active at each frame given as input.

  • Figure 5.

    DeepCINAC step by step workflow. A, Schematic of two-photon imaging experiment. B, Screenshot of DeepCINAC GUI used to explore and annotate data. C, The GUI produces .cinac files that contain the necessary data to train or benchmark a classifier. D, Schematic representation of the architecture of the model that will be used to train the classifier and predict neuronal activity. E, Training of the classifier using the previously defined model. F, Schematic of a raster plot resulting from the inference of the neuronal activity using the trained classifier. G, Evaluation of the classifier performance using precision, sensitivity and F1 score. H, Active learning pipeline: screenshots of the GUI used to identify edge cases where the classifier wrongly infers the neuronal activity and annotate new data on similar situations to add data for a new classifier training.
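    The evaluation metrics in G (precision, sensitivity, and F1 score) can be computed frame-wise as sketched below. `evaluate` is an illustrative helper, not part of the toolbox, under the assumption that both traces are binary per-frame activity vectors.

    ```python
    def evaluate(pred, truth):
        """Frame-wise precision, sensitivity (recall), and F1 score between
        a predicted and a ground-truth binary activity trace."""
        tp = sum(p and t for p, t in zip(pred, truth))          # true positives
        fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
        fn = sum(not p and t for p, t in zip(pred, truth))      # false negatives
        precision = tp / (tp + fp) if tp + fp else 0.0
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * sensitivity / (precision + sensitivity)
              if precision + sensitivity else 0.0)
        return precision, sensitivity, f1

    p, s, f1 = evaluate([1, 1, 0, 1], [1, 0, 0, 1])
    print(p, s, f1)  # precision 2/3, sensitivity 1.0, F1 0.8
    ```

    In this toy example the classifier detects every true active frame (sensitivity 1.0) but adds one false positive, which lowers precision and hence F1.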

  • Figure 6.

    Validation of visual ground truth and deep learning approach. A, Boxplots showing sensitivity for the three human experts (R.F.D., J.D., M.A.P.) and CINAC_v6 evaluated against the known ground truth from four cells from the GENIE project. B, Boxplots showing precision for the three human experts (R.F.D., J.D., M.A.P.) and CINAC_v6 evaluated against the known ground truth from four cells from the GENIE project. C, Boxplots showing F1 score for the three human experts (R.F.D., J.D., M.A.P.) and CINAC_v6 evaluated against the known ground truth from four cells from the GENIE project. Each colored dot represents a cell. Cell labels in the legend correspond to session identifiers from the dataset. CINAC_v6 is a classifier trained on data from the GENIE project and the Hippo-dvt dataset (Table 1; Extended Data Table 1-1).

  • Figure 7.

    Evaluation of CINAC_v1 performance on Hippo-dvt dataset. A, Boxplots showing sensitivity for the three human experts (R.F.D., J.D., M.A.P.), CaImAn and CINAC_v1 evaluated against the visual ground truth of 25 cells. A total of 15 cells were annotated by J.D. and R.F.D., six by M.A.P. B, Boxplots showing precision for the three human experts (R.F.D., J.D., M.A.P.), CaImAn and CINAC_v1 evaluated against the visual ground truth of 25 cells. A total of 15 cells were annotated by J.D. and R.F.D., six by M.A.P. C, Boxplots showing F1 score for the three human experts (R.F.D., J.D., M.A.P.), CaImAn and CINAC_v1 evaluated against the visual ground truth of 25 cells. A total of 15 cells were annotated by J.D. and R.F.D., six by M.A.P. Each colored dot represents a cell, the number inside indicates the cell’s id and each color represents a session as identified in the legend. CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset (Table 1; Extended Data Table 1-1). Figure 7 is supported by Extended Data Figures 7-1, 7-2. *p < 0.05.

  • Figure 8.

    Use of DeepCINAC classifiers to optimize performance on various datasets. A, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Hippo-GECO dataset. For each panel, we evaluated CaImAn performance as well as two different versions of CINAC (v1 and v3). CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset and CINAC_v3 is a classifier trained on data from the Hippo-GECO dataset (Table 1; Extended Data Table 1-1). B, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Hippo-6m dataset. For each panel, we evaluated CaImAn performance as well as two different versions of CINAC (v1 and v4). CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset and CINAC_v4 is a classifier trained on data from the Hippo-dvt, Hippo-6m, and Barrel-ctx-6s datasets (Table 1; Extended Data Table 1-1). C, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Barrel-ctx-6s dataset. For each panel, we evaluated CaImAn performance as well as two different versions of CINAC (v1 and v4). CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset and CINAC_v4 is a classifier trained on data from the Hippo-dvt, Hippo-6m, and Barrel-ctx-6s datasets (Table 1; Extended Data Table 1-1). D, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Hippo-dvt-INs dataset. For each panel, we evaluated CaImAn performance as well as two different versions of CINAC (v1 and v7). CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset and CINAC_v7 is a classifier trained on interneurons from the Hippo-dvt dataset (Table 1; Extended Data Table 1-1). Each colored dot represents a cell, the number inside indicates the cell’s id, and each color represents a session as identified in the legend. Figure 8 is supported by Extended Data Figures 8-1, 8-2, 8-3.

Tables

    Table 1

    Data used to train the classifiers

    Dataset          CINAC version*   n cells   n animals   n frames
    Hippo-dvt        v1, v4, v6       104¹      13²         689,272
    Hippo-GECO       v3               5         2           45,000
    Hippo-6m         v4               3         1           42,000
    Barrel-ctx-6s    v4               20        2           36,000
    Visual-ctx-6s    v5, v6           7         NA          33,800
    Hippo-dvt-INs    v7               29        9           362,500
    • Training datasets include the validation datasets (see Materials and Methods).

    • Description of the datasets, specifying the number of frames, number of animals, and fields of view included, as well as the classifier versions that used these datasets.

    • n: number of.

    • * Versions that used at least part of the dataset.

    • ¹ Including two simulated movies, representing 32 cells and 80,000 frames.

    • ² Including two simulated movies.

    • Table 1 is supported by Extended Data Table 1-1.

    Table 2

    Cell type prediction confusion matrix

                               Ground truth
                               Pyramidal cell   Interneuron   Noise
    Prediction  Pyramidal cell       46              5           0
                Interneuron           1             31           2
                Noise                 4              2           9
    • Confusion matrix, representing the number of true positives, true negatives, false positives, and false negatives. Ground truth refers to the manually detected interneurons and pyramidal cells. Prediction refers to the type predicted by the classifier for the same cells.
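    As an illustration, per-class precision and sensitivity can be read off a confusion matrix laid out as above (rows = prediction, columns = ground truth). This is a generic sketch, not DeepCINAC code; the matrix values are taken from Table 2.

    ```python
    import numpy as np

    # Confusion matrix from Table 2: rows = prediction, columns = ground truth
    # (order: pyramidal cell, interneuron, noise).
    cm = np.array([[46, 5, 0],
                   [1, 31, 2],
                   [4, 2, 9]])

    # Per-class precision: correct predictions / all predictions of that class.
    precision = np.diag(cm) / cm.sum(axis=1)
    # Per-class sensitivity: correct predictions / all true members of that class.
    sensitivity = np.diag(cm) / cm.sum(axis=0)

    print(precision.round(2))
    print(sensitivity.round(2))
    ```

    For example, pyramidal-cell precision is 46/51 ≈ 0.90, and interneuron sensitivity is 31/38 ≈ 0.82.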

Movies

  • Movie 1.

    In vivo two-photon imaging in the CA1 region of the hippocampus in a 12-d-old mouse pup. Field of view (FOV) is 80 × 80 µm, frame rate is 8 Hz, and the video is sped up 10 times. The video shows recurrent periods of neuronal activation recruiting a large number of adjacent neurons, leading to spatial and temporal overlaps.

  • Movie 2.

    In vivo two-photon imaging in the CA1 region of the hippocampus in a 7-d-old mouse pup. Field of view (FOV) is 100 × 100 µm, frame rate is 8 Hz, and the video is sped up 10 times. The video shows different cell types (i.e., interneurons and pyramidal cells) with different calcium dynamics.

Extended Data

  • Extended Data Table 1-1

    Detailed data used to train and test the classifiers. Detailed content of training and test datasets used for all CINAC versions (v1 to v7) used in the analysis. Download Table 1-1, DOCX file.

  • Extended Data Figure 7-1

    Comparison of CINAC performance to human experts. A, Boxplot displaying the F1 score of two human experts (R.F.D. and J.D.) and CINAC_v1. Here are shown 15 cells annotated by both experts. B, Boxplot displaying the F1 score of one human expert (M.A.P.) and CINAC_v1. Here are shown six cells annotated by M.A.P. Each colored dot represents a cell, the number inside indicates the cell’s id, and each color represents a session as identified in the legend. CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset (Table 1; Extended Data Table 1-1). Download Figure 7-1, TIF file.

  • Extended Data Figure 7-2

    Onset to peak detection of calcium transient. Boxplot showing the proportion of frames predicted as active during the transient rise time. CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset (Table 1; Extended Data Table 1-1). Each colored dot represents a transient and each color represents a session as identified in the legend. Download Figure 7-2, TIF file.
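    The quantity plotted in this figure (the proportion of rise-time frames predicted as active) can be sketched as follows; `rise_time_coverage` is a hypothetical helper, not part of the toolbox.

    ```python
    import numpy as np

    def rise_time_coverage(pred_active, onset, peak):
        """Fraction of a transient's rise-time frames (onset to peak,
        inclusive) that the classifier marks as active."""
        rise = np.asarray(pred_active)[onset:peak + 1]
        return rise.mean()

    # Transient rising from frame 2 to frame 5; 3 of those 4 frames predicted active.
    pred = [0, 0, 1, 1, 1, 0, 1, 0]
    print(rise_time_coverage(pred, onset=2, peak=5))  # → 0.75
    ```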

  • Extended Data Figure 8-1

    Comparison of CaImAn and CINAC_v1 performance on various datasets. A, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Hippo-GECO dataset. For each panel, we evaluated CaImAn performance as well as CINAC_v1. B, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Hippo-6m dataset. C, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Barrel-ctx-6s dataset. D, Boxplot displaying the sensitivity (top panel), precision (middle panel), and F1 score (bottom panel) for the Hippo-dvt-INs dataset. Each colored dot represents a cell, the number inside indicates the cell’s id, and each color represents a session as identified in the legend. CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset (Table 1; Extended Data Table 1-1). Download Figure 8-1, TIF file.

  • Extended Data Figure 8-2

    Use of DeepCINAC classifiers to optimize performance on the Visual-ctx-6s dataset. A, Boxplots showing sensitivity for CINAC_v1, CINAC_v5, and CINAC_v6 evaluated against the known ground truth of four cells from the GENIE project. B, Boxplots showing precision for CINAC_v1, CINAC_v5, and CINAC_v6 evaluated against the known ground truth of four cells from the GENIE project. C, Boxplots showing F1 score for CINAC_v1, CINAC_v5, and CINAC_v6 evaluated against the known ground truth of four cells from the GENIE project. CINAC_v1 is a classifier trained on data from the Hippo-dvt dataset, CINAC_v5 is a classifier trained on data from the Visual-ctx-6s dataset, and CINAC_v6 is a classifier trained on data from the Visual-ctx-6s dataset and four cells from the Hippo-dvt dataset (Table 1; Extended Data Table 1-1). Each colored dot represents a cell. Cell labels in the legend correspond to session identifiers from the dataset. Download Figure 8-2, TIF file.

  • Extended Data Figure 8-3

    Cell assembly detection and organization using CaImAn and CINAC_v1 on published data. A, B, The top panel represents the clustered covariance matrix of synchronous calcium events (SCEs). The middle panel represents neurons active in SCEs organized by cluster (cell assembly). The bottom panel represents the cell map; each color represents a cell assembly. A, Cell assembly detection results using CaImAn. B, Cell assembly detection results using CINAC_v1. C, Individual cells composing assemblies in each method; a represents the number of neurons detected by Modol et al. (2020) using CaImAn, and b represents the number of neurons detected using CINAC_v1. Each color represents a cell assembly, color coded as in the maps. Download Figure 8-3, TIF file.

Keywords

  • calcium imaging
  • CNN
  • deep learning
  • hippocampus
  • LSTM
  • neuronal activity

Responses to this article

No eLetters have been published for this article.


Copyright © 2026 by the Society for Neuroscience.
eNeuro eISSN: 2373-2822
