Research Article: Methods/New Tools, Novel Tools and Methods

Automated Detection and Localization of Synaptic Vesicles in Electron Microscopy Images

Barbara Imbrosci, Dietmar Schmitz and Marta Orlando
eNeuro 4 January 2022, 9 (1) ENEURO.0400-20.2021; https://doi.org/10.1523/ENEURO.0400-20.2021
Barbara Imbrosci
1German Center for Neurodegenerative Diseases (DZNE) Berlin, Berlin 10117, Germany
2Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin 10117, Germany
Dietmar Schmitz
1German Center for Neurodegenerative Diseases (DZNE) Berlin, Berlin 10117, Germany
2Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin 10117, Germany
3NeuroCure Cluster of Excellence, Berlin 10117, Germany
4Bernstein Center for Computational Neuroscience (BCCN) Berlin, Berlin 10115, Germany
5Einstein Center for Neurosciences (ECN) Berlin, Berlin 10117, Germany
6Max-Delbrück-Centrum (MDC) for Molecular Medicine, Berlin 13125, Germany
Marta Orlando
2Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin 10117, Germany
3NeuroCure Cluster of Excellence, Berlin 10117, Germany

Figures & Data

Figures

Figure 1.

Architecture and performance of the vesicle classifier. A, Architecture of the CNN and diagrams showing the (B) cross-entropy loss, (C) accuracy, (D) precision, (E) recall, (F) F1-score, and (G) ROC curve on the training and test dataset (black and blue, respectively). H, Predictions of the vesicle classifier on 39 image patches from the test dataset. The number in the first square brackets on top of each image represents the manually assigned label, whereas the number in the second square brackets represents the prediction made by the classifier. The value 0 indicates that the label/prediction was negative (no vesicle), while the value 1 indicates a positive label/prediction (vesicle). In this representative example, 38 out of 39 images were predicted correctly. Red and black are used to indicate wrong and correct predictions, respectively.
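
A minimal sketch of a small convolutional patch classifier of the kind described above, written in PyTorch. The patch size (40 × 40 px), layer widths, and optimizer settings are illustrative assumptions, not the architecture shown in panel A; only the binary cross-entropy objective of panel B is taken from the figure.

```python
# Hypothetical sketch of a small CNN patch classifier (vesicle / no vesicle).
# Patch size and layer widths are illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class VesiclePatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 10 * 10, 64), nn.ReLU(),
            nn.Linear(64, 1),            # single logit: vesicle vs no vesicle
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = VesiclePatchClassifier()
criterion = nn.BCEWithLogitsLoss()       # binary cross-entropy, as in panel B
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# one illustrative training step on a dummy batch of 40 x 40 grayscale patches
patches = torch.rand(8, 1, 40, 40)
labels = torch.randint(0, 2, (8, 1)).float()   # 0 = no vesicle, 1 = vesicle
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```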

Figure 2.

Evaluation of the single steps built into the algorithm. A, Micrograph showing a hMFB, from a chemically-fixed acute hippocampal slice, with all manually detected vesicles tagged by the white dots. Only vesicles belonging to the synaptic terminal delimited by the blue line were manually labeled and predicted. B, Magnification of the rectangular area shown in A with vesicles predicted by the algorithm. The vesicles were predicted either without (top) or with the contribution of the refinement CNN (bottom). On the left, the positions of the predicted vesicles are tagged by the white dots, and false positives and false negatives are marked by the semi-transparent blue and red circles, respectively. On the right, in addition to the positions of the predicted vesicles (white dots), the estimated vesicle areas are represented as an overlaid semi-transparent pink mask. C, F1-score without and with the contribution of the refinement CNN. D, EM image of a small portion of a hMFB (top, left) from a chemically-fixed acute hippocampal slice; the same portion overlaid with the probability map generated by the first CNN as a semi-transparent blue mask (top, middle); and the probability map alone (top, right), where the green open circles point at three erroneously merged vesicles before clustering and the two blue circles point at two clusters falling below the threshold size for being considered vesicles. Vesicles detected without clustering-based segmentation (bottom, left); the arrows point at three merge errors. Vesicles detected without the size threshold for excluding very small clusters (bottom, middle); the arrows point at two false positives. Vesicles detected after implementing both the clustering-based segmentation algorithm and the size threshold for excluding too-small clusters (bottom, right); note that here both error types are eliminated. E, F1-score without and with the contribution of the clustering-based segmentation algorithm and of (F) the size threshold for excluding very small clusters. Asterisks in C, E, and F indicate a p value < 0.05.
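
The caption above refers to three postprocessing ideas: a probability map produced by the first CNN, a clustering-based segmentation that splits erroneously merged vesicles, and a size threshold that discards clusters too small to be vesicles. Below is a generic sketch of these ideas that uses a watershed split of the thresholded probability map; the watershed strategy, the function name, and all numeric thresholds are illustrative stand-ins, not the clustering routine implemented in the article.

```python
# Generic postprocessing sketch: split merged detections in a probability map
# and drop clusters below a size threshold. The watershed split and all
# thresholds are illustrative stand-ins for the article's clustering step.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def detect_vesicles(prob_map, prob_thr=0.5, min_area_px=30, min_peak_dist=5):
    mask = prob_map > prob_thr                        # binarize the CNN output
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, min_distance=min_peak_dist, labels=mask)
    markers = np.zeros_like(mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, markers, mask=mask)  # split merged blobs
    centers = []
    for region_id in range(1, labels.max() + 1):
        region = labels == region_id
        if region.sum() < min_area_px:                 # size threshold (panel D)
            continue
        cy, cx = ndi.center_of_mass(region)
        centers.append((cy, cx))
    return labels, centers
```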

Figure 3.

Evaluation of the performance of the algorithm on different sample preparations. A, Portion of a micrograph of a hMFB from a chemically-fixed acute hippocampal slice with all manually detected vesicles tagged by the white dots (left), with all predicted vesicles tagged by the white dots and false positives and false negatives marked by the semi-transparent blue and red circles, respectively (middle), and with all predicted vesicles tagged by the white dots and their estimated areas represented by the overlaid semi-transparent pink mask (right). Same as in A, but here the micrographs show small hippocampal synapses from (B) a cryo-fixed and (C) a chemically-fixed neuronal culture. D, Precision, recall, and F1-score of the algorithm for the three different sample preparations, and the F1-score obtained by comparing the results of the two human-based analyses (F1-s. man). For this analysis, only vesicles belonging to one synaptic terminal were manually labeled and predicted. Extended Data Figure 3-1 shows the comparison between the synaptic vesicle counts detected by humans and by the algorithm. Extended Data Figure 3-2 shows an example of synaptic vesicle detection using ilastik.
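
Precision, recall, and F1 in D (and in Tables 2-4) require matching predicted vesicle positions to the manual annotations. A hedged sketch of one common way to score such point detections, using greedy nearest-neighbour matching within a pixel tolerance; the matching rule and the tolerance are assumptions and not necessarily what performance_checker.py (Extended Data 1) implements.

```python
# Illustrative scoring of predicted vesicle centers against manual annotations.
# Greedy nearest-neighbour matching within a pixel tolerance (assumed values).
import numpy as np
from scipy.spatial.distance import cdist

def score_detections(pred_xy, true_xy, tol_px=10.0):
    pred_xy, true_xy = np.asarray(pred_xy, float), np.asarray(true_xy, float)
    d = cdist(pred_xy, true_xy)                 # pairwise distances
    matched_pred, matched_true = set(), set()
    for i, j in sorted(((i, j) for i in range(d.shape[0])
                        for j in range(d.shape[1])), key=lambda ij: d[ij]):
        if d[i, j] > tol_px:
            break                               # remaining pairs are even farther apart
        if i not in matched_pred and j not in matched_true:
            matched_pred.add(i); matched_true.add(j)
    tp = len(matched_pred)
    fp = len(pred_xy) - tp                      # unmatched predictions
    fn = len(true_xy) - tp                      # missed annotations
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```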

Figure 4.

Performance of the model with different levels of noise and contrast. A, Precision (pink), recall (blue), and F1-score (black) at increasing noise levels. B, Portions of a micrograph of a hMFB with increasing levels of noise (from left to right) with all predicted vesicles tagged by the white dots. The level of noise in the images in the middle and on the right is marked by the gray rectangles in A. C, D, Same as in A, B, but instead of noise, different levels of contrast were tested. The contrast level in the images on the left (low contrast) and on the right (high contrast) is marked by the gray rectangles in C. For this analysis, only vesicles belonging to one synaptic terminal were manually labeled and predicted. In A, C, the dots represent the mean and the bars the SEM.
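
A sketch of how such a robustness test can be set up: copies of an image are degraded with Gaussian noise of increasing standard deviation, or rescaled in contrast around the mean gray value, before being passed to the detector and re-scored. The degradation functions and parameter ranges below are illustrative assumptions; the levels actually tested are those on the x axes of A and C.

```python
# Illustrative robustness sweep: degrade an image with Gaussian noise or a
# contrast change, then re-score the detector. Parameter ranges are assumptions.
import numpy as np

def add_gaussian_noise(img, sigma, rng=np.random.default_rng(0)):
    noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255)

def adjust_contrast(img, factor):
    mean = img.astype(float).mean()
    return np.clip(mean + factor * (img.astype(float) - mean), 0, 255)

def sweep(img, detector, score, truth):
    """detector(img) -> predicted centers; score(pred, truth) -> (prec, rec, f1)."""
    results = {}
    for sigma in (0, 5, 10, 20, 40):           # illustrative noise levels
        results[("noise", sigma)] = score(detector(add_gaussian_noise(img, sigma)), truth)
    for factor in (0.25, 0.5, 1.0, 1.5, 2.0):  # illustrative contrast factors
        results[("contrast", factor)] = score(detector(adjust_contrast(img, factor)), truth)
    return results
```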

Figure 5.

Performance of the algorithm on images available online. A–D, Portions of micrographs containing synaptic vesicles with all manually detected vesicles tagged by the white dots (left), with all predicted vesicles tagged by the white dots and false positives and false negatives marked by the semi-transparent blue and red circles, respectively (middle), and with all predicted vesicles tagged by the white dots and their estimated areas represented by the overlaid semi-transparent pink mask (right). The image in A is a portion of a virtual section of an electron tomogram from a cryo-fixed mouse hMFB (Imig et al., 2020); the image in B belongs to a synapse from a chemically-fixed zebrafish optic tectum (http://cellimagelibrary.org/images/6230). C, A portion of a synapse from serial block-face scanning EM (https://github.com/NeuroMorph-EPFL/NeuroMorph/tree/master/NeuroMorph_Datasets/EM_stack). D, A virtual slice of an electron tomogram of a chemically-fixed human synapse from the temporal lobe neocortex. Images in A, B, D were obtained with a transmission EM, whereas the image in C was obtained with a scanning EM. For this analysis, all vesicles present in the images were manually labeled and predicted.

Figure 6.

Correlations of parameters obtained by human analysis or by the algorithm. Correlations between algorithm and human results for (A) total vesicle count, (B) nearest-neighbor distance (nnd), and (C) estimated vesicle area. Each dot represents the average value for an image. The black lines represent the linear regressions. Asterisks indicate a p value < 0.05.
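
A short sketch of how the two derived quantities compared in B and C can be computed from detected vesicle centres: the nearest-neighbour distance (nnd) of each vesicle, and a Pearson correlation between per-image averages from the human annotation and from the algorithm. The scipy calls are real; the data layout and the example values are illustrative, not the article's data.

```python
# Nearest-neighbour distance per vesicle and Pearson correlation of per-image
# averages (human vs algorithm). Data layout and values are illustrative.
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import pearsonr

def nearest_neighbour_distances(centers_xy):
    tree = cKDTree(np.asarray(centers_xy, float))
    # k=2: the closest neighbour of every point is the point itself (distance 0)
    dists, _ = tree.query(centers_xy, k=2)
    return dists[:, 1]

# per-image mean nnd from human annotation vs from the algorithm (dummy values)
human_nnd = [42.1, 39.8, 45.3, 40.2]
algo_nnd = [41.5, 40.9, 44.0, 41.1]
r, p = pearsonr(human_nnd, algo_nnd)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")
```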

Tables

    Table 1

    Description of the datasets

    Dataset | Total images | Acute slices | Neur. cultures (cryo/chem. fix.) | Patches/full images | Usage
    Train 1 | 21 | 19 | 2/0 | Patches 34,805 | Train 1° cl.
    Test 1 | 6 | 4 | 2/0 | Patches 4,209 | Test 1° cl.
    Train 2 | 16 | 10 | 6/0 | Patches 6,245 | Train 2° cl.
    Test 2 | 8 | 5 | 3/0 | Patches 1,912 | Test 2° cl.
    Test final | 27 | 11 | 7/9 | Full images 27 | Evaluation
    External | 10 | 0 | 2/8 | Full images 27 | Evaluation
    • Description of the datasets used to train and test the first and second (refinement) classifiers and for evaluating the final performance of the model. The term “acute slices” refers to images of hMFBs from chemically-fixed acute hippocampal slices; “neur. cultures” refers to images of small hippocampal synapses from either cryo-fixed or chemically-fixed cultured neurons.

    Table 2

    Improvement of the model performance obtained by additionally applying the refinement classifier and the postprocessing steps

    Data | Precision | Recall | F1-score
    Without refinement | 49.62 ± 2.87%, n = 11 | 95.78 ± 0.60%, n = 11 | 64.73 ± 2.32%, n = 11
    Without clustering | 78.58 ± 1.17%, n = 11 | 79.02 ± 1.93%, n = 11 | 78.38 ± 1.18%, n = 11
    Without size threshold | 71.63 ± 1.12%, n = 11 | 82.78 ± 1.80%, n = 11 | 76.38 ± 0.90%, n = 11
    Final | 77.87 ± 1.15%, n = 11 | 82.34 ± 1.87%, n = 11 | 79.63 ± 1.11%, n = 11
    • Performance of the model (precision, recall, and F1-score) on images of hMFBs from chemically-fixed acute hippocampal slices (subset of dataset test final) without the refinement classifier, without clustering-based segmentation, without removal of clusters below the size threshold, and with all steps included. Data are presented as mean ± SEM.

    Table 3

    Evaluation of the model performance on images from different preparations

    Data | Precision | Recall | F1-score (alg.) | F1-score (human)
    Acute slices | 77.87 ± 1.15%, n = 11 | 82.34 ± 1.87%, n = 11 | 79.63 ± 1.11%, n = 11 | 83.59 ± 1.71%, n = 11
    Cryo-fixed n.c. | 85.10 ± 1.30%, n = 7 | 78.13 ± 4.69%, n = 7 | 80.82 ± 2.38%, n = 7 | 91.33 ± 0.93%, n = 7
    Chemically-fixed n.c. | 87.53 ± 1.96%, n = 9 | 87.24 ± 1.53%, n = 9 | 87.22 ± 1.37%, n = 9 | 92.99 ± 0.70%, n = 9
    • Performance of the model (precision, recall, and F1-score) on images of hMFBs from chemically-fixed acute hippocampal slices (acute slices) and of small hippocampal synapses from either cryo-fixed or chemically-fixed neuronal cultures (cryo-fixed n.c. and chemically-fixed n.c., respectively; dataset test final), together with the F1-score obtained by comparing the annotations of two humans with each other. Data are presented as mean ± SEM.

    Table 4

    Evaluation of the performance of ilastik and comparison with our model

    Data | Precision (ilastik) | Recall (ilastik) | F1-score (ilastik) | F1-score (alg.)
    Acute slices | 50.60 ± 3.19%, n = 11 | 66.92 ± 4.60%, n = 11 | 56.89 ± 3.36%, n = 11 | 79.63 ± 1.11%, n = 11
    Cryo-fixed n.c. | 63.30 ± 4.52%, n = 7 | 62.62 ± 2.45%, n = 7 | 62.16 ± 2.25%, n = 7 | 80.82 ± 2.38%, n = 7
    Chemically-fixed n.c. | 74.15 ± 3.24%, n = 9 | 71.24 ± 5.97%, n = 9 | 70.48 ± 4.14%, n = 9 | 87.22 ± 1.37%, n = 9
    • Performance of ilastik (precision, recall, and F1-score) on the same images used in Table 3 (dataset test final) and F1-score obtained with our model. Data are presented as mean ± SEM.

    Table 5

    Statistical table

    Data type | Compared groups | Test | Results | df
    F1-score (%); Fig. 2 | Final vs no refin., no cluster, no size thr. | Repeated-measures ANOVA | F = 40.33 | 3

    Data type | Compared groups | Test | Confidence level (95%) | Confidence level (Bonferroni corr.) | df
    F1-score (%); Fig. 2 | Final vs no refin. | Paired t test | 10.44% to 19.36% | 9.18% to 20.62% | 10
    F1-score (%); Fig. 2 | Final vs no cluster | Paired t test | 0.60% to 1.90% | 0.41% to 2.08% | 10
    F1-score (%); Fig. 2 | Final vs no size thr. | Paired t test | 2.34% to 4.15% | 2.08% to 4.41% | 10
    F1-score (%), acute slices; Fig. 3 | Algorithm vs humans | Paired t test | −5.67% to −2.25% | - | 10
    F1-score (%), Cry. fix. n.c.; Fig. 3 | Algorithm vs humans | Paired t test | −16.89% to −4.15% | - | 6
    F1-score (%), Che. fix. n.c.; Fig. 3 | Algorithm vs humans | Paired t test | −8.10% to −3.45% | - | 8
    F1-score (%), acute slices | Algorithm vs ilastik | Paired t test | 16.20% to 29.27% | - | 10
    F1-score (%), Cry. fix. n.c. | Algorithm vs ilastik | Paired t test | 12.42% to 24.89% | - | 6
    F1-score (%), Che. fix. n.c. | Algorithm vs ilastik | Paired t test | 6.46% to 27.01% | - | 8
    Vesicle count; Fig. 6 | Algorithm vs humans | Pearson corr. | 0.7288 to 0.9494 | - | 42
    Vesicle nnd; Fig. 6 | Algorithm vs humans | Pearson corr. | 0.2838 to 0.8309 | - | 42
    Vesicle area; Fig. 6 | Algorithm vs humans | Pearson corr. | 0.4979 to 0.8949 | - | 42
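
The paired t test rows above report confidence intervals for the mean difference, with and without a Bonferroni correction. Below is a generic sketch of how such intervals can be computed with scipy; the helper function and the example values are illustrative, not the authors' analysis script, and whether the adjustment shown matches the article's exact correction is an assumption.

```python
# Paired t test with a confidence interval for the mean difference, plus an
# optional Bonferroni-adjusted interval; a generic sketch with dummy values.
import numpy as np
from scipy import stats

def paired_ci(a, b, conf=0.95, n_comparisons=1):
    diff = np.asarray(a, float) - np.asarray(b, float)
    n = diff.size
    mean, sem = diff.mean(), stats.sem(diff)
    # Bonferroni: split alpha across the number of planned comparisons
    alpha = (1 - conf) / n_comparisons
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    t, p = stats.ttest_rel(a, b)
    return (mean - tcrit * sem, mean + tcrit * sem), t, p, n - 1

# dummy F1-scores (%), final model vs model without refinement (not the article's data)
final = [80.1, 78.9, 81.2, 79.5, 80.3, 77.8, 79.9, 80.8, 78.4, 79.0, 80.6]
no_ref = [65.0, 63.2, 66.1, 64.8, 65.5, 62.9, 64.0, 66.3, 63.5, 64.4, 65.9]
(lo, hi), t, p, df = paired_ci(final, no_ref, n_comparisons=3)
print(f"Bonferroni-adjusted 95% CI: {lo:.2f}% to {hi:.2f}%, t = {t:.2f}, df = {df}")
```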

Extended Data

  • Extended Data Figure 3-1

    Comparison between the synaptic vesicle counts detected by humans and by the algorithm. Number of vesicles detected manually and by the algorithm in micrographs from (A) hMFBs from chemically-fixed acute hippocampal slices, (B) small hippocampal synapses from cryo-fixed cultured neurons, and (C) small hippocampal synapses from chemically-fixed cultured neurons. Download Figure 3-1, TIF file.

  • Extended Data 1

    Codes and README files. Code files used for training the classifiers and for using the GUI:
    • running_analysis.py (executing this code launches the GUI);
    • Gui_vesicle_detection.py (needed to generate the GUI and to conduct image analysis, result visualization, and proof-reading);
    • first_classifier_training.py and second_classifier_training.py (contain the code used to train the first and the refinement classifier and to evaluate their performance on the training and validation datasets);
    • CNNs_GaussianNoiseAdder.py (contains the two CNNs and some lines to add Gaussian noise to the training dataset as a data augmentation strategy).
    Codes used for the analyses:
    • performance_checker.py (evaluates the performance of the algorithm by calculating true positives, false positives, and false negatives using human annotations as ground truth);
    • performance_checker_(ilastik).py (has the same function as performance_checker.py but runs on results from ilastik);
    • density_to_probability_transformer.py (transforms density images from the Cell Counting workflow of ilastik into probability maps and saves them);
    • ROC_AUC_calculator.py (generates the ROC curve and calculates the AUC score from the first vesicle classifier).
    Download Extended Data 1, ZIP file.

  • Extended Data Figure 3-2

    Example of synaptic vesicle detection using ilastik. A, Raw micrograph (left), probability map (middle), and segmentation map (right) of a hMFB from a chemically-fixed acute hippocampal slice. The hMFB was isolated by applying a black mask on the surrounding area. The probability map was obtained by converting the density image produced by the Cell Density Counting workflow. The segmentation map was then obtained using the Object Classification (inputs: raw data, pixel prediction map) workflow. B, On the left, a portion of the micrograph in A with all manually detected vesicles tagged by the white dots. On the right, the same image with all vesicles predicted by ilastik tagged by dots. The correctly detected vesicles (true positives) are represented in white, the wrongly predicted vesicles (false positives) in blue, and the missed vesicles (false negatives) in red. Download Figure 3-2, TIF file.


Keywords

  • automated detection
  • convolutional neural networks
  • image analysis
  • machine learning
  • synaptic vesicle
