Research Article | Methods/New Tools, Novel Tools and Methods

Quantifying Mesoscale Neuroanatomy Using X-Ray Microtomography

Eva L. Dyer, William Gray Roncal, Judy A. Prasad, Hugo L. Fernandes, Doga Gürsoy, Vincent De Andrade, Kamel Fezzaa, Xianghui Xiao, Joshua T. Vogelstein, Chris Jacobsen, Konrad P. Körding and Narayanan Kasthuri
eNeuro 25 September 2017, 4 (5) ENEURO.0195-17.2017; DOI: https://doi.org/10.1523/ENEURO.0195-17.2017
Eva L. Dyer
1Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA 30332

William Gray Roncal
2The Johns Hopkins University Applied Physics Laboratory, Laurel, MD 20723
3Department of Computer Science, The Johns Hopkins University, Baltimore, MD 21218

Judy A. Prasad
4Department of Neurobiology, University of Chicago, Chicago, IL 60637

Hugo L. Fernandes
5Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL 60611
6Sensory Motor Performance Program, Rehabilitation Institute of Chicago, Chicago, IL 60611

Doga Gürsoy
7Advanced Photon Source, Argonne National Laboratory, Lemont, IL 60439

Vincent De Andrade
7Advanced Photon Source, Argonne National Laboratory, Lemont, IL 60439

Kamel Fezzaa
7Advanced Photon Source, Argonne National Laboratory, Lemont, IL 60439

Xianghui Xiao
7Advanced Photon Source, Argonne National Laboratory, Lemont, IL 60439

Joshua T. Vogelstein
8Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21205
9Institute of Computational Medicine, The Johns Hopkins University, Baltimore, MD 21218

Chris Jacobsen
7Advanced Photon Source, Argonne National Laboratory, Lemont, IL 60439
10Department of Physics and Astronomy, Northwestern University, Chicago, IL 60208

Konrad P. Körding
11Department of Biomedical Engineering, University of Pennsylvania, Philadelphia, PA 19104

Narayanan Kasthuri
4Department of Neurobiology, University of Chicago, Chicago, IL 60637
12Center for Nanoscale Materials, Argonne National Laboratory, Lemont, IL 60439
Figures & Data

Figures

Fig. 1.

    Synchrotron X-ray tomography of a millimeter-scale brain sample. A schematic illustration of the imaging setup is displayed along the bottom: from left to right, the synchrotron X-ray source interacting with an embedded sample of somatosensory cortex as it is rotated during the collection of multi-angle projections. To collect these projection data, X-rays are passed through a scintillator crystal that converts their energy into visible light photons. These photons are then focused onto a camera sensor, and a sinogram is generated from the data collected by a row of sensor pixels. The three panels above show the neocortical sample preparation (a), the location of the mounted sample within the instrument (b), and the conversion and focusing of X-rays into light photons (c).
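    The acquisition geometry in this schematic can be sketched numerically. The following is a toy illustration only, not the beamline's acquisition code: a 2D phantom is rotated and its columns summed to produce one projection per angle, building up a sinogram (the `sinogram` function and the phantom are hypothetical names introduced here).

```python
import numpy as np
from scipy.ndimage import rotate

def sinogram(image, angles_deg):
    """Parallel-beam projections: each row is the column-sum of the
    rotated image, i.e. the signal seen by one row of detector pixels
    as the sample turns."""
    rows = []
    for theta in angles_deg:
        rotated = rotate(image, theta, reshape=False, order=1)
        rows.append(rotated.sum(axis=0))
    return np.array(rows)

# Toy phantom: a bright square "cell body" in an empty field.
phantom = np.zeros((64, 64))
phantom[20:28, 30:38] = 1.0

sino = sinogram(phantom, np.linspace(0, 180, 90, endpoint=False))
print(sino.shape)  # (90, 64): 90 projection angles x 64 detector pixels
```

    Because bilinear rotation approximately conserves mass, every projection sums to roughly the total intensity of the phantom, which is a quick sanity check on real sinograms as well.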

Fig. 2.

    Synchrotron X-ray imaging provides micron resolution within a neocortical volume. a, Microscopic visualization of cells, blood vessels, and dendrites within an X-ray–imaged volume of somatosensory cortex. Each panel shows one of three perspectives within the xyz coordinate framework (panels to the right are 11.5 µm wide; the large panel to the left is 100 µm wide). b, Digital rendering of a manually reconstructed subset of blood vessels and cell bodies (nuclei highlighted) selected from within the neocortical volume. c, Photomicrographs of a subvolume within this sample, using µCT and EM to identify overlapping regions. These images were collected at three different pixel sizes (0.65 µm, 100 nm, 3 nm). The left panel shows a subset of a single virtual slice from an X-ray tomogram that spans the neocortical volume (0.65 µm pixel size). Outlined in blue to its right is a subset of the volume (within a) that highlights a configuration of three cell bodies and distinct proximal microvessels. This sample was subsequently serially sectioned and imaged in a scanning electron microscope. These cells are located in the EM dataset (inset), the ultrastructure of which is well preserved even after µCT (right, in red).

Fig. 3.

    Image processing and computer vision pipeline for segmentation and cell detection. A block diagram displays the entire X-BRAIN workflow. Sparsely labeled training data are integrated into the segmentation module (Step 1) to train a random forest classifier using ilastik. Densely annotated training data are used for hyperparameter optimization to tune the cell detection algorithm in Step 2. The final map of detected cells is displayed at the bottom of Step 2, with detected cells overlaid on the original X-ray image. Solid arrows, inputs into a module; dashed arrows, outputs; filled circle terminal, outputs that are stored in the spatial database.
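    The greedy cell finder of Step 2 can be illustrated with a short sketch. This is an assumed simplification, not the actual X-BRAIN implementation: the highest-probability voxel is taken as a cell centroid, a ball around it is suppressed so the same cell is not found twice, and the loop stops once the best remaining probability falls below a threshold (the stopping criterion swept in Fig. 5a). All names here are hypothetical.

```python
import numpy as np

def greedy_cell_finder(prob, radius=3, stop_thresh=0.5, max_cells=1000):
    """Greedy peak-picking on a cell probability map (3D array in [0, 1])."""
    prob = prob.copy()
    zz, yy, xx = np.indices(prob.shape)
    centroids = []
    for _ in range(max_cells):
        idx = np.unravel_index(np.argmax(prob), prob.shape)
        if prob[idx] < stop_thresh:       # stopping criterion
            break
        centroids.append(tuple(int(i) for i in idx))
        # Suppress a ball around the detection so it is not found again.
        ball = ((zz - idx[0]) ** 2 + (yy - idx[1]) ** 2
                + (xx - idx[2]) ** 2) <= radius ** 2
        prob[ball] = 0.0
    return centroids

# Toy probability map with two well-separated "cells".
p = np.zeros((20, 20, 20))
p[5, 5, 5] = 0.9
p[15, 15, 15] = 0.8
print(greedy_cell_finder(p))  # [(5, 5, 5), (15, 15, 15)]
```

    Raising `stop_thresh` trades recall for precision, which is exactly the operating-point sweep shown in Fig. 5a.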

Fig. 4.

    Visualization of X-ray image data, overlaid probability maps, and final segmentations. On the left, an X-ray micrograph. On the right, clockwise from upper left: vessel probabilities, cell probabilities, cell probabilities and segmentations, and the segmentations of cells and vessels.

Fig. 5.

    Automated methods for segmentation and cell detection reveal dense mesoscale brain maps. a, Performance of the vessel segmentation and cell detection methods as hyperparameters that affect each method are varied. To optimize the vessel segmentation method, the f2 score, which emphasizes recall, is computed for multiple operating points (each curve represents a fixed parameter set with a varying vessel segmentation threshold). To measure cell detection performance, the f1 score, which balances precision and recall, is calculated for multiple operating points as the stopping criterion of the greedy cell finder algorithm is increased (x axis). Highlighted curves within each plot and the accompanying star indicate optimal hyperparameter performance. b, Results of the cell detection and vessel segmentation algorithms on manually annotated test datasets. The training volumes V1 (195 × 195 × 65 µm) and V2 (130 × 130 × 65 µm) and the test volume V3 (130 × 130 × 130 µm) are visualized within the entire volume of X-ray–imaged tissue. c, Training volumes V1 and V2 and test volume V3 visualized individually. In each manually annotated subvolume, the results of X-BRAIN are overlaid, based on the best operating point selected by the parameter optimization approach in a. The precision (p) and recall (r) values for each subvolume are also annotated.
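    The f1 and f2 metrics above are both instances of the standard F-beta score, computable directly from precision and recall; a minimal sketch (the function name is ours):

```python
def f_beta(precision, recall, beta):
    """F-beta score: beta = 1 balances precision and recall (the f1 used
    for cell detection); beta = 2 weights recall more heavily (the f2
    used for vessel segmentation)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# High recall at modest precision is rewarded more by f2 than by f1.
p, r = 0.6, 0.9
print(round(f_beta(p, r, 1), 3))  # 0.72
print(round(f_beta(p, r, 2), 3))  # 0.818
```

    Sweeping an operating point (e.g. a segmentation threshold) and keeping the parameter set with the highest such score is the optimization pictured in panel a.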

Fig. 6.

    Visualization of 3D reconstructions of the neural architecture within a millimeter-scale neocortical sample. a, Renderings of the vessel segmentation algorithm output across the depth of the entire analyzed sample. b, Output of the cell detection algorithm integrated with the vasculature renderings from a; the hatched inset shows the same subset of both neurons and vessels.

Fig. 7.

    Spatial statistics of X-ray volumes reveal layering and a spatially diverse distribution of cell bodies. Top right, histograms of the estimated cell density over the extent of the entire sample of mouse cortex, of the distances between the center of each cell and its nearest neighbor (cell-to-cell distances), and of the distances between the center of each cell and the closest vessel voxel (cell-to-vessel distances). Top left, 3D rendering of the detected cells and vessels in the entire sample, with a manually labeled cube (V1) highlighted in blue. Bottom left, the 3D structure of this visualization is confirmed by the maps to the right: cell probability (red indicating high probability), detected cells (each detected cell displayed in a different color), and density estimates (bright yellow indicating high density). These results confirm that the 3D structure of the sample is preserved within our density estimate.
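    The cell-to-cell distance histogram can be reproduced from detected centroids with a nearest-neighbor query. A minimal sketch using SciPy's k-d tree (the function name is ours; units follow the input coordinates):

```python
import numpy as np
from scipy.spatial import cKDTree

def cell_to_cell_distances(centroids):
    """Distance from each cell centroid to its nearest neighboring
    centroid, as in the cell-to-cell histogram."""
    tree = cKDTree(centroids)
    # k=2 because the nearest point to each centroid is itself.
    dists, _ = tree.query(centroids, k=2)
    return dists[:, 1]

# Toy centroids on a line (coordinates in µm).
pts = np.array([[0.0, 0, 0], [10, 0, 0], [20, 0, 0], [45, 0, 0]])
print(cell_to_cell_distances(pts))  # [10. 10. 10. 25.]
```

    Cell-to-vessel distances follow the same pattern, querying cell centroids against a tree built from vessel voxel coordinates instead.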

Fig. 8.

    Axonal reconstructions obtained through manual and automated methods yield high agreement. Segmented outputs are overlaid onto X-ray neocortical images (xy, xz, and yz planes in the upper panels) and reconstructed in the lower panels for the proposed automated segmentation method (a) and manual annotations (b).

Tables

Table 1.

    Statistics of cell counts in manually and automatically labeled volumes

    | Annotation  | Cells, n | Area (mean, median) | Volume (% of mm³) | Density (10⁵/mm³) |
    |-------------|----------|---------------------|-------------------|-------------------|
    | V0-A0       | 97       | (2136, 2060)        | 0.06              | 1.63              |
    | V0-A1       | 96       | (1489, 1499)        | 0.06              | 1.28              |
    | V0-Xbrain   | 94       | (1983, 2123)        | 0.06              | 1.57              |
    | V1-A0       | 321      | (1997, 2035)        | 2.5               | 1.28              |
    | V1-Xbrain   | 302      | (1983, 1963)        | 2.5               | 1.21              |
    | V2-A2       | 103      | (1416, 1301)        | 0.06              | 1.72              |
    | V2-Xbrain   | 112      | (1918, 1963)        | 0.06              | 1.87              |
    | V3-A3       | 281      | NA                  | 0.2               | 1.41              |
    | V3-Xbrain   | 240      | (1419, 1385)        | 0.2               | 1.20              |
    | Vtot-Xbrain | 48,689   | (1454, 138)         | 42                | 1.02              |
    The first column of the table displays the name of the volume (V0, V1, V2, V3) and the annotator: a manual annotator (A0, A1, A2, A3) or the automated annotation (X-BRAIN). The second and third columns report the number of detected cells and the area (mean, median) of annotated cell bodies (number of labeled voxels). The fourth column reports the volume (percentage of a cubic millimeter) of each reported subvolume, and the fifth column its cell density. Note that V0, V1, and V2 are manually annotated volumes used to train and tune the automated methods; V3 is a held-out test set whose location was unknown while training and tuning the parameters of the algorithm. NA, not applicable.
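    As a consistency check on Table 1, the density column follows from the count and volume columns once the units are unpacked; a small helper (the name is ours):

```python
def density_1e5_per_mm3(n_cells, volume_pct_of_mm3):
    """Cell density in units of 10^5 cells/mm^3, given a raw count and a
    volume expressed as a percentage of one cubic millimeter (the units
    used in Table 1)."""
    volume_mm3 = volume_pct_of_mm3 / 100.0
    return n_cells / volume_mm3 / 1e5

# V0-A0: 97 cells in 0.06% of a mm^3 gives roughly 1.6 x 10^5 cells/mm^3,
# in line with the 1.63 reported in the table.
print(round(density_1e5_per_mm3(97, 0.06), 2))  # 1.62
```

    The small mismatch against the tabulated 1.63 is expected, since the table's volume column is itself rounded.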

Movies

Movie 1.

    Myelinated axons, cells, and blood vessels in a small subvolume of neocortex. Each frame of the movie represents a virtual slice through the unsectioned volume, where the pixel size is 0.65 µm isotropic. To create each false-color image in the 3D stack, the raw probability maps for cells and blood vessels are placed in the image's green and blue channels, respectively. To visualize the myelinated axons of the sample, the probability map is thresholded, a small fraction of components is removed, and the remainder is added to the red channel of the image (see Fig. 8a for a 3D rendering of these axonal segments).
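    The false-color composition described above is a per-frame channel stacking; a minimal numpy sketch (names are ours, not from the paper's code):

```python
import numpy as np

def false_color_frame(cell_prob, vessel_prob, axon_prob, axon_thresh=0.5):
    """Compose one false-color frame from per-pixel probability maps.

    Cell and vessel probabilities fill the green and blue channels
    directly; the axon map is thresholded before going into red, as
    described for Movie 1. Inputs are 2D arrays with values in [0, 1].
    """
    red = (axon_prob > axon_thresh).astype(float)
    return np.stack([red, cell_prob, vessel_prob], axis=-1)

frame = false_color_frame(np.full((4, 4), 0.7),   # cells -> green
                          np.full((4, 4), 0.2),   # vessels -> blue
                          np.full((4, 4), 0.6))   # axons -> red (thresholded)
print(frame.shape)  # (4, 4, 3)
```

    Repeating this for every virtual slice yields the 3D false-color stack shown in the movie.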

eNeuro, Vol. 4, Issue 5, September/October 2017

Keywords

  • Automated Segmentation
  • cell counting
  • electron microscopy
  • neocortex
  • neuroanatomy
  • X-Ray Microtomography

Responses to this article

No eLetters have been published for this article.

Copyright © 2021 by the Society for Neuroscience.
eNeuro eISSN: 2373-2822
