Research Article: New Research, Novel Tools and Methods

Generalizing the Enhanced-Deep-Super-Resolution Neural Network to Brain MR Images: A Retrospective Study on the Cam-CAN Dataset

Cristiana Fiscone, Nico Curti, Mattia Ceccarelli, Daniel Remondini, Claudia Testa, Raffaele Lodi, Caterina Tonon, David Neil Manners and Gastone Castellani
eNeuro 10 May 2024, 11 (5) ENEURO.0458-22.2023; https://doi.org/10.1523/ENEURO.0458-22.2023
Author affiliations:

Cristiana Fiscone: 1 Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy
Nico Curti: 2 Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy
Mattia Ceccarelli: 3 Department of Agricultural and Food Sciences, University of Bologna, Bologna 40127, Italy
Daniel Remondini: 2 Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy; 4 INFN, Bologna 40127, Italy
Claudia Testa: 2 Department of Physics and Astronomy, University of Bologna, Bologna 40126, Italy; 4 INFN, Bologna 40127, Italy
Raffaele Lodi: 1 Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy; 5 Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
Caterina Tonon: 1 Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna 40126, Italy; 5 Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy
David Neil Manners: 5 Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna 40139, Italy; 6 Department for Life Quality Studies, University of Bologna, Rimini 47921, Italy
Gastone Castellani: 7 Department of Medical and Surgical Sciences, University of Bologna, Bologna 40138, Italy

Abstract

The Enhanced-Deep-Super-Resolution (EDSR) model is a state-of-the-art convolutional neural network for improving image spatial resolution. It was previously trained on general-purpose pictures and, in this work, tested on biomedical magnetic resonance (MR) images, comparing the network outcomes with traditional up-sampling techniques. We explored possible changes in the model response when different MR sequences were analyzed. T1w and T2w MR brain images of 70 healthy human subjects (F:M, 40:30) from the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) repository were down-sampled and then up-sampled using the EDSR model and BiCubic (BC) interpolation. Several reference metrics were used to quantitatively assess the performance of the up-sampling operations (RMSE, pSNR, SSIM, and HFEN). Two-dimensional and three-dimensional reconstructions were evaluated, and different brain tissues were analyzed individually. The EDSR model was superior to BC interpolation on the selected metrics for both two- and three-dimensional reconstructions. The reference metrics showed higher quality of EDSR over BC reconstructions for all the analyzed images, with a significant difference on all criteria for T1w images and on the perception-based SSIM and HFEN for T2w images. The per-tissue analysis highlights differences in EDSR performance related to gray-level values, showing a relative lack of outperformance in reconstructing hyperintense areas. The EDSR model, trained on general-purpose images, reconstructs MR T1w and T2w images better than BC, without any retraining or fine-tuning. These results highlight the excellent generalization ability of the network and suggest possible applications to other MR measurements.

  • brain
  • deep learning
  • image processing
  • MRI
  • super-resolution

Significance Statement

Super-resolution applications in biomedical images may help reduce acquisition scan time while improving the quality of the exam. Neural networks have been shown to work better than traditional up-sampling techniques, even though ad hoc training experiments usually need to be performed for each specific kind of data. In this work, we took a model previously trained on general-purpose images and applied it directly to human brain magnetic resonance images; we verified its ability to reconstruct a new kind of image, comparing the results with traditional up-sampling techniques. Our analysis highlights the excellent generalization capabilities of the model over the images tested, without the need for specific retraining, suggesting that such results might be reproduced on images from other acquisition systems.

Introduction

Super-Resolution (SR) algorithms aim to enhance the spatial resolution of low-resolution (LR) images, improving the detection of details and fine structures in the corresponding high-resolution (HR) ones (Yang et al., 2019). The up-sampling process is an ill-posed problem by definition: starting from a smaller amount of information, the algorithms aim to recover the missing detail in the most reliable way, while accounting for the several sources of noise that can affect the LR images, related to hardware (e.g., acquisition sensors, acquisition time) and/or software (e.g., compression artifacts, stochastic noise) (Umer et al., 2020). Therefore, multiple putative associations between LR and HR exist, and the efficiency of up-sampling algorithms must be measured according to their ability to preserve original information or recover from degradations.

Different kinds of SR models have been developed over the years. Deep learning (DL) SR techniques were proposed by Dong et al. (2014), starting with convolutional neural networks (CNNs), a class of artificial neural networks suitable for many visual imagery applications; they have been widely used in medical imaging tasks, including classification (Eroglu et al., 2022) and tumor detection (Çinar and Yildirim, 2020). The residual neural network (ResNet) (He et al., 2016; Dong and Inoue, 2021) was then proposed for optimization and performance improvement: it refers to a class of models in which specific links, called residual connections, are added to a CNN structure, retaining information across different residual structural blocks and helping to overcome performance degradation problems. Kim et al. (2016) first introduced residual learning techniques to SR tasks, and they are now widely used for image restoration. In this work, we used the Enhanced-Deep-Super-Resolution (EDSR) model, which is based on those techniques.

Deep SR neural networks were developed in the computer vision and image processing research fields, and to date they have been used in multiple real-world areas, including medical imaging (Zeng et al., 2018; Gupta et al., 2020; Sui et al., 2021; Zhang et al., 2021). In this work, we focus on the application of the EDSR model to biomedical images, specifically to brain data acquired in magnetic resonance (MR) exams.

SR algorithms have already been implemented in clinical practice in some centers. They can bring several benefits, mainly related to improving the trade-off between speed and accuracy of the exam. A long scan time is required for high-resolution image acquisition. This impacts both the cost of the exam and patient discomfort, which in turn promotes movement artifacts in the reconstructed medical images, hampering lesion detection, disease diagnosis, and radiomic feature extraction. In MR imaging (MRI) applications, it also leads to a reduced signal-to-noise ratio and a high specific absorption rate (Sui et al., 2021). Therefore, enhancing the spatial resolution in postprocessing may help to improve the potential of medical imaging (Zeng et al., 2018; Gupta et al., 2020).

It is not trivial to apply DL SR models, or DL models in general, to biomedical images: purpose-built architectures need to be tailored to a specific data type; the lack of data may be an issue, so data augmentation techniques (Chlap et al., 2021) are often employed to avoid overfitting; finally, DL algorithms are not universally applicable, in the sense that they are not guaranteed to generalize to data differing from those used for training, for which retraining or even training from scratch is usually required.

In this work, we apply the 2D EDSR model, with an up-sampling factor of 2×, to MR brain images of healthy human subjects acquired with different sequences. We use it to evaluate 2D multislice super-resolved images and the 3D MRI reconstructions obtained from them. The major novelty of this work lies in applying EDSR to new data without any retraining stage: usually, the data used to train and test a DL model are of the same kind, and poor generalization over different types of data is an issue in all DL applications; here, we wanted to test the inherent generalization ability of EDSR, to establish whether it could reconstruct a kind of data never seen by the model during training. If so, the aim was to verify whether the results achieved were superior to those of the traditional bicubic up-sampling method. As a secondary aim, we propose an original analysis of the explainability of the model, describing the EDSR behavior with respect to image pixel intensity. The manuscript is organized as follows: in the Materials and Methods, we provide an overview of the EDSR model used in this work and a description of the dataset and of the pipeline workflow developed for the analysis; we then describe the Results obtained by our pipeline for both 2D and 3D MRI, comparing the reconstruction performance of the EDSR model with the standard up-sampling algorithm; we conclude with the pros and cons of our method and discuss the possibility of applying it as a postprocessing technique to speed up MR exam acquisition and improve methodological applications such as radiomic feature extraction and artificial intelligence pipelines.

Materials and Methods

Ethics statement

Ethical approval for the Cam-CAN study was obtained from the Cambridgeshire 2 (now East of England-Cambridge Central) Research Ethics Committee (reference: 10/H0308/50).

EDSR-2x model

The EDSR model is a deep 2D CNN based on residual learning techniques. It was developed by the SNU Computer Vision Laboratory at Seoul National University for the example-based single-image SR NTIRE Challenge 2017, which it won (Agustsson and Timofte, 2017; Lim et al., 2017; Timofte et al., 2017). EDSR was trained with end-to-end DL techniques, achieving the best reconstruction performance. Currently, it remains one of the CNNs with the best results (Bashir et al., 2021; Zhang et al., 2021), including for processing biomedical images, which are the focus of this work. The single-scale architecture of EDSR was optimized starting from SRResNet (Ledig et al., 2017), itself derived from the original ResNet architecture: batch normalization layers, which are usually inserted to reduce the risk of overfitting and to guarantee fast convergence, were removed, improving convergence time, flexibility, and performance. The model was trained on the DIV2K dataset, which consists of 1,000 DIVerse 2K RGB general-purpose images collected from dozens of sites, with a large diversity of content (people, handmade objects, environments, flora and fauna, etc.). No biomedical images were used in the training and test stages of the contest; in this work, we test the model on MR brain images, without retraining or fine-tuning. In the challenge, high-resolution images and the corresponding low-resolution images for down-sampling factors of two, three, and four were provided for the training (800 images), test (100 images), and validation (100 images) stages.
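For readers who wish to reproduce this setup, the snippet below shows one way to run a pretrained 2× EDSR on a 2D slice through OpenCV's dnn_superres module. This is a minimal sketch, not the authors' published code: it assumes the opencv-contrib-python build (which ships dnn_superres) and a pretrained weight file such as EDSR_x2.pb downloaded separately; file and variable names are illustrative.

```python
import cv2

# Minimal sketch (assumptions: opencv-contrib-python is installed and a pretrained
# EDSR_x2.pb weight file is available locally; this is not the authors' pipeline code).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x2.pb")      # pretrained 2x EDSR weights (hypothetical path)
sr.setModel("edsr", 2)          # select the EDSR architecture with scale factor 2

lr_slice = cv2.imread("lr_slice.png")   # a low-resolution 2D slice (hypothetical file,
                                        # 8-bit; MR data would need prior conversion)
hr_slice = sr.upsample(lr_slice)        # super-resolved slice, 2x larger in-plane
```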

Dataset description

Data were provided by the Cambridge Centre for Ageing and Neuroscience (Cam-CAN) (Shafto et al., 2014; Taylor et al., 2017). The repository, based on a large-scale collaborative research project at the University of Cambridge, contains multimodal data, including structural and functional (resting and task-based) MR images, magnetoencephalography data, and several cognitive tests, for nearly 700 healthy subjects, 100 per decade from 18 to 88 years old. We considered brain images of 70 healthy subjects (10 per decade, from 18 to 85 years old), maintaining the same age structure as the original collection so as to be representative of the whole dataset. Subjects’ characteristics are summarized in Table 1.

Table 1.

Characteristics of the analyzed samples selected from Cam-CAN dataset (Shafto et al., 2014; Taylor et al., 2017)

MRI protocol

High-resolution MR brain T1w and T2w images, acquired using a 3 T Siemens Magnetom Trio, were selected. The MR scan parameters used for the acquisition are as follows: (1) for T1w scans, sagittal 3D MPRAGE, TR/TE/TI = 2,250/2.99/900 ms; FA = 9°; FOV = 256 × 256 × 192 mm3; 1 × 1 × 1 mm3 spatial resolution; GRAPPA = 2; TA = 4 min 32 s; and (2) for T2w scans, sagittal 3D SPACE, TR/TE/TI = 2,800/408/900 ms; FOV = 256 × 256 × 192 mm3; 1 × 1 × 1 mm3 spatial resolution; GRAPPA = 2; TA = 4 min 30 s. Original images were provided in NIfTI format.

Image processing pipeline

To generate LR maps, 2D slices from the original images were convolved with a Gaussian filter (3 × 3 kernel, unit standard deviation) and then down-sampled by a factor of two with BiCubic (BC) interpolation. This degradation process yields MR images with properties close to those produced by LR MR image acquisition (Zeng et al., 2018). This procedure is commonly used in SR studies to obtain LR–HR image pairs for model validation. In an ideal situation, the same subject, perfectly still, would be scanned at both LR and HR within the same FOV; this is unrealistic, and it would require registration operations, which are better avoided when interpolation methods are being tested. Moreover, acquiring both images would require roughly twice the scan time.
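A minimal sketch of this degradation step is given below, using OpenCV calls that match the description above (3 × 3 Gaussian kernel, unit sigma, 2× bicubic down-sampling), together with the bicubic up-sampling used as the baseline in the next paragraph. The function names are ours and the exact parameters of the authors' implementation are not reported, so treat this as an assumption-laden outline rather than the published pipeline.

```python
import cv2

def degrade_slice(hr_slice):
    """Simulate an LR slice from an HR one: Gaussian blur (3x3 kernel, sigma = 1),
    then 2x down-sampling with bicubic interpolation. Sketch only."""
    blurred = cv2.GaussianBlur(hr_slice, ksize=(3, 3), sigmaX=1.0)
    return cv2.resize(blurred, None, fx=0.5, fy=0.5, interpolation=cv2.INTER_CUBIC)

def bicubic_upsample(lr_slice):
    """Baseline up-sampling used for comparison: 2x bicubic interpolation."""
    return cv2.resize(lr_slice, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
```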

The EDSR model and BC interpolation were then used to up-sample the LR images. This pipeline, developed in Python 3.7.6 [libraries: OpenCV 4.5.2 (Bradski, 2000), NumPy 1.20.2, NiBabel 3.2.1], was implemented so as to have a gold standard (the original HR image) and a baseline (BC up-sampling, a commonly used procedure) against which to assess the EDSR reconstructions. After the down-sampling, images showed more pronounced ringing (Gibbs) artifacts. During preliminary stages of the work, a Gibbs-artifact removal tool (Kellner et al., 2016) was tested on the LR images before applying the two up-sampling methods, but it did not influence the outcomes, so we excluded this step from the pipeline.

EDSR processes 2D images, so we separately analyzed the slices extracted along the three principal orthogonal planes (sagittal, axial, and coronal), yielding 192, 256, and 256 processed slices, respectively. The result was a volume composed of 2D super-resolved slices for each direction; we refer to these volumes as 2D multislice reconstructions. Image dimensions for each direction are shown in Table 2. The 3D reconstructions were built by averaging the 2D multislice reconstructions from the three planes (a sketch of this step is given below). The pipeline workflow is illustrated in Figure 1 and was applied to both T1w and T2w images. An example of EDSR and BC reconstructions is shown in Figure 2.
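The following sketch outlines how slice-wise super-resolution along each axis can be combined into a 3D reconstruction by averaging, as described above. It is an illustrative outline under our own naming (multislice_reconstruction, reconstruct_3d, and the sr_slice_fn callable are not from the published code), and it assumes that the slice-level degradation and up-sampling restore each slice to its original in-plane size.

```python
import numpy as np

def multislice_reconstruction(volume, sr_slice_fn, axis):
    """Apply a 2D degrade-and-super-resolve function slice by slice along one axis.
    `sr_slice_fn` is a hypothetical callable (e.g., EDSR or bicubic up-sampling of a
    degraded slice) returning a slice with the original in-plane size."""
    slices = np.moveaxis(volume, axis, 0)
    recon = np.stack([sr_slice_fn(s) for s in slices], axis=0)
    return np.moveaxis(recon, 0, axis)

def reconstruct_3d(volume, sr_slice_fn):
    """Average the sagittal, axial, and coronal 2D multislice reconstructions
    into a single 3D map (all three share the original volume shape)."""
    recons = [multislice_reconstruction(volume, sr_slice_fn, axis) for axis in range(3)]
    return np.mean(recons, axis=0)
```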

Figure 1.

Pipeline applied to the original HR T1w (shown here as an example) and T2w images to obtain 3D reconstructions. The EDSR model is a 2D CNN; thus the sagittal (192 slices), axial (256 slices), and coronal (256 slices) planes were processed separately, and the three resulting 2D multislice images were averaged to obtain the final map. The detailed pipeline for 2D SR reconstructions is illustrated in the gray box at the bottom of the panel: the original 2D images were processed with a Gaussian filter and then down-sampled with BC interpolation; the resulting LR images were up-sampled using the EDSR model and BC interpolation. SR, super-resolution; EDSR, enhanced deep super-resolution; BC, BiCubic; MRI, magnetic resonance imaging.

Figure 2.

Original HR, BC, and EDSR T1w (panel a) and T2w (panel b) images of a representative individual (24 years/M healthy subject) from the Cam-CAN database (red boxes, zoom-in on brain cortex; blue boxes, zoom-in on brain ventricles; HR, high resolution; EDSR, enhanced deep super-resolution; BC, BiCubic).

Table 2.

Matrix dimensions and spatial resolution of T1w and T2w original HR, LR, and up-sampled HR images, obtained through 2D 2×-EDSR and BC interpolation

Image quality assessment

The images involved in the current work were acquired for radiological studies, aiming to maximize human readability. Therefore, human perception plays an important role in assessing image quality and the similarity of an image to a reference standard. Modeling and quantifying perceived quality is challenging, and several measures have been proposed in the literature (Bull and Zhang, 2021). In this work we quantified the quality of EDSR and BC images as their pixel-level fidelity to the ground truth, using standard metrics. Denoting by N the number of pixels, G the gold-standard image (original HR), and I the up-sampled image (EDSR or BC), we measured the following:

  1. Root mean square error (RMSE) (Agustsson and Timofte, 2017):
     $$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}(G_i - I_i)^2}{N}} = \sqrt{\mathrm{MSE}}.$$

  2. Peak signal-to-noise ratio (pSNR) (Timofte et al., 2017):
     $$\mathrm{pSNR} = 10 \cdot \log_{10}\!\left(\frac{\mathrm{MAX}_G^2}{\mathrm{MSE}}\right).$$

The RMSE and pSNR are standard image quality metrics for the absolute error between the obtained image and the reference one. Differences in intensity values alone are not enough to quantify the perceptual distortion experienced by human vision, so the following perception-based metrics were also used in this study to provide a more comprehensive analysis:

  3. Structural similarity index (SSIM) (Wang et al., 2004): one of the most promising perception-based metrics. It considers image quality degradation as a perceived change in structural information. It is a full-reference metric based on visible structures in the image, extracting three features (luminance l, contrast c, and structure s) and combining them as
     $$\mathrm{SSIM} = l \cdot c \cdot s = \frac{2\mu_G\mu_I + c_1}{\mu_G^2 + \mu_I^2 + c_1} \cdot \frac{2\sigma_G\sigma_I + c_2}{\sigma_G^2 + \sigma_I^2 + c_2} \cdot \frac{\sigma_{GI} + c_2/2}{\sigma_G\sigma_I + c_2/2},$$
     where $\mu_G$ is the average of G, $\mu_I$ is the average of I, $\sigma_G^2$ is the variance of G, $\sigma_I^2$ is the variance of I, $\sigma_{GI}$ is the covariance of G and I, $c_1 = (k_1 L)^2$, $c_2 = (k_2 L)^2$, L is the dynamic range of the pixel values, and $k_1 = 0.01$ and $k_2 = 0.03$ by default.

  4. High frequency error norm (HFEN) (Sun et al., 2019): quantifies the quality of reconstruction of edges and fine features. It is defined as the L2 norm of the difference between G and I after processing with a rotationally symmetric Laplacian of Gaussian (LoG) filter of size 15 × 15 and standard deviation 1.5, which captures the high-frequency information:
     $$\mathrm{HFEN} = \lVert \mathrm{LoG}(G) - \mathrm{LoG}(I) \rVert_2.$$

Low RMSE, high pSNR, high SSIM, and low HFEN indicate a high-quality reconstruction. These metrics were evaluated with Python 3.7.6 [libraries: Scikit-image 0.16.2 (Van der Walt et al., 2014), SciPy 1.5.3 (Virtanen et al., 2020), NumPy 1.20.2] for the EDSR and BC reconstructed images of each subject and each sequence, using the original HR images as ground truth, over a region from which the scalp had been excluded using BET [Brain Extraction Tool (Smith, 2002), FSL 6.0.4]. For 2D multislice reconstructions, the metrics were evaluated over the 2D slices and then averaged:
$$\langle\mu\rangle_{\mathrm{plane}} = \frac{1}{N_{\mathrm{plane}}}\sum_{j=1}^{N_{\mathrm{plane}}} \mu_j,$$
where plane = sagittal, axial, or coronal; μ = RMSE, pSNR, SSIM, or HFEN; and $N_{\mathrm{plane}}$ = 192, 256, and 256, respectively. For 3D reconstructions, the similarity parameters were evaluated directly over the three-dimensional matrices.
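A minimal sketch of how these four metrics can be computed with the libraries cited above is shown below. The helper name and the data_range default are our assumptions; the LoG filter is approximated with scipy.ndimage.gaussian_laplace, choosing the truncate parameter so that the kernel spans 15 × 15 pixels at sigma = 1.5, and the BET brain masking is assumed to have been applied beforehand.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

def quality_metrics(G, I, data_range=255.0):
    """Reference metrics between a gold-standard image G and a reconstruction I.
    Illustrative helper, not the authors' published code."""
    mse = mean_squared_error(G, I)
    rmse = np.sqrt(mse)
    psnr = peak_signal_noise_ratio(G, I, data_range=data_range)
    ssim = structural_similarity(G, I, data_range=data_range)
    # HFEN: L2 norm of the difference after a Laplacian-of-Gaussian filter
    # (sigma = 1.5; truncate chosen so the kernel radius is 7 pixels, i.e., 15x15).
    log_G = gaussian_laplace(G.astype(float), sigma=1.5, truncate=7 / 1.5)
    log_I = gaussian_laplace(I.astype(float), sigma=1.5, truncate=7 / 1.5)
    hfen = np.linalg.norm(log_G - log_I)
    return {"RMSE": rmse, "pSNR": psnr, "SSIM": ssim, "HFEN": hfen}
```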

Comparison of up-sampling methods

The image quality metrics of EDSR and BC images were compared as follows:

  1. Considering the entire brain in 2D multislice reconstructions. Sagittal, axial, and coronal planes were considered separately, and differences between the three directions were evaluated for both up-sampling methods, EDSR and BC.

  2. Considering the entire brain in 3D reconstructions.

  3. Considering different brain tissues. The MRtrix3 (Tournier et al., 2019) tool 5ttgen, based on a FreeSurfer (Fischl, 2012) parcellation image map, was used to segment the original T1w images into different tissues: gray matter (GM), divided into cortical GM (CGM) and deep GM nuclei (DGM), white matter (WM), and cerebrospinal fluid (CSF). An example of the appearance of CGM, DGM, WM, and CSF in T1w and T2w images is shown in Figure 3. The up-sampled images from both T1w and T2w exams were masked using these segmentations to analyze each tissue separately (see the sketch after this list); DGM and CGM were considered together to study the whole GM. 3D reconstructions were considered.

  4. Considering the entire brain after inverting the gray intensity histograms. The original data included images with intensities ranging from 0 to 255. The histogram of each original image (T1w and T2w, for the 70 selected subjects) was inverted within the same range, producing new images with inverted intensities. Human perception of hyper- and hypointense regions obviously changes after this operation. We aimed to test whether this would influence the up-sampling performed by the EDSR model and the quantitative evaluation of its reconstructions. Therefore, after histogram inversion, the new images were processed and the statistical analysis was run again as described above. 3D reconstructions were considered.
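The sketch below illustrates the tissue-wise masking and the histogram inversion used in comparisons (3) and (4). File names, function names, and the binarization threshold are illustrative assumptions; the tissue masks are assumed to come from the 5ttgen/FreeSurfer segmentation described above, resampled to the image space.

```python
import nibabel as nib
import numpy as np

def mask_tissue(image_path, tissue_mask_path):
    """Restrict a volume (reconstructed or original HR) to one tissue class by
    zeroing voxels outside the mask. Sketch only; paths are hypothetical."""
    img = nib.load(image_path)
    mask = nib.load(tissue_mask_path).get_fdata() > 0.5  # binarize the tissue map
    masked = img.get_fdata() * mask
    return nib.Nifti1Image(masked, img.affine)

def invert_histogram(volume, max_intensity=255):
    """Histogram inversion used in comparison (4): flip intensities within [0, 255]."""
    return max_intensity - volume

# Example usage (hypothetical file names):
# wm_edsr = mask_tissue("sub-01_T1w_EDSR.nii.gz", "sub-01_WM_mask.nii.gz")
# wm_hr   = mask_tissue("sub-01_T1w.nii.gz",      "sub-01_WM_mask.nii.gz")
```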

Figure 3.

Brain tissue segmentation using the 5ttgen tool from MRtrix3, based on a FreeSurfer parcellation image map. The segmented tissues considered are as follows: cortical gray matter (CGM), deep gray matter nuclei (DGM), white matter (WM), and cerebrospinal fluid (CSF) (T1w and T2w images from a 24-year-old male healthy subject from the Cam-CAN database).

We compared the distributions of RMSE, pSNR, SSIM, and HFEN obtained with the EDSR and BC algorithms applied to both T1w and T2w images, considering ROIs according to the list above. The one-sample Kolmogorov–Smirnov test was used to assess the normality of the similarity-parameter distributions within the samples. None were normally distributed, so nonparametric comparison tests (Kruskal–Wallis and two-sample Kolmogorov–Smirnov) were selected, considering p values < 0.05 as significant. Effect sizes were measured with Cohen's d (d > 0.8 considered “large”). A difference was considered significant when at least two of the three measures indicated it. Statistical analysis was carried out using Matlab R2020 (toolboxes: Statistics and Machine Learning 11.7, Image Processing 11.1).
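For readers without Matlab, an equivalent comparison can be sketched in Python with SciPy, as below. This is not the authors' code: the "two out of three" decision rule is our reading of the criterion above (Kruskal–Wallis, two-sample Kolmogorov–Smirnov, and a large Cohen's d), and the function and variable names are illustrative.

```python
import numpy as np
from scipy import stats

def compare_methods(edsr_scores, bc_scores, alpha=0.05):
    """Nonparametric comparison of one metric (e.g., SSIM) between EDSR and BC
    over the 70-subject sample; inputs are 1D NumPy arrays. Illustrative sketch only."""
    # Normality check: one-sample KS test of the standardized scores against N(0, 1).
    z = (edsr_scores - edsr_scores.mean()) / edsr_scores.std(ddof=1)
    p_norm = stats.kstest(z, "norm").pvalue

    # Nonparametric tests between the two up-sampling methods.
    p_kw = stats.kruskal(edsr_scores, bc_scores).pvalue
    p_ks = stats.ks_2samp(edsr_scores, bc_scores).pvalue

    # Cohen's d with a pooled standard deviation.
    pooled_sd = np.sqrt((edsr_scores.var(ddof=1) + bc_scores.var(ddof=1)) / 2)
    d = (edsr_scores.mean() - bc_scores.mean()) / pooled_sd

    # One possible reading of the "two out of three" criterion.
    significant = sum([p_kw < alpha, p_ks < alpha, abs(d) > 0.8]) >= 2
    return {"p_normality": p_norm, "p_kruskal": p_kw, "p_ks": p_ks,
            "cohen_d": d, "significant": significant}
```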

Results

Entire brain: 2D reconstructions

The distributions of the reference metrics (RMSE, pSNR, SSIM, and HFEN) among the 70-subject samples were evaluated for the EDSR and BC methods as described in the previous section, using the original HR images as the gold standard, for both T1w and T2w images. Sagittal, axial, and coronal 2D slices were considered separately. Results are reported in Table 3. For each plane, metric, and sequence, median values [±median absolute deviation (MAD)] are shown. For all directions, results showed significant differences in favor of the EDSR method on all criteria for T1w images and on the SSIM and HFEN metrics for T2w images. The ranking of the metrics among the three directions was also evaluated for both EDSR and BC: no orientation was superior to the others across all measurements.

Table 3.

Median (±MAD) of RMSE, pSNR, SSIM, and HFEN distributions among the 70-subject samples for 2D multislice reconstructions of the entire brain

Entire brain: 3D reconstructions

From the 2D multislice SR images, 3D reconstructions were obtained (Fig. 1). Results are reported in Table 4. Age, sex, and total intracranial volume (TIV) were included as covariates of no interest in the analysis. Results showed significant differences in favor of the EDSR method on all metrics for T1w images and on the SSIM and HFEN metrics for T2w images.

Table 4.

Median (±MAD) of RMSE, pSNR, SSIM, and HFEN distributions among the 70-subject samples for 3D reconstructions of the entire brain

Different brain tissues

We considered GM, WM, and CSF individually in the 3D reconstructions. The results are shown in Table 5, presented as in the previous sections. They showed the following: (1) in GM, significant differences in favor of the EDSR method on all criteria for T1w and T2w images; (2) in WM, significant differences in favor of the EDSR method on all criteria for T2w images, but no difference for T1w images; and (3) in CSF, significant differences in favor of the EDSR method on all criteria for T1w images, but no difference for T2w images.

Table 5.

Median (±MAD) of RMSE, pSNR, SSIM, and HFEN distributions among the 70-subject samples for 3D reconstructions of individual brain tissues

Entire brain: histogram inversion

The histograms of the original T1w and T2w images were inverted, and the new images were processed. Results are summarized in Table 6. There were significant differences in favor of the EDSR method on all criteria for T2w images and on the SSIM and HFEN metrics for T1w images.

Table 6.

Median (±MAD) of RMSE, pSNR, SSIM, and HFEN distributions among the 70-subject samples for 3D reconstructions of the entire brain after histogram inversion

Discussion

SR techniques have the potential to improve the trade-off between diagnostic accuracy and limited total scan time, supporting faster screening and follow-up exams. Lower resolution images can be acquired rapidly and then have their spatial resolution enhanced at the postprocessing stage. The 2D CNN EDSR model was introduced by Lim et al. (2017), and it is still one of the state-of-the-art SR DL algorithms: in a recent review of single-image SR methods (Bashir et al., 2021), EDSR was among the top 5 best performing out of 19 models. It has recently been used in biomedical applications (Huang et al., 2021), and we decided to adopt it for the current study. We tested the model, pretrained on natural images, on MR scans, aiming to assess its generalization ability. To validate the model, we needed a pixel-wise reference map; in an ideal situation, pairs of perfectly registered LR and HR images would be acquired during the same exam, but in practice further interpolation operations would be needed to register the two images to each other, and the required scan time would also increase. Thus, following the literature (Zhang et al., 2021), we used BC interpolation, together with a Gaussian filter, to simulate LR image acquisition. The results, from 2D and 3D reconstructions, were compared with BC up-sampling, chosen as the baseline since it is commonly used and implemented in most image visualization software packages.

In Figure 2, an example of EDSR and BC T1w and T2w up-sampled images is shown: a qualitative assessment highlights that the model better reproduces the original HR images in both sequences, restoring high spatial frequency structures, such as the cortical gyri or ventricle edges, in more detail. To quantify aspects of reconstructions relevant to human perception, a robust analysis was carried out. The discussion follows the order of presentation of the results.

  1. The model was tested over 2D images, and it outperforms BC interpolation in all the slices; there is a significant difference in favor of the EDSR model for both MR modalities considering the two perception-based metrics (SSIM and HFEN), while for the other two criteria (RMSE and pSNR) there is a significant difference for T1w but not for T2w images (Table 3). Several studies have shown that SSIM and HFEN are more reliable indicators of image quality degradation and better match the human visual system than RMSE and pSNR (Wang et al., 2004). In fact, they measure complex image features such as luminance, contrast, and high spatial frequencies, while RMSE and pSNR are based only on the absolute difference in pixel gray levels. Thus, the results from 2D super-resolved images demonstrate that the EDSR reconstructions are better overall than those based on BC interpolation in T1w and T2w images.

    The comparison between the three orthogonal directions for both the EDSR and BC methods shows that no orientation is superior to the others over all the measurements. Considering that the T1w and T2w images were acquired with readout along the sagittal direction, the EDSR model proves effective in enhancing both in-plane (sagittal slices) and through-plane (axial and coronal slices) spatial resolution.

  2. The MR images analyzed are 3D data; thus an overall assessment of the entire volume is preferred. We therefore exploited the 2D CNN model to reproduce 3D images, combining reconstructions from the three principal orthogonal planes (Fig. 1). The analysis (Table 4) confirms the superior performance of EDSR over BC; age, sex, and TIV were included as covariates and did not show any influence on the EDSR outcomes.

  3. The EDSR reconstructions showed some intensity differences compared with the original gold standard, as evaluated by the RMSE and pSNR criteria. To go beyond using the algorithm as a “black box” and to attempt to explain its functioning, we investigated the influence of the underlying tissue gray levels on model performance. GM, WM, and CSF were considered separately since tissue contrast and intensities vary between the MR sequences analyzed. The appearance of each tissue type in T1w and T2w images is shown in Figure 3, and from Table 5 it can be seen that:

    1. GM: EDSR outperforms bicubic interpolation on all criteria both in T1w and in T2w images.

    2. WM: EDSR outperforms bicubic interpolation on all criteria in T2w images, where the tissue appears hypointense, while in T1w images, where WM appears brighter, there is no significant difference.

    3. CSF: EDSR outperforms bicubic interpolation on all criteria in T1w images, where the tissue appears hypointense, while in T2w images, where CSF appears particularly hyperintense, there is no significant difference.

  4. The outcomes of the previous point suggest that the lack of outperformance of EDSR occurs in hyperintense areas. To test this hypothesis, the histograms of the original images were inverted and the analyses were repeated, yielding the opposite results (Table 6): EDSR significantly outperformed BC interpolation on all criteria in T2w images and on two (SSIM and HFEN) out of four criteria in T1w images.

It is well established that DL algorithms, including EDSR, work better than traditional BC up-sampling on data from the training dataset (Dong et al., 2014). We wanted to test the response of EDSR to data with characteristics unlike those of the training set: the network weights, derived in the NTIRE Challenge 2017 from training on a variety of photographic images, were left untouched and the model was applied directly to MR brain images. Poor generalizability over different data is an issue in SR model applications: commonly, the training and test data are of the same thematic type, whatever the model used (e.g., Generative Adversarial Networks). Some examples involving brain MRI are as follows: Zhang et al. (2021) used 3 T structural T1w images divided into training and test datasets; Sui et al. (2021) trained their model using 3 T T1w and T2w images from an online dataset and tested it on T2w images acquired on their own 3 T scanner. EDSR produced excellent reconstructions of MR brain images, superior to the standard BC method, leading to the conclusion that a retraining stage was not necessary, even though the kind of data used for training was very different from the data we wanted to super-resolve. In fact, the high quality of the EDSR reconstructions achieved in this work is not trivial, considering the transition from general-purpose images to brain MR scans. The capacity to generalize over new data is probably due to the large number of model parameters: in the review of Bashir et al. (2021), EDSR had the highest number of parameters among the models analyzed, more than one-third higher than the second-ranked model. This does not, however, compromise the computational cost, which remains low. Furthermore, EDSR is a 2D CNN, which positively influences computational effort and memory allocation. Such models can be used to reconstruct three-dimensional data, including brain images from MR exams: Zhang et al. (2021) obtained 3D reconstructions from two multislice super-resolution images, achieving better results than three-dimensional super-resolution technology. We decided to use an analogous method as a viable alternative to 3D CNNs.

Our SR pipeline showed good technical outcomes that, as mentioned above, enable the acquisition of low-resolution images whose spatial resolution can be enhanced in postprocessing, yielding the clinical benefit of reducing total scan time. Scan time is not linearly proportional to voxel size, depending on several factors including parallel imaging; however, an approximate evaluation is possible: for example, on a Siemens MAGNETOM Skyra 3 T scanner, the scan time for a 1 mm-isotropic T1w acquisition (sagittal 3D MPRAGE, TR/TE = 2,300/2.98 ms) is ∼5 min 21 s; setting the spatial resolution to 2 mm isotropic and changing the other parameters as little as possible, it would be ∼2 min 30 s. Therefore, the proposed postprocessing operation could reduce the acquisition time by more than half while yielding the same nominal spatial resolution, simultaneously reducing the probability of motion artifacts and other noise sources.

There are a few limiting factors to consider and further analyses to perform for the optimal usage of EDSR.

First, the data used in this work are high-quality MR images of healthy subjects, without sizeable movement artifacts, and their spatial resolution was improved from 2 × 2 to 1 × 1 mm2. At a millimeter–centimeter scale, image intensity in these images is dominated by the presence of myelinated axons (WM), neuronal and glial cell bodies (GM), and CSF. Within a 1 mm isotropic voxel, one tissue type typically predominates, and this pattern recurs in neighboring pixels over a centimeter scale, producing patches of fairly uniform pixel intensity separated by fairly sharp boundaries. It is possible to speculate that these image characteristics are close to those found in the photographs of natural scenes used to train EDSR. Additionally, changes in signal intensity or morphology due to neurological diseases introduce additional tissue types in a variable manner at the voxel scale. Thus, the achievement of good performance at different scales and with pathological tissue is not obvious and will have to be tested.

Second, we expect that the model could be used on data from MR sequences that yield contrasts other than T1w and T2w, but in that case its performance is likely to vary, especially in images with large hyperintense areas. We noticed a deterioration in EDSR performance with T2w images, as in Sui et al. (2021), in which CSF appears very bright. This may be related to the fact that in the natural images used to train the model, such as landscapes and animals, bright and shiny elements (e.g., light reflected in a mirror) are less common than darker ones (e.g., shadows of objects and people). Moreover, images including large glare or shadow components are commonly excluded from training datasets like DIV2K, since they are interpreted as undesired components of a picture. To better understand, and potentially explain, the limits of generalizability shown by the model, we investigated its behavior with respect to the pixel intensity of the different tissue types, finding that high-intensity pixels were more difficult to reproduce accurately.

Finally, we tested EDSR on LR images generated from HR ones, a standard procedure in SR studies to obtain pixel-wise reference maps against which to compare the model reconstructions while avoiding registration operations. Nevertheless, since these are simulated LR images, the superior performance of the model needs to be confirmed on LR images from real MR acquisitions.

The aim of this work was to validate our proposed SR pipeline: the EDSR model, previously trained on natural images, was applied directly to MR brain images, without retraining or fine-tuning, to test its inherent generalization ability over a new kind of data. At this stage, we decided to use a dataset of healthy controls, evaluating the performance of the model with reference metrics. In future work, radiomic and textural features will be used to assess the reliability of our method, including images from patients’ exams. To assess the diagnostic relevance of the HR images obtained with the pipeline, quantitative analysis will need to be augmented with clinical evaluation.

Conclusion

The application of EDSR-2x to MR biomedical images in this work leads to promising results: without needing a tailored retraining, the model shows its ability to generalize from general-purpose images to different MR sequences of healthy subjects, achieving better performance than traditional up-sampling methods. Although the model works with two-dimensional data, the advantage remains when three-dimensional reconstructions are considered. The data analyzed in this study include images from different MR sequences of subjects with different characteristics in terms of age, sex, and TIV; several metrics and statistical tests were selected to assess the quality of the up-sampled images, so the application of this model to MR brain images was satisfactorily validated. An analysis of the grayscale histogram identified a different reconstruction performance as a function of pixel intensity, with weaker performance in hyperintense areas, which limits the possible applications, especially when the MR technique is prone to generate images with regions of high intensity.

The T1w and T2w images analyzed in this work were previously unseen by the model, and we can reasonably expect similar outcomes on images from the same sequences, which would appear with similar intensities and contrasts, acquired with different scanners.

Footnotes

  • The authors declare no competing financial interests.

  • The authors would like to thank the Italian National Institute for Nuclear Physics (Progetto Next-AIM—Artificial Intelligence in Medicine), which supported this work, and the Cambridge Centre for Ageing and Neuroscience (Cam-CAN), funded by the UK Biotechnology and Biological Sciences Research Council, together with support from the UK Medical Research Council and the University of Cambridge, UK. The publication of this article was supported by the “Ricerca Corrente” funding from the Italian Ministry of Health.

  • ↵*C.F. and N.C. contributed equally to this work.

  • ↵**D.N.M. and G.C. contributed equally to this work and share seniorship.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Agustsson E, Timofte R (2017) NTIRE 2017 challenge on single image super-resolution: dataset and study. IEEE CVPRW, pp 1122–1131.
  2. Bashir SMA, Wang Y, Khan M, Niu Y (2021) A comprehensive review of deep learning-based single image super-resolution. PeerJ Comput Sci 7:e621. https://doi.org/10.7717/peerj-cs.621
  3. Bradski G (2000) The OpenCV library. Dr. Dobb's J Softw Tools 120:122–125.
  4. Bull DR, Zhang F (2021) Chapter 10: Measuring and managing picture quality. In: Intelligent image and video compression, Ed 2 (Bull DR, Zhang F, eds), pp 335–384. Oxford: Academic Press.
  5. Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A (2021) A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 65:545–563. https://doi.org/10.1111/1754-9485.13261
  6. Çinar A, Yildirim M (2020) Detection of tumors on brain MRI images using the hybrid convolutional neural network architecture. Med Hypotheses 139:109684. https://doi.org/10.1016/j.mehy.2020.109684
  7. Dong L, Inoue K (2021) Super-resolution reconstruction based on two-stage residual neural network. Mach Learn Appl 6:100162. https://doi.org/10.1016/j.mlwa.2021.100162
  8. Dong C, Loy CC, He K, Tang X (2014) Learning a deep convolutional network for image super-resolution. ECCV, Lecture Notes in Computer Science 8692:184–199. Cham: Springer.
  9. Eroglu Y, Çinar A, Yildirim M (2022) mRMR-based hybrid convolutional neural network model for classification of Alzheimer's disease on brain magnetic resonance images. Int J Imaging Syst Technol 32:517–527. https://doi.org/10.1002/ima.22632
  10. Fischl B (2012) FreeSurfer. NeuroImage 62:774–781. https://doi.org/10.1016/j.neuroimage.2012.01.021
  11. Gupta R, Sharma A, Kumar A (2020) Super-resolution using GANs for medical imaging. Procedia Comput Sci 173:28–35. https://doi.org/10.1016/j.procs.2020.06.005
  12. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. IEEE CVPR, pp 770–778.
  13. Huang B, Xiao H, Liu W, Zhang Y, Wu H, Wang W, Yang Y, Yang Y, Miller GW, Li T (2021) MRI super-resolution via realistic downsampling with adversarial learning. Phys Med Biol 66:205004. https://doi.org/10.1088/1361-6560/ac232e
  14. Kellner E, Dhital B, Kiselev VG, Reisert M (2016) Gibbs-ringing artifact removal based on local subvoxel-shifts. Magn Reson Med 76:1574–1581. https://doi.org/10.1002/mrm.26054
  15. Kim J, Lee JK, Lee KM (2016) Accurate image super-resolution using very deep convolutional networks. IEEE CVPR, pp 1646–1654.
  16. Ledig C, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. IEEE CVPR, pp 105–144.
  17. Lim B, Son S, Kim H, Nah S, Lee KM (2017) Enhanced deep residual networks for single image super-resolution. IEEE CVPRW, pp 1132–1140.
  18. Shafto MA, et al. (2014) The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) study protocol: a cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing. BMC Neurol 14:204. https://doi.org/10.1186/s12883-014-0204-1
  19. Smith SM (2002) Fast robust automated brain extraction. Hum Brain Mapp 17:143–155. https://doi.org/10.1002/hbm.10062
  20. Sui Y, Afacan O, Gholipour A, Warfield SK (2021) MRI super-resolution through generative degradation learning. Med Image Comput Comput Assist Interv 12906:430–440. https://doi.org/10.1007/978-3-030-87231-1_42
  21. Sun L, Fan Z, Ding X, Huang Y, Paisley J (2019) Region-of-interest undersampled MRI reconstruction: a deep convolutional neural network approach. Magn Reson Imaging 63:185–192. https://doi.org/10.1016/j.mri.2019.07.010
  22. Taylor JR, Williams N, Cusack R, Auer T, Shafto MA, Dixon M, Tyler LK, Cam-CAN, Henson RN (2017) The Cambridge Centre for Ageing and Neuroscience (Cam-CAN) data repository: structural and functional MRI, MEG, and cognitive data from a cross-sectional adult lifespan sample. NeuroImage 144:262–269. https://doi.org/10.1016/j.neuroimage.2015.09.018
  23. Timofte R, et al. (2017) NTIRE 2017 challenge on single image super-resolution: methods and results. IEEE CVPRW, pp 1110–1121.
  24. Tournier JD, Smith RE, Raffelt D, Tabbara R, Dhollander T, Pietsch M, Christiaens D, Jeurissen B, Chun-Hung Y, Connelly A (2019) MRtrix3: a fast, flexible and open software framework for medical image processing and visualisation. NeuroImage 202:116137. https://doi.org/10.1016/j.neuroimage.2019.116137
  25. Umer RM, Foresti GL, Micheloni C (2020) Deep generative adversarial residual convolutional networks for real-world super-resolution. IEEE CVPRW, pp 1769–1777.
  26. Van der Walt S, Schönberger JL, Nunez-Iglesias J, Boulogne F, Warner JD, Yager N, Gouillart E, Yu T, the scikit-image contributors (2014) scikit-image: image processing in Python. PeerJ 2:e453. https://doi.org/10.7717/peerj.453
  27. Virtanen P, et al. (2020) SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods 17:261–272. https://doi.org/10.1038/s41592-019-0686-2
  28. Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process 13:600–612. https://doi.org/10.1109/TIP.2003.819861
  29. Yang W, Zhang X, Tian Y, Wang W, Xue J-H (2019) Deep learning for single image super-resolution: a brief review. IEEE Trans Multimedia 21:3106–3121. https://doi.org/10.1109/TMM.2019.2919431
  30. Zeng K, Zheng H, Cai C, Yang Y, Zhang K, Chen Z (2018) Simultaneous single- and multi-contrast super-resolution for brain MRI images based on a convolutional neural network. Comput Biol Med 99:133–141. https://doi.org/10.1016/j.compbiomed.2018.06.010
  31. Zhang H, Shinomiya Y, Yoshida S (2021) 3D MRI reconstruction based on 2D generative adversarial network super-resolution. Sensors 21:2978. https://doi.org/10.3390/s21092978

Synthesis

Reviewing Editor: Christoph Michel, Universite de Geneve

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Giovanni Mettivier. Note: If this manuscript was transferred from JNeurosci and a decision was made to accept the manuscript without peer review, a brief statement to this effect will instead be what is listed below.

Both reviewers suggest revising the manuscript. They have concrete suggestions for where and how changes should be made. Please follow their advice when revising the manuscript.

Reviewer 1:

Comments to the Authors (Required) I reviewed the study titled "Generalizing the Enhanced-Deep-Super-Resolution neural network to brain MR images: a retrospective study on the Cam-CAN dataset" in detail. There are some missing points in the study. Authors need to address the following shortcomings. First of all, there are many studies in the literature using brain MRI images. https://doi.org/10.1016/j.mehy.2020.109684 , https://doi.org/10.1002/ima.22632 studies can be given as examples. Studies using MR images were not discussed in the study. At the end of the Introduction section, a paragraph containing the innovations and contributions of the article and a paragraph about the organization of the study should be added. Heading 2.1 should be elaborated because it is the most important heading of the article. Spelling and grammatical errors should be reviewed.

Reviewer 2:

In this paper the authors want to investigate the possibility of using an EDSR network previously trained on generic images to increase the spatial resolution in brain MRI images in order to reduce the acquisition time needed to obtain high spatial resolution MRI images.

The article is well written and clear and certainly of interest but there is a very important point that needs, in my opinion, more discussion. The data certainly show a superiority of the EDSR network compared to the BC method but this could also be expected since the LR images are obtained from the HR using a Gaussian filter and two BCs. However, the data does not show how well the EDSR algorithm provides an image useful for reducing the acquisition time. How close the diagnostic content is to the HR that would be obtained with a longer time acquisition. This, in my opinion, can only be assessed by a doctor. What is the opinion of the authors on the matter. Can they propose another method to evaluate it? I believe that this concept should be clarified in the discussions or included in the limitations of the study and also discuss how much this affects the achievement of the objective of the paper.
