A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans
Abstract
Accurate brain tissue extraction from magnetic resonance imaging (MRI) data is crucial for analyzing brain structure and function. Although several conventional tools have been optimized for human brain data, no generalizable method exists for extracting brain tissue from multimodal MRI data of rodents, nonhuman primates, and humans. A flexible and generalizable whole-brain extraction method that works across species would therefore allow researchers to analyze and compare experimental results more efficiently. Here, we propose a domain-adaptive and semi-supervised deep neural network, named the Brain Extraction Net (BEN), to extract brain tissue across species, MRI modalities, and MR scanners. We evaluated BEN on 18 independent datasets, including 783 rodent MRI scans, 246 nonhuman primate MRI scans, and 4,601 human MRI scans, covering five species, four modalities, and six MR scanners with various magnetic field strengths. Compared with conventional toolboxes, BEN shows superior robustness, accuracy, and generalizability. Our proposed method not only provides a generalized solution for extracting brain tissue across species but also significantly improves the accuracy of atlas registration, thereby benefiting downstream processing tasks. As a novel, fully automated deep-learning method, BEN is designed as open-source software to enable high-throughput processing of neuroimaging data across species in preclinical and clinical applications.
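The abstract reports accuracy relative to conventional toolboxes. Brain-extraction accuracy is commonly scored with the Dice overlap between a predicted brain mask and a manual reference; the sketch below is a minimal illustration of that metric only, not the authors' code. The mask file names are hypothetical, and nibabel is assumed for NIfTI I/O.

```python
# Minimal sketch (not the authors' implementation): Dice overlap between a
# predicted brain mask and a manual reference mask. File names are hypothetical.
import numpy as np
import nibabel as nib  # standard NIfTI I/O library


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom


if __name__ == "__main__":
    pred_mask = nib.load("ben_brain_mask.nii.gz").get_fdata() > 0.5
    true_mask = nib.load("manual_brain_mask.nii.gz").get_fdata() > 0.5
    print(f"Dice: {dice_coefficient(pred_mask, true_mask):.3f}")
```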
Data availability
All data (MRI data, source code, pretrained weights, and demo notebooks to replicate Figures 1-7) are included in the manuscript or available at https://github.com/yu02019/BEN.
- A longitudinal MRI dataset of young adult C57BL6J mouse brain. Zenodo, doi:10.5281/zenodo.6844489.
- CAMRI Rat Brain MRI Data. OpenNeuro, doi:10.18112/openneuro.ds002870.v1.0.1.
- CAMRI Mouse Brain MRI Data. OpenNeuro, doi:10.18112/openneuro.ds002868.v1.0.1.
- An Open Resource for Non-human Primate Imaging. Neuron, doi:10.1016/j.neuron.2018.08.039.
- The Adolescent Brain Cognitive Development (ABCD) study: Imaging acquisition across 21 sites. Developmental Cognitive Neuroscience, doi:10.1016/j.dcn.2018.03.001.
- Multimodal population brain imaging in the UK Biobank prospective epidemiological study. Nature Neuroscience, doi:10.1038/nn.4393.
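For readers who want to work with the public rat and mouse datasets listed above, OpenNeuro datasets can typically be fetched programmatically. The sketch below is illustrative only and is not part of the BEN repository; it assumes the DataLad Python package is installed and that OpenNeuro's GitHub mirrors (github.com/OpenNeuroDatasets) host these accessions.

```python
# Minimal sketch (assumptions: DataLad is installed and the OpenNeuro GitHub
# mirrors carry these accessions). Clones each dataset skeleton, then downloads
# the annexed image files.
import datalad.api as dl

for accession in ("ds002870", "ds002868"):  # CAMRI rat / mouse brain MRI data
    dl.clone(
        source=f"https://github.com/OpenNeuroDatasets/{accession}.git",
        path=accession,
    )
    dl.get(path=accession)  # fetch the actual NIfTI files into the clone
```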
Article and author information
Author details
Funding
National Natural Science Foundation of China (81873893, 82171903, 92043301)
- Xiao-Yong Zhang
Fudan University (Office of Global Partnerships, Key Projects Development Fund)
- Xiao-Yong Zhang
Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01)
- Xiao-Yong Zhang
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: Part of the rodent MRI data were collected under protocols approved by the Animal Care and Use Committee of Fudan University, China. The remaining rodent data (the Rat-T2WI-9.4T and Rat-EPI-9.4T datasets) are publicly available (CAMRI: https://openneuro.org/datasets/ds002870/versions/1.0.0). Marmoset MRI data were collected under protocols approved by the Animal Care and Use Committee of the Institute of Neuroscience, Chinese Academy of Sciences, China. Macaque MRI data are publicly available from the nonhuman PRIMatE Data Exchange (PRIME-DE) (https://fcon_1000.projects.nitrc.org/indi/indiPRIME.html).
Human subjects: The Zhangjiang International Brain Biobank (ZIB) protocols were approved by the Ethics Committee of Fudan University (AF/SC-03/20200722), and written informed consent was obtained from all volunteers. UK Biobank (UKB) and Adolescent Brain Cognitive Development (ABCD) data are publicly available.
Reviewing Editor
- Saad Jbabdi, University of Oxford, United Kingdom
Version history
- Preprint posted: May 26, 2022
- Received: June 20, 2022
- Accepted: December 21, 2022
- Accepted Manuscript published: December 22, 2022 (version 1)
- Version of Record published: February 17, 2023 (version 2)
- Version of Record updated: February 21, 2023 (version 3)
Copyright
© 2022, Yu et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.