A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans

  1. Ziqi Yu
  2. Xiaoyang Han
  3. Wenjing Xu
  4. Jie Zhang
  5. Carsten Marr
  6. Dinggang Shen
  7. Tingying Peng (corresponding author)
  8. Xiao-Yong Zhang (corresponding author)
  9. Jianfeng Feng
  1. Fudan University, China
  2. Helmholtz Zentrum München, Germany
  3. ShanghaiTech University, China

Abstract

Accurate brain tissue extraction from magnetic resonance imaging (MRI) data is crucial for analyzing brain structure and function. While several conventional tools have been optimized to handle human brain data, no generalizable method has been available to extract brain tissue from multimodal MRI data of rodents, nonhuman primates, and humans. A flexible and generalizable method for extracting whole-brain tissue across species would therefore allow researchers to analyze and compare experimental results more efficiently. Here, we propose a domain-adaptive and semi-supervised deep neural network, named the Brain Extraction Net (BEN), to extract brain tissue across species, MRI modalities, and MR scanners. We evaluated BEN on 18 independent datasets, including 783 rodent MRI scans, 246 nonhuman primate MRI scans, and 4,601 human MRI scans, covering five species, four modalities, and six MR scanners with various magnetic field strengths. Compared with conventional toolboxes, BEN demonstrates superior robustness, accuracy, and generalizability. Our proposed method not only provides a generalized solution for extracting brain tissue across species but also significantly improves the accuracy of atlas registration, thereby benefiting downstream processing tasks. As a novel, fully automated deep-learning method, BEN is designed as open-source software to enable high-throughput processing of neuroimaging data across species in preclinical and clinical applications.
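
The accuracy comparisons mentioned above are typically quantified by the overlap between a predicted brain mask and a manual reference; the Dice coefficient is the standard metric for this purpose. The sketch below is a minimal NumPy illustration of that metric, not the paper's evaluation code, and the synthetic masks are placeholders.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice overlap between two binary brain masks (1 = brain, 0 = background)."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    total = pred.sum() + ref.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example on synthetic 3D masks (the paper compares predicted masks against
# manual annotations; the arrays here are illustrative only).
pred = np.zeros((64, 64, 32), dtype=np.uint8)
ref = np.zeros((64, 64, 32), dtype=np.uint8)
pred[16:48, 16:48, 8:24] = 1
ref[18:50, 16:48, 8:24] = 1
print(f"Dice: {dice_coefficient(pred, ref):.3f}")
```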

Data availability

All data (MRI data, source code, pretrained weights, and demo notebooks to replicate Figures 1-7) are included in the manuscript or available at https://github.com/yu02019/BEN.
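
The repository linked above ships pretrained weights and demo notebooks; the exact inference entry point should be taken from its README. As a rough, hypothetical sketch of what applying a pretrained brain-extraction model to a NIfTI scan typically looks like (run_brain_extraction and the file names below are placeholders, not BEN's actual API):

```python
import nibabel as nib
import numpy as np

def run_brain_extraction(volume: np.ndarray) -> np.ndarray:
    """Placeholder for the BEN model's inference call (see the repository's
    demo notebooks for the real entry point). A crude intensity threshold is
    used here only so the sketch runs end to end; it is NOT the deep model."""
    threshold = np.percentile(volume, 75)
    return (volume > threshold).astype(np.uint8)

# Load a scan (file name is illustrative), predict a brain mask, and apply it.
img = nib.load("mouse_T2w.nii.gz")
volume = img.get_fdata()
mask = run_brain_extraction(volume)
brain_only = volume * mask

# Save the mask and the skull-stripped volume, reusing the input affine.
nib.save(nib.Nifti1Image(mask, img.affine), "mouse_T2w_mask.nii.gz")
nib.save(nib.Nifti1Image(brain_only, img.affine), "mouse_T2w_brain.nii.gz")
```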


Article and author information

Author details

  1. Ziqi Yu

    Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0001-8201-5481
  2. Xiaoyang Han

    Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0002-3007-6079
  3. Wenjing Xu

    Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
    Competing interests
    No competing interests declared.
  4. Jie Zhang

    Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
    Competing interests
    No competing interests declared.
  5. Carsten Marr

    Institute of AI for Health, Helmholtz Zentrum München, Neuherberg, Germany
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0003-2154-4552
  6. Dinggang Shen

    School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
    Competing interests
    Dinggang Shen is affiliated with Shanghai United Imaging Intelligence Co., Ltd. and has financial interests to declare.
  7. Tingying Peng

    Helmholtz AI, Helmholtz Zentrum München, Neuherberg, Germany
    For correspondence
    tingying.peng@helmholtz-muenchen.de
    Competing interests
    No competing interests declared.
  8. Xiao-Yong Zhang

    Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
    For correspondence
    xiaoyong_zhang@fudan.edu.cn
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0001-8965-1077
  9. Jianfeng Feng

    Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0001-5987-2258

Funding

National Natural Science Foundation of China (81873893, 82171903, 92043301)

  • Xiao-Yong Zhang

Fudan University (Office of Global Partnerships, Key Projects Development Fund)

  • Xiao-Yong Zhang

Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01)

  • Xiao-Yong Zhang

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Saad Jbabdi, University of Oxford, United Kingdom

Ethics

Animal experimentation: Collection of part of the rodent MRI data was approved by the Animal Care and Use Committee of Fudan University, China. The remaining rodent data (Rat-T2WI-9.4T and Rat-EPI-9.4T datasets) are publicly available (CARMI: https://openneuro.org/datasets/ds002870/versions/1.0.0). Marmoset MRI data collection was approved by the Animal Care and Use Committee of the Institute of Neuroscience, Chinese Academy of Sciences, China. Macaque MRI data are publicly available from the nonhuman PRIMatE Data Exchange (PRIME-DE) (https://fcon_1000.projects.nitrc.org/indi/indiPRIME.html).

Human subjects: The Zhangjiang International Brain Biobank (ZIB) protocols were approved by the Ethics Committee of Fudan University (AF/SC-03/20200722), and written informed consent was obtained from all volunteers. UK Biobank (UKB) and Adolescent Brain Cognitive Development (ABCD) data are publicly available.

Version history

  1. Preprint posted: May 26, 2022
  2. Received: June 20, 2022
  3. Accepted: December 21, 2022
  4. Accepted Manuscript published: December 22, 2022 (version 1)
  5. Version of Record published: February 17, 2023 (version 2)
  6. Version of Record updated: February 21, 2023 (version 3)

Copyright

© 2022, Yu et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Ziqi Yu
  2. Xiaoyang Han
  3. Wenjing Xu
  4. Jie Zhang
  5. Carsten Marr
  6. Dinggang Shen
  7. Tingying Peng
  8. Xiao-Yong Zhang
  9. Jianfeng Feng
(2022)
A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans
eLife 11:e81217.
https://doi.org/10.7554/eLife.81217

