A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans

  1. Ziqi Yu
  2. Xiaoyang Han
  3. Wenjing Xu
  4. Jie Zhang
  5. Carsten Marr
  6. Dinggang Shen
  7. Tingying Peng  Is a corresponding author
  8. Xiao-Yong Zhang  Is a corresponding author
  9. Jianfeng Feng
  1. Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, China
  2. MOE Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence, Fudan University, China
  3. MOE Frontiers Center for Brain Science, Fudan University, China
  4. Institute of AI for Health (AIH), Helmholtz Zentrum München, Germany
  5. School of Biomedical Engineering, ShanghaiTech University, China
  6. Shanghai United Imaging Intelligence Co., Ltd, China
  7. Shanghai Clinical Research and Trial Center, China
  8. Helmholtz AI, Helmholtz Zentrum München, Germany
6 figures, 8 tables and 1 additional file

Figures

Figure 1
BEN renovates the brain extraction workflow to adapt to multiple species, modalities and platforms.

BEN has the following advantages: (1) Transferability and flexibility: BEN can adapt to different species, modalities and platforms through its adaptive batch normalization module and semi-supervised learning module. (2) Automatic quality assessment: Unlike traditional toolboxes, which rely on manual inspection to assess brain extraction quality, BEN incorporates a quality assessment module that automatically evaluates its own brain extraction performance. (3) Speed: As a deep learning (DL)-based method, BEN can process an MRI volume faster (<1 s) than traditional toolboxes (several minutes or longer).

Figure 2
The architecture of our proposed BEN demonstrates its generalizability.

(A) The domain transfer workflow. BEN is initially trained on the Mouse-T2-11.7T dataset (representing the source domain) and then transferred to many target domains that differ from the source domain in either the species, MRI modality, magnetic field strength, or some combination thereof. Efficient domain transfer is achieved via an adaptive batch normalization (AdaBN) strategy and a Monte Carlo quality assessment (MCQA) module. (B) The backbone of BEN is the nonlocal U-Net (NL-U-Net) used for brain extraction. Similar to the classic U-Net architecture, NL-U-Net also contains a symmetrical encoding and decoding path, with an additional nonlocal attention module to tell the network where to look, thus maximizing the capabilities of the model. (C) Illustration of the AdaBN strategy. The batch normalization (BN) layers in the network are first trained on the source domain. When transferring to a target domain, the statistical parameters in the BN layers are updated in accordance with the new data distribution in the target domain. (D) Illustration of the MCQA process. We use Monte Carlo dropout sampling during the inference phase to obtain multiple predictions in a stochastic fashion. The Monte Carlo predictions are then used to generate aleatoric and epistemic uncertainties that represent the confidence of the segmentations predicted by the network, and we screen out the optimal segmentations with minimal uncertainty in each batch and use them as pseudo-labels for semi-supervised learning.
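The AdaBN step in panel (C) can be made concrete in a few lines: the affine parameters learned on the source domain are kept, and only the normalization statistics are re-estimated from target-domain activations. The sketch below is a minimal NumPy illustration of this idea, not BEN’s actual implementation; all array shapes and values are hypothetical.

```python
import numpy as np

def adabn_update(target_activations):
    """Re-estimate BN statistics (mean, variance) from target-domain activations.
    The learned affine parameters (gamma, beta) stay fixed from the source model."""
    mu = target_activations.mean(axis=0)
    var = target_activations.var(axis=0)
    return mu, var

def bn_forward(x, mu, var, gamma, beta, eps=1e-5):
    """Standard batch-normalization forward pass with given statistics."""
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

# Hypothetical target-domain features whose distribution is shifted away from
# the source domain; source statistics would normalize them poorly.
rng = np.random.default_rng(0)
target = rng.normal(loc=3.0, scale=2.0, size=(1000, 4))
gamma, beta = np.ones(4), np.zeros(4)

mu_t, var_t = adabn_update(target)
normed = bn_forward(target, mu_t, var_t, gamma, beta)
print(normed.mean(), normed.std())  # approximately 0 and 1 after adaptation
```

With the re-estimated statistics, the target features are again zero-mean and unit-variance at the BN layer, which is the effect AdaBN exploits when transferring across domains.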

Figure 3 with 4 supplements
Performance comparison of BEN with two benchmark settings on the task of cross-species domain transfer.

(A - D) Curve plots representing domain transfer tasks across species, showing the variation in the segmentation performance in terms of the Dice score and 95% Hausdorff distance (HD95) (y axis) as a function of the number of labeled volumes (x axis) for training from scratch (black dotted lines), fine-tuning (black solid lines) and BEN (red lines). From top to bottom: (A) mouse to rat, (B) mouse to marmoset, (C) mouse to macaque, and (D) mouse to human (ABCD) with an increasing amount of labeled training data (n=1, 2, 3, …, as indicated on the x axis in each panel). Both the Dice scores and the HD95 values of all three methods reach saturation when the number of labels is sufficient (n>20 labels); however, BEN outperforms the other methods, especially when limited labels are available (n≤5 labels). Error bars represent the mean with a 95% confidence interval (CI) for all curves. The blue dotted line corresponds to y(Dice)=0.95, which can be considered to represent qualified performance for brain extraction. (E - H) 3D renderings of representative segmentation results (the number (n) of labels used for each method is indicated in each panel). Images with fewer colored regions represent better segmentation results. (gray: true positive; brown: false positive; blue: false negative). Sample size for these datasets: Mouse-T2WI-11.7T (N=243), Rat-T2WI-11.7T (N=132), Marmoset-T2WI-9.4T (N=62), Macaque-T1WI (N=76) and Human-ABCD (N=963).
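Segmentation quality in these comparisons is summarized by the Dice score and HD95. As a reference for how the two metrics behave, here is a minimal NumPy sketch; it is illustrative only (brute-force, computed over all foreground voxels rather than surface points, unlike typical evaluation code), and the toy masks are hypothetical.

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap: 2|P ∩ G| / (|P| + |G|) for two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt):
    """95th-percentile symmetric Hausdorff distance, brute force over all
    foreground voxels (surface-based variants are cheaper on real volumes)."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    d = np.sqrt(((p[:, None, :] - g[None, :, :]) ** 2).sum(-1))
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))

# Toy masks: the prediction misses one row of the ground-truth square.
gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True
pred = np.zeros((32, 32), bool); pred[9:24, 8:24] = True
print(round(dice(pred, gt), 4), hd95(pred, gt))  # high Dice, HD95 of one voxel
```

Dice rewards overlap while HD95 penalizes boundary outliers, which is why the captions report both: a mask can have a high Dice score yet a large HD95 if a small blob of false positives sits far from the brain.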

Figure 3—figure supplement 1
Performance comparison of BEN with two benchmark settings on the task of cross-modality domain transfer.

(A - C) Curve plots representing domain transfer tasks across modalities, showing the variation in the segmentation performance in terms of the Dice score and 95% Hausdorff distance (HD95) (y axis) as a function of the number of labeled volumes (x axis) for training from scratch (black dotted lines), fine-tuning (black solid lines) and BEN (red lines). From top to bottom: (A) T2WI to EPI, (B) T2WI to SWI and (C) T2WI to ASL with an increasing amount of labeled training data (n=1, 2, 3, 5, 8, 10, as indicated on the x axis in each panel). BEN consistently surpasses the other methods while requiring less labeled data. Note that BEN achieves acceptable performance via zero-shot inference (n=0), where no labels from the target domain are used; in this case, fine-tuning amounts to direct inference on the target datasets, and training from scratch fails entirely. Error bars represent the mean with a 95% confidence interval (CI) for all curves. The blue dotted line corresponds to y(Dice)=0.95, which can be considered to represent qualified performance for brain extraction. (D - F) 3D renderings of representative segmentation results (the number (n) of labels used for each method is indicated in each panel). Images with fewer colored regions represent better segmentation results (gray: true positive; brown: false positive; blue: false negative). Sample size for these datasets: Mouse-T2WI-11.7T (N=243), Mouse-EPI-11.7T (N=54), Mouse-SWI-11.7T (N=50) and Mouse-ASL-11.7T (N=58).

Figure 3—figure supplement 2
Performance comparison of BEN with two benchmark settings on the task of cross-platform domain transfer.

(A - C) Curve plots representing domain transfer tasks across platforms with different magnetic field strengths, showing the variation in the segmentation performance in terms of the Dice score and 95% Hausdorff distance (HD95) (y axis) as a function of the number of labeled volumes (x axis) for training from scratch (black dotted lines), fine-tuning (black solid lines) and BEN (red lines). From top to bottom: (A) 11.7T to 9.4T, (B) 11.7T to 7T and (C) 9.4T to 7T with an increasing amount of labeled training data (n=1, 2, 3, ..., as indicated on the x axis in each panel). BEN reaches its saturation point much earlier in all three tasks than the other methods, indicating that BEN requires fewer labels. Error bars represent the mean with a 95% confidence interval (CI) for all curves. The blue dotted line corresponds to y(Dice)=0.95, which can be considered to represent qualified performance for brain extraction. (D - F) 3D renderings of representative segmentation results (the number (n) of labels used for each method is indicated in each panel). Note that, owing to the MRI acquisition protocol, the anterior and posterior portions of the brains in the 7T dataset are not included. Images with fewer colored regions represent better segmentation results (gray: true positive; brown: false positive; blue: false negative). Sample size for these datasets: Mouse-T2WI-11.7T (N=243), Mouse-T2WI-9.4T (N=14), Mouse-T2WI-7T (N=14).

Figure 3—figure supplement 3
BEN’s transferability does not depend on a specific source dataset.

In our previous experiments, the Mouse-T2WI-11.7T dataset was used as the source domain, and the model was transferred to other species, including humans. Here, we instead select the human dataset as the source domain and transfer the model to the mouse dataset. (A) Human to mouse with an increasing amount of labeled training data (n=1, 2, 3, ..., as indicated on the x axis). Curve plots representing the domain transfer task across species, showing the variation in the segmentation performance in terms of HD95 (y axis) as a function of the number of labeled volumes (x axis) for training from scratch (black dotted lines), fine-tuning (black solid lines), and BEN (red lines). (B) 3D renderings of representative segmentation results (the number (n) of labels used for each method is indicated in each panel). Images with fewer colored regions represent better segmentation results (gray: true positive; brown: false positive; blue: false negative). Sample size for these datasets: Mouse-T2WI-11.7T (N=243) and Human-ABCD (N=3250).

Figure 3—figure supplement 4
UMAP visualization of BEN’s transfer learning.

(A - C) Scatter plots of neural network feature map clusters. An unsupervised dimensionality reduction algorithm (UMAP) was used to visualize the semantic features. Different colors in the scatter plots indicate different clusters (green: brain, red: non-brain). Visualization of the feature maps via UMAP at three intermediate stages: (A) the semantic features of brain and non-brain are well separated in the source domain; (B) these features are intermingled in the target domain without transfer learning; (C) the features are separated again in the target domain after applying BEN’s domain transfer strategy.

Figure 4 with 4 supplements
BEN outperforms traditional SOTA methods and advantageously adapts to datasets from various domains across multiple species, modalities, and field strengths.

(A - E) Violin plots and inner box plots showing the Dice scores of each method for (A) mouse (n=345), (B) rat (n=330), (C) marmoset (n=112), (D) macaque (n=134), and (E) human (n=4601) MRI scans acquired with different magnetic field strengths. The field strength is illustrated with different markers above each panel, and the results for each method are shown in similar hues. The median values of the data are represented by the white hollow dots in the violin plots, the first and third quartiles are represented by the black boxes, and the interquartile range beyond 1.5 times the first and the third quartiles is represented by the black lines. ‘N.A.’ indicates a failure of the method on the corresponding dataset. (F - J) Comparisons of the volumetric segmentations obtained with each method relative to the ground truth for five species. For better visualization, we select magnetic field strengths of 11.7T for mouse scans (F), 7T for rat scans (G), 9.4T for marmoset scans (H), and 3T for both macaque (I) and human (J) scans. Plots for other field strengths can be found in Figure 4—figure supplement 2. The linear regression coefficients (LRCs) and 95% CIs are displayed above each graph. Each dot in a graph represents one sample in the dataset. The error bands in the plots represent the 95% CIs. (K - O) Bland–Altman analysis showing high consistency between the BEN results and expert annotations. The biases between two observers and the 95% CIs for the differences are shown in the tables above each plot. Δvolume = predicted volume minus annotated volume. Each method is shown using the same hue in all plots (gray: AFNI; blue: FSL; green: FreeSurfer; purple: Sherm; red: BEN). Different field strengths are shown with different markers (●: 11.7T; ×: 9.4T; ◆: 7T; ★: 4.7T; +: 3T; ▲: 1.5T). The Dice scores compute the overlap between the segmentation results and manual annotations.
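The LRC and Bland–Altman summaries in panels (F - O) can be computed for any pair of volume series. The following is a minimal NumPy sketch with synthetic volumes (all numbers are hypothetical and chosen only to mimic a mild systematic over-segmentation), not the analysis code used in the paper.

```python
import numpy as np

def lrc(gt_vol, pred_vol):
    """Linear regression coefficient (slope) of predicted vs. annotated volume."""
    slope, _intercept = np.polyfit(gt_vol, pred_vol, 1)
    return slope

def bland_altman(gt_vol, pred_vol):
    """Bias (mean Δvolume) and 95% limits of agreement between two observers."""
    diff = pred_vol - gt_vol          # Δvolume = predicted volume - annotated volume
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)     # half-width of the 95% limits of agreement
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(1)
gt = rng.uniform(400, 500, size=50)                # annotated volumes (synthetic)
pred = 1.02 * gt + rng.normal(0, 2.0, size=50)     # mild systematic over-segmentation
slope = lrc(gt, pred)
bias, lo, hi = bland_altman(gt, pred)
```

An LRC near 1 with a near-zero bias and narrow limits of agreement is what indicates high consistency between a method and the expert annotations.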

Figure 4—figure supplement 1
BEN outperforms traditional SOTA methods on functional MRI scans acquired from multiple species at different magnetic field strengths.

(A - D) Violin plots and inner box plots showing the Dice scores of each method for (A) mouse (n=54), (B) rat (n=55), (C) marmoset (n=50), and (D) macaque (n=58) functional MRI scans. The field strength is illustrated with different markers above each panel, and the results for each method are shown in similar hues. The median values of the data are represented by the white hollow dots in the violin plots, the first and third quartiles are represented by the black boxes, and the interquartile range beyond 1.5 times the first and the third quartiles is represented by the black lines. ‘N.A.’ indicates failure of the method on the corresponding dataset. (gray: AFNI; blue: FSL; green: FreeSurfer; purple: Sherm; red: BEN).

Figure 4—figure supplement 2
Linear regression coefficients and Bland–Altman analyses for the remaining datasets.

(A–D, E–H) The segmentation accuracy is assessed using the linear regression coefficients (LRCs) (the number (n) of volumes used is shown above each panel). Compared with the other methods, BEN achieves better performance. The magnetic field strengths are shown with different markers according to the legend. Each dot in a graph represents one sample in the dataset. The error bands represent the 95% CIs. (I - L, M - P) Bland–Altman analysis showing high consistency between the BEN results and expert annotations. The biases between two observers and the 95% CIs for the differences are shown in the tables above each plot. Each method is shown using the same hue in all plots (gray: AFNI; blue: FSL; green: FreeSurfer; purple: Sherm; red: BEN). Different field strengths are shown with different markers (●: 11.7T; ×: 9.4T; ◆: 7T; ★: 4.7T; +: 3T; ▲: 1.5T).

Figure 4—figure supplement 3
Error maps of BEN and SOTA methods.

Heatmap projections of the average false negatives (FN) and false positives (FP) for each species (mouse, rat, marmoset, and macaque). From the first row to the third row: axial, coronal, and sagittal views. Compared with the other methods, BEN shows far fewer FN and FP errors. The upper extreme of the color scale represents a systematically high number of FNs and FPs. (mouse: n=243 in the Mouse-T2WI-11.7T dataset; rat: n=132 in the Rat-T2WI-11.7T dataset; marmoset: n=62 in the Marmoset-T2WI-9.4T dataset; macaque: n=76 in the Macaque-T1WI dataset).
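The FN/FP projections in this figure follow directly from voxelwise set differences between a prediction and the ground truth. A minimal sketch with hypothetical toy masks (not the figure-generation code):

```python
import numpy as np

def error_maps(pred, gt):
    """Voxelwise error maps: FN = missed brain voxels, FP = over-segmented background."""
    fn = np.logical_and(gt, np.logical_not(pred))
    fp = np.logical_and(pred, np.logical_not(gt))
    return fn, fp

# Averaging such maps over a dataset and projecting along one axis
# yields heatmaps like the ones shown in this figure supplement.
gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True
pred = np.zeros((32, 32), bool); pred[9:25, 8:24] = True   # mask shifted one row
fn, fp = error_maps(pred, gt)
print(int(fn.sum()), int(fp.sum()))  # 16 missed voxels, 16 extra voxels
```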

Figure 4—figure supplement 4
Execution time comparison of BEN with other methods.

Compared with conventional methods, BEN offers markedly faster processing. The plot uses a logarithmic scale and shows the average processing time to segment one 3D volume.

Figure 5 with 1 supplement
BEN improves the accuracy of atlas registration by producing high-quality brain extraction results.

(A) The Dorr space, (E) the SIGMA space and (I) the MNI space, corresponding to the mouse, rat and human atlases, respectively. (B, F, J) Integration of BEN into the registration workflow: (i) Three representative samples from a mouse (n=157), a rat (n=88), and a human (n=144) in the native space. (ii) The BEN-segmented brain MRI volumes in the native space are registered into the Dorr/SIGMA/MNI space using the Advanced Normalization Tools (ANTs) toolbox, for comparison with the registration of AFNI-segmented/original MRI volumes to the corresponding atlas. (iii) The warped volumes in the Dorr/SIGMA/MNI spaces. (iv) Error maps showing the fixed atlas in green and the moving warped volumes in magenta. The common areas where the two volumes are similar in intensity are shown in gray. (v) The brain structures in the warped volumes shown in the atlas spaces. In our experiment, BEN significantly improves the alignment between the propagated annotations and the atlas, as confirmed by the improved Dice scores in the (C, G, K) thalamic and (D, H, L) hippocampal regions (box plots: purple for BEN, green for w/o BEN; volumes: n=157 for the mouse, n=88 for the rat, n=144 for the human; statistics: paired t-test, n.s.: no significance, *p<0.05, **p<0.01, ***p<0.001).
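The paired t-test reported for panels (C, G, K) and (D, H, L) compares per-subject Dice scores obtained with and without BEN. A minimal NumPy version of the statistic, using hypothetical Dice values (the paper reports p-values from the standard test, not this code):

```python
import numpy as np

def paired_t(with_ben, without_ben):
    """Paired t statistic over per-subject Dice scores (df = n - 1)."""
    d = np.asarray(with_ben) - np.asarray(without_ben)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Hypothetical registration Dice scores for three subjects.
t = paired_t([0.92, 0.95, 0.96], [0.91, 0.93, 0.93])
```

Pairing subtracts out per-subject difficulty, so a small but consistent improvement in registration accuracy can still reach significance.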

Figure 5—figure supplement 1
Atlas registration with BEN benefits brain volumetric quantification in longitudinal studies.

In our two examples, (A) adult mice (n=40) were imaged at weeks 8, 12, 20, and 32. (B) With BEN (purple boxplots), the volume statistics of the thalamus and hippocampus remain almost constant, which is plausible, as these two brain regions do not further enlarge in adulthood. In comparison, (D) the volume changes of the thalamus and hippocampus in (C) juvenile rats (n=23, imaged at weeks 3, 6, 9, and 12) suggest continuous growth with time, particularly between weeks 3 and 9. In contrast, with AFNI, due to poor atlas registration and consequently inaccurate volume statistics (green box plots), these crucial brain growth trends during development may be missed.

Figure 6 with 3 supplements
BEN provides a measure of uncertainty that potentially reflects rater disagreement.

(A) Representative EPI images from four species (in each column from left to right, mouse, rat, marmoset, and macaque). (B) BEN segmentations (red semi-transparent areas) overlaid with expert consensus annotations (red opaque lines), where the latter are considered the ground truth. (C) Attention maps showing the key semantic features in images as captured by BEN. (D) Uncertainty maps showing the regions where BEN has less confidence. The uncertainty values are normalized (0–1) for better visualization. (E) Annotations by junior raters (n=7) shown as opaque lines of different colors. (F) Bar plots showing the Dice score comparisons between the ground truth and the BEN segmentation results (red) as well as the ground truth and the junior raters’ annotations (gray) for all species (gray: raters, n=7, red: Monte Carlo samples from BEN, n=7; statistics: Mann–Whitney test, **p<0.01). Each dot represents one rater or one sample. Values are represented as the mean and 95% CI. (G) Correlation of the linear regression plot between the Dice score and the normalized uncertainty. The error band represents the 95% CI. Each dot represents one volume (species: mouse; modality: T2WI; field strength: 11.7T; n=133; r=−0.75).
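The uncertainty in panel (D) comes from Monte Carlo dropout: several stochastic forward passes are averaged, and their disagreement is scored as predictive entropy. The sketch below is a minimal NumPy illustration in which simulated probability maps stand in for real network outputs (the array shapes and values are hypothetical, not BEN’s code).

```python
import numpy as np

def mcqa_score(prob_maps, eps=1e-7):
    """Fuse T Monte Carlo dropout probability maps (T, H, W) into one
    segmentation and a scalar uncertainty (mean predictive entropy)."""
    p = prob_maps.mean(axis=0)
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    return p > 0.5, float(entropy.mean())

rng = np.random.default_rng(2)
# When the stochastic passes agree, entropy is low; when they disagree, it is high.
agree = np.clip(rng.normal(0.95, 0.02, (7, 16, 16)), 0.0, 1.0)
disagree = rng.uniform(0.3, 0.7, (7, 16, 16))
_, u_agree = mcqa_score(agree)
_, u_disagree = mcqa_score(disagree)
# Within a batch, the lowest-uncertainty prediction can be kept as a pseudo-label.
```

This is the same quantity that correlates negatively with the Dice score in panel (G): volumes on which the stochastic passes disagree tend to be the ones segmented poorly.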

Figure 6—figure supplement 1
BEN’s uncertainty is consistent with interrater disagreement.

Heat map projections of the raters’ disagreement (orange hues) and BEN’s uncertainty (blue hues) across four species, using the ground truth as the reference. From left to right: sagittal, coronal, and axial views. The disagreement maps and uncertainty maps share similar spatial distribution patterns.

Figure 6—figure supplement 2
Interrater disagreement.

Heatmaps representing the segmentation Dice scores of each junior rater, calculated using the labels of one of the other raters as the ground truth, on the rat and macaque images.

Figure 6—figure supplement 3
BEN provides a measure of uncertainty that potentially reflects the disagreement of conventional toolboxes in human data.

(A) Representative structural MR images of humans in the ABCD dataset (n=3250). (B) BEN segmentations (red semi-transparent areas) overlaid with the expert consensus annotation (red opaque lines), which is considered the ground truth. (C) Attention maps showing the key semantic features in the images captured by BEN. (D) Uncertainty maps showing the regions where BEN has less confidence. The uncertainty values are normalized (0–1) for better visualization. (E) Segmentations obtained by the toolboxes (n=3), shown as opaque lines of different colors.

Tables

Appendix 1—table 1
MRI scan information of the fifteen animal datasets and three human datasets.

FDU: Fudan University; UCAS: University of Chinese Academy of Sciences; UNC: University of North Carolina at Chapel Hill; ABCD: Adolescent Brain Cognitive Development study; ZIB: Zhangjiang International Brain BioBank at Fudan University.

| Species | Modality | Field (T) | Scans | Slices | In-plane resolution (mm) | Thickness (mm) | Manufacturer | Institution |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mouse | T2WI | 11.7 | 243 | 9,030 | 0.10 × 0.10 | 0.4 | Bruker | FDU |
| | | 9.4 | 14 | 448 | 0.06 × 0.06 | 0.4 | Bruker | UCAS |
| | | 7 | 14 | 126 | 0.08 × 0.08 | 0.8 | Bruker | UCAS |
| | EPI | 11.7 | 54 | 2,198 | 0.2 × 0.2 | 0.4 | Bruker | FDU |
| | | 9.4 | 20 | 360 | 0.15 × 0.15 | 0.5 | Bruker | UCAS |
| | SWI | 11.7 | 50 | 1,500 | 0.06 × 0.06 | 0.5 | Bruker | FDU |
| | ASL | 11.7 | 58 | 696 | 0.167 × 0.167 | 1 | Bruker | FDU |
| Rat | T2WI | 11.7 | 132 | 5,544 | 0.14 × 0.14 | 0.6 | Bruker | FDU |
| | | 9.4 | 55 | 660 | 0.1 × 0.1 | 1 | Bruker | UNC |
| | | 7 | 88 | 4,400 | 0.09 × 0.09 | 0.4 | Bruker | UCAS |
| | EPI | 9.4 | 55 | 660 | 0.32 × 0.32 | 1 | Bruker | UNC |
| Sum of rodent | | | 783 | 25,622 | | | | |
| Marmoset | T2WI | 9.4 | 62 | 2,480 | 0.2 × 0.2 | 1 | Bruker | UCAS |
| | EPI | 9.4 | 50 | 1,580 | 0.5 × 0.5 | 1 | Bruker | UCAS |
| Macaque* | T1WI | 4.7, 3, 1.5 | 76 | 20,063 | 0.3 × 0.3 to 0.6 × 0.6 | 0.3–0.75 | Siemens, Bruker, Philips | Multicenter |
| | EPI | 1.5 | 58 | 2,557 | 0.7 × 0.7 to 2.0 × 2.0 | 1.0–3.0 | Siemens, Bruker, Philips | Multicenter |
| Sum of nonhuman primate | | | 246 | 26,680 | | | | |
| Human (ABCD) | T1WI | 3 | 3,250 | 552,500 | 1.0 × 1.0 | 1 | GE, Siemens, Philips | Multicenter |
| Human (UK Biobank) | T1WI | 3 | 963 | 196,793 | 1.0 × 1.0 | 1 | Siemens | Multicenter |
| Human (ZIB) | T1WI | 3 | 388 | 124,160 | 0.8 × 0.8 | 0.8 | Siemens | FDU |
| Sum of human | | | 4,601 | 873,453 | | | | |
| In total | | | 5,630 | 925,755 | | | | |
Appendix 1—table 2
Performance comparison of BEN with SOTA methods on the source domain (Mouse-T2WI-11.7T).

Dice: Dice score; SEN: sensitivity; SPE: specificity; ASD: average surface distance; HD95: 95th percentile Hausdorff distance.

| Method | Dice | SEN | SPE | ASD | HD95 |
| --- | --- | --- | --- | --- | --- |
| Sherm | 0.9605 | 0.9391 | 0.9982 | 0.6748 | 0.4040 |
| AFNI | 0.9093 | 0.9162 | 0.9894 | 1.9346 | 0.9674 |
| FSL | 0.3948 | 1.0000 | 0.6704 | 20.4724 | 5.5975 |
| BEN | 0.9859 | 0.9889 | 0.9982 | 0.3260 | 0.1436 |
Appendix 1—table 3
Performance comparison of BEN with SOTA methods on two public datasets.

Dice: Dice score; Jaccard: Jaccard Similarity; SEN: sensitivity; HD: Hausdorff distance.

CARMI dataset* (rodent), T2WI:

| Methods | Dice | Jaccard | SEN | HD (voxels) |
| --- | --- | --- | --- | --- |
| RATS (Oguz et al., 2014) | 0.91 | 0.83 | 0.85 | 8.76 |
| PCNN (Chou et al., 2011) | 0.89 | 0.80 | 0.90 | 7.00 |
| SHERM (Liu et al., 2020) | 0.88 | 0.79 | 0.86 | 6.72 |
| U-Net (Hsu et al., 2020) | 0.97 | 0.94 | 0.96 | 4.27 |
| BEN | 0.98 | 0.95 | 0.98 | 2.72 |

CARMI dataset* (rodent), EPI:

| Methods | Dice | Jaccard | SEN | HD (voxels) |
| --- | --- | --- | --- | --- |
| RATS (Oguz et al., 2014) | 0.86 | 0.75 | 0.75 | 7.68 |
| PCNN (Chou et al., 2011) | 0.85 | 0.74 | 0.93 | 8.25 |
| SHERM (Liu et al., 2020) | 0.80 | 0.67 | 0.78 | 7.14 |
| U-Net (Hsu et al., 2020) | 0.96 | 0.93 | 0.96 | 4.60 |
| BEN | 0.97 | 0.94 | 0.98 | 4.20 |

PRIME-DE dataset (macaque), T1WI:

| Methods | Dice | Jaccard | SEN | HD (voxels) |
| --- | --- | --- | --- | --- |
| FSL | 0.81 | 0.71 | 0.96 | 32.38 |
| FreeSurfer | 0.56 | 0.39 | 0.99 | 42.18 |
| AFNI | 0.86 | 0.79 | 0.82 | 25.46 |
| U-Net (Wang et al., 2021) | 0.98 | - | - | - |
| BEN | 0.98 | 0.94 | 0.98 | 13.21 |
Appendix 1—table 4
Ablation study of each module of BEN in the source domain.

(a) Training with all labeled data. (b) Training with 5% of the labeled data. (c) Training with 5% of the labeled data using BEN’s semi-supervised learning (SSL) module; the remaining 95% of the data, unlabeled, is also used for training. The backbone of BEN is the nonlocal U-Net (NL-U-Net). Since this ablation study is performed on the source domain, the adaptive batch normalization (AdaBN) module is not used. Dice: Dice score; SEN: sensitivity; SPE: specificity; HD95: 95th percentile Hausdorff distance.

| Method | Labeled scans | Unlabeled scans | Dice | SEN | SPE | HD95 |
| --- | --- | --- | --- | --- | --- | --- |
| (a) U-Net | 243 | 0 | 0.9773 | 0.9696 | 0.9984 | 0.2132 |
| (a) Backbone | 243 | 0 | 0.9844 | 0.9830 | 0.9984 | 0.0958 |
| (b) U-Net | 12 | 0 | 0.9588 | 0.9546 | 0.9945 | 1.1388 |
| (b) Backbone | 12 | 0 | 0.9614 | 0.9679 | 0.9970 | 0.7468 |
| (c) Backbone + SSL | 12 | 231 | 0.9728 | 0.9875 | 0.9952 | 0.2937 |
Appendix 1—table 5
Ablation study of each module of BEN in the target domain.

(a) Training from scratch with all labeled data. (b) Training from scratch with 5% of the labeled data. (c) Fine-tuning (using pretrained weights) with 5% of the labeled data. (d) Fine-tuning with 5% of the labeled data using BEN’s SSL and AdaBN modules; the remaining 95% of the data, unlabeled, is also used in the training stage. The backbone of BEN is the nonlocal U-Net (NL-U-Net). Dice: Dice score; SEN: sensitivity; SPE: specificity; HD95: 95th percentile Hausdorff distance.

| Method | Pretrained | Labeled scans | Unlabeled scans | Dice | SEN | SPE | HD95 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| (a) Backbone (from scratch) | No | 132 | 0 | 0.9827 | 0.9841 | 0.9987 | 0.1881 |
| (b) Backbone (from scratch) | No | 7 | 0 | 0.8990 | 0.8654 | 0.9960 | 4.6241 |
| (c) Backbone | Yes | 7 | 0 | 0.9483 | 0.9063 | 0.9997 | 0.6563 |
| (c) Backbone + AdaBN | Yes | 7 | 0 | 0.9728 | 0.9875 | 0.9952 | 0.2937 |
| (d) Backbone + SSL | Yes | 7 | 125 | 0.9614 | 0.9679 | 0.9970 | 0.7468 |
| (d) Backbone + AdaBN + SSL | Yes | 7 | 125 | 0.9779 | 0.9763 | 0.9986 | 0.2912 |
Appendix 1—table 6
BEN provides interfaces for the following conventional neuroimaging software.
Appendix 1—table 7
Protocols and parameters used for conventional neuroimaging toolboxes.
| Method | Command | Parameter | Description | Range | Chosen value |
| --- | --- | --- | --- | --- | --- |
| AFNI | 3dSkullStrip | -marmoset | Brain of a marmoset | on/off | on for marmoset |
| | | -rat | Brain of a rat | on/off | on for rodent |
| | | -monkey | Brain of a monkey | on/off | on for macaque |
| FreeSurfer | mri_watershed | -T1 | Specify T1 input volume | on/off | on |
| | | -r | Specify the radius of the brain (in voxel units) | positive number | 60 |
| | | -less | Shrink the surface | on/off | off |
| | | -more | Expand the surface | on/off | off |
| FSL | bet2 | -f | Fractional intensity threshold | 0.1–0.9 | 0.5 |
| | | -m | Generate binary brain mask | on/off | on |
| | | -n | Don't generate the default brain image output | on/off | on |
| Sherm | sherm | -animal | Species of the task | 'rat' or 'mouse' | according to the task |
| | | -isotropic | Characteristics of voxels | 0/1 | 0 |
Author response table 1
Comparison of brain extraction performance of different methods on different datasets.

SkullStrip: semi-automatic registration-based method. (ASD: average surface distance. HD95: 95% Hausdorff distance).

| Species | Field | Modality | Method | Dice | Sensitivity | Specificity | ASD | HD95 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mouse | 11.7T | T2WI | BEN | 0.9859 | 0.9889 | 0.9982 | 0.3260 | 0.1436 |
| | | | Sherm | 0.9605 | 0.9391 | 0.9982 | 0.6748 | 0.4040 |
| | | | AFNI | 0.9093 | 0.9162 | 0.9894 | 1.9346 | 0.9674 |
| | | | SkullStrip | 0.9697 | 0.9783 | 0.9957 | 0.4729 | 0.2829 |
| | | | FSL | 0.3948 | 1.0000 | 0.6704 | 20.4724 | 5.5975 |
| | | EPI | BEN | 0.9791 | 0.9912 | 0.9970 | 0.5945 | 0.4946 |
| | | | Sherm | 0.9440 | 0.9206 | 0.9971 | 0.4827 | 0.4896 |
| | | | AFNI | 0.9139 | 0.9365 | 0.9890 | 0.7835 | 0.7315 |
| | | | SkullStrip | 0.9237 | 0.9502 | 0.9899 | 0.8570 | 0.5673 |
| | | SWI | BEN | 0.9879 | 0.9912 | 0.9979 | 0.4548 | 0.2420 |
| | | | Sherm | 0.9586 | 0.9342 | 0.9983 | 0.4633 | 0.4019 |
| | | | AFNI | 0.9060 | 0.8631 | 0.9950 | 0.8632 | 0.5522 |
| | | | SkullStrip | 0.9600 | 0.9590 | 0.9954 | 0.4777 | 0.3797 |
| | | ASL | BEN | 0.9807 | 0.9804 | 0.9966 | 0.1766 | 0.2679 |
| | | | Sherm | 0 | 0 | 0.9938 | 3.5669 | 10.6817 |
| | | | AFNI | 0.7584 | 0.9379 | 0.9324 | 4.3252 | 2.3752 |
| | | | SkullStrip | 0.8893 | 0.9177 | 0.9737 | 0.7351 | 0.7029 |
| | 9.4T | T2WI | BEN | 0.9830 | 0.9903 | 0.9954 | 0.5129 | 0.3284 |
| | | | Sherm | 0.8675 | 0.7893 | 0.9951 | 1.7973 | 1.4372 |
| | | | AFNI | 0.8699 | 0.9532 | 0.9548 | 2.2979 | 1.2606 |
| | | | SkullStrip | 0.9298 | 0.9683 | 0.9750 | 1.5024 | 0.8822 |
| | | EPI | BEN | 0.9535 | 0.9567 | 0.9920 | 0.5686 | 0.5021 |
| | | | Sherm | 0.9368 | 0.9394 | 0.9901 | 0.5956 | 0.4370 |
| | | | AFNI | 0.8855 | 0.9585 | 0.9650 | 0.9960 | 0.6349 |
| | | | SkullStrip | 0.9249 | 0.9613 | 0.9810 | 0.7440 | 0.6718 |
| | 7T | T2WI | BEN | 0.9815 | 0.9682 | 0.9951 | 0.4272 | 0.2595 |
| | | | Sherm | 0.8436 | 0.9423 | 0.9596 | 4.3723 | 1.6113 |
| | | | AFNI | 0.9123 | 0.9423 | 0.9910 | 2.4293 | 0.8190 |
| | | | SkullStrip | 0.8599 | 0.9423 | 0.9703 | 1.5989 | 0.9821 |
| Rat | 7T | T2WI | BEN | 0.9854 | 0.9878 | 0.9966 | 0.4688 | 0.1790 |
| | | | Sherm | 0.9532 | 0.9314 | 0.9956 | 0.8428 | 1.0463 |
| | | | AFNI | 0.8285 | 0.7232 | 0.9969 | 2.8672 | 2.7561 |
| | | | FSL | - | - | - | - | - |
| | | | SkullStrip | 0.9754 | 0.9813 | 0.9939 | 0.5015 | 0.4158 |
| | | EPI | BEN | 0.9705 | 0.9415 | 0.9966 | 0.2858 | 0.7081 |
| | | | Sherm | 0.6339 | 0.4645 | 0.9998 | 1.2283 | 3.4795 |
| | | | AFNI | 0.8080 | 0.7151 | 0.9896 | 0.9018 | 4.2521 |
| | | | SkullStrip | 0.9394 | 0.9500 | 0.9869 | 0.4161 | 0.7991 |
| Marmoset | 9.4T | T2WI | BEN | 0.9804 | 0.9788 | 0.9957 | 0.5568 | 5.7393 |
| | | | Sherm | - | - | - | - | - |
| | | | AFNI | 0.9311 | 0.9526 | 0.9803 | 1.1744 | 9.7897 |
| | | | SkullStrip | 0.9755 | 0.9789 | 0.9943 | 0.4005 | 3.8443 |
| | | | FSL | 0.7689 | 0.8126 | 0.9552 | 2.0939 | 5.1353 |
| | | EPI | BEN | 0.9774 | 0.9816 | 0.9949 | 0.9126 | 3.500 |
| | | | Sherm | - | - | - | - | - |
| | | | AFNI | 0.9159 | 0.9142 | 0.9847 | 0.8314 | 5.1391 |
| | | | SkullStrip | 0.9533 | 0.9286 | 0.9965 | 0.4489 | 3.0342 |
| | | | FSL | 0.9326 | 0.9748 | 0.9757 | 1.0846 | 8.7247 |

Additional files


Ziqi Yu, Xiaoyang Han, Wenjing Xu, Jie Zhang, Carsten Marr, Dinggang Shen, Tingying Peng, Xiao-Yong Zhang, Jianfeng Feng (2022) A generalizable brain extraction net (BEN) for multimodal MRI data from rodents, nonhuman primates, and humans. eLife 11:e81217. https://doi.org/10.7554/eLife.81217