Bi-channel Image Registration and Deep-learning Segmentation (BIRDS) for efficient, versatile 3D mapping of mouse brain
Abstract
We have developed an open-source software package called BIRDS (bi-channel image registration and deep-learning segmentation) for the mapping and analysis of 3D microscopy data and applied it to the mouse brain. The BIRDS pipeline includes image pre-processing, bi-channel registration, automatic annotation, creation of a 3D digital frame, high-resolution visualization, and expandable quantitative analysis. The bi-channel registration algorithm adapts to various types of whole-brain data from different microscopy platforms and shows dramatically improved registration accuracy. Additionally, because the platform combines registration with neural networks, it offers an advantage over other platforms: the registration procedure can readily provide training data for network construction, while the trained neural network can efficiently segment incomplete or defective brain data that are otherwise difficult to register. Our software is thus optimized to enable either minute-timescale registration-based segmentation of cross-modality, whole-brain datasets or real-time inference-based image segmentation of various brain regions of interest. Jobs can be easily submitted and implemented via a Fiji plugin that can be adapted to most computing environments.
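To make the registration-provides-training-data idea concrete, the following is a minimal, hypothetical sketch of that step using SimpleITK rather than the BIRDS codebase; file names, registration parameters, and the affine-only transform are illustrative assumptions, not the authors' implementation (BIRDS uses a bi-channel registration algorithm not shown here).

```python
# Hypothetical illustration only: register an annotated atlas to a sample
# brain and warp the atlas labels into sample space, so the warped labels
# can serve as training targets for a segmentation network.
import SimpleITK as sitk

# Load the sample brain, the atlas template, and its annotation volume
# (file names are placeholders).
sample = sitk.ReadImage("sample_brain.nii.gz", sitk.sitkFloat32)
atlas = sitk.ReadImage("atlas_template.nii.gz", sitk.sitkFloat32)
atlas_labels = sitk.ReadImage("atlas_annotation.nii.gz", sitk.sitkUInt16)

# Simple intensity-based affine registration of the atlas to the sample.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=2.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(
        sample, atlas, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)
transform = reg.Execute(sample, atlas)

# Warp the annotation with nearest-neighbour interpolation so label values
# stay discrete; the result is a voxel-wise label map in sample space.
warped_labels = sitk.Resample(
    atlas_labels, sample, transform, sitk.sitkNearestNeighbor, 0)
sitk.WriteImage(warped_labels, "training_labels.nii.gz")
```

In this sketch the warped label map plays the role of the automatically generated annotations that, in the workflow described above, would be used to train a segmentation network for data that cannot be registered directly.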
Data availability
The Allen CCF is open access and available, with related tools, at https://atlas.brain-map.org/. The datasets (Brain1~5) have been deposited in Dryad at https://datadryad.org/stash/share/4fesXcJif0L2DnSj7YmjREe37yPm1bEnUiK49ELtALg. The code and plugin can be found at the following links: https://github.com/bleach1by1/BIRDS_plugin, https://github.com/bleach1by1/birds_reg, https://github.com/bleach1by1/birds_dl.git, and https://github.com/bleach1by1/BIRDS_demo. All data generated or analysed during this study are included in the manuscript. Source data files have been provided for Figures 1, 2, 3, 4, and 5; Figure 2-figure supplements 3 and 4; and Figure 5-figure supplements 2 and 3.
- Brain 1 and 2, Dryad Digital Repository.
- Brain 5, Dryad Digital Repository.
- Brain 3 and 4, Dryad Digital Repository.
Article and author information
Author details
Funding
National Key R&D program of China (2017YFA0700501)
- Peng Fei
National Natural Science Foundation of China (21874052)
- Peng Fei
National Natural Science Foundation of China (31871089)
- Yunyun Han
Innovation Fund of WNLO
- Peng Fei
Junior Thousand Talents Program of China
- Peng Fei
Junior Thousand Talents Program of China
- Yunyun Han
The FRFCU (HUST:2172019kfyXKJC077)
- Yunyun Han
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Copyright
© 2021, Wang et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.