Accurate and versatile 3D segmentation of plant tissues at cellular resolution
Abstract
Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings, and even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface (https://github.com/hci-unihd/plant-seg).
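The two-stage idea described above (predict a boundary probability map, then partition it into cells) can be illustrated with a minimal sketch. The seeded 3D watershed below, built on scikit-image and SciPy, is a simple stand-in for PlantSeg's graph-partitioning step, not its actual implementation; the function name `segment_from_boundaries` and the `seed_threshold` parameter are hypothetical choices for illustration.

```python
# Illustrative sketch only; PlantSeg's real pipeline uses a trained CNN
# and graph partitioning, not this simplified watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def segment_from_boundaries(boundary_prob, seed_threshold=0.4):
    """Segment a 3D volume given a boundary probability map.

    boundary_prob: float array in [0, 1], high values at cell walls
    (in PlantSeg this map comes from the neural network).
    """
    # Seeds: connected components of clearly non-boundary voxels.
    seeds, _ = ndi.label(boundary_prob < seed_threshold)
    # Flood from the seeds, climbing the boundary-probability landscape,
    # so cells meet exactly at the predicted walls.
    return watershed(boundary_prob, markers=seeds)

# Usage with random data standing in for a network prediction:
pred = np.random.rand(32, 64, 64).astype(np.float32)
cell_labels = segment_from_boundaries(pred)
```

In the published pipeline, graph partitioning (rather than plain watershed) resolves ambiguous boundary evidence globally, which is what makes the segmentation robust across tissues and acquisition settings.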
Data availability
All data used in this study have been deposited in the Open Science Framework: https://osf.io/uzq3w/. Additionally, the Arabidopsis 3D Digital Tissue Atlas is available at https://osf.io/fzr56/.
Article and author information
Funding
Deutsche Forschungsgemeinschaft (FOR2581)
- Jan U Lohmann
- Miltos Tsiantis
- Fred Hamprecht
- Kay Schneitz
- Alexis Maizel
- Anna Kreshuk
Leverhulme Trust (RPG-2016-049)
- George Bassel
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Copyright
© 2020, Wolny et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 15,320 views
- 1,654 downloads
- 224 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Citations by DOI
- 224 citations for umbrella DOI https://doi.org/10.7554/eLife.57613