CellSeg3D: Self-supervised 3D cell segmentation for fluorescence microscopy

  1. Cyril Achard
  2. Timokleia Kousi
  3. Markus Frey
  4. Maxime Vidal
  5. Yves Paychere
  6. Colin Hofmann
  7. Asim Iqbal
  8. Sebastien B Hausmann
  9. Stéphane Pagès
  10. Mackenzie Weygandt Mathis (corresponding author)
  1. Brain Mind Institute and Neuro X, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
  2. Wyss Center for Bio and Neuroengineering, Switzerland
4 figures, 3 tables and 1 additional file

Figures

Figure 1 with 2 supplements
Performance of 3D semantic and instance segmentation models.

(a) Raw mesoSPIM whole-brain sample, volumes, and corresponding ground truth labels from somatosensory (S1) and visual (V1) cortical regions. (b) Evaluation of instance segmentation performance over three data subsets for: a baseline using Otsu thresholding only; the supervised models Cellpose, StarDist, SwinUNetR, and SegResNet; and our self-supervised model WNet3D. The F1-score is computed from the Intersection over Union (IoU) with ground truth labels, then averaged. Error bars represent 50% confidence intervals (CIs). (c) View of 3D instance labels from models, as noted, for the visual cortex volume. (d) Illustration of our WNet3D architecture, showcasing the dual 3D U-Net structure with our modifications (see Methods).
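For readers who want to reproduce this kind of evaluation, the following minimal Python sketch shows one way to compute an instance-segmentation F1-score from IoU matching between predicted and ground-truth label volumes. It is not the authors' evaluation code; the greedy matching and the example 0.5 IoU threshold are illustrative assumptions.

```python
# Minimal sketch: instance F1-score from IoU matching (illustrative, not the paper's code).
# gt and pred are label volumes where 0 is background and each instance has a unique integer id.
import numpy as np

def instance_f1(gt: np.ndarray, pred: np.ndarray, iou_thresh: float = 0.5) -> float:
    gt_ids = [i for i in np.unique(gt) if i != 0]
    pred_ids = [i for i in np.unique(pred) if i != 0]
    # IoU between every (ground-truth, predicted) instance pair
    iou = np.zeros((len(gt_ids), len(pred_ids)))
    for gi, g in enumerate(gt_ids):
        g_mask = gt == g
        for pi, p in enumerate(pred_ids):
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            iou[gi, pi] = inter / union if union else 0.0
    # Greedy one-to-one matching above the IoU threshold
    tp, used = 0, set()
    for gi in range(len(gt_ids)):
        best, best_iou = -1, iou_thresh
        for pi in range(len(pred_ids)):
            if pi not in used and iou[gi, pi] >= best_iou:
                best, best_iou = pi, iou[gi, pi]
        if best >= 0:
            used.add(best)
            tp += 1
    fp = len(pred_ids) - tp
    fn = len(gt_ids) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
```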

Figure 1—figure supplement 1
Hyperparameter tuning of baselines and statistics.

(a, b, c) Hyperparameter optimization for several supervised models. In Cellpose, the cell probability threshold is applied before the sigmoid, hence values between −12 and 12 were tested. CellSeg3D models return predictions between 0 and 1 after applying the softmax; values tested were therefore in this range. Error bars show 95% confidence intervals (CIs). (d) StarDist hyperparameter optimization. Several values were tested for the non-maximum suppression (NMS) threshold and the cell probability threshold; the heatmap shows the F1-Score. (e) Pooled F1-Scores per split, related to Figure 2a, used for the statistical testing shown in f. The central box represents the interquartile range (IQR) of values with the median as a horizontal line; the upper and lower limits of the box are the upper and lower quartiles. Whiskers extend to data points within 1.5 IQR of the quartiles. Outliers are shown separately. (f) Pairwise Conover’s test p-values for the F1-Score values per model shown in e. Colors indicate the level of significance. (g) Example image of WNet3D output before and after artifact filtering (the filtered version is also shown in Figure 1c), plus an additional example of WNet3D output in S1 cortex.
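The statistics in panels e and f can be reproduced in outline with SciPy and scikit-posthocs; the sketch below uses placeholder per-split F1-Scores rather than the paper's data, and the group names are only examples.

```python
# Hedged sketch: Kruskal-Wallis test over pooled per-split F1-Scores,
# followed by pairwise Conover's post hoc tests (placeholder data).
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp

scores = {
    "WNet3D": np.array([0.61, 0.58, 0.63]),      # placeholder values
    "SwinUNetR": np.array([0.55, 0.60, 0.57]),   # placeholder values
    "Cellpose": np.array([0.50, 0.52, 0.48]),    # placeholder values
}

h, p = kruskal(*scores.values())
print(f"Kruskal-Wallis H={h:.2f}, p={p:.3f}")

# Pairwise Conover's test; returns a matrix of p-values between groups
pvals = sp.posthoc_conover(list(scores.values()))
print(pvals)
```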

Figure 1—figure supplement 2
Training WNet3D: overview of the training process.

(a) The loss for the encoder Uenc is the SoftNCuts loss, whereas the reconstruction loss for the decoder Udec is the mean squared error (MSE). The weighted sum of the losses is calculated as indicated in Methods. For select epochs, input volumes are shown, with outputs from the encoder Uenc above and outputs from the decoder Udec below. (b) Additional model inference results on the Mouse Skull dataset, and an example of post-processing to correct holes and other artifacts, as shown with the red-to-white arrows.
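A minimal PyTorch-style sketch of the weighted loss combination described above is given below; `soft_ncuts`, the encoder/decoder callables, and the 0.5/0.5 weights are placeholders standing in for the actual formulation detailed in Methods.

```python
# Hedged sketch of the WNet3D objective: SoftNCuts loss on the encoder output
# plus an MSE reconstruction loss on the decoder output, combined as a weighted sum.
import torch
import torch.nn.functional as F

def wnet_step(encoder, decoder, volume, soft_ncuts, w_ncuts=0.5, w_rec=0.5):
    # Encoder produces a soft class-membership map for the input volume
    membership = torch.softmax(encoder(volume), dim=1)
    loss_ncuts = soft_ncuts(membership, volume)      # placeholder SoftNCuts implementation
    # Decoder reconstructs the input from the membership map
    reconstruction = decoder(membership)
    loss_rec = F.mse_loss(reconstruction, volume)
    # Weighted sum of the two objectives (weights are illustrative)
    return w_ncuts * loss_ncuts + w_rec * loss_rec
```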

Figure 2
Benchmarking the performance of WNet3D vs. supervised models with various amounts of training data on our mesoSPIM dataset.

(a) Semantic segmentation performance: comparison of model efficiency, indicating the volume of training data required to achieve a given performance level. Each supervised model was trained with an increasing percentage of the training data (10, 20, 60, or 80%, left to right/dark to light within each model grouping; see legend); the F1-Score with an IoU ≥ 0 was computed on unseen test data, over three data subsets for each training/evaluation split. Our self-supervised model (WNet3D) was also trained on a subset of the training images, but always without ground truth human labels. Far right: performance of the pre-trained WNet3D available in the plugin, with and without cropping the regions of the image where artifacts are present. See Methods for details. The central box represents the interquartile range (IQR) of values with the median as a horizontal line; the upper and lower limits of the box are the upper and lower quartiles. Whiskers extend to data points within 1.5 IQR of the quartiles. (b) Instance segmentation performance of Swin-UNetR and WNet3D (pretrained, see Methods), evaluated on unseen data across three data subsets, compared with a Swin-UNetR model trained using labels from the self-supervised WNet3D. Here, WNet3D was trained on separate data, producing semantic labels that were then used to train a supervised Swin-UNetR model, still on held-out data. This supervised model was evaluated in the same way as the other models, on three held-out images from our dataset that were unseen during training. Error bars indicate 50% confidence intervals (CIs). (c) Workflow diagram depicting the segmentation pipeline: raw data can either be used directly (self-supervised) or labeled and used for training, after which other data can be used for model inference. Each stream concludes with post hoc inspection and refinement, if needed (post-processing analysis and/or refining the model).
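Panel b's pseudo-label workflow can be sketched as follows: semantic labels from a pre-trained WNet3D are binarized and used as targets for a supervised Swin-UNetR. The MONAI-based code below is one possible implementation under assumed names (`wnet3d_predict` is a placeholder inference routine), not the authors' training script.

```python
# Hedged sketch: train a Swin-UNetR on pseudo-labels produced by a self-supervised WNet3D.
import torch
from monai.networks.nets import SwinUNETR
from monai.losses import DiceCELoss

def train_on_pseudo_labels(volumes, wnet3d_predict, epochs=10, device="cuda"):
    # Spatial sizes are assumed divisible by 32, as required by SwinUNETR
    model = SwinUNETR(img_size=(64, 64, 64), in_channels=1, out_channels=2).to(device)
    loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for vol in volumes:  # vol: (1, 1, D, H, W) tensor
            vol = vol.to(device)
            # Pseudo-label: binarized WNet3D semantic prediction, shape (1, 1, D, H, W)
            target = (wnet3d_predict(vol) > 0.5).long()
            opt.zero_grad()
            loss = loss_fn(model(vol), target)
            loss.backward()
            opt.step()
    return model
```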

Figure 3
Benchmarking on additional datasets.

(a) Left: 3D Platynereis-ISH-Nuclei confocal data; middle is WNet3D semantic segmentation; right is instance segmentation. (b) Instance segmentation performance (zero-shot) of the pretrained WNet3D, Otsu threshold, and supervised models (Cellpose, StarDist) on select datasets featured in a, shown as F1-score vs intersection over union (IoU) with ground truth labels. (c) Left: 3D Platynereis-Nuclei LSM data; middle is WNet3D semantic segmentation; right is instance segmentation. (d) Instance segmentation performance (zero-shot) of the pretrained WNet3D, Otsu threshold, and supervised models (Cellpose, StarDist) on select datasets featured in c, shown as F1-score vs IoU with ground truth labels. (e) Left: Mouse Skull-Nuclei Zeiss LSM 880 data; middle is WNet3D semantic segmentation; right is instance segmentation. A demo of using CellSeg3D to obtain these results is available here: https://www.youtube.com/watch?v=U2a9IbiO7nE&t=12s. (f) Instance segmentation performance (zero-shot) of the pretrained WNet3D, Otsu threshold, and supervised models (Cellpose, StarDist) on select datasets featured in e, shown as F1-score vs IoU with ground truth labels.

Figure 4
CellSeg3D napari plugin example outputs.

(a) Demo using a cleared, mesoSPIM-imaged c-FOS mouse brain, followed by BrainReg registration to the Allen Brain Atlas (https://mouse.brain-map.org/), then processing of regions of interest (ROIs) with CellSeg3D. Here, WNet3D was used for semantic segmentation, followed by processing for instance segmentation. (b) Qualitative example of a WNet3D-generated prediction (thresholded) and labels on a crop from the c-FOS-labeled whole brain. A demo of using CellSeg3D to obtain these results is available here: https://www.youtube.com/watch?v=3UOvvpKxEAo.

Tables

Table 1
F1-Scores for additional benchmark datasets, where we test our pretrained WNet3D, zero-shot.

Kruskal-Wallis H test results (dataset: H, p-value): Platynereis-ISH-Nuclei-CBG: H = 1.6, p = 0.69; Platynereis-Nuclei-CBG: H = 3.06, p = 0.38; Mouse-Skull-Nuclei-CBG (within post-processed): H = 10.13, p = 0.018; Mouse-Skull-Nuclei-CBG (no processing): H = 15.8, p = 0.001.

Model                               F1-Score at IoU threshold 0.1–0.9, and mean
                                    0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    Mean

Platynereis-ISH-Nuclei-CBG
  Otsu threshold                    0.872  0.847  0.817  0.772  0.706  0.605  0.474  0.246  0.026  0.596
  Cellpose (supervised)             0.896  0.866  0.832  0.778  0.698  0.576  0.362  0.117  0.010  0.570
  StarDist (supervised)             0.841  0.822  0.786  0.686  0.536  0.326  0.110  0.011  0.000  0.458
  WNet3D (zero-shot)                0.876  0.856  0.834  0.790  0.729  0.632  0.492  0.249  0.034  0.610

Platynereis-Nuclei-CBG
  Otsu threshold                    0.798  0.773  0.733  0.702  0.663  0.590  0.507  0.336  0.077  0.576
  Cellpose (supervised)             0.691  0.663  0.624  0.594  0.553  0.497  0.417  0.290  0.062  0.488
  StarDist (supervised)             0.850  0.833  0.803  0.764  0.700  0.611  0.492  0.272  0.019  0.594
  WNet3D (zero-shot)                0.838  0.808  0.778  0.739  0.695  0.617  0.512  0.338  0.059  0.598

Mouse-Skull-Nuclei-CBG (most challenging dataset)
  Otsu threshold                    0.667  0.634  0.596  0.566  0.495  0.427  0.369  0.276  0.097  0.458
  Otsu threshold + post-processing  0.695  0.668  0.647  0.612  0.543  0.490  0.428  0.334  0.137  0.506
  Cellpose (supervised)             0.137  0.111  0.077  0.054  0.038  0.028  0.020  0.014  0.006  0.054
  Cellpose + post-processing        0.386  0.362  0.339  0.312  0.266  0.228  0.189  0.120  0.027  0.248
  StarDist (supervised)             0.573  0.533  0.411  0.253  0.135  0.065  0.020  0.003  0.000  0.221
  StarDist + post-processing        0.689  0.649  0.557  0.407  0.276  0.174  0.073  0.010  0.000  0.315
  WNet3D (zero-shot)                0.766  0.732  0.669  0.572  0.455  0.355  0.254  0.175  0.033  0.446
  WNet3D + post-processing          0.807  0.783  0.763  0.725  0.637  0.534  0.428  0.296  0.073  0.561
Table 2
Dataset ground-truth cell count per volume.
Region          Size (pixels)      Count (# of cells)
Sensorimotor
  1             199 × 106 × 147    343
  2             299 × 78 × 111     365
  3             299 × 105 × 147    631
  4             249 × 93 × 114     396
  5             249 × 86 × 94      347
Visual          329 × 127 × 214    485
Table 3
Parameters used for instance segmentation with the pyclEsperanto Voronoi-Otsu function.
Dataset            Outline σ    Spot σ
mesoSPIM           0.65         0.65
Mouse Skull        1            15
Platynereis-ISH    0.5          2
Platynereis        0.5          2.75
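
As a usage note, the Voronoi-Otsu step with the parameters above can be run with pyclesperanto-prototype's `voronoi_otsu_labeling` function; the sketch below uses a random placeholder volume in place of a thresholded WNet3D prediction, with the mesoSPIM parameters from Table 3.

```python
# Hedged sketch: Voronoi-Otsu instance segmentation with pyclesperanto-prototype.
import numpy as np
import pyclesperanto_prototype as cle

semantic = np.random.rand(64, 64, 64).astype(np.float32)  # placeholder volume

# mesoSPIM parameters from Table 3: spot sigma 0.65, outline sigma 0.65
labels = cle.voronoi_otsu_labeling(semantic, spot_sigma=0.65, outline_sigma=0.65)

labels_np = cle.pull(labels)  # transfer result back to a NumPy array
print(int(labels_np.max()), "instances found")
```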


Cite this article

Cyril Achard, Timokleia Kousi, Markus Frey, Maxime Vidal, Yves Paychere, Colin Hofmann, Asim Iqbal, Sebastien B Hausmann, Stéphane Pagès, Mackenzie Weygandt Mathis (2025) CellSeg3D: Self-supervised 3D cell segmentation for fluorescence microscopy. eLife 13:RP99848. https://doi.org/10.7554/eLife.99848.4