Author Response
The following is the authors’ response to the original reviews.
eLife assessment
This article describes a useful python-based image-analysis tool for bacteria growing in the 'mother-machine' microfluidic device. This new method for image segmentation and tracking offers a user-friendly graphical interface based on the previously developed, promising environment for image analysis 'Napari'. The authors demonstrate the usefulness of their software and its robust performance by comparing it to other methods used for the same purpose. The comparison provides solid support for the new method, although it would have been even stronger if tested using data sets from other groups. This article will be of interest for scientists who utilize the 'mother machine', not least because it also provides a short overview of how to set up this widely used device.
Public Reviews:
Reviewer #1 (Public Review):
The authors aim to develop an easy-to-use image analysis tool for the mother machine that is used for single-cell time-lapse imaging. Compared with related software, they tried to make this software more user-friendly for non-experts with a design of "What You Put Is What You Get". This software is implemented as a plugin for Napari, which is an emerging microscopy image analysis platform. Users can interactively adjust the parameters in the pipeline through a good visualization and interaction interface.
Strengths:
- Updated platform with great 2D/3D visualization and annotation support.
- Integrated one-stop pipeline for mother machine image processing.
- Interactive user-friendly interface.
- Users can visualize intermediate results and adjust the parameters.
We thank the reviewer for their positive comments.
Weaknesses:
- Based on the presentation of the manuscript, it is not clear that the goals are fully achieved.
- Although there is great potential, there is little evidence that this tool has been adopted by other labs.
- The comparison of Otsu and U-Net results does not make much sense to me. The systematic bias could be adjusted by threshold change. The U-Net output is a probability map with floating point numbers. This output is probably thresholded to get a binary mask, which is not mentioned in the manuscript. This threshold could also be adjusted. Actually, Otsu is a segmentation method and U-Net is an image transformation method and they should not be compared together. U-Net output could also be segmented using Otsu.
We agree that the comparison of the classical and U-Net results may be misleading. As the reviewer points out, the issue ultimately comes down to thresholding. Indeed, the thresholds of both the Otsu and U-Net outputs could be adjusted to bring them into line with each other. The comparison between the Otsu pipeline and U-Net pipeline is meant to illustrate that any pipeline (making use of a variety of methods) may be highly susceptible to the value of a user-input (or hard-coded) threshold.
We have clarified the discussion to emphasize that the comparison is not specifically between U-Net and Otsu but between the two pipelines (lines 238-257).
We have also clarified that the U-Net probability map output was binarized with a threshold of 0.5 (lines 538-541). We note the same activation function and threshold are used in DeLTA. As the reviewer points out, Otsu’s method could indeed be applied to threshold the U-Net output as well. What we referred to as the “Otsu” MM3 method itself uses Otsu thresholding coupled with a Euclidean distance transform and a Random Walker algorithm. For clarity we now refer to it as a classical or non-learning method in the text.
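For concreteness, the sketch below illustrates the two routes in generic terms using standard SciPy/scikit-image functions. It is a minimal sketch of the approach as described above, not the actual napari-MM3 implementation; the helper function names and the marker cutoff used to seed the random walker are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.segmentation import random_walker


def classical_segmentation(phase: np.ndarray) -> np.ndarray:
    """Classical (non-learning) route: Otsu threshold, Euclidean distance
    transform, then random-walker refinement of the seeded markers."""
    mask = phase < threshold_otsu(phase)      # cells appear dark in phase contrast
    dist = ndi.distance_transform_edt(mask)   # distance to the nearest background pixel

    # Seed confident background and cell-interior pixels; the 0.5 * max cutoff
    # is an illustrative choice, not the value used inside napari-MM3.
    markers = np.zeros(phase.shape, dtype=np.int32)
    markers[~mask] = 1                        # confident background
    markers[dist > 0.5 * dist.max()] = 2      # confident cell interior
    return random_walker(phase, markers)


def binarize_unet_output(prob_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Deep-learning route: binarize the U-Net probability map at the 0.5
    threshold stated in the revised Methods."""
    return prob_map > threshold
```

In either route, shifting the relevant cutoff (the marker thresholds or the 0.5 probability threshold) moves the inferred cell boundary, which is the source of the pipeline-level sensitivity discussed above.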
- The diversity of datasets used in this study is limited.
We have added a section “Testing napari-MM3 on other datasets” (lines 187-196) evaluating the performance of MM3 on 4 datasets (3 E. coli, 1 Corynebacterium glutamicum) from outside our lab, demonstrating its versatility.
- There is some ambiguity in the main point of this manuscript: the title and figures illustrate a complete pipeline, including imaging, image segmentation, and analysis, while the abstract focuses only on the software MM3. If only MM3 is the focus and contribution of this manuscript, more of the presentation should focus on this software tool. It is also not clear whether the analysis features are integrated with MM3 or not.
We have added a line (lines 160-162) clarifying that final analysis and plotting must be done outside of napari. MM3 itself processes raw microscopy images, segments cells and reconstructs cell lineages (Figure 2).
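To illustrate the kind of downstream analysis meant here, a minimal sketch assuming the per-cell measurements have been exported to a CSV file; the file name and column names are hypothetical and do not correspond to MM3's actual output format.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of per-cell measurements; "cells.csv" and the column
# "birth_length_um" are assumptions for illustration only.
cells = pd.read_csv("cells.csv")

fig, ax = plt.subplots()
ax.hist(cells["birth_length_um"], bins=50)
ax.set_xlabel("Birth length (µm)")
ax.set_ylabel("Number of cells")
fig.savefig("birth_length_distribution.png", dpi=300)
```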
- The impact of this work depends on the adoption of the software MM3. Napari is a promising platform with an expanding community. With a good software user experience and long-term support, there is a good chance that this tool could be widely adopted in the mother machine image analysis community.
We thank the reviewer for their endorsement of MM3’s potential.
- The data analysis in this manuscript is used as a demo of MM3 features, rather than as scientific research.
Reviewer #2 (Public Review):
The authors present an image-analysis pipeline for mother-machine data, i.e., for time-lapses of single bacterial cells growing for many generations in one-dimensional microfluidic channels. The pipeline is available as a plugin for the python-based image-analysis platform Napari. The tool comes with two different previously published methods to segment cells (classical image transformation and thresholding as well as U-Net-based analysis), which compare qualitatively and quantitatively well with the results of widely accessible tools developed by others (BACNET, DeLTA, Omnipose). The tool comes with a graphical user interface and example scripts, which should make it valuable for other mother-machine users, even if this has not been demonstrated yet.
We thank the reviewer for their positive comments.
The authors also add a practical overview of how to prepare and conduct mother-machine experiments, citing their previous work and giving more advice on how to load cells using centrifugation. However, the latter part lacks detailed instructions.
We have added a more detailed experimental protocol, including the procedure we use for cell loading, to the lab GitHub page https://github.com/junlabucsd/mother-machine-protocols (linked in the main text).
Finally, the authors emphasize that machine-learning methods for image segmentation reproduce average quantities of training datasets, such as the length at birth or division. Therefore, differences in training can propagate to differences in measured average quantities. This result is not surprising and is normally considered a desired property of any machine-learning algorithm, as also commented on below.
Points for improvement:
Different datasets: The authors demonstrate the use of their method for bacteria growing in different growth conditions in their own microscope. However, they don't provide details on whether they had to adjust image-analysis parameters for each dataset. Similarly, they say that their method also works for other organisms including yeast and C. elegans (as part of the Results section) but they don't show evidence nor do they write whether the method needs to be tuned/trained for those datasets. Finally, they don't demonstrate that their method works on data from other labs, which might be different due to differences in setup or imaging conditions.
We have added a section “Testing napari-MM3 on other datasets” (lines 187-196) evaluating the performance of MM3 on 4 datasets (3 E. coli, 1 Corynebacterium glutamicum) from outside our lab, demonstrating its versatility. We provide details of the procedure and parameters used in the Methods section (“Analysis of external datasets”, lines 476-486).
Bias due to training sets:
The bias in ML methods based on training datasets is not surprising but arguably a desired property of those methods. Similarly, threshold-based classical segmentation methods are biased by the choice of threshold values and other segmentation parameters. A point that would have benefited from discussion in this regard: how can image segmentation be made unbiased, that is, how can it deliver physical cell boundaries? This can be done by image simulations and/or by comparison with alternative methods such as fluorescence microscopy.
We agree this is an important point. We have revised the relevant sections (lines 238-270) to add context to the discussion of bias in both classical and deep learning methods. We have added a subsection (lines 401-410) discussing methods to this end, such as synthetic training data generation or calibrating the segmentation to fluorescence images.
The authors stress the user-friendliness of their method in comparison to others. For example, they write: 'Unfortunately, many of these tools present a steep learning curve for most biologists, as they require familiarity with command line tools, programming, and image analysis methods.' I suggest instead emphasizing that many of the tools published in recent years are designed to be very user friendly. And as with all methods, MM3 also comes at a price, which is to install Napari followed by the installation of MM3, which, according to their own instructions, is not easy either.
We have modified our language to acknowledge that indeed recent software such as DeLTA and BACMMAN make a point to be user-friendly and accessible (lines 52-53).
Reviewer #1 (Recommendations For The Authors):
- The resources, including documentation and code, are referenced but are not easy to find. It would be easier for readers if they were curated in a separate Resources section.
We have created a Resources section in the Methods (at the top of the first page) with the documentation, code, and protocols hyperlinked.
- It would be easier to understand the usage of MM3 with a screen recording video. I found a video on the GitHub page, but the resolution is a bit low. Attaching a high-resolution screen recording would be helpful.
A high-resolution tutorial video has been made more visible on the GitHub page.
- In Table 1, an AMD GPU is used, which is not easy to use for deep learning. It is not clear whether the GPU is used for deep learning training and inference.
We have clarified this point in the Table 1 caption, and linked to a reference on how to use AMD GPUs with TensorFlow on Macs.
- Some paragraphs in the Discussion section read like blog posts with general recommendations. Although the suggestions look useful, they are not the focus of this manuscript. It might be more appropriate to put them in the GitHub repo or on a documentation page. The discussion should still focus on the software, such as features, software maintenance, the development roadmap, and community adoption.
- It would be easier for reviewers if line numbers were added to the manuscript.
Reviewer #2 (Recommendations For The Authors):
Software Installation: This might be something for the GitHub forum, but briefly trying to install the plugin myself, I already failed at the first line of the GitHub instructions, which is to use mamba for installation. This relates to my point above: any program that is not stand-alone requires some user-savviness and trial-and-error, which is just hard to avoid for any method. I suggest being less critical of 'other methods' and instead focusing on the advantages of the mother-machine-specific aspects of napari-mm3.
The authors write 'Still, most labs do not have the time and resources to evaluate other tools they do not use critically, [...]'. The sentence is not very clear. Evaluating tools not used is obviously difficult/impossible.
We have reworded this sentence to be more clear (lines 54-55).
The authors write: 'The supervised learning method uses a convolutional neural net (CNN) with the U-Net architecture [20].' Can the authors cite previous work that has taken advantage of this approach before (e.g., DelTA)?
We have added citations to DeLTA and other previous software (line 151).
Cell tracking and lineage reconstruction should be described in more detail and/or with reference to previous work.
We have added more details to the SI (lines 554-567) discussing the method in the context of existing mother machine analysis software.
The authors provide a figure for a '3D printed cell loader', but as far as I can see, they don't give instructions, including a CAD file and the model of the fan used for spinning. The same holds for the stage insert (which, as far as I can see, is not referred to in the manuscript text nor described in a figure caption).
Thank you for pointing out this omission. The centrifuge is referenced in Box 1. We have updated the manuscript with a link to a GitHub repository containing CAD files and details of the centrifuge construction. We decided to remove the stage insert from the figure.
Figure S3: Is the asymmetry in growth rate due to the expression of a fluorescent protein, due to strain differences, or due to imaging artifacts? Maybe this is impossible to tell based on the available datasets, but this could be discussed.
Based on previous work (DOI 10.1099/mic.0.057240-0), it is likely due to the expression of the fluorescent protein and fluorescence imaging. We have added a brief discussion in the Figure S3 caption.