Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.
Read more about eLife’s peer review process.

Editors
- Reviewing Editor: Julia Sero, University of Bath, Bath, United Kingdom
- Senior Editor: Panayiota Poirazi, FORTH Institute of Molecular Biology and Biotechnology, Heraklion, Greece
Reviewer #1 (Public review):
Summary:
The authors develop a Python-based analysis framework for cellular organelle segmentation, feature extraction, and analysis for live-cell imaging videos. They demonstrate that their pipeline works for two organelles (mitochondria and lysosomes) and provide a step-by-step overview of the AutoMorphoTrack package.
Strengths:
The authors provide evidence that the package is functional and can provide publication-quality data analysis for mitochondrial and lysosomal segmentation and analysis.
Weaknesses:
(1) I was enthusiastic about the manuscript as a good open-source, end-to-end cell/organelle segmentation and quantification pipeline, which would indeed be useful to the field. However, I am not certain AutoMorphoTrack fulfills this need. It appears to stitch together basic FIJI commands in a Python script that an experienced user could put together within a day. The paper reads like a documentation page, and the figures appear to be individual analysis outputs from a handful of images. Indeed, a recent question on the image.sc forum (https://forum.image.sc/t/how-to-analysis-organelle-contact-in-fiji-with-time-series-data/116359/5) prompted similar analyses and outputs as a simple service to the community, with seemingly better results and integrated organelle-identity tracking (which, in my opinion, is necessary for live imaging). I believe this work would be a better fit as the methods section of a broader study.
(2) The authors do not discuss or compare against other pipelines that can accomplish similar analyses, such as Imaris or CellProfiler, nor do they integrate established segmentation options such as CellPose or StarDist.
(3) The LLM-based chatbot integration seems to have been added for novelty: the authors neither demonstrate it in the manuscript nor provide instructions that would make it easy to implement, which matters given that it is presumably aimed at users who do not code.
Reviewer #2 (Public review):
Summary:
AutoMorphoTrack provides an end-to-end workflow for organelle-scale analysis of multichannel live-cell fluorescence microscopy image stacks. The pipeline includes organelle detection/segmentation, extraction of morphological descriptors (e.g., area, eccentricity, "circularity," solidity, aspect ratio), tracking and motility summaries (implemented via nearest-neighbor matching using cKDTree), and pixel-level overlap/colocalization metrics between two channels. The manuscript emphasizes a specific application to live imaging in neurons, demonstrated on iPSC-derived dopaminergic neuronal cultures with mitochondria in channel 0 and lysosomes in channel 1, while asserting adaptability to other organelle pairs.
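For readers unfamiliar with this tracking approach, the following is a minimal sketch of frame-to-frame nearest-neighbour matching with `cKDTree` as described in the manuscript; the function name and distance cutoff are illustrative, not taken from AutoMorphoTrack:

```python
import numpy as np
from scipy.spatial import cKDTree

def match_nearest(prev_centroids, curr_centroids, max_dist=5.0):
    """Match each object in the previous frame to its nearest neighbour
    in the current frame, rejecting matches farther than max_dist pixels."""
    tree = cKDTree(curr_centroids)
    dists, idx = tree.query(prev_centroids, distance_upper_bound=max_dist)
    # query() returns d = inf and an out-of-range index when no neighbour
    # lies within max_dist, so finite distances mark accepted matches.
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dists, idx))
            if np.isfinite(d)]
```

Note that a plain nearest-neighbour query is not one-to-one: two objects in the previous frame can claim the same target, which is one reason identity tracking of this kind warrants validation.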
The tool is positioned for cell biologists, including users with limited programming experience, primarily through two implemented modes of use: (i) a step-by-step Jupyter notebook and (ii) a modular Python package for scripted or batch execution, alongside an additional "AI-assisted" mode that is described as enabling analyses through natural-language prompts.
The motivation and general workflow packaging are clear, and the notebook-plus-modules structure is a reasonable engineering choice. However, in its current form, the manuscript reads more like a convenient assembly of standard methods than a validated analytical tool. Key claims about robustness, accuracy, and scope are not supported by quantitative evidence, and the 'AI-assisted' framing is insufficiently defined and attributes to the tool capabilities that are provided by external LLM platforms rather than by AutoMorphoTrack itself. In addition, several figure, metric, and statistical issues, including physically invalid plots and inconsistent metric definitions, directly undermine trust in the quantitative outputs.
Strengths:
(1) Clear motivation: lowering the barrier for organelle-scale quantification for users who do not routinely write custom analysis code.
(2) Multiple entry points: an interactive notebook together with importable modules, emphasizing editable parameters rather than a fully opaque black box.
(3) End-to-end outputs: automated generation of standardized visualizations and tables that, if trustworthy, could help users obtain quantitative summaries without assembling multiple tools.
Weaknesses:
(1) "AI-assisted / natural-language" functionality is overstated.
The manuscript implies an integrated natural-language interface, but no such interface is implemented in the software. Instead, users are encouraged to use external chatbots to help generate or modify Python code or execute notebook steps. This distinction is not made clearly and risks misleading readers.
(2) No quantitative validation against trusted ground truth.
There is no systematic evaluation of segmentation accuracy, tracking fidelity, or interaction/overlap metrics against expert annotations or controlled synthetic data. Without such validation, accuracy, parameter sensitivity, and failure modes cannot be assessed.
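One common way to quantify segmentation accuracy against expert annotations is a mask-overlap score such as the Dice coefficient; a minimal sketch of the kind of evaluation being asked for (the function is illustrative and not part of AutoMorphoTrack):

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 1.0 means perfect overlap."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# e.g. dice(automated_mask, expert_mask), summarized across a held-out test set
```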
(3) Limited benchmarking and positioning relative to existing tools.
The manuscript does not adequately compare AutoMorphoTrack to established platforms that already support segmentation, morphometrics, tracking, and colocalization (e.g., CellProfiler) or to mitochondria-focused toolboxes (e.g., MiNA, MitoGraph, Mitochondria Analyzer). This is particularly problematic given the manuscript's implicit novelty claims.
(4) Core algorithmic components are basic and likely sensitive to imaging conditions.
Heavy reliance on thresholding and morphological operations raises concerns about robustness across varying SNR, background heterogeneity, bleaching, and organelle density; these issues are not explored.
(5) Multiple figure, metric, and statistical issues undermine confidence.
The most concerning include:
(i) "Circularity (4πA/P²)" values far greater than 1 (Figures 2 and 7, and supplementary figures), which is inconsistent with the stated definition and strongly suggests a metric/label mismatch or computational error.
(ii) A displacement distribution extending to negative values (Figure 3B). This is likely a plotting artifact (e.g., KDE boundary bias), but as shown, it is physically invalid and undermines confidence in the motility analysis.
(iii) Colocalization/overlap metrics that are inconsistently defined and named, with axis ranges and terminology that can mislead (e.g., Pearson r reported for binary masks without clarification).
(iv) Figure legends that do not match the displayed panels, and insufficient reporting of Ns, p-values, sampling units, and statistical assumptions.
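On point (i): with the stated definition, circularity is bounded by 1 for continuous shapes, although perimeter underestimation on small pixelated masks can push values slightly above 1; values far above 1 therefore do suggest a metric/label mismatch rather than a discretization effect. A minimal illustration (the boundary-pixel perimeter of 8 is an assumed discretization, not the authors' measurement):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P**2: equals 1 for a perfect circle and is < 1 for any
    other continuous shape."""
    return 4.0 * math.pi * area / perimeter ** 2

# Continuous circle of radius 5: exactly 1
print(circularity(math.pi * 5.0 ** 2, 2.0 * math.pi * 5.0))

# 3x3-pixel blob with perimeter taken as 8 boundary-pixel steps:
# 4*pi*9/64 ~ 1.77, so perimeter underestimation alone can exceed 1
print(circularity(9, 8))
```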
Reviewer #3 (Public review):
Summary:
AutoMorphoTrack is a Python package for quantitatively evaluating organelle shape, movement, and colocalization in high-resolution live cell imaging experiments. It is designed to be a beginning-to-end workflow from segmentation through metric graphing, which is easy to implement. The paper shows example results from their images of mitochondria and lysosomes within cultured neurons, demonstrating how it can be used to understand organelle processing.
Strengths:
The text is well-written and easy to follow. I particularly appreciate Tables 1 and 2, which clearly define the goals of each module, the tunable parameters, and the inputs and outputs. I can see how the provided metrics would be useful to other groups studying organelle dynamics. Additionally, because the code is open-source, it should be possible for experienced coders to use this as a backbone and then customize it for their own purposes.
Weaknesses:
Unfortunately, I was not able to install the package to test it myself using any standard install method. This is likely fixable by the authors, but until a functional distribution exists, the utility of this tool is highly limited. I would be happy to re-review this work after this is fixed.
The authors claim that there is "AI-Assisted Execution and Natural-Language Interface". However, this claim is never substantiated in any of the figures, and from a quick review of the .py files, there does not appear to be any built-in support or interface for this. Without significantly more instructions on how to connect this package to a (free) LLM, along with data demonstrating that this works reproducibly and produces equivalent results, this section should be removed.
Additionally, I have a few suggestions/questions:
(1) Red-green images are difficult for colorblind readers. I recommend that the authors change all raw microscopy images to a different color combination.
(2) For all of the velocity vs displacement graphs (Figure 3C and subpart G of every supplemental figure), there is a diagonal line clearly defining a minimum limit of detected movement. Is this a feature of the dataset (drift, shakiness, etc.) or some sort of minimum-movement threshold in the tracking algorithm? This should be discussed in the text.
(3) Integrated Correlation Summary (Figure 5) - Pearson is likely the wrong metric for most of these metric pairs because even interesting relationships may be non-linear. Please replace it with Spearman correlation, which assumes only a monotonic relationship rather than a linear one.
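To illustrate point (3), a quick sketch on synthetic data (variable names and the exponential relationship are illustrative, not drawn from the manuscript):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 3.0, 200)
y = np.exp(x) + rng.normal(0.0, 0.1, 200)  # monotonic but non-linear

r_pearson, _ = pearsonr(x, y)
r_spearman, _ = spearmanr(x, y)
# Spearman, computed on ranks, recovers the near-perfect monotonic
# association that Pearson underestimates on the curved relationship.
```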