AutoMorphoTrack: A modular framework for quantitative analysis of organelle morphology, motility, and interactions at single-cell resolution

  1. Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, United States
  2. Aligning Science Across Parkinson’s (ASAP) Collaborative Research Network, Chevy Chase, United States

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.


Editors

  • Reviewing Editor
    Julia Sero
    University of Bath, Bath, United Kingdom
  • Senior Editor
    Panayiota Poirazi
    FORTH Institute of Molecular Biology and Biotechnology, Heraklion, Greece

Reviewer #1 (Public review):

Summary:

The authors develop a Python-based analysis framework for cellular organelle segmentation, feature extraction, and analysis for live-cell imaging videos. They demonstrate that their pipeline works for two organelles (mitochondria and lysosomes) and provide a step-by-step overview of the AutoMorphoTrack package.

Strengths:

The authors provide evidence that the package is functional and can provide publication-quality data analysis for mitochondrial and lysosomal segmentation and analysis.

Weaknesses:

(1) I was initially enthusiastic about the manuscript as an open-source, end-to-end cell/organelle segmentation and quantification pipeline, which would indeed be useful to the field. However, I am not certain AutoMorphoTrack fully fulfills this need. It appears to stitch together basic FIJI commands in a Python script that an experienced user could put together within a day. The paper reads as a documentation page, and the figures seem to be individual analysis outputs of a handful of images. Indeed, a recent question on the image.sc forum prompted similar types of analysis and outputs as a simple service to the community, with seemingly better results and integrated organelle identity tracking (which is, in my opinion, necessary for live imaging). I believe this work would be a better fit as the methods section of a broader study. https://forum.image.sc/t/how-to-analysis-organelle-contact-in-fiji-with-time-series-data/116359/5.

(2) The authors do not discuss or compare against any other pipelines that can accomplish similar analyses, such as Imaris or CellProfiler, nor do they integrate options for segmentation tools such as CellPose or StarDist.

(3) Although LLM-based chatbot integration seems to have been added for novelty, the authors neither demonstrate it in the manuscript nor provide instructions that would make it easy to implement, even though it is presumably directed toward users who do not code.

Reviewer #2 (Public review):

Summary:

AutoMorphoTrack provides an end-to-end workflow for organelle-scale analysis of multichannel live-cell fluorescence microscopy image stacks. The pipeline includes organelle detection/segmentation, extraction of morphological descriptors (e.g., area, eccentricity, "circularity," solidity, aspect ratio), tracking and motility summaries (implemented via nearest-neighbor matching using cKDTree), and pixel-level overlap/colocalization metrics between two channels. The manuscript emphasizes a specific application to live imaging in neurons, demonstrated on iPSC-derived dopaminergic neuronal cultures with mitochondria in channel 0 and lysosomes in channel 1, while asserting adaptability to other organelle pairs.
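
The nearest-neighbor matching described here can be sketched in a few lines; the following is an illustrative reconstruction only (the function name, the `max_dist` parameter, and the return format are assumptions, not AutoMorphoTrack's actual API):

```python
import numpy as np
from scipy.spatial import cKDTree

def link_frames(prev_centroids, curr_centroids, max_dist=5.0):
    """Match each centroid from the previous frame to its nearest
    neighbor in the current frame, discarding links beyond max_dist."""
    tree = cKDTree(curr_centroids)
    dists, idx = tree.query(prev_centroids, distance_upper_bound=max_dist)
    # query() returns d = inf and j = len(curr_centroids) when no
    # neighbor lies within max_dist, so finite distances mark valid links
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dists, idx))
            if np.isfinite(d)]

prev = np.array([[0.0, 0.0], [10.0, 10.0]])
curr = np.array([[1.0, 0.5], [30.0, 30.0]])
print(link_frames(prev, curr))  # → [(0, 0)]; the second object finds no match
```

Matching of this kind is fast but can swap identities when organelles pass close to one another, which is one reason tracking fidelity needs explicit validation.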

The tool is positioned for cell biologists, including users with limited programming experience, primarily through two implemented modes of use: (i) a step-by-step Jupyter notebook and (ii) a modular Python package for scripted or batch execution, alongside an additional "AI-assisted" mode that is described as enabling analyses through natural-language prompts.

The motivation and general workflow packaging are clear, and the notebook-plus-modules structure is a reasonable engineering choice. However, in its current form, the manuscript reads more like a convenient assembly of standard methods than a validated analytical tool. Key claims about robustness, accuracy, and scope are not supported by quantitative evidence, and the "AI-assisted" framing is insufficiently defined and attributes to the tool capabilities that are provided by external LLM platforms rather than by AutoMorphoTrack itself. In addition, several figure, metric, and statistical issues (including physically invalid plots and inconsistent metric definitions) directly undermine trust in the quantitative outputs.

Strengths:

(1) Clear motivation: lowering the barrier for organelle-scale quantification for users who do not routinely write custom analysis code.

(2) Multiple entry points: an interactive notebook together with importable modules, emphasizing editable parameters rather than a fully opaque black box.

(3) End-to-end outputs: automated generation of standardized visualizations and tables that, if trustworthy, could help users obtain quantitative summaries without assembling multiple tools.

Weaknesses:

(1) "AI-assisted / natural-language" functionality is overstated.

The manuscript implies an integrated natural-language interface, but no such interface is implemented in the software. Instead, users are encouraged to use external chatbots to help generate or modify Python code or execute notebook steps. This distinction is not made clearly and risks misleading readers.

(2) No quantitative validation against trusted ground truth.

There is no systematic evaluation of segmentation accuracy, tracking fidelity, or interaction/overlap metrics against expert annotations or controlled synthetic data. Without such validation, accuracy, parameter sensitivity, and failure modes cannot be assessed.

(3) Limited benchmarking and positioning relative to existing tools.

The manuscript does not adequately compare AutoMorphoTrack to established platforms that already support segmentation, morphometrics, tracking, and colocalization (e.g., CellProfiler) or to mitochondria-focused toolboxes (e.g., MiNA, MitoGraph, Mitochondria Analyzer). This is particularly problematic given the manuscript's implicit novelty claims.

(4) Core algorithmic components are basic and likely sensitive to imaging conditions.

Heavy reliance on thresholding and morphological operations raises concerns about robustness across varying SNR, background heterogeneity, bleaching, and organelle density; these issues are not explored.

(5) Multiple figure, metric, and statistical issues undermine confidence.

The most concerning include:
(i) "Circularity (4πA/P²)" values far greater than 1 (Figures 2 and 7, and supplementary figures), which is inconsistent with the stated definition and strongly suggests a metric/label mismatch or computational error.

(ii) A displacement distribution extending to negative values (Figure 3B). This is likely a plotting artifact (e.g., KDE boundary bias), but as shown, it is physically invalid and undermines confidence in the motility analysis.

(iii) Colocalization/overlap metrics that are inconsistently defined and named, with axis ranges and terminology that can mislead (e.g., Pearson r reported for binary masks without clarification).

(iv) Figure legends that do not match the displayed panels, and insufficient reporting of Ns, p-values, sampling units, and statistical assumptions.
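
To make concern (i) concrete: circularity as defined is bounded above by 1 for any true planar shape, so values far above 1 typically indicate a perimeter underestimate (common for small pixelated objects) or a mislabeled metric. A minimal sketch of the definition and this failure mode (illustrative, not the manuscript's code):

```python
import math

def circularity(area, perimeter):
    """Circularity = 4*pi*A / P**2; equals 1 for a perfect circle and
    is strictly below 1 for every other true planar shape."""
    if perimeter <= 0:
        return float("nan")
    return 4.0 * math.pi * area / perimeter ** 2

# A boundary-pixel count underestimates the true perimeter of tiny
# objects: a 2x2-pixel blob with its 4 boundary pixels taken as P = 4
# yields a physically impossible value.
print(circularity(4, 4))  # → ~3.14, well above the theoretical maximum of 1
```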

Reviewer #3 (Public review):

Summary:

AutoMorphoTrack is a Python package for quantitatively evaluating organelle shape, movement, and colocalization in high-resolution live cell imaging experiments. It is designed to be a beginning-to-end workflow from segmentation through metric graphing, which is easy to implement. The paper shows example results from their images of mitochondria and lysosomes within cultured neurons, demonstrating how it can be used to understand organelle processing.

Strengths:

The text is well-written and easy to follow. I particularly appreciate Tables 1 and 2, which clearly define the goals of each module, the tunable parameters, and the inputs and outputs. I can see how the provided metrics would be useful to other groups studying organelle dynamics. Additionally, because the code is open-source, experienced coders should be able to use it as a backbone and customize it for their own purposes.

Weaknesses:

Unfortunately, I was not able to install the package to test it myself using any standard install method. This is likely fixable by the authors, but until a functional distribution exists, the utility of this tool is highly limited. I would be happy to re-review this work after this is fixed.

The authors claim that there is an "AI-Assisted Execution and Natural-Language Interface". However, this is never demonstrated in any of the figures, and from quickly reviewing the .py files, there does not seem to be any built-in support or interface for it. Without significantly more instructions on how to connect this package to a (free) LLM, along with data proving that this works reproducibly to produce equivalent results, this section should be removed.

Additionally, I have a few suggestions/questions:

(1) Red-green images are difficult for colorblind readers. I recommend that the authors change all raw microscopy images to a different color combination.

(2) For all of the velocity vs. displacement graphs (Figure 3C and subpart G of every supplemental figure), there is a diagonal line clearly defining a minimum limit of detected movement. Is this a feature of the dataset (drift, shakiness, etc.) or some sort of minimum movement threshold in the tracking algorithm? This should be discussed in the text.

(3) Integrated Correlation Summary (Figure 5): Pearson is likely the wrong metric for most of these metric pairs, because even interesting relationships may be non-linear. Please replace it with Spearman correlation, which does not assume linearity.
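
A minimal illustration of the reviewer's point on synthetic data (not from the manuscript): for a monotonic but non-linear relationship, Spearman's rank correlation is a perfect 1 while Pearson's r is not.

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr

x = np.arange(1, 11, dtype=float)
y = x ** 3  # monotonic but strongly non-linear

rho, _ = spearmanr(x, y)  # rank-based: captures any monotonic trend
r, _ = pearsonr(x, y)     # linear: penalized by the curvature
print(round(rho, 3), round(r, 3))  # rho is 1.0 while r falls short of 1
```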

Author response:

Reviewer #1

We thank the reviewer for their thoughtful and constructive assessment of AutoMorphoTrack and for recognizing its potential utility as an open-source end-to-end workflow for organelle analysis.

(1) Novelty and relationship to existing tools / FIJI workflows

We appreciate this concern and agree that many of the underlying image-processing operations (e.g., thresholding, morphological cleanup, region properties) are well-established. Our goal with AutoMorphoTrack is not to introduce new segmentation algorithms, but rather to provide a curated, reproducible, and extensible end-to-end workflow that integrates segmentation, morphology, tracking, motility, and colocalization into a single, transparent pipeline tailored for live-cell organelle imaging.

While an experienced user could assemble similar analyses ad hoc using FIJI or custom scripts, our contribution lies in:

Unifying these steps into a single workflow with consistent parameterization and outputs,

Generating standardized, publication-ready visualizations and tables at each step,

Enabling batch and longitudinal analyses across cells and conditions, and

Lowering the barrier for users who do not routinely write custom analysis code.

We note that the documentation-style presentation of the manuscript is intentional, as it serves both as a methods paper and a practical reference for users implementing the workflow. We agree, however, that the manuscript currently overemphasizes step-by-step execution at the expense of positioning. In revision, we will more explicitly frame AutoMorphoTrack as a workflow integration and usability contribution, rather than a fundamentally new algorithmic advance.

We will also cite and discuss the image.sc example referenced by the reviewer, clarifying conceptual overlap and differences in scope.

(2) Comparison to existing pipelines (Imaris, CellProfiler, CellPose, StarDist)

We agree and thank the reviewer for highlighting this omission. In the revised manuscript, we will expand the related-work and positioning section to explicitly compare AutoMorphoTrack with established commercial (e.g., Imaris) and open-source (e.g., CellProfiler, MiNA, MitoGraph) platforms, as well as learning-based segmentation tools such as CellPose and StarDist.

Rather than claiming superiority, we will clarify trade-offs, emphasizing that AutoMorphoTrack prioritizes:

Transparency and parameter interpretability,

Lightweight dependencies suitable for small live-imaging datasets,

Direct integration of morphology, tracking, and colocalization in a single workflow, and

Ease of modification for domain-specific use cases.

(3) AI / chatbot integration

We appreciate this critique and agree that the current description is insufficiently precise. AutoMorphoTrack does not implement a native natural-language interface. Instead, our intent was to convey that the workflow can be executed and modified with assistance from external large language models (LLMs) in a notebook-based environment.

We will revise this section to:

Clearly distinguish AutoMorphoTrack’s functionality from that of external LLM tools,

Remove any implication of a built-in AI interface, and

Provide concrete, reproducible examples of how non-coding users may interact with the pipeline using natural-language prompts mediated by external tools.

Reviewer #2

We thank the reviewer for their detailed and technically rigorous evaluation. We appreciate the recognition of the workflow’s motivation and structure, and we agree that several aspects of validation, positioning, and quantitative reporting must be strengthened.

(1) AI-assisted / natural-language functionality

We agree with this critique. AutoMorphoTrack does not provide a native natural-language execution layer, and the manuscript currently overstates this aspect. In revision, we will explicitly scope any reference to AI assistance as external, optional support for code generation and parameter editing, with clearly documented examples and stated limitations.

We agree that conflating external LLM capabilities with the software itself risks misleading readers, and we will correct this accordingly.

(2) Lack of quantitative validation

We fully agree that the current manuscript lacks formal quantitative validation. In the revised version, we will add a dedicated validation section including:

Segmentation accuracy compared to expert annotations using overlap metrics (e.g., Dice / IoU),

Tracking fidelity assessed using manually annotated tracks and/or synthetic ground truth,

Sensitivity analyses for key parameters (e.g., thresholding and linking distance), and

Explicit discussion of failure modes and quality-control indicators.

We acknowledge that without such validation, claims of robustness are not sufficiently supported.
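
For reference, the proposed overlap metrics are straightforward to compute from binary masks; a minimal sketch (the helper name is ours, not a planned AutoMorphoTrack function):

```python
import numpy as np

def dice_iou(mask_a, mask_b):
    """Dice and IoU (Jaccard) overlap between two binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    total = a.sum() + b.sum()
    dice = 2.0 * inter / total if total else float("nan")
    iou = inter / union if union else float("nan")
    return float(dice), float(iou)

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_iou(pred, truth))  # Dice ≈ 0.667, IoU = 0.5
```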

(3) Benchmarking and positioning relative to existing tools

We agree and will substantially strengthen AutoMorphoTrack’s benchmarking and positioning relative to existing platforms. Rather than framing novelty algorithmically, we will clarify that the primary contribution is a reproducible, integrated workflow designed specifically for two-organelle live imaging in neurons, with transparent parameters and standardized outputs.

We note that our goal is not to exhaustively benchmark against all available tools, but rather to provide representative comparisons that clarify operating regimes, assumptions, and trade-offs. We will add a comparative table and/or qualitative comparison highlighting strengths, assumptions, and limitations relative to existing tools.

(4) Core algorithms and robustness

We agree that reliance on threshold-based segmentation introduces sensitivity to imaging conditions. In revision, we will:

Explicitly discuss the operating regime and assumptions under which AutoMorphoTrack performs reliably,

Clarify that the framework is modular and can accept alternative segmentation backends, and

Include guidance on when outputs should be treated with caution.

(5) Figure, metric, and statistical issues

We thank the reviewer for identifying several critical issues and agree that these undermine confidence. In revision, we will correct all figure, metric-definition, and reporting inconsistencies, including:

Resolving circularity values exceeding 1 by correcting computation and/or labeling errors,

Revising physically invalid displacement plots and clarifying kernel-density limitations,

Ensuring colocalization metrics are consistently defined, named, and interpreted, with explicit clarification of whether calculations are intensity- or mask-based,

Correcting figure legends to match displayed panels, and

Clearly reporting sample size, sampling units, and statistical assumptions, including handling of multiple comparisons where applicable.
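
As one concrete option for the displacement plots: since displacements are non-negative, a reflection-corrected KDE removes the mass that a naive KDE leaks below zero. A sketch under that assumption (illustrative, not the manuscript's plotting code):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
disp = np.abs(rng.normal(0.0, 1.0, 500))  # non-negative displacements

naive = gaussian_kde(disp)                            # leaks mass below zero
reflected = gaussian_kde(np.concatenate([disp, -disp]))

def corrected_density(x):
    # Fold the reflected estimate back onto [0, inf)
    return 2.0 * reflected(x)[0] if x >= 0 else 0.0

print(naive(-0.5)[0] > 0)        # True: naive KDE puts density at x < 0
print(corrected_density(-0.5))   # 0.0: corrected estimate respects the bound
```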

(6) Value-added demonstration

We agree that the manuscript would benefit from a clearer demonstration of value-added use cases. In revision, we will include at least one realistic example showing how AutoMorphoTrack enables a complete, reproducible analysis workflow with reduced setup burden compared to manually assembling multiple tools.

(7) Editorial suggestions

We agree and will streamline the Results section to reduce procedural repetition and focus more on validation, limitations, and quality-control guidance.

Reviewer #3

We thank the reviewer for their positive assessment of clarity and organization, and for the constructive practical feedback.

Installation issues

We appreciate the detailed report of installation failures and acknowledge that the current packaging and distribution are inadequate. Prior to resubmission, we will:

Fix the package structure to support standard installation methods,

Ensure all required files (e.g., setup configuration, README) are correctly included,

Test installation on clean environments across platforms, and

Correct broken links to notebooks and documentation.

We agree that without a functional installation pathway, the utility of the tool is severely limited.

AI-assisted claims

We agree with the reviewer and echo our responses above. The AI-assisted description will be clarified and appropriately scoped in the revised manuscript.

Additional suggestions

Color accessibility: We will revise all figures to use colorblind-safe palettes.

Velocity–displacement diagonal: We will explicitly explain the origin of this relationship, including whether it reflects dataset properties, tracking assumptions, or minimum detectable motion.

Integrated correlation metric: We agree that Spearman correlation is more appropriate for many of these relationships and will replace Pearson correlations accordingly.

Supplementary movies: We agree that providing raw movies would improve interpretability and will add representative examples as supplementary material.
