Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.
Read more about eLife’s peer review process.
Editors
- Reviewing Editor: Adrien Peyrache, McGill University, Montreal, Canada
- Senior Editor: Laura Colgin, University of Texas at Austin, Austin, United States of America
Reviewer #1 (Public review):
Summary
The manuscript by Peden-Asarch et al. introduces MPS, a new open-source software package for processing miniscope data. The authors aim to provide a fast, end-to-end analysis pipeline tailored to miniscope users with minimal experience in coding or version control. The work addresses an important practical barrier in the field by focusing on usability and accessibility.
Strengths
The authors identify a clear and well-motivated need within the miniscope community. Existing pipelines for miniscope data analysis are often complex, difficult to install, and challenging to maintain. In addition, users frequently encounter technical limitations such as out-of-memory errors, reflecting the substantial computational demands of these workflows, demands that exceed the resources available in many laboratories. MPS is presented as an attempt to alleviate these issues by offering a more streamlined, accessible, and robust processing framework.
Weaknesses
The authors state that "MPS is the first implementation of Constrained Non-negative Matrix Factorization (CNMF) with Nonnegative Double Singular Value Decomposition (NNDSVD) initialization." However, NNDSVD initialization is the default method in scikit-learn's NMF implementation and is also used in CaImAn. I recommend rephrasing this claim in the abstract to more accurately reflect MPS's novelty, which appears to lie in the specific combination of constrained NMF with NNDSVD initialization, rather than being the first use of NNDSVD initialization itself.
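For reference, NNDSVD initialization is directly exposed as an option in scikit-learn's NMF; a minimal sketch on synthetic data (the matrix sizes and `n_components` here are arbitrary illustrative choices):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Synthetic nonnegative "movie" matrix: 100 pixels x 50 frames
X = rng.random((100, 50))

# init="nndsvd" selects Nonnegative Double SVD initialization,
# one of scikit-learn's built-in NMF initialization schemes
model = NMF(n_components=5, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)   # spatial-like factors (pixels x components)
H = model.components_        # temporal-like factors (components x frames)
```

This illustrates the reviewer's point that NNDSVD initialization itself is widely available; the combination with *constrained* NMF is where any novelty would lie.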
At present, there are practical issues that limit the usability of the software. The link to the macOS installer on the documentation website is not functional. Furthermore, installation on a MacBook Pro was unsuccessful, producing the following error:
"rsync(95755): error: ... Permission denied ... unexpected end of file."
For the purposes of this review, resolving this issue would significantly improve the evaluation of the software and its accessibility to users.
More broadly, the authors propose self-contained installers as a solution to the "package-management burden" commonly associated with scientific software. While this approach is appealing and potentially useful for novice users, current best practices in software development increasingly rely on continuous integration and continuous deployment (CI/CD) pipelines to ensure reproducibility, testing, and long-term maintenance. In this context, it has become standard for Python packages to be distributed via PyPI or Conda. Without dismissing the value of standalone installers, the overall quality and sustainability of MPS would be greatly enhanced by also supporting conventional environment-based installations.
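As an illustration of what a conventional environment-based distribution could look like, consider a conda environment file (a hypothetical sketch: the package name `mps` and the dependency list are assumptions, since MPS is not currently distributed via PyPI or conda-forge):

```yaml
# hypothetical environment.yml for an environment-based MPS install
name: mps-env
channels:
  - conda-forge
dependencies:
  - python=3.11
  - numpy
  - scipy
  - pip
  - pip:
      - mps   # assumed package name, not an actual PyPI distribution
```

A user would then run `conda env create -f environment.yml`, and CI/CD could test and publish new releases automatically, complementing the standalone installers.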
Reviewer #2 (Public review):
Summary:
This manuscript introduces the Miniscope Processing Suite (MPS), a novel no-code, GUI-based pipeline built to easily process long-duration one-photon calcium imaging data from head-mounted Miniscopes. MPS aims to address two large problems that persist despite the rapid proliferation of Miniscope use across the field. The first is the high technical barrier to using existing pipelines (e.g., CaImAn, MIN1PIPE, Minian, CaliAli), which require users to have coding skills to analyze data. The second is the intense memory demands of these pipelines, which can prevent analysis of long-duration (multi-hour) recordings without state-of-the-art hardware. The MPS toolbox takes inspiration from what existing pipelines do well, adds new modules such as Window Cropping, NNDSVD initialization, and Watershed-based segmentation, and streamlines the user experience so that calcium imaging analysis is accessible without training in a new coding language. In many ways, MPS achieves this aim, and it will thus be of interest to a growing, broad audience of new calcium imagers.
There are, however, some concerns with the current manuscript and pipeline that, if addressed, would greatly improve the impact of this work. Currently, the manuscript provides insufficient evidence that MPS generates accurate results efficiently across varied datasets, and it is not properly benchmarked against other established packages. Additionally, given that the goal of MPS is to attract novices to Miniscope analysis, better tutorials, documentation, and walkthroughs of expected versus inaccurate results should be provided so that it is clear when the user can trust the output. Otherwise, this simplified approach may lead new users to erroneous results.
Strengths:
The manuscript itself is well-organized, clear, and easy to follow. MPS is clearly designed to remove the computational barrier to entry for a broad neuroscience community to record and analyze calcium data. The development of several well-detailed algorithmic innovations merits recognition. First, MPS is extremely easy to install, keep updated, and step through. Having each step save every output automatically is a well-thought-out feature that will allow users to re-enter the pipeline at any step and compare results.
The implementation of an erroneous frame identifier and remover during preprocessing is an important new feature that is typically done offline with custom-built code. Interactive ROI cropping early in the pipeline is an efficient way to lower pixel load, and NNDSVD initialization is a new way to provide nonnegative, biologically interpretable starting spatial and temporal factors for later CNMF iterations. Parallel temporal-first update ordering cuts down dramatically on later computational load. Together, all these features, neatly packaged into a no-code GUI like the Data Explorer for manual curation, are practical additions that will benefit end users.
Weaknesses:
A major limitation of this manuscript is that the authors don't validate the accuracy of their source extraction using ground-truth data or any benchmark against existing pipelines. The paper presents the authors' own analyses of processing speeds, component counts, signal-to-noise ratio improvements, and morphological characteristics of detected cells, but it needs to be reworked to include some combination of validation against manually annotated ground-truth datasets, simulated data with known cell locations and activity patterns, or cross-validation with established pipelines on identical datasets. Without this kind of validation, it is impossible to truly determine whether MPS produces biologically acceptable results, which would help distinguish it from what is currently already available. For example, line 57 refers to the CaImAn pipeline having near-human efficiency (Figures 3-5 and Tables 1 and 2 of the CaImAn paper), but no comparable performance benchmarks are reported for MPS. Figure 15 of the Minian paper provides other examples of how to show this.
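One concrete way to validate against simulated ground truth is to match detected spatial components to known cell masks and report precision and recall (a minimal sketch; the greedy matching and the IoU threshold of 0.5 are illustrative choices, not MPS's actual evaluation):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean spatial masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def precision_recall(detected, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching of detected masks to ground-truth masks."""
    matched_gt = set()
    tp = 0
    for d in detected:
        scores = [(iou(d, g), j) for j, g in enumerate(ground_truth)
                  if j not in matched_gt]
        if scores:
            best, j = max(scores)
            if best >= iou_thresh:
                tp += 1
                matched_gt.add(j)
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Toy example: two true cells, one correct detection, one spurious detection
gt = [np.zeros((8, 8), bool), np.zeros((8, 8), bool)]
gt[0][1:4, 1:4] = True
gt[1][5:8, 5:8] = True
det = [gt[0].copy(), np.zeros((8, 8), bool)]
det[1][0:2, 6:8] = True  # spurious component overlapping no true cell
p, r = precision_recall(det, gt)
# p == 0.5 (1 of 2 detections correct), r == 0.5 (1 of 2 cells found)
```

Reporting such scores on simulated movies with known cell locations would directly address the validation gap described above.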
Considering one of the main benefits of MPS is its low memory demand and ability to run on unsophisticated hardware, the authors should include a figure that shows how processing times and memory usage scale with dataset size (FOV, number of frames and/or neurons, sparsity of cells) across differing pipelines. Figure 8 of the CaImAn paper and Figure 18 of the Minian paper show this quite nicely. Table 1 currently describes how "traditional approaches" differ methodologically from MPS innovations, but runtime comparisons on identical datasets processed through MPS, CaImAn, Minian, or CaliAli would be necessary to substantiate the claim that MPS is "10-20X faster". Additionally, while the paper does mention the type of hardware used by the experimenters, a table with a full breakdown of components, as well as the minimum requirements for smooth processing, would be useful for reproducibility.
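Such a scaling figure could be produced with a simple harness that times each pipeline on synthetic movies of increasing length and records peak memory (a sketch under stated assumptions: `process` is a stand-in for whichever pipeline is under test, and `tracemalloc` only captures Python-level allocations):

```python
import time
import tracemalloc
import numpy as np

def process(movie):
    """Placeholder processing step; substitute the pipeline under test."""
    return movie.astype(np.float64).mean(axis=0)

def benchmark(n_frames, height=64, width=64, seed=0):
    """Return (runtime_s, peak_mem_mb) for a synthetic movie of n_frames."""
    rng = np.random.default_rng(seed)
    movie = rng.integers(0, 255, size=(n_frames, height, width),
                         dtype=np.uint8)
    tracemalloc.start()
    t0 = time.perf_counter()
    process(movie)
    runtime = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return runtime, peak / 1e6

# Sweep frame counts to expose how cost scales with recording duration
for n in (100, 200, 400):
    rt, mem = benchmark(n)
    print(f"{n} frames: {rt:.3f} s, peak {mem:.1f} MB")
```

Running the same sweep through MPS, CaImAn, Minian, and CaliAli on identical datasets would substantiate (or qualify) the stated speed and memory advantages.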
The current datasets used for validating MPS are not described in the manuscript. The manuscript appears to include 28 sessions of calcium imaging, but it is unclear whether these come from a single cohort, or even a single animal, or whether they are all from the same brain region. Importantly, the generalizability of parameter choices and performance could vary for other users based on brain-region differences, use of alternative calcium indicators (anything other than the GCaMP8f used in the paper), etc. This leads to another limitation of the paper in its current form. While MPS is aimed at eliminating the need to code, users should not be expected to blindly trust default or suggested parameter selections. Instead, users need guidance on what each modifiable parameter does to their data and how the output of each analysis step should be interpreted. Including a tutorial with sample test data for parameter investigation and exploration, as many other existing pipelines do, is warranted. This would also increase the transparency and reproducibility of this work.
Currently, the documentation and FAQ website linked from the MPS installation does not adequately describe the parameters or their optimization. The main GitHub repository does contain better stepwise explanations, but there needs to be a centralized location for all of this information. Additionally, the lack of documentation on the graphs created by each analysis step makes it hard for a true novice to judge whether their own data are appropriately optimized for the pipeline. Greater detail here would greatly improve the quality and impact of MPS.