Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.
Read more about eLife’s peer review process.
Editors
- Reviewing Editor: Helen Scharfman, Nathan Kline Institute, Orangeburg, United States of America
- Senior Editor: Laura Colgin, University of Texas at Austin, Austin, United States of America
Reviewer #1 (Public review):
Summary:
In this manuscript, Jin et al. describe SMARTR, an image analysis strategy optimized for dual-activity ensemble tagging mouse reporter lines. The pipeline performs cell segmentation, registers the locations of these cells to an anatomical atlas, and finally calculates the degree of co-expression of the reporters in cells across brain regions. They demonstrate the utility of the method by labeling two ensemble populations during two related experiences: inescapable shock and subsequent escapable shock as part of a learned helplessness paradigm.
Strengths:
(1) We appreciated that the authors provided all documentation necessary to use their method and that the scripts in their publicly available repository are well commented.
(2) The manuscript was well-written and very clear, and the methods were generally highly detailed.
Weaknesses:
(1) The heatmaps (for example, Figure 3A, B) are challenging to read and interpret due to their size. Is there a way to alter the visualization to improve interpretability? Perhaps coloring the heatmap by general anatomical region could help (see the sketch after this list for one possible approach)? We feel that these heatmaps are critical to the utility of the registration strategy, and hence clear visualization is necessary.
(2) Additional context in the Introduction on the use of immediate early genes to label ensembles of neurons that are specifically activated during the various behavioral manipulations would enable the manuscript and methodology to be better appreciated by a broad audience.
(3) The authors mention that their segmentation strategies are optimized for the particular staining pattern exhibited by each reporter and demonstrate that the manually annotated cell counts match the automated analysis. They mention that alternative strategies are compatible, but do not show these data.
(4) The authors provided highly detailed information for their segmentation strategy, but the same level of detail was not provided for the registration algorithms. Additional details would help users achieve optimal alignment.
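To make point (1) above concrete, here is a minimal sketch of annotating heatmap rows by parent anatomical region using the pheatmap package. The region names and values are made up purely for illustration; this is one possible approach, not a prescription for SMARTR's own plotting functions.

```r
# Minimal sketch: color heatmap rows by parent anatomical region with pheatmap.
# Region names and data are fabricated for illustration only.
library(pheatmap)

mat <- matrix(rnorm(24), nrow = 6,
              dimnames = list(c("CA1", "CA3", "DG", "PL", "IL", "ACC"),
                              paste0("Subject", 1:4)))
ann <- data.frame(region = c("Hippocampus", "Hippocampus", "Hippocampus",
                             "mPFC", "mPFC", "mPFC"),
                  row.names = rownames(mat))
pheatmap(mat, annotation_row = ann)  # adds a colored sidebar grouping rows by region
```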
Reviewer #2 (Public review):
Summary:
This manuscript describes a workflow and software package, SMARTR, for mapping and analyzing neuronal ensembles tagged using activity-dependent methods. They showcase this pipeline by analyzing ensembles tagged during the learned helplessness paradigm. This is an impressive effort, and I commend the authors for developing open-source software to make whole-brain analyses more feasible for the community. Software development is essential for modern neuroscience, and I hope more groups make the effort to develop open-source, easily usable packages. However, I do have concerns over the usability and maintainability of the SMARTR package. I hope that the authors will continue to develop this package, and encourage them to make the effort to publish it within either the Bioconductor or CRAN framework.
Strengths:
This is a novel software package aiming to make the analysis of brain-wide engrams more feasible, which is much needed. The documentation for the package and workflow is solid.
Weaknesses:
While I was able to install the SMARTR package, after trying for the better part of an hour I could not install the "mjin1812/wholebrain" R package as instructed on OSF. I also could not find a function to load an example dataset to easily test SMARTR. So, unfortunately, I was unable to test any of the packages for myself. Along with the currently broken "tractatus/wholebrain" package, this is a good example of why I would strongly encourage the authors to publish SMARTR on either Bioconductor or CRAN in the future. The high standards set by Bioc/CRAN will ensure that SMARTR can be easily installed and used across major operating systems over the long term.
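For reference, this is the sort of install path I attempted. I am assuming here that the OSF instructions rely on standard remotes/devtools usage, so the exact commands may differ from what is documented there:

```r
# Typical GitHub install path (an assumption on my part; the exact steps in
# the OSF instructions may differ). The wholebrain step is where it failed.
install.packages("remotes")
remotes::install_github("mjin1812/wholebrain")
```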
The package is quite large (several thousand lines, including comments and whitespace). While impressive, this inherently makes the package more difficult to maintain, and the authors currently have not included any unit tests. The authors should add unit tests covering a large percentage of the package to ensure code stability.
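As a concrete illustration, a testthat suite under tests/testthat/ could start with something like the following. The tested values are stand-ins and none of this reflects SMARTR's actual API:

```r
# Hypothetical testthat sketch (the values stand in for real SMARTR outputs;
# this does not reflect the package's actual exported functions)
library(testthat)

test_that("segmentation counts are well-formed", {
  counts <- c(12L, 0L, 45L)        # stand-in for per-region cell counts
  expect_type(counts, "integer")
  expect_true(all(counts >= 0))
  expect_length(counts, 3)
})
```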
Why did the authors choose to perform image segmentation outside of the SMARTR package using ImageJ macros? Leading segmentation algorithms such as Cellpose and StarDist have well-documented APIs that would be easy to wrap in R, and would likely be faster as well. As noted in the discussion, making SMARTR a one-stop shop for multi-ensemble analyses would be more appealing to users.
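As a rough sketch of how little glue code such a wrapper might require, assuming the cellpose Python package is available to reticulate (this is illustrative only, not part of SMARTR, and the Cellpose API shown may differ across versions):

```r
# Illustrative reticulate wrapper around Cellpose (assumes the 'cellpose'
# Python package is installed in the active Python environment)
library(reticulate)

cp <- import("cellpose.models")
model <- cp$Cellpose(model_type = "cyto")

img <- matrix(runif(256 * 256), nrow = 256)           # placeholder image
out <- model$eval(img, diameter = NULL, channels = list(0L, 0L))
masks <- out[[1]]                                     # integer label matrix of segmented cells
```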
Given the small number of observations for the correlation analyses (n = 6 per group), Pearson correlations would be highly susceptible to outliers. The authors chose to deal with potential outliers by dropping, per region, any subject that was > 2 SDs from the group mean. Another way to address this would be to use Spearman correlation. How do these analyses change if Spearman correlation is used instead of Pearson? It would be a valuable addition for the authors to include Spearman correlation as an option in SMARTR.
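A toy example of the concern, using base R's cor() with fabricated values, purely to show the effect of a single outlier at n = 6:

```r
# One extreme subject can dominate Pearson r at n = 6, while the rank-based
# Spearman rho is far less affected (toy data, illustrative only)
set.seed(1)
x <- rnorm(6); y <- rnorm(6)
x[6] <- 10; y[6] <- 10             # inject a single bivariate outlier

cor(x, y, method = "pearson")      # inflated toward 1 by the outlier
cor(x, y, method = "spearman")     # computed on ranks, so much less sensitive
```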
I see the authors have incorporated the ability to adjust p-values in many of the analysis functions (and recommend the BH procedure) but did not use adjusted p-values for any of the analyses in the manuscript. Why is this? This is particularly relevant for the differential correlation analyses between groups (Figures 3P and 4P). Based on the unadjusted p-values, I assume few if any data points will still be significant after adjusting. While it is logical to highlight the regional correlations that change strongly between groups, the authors should be cautious about calling correlations "significant" without adjusting for multiple comparisons. As this package now makes this analysis easily usable for all researchers, the authors should also provide better explanations in the online documentation for new users of when and why to use adjusted p-values.
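Base R already provides this, so exposing it consistently should be straightforward; a minimal example with made-up per-region p-values:

```r
# Benjamini-Hochberg FDR adjustment of a vector of p-values (toy values)
p <- c(0.001, 0.004, 0.02, 0.03, 0.20, 0.45)
p_bh <- p.adjust(p, method = "BH")
sum(p_bh < 0.05)   # number of comparisons surviving FDR control at 5%
```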
The package was developed in R 3.6.3, which is several years and one major version behind the current R release (4.4.3). Have the authors tested whether this package runs on modern R versions? If not, this could be a significant hurdle for potential users.
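At a minimum, the supported versions could be declared in the package DESCRIPTION and checked by users; a small illustration (the version bound shown is an assumption, not what SMARTR currently declares):

```r
# In DESCRIPTION, a minimum R version can be declared, e.g.:
#   Depends: R (>= 3.6.3)
# Users can check their installed version before attempting an install:
getRversion()
R.version.string
```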