Author response:
Reviewer #1 (Public review):
Summary:
In this work, the authors have developed SPLASH+, a micro-assembly and biological interpretation framework that expands on their previously published reference-free statistical approach (SPLASH) for sequencing data analysis.
Thank you for this thorough overview of our work.
Strengths:
(1) The methodology developed by the authors seems like a promising approach to overcome many of the challenges posed by reference-based single-cell RNA-seq analysis methods.
Thank you for your positive comment on the potential of our approach to address the limitations of reference-based methods for scRNA-Seq analysis.
(2) The analysis of the RNU6 repetitive small nuclear RNA provides a very compelling example of a type of transcript that is very challenging to analyze with standard reference-based methods (e.g., most reads from this gene fail to align with STAR, if I understood the result correctly).
We thank the reviewer for their positive comment. We agree that the variation in RNU6 detected by SPLASH+ underscores the potential of our reference-free method to make discoveries in cases where reference-based approaches fall short.
Weaknesses:
(1) The manuscript presents a number of case studies from very diverse domains of single-cell RNA-seq analysis. As a result, the manuscript has been challenging to review, because it requires domain expertise in centromere biology, RNA splicing, RNA editing, V(D)J transcript diversity, and repeat polymorphisms.
We appreciate the reviewer’s effort in thoroughly evaluating this manuscript, especially given the broad range of biological domains discussed. Our main goal in presenting a wide range of applications was to highlight the key strength of the SPLASH+ framework: its ability to unify diverse biological discoveries within a single method that operates directly on sequencing reads.
(2) Although the paper focuses on SmartSeq2 full-length single-cell RNA-seq data analysis, the vast majority of single-cell RNA-seq data that is currently being generated comes from droplet-based methods (e.g., 10x Genomics) that sequence only the 3' or 5' ends of transcripts. As a result, it is unclear if SPLASH+ is also applicable to these types of data.
We thank the reviewer for this comment. Due to the specific data format of barcoded single-cell sequencing platforms such as 10x Genomics, extending the SPLASH framework to support 10x analysis required engineering a specialized preprocessing tool. We have addressed this in a recent work, which is now available as a preprint (https://doi.org/10.1101/2024.12.24.630263).
(3) The criteria used for the selection of the 10 'core genes' have not been sufficiently justified.
We chose these genes because SPLASH+ detected regulated splicing for them in nearly all tissues (18 out of 19) analyzed in our study (i.e., anchors classified as splicing anchors were identified for these genes in those tissues). Our subsequent analysis showed that all of these genes are involved in either splicing regulation or histone modification. We will further clarify this selection criterion in the revision.
(4) It is currently unclear how the splicing diversity discovered in this paper relates to the concept of noisy splicing (i.e., there are likely many low-frequency transcripts and splice junctions that are unlikely to have a significant functional impact beyond triggering nonsense-mediated decay).
In our analysis, to ensure sufficient read coverage, we considered significant anchors supported by more than 50 reads and detected in over 10 cells. Additionally, our downstream analyses (including splicing analysis) are based on assembled sequences (compactors) generated through our micro-assembly step. This process effectively acts as a denoising step by filtering out sequences likely caused by sequencing errors or with very low read support. However, we agree that the detected splice variants have not been fully functionally characterized, and further functional experiments may be needed.
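For illustration only, a minimal sketch of how such a support filter could be applied to a table of significant anchors; the column names are hypothetical and are not taken from the SPLASH+ implementation:

```python
import pandas as pd

# Hypothetical table of significant anchors (columns are illustrative only).
anchors = pd.DataFrame({
    "anchor":    ["ACGT...", "TTGA...", "GCCA..."],
    "num_reads": [120, 34, 510],   # total reads supporting each anchor
    "num_cells": [25, 6, 80],      # number of cells in which the anchor is detected
})

# Keep anchors supported by more than 50 reads and detected in more than 10 cells,
# mirroring the coverage criteria described above.
well_supported = anchors[(anchors["num_reads"] > 50) & (anchors["num_cells"] > 10)]
print(well_supported)
```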
(5) The paper presents only a very superficial discussion of the potential weaknesses of the SPLASH+ method.
We discussed two potential limitations of SPLASH+ in the Conclusions section: (1) it is not suitable for differential gene expression analysis, and (2) although we provide a framework for interpreting and analyzing SPLASH results, further work is still needed to improve the annotation of calls lacking BLAST matches. We will expand the discussion of these limitations in the revision.
(6) The cursory mention of metatranscriptome in the conclusion of the paper is confusing, as it might suggest the presence of microbial cells in sterile human tissues (which has recently been discredited in cancer, see e.g. https://www.science.org/content/article/journal-retracts-influential-cancer-microbiome-paper).
We will remove the mention of metatranscriptome in the revised manuscript.
Reviewer #2 (Public review):
The authors extend their SPLASH framework with single-cell RNA-seq in mind, in two ways. First, they introduce "compactors", which are possible paths branching out from an anchor. Second, they introduce a workflow to classify compactors according to the type of biological sequence variation represented (splicing, SNV, etc). They focus on simulated data for fusion detection, and then focus on analyzing the Tabula sapiens Smart-seq2 data, showing extensive results on alternative splicing analysis, VDJ, and repeat elements.
This is strong work with an impressive array of biological investigations and results for a methods paper. I have various concerns about terminology and comparisons, as follows (in a somewhat arbitrary order, apologies).
Thank you for this thorough overview of our work and your positive comment on the strength of our work.
(1) The discussion of the weaknesses of the consensus sequence approach of SPLASH is an odd way to motivate SPLASH+ in my opinion, in that SPLASH is not yet so widely used, so the baseline for SPLASH+ is really standard alignment-based approaches. It is fine to mention consensus sequence issues briefly, but it felt belabored.
We thank the reviewer and agree that the primary comparison for SPLASH+ is with reference-based methods. However, since SPLASH+ builds upon SPLASH, we also aimed to highlight the limitations of the consensus step in the original SPLASH and how SPLASH+ addresses them. To keep the main focus of the paper on the comparison with reference-based methods and the biological investigations, the consensus comparison was confined to a Supplementary Figure. We will shorten this discussion in the revision.
(2) Regarding compactors reducing alignment cost: the comparison should really be between compactor construction and alignment vs read alignment (and maybe vs modern contig construction algorithms and alignment).
Since the SPLASH framework is fundamentally reference-free and does not require read alignment, we compared the number of sequence alignments needed for compactors against the total read alignments required by a reference-based method. This comparison shows that, although compactors are aligned to the reference for interpretation, the number of alignments needed is orders of magnitude smaller than in a reference-based approach that must align every read.
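As a back-of-the-envelope illustration (the numbers below are invented for exposition and are not measurements from the manuscript), aligning a handful of compactors per significant anchor instead of every read reduces the number of alignment calls by roughly three orders of magnitude:

```python
# Hypothetical numbers for illustration only.
total_reads = 2_000_000_000       # reads in a large Smart-seq2 dataset
significant_anchors = 500_000     # anchors called by SPLASH
compactors_per_anchor = 4         # assembled sequences per anchor

alignments_reference_based = total_reads
alignments_compactor_based = significant_anchors * compactors_per_anchor

fold_reduction = alignments_reference_based / alignments_compactor_based
print(f"~{fold_reduction:,.0f}x fewer alignment calls")  # ~1,000x with these numbers
```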
(3) The language around "compactors" is a bit confusing, where the authors sometimes refer to the tree of possibilities from an anchor as a "compactor", and sometimes a compactor is a single branch. Presumably, ideally, compactors should be DAGs, not trees, i.e., they can connect back together. Perhaps the authors could comment on whether this matters/would be a valuable extension.
We thank the reviewer for their comment. We refer to each generated assembled sequence as “a compactor”, and we attempted to make this clear in the paper. We will review the text further to ensure this definition is clear in the revised version.
(4) The main oddness of the splicing analysis to me is not using cell-type/state in any way in the statistical testing. This need not be discrete cell types: psiX, for example, tested whether exonic PSI was variable with reference to a continuous gene expression embedding. Intuitively, such transcriptome-wide signal should be valuable for a) improving power and b) distinguishing cell-type intrinsic/"noisy" from cell-type specific splicing variation. A straightforward way of doing this would be pseudobulking cell types. Possibly a more sophisticated hierarchical model could be constructed also.
We appreciate the reviewer’s concern regarding SPLASH+ not using cell type metadata. SPLASH, which performs the core statistical inference in SPLASH+, is an unsupervised tool specifically designed to make biological discoveries without relying on metadata (such as cell type annotations in scRNA-Seq). This is particularly useful in scRNA-Seq, where cell type labels may be missing or imprecise, or may obscure important within-cell-type variation. As shown in the paper, even without using metadata, SPLASH+ outperformed both SpliZ and LeafCutter (two metadata-dependent tools), achieving higher concordance and identifying more differentially spliced genes. Regarding pseudobulking, as shown in the SpliZ paper (https://doi.org/10.1038/s41592-022-01400-x), reliable inference from pseudobulked data requires multiple pseudobulked replicates per cell type, which is often not feasible in scRNA-Seq settings, making such methods statistically suboptimal for single-cell studies. We will add a discussion of pseudobulking in the revision.
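For concreteness, pseudobulking simply aggregates per-cell counts within each annotated cell type (and replicate, e.g., donor) before testing; the sketch below, with hypothetical column names, makes the dependence on cell-type labels and on having multiple replicates explicit:

```python
import pandas as pd

# Hypothetical per-cell junction counts; SPLASH+ itself operates on raw reads,
# not on a table like this.
cells = pd.DataFrame({
    "cell_type":  ["T cell", "T cell", "B cell", "B cell"],
    "donor":      ["D1", "D2", "D1", "D2"],
    "junction_A": [10, 7, 1, 0],
    "junction_B": [2, 3, 9, 12],
})

# Pseudobulk: sum counts per cell type and donor. Reliable inference requires
# several such pseudobulk replicates (e.g., donors) per cell type.
pseudobulk = cells.groupby(["cell_type", "donor"]).sum(numeric_only=True)
print(pseudobulk)
```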
(5) A secondary weakness is that some informative reads will not be used, for example, unspliced reads aligning to alternative exons. This relates to the broader weakness of SPLASH that it is blind to changes in coverage that are not linked to a specific anchor (which should be acknowledged somewhere, maybe in the Discussion). In the deeply sequenced SS2 data, this is likely not an issue, but might be more limiting in sparser data. A related issue is that coverage changes indicative of, e.g., alternative TSS or TES (that do not also involve a change in splice junction use) will not be detected. In fairness, all these weaknesses are shared by LeafCutter. It would be valuable to have a comparison to a more "traditional" splicing analysis approach (pick your favorite of rMATS, MISO, SUPPA).
We thank the reviewer for their comment. As noted in the Conclusion, the SPLASH framework is not designed for differential gene expression analysis, which relies on quantifying read coverage. Rather, it focuses on detecting differential sequence diversity arising from mechanisms like alternative splicing or RNA editing. We will clarify this limitation further in the revised Conclusion.
Regarding splicing evaluation, we have performed extensive comparisons with two widely used and recent methods, SpliZ and LeafCutter, for both bulk and single-cell splicing analysis. While we appreciate the reviewer’s suggestion to include an additional method, given the current length of the paper and the fact that LeafCutter has previously been shown to outperform rMATS, MAJIQ, and Cufflinks2 (https://www.nature.com/articles/s41588-017-0004-9), we believe the current comparisons provide sufficient support for the evaluation of splicing detection by SPLASH+.
(6) "We should note that there is no difference between gene fusions and other RNA variants (e.g., RNA splicing) from a sequence assembly viewpoint". Maybe this is true in an abstract sense, but I don't think it is in reality. AS can produce hundreds of isoforms from the same gene, and be variable across individual cells. Gene fusions are generally less numerous/varied and will be shared across clonal populations, so the complexity is lower. That simplicity is balanced against the challenge that any genes could, in principle, fuse.
We selected the fusion benchmarking dataset solely to evaluate how well compactors reconstruct sequences. Since our goal was to assess the accuracy of reconstructed compactor sequences, we needed a benchmarking dataset with ground-truth sequences, which this dataset provides. We explained our main reason for selecting the fusion dataset in the text, but we will clarify it further in the revision.
(7) For the fusion detection assessment, SPLASH+ is given the correct anchor for detection. This feels like cheating since this information wouldn't usually be available. Can the authors motivate this? Are the other methods given comparable information? Also, TPM>100 seems like a very high expression threshold for the assessment.
We agree with the reviewer that the fusion benchmarking dataset should not be used to assess the entire SPLASH+ framework. In fact, we did not use this dataset to evaluate SPLASH+; it was used exclusively to evaluate the performance of compactors as a standalone module. Specifically, we tested how well compactors can reconstruct fusion sequences when provided with seed sequences corresponding to fusion junctions. This mirrors the role compactors play in SPLASH+: correctly reconstructing the sequence context of detected anchors. As noted in our previous response, since our goal was to assess the accuracy of reconstructed compactor sequences, we required a benchmarking dataset with ground-truth sequences, which this dataset provides. We will clarify this further in the revision.
We appreciate the reviewer’s concern that a TPM of 100 is high. In Figure 1C, we presented the full TPM distribution for fusions missed or detected by compactors. The threshold of 100 was chosen only as a convenient reference point to illustrate the clear difference in TPM profiles between these two sets of fusions. We will clarify this point in the revised manuscript.
(8) Why are only 3'UTRs considered and not 5'? Is this because the analysis is asymmetric, i.e., only considering upstream anchors and downstream variation? If so, that seems like a limitation: how much additional variation would you find if including the other direction?
We thank the reviewer for their comment. SPLASH+ can, in principle, detect variation in 5′ UTRs, as demonstrated by the variation observed in the 5′ UTRs of the genes ANPC16 and ARPC2. Because SPLASH+ parses sequencing reads directly to find anchors with downstream sequence diversity, variation in the 5′ UTR is detected whenever an anchor can be placed upstream of the variable region. Even when the variation occurs near the 5′ end of the 5′ UTR, this diversity can still be captured if the user selects a shorter anchor length.
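To make the directionality concrete, the sketch below shows how an anchor and its downstream target can be extracted from a read for a user-chosen anchor length; the function name, parameter names, and defaults are illustrative and do not reproduce the SPLASH+ implementation:

```python
def split_read(read: str, anchor_len: int = 27, target_len: int = 27, offset: int = 0):
    """Return (anchor, target) from a read: a fixed-length anchor followed,
    after an optional offset, by the downstream target whose diversity is tested."""
    anchor = read[:anchor_len]
    target_start = anchor_len + offset
    target = read[target_start:target_start + target_len]
    if len(anchor) < anchor_len or len(target) < target_len:
        return None  # read too short for this anchor/target length
    return anchor, target

# A shorter anchor leaves more of the read available as target sequence, which is
# why variation close to the 5' end of a transcript can still be captured.
print(split_read("ACGTACGTACGTACGTACGTACGTACGTTTGACCAGTACCAGGATTACAGGAT", anchor_len=20))
```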
(9) I don't find the theoretical results very meaningful. Assuming independent reads (equivalently binomial counts) has been repeatedly shown to be a poor assumption in sequencing data, likely due to various biases, including PCR. This has motivated the use of overdispersed distributions such as the negative binomial and beta-binomial. The theory would be valuable if it could say something at a specified level of overdispersion. If not, the caveat of assuming no overdispersion should be clearly stated.
We appreciate the reviewer’s comment. We will clearly state the assumption of no overdispersion as a caveat of the theoretical results in the revised paper.
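To make the reviewer’s point concrete, the snippet below (purely illustrative and not part of the SPLASH+ theory) compares the variance of a binomial distribution with that of a beta-binomial at the same mean, showing how overdispersion inflates count variance beyond the independent-read assumption:

```python
from scipy.stats import binom, betabinom

n, p = 100, 0.3   # reads observed for an anchor, probability of one target
rho = 0.1         # illustrative overdispersion (intra-class correlation)

# Parameterize the beta-binomial so that its mean is n*p and rho = 1/(a+b+1).
s = 1 / rho - 1   # a + b
a, b = p * s, (1 - p) * s

print("binomial variance:     ", binom.var(n, p))         # n*p*(1-p) = 21
print("beta-binomial variance:", betabinom.var(n, a, b))  # n*p*(1-p)*(1+(n-1)*rho) ≈ 229
```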