Unsupervised discovery of temporal sequences in high-dimensional datasets, with applications to neuroscience
Abstract
Identifying low-dimensional features that describe large-scale neural recordings is a major challenge in neuroscience. Repeated temporal patterns (sequences) are thought to be a salient feature of neural dynamics, but are not succinctly captured by traditional dimensionality reduction techniques. Here, we describe a software toolbox—called seqNMF—with new methods for extracting informative, non-redundant sequences from high-dimensional neural data, testing the significance of these extracted patterns, and assessing the prevalence of sequential structure in data. We test these methods on simulated data under multiple noise conditions, and on several real neural and behavioral datasets. In hippocampal data, seqNMF identifies neural sequences that match those calculated manually by reference to behavioral events. In songbird data, seqNMF discovers neural sequences in untutored birds that lack stereotyped songs. Thus, by identifying temporal structure directly from neural data, seqNMF enables dissection of complex neural circuits without relying on temporal references from stimuli or behavioral outputs.
https://doi.org/10.7554/eLife.38471.001

Introduction
The ability to detect and analyze temporal sequences embedded in a complex sensory stream is an essential cognitive function, and as such is a necessary capability of neuronal circuits in the brain (Clegg et al., 1998; Janata and Grafton, 2003; Bapi et al., 2005; Hawkins and Ahmad, 2016), as well as artificial intelligence systems (Cui et al., 2016; Sutskever et al., 2014). The detection and characterization of temporal structure in signals is also useful for the analysis of many forms of physical and biological data. In neuroscience, recent advances in technology for electrophysiological and optical measurements of neural activity have enabled the simultaneous recording of hundreds or thousands of neurons (Chen et al., 2013; Kim et al., 2016; Scholvin et al., 2016; Jun et al., 2017), in which neuronal dynamics are often structured in sparse sequences (Hahnloser et al., 2002; Harvey et al., 2012; MacDonald et al., 2011; Okubo et al., 2015; Fujisawa et al., 2008; Pastalkova et al., 2008). Such sequences can be identified by averaging across multiple trials, but only in cases where an animal receives a temporally precise sensory stimulus, or executes a sufficiently stereotyped behavioral task.
Neural sequences have been hypothesized to play crucial roles over a much broader range of natural settings, including during learning, sleep, or diseased states (Mackevicius and Fee, 2018). In these applications, it may not be possible to use external timing references, either because behaviors are not stereotyped or are entirely absent. Thus, sequences must be extracted directly from the neuronal data using unsupervised learning methods. Commonly used methods of this type, such as principal component analysis (PCA) or clustering methods, do not efficiently extract sequences, because they typically only model synchronous patterns of activity, rather than extended spatiotemporal motifs of firing.
Existing approaches that search for repeating neural patterns require computationally intensive or statistically challenging analyses (Brody, 1999; Mokeichev et al., 2007; Quaglio et al., 2018; Brunton et al., 2016). While progress has been made in analyzing non-synchronous sequential patterns using statistical models that capture cross-correlations between pairs of neurons (Russo and Durstewitz, 2017; Gerstein et al., 2012; Schrader et al., 2008; Torre et al., 2016; Grossberger et al., 2018; van der Meij and Voytek, 2018), such methods may not have the statistical power to scale to patterns that include many (more than a few dozen) neurons, may require long periods ($\geq 10^{5}$ timebins) of stationary data, and may have difficulty dealing with (non-sequential) background activity. For a review highlighting the features and limitations of these methods, see Quaglio et al. (2018).
Here, we explore a complementary approach, which uses matrix factorization to reconstruct neural dynamics using a small set of exemplar sequences. In particular, we build on convolutional nonnegative matrix factorization (convNMF) (Smaragdis, 2004; Smaragdis, 2007) (Figure 1B), which has been previously applied to identify recurring motifs in audio signals such as speech (O’Grady and Pearlmutter, 2006; Smaragdis, 2007; Vaz et al., 2016), as well as neural signals (Peter et al., 2017). ConvNMF identifies exemplar patterns (factors) in conjunction with the times and amplitudes of pattern occurrences. This strategy eliminates the need to average activity aligned to any external behavioral references.
While convNMF may produce excellent reconstructions of the data, it does not automatically produce the minimal number of factors required. Indeed, if the number of factors in the convNMF model is greater than the true number of sequences, the algorithm returns overly complex and redundant factorizations. Moreover, in these cases, the sequences extracted by convNMF will often be inconsistent across optimization runs from different initial conditions, complicating scientific interpretations of the results (Peter et al., 2017; Wu et al., 2016).
To address these concerns, we developed a toolbox of methods, called seqNMF, which includes two different strategies to resolve the problem of redundant factorizations described above. In addition, the toolbox includes methods for promoting potentially desirable features such as orthogonality or sparsity of the spatial and temporal structure of extracted factors, and methods for analyzing the statistical significance and prevalence of the identified sequential structure. To assess these tools, we characterize their performance on synthetic data under a variety of noise conditions and also show that they are able to find sequences in neural data collected from two different animal species using different behavioral protocols and recording technologies. Applied to extracellular recordings from rat hippocampus, seqNMF identifies neural sequences that were previously found by trialaveraging. Applied to functional calcium imaging data recorded in vocal/motor cortex of untutored songbirds, seqNMF robustly identifies neural sequences active in a biologically atypical and overlapping fashion. This finding highlights the utility of our approach to extract sequences without reference to external landmarks; untutored bird songs are so variable that aligning neural activity to song syllables would be difficult and highly subjective.
Results
Matrix factorization framework for unsupervised discovery of features in neural data
Matrix factorization underlies many well-known unsupervised learning algorithms, including PCA (Pearson, 1901), nonnegative matrix factorization (NMF) (Lee and Seung, 1999), dictionary learning, and k-means clustering (see Udell et al., 2016 for a review). We start with a data matrix, $\mathbf{X}$, containing the activity of $N$ neurons at $T$ timepoints. If the neurons exhibit a single repeated pattern of synchronous activity, the entire data matrix can be reconstructed using a column vector $\mathbf{w}$ representing the neural pattern, and a row vector $\mathbf{h}$ representing the times and amplitudes at which that pattern occurs (temporal loadings). In this case, the data matrix $\mathbf{X}$ is mathematically reconstructed as the outer product of $\mathbf{w}$ and $\mathbf{h}$. If multiple component patterns are present in the data, then each pattern can be reconstructed by a separate outer product, where the reconstructions are summed to approximate the entire data matrix (Figure 1A) as follows:

$$\mathbf{X}_{nt} \approx \tilde{\mathbf{X}}_{nt} = \sum_{k=1}^{K} \mathbf{W}_{nk}\,\mathbf{H}_{kt} = (\mathbf{W}\mathbf{H})_{nt}$$
where $\mathbf{X}_{nt}$ is the $(n,t)^{\text{th}}$ element of matrix $\mathbf{X}$, that is, the activity of neuron $n$ at time $t$. Here, in order to store $K$ different patterns, $\mathbf{W}$ is an $N\times K$ matrix containing the $K$ exemplar patterns as its columns, and $\mathbf{H}$ is a $K\times T$ matrix containing the $K$ timecourses as its rows:

$$\mathbf{W} = \begin{bmatrix} \vert & & \vert \\ \mathbf{w}_{1} & \cdots & \mathbf{w}_{K} \\ \vert & & \vert \end{bmatrix}, \qquad \mathbf{H} = \begin{bmatrix} \text{--}\,\mathbf{h}_{1}\,\text{--} \\ \vdots \\ \text{--}\,\mathbf{h}_{K}\,\text{--} \end{bmatrix}$$
Given a data matrix with unknown patterns, the goal of matrix factorization is to discover a small set of patterns, $\mathbf{W}$, and a corresponding set of temporal loading vectors, $\mathbf{H}$, that approximate the data. In the case that the number of patterns, $K$, is sufficiently small (less than $N$ and $T$), this corresponds to a dimensionality reduction, whereby the data is expressed in a more compact form. PCA additionally requires that the columns of $\mathbf{W}$ and the rows of $\mathbf{H}$ be orthogonal. NMF instead requires that the elements of $\mathbf{W}$ and $\mathbf{H}$ be nonnegative. The discovery of unknown factors is often accomplished by minimizing the following cost function, which measures the element-by-element sum of all squared errors between a reconstruction $\tilde{\mathbf{X}}=\mathbf{W}\mathbf{H}$ and the original data matrix $\mathbf{X}$ using the Frobenius norm, $\|\mathbf{M}\|_{F}=\sqrt{\sum_{ij}\mathbf{M}_{ij}^{2}}$:

$$(\mathbf{W}^{*},\mathbf{H}^{*}) = \underset{\mathbf{W},\mathbf{H}}{\arg\min}\; \big\| \tilde{\mathbf{X}} - \mathbf{X} \big\|_{F}^{2} = \underset{\mathbf{W},\mathbf{H}}{\arg\min}\; \big\| \mathbf{W}\mathbf{H} - \mathbf{X} \big\|_{F}^{2}$$
(Note that other loss functions may be substituted if desired, for example to better reflect the noise statistics; see (Udell et al., 2016) for a review). The factors ${\mathbf{\mathbf{W}}}^{*}$ and ${\mathbf{\mathbf{H}}}^{*}$ that minimize this cost function produce an optimal reconstruction ${\stackrel{~}{\mathbf{\mathbf{X}}}}^{*}={\mathbf{\mathbf{W}}}^{*}{\mathbf{\mathbf{H}}}^{*}$. Iterative optimization methods such as gradient descent can be used to search for global minima of the cost function; however, it is often possible for these methods to get caught in local minima. Thus, as described below, it is important to run multiple rounds of optimization to assess the stability/consistency of each model.
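To make the notation concrete, here is a minimal Python sketch (our illustration, not part of the seqNMF code package) of the reconstruction and Frobenius-norm cost on toy data whose dimensions are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: N neurons, T timepoints, K patterns (all illustrative).
N, T, K = 8, 50, 2
W_true = rng.random((N, K))   # exemplar patterns (columns of W)
H_true = rng.random((K, T))   # temporal loadings (rows of H)
X = W_true @ H_true           # data built exactly from the K patterns

def frobenius_cost(X, W, H):
    """Element-by-element sum of squared errors between WH and X."""
    return np.linalg.norm(X - W @ H, ord='fro') ** 2

# The generating factors reconstruct X perfectly, so the cost is zero;
# any other factorization of the same rank can only do as well or worse.
perfect_cost = frobenius_cost(X, W_true, H_true)
```

In practice the minimization is run from many random initializations of $\mathbf{W}$ and $\mathbf{H}$, precisely because of the local minima mentioned above.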
While this general strategy works well for extracting synchronous activity, it is unsuitable for discovering temporally extended patterns—first, because each element in a sequence must be represented by a different factor, and second, because NMF assumes that the columns of the data matrix are independent ‘samples’ of the data, so permutations in time have no effect on the factorization of a given dataset. It is therefore necessary to adopt a different strategy for temporally extended features.
Convolutional matrix factorization
Convolutional nonnegative matrix factorization (convNMF) (Smaragdis, 2004; Smaragdis, 2007) extends NMF to provide a framework for extracting temporal patterns, including sequences, from data. While in classical NMF each factor $\mathbf{W}$ is represented by a single vector (Figure 1A), the factors $\mathbf{W}$ in convNMF represent patterns of neural activity over a brief period of time. Each pattern is stored as an $N\times L$ matrix, $\mathbf{w}_{k}$, where each column (indexed by $\ell=1$ to $L$) indicates the activity of neurons at different time lags within the pattern (Figure 1B). The times at which this pattern/sequence occurs are encoded in the row vector $\mathbf{h}_{k}$, as for NMF. The reconstruction is produced by convolving the $N\times L$ pattern with the time series $\mathbf{h}_{k}$ (Figure 1B).
If the data contains multiple patterns, each pattern is captured by a different $N\times L$ matrix and a different associated time series vector $\mathbf{h}$. A collection of $K$ different patterns can be compiled together into an $N\times K\times L$ array (also known as a tensor), $\mathbf{W}$, and a corresponding $K\times T$ time series matrix $\mathbf{H}$. Analogous to NMF, convNMF generates a reconstruction of the data as a sum of $K$ convolutions between each neural activity pattern ($\mathbf{W}$) and its corresponding temporal loadings ($\mathbf{H}$):

$$\mathbf{X}_{nt} \approx \tilde{\mathbf{X}}_{nt} = \sum_{k=1}^{K}\sum_{\ell=1}^{L} \mathbf{W}_{nk\ell}\,\mathbf{H}_{k(t-\ell+1)} = (\mathbf{W}\circledast\mathbf{H})_{nt}$$
The tensor/matrix convolution operator $\circledast$ (notation summary, Table 1) reduces to matrix multiplication in the $L=1$ case, which is equivalent to standard NMF. The quality of this reconstruction can be measured using the same cost function shown in Equation 3, and $\mathbf{W}$ and $\mathbf{H}$ may be found iteratively using multiplicative gradient descent updates similar to those of standard NMF (Lee and Seung, 1999; Smaragdis, 2004; Smaragdis, 2007).
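In code, the tensor/matrix convolution can be written as a sum of time-shifted outer products. The sketch below is our own Python translation (the seqNMF toolbox itself is written in MATLAB); it reduces to the matrix product $\mathbf{W}\mathbf{H}$ when $L=1$:

```python
import numpy as np

def conv_reconstruct(W, H):
    """ConvNMF reconstruction: X~[n, t] = sum_k sum_l W[n, k, l] * H[k, t - l].

    W : (N, K, L) array of K exemplar patterns, each N neurons x L lags
    H : (K, T) matrix of temporal loadings
    """
    N, K, L = W.shape
    T = H.shape[1]
    X = np.zeros((N, T))
    for k in range(K):
        for l in range(L):
            # shift h_k right by l timebins, weighted by column l of pattern k
            X[:, l:] += np.outer(W[:, k, l], H[k, :T - l])
    return X
```

Setting `L = 1` recovers ordinary NMF reconstruction, which is a useful sanity check on any implementation of the operator.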
While convNMF can perform extremely well at reconstructing sequential structure, it can be challenging to use when the number of sequences in the data is not known (Peter et al., 2017). In this case, a reasonable strategy would be to choose $K$ at least as large as the number of sequences that one might expect in the data. However, if $K$ is greater than the actual number of sequences, convNMF often identifies more significant factors than are minimally required. This is because each sequence in the data may be approximated equally well by a single sequential pattern or by a linear combination of multiple partial patterns. A related problem is that running convNMF from different random initial conditions produces inconsistent results, finding different combinations of partial patterns on each run (Peter et al., 2017). These inconsistency errors fall into three main categories (Figure 1C):
Type 1: Two or more factors are used to reconstruct the same instances of a sequence.
Type 2: Two or more factors are used to reconstruct temporally different parts of the same sequence, for instance the first half and the second half.
Type 3: Duplicate factors are used to reconstruct different instances of the same sequence.
Together, these inconsistency errors manifest as strong correlations between different redundant factors, as seen in the similarity of their temporal loadings ($\mathbf{\mathbf{H}}$) and/or their exemplar activity patterns ($\mathbf{\mathbf{W}}$).
We next describe two strategies for overcoming the redundancy errors described above. Both strategies build on previous work that reduces correlations between factors in NMF. The first strategy is based on regularization, a common technique in optimization that allows the incorporation of constraints or additional information with the goal of improving generalization performance or simplifying solutions to resolve degeneracies (Hastie et al., 2009). A second strategy directly estimates the number of underlying sequences by minimizing a measure of correlations between factors (stability NMF; Wu et al., 2016).
Optimization penalties to reduce redundant factors
To reduce the occurrence of redundant factors (and inconsistent factorizations) in convNMF, we sought a principled way of penalizing the correlations between factors by introducing a penalty term, $\mathcal{R}$, into the convNMF cost function:

$$(\mathbf{W}^{*},\mathbf{H}^{*}) = \underset{\mathbf{W},\mathbf{H}}{\arg\min}\; \big\| \tilde{\mathbf{X}} - \mathbf{X} \big\|_{F}^{2} + \mathcal{R}$$
Regularization has previously been used in NMF to address the problem of duplicated factors, which, similar to Type 1 errors above, present as correlations between the $\mathbf{\mathbf{H}}$’s (Choi, 2008; Chen and Cichocki, 2004). Such correlations are measured by computing the correlation matrix ${\mathbf{\mathbf{H}\mathbf{H}}}^{\top}$, which contains the correlations between the temporal loadings of every pair of factors. The regularization may be implemented using the penalty term $\mathcal{R}=\lambda {\parallel {\mathbf{\mathbf{H}\mathbf{H}}}^{\top}\parallel}_{1,i\ne j}$, where the seminorm $\parallel \cdot {\parallel}_{1,i\ne j}$ sums the absolute value of every matrix entry except those along the diagonal (notation summary, Table 1) so that correlations between different factors are penalized, while the correlation of each factor with itself is not. Thus, during the minimization process, similar factors compete, and a larger amplitude factor drives down the temporal loading of a correlated smaller factor. The parameter $\lambda $ controls the magnitude of the penalty term $\mathcal{R}$.
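As a concrete illustration (a minimal sketch, not the toolbox code), the penalty can be computed by summing the absolute values of $\mathbf{H}\mathbf{H}^{\top}$ and excluding its diagonal:

```python
import numpy as np

def offdiag_l1(M):
    """Semi-norm ||M||_{1, i != j}: sum of |entries|, excluding the diagonal."""
    return np.abs(M).sum() - np.abs(np.diag(M)).sum()

def h_penalty(H, lam):
    """R = lam * ||H H^T||_{1, i != j}: penalizes correlations between the
    temporal loadings of different factors, but not of each factor with itself."""
    return lam * offdiag_l1(H @ H.T)

# Orthogonal loadings incur no penalty; duplicated loadings do.
H_ortho = np.array([[1., 0., 0.],
                    [0., 1., 0.]])
H_dup   = np.array([[1., 0., 0.],
                    [1., 0., 0.]])
```

Here `h_penalty(H_ortho, 1.0)` is zero while `h_penalty(H_dup, 1.0)` is positive, which is exactly the competition between similar factors described above.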
In convNMF, a penalty term based on $\mathbf{H}\mathbf{H}^{\top}$ yields an effective method to prevent errors of Type 1, because it penalizes the associated zero-lag correlations. However, it does not prevent errors of the other types, which exhibit different types of correlations. For example, Type 2 errors result in correlated temporal loadings that have a small temporal offset and thus are not detected by $\mathbf{H}\mathbf{H}^{\top}$. One simple way to address this problem is to smooth the $\mathbf{H}$’s in the penalty term with a square window of length $2L-1$ using the smoothing matrix $\mathbf{S}$ ($\mathbf{S}_{ij}=1$ when $|i-j|<L$ and otherwise $\mathbf{S}_{ij}=0$). The resulting penalty, $\mathcal{R}=\lambda \|\mathbf{H}\mathbf{S}\mathbf{H}^{\top}\|_{1,i\ne j}$, allows factors with small temporal offsets to compete, effectively preventing errors of Types 1 and 2.
This penalty does not prevent errors of Type 3, in which redundant factors with highly similar patterns in $\mathbf{W}$ are used to explain different instances of the same sequence. Such factors have temporal loadings that are segregated in time, and thus have low correlations, to which the cost term $\|\mathbf{H}\mathbf{S}\mathbf{H}^{\top}\|_{1,i\ne j}$ is insensitive. One way to resolve errors of Type 3 might be to include an additional cost term that penalizes the similarity of the factor patterns in $\mathbf{W}$. This approach has the disadvantage of requiring an extra parameter, namely the $\lambda$ associated with this cost.
Instead, we chose an alternative approach to resolve errors of Type 3 that simultaneously detects correlations in $\mathbf{W}$ and $\mathbf{H}$ using a single cross-orthogonality cost term. We note that, for Type 3 errors, redundant $\mathbf{W}$ patterns have a high degree of overlap with the data at the same times, even though their temporal loadings are segregated at different times. To introduce competition between these factors, we first compute, for each pattern in $\mathbf{W}$, its overlap with the data at time $t$. This quantity is captured in symbolic form by $\mathbf{W}\stackrel{\top}{\circledast}\mathbf{X}$ (see Table 1). We then compute the pairwise correlation between the temporal loading of each factor and the overlap of every other factor with the data. This cross-orthogonality penalty term, which we refer to as ‘xortho’, sums these correlations across all pairs of factors, implemented as follows:

$$\mathcal{R} = \lambda \left\| (\mathbf{W}\stackrel{\top}{\circledast}\mathbf{X})\,\mathbf{S}\,\mathbf{H}^{\top} \right\|_{1,i\ne j}$$
When incorporated into the update rules, this causes any factor that has a high overlap with the data to suppress the temporal loadings ($\mathbf{H}$) of any other factors that have high overlap with the data at that time (further analysis, Appendix 2). Thus, factors compete to explain each feature of the data, favoring solutions that use a minimal set of factors to give a good reconstruction. The resulting global cost function is:

$$(\mathbf{W}^{*},\mathbf{H}^{*}) = \underset{\mathbf{W},\mathbf{H}}{\arg\min}\; \big\| \tilde{\mathbf{X}} - \mathbf{X} \big\|_{F}^{2} + \lambda \left\| (\mathbf{W}\stackrel{\top}{\circledast}\mathbf{X})\,\mathbf{S}\,\mathbf{H}^{\top} \right\|_{1,i\ne j}$$
The update rules for $\mathbf{W}$ and $\mathbf{H}$ are based on the derivatives of this global cost function, leading to a simple modification of the standard multiplicative update rules used for NMF and convNMF (Lee and Seung, 1999; Smaragdis, 2004; Smaragdis, 2007) (Table 2). Note that the addition of this cross-orthogonality term does not formally constitute regularization, because it also includes a contribution from the data matrix $\mathbf{X}$, rather than just the model variables $\mathbf{W}$ and $\mathbf{H}$. However, at least for the case in which the data is well reconstructed by the sum of all factors, the xortho penalty can be shown to be approximated by a formal regularization (Appendix 2). This formal regularization contains both a term corresponding to a weighted (smoothed) orthogonality penalty on $\mathbf{W}$ and a term corresponding to a weighted (smoothed) orthogonality penalty on $\mathbf{H}$, consistent with the observation that the xortho penalty simultaneously punishes factor correlations in $\mathbf{W}$ and $\mathbf{H}$.
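The pieces of the xortho penalty can be sketched numerically as follows (our illustrative Python; the toolbox folds the same quantity into its MATLAB multiplicative updates, which we do not reproduce here):

```python
import numpy as np

def transpose_conv(W, X):
    """(W transposed-convolved with X)[k, t] = sum_n sum_l W[n,k,l] * X[n,t+l]:
    the overlap of each exemplar pattern with the data starting at time t."""
    N, K, L = W.shape
    T = X.shape[1]
    out = np.zeros((K, T))
    for l in range(L):
        out[:, :T - l] += W[:, :, l].T @ X[:, l:]
    return out

def xortho_penalty(W, H, X, S, lam):
    """xortho penalty: lam * ||(W transposed-conv X) S H^T||_{1, i != j},
    pairing each factor's data overlap with every OTHER factor's loadings."""
    M = transpose_conv(W, X) @ S @ H.T
    return lam * (np.abs(M).sum() - np.abs(np.diag(M)).sum())
```

With a single factor the off-diagonal sum is empty and the penalty vanishes; duplicating that factor immediately makes it positive, which is the competition the text describes.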
There is an interesting relation between our method for penalizing correlations and other methods for constraining optimization, namely sparsity. Because of the nonnegativity constraint imposed in NMF, correlations can also be reduced by increasing the sparsity of the representation. Previous efforts have been made to minimize redundant factors using sparsity constraints; however, this approach may require penalties on both $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$, necessitating the selection of two hyperparameters (${\lambda}_{w}$ and ${\lambda}_{h}$) (Peter et al., 2017). Since the use of multiple penalty terms increases the complexity of model fitting and selection of parameters, one goal of our work was to design a simple, single penalty function that could regularize both $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$ simultaneously. The xortho penalty described above serves this purpose (Equation 6). As we will describe below, the application of sparsity penalties can be very useful for shaping the factors produced by convNMF, and our code includes options for applying sparsity penalties on both $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$.
Extracting ground-truth sequences with the xortho penalty when the number of sequences is not known
We next examined the effect of the xortho penalty on factorizations of sequences in simulated data, with a focus on convergence, consistency of factorizations, the ability of the algorithm to discover the correct number of sequences in the data, and robustness to noise (Figure 2A). We first assessed the model’s ability to extract three ground-truth sequences, each lasting 30 timesteps and containing 10 neurons, in the absence of noise (Figure 2A). The resulting data matrix had a total duration of 15,000 timesteps and contained on average 60±6 instances of each sequence. Neural activation events were represented with an exponential kernel to simulate calcium imaging data. The algorithm was run with the xortho penalty for 1000 iterations, and it reliably converged to a root-mean-squared error (RMSE) close to zero (Figure 2B). RMSE reached a level within 10% of the asymptotic value in approximately 100 iterations.
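A toy generator in this spirit might look like the sketch below. The function name and all parameter values are our own illustrative choices, not the paper's simulation code; only the overall recipe (fixed latencies within each sequence, random occurrence times, exponential calcium-like kernel) follows the description above.

```python
import numpy as np

def simulate_sequences(n_seq=3, n_per=10, seq_len=30, T=15000,
                       rate=0.004, tau=5.0, seed=0):
    """Toy data in the spirit of Figure 2A: n_seq ground-truth sequences,
    each spanning n_per neurons over seq_len timesteps, occurring at random
    times, then convolved with an exponential kernel to mimic calcium imaging."""
    rng = np.random.default_rng(seed)
    N = n_seq * n_per
    X = np.zeros((N, T))
    for s in range(n_seq):
        # random occurrence times for this sequence (~rate * T instances)
        onsets = np.flatnonzero(rng.random(T - seq_len) < rate)
        for t0 in onsets:
            for j in range(n_per):
                lag = j * seq_len // n_per   # fixed latency within the pattern
                X[s * n_per + j, t0 + lag] += 1.0
    kernel = np.exp(-np.arange(20) / tau)    # exponential calcium kernel
    return np.apply_along_axis(lambda r: np.convolve(r, kernel)[:T], 1, X)

X = simulate_sequences()  # (30, 15000) nonnegative data matrix
```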
While similar RMSE values were achieved using convNMF with and without the xortho penalty, the addition of this penalty allowed the three ground-truth sequences to be robustly extracted into three separate factors ($\mathbf{w}_{1}$, $\mathbf{w}_{2}$, and $\mathbf{w}_{3}$ in Figure 2A), so long as $K$ was chosen to be larger than the true number of sequences. In contrast, convNMF with no penalty converged to inconsistent factorizations from different random initializations when $K$ was chosen to be too large, due to the ambiguities described in Figure 1. We quantified the consistency of each model (see Materials and methods), and found that factorizations using the xortho penalty demonstrated near-perfect consistency across different optimization runs (Figure 2C).
We next evaluated the performance of convNMF with and without the xortho penalty on datasets with a larger number of sequences. In particular, we set out to observe the effect of the xortho penalty on the number of statistically significant factors extracted. Statistical significance was determined based on the overlap of each extracted factor with held-out data (see Materials and methods and code package). With the penalty term, the number of significant sequences closely matched the number of ground-truth sequences. Without the penalty, all 20 extracted sequences were significant by our test (Figure 2D).
We next considered how the xortho penalty performs on sequences with more complex structure than the sparse uniform sequences of activity discussed above. We further examined the case in which a population of neurons is active in multiple different sequences. Such neurons that are shared across different sequences have been observed in several neuronal datasets (Okubo et al., 2015; Pastalkova et al., 2008; Harvey et al., 2012). For one test, we constructed two sequences in which shared neurons were active at a common pattern of latencies in both sequences; in another test, shared neurons were active in a different pattern of latencies in each sequence. In both tests, factorizations using the xortho penalty achieved near-perfect reconstruction error, and consistency was similar to the case with no shared neurons (Figure 2E,F). We also examined other types of complex structure and found that the xortho penalty performs well in data with large gaps between activity or with large overlaps of activity between neurons in the sequence. This approach also worked well in cases in which the duration of the activity or the interval between the activity of neurons varied across the sequence (Figure 3—figure supplement 3).
Robustness to noisy data
The cross-orthogonality penalty performed well in the presence of types of noise commonly found in neural data. In particular, we considered: participation noise, in which individual neurons participate probabilistically in instances of a sequence; additive noise, in which neuronal events occur randomly outside of normal sequence patterns; temporal jitter, in which the timing of individual neurons is shifted relative to their typical time in a sequence; and finally, temporal warping, in which each instance of the sequence occurs at a different randomly selected speed. To test the robustness of the algorithm with the xortho penalty to each of these noise conditions, we factorized data containing three neural sequences at a variety of noise levels (Figure 3, top row). The value of $\lambda$ was chosen using methods described in the next section. Factorizations with the xortho penalty proved relatively robust to all four noise types, with a high probability of returning the correct number of significant factors (Figure 4—figure supplement 5). Furthermore, under low-noise conditions, the algorithm produced factors that were highly similar to ground truth, and this similarity declined gracefully at higher noise levels (Figure 3). Visualization of the extracted factors revealed a good qualitative match to ground-truth sequences even in the presence of high noise, except for the case of temporal jitter (Figure 3). We also found that the xortho penalty allows reliable extraction of sequences in which the duration of each neuron’s activity exhibits substantial random variation across different renditions of the sequence, and in which the temporal jitter of neural activity exhibits systematic variation at different points in the sequence (Figure 3—figure supplement 3).
Finally, we wondered how our approach with the xortho penalty performs on datasets with only a small number of instances of each sequence. We generated data containing different numbers of repetitions, ranging from 1 to 20, of each underlying ground-truth sequence. For intermediate levels of additive noise, we found that three repetitions of each sequence were sufficient to correctly extract factors, with similarity scores close to those obtained with much larger numbers of repetitions (Figure 3—figure supplement 2).
Methods for choosing an appropriate value of $\lambda $
The xortho penalty performs best when the strength of the regularization term (determined by the hyperparameter $\lambda$) is chosen appropriately. For $\lambda$ too small, the behavior of the algorithm approaches that of convNMF, producing a large number of redundant factors with high xortho cost. For $\lambda$ too large, all but one of the factors are suppressed to zero amplitude, resulting in a factorization with near-zero xortho cost, but with large reconstruction error if multiple sequences are present in the data. Between these extremes, there exists a region in which increasing $\lambda$ produces a rapidly increasing reconstruction error and a rapidly decreasing xortho cost. Thus, there is a single point, which we term $\lambda_{0}$, at which changes in reconstruction cost and changes in xortho cost are balanced (Figure 4A). We hypothesized that the optimal choice of $\lambda$ (i.e. the one producing the correct number of ground-truth factors) would lie near this point.
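One way to locate this balance point numerically, assuming one has already swept $\lambda$ and recorded both costs, is to normalize the two cost curves and find where they cross. This is only our sketch of the heuristic (the seqNMF package supplies its own routine, and the function name here is illustrative):

```python
import numpy as np

def find_lambda0(lambdas, recon_cost, xortho_cost):
    """Balance point between reconstruction cost and xortho cost.

    lambdas, recon_cost, xortho_cost : 1-D arrays from a lambda sweep
    (one factorization per value, lambdas sorted ascending). Both cost
    curves are min-max normalized so they are comparable; lambda_0 is
    taken as the first lambda at which the rising normalized
    reconstruction cost overtakes the falling normalized xortho cost.
    """
    r = (recon_cost - recon_cost.min()) / (recon_cost.max() - recon_cost.min())
    x = (xortho_cost - xortho_cost.min()) / (xortho_cost.max() - xortho_cost.min())
    return lambdas[np.argmax(r > x)]   # first index past the crossing
```

In practice one would then run the final factorization with $\lambda$ a small multiple of the returned value, per the ranges reported below.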
To test this intuition, we examined the performance of the xortho penalty as a function of $\lambda$ in noisy synthetic data consisting of three non-overlapping sequences (Figure 4A). Our analysis revealed that, overall, values of $\lambda$ between $2\lambda_{0}$ and $5\lambda_{0}$ performed well for these data across all noise types and levels (Figure 4B,C). In general, near-optimal performance was observed over an order-of-magnitude range of $\lambda$ (Figure 1). However, there were systematic variations depending on noise type: for additive noise, performance was better when $\lambda$ was closer to $\lambda_{0}$, while with other noise types, performance was better at somewhat higher values of $\lambda$ ($\approx 10\lambda_{0}$).
Similar ranges of $\lambda $ appeared to work for datasets with different numbers of groundtruth sequences—for the datasets used in Figure 2D, a range of $\lambda $ between 0.001 and 0.01 returned the correct number of sequences at least 90% of the time for datasets containing between 1 and 10 sequences (Figure 2—figure supplement 2). Furthermore, this method for choosing $\lambda $ also worked on datasets containing sequences with shared neurons (Figure 4—figure supplement 2).
The value of $\lambda$ may also be determined by cross-validation (see Materials and methods). Indeed, the $\lambda$ chosen with the heuristic described above coincided with a minimum or distinctive feature in the cross-validated test error for all the cases we examined (Figure 4—figure supplement 3). The seqNMF code package accompanying this paper provides functions to determine $\lambda$ either by cross-validation or in reference to $\lambda_{0}$.
Sparsity constraints to reduce redundant factors
One of the advantages of the xortho penalty is that it includes only a single term to penalize correlations between different factors, and thus requires only a single hyperparameter $\lambda $. This contrasts with the approach of incorporating a sparsity constraint on $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$ of the form ${\lambda}_{w}{\parallel \mathbf{\mathbf{W}}\parallel}_{1}+{\lambda}_{h}{\parallel \mathbf{\mathbf{H}}\parallel}_{1}$ (Peter et al., 2017). We have found that the performance of the sparsity approach depends on the correct choice of both hyperparameters ${\lambda}_{w}$ and ${\lambda}_{h}$ (Figure 4—figure supplement 4). Given the optimal choice of these parameters, the L1 sparsity constraint yields an overall performance approximately as good as the xortho penalty (Figure 4—figure supplement 4). However, there are some consistent differences in the performance of the sparsity and xortho approaches depending on noise type; an analysis at moderately high noise levels reveals that the xortho penalty performs slightly better with warping and participation noise, while the L1 sparsity penalty performs slightly better on data with jitter and additive noise (Figure 4—figure supplement 5). However, given the added complexity of choosing two hyperparameters for L1 sparsity, we prefer the xortho approach.
Direct selection of $K$ to reduce redundant factors
An alternative strategy for minimizing redundant factorizations is to estimate the number of underlying sequences and select the appropriate value of $K$. An approach for choosing the number of factors in regular NMF is to run the algorithm many times with different initial conditions, at different values of $K$, and choose the case with the most consistent and uncorrelated factors. This strategy is called stability NMF (Wu et al., 2016) and is similar to other stability-based metrics that have been used in clustering models (von Luxburg, 2010). The stability NMF score, diss, is measured between two factorizations, $\mathbf{F}^{1}=\{\mathbf{W}^{1},\mathbf{H}^{1}\}$ and $\mathbf{F}^{2}=\{\mathbf{W}^{2},\mathbf{H}^{2}\}$, run from different initial conditions:

$$diss(\mathbf{F}^{1},\mathbf{F}^{2}) = \frac{1}{2K}\left(2K - \sum_{j}\max_{i}\mathbf{C}_{ij} - \sum_{i}\max_{j}\mathbf{C}_{ij}\right)$$
where $\mathbf{C}$ is the cross-correlation matrix between the columns of the matrix ${\mathbf{W}}^{1}$ and the columns of the matrix ${\mathbf{W}}^{2}$. Note that diss is low when there is a one-to-one mapping between factors in ${\mathbf{F}}^{1}$ and ${\mathbf{F}}^{2}$, which tends to occur at the correct $K$ in NMF (Wu et al., 2016; Ubaru et al., 2017). NMF is run many times, the diss metric is calculated for all unique pairs of runs, and the best value of $K$ is chosen as the one that yields the lowest average diss.
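As a concrete illustration, the diss computation and the $K$-selection loop can be sketched in NumPy (the released toolbox is MATLAB; the function names here are illustrative):

```python
import numpy as np

def diss(W1, W2):
    """Stability-NMF dissimilarity between two factorizations.

    W1, W2: (N, K) basis matrices from runs with different random
    initializations. Columns are compared via normalized
    cross-correlation; diss is near 0 when a one-to-one matching
    between factors exists.
    """
    # Normalize columns so C is a cross-correlation matrix
    U1 = W1 / (np.linalg.norm(W1, axis=0, keepdims=True) + 1e-12)
    U2 = W2 / (np.linalg.norm(W2, axis=0, keepdims=True) + 1e-12)
    C = U1.T @ U2                      # (K, K) column cross-correlations
    K = C.shape[0]
    return (2 * K - C.max(axis=1).sum() - C.max(axis=0).sum()) / (2 * K)

def choose_K(runs_by_K):
    """Pick the K minimizing average pairwise diss over repeated runs.

    runs_by_K: dict mapping K -> list of (N, K) W matrices.
    """
    avg = {}
    for K, Ws in runs_by_K.items():
        pairs = [(i, j) for i in range(len(Ws)) for j in range(i + 1, len(Ws))]
        avg[K] = np.mean([diss(Ws[i], Ws[j]) for i, j in pairs])
    return min(avg, key=avg.get)
```

Note that diss is invariant to the order of factors, so consistent runs that merely permute factor labels still score near zero.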
To use this approach for convNMF, we needed to slightly modify the stability NMF diss metric. Unlike in NMF, convNMF factors have a temporal degeneracy: one can shift the elements of ${\mathbf{h}}_{k}$ by one time step while shifting the elements of ${\mathbf{w}}_{k}$ by one step in the opposite direction with little change to the model reconstruction. Thus, rather than computing correlations from the factor patterns $\mathbf{W}$ or loadings $\mathbf{H}$, we computed the diss metric using correlations between factor reconstructions (${\stackrel{~}{\mathbf{X}}}_{k}={\mathbf{w}}_{k}⊛{\mathbf{h}}_{k}$):

${\mathbf{C}}_{ij}=\frac{\text{Tr}[{\stackrel{~}{\mathbf{X}}}_{i}^{1}{({\stackrel{~}{\mathbf{X}}}_{j}^{2})}^{\top}]}{\sqrt{\text{Tr}[{\stackrel{~}{\mathbf{X}}}_{i}^{1}{({\stackrel{~}{\mathbf{X}}}_{i}^{1})}^{\top}]\,\text{Tr}[{\stackrel{~}{\mathbf{X}}}_{j}^{2}{({\stackrel{~}{\mathbf{X}}}_{j}^{2})}^{\top}]}}$
where $\text{Tr}[\cdot ]$ denotes the trace operator, $\text{Tr}[\mathbf{M}]={\sum}_{i}{\mathbf{M}}_{ii}$. That is, ${\mathbf{C}}_{ij}$ measures the correlation between the reconstruction of factor $i$ in ${\mathbf{F}}^{1}$ and the reconstruction of factor $j$ in ${\mathbf{F}}^{2}$. Here, as for stability NMF, the approach is to run convNMF many times with different numbers of factors ($K$) and choose the $K$ that minimizes the diss metric.
We evaluated the robustness of this approach on synthetic data under the four noise conditions examined earlier. Synthetic data were constructed with three ground-truth sequences, and 20 convNMF factorizations were carried out for each $K$ ranging from 1 to 10. For each $K$, the average diss metric was computed over all 20 factorizations. In many cases, the average diss metric exhibited a minimum at the ground-truth $K$ (Figure 5—figure supplement 1). As shown below, this method also appears to be useful for identifying the number of sequences in real neural data.
Not only does the diss metric identify factorizations that are highly similar to the ground truth and have the correct number of underlying factors, it also yields factorizations that minimize reconstruction error in held-out data, as shown using the same cross-validation procedure described above (Figure 5, Figure 5—figure supplement 2). For simulated datasets with participation noise, additive noise, and temporal jitter, there is a clear minimum in the test error at the $K$ given by the diss metric. In other cases, there is a distinguishing feature, such as a kink or a plateau, in the test error at this $K$ (Figure 5—figure supplement 2).
Strategies for dealing with ambiguous sequence structure
Some sequences can be interpreted in multiple ways, and these interpretations will correspond to different factorizations. A common example arises when neurons are shared between different sequences, as is shown in Figure 6A and B. In this case, there are two ensembles of neurons (1 and 2) that participate in two different types of events. In one event type, ensemble 1 is active alone, while in the other event type, ensemble 1 is coactive with ensemble 2. There are two reasonable factorizations of these data: in one, the two different ensembles are separated into two different factors, while in the other, the two different event types are separated into two different factors. We refer to these as ‘parts-based’ and ‘events-based’ factorizations, respectively. Note that these different factorizations may correspond to different intuitions about underlying mechanisms. ‘Parts-based’ factorizations will be particularly useful for clustering neurons into ensembles, and ‘events-based’ factorizations will be particularly useful for correlating neural events with behavior.
Here, we show that the addition of penalties on either $\mathbf{W}$ or $\mathbf{H}$ correlations can be used to shape the factorizations of convNMF, with or without the x-ortho penalty, to produce ‘parts-based’ or ‘events-based’ factorizations. Without this additional control, factorizations may be either ‘parts-based’ or ‘events-based’, depending on the initial conditions and the structure of the shared neurons’ activities. This approach works because, in an ‘events-based’ factorization, the $\mathbf{H}$’s are orthogonal (uncorrelated) while the $\mathbf{W}$’s have high overlap; conversely, in a ‘parts-based’ factorization, the $\mathbf{W}$’s are orthogonal while the $\mathbf{H}$’s are strongly correlated. Note that these correlations in $\mathbf{W}$ or $\mathbf{H}$ are unavoidable in the presence of shared neurons, and such correlations do not indicate a redundant factorization. Update rules to implement penalties on correlations in $\mathbf{W}$ or $\mathbf{H}$ are provided in Table 2, with derivations in Appendix 1. Figure 9—figure supplement 2 shows examples of using these penalties on the songbird dataset described in Figure 9.
$L1$ regularization is a widely used strategy for achieving sparse model parameters (Zhang et al., 2016), and has been incorporated into convNMF in the past (O’Grady and Pearlmutter, 2006; Ramanarayanan et al., 2013). In some of our datasets, we found it useful to include $L1$ regularization for sparsity. The multiplicative update rules in the presence of $L1$ regularization are included in Table 2, and as part of our code package. Sparsity on the matrices $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$ may be particularly useful in cases when sequences are repeated rhythmically (Figure 6—figure supplement 1A). For example, the addition of a sparsity regularizer on the $\mathbf{\mathbf{W}}$ update will bias the $\mathbf{\mathbf{W}}$ exemplars to include only a single repetition of the repeated sequence, while the addition of a sparsity regularizer on $\mathbf{\mathbf{H}}$ will bias the $\mathbf{\mathbf{W}}$ exemplars to include multiple repetitions of the repeated sequence. Like the ambiguities described above, these are both valid interpretations of the data, but each may be more useful in different contexts.
Quantifying the prevalence of sequential structure in a dataset
While sequences may be found in a variety of neural datasets, their importance and prevalence is still a matter of debate and investigation. To address this, we developed a metric to assess how much of the explanatory power of a seqNMF factorization is due to synchronous versus asynchronous neural firing events. Since convNMF can fit both synchronous and sequential events in a dataset, reconstruction error is not, by itself, diagnostic of the ‘sequenciness’ of neural activity. Our approach is guided by the observation that in a data matrix with only synchronous temporal structure (i.e. patterns of rank 1), the columns can be permuted without sacrificing convNMF reconstruction error. In contrast, permuting the columns eliminates the ability of convNMF to model data that contains sparse temporal sequences (i.e. high-rank patterns) but no synchronous structure. We thus compute a ‘sequenciness’ metric, ranging from 0 to 1, that compares the performance of convNMF on column-shuffled versus non-shuffled data matrices (see Materials and methods), and quantify the performance of this metric in simulated datasets containing synchronous and sequential events with varying prevalence (Figure 7C). We found that this metric varies approximately linearly with the degree to which sequences are present in a dataset. Below, we apply this method to real experimental data and obtain high ‘sequenciness’ scores, suggesting that convolutional matrix factorization is a well-suited tool for summarizing neural dynamics in these datasets.
Application of seqNMF to hippocampal sequences
To test the ability of seqNMF to discover patterns in electrophysiological data, we analyzed multielectrode recordings from rat hippocampus (https://crcns.org/datasets/hc), which were previously shown to contain sequential patterns of neural firing (Pastalkova et al., 2015). Specifically, rats were trained to alternate between left and right turns in a T-maze to earn a water reward. Between alternations, the rats ran on a running wheel during an imposed delay period lasting either 10 or 20 seconds. By averaging spiking activity during the delay period, the authors reported long temporal sequences of neural activity spanning the delay. In some rats, the same sequence occurred on left and right trials, while in other rats, a different sequence was active in the delay period for each trial type.
Without reference to the behavioral landmarks, seqNMF was able to extract sequences in both datasets. In Rat 1, seqNMF extracted a single factor, corresponding to a sequence active throughout the running wheel delay period and immediately after, when the rat ran up the stem of the maze (Figure 8A); for 10 fits of $K$ ranging from 1 to 10, the average diss metric reached a minimum at 1, and with $\lambda =2{\lambda}_{0}$, most runs using the x-ortho penalty extracted a single significant factor (Figure 8C–E). Factorizations of these data with one factor captured 40.8% of the power in the dataset on average, and had a ‘sequenciness’ score of 0.49. Some runs using the x-ortho penalty extracted two factors (Figure 8E), splitting the delay period sequence and the maze stem sequence; this is a reasonable interpretation of the data, and likely results from variability in the relative timing of running wheel and maze stem traversal. At somewhat lower values of $\lambda$, factorizations more often split these sequences into two factors. At even lower values of $\lambda$, factorizations produced additional significant factors. Such higher-granularity factorizations may correspond to real variants of the sequences, as they generalize to held-out data (Figure 5—figure supplement 2J), or may reflect time warping in the data. However, a single sequence may be a better description of the data, because the diss metric displayed a clear minimum at $K=1$ (Figure 8C). In Rat 2, seqNMF typically identified three factors (Figure 8B). The first two correspond to distinct sequences active for the duration of the delay period on alternating left and right trials. A third sequence was active immediately following each of the alternating sequences, corresponding to the time at which the animal exits the wheel and runs up the stem of the maze.
For 10 fits of $K$ ranging from 1 to 10, the average diss metric reached a minimum at three, and with $\lambda =1.5{\lambda}_{0}$, most runs with the x-ortho penalty extracted between two and four factors (Figure 8F–H). Factorizations of these data with three factors captured 52.6% of the power in the dataset on average, and had a ‘sequenciness’ score of 0.85. Taken together, these results suggest that seqNMF can detect multiple neural sequences without the use of behavioral landmarks.
Application of seqNMF to abnormal sequence development in avian motor cortex
We applied seqNMF methods to analyze functional calcium imaging data recorded in the songbird premotor cortical nucleus HVC during singing. Normal adult birds sing a highly stereotyped song, making it possible to detect sequences by averaging neural activity aligned to the song. Using this approach, it has been shown that HVC neurons generate precisely timed sequences that tile each song syllable (Hahnloser et al., 2002; Picardo et al., 2016; Lynch et al., 2016). Songbirds learn their song by imitation and must hear a tutor to develop normal adult vocalizations. Birds isolated from a tutor sing highly variable and abnormal songs as adults (Fehér et al., 2009). Such ‘isolate’ birds provide an opportunity to study how the absence of normal auditory experience leads to pathological vocal/motor development. However, the high variability of pathological ‘isolate’ song makes it difficult to identify neural sequences using the standard approach of aligning neural activity to vocal output.
Using seqNMF, we were able to identify repeating neural sequences in isolate songbirds (Figure 9A). At the chosen $\lambda$ (Figure 9B), x-ortho penalized factorizations typically extracted three significant sequences (Figure 9C). Similarly, the diss measure had a local minimum at $K=3$ (Figure 9—figure supplement 1B). The three-sequence factorization explained 41% of the total power in the dataset, with a sequenciness score of 0.7, and the extracted sequences included ones deployed during syllables of abnormally long and variable durations (Figure 9D–F, Figure 9—figure supplement 1A).
In addition, the extracted sequences exhibit properties not observed in normal adult birds. We see an example of two distinct sequences that sometimes, but not always, co-occur (Figure 9): a shorter sequence (green) occurs alone on some syllable renditions, while a second, longer sequence (purple) occurs simultaneously on other renditions. We found that biasing x-ortho penalized convNMF towards ‘parts-based’ or ‘events-based’ factorizations provides a useful way to visualize this feature of the data (Figure 9—figure supplement 2). This probabilistic overlap of different sequences is highly atypical in normal adult birds (Hahnloser et al., 2002; Long et al., 2010; Picardo et al., 2016; Lynch et al., 2016) and is associated with abnormal variations in syllable structure—in this case resulting in a longer variant of the syllable when both sequences co-occur. This acoustic variation is a characteristic pathology of isolate song (Fehér et al., 2009).
Thus, even though we observe HVC generating sequences in the absence of a tutor, it appears that these sequences are deployed in a highly abnormal fashion.
Application of seqNMF to a behavioral dataset: song spectrograms
Although we have focused on the application of seqNMF to neural activity data, these methods naturally extend to other types of high-dimensional datasets, including behavioral data with applications to neuroscience. The neural mechanisms underlying song production and learning in songbirds are an area of active research. However, identifying and labeling song syllables in acoustic recordings is challenging, particularly in young birds, whose song syllables are highly variable. Because automatic segmentation and clustering often fail, song syllables are still routinely labelled by hand (Okubo et al., 2015). We tested whether seqNMF, applied to a spectrographic representation of zebra finch vocalizations, is able to extract meaningful features in behavioral data. Using the x-ortho penalty, factorizations correctly identified repeated acoustic patterns in juvenile songs, placing each distinct syllable type into a different factor (Figure 10). The resulting classifications agree with previously published hand-labeled syllable types (Okubo et al., 2015). A similar approach could be applied to other behavioral data, for example movement data or human speech, and could facilitate the study of neural mechanisms underlying even earlier and more variable stages of learning. Indeed, convNMF was originally developed for application to spectrograms (Smaragdis, 2004); notably, it has been suggested that auditory cortex may use similar computations to represent and parse natural statistics (Młynarski and McDermott, 2018).
Discussion
As neuroscientists strive to record larger datasets, there is a need for rigorous tools to reveal underlying structure in high-dimensional data (Gao and Ganguli, 2015; Sejnowski et al., 2014; Churchland and Abbott, 2016; Bzdok and Yeo, 2017). In particular, sequential structure is increasingly regarded as a fundamental property of neuronal circuits (Hahnloser et al., 2002; Harvey et al., 2012; Okubo et al., 2015; Pastalkova et al., 2008), but standardized statistical approaches for extracting such structure have not been widely adopted or agreed upon. Extracting sequences is particularly challenging when animal behaviors are variable (e.g. during learning) or absent entirely (e.g. during sleep).
Here, we explored a simple matrix factorization-based approach to identify neural sequences without reference to animal behavior. The convNMF model elegantly captures sequential structure in an unsupervised manner (Smaragdis, 2004; Smaragdis, 2007; Peter et al., 2017). However, in datasets where the number of sequences is not known, convNMF may return inefficient and inconsistent factorizations. To address these challenges, we introduced a new regularization term to penalize correlated factorizations, and developed a new dissimilarity measure to assess model stability. Both proposed methods can be used to infer the number of sequences in neural data and are highly robust to noise. For example, even when (synthetic) neurons participate probabilistically in sequences at a rate of 50%, the model typically identifies factors with greater than 80% similarity to the ground truth (Figure 3A). Additionally, these methods perform well even with very limited amounts of data, for example, successfully extracting sequences that appear only a handful of times in a noisy data stream (Figure 3—figure supplement 2).
The x-ortho penalty developed in this paper may represent a useful improvement over traditional orthogonality regularizations, and suggests how such penalties may be usefully modified. First, it simultaneously provides a penalty on correlations in both $\mathbf{W}$ and $\mathbf{H}$, thus simplifying analyses by having only one penalty term. Second, although the x-ortho penalty does not formally constitute regularization due to its inclusion of the data $\mathbf{X}$, we have described how the penalty can be approximated by a data-free regularization with potentially useful properties (Appendix 2). Specifically, the data-free regularization contains terms corresponding to weighted orthogonality in (smoothed) $\mathbf{H}$ and $\mathbf{W}$, where the weights focus the orthogonality penalty preferentially on those factors that contribute the most power to the reconstruction. This concept of using power-weighted regularization penalties may be applicable more generally to matrix factorization techniques.
As in many data analysis scenarios, a variety of statistical approaches may be brought to bear on finding sequences in neural data. A classic method is to construct cross-correlogram plots, showing spike time correlations between pairs of neurons at various time lags. However, other forms of spike rate covariation, such as trial-to-trial gain modulation, can produce spurious peaks in this measure (Brody, 1999); recent work has developed statistical corrections for these effects (Russo and Durstewitz, 2017). After significant pairwise correlations are identified, one can heuristically piece together pairs of neurons with significant interactions into a sequence. This bottom-up approach may be better than seqNMF at detecting sequences involving small numbers of neurons, since seqNMF specifically targets sequences that explain large amounts of variance in the data. On the other hand, bottom-up approaches to sequence extraction may fail to identify long sequences with high participation noise or jitter in each neuron (Quaglio et al., 2018). One can think of seqNMF as a complementary top-down approach, which performs very well in the high-noise regime because it learns a template sequence at the level of the full population that is robust to noise at the level of individual units.
Statistical models with a dynamical component, such as Hidden Markov Models (HMMs) (Maboudi et al., 2018), linear dynamical systems (Kao et al., 2015), and models with switching dynamics (Linderman et al., 2017), can also capture sequential firing patterns. These methods will typically require many hidden states or latent dimensions to capture sequences, similar to PCA and NMF which require many components to recover sequences. Nevertheless, visualizing the transition matrix of an HMM can provide insight into the order in which hidden states of the model are visited, mapping onto different sequences that manifest in population activity (Maboudi et al., 2018). One advantage of this approach is that it can model sequences that occasionally end prematurely, while convNMF will always reconstruct the full sequence. On the other hand, this pattern completion property makes convNMF robust to participation noise and jitter. In contrast, a standard HMM must pass through each hidden state to model a sequence, and therefore may have trouble if many of these states are skipped. Thus, we expect HMMs and related models to exhibit complementary strengths and weaknesses when compared to convNMF.
Another strength of convNMF is its ability to accommodate sequences with shared neurons, as has been observed during song learning (Okubo et al., 2015). Sequences with shared neurons can be interpreted either in terms of ‘parts-based’ or ‘events-based’ factorizations (Figure 9—figure supplement 2). This capacity for a combinatorial description of overlapping sequences distinguishes convNMF from many other methods, which assume that neural patterns/sequences do not co-occur in time. For example, a vanilla HMM can only model each time step with a single hidden state and thus cannot express parts-based representations of neural sequences. Likewise, simple clustering models would assign each time interval to a single cluster label. Adding hierarchical and factorial structure to these models could allow them to test for overlapping neural sequences (see e.g. Ghahramani and Jordan, 1997); however, we believe seqNMF provides a simpler and more direct framework to explore this possibility.
Finally, as demonstrated by our development of new regularization terms and stability measures, convolutional matrix factorization is a flexible and extensible framework for sequence extraction. For example, one can tune the overall sparsity in the model by introducing additional L1 regularization terms. The loss function may also be modified, for example substituting in KL divergence or more general $\beta $divergence (Villasana et al., 2018). Both L1 regularization and $\beta $divergence losses are included in the seqNMF code package so that the model can be tuned to the particular needs of future analyses. Future development could incorporate outlier detection into the objective function (Netrapalli et al., 2014), or online optimization methods for large datasets (Wang et al., 2013). Other extensions to NMF, for example, Union of Intersections NMF Cluster (Ubaru et al., 2017), have yielded increased robustness and consistency of NMF factorizations, and could potentially also be modified for application to convNMF. Thus, adding convolutional structure to factorizationbased models of neural data represents a rich opportunity for statistical neuroscience.
Despite limiting ourselves to a relatively simple model for the purposes of this paper, we extracted biological insights that would have been difficult to otherwise achieve. For example, we identified neural sequences in isolated songbirds without aligning to song syllables, enabling new research into songbird learning on two fronts. First, since isolated and juvenile birds sing highly variable songs that are not easily segmented into stereotyped syllables, it is difficult and highly subjective to identify sequences by aligning to human-labeled syllables. SeqNMF enables the discovery and future characterization of neural sequences in these cases. Second, while behaviorally aligned sequences exist in tutored birds, it is possible that many neural sequences—for example, in different brain areas or stages of development—are not closely locked to song syllables. Thus, even in cases where stereotyped song syllables exist, behavioral alignment may overlook relevant sequences and structure in the data. These lessons apply broadly to many neural systems, and demonstrate the importance of general-purpose methods that extract sequences without reference to behavior. Our results show that convolutional matrix factorization models are an attractive framework to meet this need.
Materials and methods
Contact for resource sharing
Further requests should be directed to Michale Fee (fee@mit.edu).
Software and data availability
The seqNMF MATLAB code is publicly available as a GitHub repository, which also includes our songbird data (Figure 9) for demonstration (Mackevicius et al., 2018; copy archived at https://github.com/elifesciencespublications/seqNMF).
The repository includes the seqNMF function, as well as helper functions for selecting $\lambda$, testing the significance of factors, plotting, and other functions. It also includes a demo script with an example of how to select $\lambda$ for a new dataset, test for significance of factors, plot the seqNMF factorization, switch between parts-based and events-based factorizations, and calculate cross-validated performance on a masked test set.
Generating simulated data
We simulated neural datasets containing between 1 and 10 distinct neural sequences in the presence of various noise conditions. Each neural sequence was made up of 10 consecutively active neurons, each separated by three time bins. The binary activity matrix was convolved with an exponential kernel ($\tau =10$ time bins) to resemble neural calcium imaging activity.
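A minimal NumPy sketch of this kind of simulation (the event count, duration, and noise-free setting here are illustrative choices, not the paper's exact values):

```python
import numpy as np

def simulate_sequences(n_seq=3, neurons_per_seq=10, gap=3, tau=10.0,
                       T=3000, n_events_per_seq=20, seed=0):
    """Toy sequence data: each sequence is 10 consecutively active
    neurons separated by 3 time bins, convolved with an exponential
    kernel (tau = 10 bins) to mimic calcium imaging dynamics."""
    rng = np.random.default_rng(seed)
    N = n_seq * neurons_per_seq
    X = np.zeros((N, T))
    for s in range(n_seq):
        starts = rng.integers(0, T - neurons_per_seq * gap, n_events_per_seq)
        for t0 in starts:
            for j in range(neurons_per_seq):
                X[s * neurons_per_seq + j, t0 + j * gap] = 1.0
    # Exponential calcium kernel, applied along time for each neuron
    kernel = np.exp(-np.arange(int(5 * tau)) / tau)
    X = np.apply_along_axis(lambda r: np.convolve(r, kernel)[:T], 1, X)
    return X
```

Noise conditions (jitter, participation noise, additive noise, warping) would be applied on top of the binary event matrix before convolution.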
SeqNMF algorithm details
The x-ortho penalized convNMF algorithm is a direct extension of the multiplicative update convNMF algorithm (Smaragdis, 2004), and draws on previous work regularizing NMF to encourage factor orthogonality (Chen and Cichocki, 2004).
The uniqueness and consistency of traditional NMF has been better studied than convNMF. In special cases, NMF has a unique solution comprised of sparse, ‘partsbased’ features that can be consistently identified by known algorithms (Donoho and Stodden, 2004; Arora et al., 2011). However, this ideal scenario does not hold in many practical settings. In these cases, NMF is sensitive to initialization, resulting in potentially inconsistent features. This problem can be addressed by introducing additional constraints or regularization terms that encourage the model to extract particular, e.g. sparse or approximately orthogonal features (Huang et al., 2014; Kim and Park, 2008). Both theoretical work and empirical observations suggest that these modifications result in more consistently identified features (Theis et al., 2005; Kim and Park, 2008).
For x-ortho penalized seqNMF, we added to the convNMF cost function a term that promotes competition between overlapping factors, resulting in the following cost function:

${\parallel \mathbf{X}-\mathbf{W}⊛\mathbf{H}\parallel}_{F}^{2}+\lambda {\parallel (\mathbf{W}\stackrel{\top}{⊛}\mathbf{X})\mathbf{S}{\mathbf{H}}^{\top}\parallel}_{1,i\ne j}$
We derived the following multiplicative update rules for $\mathbf{W}$ and $\mathbf{H}$ (Appendix 1):

$\mathbf{H}\leftarrow \mathbf{H}\times \frac{\mathbf{W}\stackrel{\top}{⊛}\mathbf{X}}{\mathbf{W}\stackrel{\top}{⊛}\stackrel{~}{\mathbf{X}}+\lambda (\mathbf{1}-\mathbf{I})\left(\mathbf{W}\stackrel{\top}{⊛}\mathbf{X}\right)\mathbf{S}}$

${\mathbf{W}}_{\cdot \cdot \mathrm{\ell}}\leftarrow {\mathbf{W}}_{\cdot \cdot \mathrm{\ell}}\times \frac{\mathbf{X}{\left(\stackrel{\mathrm{\ell}\to}{\mathbf{H}}\right)}^{\top}}{\stackrel{~}{\mathbf{X}}{\left(\stackrel{\mathrm{\ell}\to}{\mathbf{H}}\right)}^{\top}+\lambda \mathbf{X}\mathbf{S}{\left(\stackrel{\mathrm{\ell}\to}{\mathbf{H}}\right)}^{\top}(\mathbf{1}-\mathbf{I})}$
where the division and $\times$ are element-wise, and $\stackrel{~}{\mathbf{X}}=\mathbf{W}⊛\mathbf{H}$. The operator $\stackrel{\mathrm{\ell}\to}{(\cdot )}$ shifts a matrix in the $\to$ direction by $\mathrm{\ell}$ time bins, that is, a delay by $\mathrm{\ell}$ time bins, and $\stackrel{\leftarrow \mathrm{\ell}}{(\cdot )}$ shifts a matrix in the $\leftarrow$ direction by $\mathrm{\ell}$ time bins (notation summary, Table 1). Note that multiplication with the $K\times K$ matrix $(\mathbf{1}-\mathbf{I})$ (the all-ones matrix minus the identity) effectively implements factor competition, because it places in the $k$th row a sum across all other factors. These update rules are derived in Appendix 1 by taking the derivative of the cost function in Equation 8 and choosing an appropriate learning rate for each element.
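For concreteness, these multiplicative updates can be sketched in NumPy (an illustrative re-implementation, not the released MATLAB toolbox; it omits the centering and renormalization steps of the full algorithm, and builds the smoothing matrix $\mathbf{S}$ densely for clarity):

```python
import numpy as np

def shift(H, l):
    """Delay the rows of H by l time bins (zero-padded)."""
    if l == 0:
        return H
    out = np.zeros_like(H)
    out[:, l:] = H[:, :-l]
    return out

def reconstruct(W, H):
    """Convolutional reconstruction X~ = W (*) H for W of shape (N, K, L)."""
    N, K, L = W.shape
    return sum(W[:, :, l] @ shift(H, l) for l in range(L))

def trans_conv(W, X):
    """(W ⊤(*) X): K x T overlap of each factor template with the data."""
    N, K, L = W.shape
    T = X.shape[1]
    out = np.zeros((K, T))
    for l in range(L):
        Xl = np.zeros_like(X)
        Xl[:, :T - l] = X[:, l:]          # advance X by l time bins
        out += W[:, :, l].T @ Xl
    return out

def seqnmf_step(X, W, H, lam, eps=1e-10):
    """One multiplicative x-ortho update of H, then W."""
    N, K, L = W.shape
    T = X.shape[1]
    offdiag = 1.0 - np.eye(K)             # the (1 - I) competition matrix
    t = np.arange(T)
    S = (np.abs(t[:, None] - t[None, :]) < L).astype(float)  # smoothing band

    WTX = trans_conv(W, X)
    WTXhat = trans_conv(W, reconstruct(W, H))
    H = H * WTX / (WTXhat + lam * offdiag @ (WTX @ S) + eps)

    Xhat = reconstruct(W, H)
    XS = X @ S
    for l in range(L):
        Hl = shift(H, l)
        W[:, :, l] = W[:, :, l] * (X @ Hl.T) / (
            Xhat @ Hl.T + lam * XS @ Hl.T @ offdiag + eps)
    return W, H
```

With `lam=0` this reduces to plain multiplicative convNMF; the dense `S` makes the penalty terms easy to read but a banded or convolutional implementation would be preferred for long recordings.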
In addition to the multiplicative updates outlined in Table 2, we also renormalize so that the rows of $\mathbf{H}$ have unit norm; shift factors to be centered in time, such that the center of mass of each $\mathbf{W}$ pattern occurs in the middle; and in the final iteration run one additional step of unregularized convNMF to prioritize the cost of reconstruction error over the regularization (Algorithm 1). This final step corrects a minor suppression in the amplitude of some peaks in $\mathbf{H}$ that may occur within $2L$ time bins of neighboring sequences.
Testing the significance of each factor on held-out data
In order to test whether a factor is significantly present in held-out data, we measured the distribution across time bins of the overlaps of the factor with the held-out data, and compared the skewness of this distribution to the null case (Figure 1). Overlap with the data is measured as $\mathbf{W}\stackrel{\top}{⊛}\mathbf{X}$, a quantity which will be high at timepoints when the sequence occurs, producing a distribution of $\mathbf{W}\stackrel{\top}{⊛}\mathbf{X}$ with high skew. In contrast, a distribution of overlaps exhibiting low skew indicates that a sequence is not present in the data, since there are few timepoints of particularly high overlap. We estimated what skew levels would appear by chance by constructing null factors in which the temporal relationships between neurons have been eliminated. To create such null factors, we start from the real factors, then circularly shift the timecourse of each neuron by a random amount between 0 and $L$. We measure the skew of the overlap distributions for each null factor, and ask whether the skew we measured for the real factor is significant at p-value $\alpha$, that is, whether it exceeds the Bonferroni-corrected ${\left(\left(1-\frac{\alpha}{K}\right)\times 100\right)}^{\text{th}}$ percentile of the null skews (see Figure 2—figure supplement 1).
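A NumPy sketch of this test (illustrative; `overlap_skew` and `factor_is_significant` are hypothetical helper names, and the skewness estimator is the simple moment-based one):

```python
import numpy as np

def overlap_skew(Wk, X):
    """Skewness over time bins of the overlap (W ⊤(*) X) of a single
    factor Wk (N x L) with data X (N x T)."""
    N, L = Wk.shape
    T = X.shape[1]
    ov = np.zeros(T)
    for l in range(L):
        Xl = np.zeros_like(X)
        Xl[:, :T - l] = X[:, l:]
        ov += Wk[:, l] @ Xl
    m, s = ov.mean(), ov.std()
    return ((ov - m) ** 3).mean() / (s ** 3 + 1e-12)

def factor_is_significant(Wk, X_test, alpha=0.05, K=1, n_null=200, seed=0):
    """Compare the skew of the real factor's overlaps to null factors
    in which each neuron's timecourse within W is circularly shifted
    by a random amount, destroying cross-neuron timing."""
    rng = np.random.default_rng(seed)
    L = Wk.shape[1]
    null_skews = np.empty(n_null)
    for i in range(n_null):
        Wnull = np.vstack([np.roll(row, rng.integers(0, L)) for row in Wk])
        null_skews[i] = overlap_skew(Wnull, X_test)
    # Bonferroni-corrected percentile threshold across K factors
    thresh = np.percentile(null_skews, (1 - alpha / K) * 100)
    return overlap_skew(Wk, X_test) > thresh
```

In practice the test is run on a held-out portion of the data, with `K` set to the number of candidate factors being tested.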
Algorithm 1: SeqNMF (x-ortho algorithm)

Input: Data matrix $\mathbf{X}$, number of factors $K$, factor duration $L$, regularization strength $\lambda$
Output: Factor exemplars $\mathbf{W}$, factor timecourses $\mathbf{H}$
1. Initialize $\mathbf{W}$ and $\mathbf{H}$ randomly
2. Iter = 1
3. while (Iter $<$ maxIter) and ($\mathrm{\Delta}$cost $>$ tolerance) do
4. &nbsp;&nbsp;Update $\mathbf{H}$ using the multiplicative update from Table 2
5. &nbsp;&nbsp;Shift $\mathbf{W}$ and $\mathbf{H}$ to center $\mathbf{W}$’s in time
6. &nbsp;&nbsp;Renormalize $\mathbf{W}$ and $\mathbf{H}$ so rows of $\mathbf{H}$ have unit norm
7. &nbsp;&nbsp;Update $\mathbf{W}$ using the multiplicative update from Table 2
8. &nbsp;&nbsp;Iter = Iter + 1
9. Do one final unregularized convNMF update of $\mathbf{W}$ and $\mathbf{H}$
10. return $\mathbf{W}$, $\mathbf{H}$
Note that if $\lambda $ is set too small, seqNMF will produce multiple redundant factors to explain one sequence in the data. In this case, each redundant candidate sequence will pass the significance test outlined here. We will address below a procedure for choosing $\lambda $ and methods for determining the number of sequences.
Calculating the percent power explained by a factorization
In assessing the relevance of sequences in a dataset, it can be useful to calculate what percentage of the total power in the dataset is explained by the factorization ($\stackrel{~}{\mathbf{X}}$). The total power in the data is $\sum {\mathbf{X}}^{2}$ (abbreviating ${\sum}_{n=1}^{N}{\sum}_{t=1}^{T}{x}_{nt}^{2}$ to $\sum {X}^{2}$). The power unexplained by the factorization is $\sum (\mathbf{X}-\stackrel{~}{\mathbf{X}}{)}^{2}$. Thus, the percent of the total power explained by the factorization is:

$\frac{\sum {\mathbf{X}}^{2}-\sum (\mathbf{X}-\stackrel{~}{\mathbf{X}}{)}^{2}}{\sum {\mathbf{X}}^{2}}\times 100$
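In code, this quantity is a one-liner (NumPy sketch):

```python
import numpy as np

def percent_power_explained(X, Xhat):
    """Percent of total power in X explained by reconstruction Xhat:
    100 * (sum(X^2) - sum((X - Xhat)^2)) / sum(X^2)."""
    total = np.sum(X ** 2)
    unexplained = np.sum((X - Xhat) ** 2)
    return 100.0 * (total - unexplained) / total
```

Note this can be negative if the reconstruction is worse than predicting all zeros.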
‘Sequenciness’ score
The ‘sequenciness’ score was developed to distinguish between datasets with exclusively synchronous patterns and datasets with temporally extended sequential patterns. This score relies on the observation that synchronous patterns are not disrupted by shuffling the columns of the data matrix. The ‘sequenciness’ score is calculated by first computing the difference between the power explained by seqNMF in the actual and column-shuffled data. This quantity is then divided by the power explained in the actual data minus the power explained in data where each neuron’s timecourse is shuffled by a different random permutation.
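A sketch of the score, assuming a user-supplied `explained_power` routine that fits convNMF to a matrix and returns the percent power explained (a hypothetical hook; the fitting itself is omitted here):

```python
import numpy as np

def sequenciness(X, explained_power, seed=0):
    """'Sequenciness' = (P_actual - P_colshuffle) / (P_actual - P_rowshuffle).

    explained_power: function mapping a data matrix to the percent
    power explained by a convNMF fit of it.
    """
    rng = np.random.default_rng(seed)
    N, T = X.shape
    p_actual = explained_power(X)
    # Column shuffle: destroys sequences but preserves synchrony
    X_col = X[:, rng.permutation(T)]
    p_col = explained_power(X_col)
    # Shuffle each neuron's timecourse independently: destroys both
    X_row = np.vstack([row[rng.permutation(T)] for row in X])
    p_row = explained_power(X_row)
    return (p_actual - p_col) / (p_actual - p_row + 1e-12)
```

A score near 0 indicates purely synchronous structure (column shuffling costs nothing), while a score near 1 indicates that most explainable structure is sequential.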
Choosing appropriate parameters for a new dataset
The choice of appropriate parameters ($\lambda $, $K$ and $L$) will depend on the data type (sequence length, number, and density; amount of noise; etc.).
In practice, we found that results were relatively robust to the choice of parameters. When $K$ or $L$ is set larger than necessary, seqNMF tends to simply leave the unnecessary factors or times empty. For choosing $\lambda $, the goal is to find the ‘sweet spot’ (Figure 4) to explain as much data as possible while still producing sensible factorizations, that is, minimally correlated factors, with low values of $\|(\mathbf{W}\overset{\top}{\circledast}\mathbf{X}){\mathbf{S}\mathbf{H}}^{\top}\|_{1,i\ne j}$. Our software package includes demo code for determining the best parameters for a new type of data, using the following strategy:
Start with $K$ slightly larger than the number of sequences anticipated in the data
Start with $L$ slightly longer than the maximum expected factor length
Run seqNMF for a range of $\lambda $’s, and for each $\lambda $ measure the reconstruction error $\left(\|\mathbf{X}-\mathbf{W}\circledast\mathbf{H}\|_{F}^{2}\right)$ and the correlation cost term $\left(\|(\mathbf{W}\overset{\top}{\circledast}\mathbf{X}){\mathbf{S}\mathbf{H}}^{\top}\|_{1,i\ne j}\right)$
Choose a $\lambda $ slightly above the crossover point ${\lambda}_{0}$
Decrease $K$ if desired, as otherwise some factors will be consistently empty
Decrease $L$ if desired, as otherwise some times will consistently be empty
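The two quantities measured during the $\lambda$ sweep can be written down directly. Below is a NumPy sketch under assumed conventions (hypothetical function names; $\mathbf{W}$ stored as $N\times K\times L$ with lag index 0 to $L-1$; $\mathbf{S}$ taken as a $T\times T$ smoothing matrix of ones within $\pm(L-1)$ bins of the diagonal):

```python
import numpy as np

def recon_error(X, W, H):
    """Squared Frobenius reconstruction error ||X - W conv H||_F^2."""
    N, K, L = W.shape
    T = H.shape[1]
    Xhat = np.zeros((N, T))
    for l in range(L):
        Xhat[:, l:] += W[:, :, l] @ H[:, :T - l]  # W at lag l times H delayed by l
    return float(np.sum((X - Xhat) ** 2))

def xortho_cost(X, W, H):
    """Off-diagonal L1 norm of (W transpose-conv X) S H^T, the correlation cost."""
    N, K, L = W.shape
    T = X.shape[1]
    WTX = np.zeros((K, T))
    for l in range(L):
        WTX[:, :T - l] += W[:, :, l].T @ X[:, l:]  # correlate each factor's W with the data
    t = np.arange(T)
    S = (np.abs(t[:, None] - t[None, :]) < L).astype(float)  # smoothing matrix
    C = WTX @ S @ H.T
    return float(C.sum() - np.trace(C))  # entries are nonnegative, so this is the off-diagonal L1 norm
```

Two factors living on disjoint neurons and well-separated times give zero correlation cost, while two redundant copies of the same factor give a strictly positive cost.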
In some applications, achieving the desired accuracy may depend on choosing a $\lambda $ that allows some inconsistency. It is possible to deal with this remaining inconsistency by comparing factors produced by different random initializations, and only considering factors that arise from several different initializations, a strategy that has been previously applied to standard convNMF on neural data (Peter et al., 2017).
During validation of our procedure for choosing $\lambda $, we compared factorizations to ground-truth sequences as shown in Figure 4. To find the optimal value of $\lambda $, we used the product of two curves. The first curve was obtained by calculating the fraction of fits in which the true number of sequences was recovered, as a function of $\lambda $. The second curve was obtained by calculating similarity to ground truth as a function of $\lambda $ (see Materials and methods section ‘Measuring performance on noisy data by comparing seqNMF sequences to ground-truth sequences’). The product of these two curves was smoothed using a three-sample boxcar sliding window, and the width was found as the values of $\lambda $ on either side of the peak value that correspond most closely to the half-maximum points of the curve.
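This selection step can be sketched as follows (a hypothetical helper under our own name; the inputs are the grid of $\lambda$ values and the two measured curves):

```python
import numpy as np

def lambda_sweet_spot(lambdas, frac_correct_k, similarity):
    """Pick the lambda maximizing (fraction of fits recovering the true number
    of sequences) x (similarity to ground truth), after smoothing the product
    with a three-sample boxcar; also return the lambdas closest to the
    half-maximum points on either side of the peak."""
    product = np.asarray(frac_correct_k) * np.asarray(similarity)
    smoothed = np.convolve(product, np.ones(3) / 3, mode="same")
    peak = int(np.argmax(smoothed))
    half = smoothed[peak] / 2
    left = int(np.argmin(np.abs(smoothed[:peak + 1] - half)))
    right = peak + int(np.argmin(np.abs(smoothed[peak:] - half)))
    return lambdas[peak], (lambdas[left], lambdas[right])
```

The returned width brackets the peak, giving a range of acceptable $\lambda$ values rather than a single point estimate.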
Preprocessing
While seqNMF is generally quite robust to noisy data and different types of sequential patterns, proper preprocessing of the data can be important for obtaining reasonable factorizations on real neural data. A key principle is that, in minimizing the reconstruction error, seqNMF is most strongly influenced by parts of the data that exhibit high variance. This can be problematic if the regions of interest in the data have relatively low amplitude. For example, high firing rate neurons may be prioritized over those with lower firing rates. As an alternative to subtracting the mean firing rate of each neuron, which would introduce negative values, neurons could be normalized divisively, or by subtracting off an NMF reconstruction fit using a method that forces a nonnegative residual (Kim and Smaragdis, 2014). Additionally, variations in behavioral state may lead to seqNMF factorizations that prioritize regions of the data with high variance and neglect other regions. It may be possible to mitigate these effects by normalizing the data, or by restricting analysis to particular subsets of the data, either by time or by neuron.
Measuring performance on noisy data by comparing seqNMF sequences to ground-truth sequences
We wanted to measure the ability of seqNMF to recover ground-truth sequences even when the sequences are obstructed by noise. Our noisy data consisted of three ground-truth sequences, obstructed by a variety of noise types. For each ground-truth sequence, we found its best match among the seqNMF factors. This was performed in a greedy manner. Specifically, we first computed a reconstruction for one of the ground-truth factors. We then measured the correlation between this reconstruction and reconstructions generated from each of the extracted factors, and chose the best match (highest correlation). Next, we matched a second ground-truth sequence with its best match (highest correlation between reconstructions) among the remaining seqNMF factors, and finally we found the best match for the third ground-truth sequence. The mean of these three correlations was used as a measure of similarity between the seqNMF factorization and the ground-truth (noiseless) sequences.
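The greedy matching step can be sketched as follows (a hypothetical helper; each element of the two input lists is a single-factor reconstruction matrix):

```python
import numpy as np

def greedy_match(truth_recons, factor_recons):
    """Greedily match each ground-truth reconstruction to its best-correlated
    factor reconstruction (without replacement); return the mean of the
    matched correlations."""
    remaining = list(range(len(factor_recons)))
    corrs = []
    for Xt in truth_recons:
        # Pearson correlation between flattened reconstruction matrices
        best = max(remaining,
                   key=lambda k: np.corrcoef(Xt.ravel(), factor_recons[k].ravel())[0, 1])
        corrs.append(np.corrcoef(Xt.ravel(), factor_recons[best].ravel())[0, 1])
        remaining.remove(best)
    return float(np.mean(corrs))
```

Because matching is without replacement, a single seqNMF factor cannot be credited with recovering more than one ground-truth sequence.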
Testing generalization of factorization to randomly held-out (masked) data entries
The data matrix $\mathbf{X}$ was divided into training data and test data by randomly selecting 5 or 10% of matrix entries to hold out. Specifically, the objective function (Equation 5, in the Results section) was modified to:
where $\times $ indicates elementwise multiplication (Hadamard product) and $\mathbf{M}$ is a binary matrix with 5 or 10% of the entries randomly selected to be zero (held-out test set) and the remaining 95 or 90% set to one (training set). To search for a solution, we reformulate this optimization problem as:
where we have introduced a new optimization variable $\mathbf{Z}$, which can be thought of as a surrogate dataset that is equal to the ground-truth data only on the training set. The goal is now to minimize the difference between the model estimate, $\stackrel{~}{\mathbf{X}}=\mathbf{W}\circledast\mathbf{H}$, and the surrogate, $\mathbf{Z}$, while constraining $\mathbf{Z}$ to equal $\mathbf{X}$ at unmasked elements (where ${m}_{ij}=1$) and allowing $\mathbf{Z}$ to be freely chosen at masked elements (where ${m}_{ij}=0$). Clearly, at masked elements, the best choice is to make $\mathbf{Z}$ equal to the current model estimate $\stackrel{~}{\mathbf{X}}$, as this minimizes the cost function without violating the constraint. This leads to the following update rules, which are applied cyclically to update $\mathbf{Z}$, $\mathbf{W}$, and $\mathbf{H}$.
The measure used for testing generalization performance was root-mean-squared error (RMSE). For the testing phase, RMSE was computed from the difference between $\stackrel{~}{\mathbf{X}}$ and the data matrix $\mathbf{X}$ only for held-out entries.
Hippocampus data
The hippocampal data were collected in the Buzsaki lab (Pastalkova et al., 2015; Mizuseki et al., 2013), and are publicly available on the Collaborative Research in Computational Neuroscience (CRCNS) data sharing website. The dataset we refer to as ‘Rat 1’ is in the hc5 dataset, and the dataset we refer to as ‘Rat 2’ is in the hc3 dataset. Before running seqNMF, we processed the data by convolving the raw spike trains with a Gaussian kernel of standard deviation 100 ms.
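The smoothing step can be sketched as follows (a hypothetical helper; the bin size is an assumption of this sketch, and the kernel is truncated at four standard deviations):

```python
import numpy as np

def smooth_spikes(spikes, bin_ms, sigma_ms=100.0):
    """Convolve each neuron's binned spike train with a Gaussian kernel of
    standard deviation sigma_ms (here 100 ms, as used before running seqNMF)."""
    sigma_bins = sigma_ms / bin_ms
    half = int(4 * sigma_bins)                       # truncate at +/- 4 sigma
    t = np.arange(-half, half + 1)
    kernel = np.exp(-t ** 2 / (2 * sigma_bins ** 2))
    kernel /= kernel.sum()                           # unit area: preserves spike count
    return np.array([np.convolve(row, kernel, mode="same") for row in spikes])
```

Normalizing the kernel to unit area means each spike contributes the same total mass to the smoothed trace regardless of the bin size.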
Animal care and use
We used male zebra finches (Taeniopygia guttata) from the MIT zebra finch breeding facility (Cambridge, MA). Animal care and experiments were carried out in accordance with NIH guidelines, and reviewed and approved by the Massachusetts Institute of Technology Committee on Animal Care (protocol 071507118).
In order to prevent exposure to a tutor song, birds were foster-raised by female birds, which do not sing, starting on or before post-hatch day 15. For experiments, birds were housed singly in custom-made sound isolation chambers.
Data acquisition and preprocessing
The calcium indicator GCaMP6f was expressed in HVC by intracranial injection of the viral vector AAV9.CAG.GCaMP6f.WPRE.SV40 (Chen et al., 2013). In the same surgery, a cranial window was made using a GRIN (gradient index) lens (1 mm diameter, 4 mm length, Inscopix). After at least one week, to allow for sufficient viral expression, recordings were made using the Inscopix nVista miniature fluorescent microscope.
Neuronal activity traces were extracted from raw fluorescence movies using the CNMF_E algorithm, a constrained nonnegative matrix factorization algorithm specialized for microendoscope data by including a local background model to remove activity from out-of-focus cells (Zhou et al., 2018).
We performed several preprocessing steps before applying seqNMF to functional calcium traces extracted by CNMF_E. First, we estimated burst times from the raw traces by deconvolving the traces using an AR-2 process. The deconvolution parameters (time constants and noise floor) were estimated for each neuron using the CNMF_E code package (Zhou et al., 2018). Some neurons exhibited larger peaks than others, likely due to different expression levels of the calcium indicator. Since seqNMF would prioritize the neurons with the most power, we renormalized by dividing the signal from each neuron by the sum of the maximum value of that row and the ${95}^{th}$ percentile of the signal across all neurons. In this way, neurons with larger peaks were given some priority, but not much more than neurons with weaker signals.
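The renormalization described above can be sketched in one function (hypothetical name; input rows are the deconvolved traces):

```python
import numpy as np

def renormalize_traces(D):
    """Divide each neuron's trace by (that row's maximum + the 95th percentile
    of the signal across all neurons), compressing amplitude differences
    without eliminating them."""
    p95 = np.percentile(D, 95)  # one global value shared by all rows
    return D / (D.max(axis=1, keepdims=True) + p95)
```

The shared 95th-percentile term acts as a soft floor in the denominator: a neuron with a 10x larger peak still ends up larger after normalization, but by much less than 10x.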
Appendix 1
Deriving multiplicative update rules
Standard gradient descent methods for minimizing a cost function must be adapted when solutions are constrained to be nonnegative, since gradient descent steps may result in negative values. Lee and Seung invented an elegant and widely used algorithm for nonnegative gradient descent that avoids negative values by performing multiplicative updates (Lee and Seung, 2001; Lee and Seung, 1999). They derived these multiplicative updates by choosing an adaptive learning rate that makes additive terms cancel from standard gradient descent on the cost function. We will reproduce their derivation here, and detail how to extend it to the convolutional case (Smaragdis, 2004) and apply several forms of regularization (O’Grady and Pearlmutter, 2006; Ramanarayanan et al., 2013; Chen and Cichocki, 2004). See Table 2 for a compilation of cost functions, derivatives and multiplicative updates for NMF and convNMF under several different regularization conditions.
Standard NMF
NMF performs the factorization $\mathbf{\mathbf{X}}\approx \stackrel{~}{\mathbf{\mathbf{X}}}=\mathbf{\mathbf{W}\mathbf{H}}$. NMF factorizations seek to solve the following problem:
This problem is convex in $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$ separately, not together, so a local minimum is found by alternating $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$ updates. Note that:
Thus, gradient descent steps for $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$ are:
To arrive at multiplicative updates, Lee and Seung (Lee and Seung, 2001) set:
Thus, the gradient descent updates become multiplicative:
where the division and $\times $ are elementwise.
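In NumPy, the resulting alternating multiplicative updates for standard NMF take only a few lines (a minimal sketch; the small `eps` guarding against division by zero is our own addition):

```python
import numpy as np

def nmf(X, K, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    N, T = X.shape
    W = rng.random((N, K))
    H = rng.random((K, T))
    eps = 1e-12  # avoid division by zero; not part of the original derivation
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # H update: numerator/denominator elementwise
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # W update
    return W, H
```

Because updates are multiplicative and all quantities start nonnegative, $\mathbf{W}$ and $\mathbf{H}$ remain nonnegative at every iteration without any explicit projection step.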
Standard convNMF
Convolutional NMF factorizes data $\mathbf{X}\approx \stackrel{~}{\mathbf{X}}={\sum}_{\mathrm{\ell}}{\mathbf{W}}_{\cdot \cdot \mathrm{\ell}}\stackrel{\mathrm{\ell}\to}{\mathbf{H}}=\mathbf{W}\circledast\mathbf{H}$. ConvNMF factorizations seek to solve the following problem:
The derivation above for standard NMF can be applied for each $\mathrm{\ell}$, yielding the following update rules for convNMF (Smaragdis, 2004):
where the operator $\mathrm{\ell}\to $ shifts a matrix in the $\to $ direction by $\mathrm{\ell}$ timebins, that is, a delay by $\mathrm{\ell}$ timebins, and $\leftarrow \mathrm{\ell}$ shifts a matrix in the $\leftarrow $ direction by $\mathrm{\ell}$ timebins (Table 1). Note that NMF is a special case of convNMF where $L=1$.
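These update rules can likewise be sketched in NumPy (a hypothetical illustration under the same conventions as above: $\mathbf{W}$ is $N\times K\times L$ with lag index 0 to $L-1$; the `shift` helper implements the $\ell\to$ and $\leftarrow\ell$ operators of Table 1 with zero padding):

```python
import numpy as np

def shift(M, l):
    """Shift columns right by l bins (delay) if l > 0, left (advance) if l < 0."""
    out = np.zeros_like(M)
    if l > 0:
        out[:, l:] = M[:, :-l]
    elif l < 0:
        out[:, :l] = M[:, -l:]
    else:
        out = M.copy()
    return out

def convnmf(X, K, L, n_iter=300, seed=0):
    """Smaragdis-style multiplicative updates for X ~ sum_l W[:,:,l] @ shift(H, l)."""
    rng = np.random.default_rng(seed)
    N, T = X.shape
    W = rng.random((N, K, L))
    H = rng.random((K, T))
    eps = 1e-12  # numerical guard, our own addition
    for _ in range(n_iter):
        Xhat = sum(W[:, :, l] @ shift(H, l) for l in range(L))
        num = sum(W[:, :, l].T @ shift(X, -l) for l in range(L))
        den = sum(W[:, :, l].T @ shift(Xhat, -l) for l in range(L)) + eps
        H *= num / den
        Xhat = sum(W[:, :, l] @ shift(H, l) for l in range(L))
        for l in range(L):
            W[:, :, l] *= (X @ shift(H, l).T) / (Xhat @ shift(H, l).T + eps)
    return W, H
```

On clean data generated by a single sparse sequence, these updates drive the reconstruction error close to zero.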
Incorporating regularization terms
Suppose we want to regularize by adding a new term, $\mathcal{R}$, to the cost function:
Using a trick similar to that of Lee and Seung, we choose ${\eta}_{\mathbf{W}}$ and ${\eta}_{\mathbf{H}}$ to arrive at a simple multiplicative update. Below is the standard NMF case, which generalizes trivially to the convNMF case.
Note that:
We set:
Thus, the gradient descent updates become multiplicative:
where the division and $\times $ are elementwise.
The above formulation enables flexible incorporation of different types of regularization or penalty terms into the multiplicative NMF update algorithm. This framework also extends naturally to the convolutional case. See Table 2 for examples of several regularization terms, including $L1$ sparsity (O’Grady and Pearlmutter, 2006; Ramanarayanan et al., 2013) and spatial decorrelation (Chen and Cichocki, 2004), as well as the terms we introduce here to combat the types of inefficiencies and cross-correlations we identified in convolutional NMF, namely, smoothed orthogonality for $\mathbf{H}$ and $\mathbf{W}$, and the x-ortho penalty term. For the x-ortho penalty term, $\lambda \|(\mathbf{W}\overset{\top}{\circledast}\mathbf{X}){\mathbf{S}\mathbf{H}}^{\top}\|_{1,i\ne j}$, the multiplicative update rules are:
where the division and $\times $ are elementwise. Note that multiplication with the $K\times K$ matrix $(\mathbf{1}-\mathbf{I})$, where $\mathbf{1}$ is the matrix of all ones and $\mathbf{I}$ is the identity, effectively implements factor competition because it places in the $k$th row a sum across all other factors.
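The effect of multiplying by $(\mathbf{1}-\mathbf{I})$ is easy to see on a small example (an illustrative NumPy check, not part of the derivation):

```python
import numpy as np

K = 3
ones = np.ones((K, K))
I = np.eye(K)
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])
# (1 - I) @ A places in row k the sum of A's rows over all factors except k:
# row 0 becomes rows 1+2, row 1 becomes rows 0+2, row 2 becomes rows 0+1.
B = (ones - I) @ A
```

Each factor's update is thus penalized by the combined activity of all the other factors, which is exactly the competition the x-ortho term is designed to create.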
Appendix 2
Relation of the x-ortho penalty to traditional regularizations
As noted in the main text, the x-ortho penalty term is not formally a regularization because it includes the data $\mathbf{X}$. In this Appendix, we show how this penalty can be approximated by a data-free regularization. The resulting regularization contains three terms corresponding to a weighted orthogonality penalty on pairs of $\mathbf{H}$ factors, a weighted orthogonality penalty on pairs of $\mathbf{W}$ factors, and a term that penalizes interactions among triplets of factors. We analyze each term in both the time domain (Equation 50) and in the frequency domain (Equations 50 and 69).
Time domain analysis
We consider the cross-orthogonality penalty term:
and define, $\mathbf{\mathbf{R}}=(\mathbf{\mathbf{W}}\stackrel{\top}{\u229b}\mathbf{\mathbf{X}}){\mathbf{\mathbf{S}\mathbf{H}}}^{\top}$, which is a $K\times K$ matrix. Each element ${\mathbf{\mathbf{R}}}_{ij}$ is a positive number describing the overlap or correlation between factor $i$ and factor $j$ in the model. Each element of $\mathbf{\mathbf{R}}$ can be written explicitly as:
where the index variables $t$ and $\tau $ range from $1$ to $T$, $n$ ranges from $1$ to $N$, and $\mathrm{\ell}$ ranges from $1$ to $L$.
Our goal here is to find a close approximation to this penalty term that does not contain the data $\mathbf{\mathbf{X}}$. This can readily be done if $\mathbf{\mathbf{X}}$ is wellapproximated by the convNMF decomposition:
Substituting this expression into Equation 45 and defining the smoothed matrix ${\sum}_{\tau}{\mathbf{\mathbf{S}}}_{t\tau}{\mathbf{\mathbf{H}}}_{j\tau}$ as ${\mathbf{\mathbf{H}}}_{jt}^{\text{smooth}}$ gives:
Making the substitution $u=\mathrm{\ell}-{\mathrm{\ell}}^{\prime}$ gives:
where in the above expression we have taken $u=\mathrm{\ell}-{\mathrm{\ell}}^{\prime}$ to extend over the full range from $-(L-1)$ to $(L-1)$ under the implicit assumption that $\mathbf{W}$ and $\mathbf{H}$ are zero-padded, such that values of $\mathbf{W}$ for lag indices outside the range $0$ to $L-1$ and values of $\mathbf{H}$ for time indices outside the range $1$ to $T$ are taken to be zero.
Relabeling ${\mathrm{\ell}}^{\prime}$ as $\mathrm{\ell}$ and gathering terms together yields
We note that the above expression contains terms that resemble penalties on orthogonality between two $\mathbf{\mathbf{W}}$ factors (first parenthetical) or two $\mathbf{\mathbf{H}}$ factors (one of which is smoothed, second parenthetical), but in this case allowing for different time lags $u$ between the factors. To understand this formula better, we decompose the above sum over $k$ into three contributions corresponding to $k=i$, $k=j$, and $k\ne i,j$
The first term above contains, for $u=0$, a simple extension of the ${(i,j)}^{th}$ element of the $\mathbf{H}$ orthogonality condition ${\mathbf{H}\mathbf{S}\mathbf{H}}^{\top}$. The extension is that the orthogonality is weighted by the power, that is the sum of squared elements, in the ${i}^{th}$ factor of $\mathbf{W}$ (the apparent lack of symmetry in weighing by the ${i}^{th}$ rather than the ${j}^{th}$ factor can be removed by simultaneously considering the term ${R}_{ji}$, as shown in the Fourier representation of the following section). This weighting has the benefit of applying the penalty on $\mathbf{H}$ orthogonality most strongly to those factors whose corresponding $\mathbf{W}$ components contain the most power. For $u\ne 0$, this orthogonality condition is extended to allow for overlap of time-shifted $\mathbf{H}$’s, with weighting at each time shift by the autocorrelation of the corresponding $\mathbf{W}$ factor. Qualitatively, this enforces that (even in the absence of the smoothing matrix $\mathbf{S}$), $\mathbf{H}$’s that are offset by less than the width of the autocorrelation of the corresponding $\mathbf{W}$’s will have overlapping convolutions with these $\mathbf{W}$’s due to the temporal smoothing associated with the convolution operation. We note that, for sparse sequences as in the examples of Figure 1, there is no time-lagged component to the autocorrelation, so this term corresponds simply to a smoothed $\mathbf{H}$ orthogonality regularization, weighted by the strength of the corresponding $\mathbf{W}$ factors.
The second term above represents a complementary orthogonality condition on the $\mathbf{W}$ components, in which orthogonality in the ${i}^{th}$ and ${j}^{th}$ $\mathbf{W}$ factors is weighted by the (smoothed) autocorrelation of the $\mathbf{H}$ factors. For the case in which the $\mathbf{H}$ factors have no time-lagged autocorrelations, this corresponds to a simple weighting of $\mathbf{W}$ orthogonality by the strength of the corresponding $\mathbf{H}$ factors.
Finally, we consider the remaining terms of the cost function, for which $k\ne i,j$. We note that these terms are only relevant when the factorization contains at least three factors, and thus their role cannot be visualized from the simple Type 1 to Type 3 examples of Figure 1. These terms have the form:
To understand how this term contributes, we consider each of the expressions in parentheses. The first expression corresponds, as described above, to the time-lagged cross-correlation of the ${i}^{th}$ and ${k}^{th}$ $\mathbf{W}$ components. Likewise, the second expression corresponds to the time-lagged correlation of the (smoothed) ${j}^{th}$ and ${k}^{th}$ $\mathbf{H}$ components. Thus, this term of ${\mathbf{R}}_{ij}$ contributes whenever there is a factor (${\mathbf{W}}_{\cdot k\cdot},{\mathbf{H}}_{k\cdot}$) that overlaps, at the same time lags, with the ${i}^{th}$ factor’s $\mathbf{W}$ component and the ${j}^{th}$ factor’s $\mathbf{H}$ component. Thus, this term penalizes cases where, rather than (or in addition to) two factors $i$ and $j$ directly overlapping one another, they have a common factor $k$ with which they overlap.
An example of the contribution of a triplet penalty term, as well as of the paired terms of Equation 50, is shown in Figure 1 of this Appendix. By inspection, there is a penalty ${\mathbf{R}}_{23}$ due to the overlapping values of the pair (${\mathbf{h}}_{2},{\mathbf{h}}_{3}$). Likewise, there is a penalty ${\mathbf{R}}_{13}$ due to the overlapping values of the pair (${\mathbf{w}}_{1},{\mathbf{w}}_{3}$). The triplet penalty term contributes to ${\mathbf{R}}_{12}$ and derives from the fact that ${\mathbf{w}}_{1}$ overlaps with ${\mathbf{w}}_{3}$ at the same time (and with the same, zero time lag) as ${\mathbf{h}}_{2}$ overlaps with ${\mathbf{h}}_{3}$.
In summary, the above analysis shows that for good reconstructions of the data, where $\mathbf{X}\approx \mathbf{W}\circledast\mathbf{H}$, the x-ortho penalty can be well-approximated by the sum of three contributions. The first corresponds to a penalty on time-lagged (smoothed) $\mathbf{H}$ orthogonality, weighted at each time lag by the autocorrelation of the corresponding $\mathbf{W}$ factors. The second similarly corresponds to a penalty on time-lagged $\mathbf{W}$ orthogonality, weighted at each time lag by the (smoothed) autocorrelation of the corresponding $\mathbf{H}$ factors. For simple cases of sparse sequences, these contributions reduce to orthogonality in $\mathbf{H}$ or $\mathbf{W}$, weighted by the power in the corresponding $\mathbf{W}$ or $\mathbf{H}$, respectively, thus focusing the penalties most heavily on those factors which contribute most heavily to the data reconstruction. The third, triplet contribution corresponds to the case in which a factor in $\mathbf{W}$ and a different factor in $\mathbf{H}$ both overlap (at the same time lag) with a third common factor, and may occur even when the factors $\mathbf{W}$ and $\mathbf{H}$ themselves are orthogonal. Further work is needed to determine whether this third contribution is critical to the x-ortho penalty or is simply a byproduct of the x-ortho penalty procedure’s direct use of the data $\mathbf{X}$.
Frequency domain analysis
Additional insight may be obtained by analyzing these three components of $\mathbf{R}$ in the Fourier domain. Before doing so, we derive below the Fourier domain representation of $\mathbf{R}$ and provide insights suggested by this perspective.
Fourier representation of the x-ortho penalty
As in the time domain analysis, we start with defining:
Unpacking the notation above, we note that:
where ${\overrightarrow{\mathbf{W}}}_{i}^{(n)}$ is the ${n}^{th}$ row of ${\mathbf{W}}_{\cdot i\cdot}$, ${\overrightarrow{\mathbf{X}}}^{(n)}$ is the ${n}^{th}$ row of $\mathbf{X}$, ${\overrightarrow{\mathbf{H}}}_{j}$ is ${\mathbf{H}}_{j\cdot}$, $\overrightarrow{\mathbf{s}}$ is a smoothing vector corresponding to the entries of each row of the smoothing matrix $\mathbf{S}$, and “$\cdot $” is a dot product. For ease of mathematical presentation, in the following we work with continuous time rather than the discretely sampled data, and extend the $\mathbf{W}$ factors, $\mathbf{H}$ factors, and data matrix $\mathbf{X}$ through zero-padding on both ends so that:
and
Recall that the Fourier transform is defined as:
with inverse Fourier transform:
Now recall some basic features of Fourier transforms of correlation and convolution integrals:
This final identity, known as Parseval’s theorem, says that the inner product (dot product) between two functions evaluated in the time and frequency domain are equivalent up to a proportionality constant of $1/(2\pi )$. With the above identities, we can calculate our quantity of interest:
First, define:
From Equation 60 (Parseval’s theorem):
Then, from Equations 58 and 59, we have:
The above formula shows that:
Viewed in the frequency domain, the x-ortho penalty reduces to a (sum over neurons and frequencies of a) simple product of Fourier transforms of the four matrices involved in the penalty.
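The Parseval identity underlying this reduction is easy to verify numerically; in the discrete (DFT) setting it states that the inner product of two real signals equals $1/T$ times the inner product of their DFTs (a NumPy sanity check, not part of the derivation):

```python
import numpy as np

# Two arbitrary real signals of length T.
rng = np.random.default_rng(2)
T = 64
a = rng.random(T)
b = rng.random(T)

# Inner product in the time domain.
time_dot = float(np.dot(a, b))

# Inner product in the frequency domain: (1/T) * sum over frequencies of
# conj(fft(a)) * fft(b); np.vdot conjugates its first argument.
freq_dot = float(np.real(np.vdot(np.fft.fft(a), np.fft.fft(b))) / T)
```

The two quantities agree to floating-point precision, mirroring the $1/(2\pi)$ proportionality constant of the continuous-time statement above.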
The smoothing can equally well be applied to $\mathbf{H}$ or $\mathbf{W}$ or $\mathbf{X}$. (For $\mathbf{X}$, note that for a symmetric smoothing function, $\mathbf{s}(t)=\mathbf{s}(-t)$, we also have $\widehat{\mathbf{s}}(\omega )=\widehat{\mathbf{s}}(-\omega )$.)
One can view this operation as either of the below:
First correlate $\mathbf{W}$ and $\mathbf{X}$ by summing correlations of each row, and then calculate the overlap with the smoothed $\mathbf{H}$, as described in the main text: ${\mathbf{R}}_{ij}=\frac{1}{2\pi}\sum _{n=1}^{N}{\int}_{-\mathrm{\infty}}^{\mathrm{\infty}}d\omega \left[{\widehat{\mathbf{W}}}_{i}^{(n)}(\omega ){\widehat{\mathbf{X}}}^{(n)*}(\omega )\right]\left[{\widehat{\mathbf{H}}}_{j}(\omega )\widehat{\mathbf{s}}(\omega )\right]$
Correlate $\mathbf{H}$ with each row of $\mathbf{X}$ and then calculate the overlap of this correlation with the corresponding smoothed row of $\mathbf{W}$. Then sum over all rows: ${\mathbf{R}}_{ij}=\frac{1}{2\pi}\sum _{n=1}^{N}{\int}_{-\mathrm{\infty}}^{\mathrm{\infty}}d\omega \left[{\widehat{\mathbf{H}}}_{j}(\omega ){\widehat{\mathbf{X}}}^{(n)*}(\omega )\right]\left[{\widehat{\mathbf{W}}}_{i}^{(n)}(\omega )\widehat{\mathbf{s}}(\omega )\right]$
Fourier representation of the traditional regularization approximation of the x-ortho penalty
We now proceed to show how the x-ortho penalty can be approximated by a traditional (data-free) regularization, expressing the results in the frequency domain. As in the time domain analysis, we consider the approximation in which the data $\mathbf{X}$ are nearly perfectly reconstructed by the convNMF decomposition ($\mathbf{X}\approx \mathbf{W}\circledast\mathbf{H}$).
Noting that this decomposition is a sum over factors of row-by-row ordinary convolutions, we can write the Fourier analog for each row of $\mathbf{X}$ as:
Thus, substituting the reconstruction $\mathbf{W}\circledast\mathbf{H}$ for $\mathbf{X}$ in Equation 65, we have:
As in the time domain analysis, we separate the sum over $k$ into three cases: $k=i$, $k=j$, and $k\ne i,j$. Recall that for real-valued functions, $\widehat{f}(-\omega )={\widehat{f}}^{*}(\omega )$, and $\widehat{f}(\omega ){\widehat{f}}^{*}(\omega )=|\widehat{f}(\omega ){|}^{2}$. Thus, separating the sum over $k$ into the three cases, we have:
where $Y$ represents the remaining terms for which $k\ne i,j$.
We can obtain a more symmetric form of this equation by summing the contributions of factors $i$ and $j$, ${\mathbf{R}}_{ij}+{\mathbf{R}}_{ji}$. For symmetric smoothing functions, $\mathbf{s}(t)=\mathbf{s}(-t)$, for which $\widehat{\mathbf{s}}(\omega )=\widehat{\mathbf{s}}(-\omega )$, we obtain:
As in the time domain analysis, the first two terms above have a simple interpretation in comparison to traditional orthogonality regularizations: The first term resembles a traditional regularization of orthogonality in (smoothed) $\mathbf{H}$, but now weighted frequency by frequency by the summed power at that frequency in the corresponding $\mathbf{W}$ factors. For sparse (delta-function-like) sequences, the power in $\mathbf{W}$ at each frequency is a constant and can be taken outside the integral. In this case, the regularization corresponds precisely to orthogonality in (smoothed) $\mathbf{H}$, weighted by the summed power in the corresponding $\mathbf{W}$’s. Likewise, the second term above corresponds to a traditional regularization of orthogonality in (smoothed) $\mathbf{W}$, weighted by the summed power at each component frequency in the corresponding $\mathbf{H}$ factors.
Altogether, we see that these terms represent a Fourierpower weighted extension of (smoothed) traditional orthogonality regularizations in $\mathbf{\mathbf{W}}$ and $\mathbf{\mathbf{H}}$. This weighting may be beneficial relative to traditional orthogonality penalties, since it makes the regularization focus most heavily on the factors and frequencies that contribute most to the data reconstruction.
Finally, we consider the remaining terms in the cost function, for which $k\ne i,j$. As noted previously, these terms are only relevant when the factorization contains at least three factors, so they cannot be seen in the simple Type 1, 2 and 3 cases illustrated in Figure 1. These terms have the form:
To understand how this term contributes, we consider each of the expressions in parentheses. The first expression contains each frequency component of the correlation of the ${i}^{th}$ and ${k}^{th}$ factors’ $\mathbf{W}$’s. The second expression likewise contains each frequency component of the correlation of the ${j}^{th}$ and ${k}^{th}$ factors’ $\mathbf{H}$’s. Thus, analogous to the time domain analysis, this term of ${\mathbf{R}}_{ij}$ contributes whenever there is a factor (${\mathbf{W}}_{\cdot k\cdot},{\mathbf{H}}_{k\cdot}$) that overlaps at any frequency with the ${i}^{th}$ factor’s $\mathbf{W}$ component and the ${j}^{th}$ factor’s $\mathbf{H}$ component. In this manner, this three-factor interaction term effectively enforces competition between factors $i$ and $j$ even if they are not correlated themselves, as demonstrated in Figure 1 of this Appendix.
Data availability
Code and the songbird dataset are publicly available on GitHub: https://github.com/FeeLab/seqNMF (copy archived at https://github.com/elifesciences-publications/seqNMF). Rat datasets were collected in the Buzsaki lab, and are publicly available on CRCNS (https://crcns.org/datasets/hc); users must first create a free account (https://crcns.org/register) before they can download the datasets from the site.

Collaborative Research in Computational Neuroscience. Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks. CRCNS.org. https://doi.org/10.6080/K09G5JRZ

Collaborative Research in Computational Neuroscience. Simultaneous extracellular recordings from left and right hippocampal areas CA1 and right entorhinal cortex from a rat performing a left/right alternation task and other behaviors. CRCNS.org. https://doi.org/10.6080/K0KS6PHF
References

Investigation of sequence processing: a cognitive and computational neuroscience perspective. Current Science 89:1690–1698.

Cross-validation of component models: a critical look at current methods. Analytical and Bioanalytical Chemistry 390:1241–1251. https://doi.org/10.1007/s00216-007-1790-1

Correlations without synchrony. Neural Computation 11:1537–1551. https://doi.org/10.1162/089976699300016133

Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition. Journal of Neuroscience Methods 258:1–15. https://doi.org/10.1016/j.jneumeth.2015.10.010

Nonnegative matrix factorization with temporal smoothness and/or spatial decorrelation constraints. Signal Processing 11.

Conference: Algorithms for orthogonal nonnegative matrix factorization. IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). pp. 1828–1832.

Conceptual and technical advances define a key moment for theoretical neuroscience. Nature Neuroscience 19:348–349. https://doi.org/10.1038/nn.4255

Sequence learning. Trends in Cognitive Sciences 2:275–281. https://doi.org/10.1016/S1364-6613(98)01202-9

Continuous online sequence learning with an unsupervised neural network model. Neural Computation 28:2474–2504. https://doi.org/10.1162/NECO_a_00893

Book: When does non-negative matrix factorization give a correct decomposition into parts? In: Thrun S, Saul L. K, Schölkopf B, editors. Advances in Neural Information Processing Systems, 16. MIT Press. pp. 1141–1148.

Behavior-dependent short-term assembly dynamics in the medial prefrontal cortex. Nature Neuroscience 11:823–833. https://doi.org/10.1038/nn.2134

On simplicity and complexity in the brave new world of large-scale neuroscience. Current Opinion in Neurobiology 32:148–155. https://doi.org/10.1016/j.conb.2015.04.003

Detecting synfire chains in parallel spike data. Journal of Neuroscience Methods 206:54–64. https://doi.org/10.1016/j.jneumeth.2012.02.003

Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Frontiers in Neural Circuits 10:23. https://doi.org/10.3389/fncir.2016.00023

Non-negative matrix factorization revisited: uniqueness and algorithm for symmetric decomposition. IEEE Transactions on Signal Processing 62:211–224. https://doi.org/10.1109/TSP.2013.2285514

Report: Sparse non-negative matrix factorization for clustering. Georgia Institute of Technology. Technical Report GT-CSE.

Conference: Efficient model selection for speech enhancement using a deflation method for non-negative matrix factorization. 2014 IEEE Global Conference on Signal and Information Processing (GlobalSIP). https://doi.org/10.1109/GlobalSIP.2014.7032175

Book: Algorithms for non-negative matrix factorization. In: Leen T. K, Dietterich T. G, Tresp V, editors. Advances in Neural Information Processing Systems, 13. MIT Press. pp. 556–562.

Conference: Bayesian learning and inference in recurrent switching linear dynamical systems. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics.

Building a state space for song learning. Current Opinion in Neurobiology 49:59–68. https://doi.org/10.1016/j.conb.2017.12.001

Learning mid-level auditory codes from natural sound statistics. Neural Computation 30:631–669. https://doi.org/10.1162/neco_a_01048

Book: Non-convex robust PCA. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ, editors. Advances in Neural Information Processing Systems, 27. Curran Associates, Inc. pp. 1107–1115.

Conference: Convolutive non-negative matrix factorisation with a sparseness constraint. 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing. https://doi.org/10.1109/MLSP.2006.275588

On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2:559–572. https://doi.org/10.1080/14786440109462720

Book: Sparse convolutional coding for neuronal assembly detection. In: Guyon I, Luxburg U. V, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R, editors. Advances in Neural Information Processing Systems, 30. Curran Associates, Inc. pp. 3675–3685.

Methods for identification of spike patterns in massively parallel spike trains. Biological Cybernetics 112:57–80. https://doi.org/10.1007/s00422-018-0755-0

Spatio-temporal articulatory movement primitives during speech production: extraction, interpretation, and validation. The Journal of the Acoustical Society of America 134:1378–1394. https://doi.org/10.1121/1.4812765

Close-packed silicon microelectrodes for scalable spatially oversampled neural recording. IEEE Transactions on Biomedical Engineering 63:120–130. https://doi.org/10.1109/TBME.2015.2406113

Detecting synfire chain activity using massively parallel spike train recording. Journal of Neurophysiology 100:2165–2176. https://doi.org/10.1152/jn.01245.2007

Putting big data to good use in neuroscience. Nature Neuroscience 17:1440–1441. https://doi.org/10.1038/nn.3839

Convolutive speech bases and their application to supervised speech separation. IEEE Transactions on Audio, Speech and Language Processing 15:1–12. https://doi.org/10.1109/TASL.2006.876726

Book: Sequence to sequence learning with neural networks. In: Ghahramani Z, Welling M, Cortes C, Weinberger KQ, Lawrence ND, editors. Advances in Neural Information Processing Systems, 27. MIT Press. pp. 3104–3112.

Conference: First results on uniqueness of sparse non-negative matrix factorization. 13th European Signal Processing Conference.

ASSET: analysis of sequences of synchronous events in massively parallel spike trains. PLOS Computational Biology 12:e1004939. https://doi.org/10.1371/journal.pcbi.1004939

Conference: UoI-NMF cluster: a robust non-negative matrix factorization algorithm for improved parts-based decomposition and reconstruction of noisy data. 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA).

Generalized low rank models. Foundations and Trends in Machine Learning 9:1–118. https://doi.org/10.1561/2200000055

Conference: Convex hull convolutive non-negative matrix factorization for uncovering temporal patterns in multivariate time-series data. Interspeech 2016.

Clustering stability: an overview. Foundations and Trends in Machine Learning 2:235–274.

Online non-negative convolutive pattern learning for speech signals. IEEE Transactions on Signal Processing 61:44–56. https://doi.org/10.1109/TSP.2012.2222381
Article and author information
Author details
Funding
Simons Foundation (Simons Collaboration for the Global Brain)
 Mark S Goldman
 Michale S Fee
National Institute on Deafness and Other Communication Disorders (R01 DC009183)
 Michale S Fee
G Harold and Leila Y. Mathers Foundation
 Michale S Fee
U.S. Department of Defense (NDSEG Fellowship program)
 Emily L Mackevicius
U.S. Department of Energy (Computational Science Graduate Fellowship (CSGF))
 Alex H Williams
NIH Office of the Director (Training Grant 5T32EB01994003)
 Andrew H Bahle
National Institute of Neurological Disorders and Stroke (U19 NS104648)
 Mark S Goldman
National Institute of Mental Health (R25 MH062204)
 Mark S Goldman
 Michale S Fee
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was supported by a grant from the Simons Collaboration for the Global Brain, the National Institutes of Health (NIH) [grant number R01 DC009183] and the G Harold and Leila Y Mathers Charitable Foundation. ELM received support through the NDSEG Fellowship program. AHB received support through NIH training grant 5T32EB01994003. MSG received support from the NIH [grant number U19NS104648]. AHW received support from the U.S. Department of Energy Computational Science Graduate Fellowship (CSGF) program. Thanks to Pengcheng Zhou for advice on his CNMF_E calcium data cell extraction algorithm. Thanks to Wiktor Młynarski for helpful convNMF discussions. Thanks to Michael Stetner, Galen Lynch, Nhat Le, Dezhe Jin, Edward Nieh, Adam Charles, Jane Van Velden and Yiheng Wang for comments on the manuscript and on our code package. Thanks to our reviewers for wonderful suggestions, including the use of diss to select K, and using seqNMF to measure ‘sequenciness’. Special thanks to the 2017 Methods in Computational Neuroscience course [supported by NIH grant R25 MH062204 and the Simons Foundation] at the Woods Hole Marine Biology Lab, where this collaboration was started.
Ethics
Animal experimentation: Animal care and experiments were carried out in accordance with NIH guidelines, and reviewed and approved by the Massachusetts Institute of Technology Committee on Animal Care (protocol 071507118).
Version history
 Received: May 19, 2018
 Accepted: January 4, 2019
 Version of Record published: February 5, 2019 (version 1)
 Version of Record updated: February 7, 2019 (version 2)
 Version of Record updated: August 6, 2019 (version 3)
Copyright
© 2019, Mackevicius et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Storing and accessing memories is required to successfully perform daytoday tasks, for example for engaging in a meaningful conversation. Previous studies in both rodents and primates have correlated hippocampal cellular activity with behavioral expression of memory. A key role has been attributed to awake hippocampal replay – a sequential reactivation of neurons representing a trajectory through space. However, it is unclear if awake replay impacts immediate future behavior, gradually creates and stabilizes longterm memories over a long period of time (hours and longer), or enables the temporary memorization of relevant events at an intermediate time scale (seconds to minutes). In this study, we aimed to address the uncertainty around the timeframe of impact of awake replay by collecting causal evidence from behaving rats. We detected and disrupted sharp wave ripples (SWRs)  signatures of putative replay events  using electrical stimulation of the ventral hippocampal commissure in rats that were trained on three different spatial memory tasks. In each task, rats were required to memorize a new set of locations in each trial or each daily session. Interestingly, the rats performed equally well with or without SWR disruptions. These data suggest that awake SWRs  and potentially replay  does not affect the immediate behavior nor the temporary memorization of relevant events at a short timescale that are required to successfully perform the spatial tasks. Based on these results, we hypothesize that the impact of awake replay on memory and behavior is longterm and cumulative over time.