Direct extraction of signal and noise correlations from two-photon calcium imaging of ensemble neuronal activity

  1. Anuththara Rupasinghe
  2. Nikolas Francis
  3. Ji Liu
  4. Zac Bowen
  5. Patrick O Kanold
  6. Behtash Babadi  Is a corresponding author
  1. Department of Electrical and Computer Engineering, University of Maryland, United States
  2. The Institute for Systems Research, University of Maryland, United States
  3. Department of Biology, University of Maryland, United States
  4. Department of Biomedical Engineering, Johns Hopkins University, United States

Abstract

Neuronal activity correlations are key to understanding how populations of neurons collectively encode information. While two-photon calcium imaging has created a unique opportunity to record the activity of large populations of neurons, existing methods for inferring correlations from these data face several challenges. First, the observations of spiking activity produced by two-photon imaging are temporally blurred and noisy. Second, even if the spiking data were perfectly recovered via deconvolution, inferring network-level features from binary spiking data is a challenging task due to the non-linear relation of neuronal spiking to endogenous and exogenous inputs. In this work, we propose a methodology to explicitly model and directly estimate signal and noise correlations from two-photon fluorescence observations, without requiring intermediate spike deconvolution. We provide theoretical guarantees on the performance of the proposed estimator and demonstrate its utility through applications to simulated and experimentally recorded data from the mouse auditory cortex.

Introduction

Neuronal activity correlations are essential in understanding how populations of neurons encode information. Correlations provide insights into the functional architecture and computations carried out by neuronal networks (Abbott and Dayan, 1999; Averbeck et al., 2006; Cohen and Kohn, 2011; Hansen et al., 2012; Kohn et al., 2016; Kohn and Smith, 2005; Lyamzin et al., 2015; Montijn et al., 2014; Smith and Sommer, 2013; Sompolinsky et al., 2001; Yatsenko et al., 2015). Neuronal activity correlations are often categorized in two groups: signal correlations and noise correlations (Cohen and Kohn, 2011; Cohen and Maunsell, 2009; Gawne and Richmond, 1993; Josić et al., 2009; Lyamzin et al., 2015; Vinci et al., 2016). Given two neurons, signal correlation quantifies the similarity of neural responses that are time-locked to a repeated stimulus across trials, whereas noise correlation quantifies the stimulus-independent trial-to-trial variability shared by neural responses that are believed to arise from common latent inputs.

Two-photon calcium imaging has become increasingly popular in recent years to record in vivo neural activity simultaneously from hundreds of neurons (Ahrens et al., 2013; Romano et al., 2017; Stosiek et al., 2003; Svoboda and Yasuda, 2006). This technology takes advantage of intracellular calcium flux mostly arising from spiking activity and captures calcium signaling in neurons in living animals using fluorescence microscopy. The observed fluorescence traces of calcium concentrations, however, are indirectly related to neuronal spiking activity. Extracting spiking activity from fluorescence traces is a challenging signal deconvolution problem and has been the focus of active research (Deneux et al., 2016; Friedrich et al., 2017; Grewe et al., 2010; Jewell et al., 2020; Jewell and Witten, 2018; Kazemipour et al., 2018; Pachitariu et al., 2018; Pnevmatikakis et al., 2016; Stringer and Pachitariu, 2019; Theis et al., 2016; Vogelstein et al., 2009; Vogelstein et al., 2010).

The most commonly used approach to infer signal and noise correlations from two-photon data is to directly apply the classical definitions of correlations for firing rates (Lyamzin et al., 2015) to fluorescence traces (Fallani et al., 2015; Francis et al., 2018; Rothschild et al., 2010; Winkowski and Kanold, 2013). However, it is well known that fluorescence observations are noisy and blurred surrogates of spiking activity, due to observation noise, calcium dynamics, and the temporal properties of calcium indicators. Due to temporal blurring, the resulting signal and noise correlation estimates are highly biased. An alternative approach is to carry out the inference in a two-stage fashion: first, infer spikes using a deconvolution technique, and then compute firing rates and evaluate the correlations (Kerlin et al., 2019; Najafi et al., 2020; Ramesh et al., 2018; Soudry et al., 2015; Yatsenko et al., 2015). These two-stage estimates are highly sensitive to the accuracy of spike deconvolution, and require high temporal resolution and signal-to-noise ratios (Lütcke et al., 2013; Pachitariu et al., 2018). Furthermore, these deconvolution techniques are biased toward obtaining accurate first-order statistics (i.e. spike timings) via spatiotemporal priors, which may be detrimental to recovering second-order statistics (i.e. correlations). Finally, both approaches fail to account for the non-linear dynamics of spiking activity as governed by stimuli, past activity and other latent processes (Truccolo et al., 2005). There are a few existing studies that aim at improving estimation of neuronal correlations, but they either do not consider signal correlations (Rupasinghe and Babadi, 2020; Yatsenko et al., 2015), or aim at estimating surrogates of correlations from spikes such as the connectivity/coupling matrix (Aitchison et al., 2017; Mishchenko et al., 2011; Soudry et al., 2015; Keeley et al., 2020).

Here, we propose a methodology to directly estimate both signal and noise correlations from two-photon imaging observations, without requiring an intermediate step of spike deconvolution. We pose the problem under the commonly used experimental paradigm in which neuronal activity is recorded during trials of a repeated stimulus. We avoid the need to perform spike deconvolution by integrating techniques from point processes and state-space modeling that explicitly relate the signal and noise correlations to the observed fluorescence traces in a multi-tier model. Thus, we cast signal and noise correlations within a parameter estimation setting. To solve the resulting estimation problem in an efficient fashion, we develop a solution method based on variational inference (Jordan et al., 1999; Blei et al., 2017), by combining techniques from Pólya-Gamma augmentation (Polson et al., 2013) and compressible state-space estimation (Rauch et al., 1965; Kazemipour et al., 2018; Ba et al., 2014). We also provide theoretical guarantees on the bias and variance performance of the resulting estimator.

We demonstrate the utility of our proposed estimation framework through application to simulated and real data from the mouse auditory cortex during presentations of tones and acoustic noise. In application to repeated trials under spontaneous and stimulus-driven conditions within the same experiment, our method reliably provides noise correlation structures that are invariant across the two conditions. In addition, our joint analysis of signal and noise correlations corroborates existing hypotheses regarding the distinction between their structures (Keeley et al., 2020; Rumyantsev et al., 2020; Bartolo et al., 2020). Moreover, while application of our proposed method to spatial analysis of signal and noise correlations in the mouse auditory cortex is consistent with existing work (Winkowski and Kanold, 2013), it reveals novel and distinct spatial trends in the correlation structure of layers 2/3 and 4. In summary, our method improves on existing work by: (1) explicitly modeling the fluorescence observation process and the non-linearities involved in spiking activity, as governed by both the stimulus and latent processes, through a multi-tier Bayesian forward model, (2) joint estimation of signal and noise correlations directly from two-photon fluorescence observations through an efficient iterative procedure, without requiring intermediate spike deconvolution, (3) providing theoretical guarantees on the performance of the proposed estimator, and (4) gaining access to closed-form posterior approximations, with low-complexity and iterative update rules and minimal dependence on training data. Our proposed method can thus be used as a robust and scalable alternative to existing approaches for extracting signal and noise correlations from two-photon imaging data.

Results

In this section, we first demonstrate the utility of our proposed estimation framework through simulation studies as well as applications to experimentally recorded data from the mouse auditory cortex. Then, we present theoretical performance bounds on the proposed estimator. Before presenting the results, we will give an overview of the proposed signal and noise correlation inference framework, and outline our contributions and their relationship to existing work. For ease of reproducibility, we have archived a MATLAB implementation of our proposed method on GitHub (Rupasinghe, 2020) and have deposited the data used in this work in the Digital Repository at the University of Maryland (Rupasinghe et al., 2021).

Signal and noise correlations

We consider a canonical experimental setting in which the same external stimulus, denoted by $\mathbf{s}_t$, is repeatedly presented across L independent trials, and the spiking activity of a population of N neurons is indirectly measured using two-photon calcium fluorescence imaging. Figure 1 (forward arrow) shows the generative model that is used to quantify this procedure. The fluorescence observation in the l-th trial from the j-th neuron at time frame t, denoted by $y_{t,l}^{(j)}$, is a noisy surrogate of the intracellular calcium concentration. The calcium concentrations in turn are temporally blurred surrogates of the underlying spiking activity $n_{t,l}^{(j)}$, as shown in Figure 1.
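As a concrete sketch of this observation model, the following minimal simulation generates binary spikes, a temporally blurred (first-order autoregressive) calcium concentration, and a noisy fluorescence trace. The decay constant, spike rate, and noise scale are purely illustrative; the actual model and parameter settings are detailed in Proposed forward model.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500                       # number of time frames
alpha, sigma_w = 0.95, 0.1    # illustrative AR(1) calcium decay and noise scale

# binary spiking activity (placeholder Bernoulli spikes at a low rate)
n = rng.binomial(1, 0.05, size=T).astype(float)

# calcium concentration: a temporally blurred (AR(1)-filtered) version of spiking
z = np.zeros(T)
z[0] = n[0]
for t in range(1, T):
    z[t] = alpha * z[t - 1] + n[t]

# fluorescence observation: a noisy surrogate of the calcium concentration
y = z + sigma_w * rng.standard_normal(T)
```

Each spike produces a jump in `z` that decays exponentially, so the fluorescence trace `y` is both blurred in time and corrupted by noise relative to the underlying spikes.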

The proposed generative model and inverse problem.

Observed (green) and latent (orange) variables pertinent to the j-th neuron are indicated, according to the proposed model for estimating the signal (blue) and noise (red) correlations from two-photon calcium fluorescence observations. Calcium fluorescence traces ($y_{t,l}^{(j)}$) of L trials are observed, in which the repeated external stimulus ($\mathbf{s}_t$) is known. The underlying spiking activity ($n_{t,l}^{(j)}$), the trial-to-trial variability and other intrinsic/extrinsic neural covariates that are not time-locked with the external stimulus ($x_{t,l}^{(j)}$), and the stimulus kernel ($\mathbf{d}_j$) are latent. Our main contribution is to solve the inverse problem: recovering the underlying latent signal ($\mathbf{S}$) and noise ($\mathbf{N}$) correlations directly from the fluorescence observations, without requiring intermediate spike deconvolution.

In modeling the spiking activity, we consider two main contributions: (1) the common known stimulus $\mathbf{s}_t$ affects the activity of the j-th neuron via an unknown kernel $\mathbf{d}_j$, akin to the receptive field; (2) the trial-to-trial variability and other intrinsic/extrinsic neural covariates that are not time-locked to the stimulus $\mathbf{s}_t$ are captured by a trial-dependent latent process $x_{t,l}^{(j)}$. Then, we use a generalized linear model to link these underlying neural covariates to spiking activity (Truccolo et al., 2005). More specifically, we model spiking activity as a Bernoulli process:

$n_{t,l}^{(j)} \sim \operatorname{Bernoulli}\left(\phi\left(x_{t,l}^{(j)},\ \mathbf{d}_j^\top \mathbf{s}_t\right)\right),$

where $\phi(\cdot\,,\cdot)$ is a mapping function, which could in general be non-linear.
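A minimal simulation of this spiking model, assuming a logistic choice for $\phi$ applied to the sum of its two arguments, can be sketched as follows. The kernel, latent process, and stimulus values are hypothetical stand-ins; the actual mapping and parameters are specified in Proposed forward model.

```python
import numpy as np

rng = np.random.default_rng(1)
T, R = 500, 5                     # frames and an assumed stimulus-kernel length

def logistic(u):                  # one common choice for the mapping phi
    return 1.0 / (1.0 + np.exp(-u))

s = rng.standard_normal((T, R))         # stimulus history vector s_t per frame
d = 0.5 * rng.standard_normal(R)        # receptive-field-like kernel d_j
x = 0.2 * rng.standard_normal(T) - 3.0  # latent process x_t (offset keeps rates low)

p = logistic(x + s @ d)   # per-frame spiking probability phi(x_t, d_j' s_t)
n = rng.binomial(1, p)    # Bernoulli spiking activity n_t
```

The negative offset in `x` keeps the spiking probabilities small, mimicking the low firing rates typical of two-photon recordings.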

The signal correlations aim to measure the correlations in the temporal response that are time-locked to the repeated stimulus $\mathbf{s}_t$. On the other hand, noise correlations in our setting quantify connectivity arising from covariates that are unrelated to the stimulus, including the trial-to-trial variability (Keeley et al., 2020). Based on the foregoing model, we propose to formulate the signal ($(\boldsymbol{\Sigma}_s)_{i,j}$) and noise ($(\boldsymbol{\Sigma}_x)_{i,j}$) covariance between the i-th and j-th neurons as:

(1) $(\boldsymbol{\Sigma}_s)_{i,j} := \mathbf{d}_i^\top \operatorname{cov}(\mathbf{s}_t, \mathbf{s}_t)\,\mathbf{d}_j, \qquad (\boldsymbol{\Sigma}_x)_{i,j} := \operatorname{cov}\big(x_{t,l}^{(i)}, x_{t,l}^{(j)}\big),$

where $\operatorname{cov}(\cdot\,,\cdot)$ is the empirical covariance function, defined for two vector time series $\mathbf{u}_t$ and $\mathbf{v}_t$ as $\operatorname{cov}(\mathbf{u}_t, \mathbf{v}_t) := \frac{1}{T}\sum_{t=1}^{T}\big(\mathbf{u}_t - \frac{1}{T}\sum_{t'=1}^{T}\mathbf{u}_{t'}\big)\big(\mathbf{v}_t - \frac{1}{T}\sum_{t'=1}^{T}\mathbf{v}_{t'}\big)^\top$, for a total observation duration of T time frames.
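When the latent quantities are known, the covariances of Equation 1 can be computed directly. The following sketch uses hypothetical random kernels and latent processes purely for illustration; in the actual inverse problem these quantities are unobserved and must be estimated.

```python
import numpy as np

def emp_cov(u, v):
    """Empirical covariance of two (T, dim) vector time series."""
    uc = u - u.mean(axis=0)
    vc = v - v.mean(axis=0)
    return uc.T @ vc / u.shape[0]

rng = np.random.default_rng(2)
T, R, N = 1000, 5, 3
s = rng.standard_normal((T, R))       # stimulus history vectors s_t
D = rng.standard_normal((N, R))       # row j holds the kernel d_j
x = rng.standard_normal((T, N))       # latent processes x_t^(j)

Sigma_s = D @ emp_cov(s, s) @ D.T     # (Sigma_s)_{ij} = d_i' cov(s_t, s_t) d_j
Sigma_x = emp_cov(x, x)               # (Sigma_x)_{ij} = cov(x^(i), x^(j))
```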

Our main contribution is to provide an efficient solution for the so-called inverse problem: direct estimation of 𝚺s and 𝚺x from the fluorescence observations, without requiring intermediate spike deconvolution (Figure 1, backward arrow). The signal and noise correlation matrices, denoted by 𝐒 and 𝐍, can then be obtained by standard normalization of 𝚺s and 𝚺x:

(2) $(\mathbf{S})_{i,j} := \dfrac{(\boldsymbol{\Sigma}_s)_{i,j}}{\sqrt{(\boldsymbol{\Sigma}_s)_{i,i}\,(\boldsymbol{\Sigma}_s)_{j,j}}}, \qquad (\mathbf{N})_{i,j} := \dfrac{(\boldsymbol{\Sigma}_x)_{i,j}}{\sqrt{(\boldsymbol{\Sigma}_x)_{i,i}\,(\boldsymbol{\Sigma}_x)_{j,j}}}, \qquad i,j = 1,2,\ldots,N.$
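The normalization in Equation 2 is the standard covariance-to-correlation conversion; a small sketch with an arbitrary example covariance matrix:

```python
import numpy as np

def cov_to_corr(Sigma):
    """Standard normalization of a covariance matrix (as in Equation 2)."""
    d = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(d, d)

# arbitrary symmetric positive-definite example
Sigma = np.array([[2.0, 0.6, 0.0],
                  [0.6, 1.0, -0.3],
                  [0.0, -0.3, 4.0]])
C = cov_to_corr(Sigma)
```

Each entry is divided by the geometric mean of the corresponding diagonal variances, so the diagonal of the result is identically one.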

We note that when spiking activity is directly observed using electrophysiology recordings, the conventional signal ($(\boldsymbol{\Sigma}_s^{\mathsf{con}})_{i,j}$) and noise ($(\boldsymbol{\Sigma}_x^{\mathsf{con}})_{i,j}$) covariances of spiking activity between the i-th and j-th neurons are defined as (Lyamzin et al., 2015):

(3) $(\boldsymbol{\Sigma}_s^{\mathsf{con}})_{i,j} := \operatorname{cov}\!\left(\dfrac{1}{L}\sum_{l=1}^{L} n_{t,l}^{(i)},\ \dfrac{1}{L}\sum_{l=1}^{L} n_{t,l}^{(j)}\right), \qquad (\boldsymbol{\Sigma}_x^{\mathsf{con}})_{i,j} := \dfrac{1}{L}\sum_{l=1}^{L}\operatorname{cov}\!\left(n_{t,l}^{(i)} - \dfrac{1}{L}\sum_{l'=1}^{L} n_{t,l'}^{(i)},\ n_{t,l}^{(j)} - \dfrac{1}{L}\sum_{l'=1}^{L} n_{t,l'}^{(j)}\right),$

which after the standard normalization in Equation 2 give the conventional signal ($(\mathbf{S}^{\mathsf{con}})_{i,j}$) and noise ($(\mathbf{N}^{\mathsf{con}})_{i,j}$) correlations. While at first glance our definitions of signal and noise covariances in Equation 1 may seem a far departure from the conventional ones in Equation 3, we show that the conventional notions of correlation indeed approximate the same quantities as in our definitions:

$\mathbf{S}^{\mathsf{con}} \approx \mathbf{S} \qquad \text{and} \qquad \mathbf{N}^{\mathsf{con}} \approx \mathbf{N},$

under asymptotic conditions (i.e. T and L sufficiently large). We prove this assertion of asymptotic equivalence in Appendix 1, which highlights another facet of our contributions: our proposed estimators are designed to robustly operate in the regime of finite (and typically small) T and L, aiming for the very same quantities that the conventional estimators could only recover accurately under ideal asymptotic conditions.

Existing methods used for performance comparison

In order to compare the performance of our proposed method with existing work, we consider three widely available methods for extracting neuronal correlations. In simulation studies, we additionally benchmark these estimates with respect to the known ground truth. The existing methods considered are the following:

  • Pearson correlations from the two-photon data

  • In this method, fluorescence observations are assumed to be direct measurements of spiking activity, and thus empirical Pearson correlations of the two-photon data are used to compute the signal and noise correlations (Rothschild et al., 2010; Winkowski and Kanold, 2013; Francis et al., 2018; Bowen et al., 2020). Explicitly, these estimates are obtained by simply replacing $n_{t,l}^{(j)}$ in Equation 3 by $y_{t,l}^{(j)}$, without performing spike deconvolution.

  • Two-stage Pearson estimation

  • Unlike the previous method, in this case spikes are first inferred using a deconvolution technique. Then, following temporal smoothing via a narrow Gaussian kernel, the Pearson correlations are computed using the conventional definitions of Equation 3. For spike deconvolution, we primarily used the FCSS algorithm (Kazemipour et al., 2018). In order to also demonstrate the sensitivity of these estimates to the deconvolution technique used, we provide a comparison with the f-oopsi deconvolution algorithm (Pnevmatikakis et al., 2016) in Figure 2—figure supplement 1.

  • Two-stage GPFA estimation

  • Similar to the previous method, spikes are first inferred using a deconvolution technique. Then, a latent variable model called Gaussian Process Factor Analysis (GPFA) (Yu et al., 2009) is applied to the inferred spikes in order to estimate the latent covariates and receptive fields. Based on those estimates, the signal and residual noise correlations are derived through a formulation similar to Equation 1 and Equation 2 (Ecker et al., 2014).
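To make the two-stage Pearson pipeline concrete, the following sketch illustrates only its smoothing and correlation steps. The random "spikes" stand in for deconvolution output (FCSS itself is not reimplemented here), and the kernel width is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
L, T, N = 10, 300, 3
spikes = rng.binomial(1, 0.08, size=(L, T, N)).astype(float)

# narrow Gaussian kernel for temporal smoothing (width is illustrative)
sigma = 2.0
tt = np.arange(-6, 7)
kern = np.exp(-tt**2 / (2 * sigma**2))
kern /= kern.sum()

# smooth each trial/neuron spike train to obtain firing-rate estimates
rates = np.apply_along_axis(lambda v: np.convolve(v, kern, mode="same"), 1, spikes)

# Pearson noise correlation of the trial residuals for one neuron pair
resid = rates - rates.mean(axis=0)
r01 = np.corrcoef(resid[..., 0].ravel(), resid[..., 1].ravel())[0, 1]
```

Errors in the first (deconvolution) stage propagate directly into these smoothed rates, which is the sensitivity discussed above.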

Simulation study 1: neuronal ensemble driven by external stimulus

We simulated calcium fluorescence observations according to the proposed generative model given in Proposed forward model, from an ensemble of N=8 neurons for a duration of T=5000 time frames. We considered L=20 repeated trials driven by the same external stimulus, which we modeled by an autoregressive process (see Guidelines for model parameter settings for details). Figure 2 shows the corresponding estimation results.

Figure 2 with 6 supplements see all
Results of simulation study 1.

(A) Estimated noise and signal correlation matrices from different methods. Rows from left to right: ground truth, proposed method, Pearson correlations from two-photon recordings, two-stage Pearson estimates and two-stage GPFA estimates. The normalized mean squared error (NMSE) of each estimate with respect to the ground truth and the leakage effect quantified by the ratio between out-of-network and in-network power (leakage) are indicated below each panel. (B) Simulated external stimulus (orange), latent trial-dependent process (red), fluorescence observations (black), estimated calcium concentrations (purple), putative spikes (green), and estimated mean of the latent state (blue) by the proposed method, for the first trial of neuron 1.

The first column of Figure 2A shows the ground truth noise (top) and signal (bottom) correlations (diagonal elements are all equal to one and omitted for visual convenience). The second column shows estimates of the noise and signal correlations using our proposed method, which closely match the ground truth. The third, fourth and fifth columns, respectively, show the results of the Pearson correlations from the two-photon data, two-stage Pearson, and two-stage GPFA estimation methods. Through a qualitative visual inspection, it is evident that these methods incur high rates of false alarms and missed detections of the ground truth correlations.

To quantify these comparisons, the normalized mean squared error (NMSE) of different estimates with respect to the ground truth is shown below each of the subplots (Figure 2A). Our proposed method achieves the lowest NMSE compared to the others. Furthermore, we observed a significant mixing between signal and noise correlations in these other estimates. To quantify this leakage effect, we first classified each of the correlation entries as in-network or out-of-network, based on being non-zero or zero in the ground truth, respectively (see Performance evaluation). We then computed the ratio between the power of out-of-network components and the power of in-network components as a measure of leakage. The leakage ratios are also reported in Figure 2A. The leakage of our proposed estimates is the lowest of all four techniques, in estimating both the signal and noise correlations. In order to further probe the performance of our proposed method, the simulated external stimulus $s_t$, latent trial-dependent process $x_{t,1}^{(1)}$, simulated observations $y_{t,1}^{(1)}$, estimated calcium concentration $\hat{z}_{t,1}^{(1)}$, the putative spikes $\hat{n}_{t,1}^{(1)} := \hat{z}_{t,1}^{(1)} - \alpha \hat{z}_{t-1,1}^{(1)}$, and the estimated mean of the latent state $m_{x_{t,1}^{(1)}}$, for the first trial of the first neuron are shown in Figure 2B. These results demonstrate the ability of the proposed estimation framework to accurately identify the latent processes, which in turn leads to an accurate estimation of the signal and noise correlations as shown in Figure 2A.

The main sources of the observed performance gap between our proposed method and the existing ones are the bias incurred by treating the fluorescence traces as spikes, low spiking rates, non-linearity of spike generation with respect to intrinsic and external covariates, and sensitivity to spike deconvolution. For the latter, we demonstrated the sensitivity of the two-stage Pearson estimates to the choice of the deconvolution technique in Figure 2—figure supplement 1. Furthermore, in order to isolate the effect of said non-linearities on the estimation performance, we applied the two-stage methods to ground truth spikes in Figure 2—figure supplement 2. Our analysis showed that both two-stage estimates incur significant estimation errors even if the spikes were recovered perfectly, mainly due to the limited number of trials (L=20 here). In accordance with our theoretical analysis of the asymptotic behavior of the conventional signal and noise correlation estimates given in Appendix 1, we also showed in Figure 2—figure supplement 2 that the performance of the two-stage Pearson estimates based on ground truth spikes, but using L=1000 trials, dramatically improves. Our proposed method, however, was capable of producing reliable estimates with the number of trials as low as L=20, which is typical in two-photon imaging experiments.
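The putative-spike computation $\hat{n}_t := \hat{z}_t - \alpha \hat{z}_{t-1}$ used above simply inverts the first-order calcium dynamics. A toy check on noise-free simulated dynamics, with an assumed decay constant, shows the inversion is exact in the absence of noise:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, T = 0.95, 400                     # illustrative calcium decay constant
n_true = rng.binomial(1, 0.05, size=T).astype(float)

z = np.zeros(T)                          # first-order (AR(1)) calcium dynamics
z[0] = n_true[0]
for t in range(1, T):
    z[t] = alpha * z[t - 1] + n_true[t]

# putative spikes by inverting the dynamics: n_t = z_t - alpha * z_{t-1}
n_hat = np.empty(T)
n_hat[0] = z[0]
n_hat[1:] = z[1:] - alpha * z[:-1]
```

In practice $\hat{z}$ is itself an estimate from noisy fluorescence, so the recovered putative spikes are approximate rather than exact.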

Analysis of robustness with respect to modeling assumptions

While the preceding results are quite favorable to our proposed method, the underlying generative models were the same as those used to estimate signal and noise correlations, which is in contrast to conventional real data validation with known ground truth. Access to ground truth correlations in two-photon imaging experimental settings, however, is quite challenging. In order to further probe the robustness of our proposed method in the absence of ground truth data, we utilized surrogate data that parallel the setting of Figure 2, but deviate from our modeling assumptions.

  1. Robustness to stimulus integration model mismatch. First, we considered surrogate data generated with a non-linear stimulus integration model by replacing the linear receptive field component $\mathbf{d}_j^\top \mathbf{s}_t$ with $\mathbf{d}_j^\top \mathbf{s}_t + (\tilde{\mathbf{d}}_{j,1}^\top \mathbf{s}_t)^2 + (\tilde{\mathbf{d}}_{j,2}^\top \mathbf{s}_t)^2$, where $\tilde{\mathbf{d}}_{j,1}$ and $\tilde{\mathbf{d}}_{j,2}$ are akin to quadratic receptive field components. We assumed a linear stimulus integration model in our estimation framework (i.e. $\tilde{\mathbf{d}}_{j,1} = \tilde{\mathbf{d}}_{j,2} = \mathbf{0}$). Figure 2—figure supplement 3 shows the resulting correlation estimates. While the performance of our proposed signal correlation estimates degrades under this setting as compared to Figure 2, our proposed estimates still outperform existing methods. In addition, the model mismatch in the stimulus integration component does not affect the accuracy of noise correlation estimation in our method.

  2. Robustness to calcium decay model mismatch. Next, we tested our proposed estimation framework on data simulated with a different calcium decay model. Specifically, we simulated data with second-order autoregressive calcium dynamics, and at a lower signal-to-noise ratio (SNR) compared to the setting of Figure 2, and used our inference framework which assumes first order calcium dynamics for estimation. Figure 2—figure supplement 4 shows the corresponding noise and signal correlations estimated by the proposed method under these conditions. Even though the performance slightly degrades (in terms of NMSE and leakage ratio), our method is able to recover the underlying correlations faithfully under this setting.

  3. Robustness to SNR level and firing rate. Next, we compared the performance of the Pearson and two-stage Pearson methods with our proposed method under varying SNR levels and average firing rates, as shown in Figure 2—figure supplement 5. While the performance of all methods degrades at low SNR levels or firing rates (SNR < 10 dB, firing rate < 0.5 Hz), our proposed method outperforms the existing methods for a wide range of SNR and firing rate values. To quantify this comparison, we have also indicated the mean and standard deviation of the relative performance gain of our proposed estimates across SNR levels and firing rates as insets in Figure 2—figure supplement 5.

  4. Robustness to observation noise model mismatch. Finally, we repeated the foregoing comparisons under varying SNR levels and firing rates, only now we included an additional observation noise model mismatch. Similar to the treatment in Deneux et al., 2016, we considered two temporally correlated observation noise models: white noise with a low frequency drift (Figure 2—figure supplement 6, top panels) and pink noise (Figure 2—figure supplement 6, bottom panels). In accordance with the results in Figure 2—figure supplement 5, our proposed method outperforms the existing ones for a wide range of SNR and firing rate values and under both observation noise model mismatch conditions. From Figure 2—figure supplement 6C and F, it can be observed that the ground truth spikes are favorably recovered as a byproduct of our method, even though the estimated calcium concentrations are contaminated by the temporally correlated fluctuations in observation noise. This in turn results in accurate signal and noise correlation estimates.

Simulation study 2: spontaneous activity

Next, we present the results of a simulation study in the absence of external stimuli (i.e. $\mathbf{s}_t = \mathbf{0}$), pertaining to the spontaneous activity condition. It is noteworthy that the proposed method can readily be applied to estimate noise correlations during spontaneous activity, by simply setting the external stimulus $\mathbf{s}_t$ and the receptive field $\mathbf{d}_j$ to zero in the update rules (see Proposed forward model for details). We simulated the ensemble spiking activity based on a Poisson process (Smith and Brown, 2003) using a discrete time-rescaling procedure (Brown et al., 2002; Smith and Brown, 2003), so that the data are generated using a different model than that used in our inference framework (i.e. a Bernoulli process with a logistic link, as outlined in Proposed forward model). As such, we eliminated potential performance biases in favor of our proposed method by introducing the aforementioned model mismatch. We simulated L=20 independent trials of spontaneous activity of N=30 neurons, observed for a time duration of T=5000 time frames. The number of neurons in this study is notably larger than that used in the previous one, to examine the scalability of our proposed approach with respect to the ensemble size.
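A simplified sketch of Poisson-based spike generation in discrete bins is shown below; the sinusoidal rate and frame duration are purely illustrative, and this per-bin Bernoulli approximation stands in for (rather than reproduces) the exact time-rescaling procedure of Brown et al., 2002.

```python
import numpy as np

rng = np.random.default_rng(5)
T, dt = 2000, 1.0 / 30.0     # frames and an assumed 30 Hz frame duration
# slowly varying rate in spikes/s (illustrative)
lam = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(T) / 200)

# per-bin spiking probability implied by a Poisson process with rate lam:
# P(at least one spike in a bin of width dt) = 1 - exp(-lam * dt)
p = 1.0 - np.exp(-lam * dt)
n = rng.binomial(1, p)
```

Because these spikes are not generated from the Bernoulli-logistic model used for inference, fitting them probes robustness to model mismatch, as in the study above.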

Figure 3 shows the comparison of the noise correlation matrices estimated by our proposed method, Pearson correlations from two-photon recordings, two-stage Pearson, and two-stage GPFA estimates, with respect to the ground truth. The Pearson and the two-stage estimates are highly variable and result in excessive false detections. Our proposed estimate, however, closely follows the ground truth, which is also reflected by the comparatively lower NMSE and leakage ratios, in spite of the mismatch between the models used for data generation and inference. In addition, our proposed method exhibits favorable scaling with respect to the ensemble size, thanks to the underlying low-complexity variational updates (see Low-complexity parameter updates for details).

Results of simulation study 2.

Estimated noise correlation matrices using different methods based on spontaneous activity data. Rows from left to right: ground truth, proposed method, Pearson correlations from two-photon recordings, two-stage Pearson and two-stage GPFA estimates. The normalized mean squared error (NMSE) of each estimate with respect to the ground truth and the ratio between out-of-network power and in-network power (leakage) are shown below each panel.

Real data study 1: mouse auditory cortex under random tone presentation

We next applied our proposed method to experimentally recorded two-photon observations from the mouse primary auditory cortex (A1). The dataset consisted of recordings from 371 excitatory neurons in layer 2/3 of A1, from which we selected N=16 responsive neurons (i.e. neurons that exhibited at least one spiking event in at least half of the trials considered; see Guidelines for model parameter settings). A random sequence of four tones was presented to the mouse, with the same sequence being repeated for L=10 trials. Each trial consisted of T=3600 time frames, and each tone was 2 s long followed by a 4 s silent period (see Experimental procedures for details). We considered an integration window of R=25 frames for stimulus encoding (see Guidelines for model parameter settings for details). The comparison of the noise and signal correlation estimates obtained by our proposed method, Pearson correlations from two-photon recordings, two-stage Pearson and two-stage GPFA methods is shown in Figure 4A. The spatial map of the 16 neurons considered in the analysis in the field of view is shown in Figure 4B. Figure 4C shows the stimulus tone sequence $s_t$, two-photon observations $y_{t,1}^{(1)}$, estimated calcium concentration $\hat{z}_{t,1}^{(1)}$, putative spikes $\hat{n}_{t,1}^{(1)} := \hat{z}_{t,1}^{(1)} - \alpha \hat{z}_{t-1,1}^{(1)}$, and the estimated mean of the latent state $m_{x_{t,1}^{(1)}}$, for the first trial of the first neuron.

Figure 4 with 2 supplements see all
Application to experimentally-recorded data from the mouse A1.

(A) Estimated noise (top) and signal (bottom) correlation matrices using different methods. Rows from left to right: proposed method, Pearson correlations from two-photon data, two-stage Pearson and two-stage GPFA estimates. (B) Location of the selected neurons with the highest activity in the field of view. (C) Presented tone sequence (orange), observations (black), estimated calcium concentrations (purple), putative spikes (green) and estimated mean latent state (blue) in the first trial of the first neuron. (D) Null distributions of chance occurrence of dissimilarities between signal and noise correlation estimates using different methods. The observed test statistic in each case is indicated by a dashed vertical line. (E) Scatter plots of signal vs. noise correlations for individual cell pairs (blue dots) corresponding to each method. Data were normalized for comparison by computing z-scores. For each case, the linear regression model fit is shown in red, and the slope and p-value of the t-test are indicated as insets.

We estimated the Best Frequency (BF) of each neuron as the tone that resulted in the highest level of fluorescence activity. The results in Figure 4A are organized such that neurons with the same BF are neighboring, with BF increasing along the diagonal. Thus, as expected (Bowen et al., 2020), our proposed method as well as the Pearson and two-stage Pearson estimates show high signal correlations along the diagonal. However, the two-stage GPFA estimates do not reveal such a structure. By visual inspection, as also observed in the simulation studies, the Pearson correlations from two-photon recordings, two-stage Pearson and two-stage GPFA estimates exhibit significant leakage between the signal and noise correlations, whereas our proposed signal and noise correlation estimates in Figure 4A suggest distinct spatial structures.

To quantify this visual comparison, we used a statistic based on the Tanimoto similarity metric (Lipkus, 1999), denoted by $T_s(\mathbf{X}, \mathbf{Y})$ for two matrices $\mathbf{X}$ and $\mathbf{Y}$. As a measure of dissimilarity, we used $T_d(\mathbf{X}, \mathbf{Y}) := 1 - T_s(\mathbf{X}, \mathbf{Y})$ (see Performance evaluation for details). The comparison of $T_d(\hat{\mathbf{S}}, \hat{\mathbf{N}})$ for the four estimates is presented in the second column of Table 1. To assess statistical significance, for each comparison we obtained null distributions corresponding to chance occurrence of dissimilarities using a shuffling procedure as shown in Figure 4D, and then computed one-tailed p-values from those distributions (see Performance evaluation for details). Table 1 and Figure 4D include these p-values, which show that the proposed estimates (boldface numbers in Table 1, second column) indeed have the highest dissimilarity between signal and noise correlations. The higher leakage effect in the other three estimates is also reflected in their smaller $T_d(\hat{\mathbf{S}}, \hat{\mathbf{N}})$ values.
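A sketch of this dissimilarity statistic, assuming the common real-valued generalization of the Tanimoto similarity (the exact variant used here is specified in Performance evaluation):

```python
import numpy as np

def tanimoto(X, Y):
    """Real-valued Tanimoto similarity T_s of two matrices (after Lipkus, 1999)."""
    x, y = X.ravel(), Y.ravel()
    dot = x @ y
    return dot / (x @ x + y @ y - dot)

def dissimilarity(X, Y):
    """T_d(X, Y) := 1 - T_s(X, Y)."""
    return 1.0 - tanimoto(X, Y)

# toy 2x2 correlation-like matrices for illustration
A = np.array([[0.0, 0.8], [0.8, 0.0]])
B = np.array([[0.0, 0.2], [0.2, 0.0]])
```

Identical matrices yield a dissimilarity of zero, and increasingly mismatched off-diagonal structure pushes the statistic toward one.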

Table 1
Dissimilarity metric statistics for the estimates in Figure 4A (also illustrated in Figure 4D), linear regression statistics of the comparison between signal and noise correlations in Figure 4E, and the average NMSE across 50 trials used in the shuffling procedure illustrated in Figure 5A.
| Estimate | Dissimilarity $T_d(\hat{\mathbf{S}}, \hat{\mathbf{N}})$ (Figure 4D) | Slope (p-value) (Figure 4E) | R² value (Figure 4E) | NMSE in $\hat{\mathbf{N}}$ (Figure 5) | NMSE in $\hat{\mathbf{S}}$ (Figure 5) |
| --- | --- | --- | --- | --- | --- |
| Proposed | 0.8725 (p < 10⁻⁴) | 0.02 (p = 0.84) | 4 × 10⁻⁴ | 1.07 ± 0.16 | 1.32 ± 0.19 |
| Pearson | 0.6675 (p = 0.71) | 0.33 (p = 2 × 10⁻⁴) | 0.11 | 0 | 0 |
| Two-stage Pearson | 0.7325 (p = 0.09) | 0.15 (p = 0.10) | 0.02 | 1.84 ± 0.34 | 0.55 ± 0.12 |
| Two-stage GPFA | 0.7625 (p < 10⁻⁴) | 0.02 (p = 0.86) | 3 × 10⁻⁴ | 2.32 ± 0.52 | 2.26 ± 0.51 |

To further investigate this effect, we have depicted the scatter plots of signal vs. noise correlations estimated by each method in Figure 4E. To examine the possibility of the leakage effect on a pairwise basis, we performed linear regression in each case. The slope of the model fit, the p-value for the corresponding t-test, and the R2 values are reported in the third and fourth columns of Table 1 (the slope and p-values are also shown as insets in Figure 4E). Consistent with the results of Winkowski and Kanold, 2013, the Pearson estimates suggest a significant correlation between the signal and noise correlation pairs (as indicated by the higher slope in Figure 4E). However, none of the other estimates (including the proposed estimates) in Figure 4E register a significant trend between signal and noise correlations. This further corroborates our assessment of the high leakage between signal and noise correlations in Pearson estimates, since such a leakage effect could result in overestimation of the trend between the signal and noise correlation pairs. The signal and noise correlations estimated by our proposed method show no pairwise trend, suggesting distinct patterns of stimulus-dependent and stimulus-independent functional connectivity (Kohn et al., 2016; Montijn et al., 2014; Rothschild et al., 2010; Keeley et al., 2020).
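The pairwise leakage check described above can be sketched as a simple linear regression of noise correlations on signal correlations over all cell pairs; the function name and argument shapes below are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def leakage_regression(S_hat, N_hat):
    """Regress pairwise noise correlations on signal correlations.
    A significant nonzero slope suggests leakage between the two
    estimates. S_hat, N_hat: (N, N) symmetric correlation matrices."""
    iu = np.triu_indices_from(S_hat, k=1)   # off-diagonal pairs only
    s, n = S_hat[iu], N_hat[iu]
    fit = stats.linregress(s, n)
    return fit.slope, fit.pvalue, fit.rvalue ** 2
```

A slope indistinguishable from zero (as for the proposed estimates in Table 1) indicates no systematic pairwise trend between signal and noise correlations.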

A key advantage of our proposed method over the Pearson and two-stage approaches is the explicit modeling of stimulus integration. The relevant parameter in this regard is the length of the stimulus integration window, R. While in our simulation studies the value of R was known, it needs to be set by the user in real data applications. To this end, domain knowledge or data-driven methods such as cross-validation and model order selection can be utilized (see Guidelines for model parameter settings for details). Noting that the number of parameters to be estimated scales linearly with R, it must be chosen large enough to capture the stimulus effects, yet small enough to maintain favorable computational complexity. Here, given that the typical tone response duration of mouse A1 neurons is <1 s (Linden et al., 2003; DeWeese et al., 2003; Petrus et al., 2014), with a sampling frequency of fs=30 Hz, we surmised that a choice of R of roughly 30 frames suffices to capture the stimulus effects. We further examined the effect of varying R on the proposed correlation estimates in Figure 4—figure supplement 1. As shown, small values of R (e.g. R=1 or 10) may not be adequate to fully capture stimulus integration effects. For values of R in the range 25–50, the correlation estimates remain stable. We thus chose R=25 for our analysis.

Careful inspection of the second panel in Figure 4C shows that the fluorescence activity often saturates at ∼4 times its baseline value. This effect is due to successive closely spaced spikes, which implies the occurrence of more than one spike per frame and thus violates our Bernoulli modeling assumption. To inspect the performance of our method under this scenario, we show in Figure 4—figure supplement 2 a zoomed-in view of the estimated latent processes z^t,1(1) (calcium concentration) and n^t,1(1) (putative spikes) for a sample data segment with high fluorescence activity. The estimated latent processes reveal two mechanisms leveraged by our inference method to mitigate the aforementioned model mismatch: first, our method predicts spiking events in adjacent time frames to compensate for rapid increases in firing rate and thus infers calcium concentration levels that match the observed fluorescence; second, even though our generative model assumes at most one spiking event in a given time frame, this restriction is mitigated in our inference framework by relaxing the constraint n^t,l(j):=z^t,l(j)-αz^t-1,l(j)≤1, as explained in Low-complexity parameter updates. While this relaxation was introduced for the tractability of the inverse solution, it in fact improves estimation under episodes of rapidly increasing firing rate by allowing the putative spike magnitudes n^t,l(j) to exceed 1. The latter is evident in the magnitude of the inferred spikes in Figure 4—figure supplement 2, following the rise of fluorescence activity.
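To illustrate why spikes in adjacent frames reproduce saturating fluorescence, the calcium dynamics assumed by the generative model can be sketched as a first-order autoregression, z_t = α z_{t−1} + n_t; the parameter values (α, scale, noise level) below are illustrative, not the fitted ones.

```python
import numpy as np

def simulate_calcium(spikes, alpha=0.95, A=1.0, baseline=0.0,
                     sigma_w=0.0, seed=0):
    """Toy AR(1) calcium dynamics: z_t = alpha * z_{t-1} + n_t, with
    fluorescence y_t = A * z_t + baseline + noise. All parameter values
    are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    z = np.zeros(len(spikes))
    for t, n_t in enumerate(spikes):
        z[t] = (alpha * z[t - 1] if t > 0 else 0.0) + n_t
    return A * z + baseline + sigma_w * rng.standard_normal(len(spikes))

# Closely spaced spikes accumulate: three adjacent spikes drive the
# calcium level to ~2.85, versus ~0.90 two frames after a single spike.
burst = simulate_calcium(np.array([1, 1, 1, 0, 0]))
single = simulate_calcium(np.array([1, 0, 0, 0, 0]))
```

This accumulation is how the inference framework, by placing putative spikes (of magnitude possibly exceeding 1) in consecutive frames, can match fluorescence that saturates well above the single-spike level.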

Given that the ground truth correlations are not available for a direct comparison, we instead performed a test of specificity that reveals another key limitation of existing methods. Fluorescence observations exhibit structured dynamics due to the exponential intracellular calcium concentration decay (as shown in Figure 4C, for example), which are in turn related to the underlying spikes that are driven non-linearly by intrinsic/extrinsic stimuli as well as the properties of the indicator used. As such, an accurate inference method is expected to be specific to this temporal structure. To test this, we randomly shuffled the T time frames consistently in the same order in all trials, in order to fully break the temporal structure governing calcium decay dynamics, and then estimated correlations from these shuffled data using the different methods. The resulting estimates of noise correlations are shown in Figure 5A for one instance of such shuffled data. The average NMSE for a total of 50 shuffled samples with respect to the original un-shuffled estimates (in Figure 4A) are tabulated in the fifth and sixth columns of Table 1, and are also indicated below each panel in Figure 5A.
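The shuffling-based specificity test can be sketched as follows; the `estimator` callback is a hypothetical stand-in for any of the compared methods, and the array layout (trials × frames × neurons) is an assumption for illustration.

```python
import numpy as np

def shuffle_specificity_nmse(y, estimator, n_shuffles=50, seed=0):
    """Temporal-shuffling specificity test (sketch). y: fluorescence
    array of shape (L, T, N) (trials x frames x neurons). `estimator`
    maps y -> an (N, N) noise-correlation matrix. The same frame
    permutation is applied to all trials, fully breaking the calcium
    decay dynamics while preserving frame-wise statistics."""
    rng = np.random.default_rng(seed)
    N_ref = estimator(y)                    # un-shuffled estimate
    denom = np.sum(N_ref ** 2)
    nmses = []
    for _ in range(n_shuffles):
        perm = rng.permutation(y.shape[1])  # one order for all trials
        N_shuf = estimator(y[:, perm, :])
        nmses.append(np.sum((N_shuf - N_ref) ** 2) / denom)
    return float(np.mean(nmses)), float(np.std(nmses))
```

Note that a frame-independent Pearson-style estimator returns exactly zero NMSE under this test, mirroring the invariance of the Pearson correlations observed in Figure 5A.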

Assessing the specificity of different estimation results shown in Figure 4.

Rows from left to right: proposed method, Pearson correlations from two-photon data, two-stage Pearson and two-stage GPFA estimates. (A) The estimated noise correlations using different methods after random temporal shuffling of the observations. The mean and standard deviation of the NMSE across 50 trials are indicated below each panel. (B) Histograms of the noise correlation estimates between the first and third neurons over the 50 temporal shuffling trials. The estimate based on the original (un-shuffled) data in each case is indicated by a dashed vertical line.

A visual inspection of Figure 5A shows that the Pearson correlations from two-photon recordings expectedly remain unchanged: since this method treats each time frame as independent, temporal shuffling does not affect the correlations in any way. At the other extreme, both two-stage estimates detect highly variable and large correlation values, despite operating on data that lack any relevant temporal structure. Our proposed method, in contrast, produces negligible correlation estimates. Although both the two-stage and proposed estimates show variability with respect to the shuffled data, the standard deviation of the NMSE values of our proposed method is considerably smaller than that of the two-stage methods (Table 1, fifth column). For further inspection, the histograms of a single element ((𝐍^)1,3) of the estimated correlation matrices across the 50 shuffling trials are shown in Figure 5B. The original un-shuffled estimates are marked by dashed vertical lines in each case. The proposed estimate in Figure 5B is highly concentrated around zero, even though the un-shuffled estimate is non-zero. The two-stage estimates, however, produce correlations that vary widely across the shuffling trials. This analysis demonstrates that our proposed method is highly specific to the temporal structure of fluorescence observations, whereas the Pearson correlations from two-photon recordings and the two-stage Pearson and two-stage GPFA methods fail to be specific.

Real data study 2: spontaneous vs. stimulus-driven activity in the mouse A1

To further validate the utility of our proposed methodology, we applied it to another experimentally recorded dataset from mouse A1 layer 2/3. In this experiment, trials presenting a sequence of short white noise stimuli were randomly interleaved with silent trials of the same duration; Figure 6A shows a sample trial sequence. The two-photon recordings thus contained episodes of both stimulus-driven and spontaneous activity (see Experimental procedures for details). Under this experimental setup, the noise correlations are expected to be invariant across the spontaneous and stimulus-driven conditions. In accordance with the foregoing results of real data study 1, we also expect the signal and noise correlation patterns to be distinct. Each trial considered in the analysis consisted of T=765 frames (see Experimental procedures for details). We selected N=10 responsive neurons (according to the criterion described in Guidelines for model parameter settings), each with L=10 trials. As in real data study 1, we chose a stimulus integration window of length R=25 frames.

Figure 6 with 1 supplement see all
Comparison of spontaneous and stimulus-driven activity in the mouse A1.

(A) A sample trial sequence in the experiment. Stimulus-driven (stim) trials were recorded with randomly interleaved spontaneous (spon) trials of the same duration. (B) Estimated noise and signal correlation matrices under spontaneous (top) and stimulus-driven (bottom) conditions. Rows from left to right: proposed method, Pearson correlations from two-photon data, two-stage Pearson and two-stage GPFA estimates. (C) Location of the selected neurons with highest activity in the field of view. (D) Stimulus onsets (orange), observations (black), estimated calcium concentrations (purple) and putative spikes (green) for the first trial from two pairs of neurons with high signal correlation (top) and high noise correlation (bottom), as identified by the proposed estimates.

Figure 6B shows the resulting noise and signal correlation estimates under the spontaneous (𝐍^𝗌𝗉𝗈𝗇, top) and stimulus-driven (𝐍^𝗌𝗍𝗂𝗆 and 𝐒^𝗌𝗍𝗂𝗆, bottom) conditions. Figure 6C shows the spatial map of the 10 neurons considered in the analysis in the field of view. A visual inspection of the first column of Figure 6B indeed suggests that 𝐍^𝗌𝗉𝗈𝗇 and 𝐍^𝗌𝗍𝗂𝗆 are saliently similar, and distinct from 𝐒^𝗌𝗍𝗂𝗆. The Pearson correlations obtained from two-photon data (second column) and the two-stage Pearson and GPFA estimates (third and fourth columns, respectively), however, evidently lack this structure. As in the previous study, we quantified this visual comparison using the similarity metric Ts(𝐗,𝐘) and the dissimilarity metric Td(𝐗,𝐘) (see Performance evaluation for details). These statistics are reported in Table 2 along with the p-values (null distributions are shown in Figure 6—figure supplement 1), which show that the only significant outcomes (boldface numbers) are those of our proposed method. While the experiment design implies that the noise correlations under the two settings should be similar, the only method that detects this expected outcome with statistical significance is ours. Moreover, the statistically significant dissimilarity between the signal and noise correlations of our proposed estimates corroborates the hypothesis that signal and noise are encoded by distinct functional networks (Kohn et al., 2016; Montijn et al., 2014; Rothschild et al., 2010; Keeley et al., 2020).

Table 2
Similarity/dissimilarity metric statistics for the estimates in Figure 6.
Estimation method | Ts(𝐍^𝗌𝗉𝗈𝗇,𝐍^𝗌𝗍𝗂𝗆) | Td(𝐒^𝗌𝗍𝗂𝗆,𝐍^𝗌𝗍𝗂𝗆)
Proposed          | 0.5716 (p=0.003)  | 0.7946 (p=0.004)
Pearson           | 0.3031 (p=0.61)   | 0.5032 (p=0.92)
Two-stage Pearson | 0.2790 (p=0.05)   | 0.7862 (p=0.39)
Two-stage GPFA    | 0.2008 (p=0.50)   | 0.7792 (p=0.22)

Furthermore, Figure 6D shows the time course of the stimulus, observations, estimated calcium concentrations and putative spikes for the first trial from two pairs of neurons with high signal correlation (j=2,8, top) and high noise correlation (j=3,5, bottom). As expected, the putative spiking activity of the neurons with high signal correlation (top) are closely time-locked to the stimulus onsets. The activity of the two neurons with high noise correlation (bottom), however, is not time-locked to the stimulus onsets, even though the two neurons exhibit highly correlated activity. The correlations estimated via the proposed method thus encode substantial information about the inter-dependencies of the spiking activity of the neuronal ensemble.

Real data study 3: spatial analysis of signal and noise correlations in the mouse A1

Lastly, we applied our proposed method to examine the spatial distribution of signal and noise correlations in the mouse A1 layers 2/3 and 4 (data from Bowen et al., 2020). The dataset included fluorescence activity recorded during multiple experiments of presenting sinusoidal amplitude-modulated tones, with each stimulus being repeated across several trials (see Experimental procedures and Bowen et al., 2020 for experimental details). In each experiment, we selected on average around 20 responsive neurons for subsequent analysis (according to the criterion described in Guidelines for model parameter settings). For brevity, we compare the estimates of signal and noise correlations using our proposed method only with those obtained by Pearson correlations from the two-photon data. The latter method was also used in previous analyses of data from this experimental paradigm (Winkowski and Kanold, 2013).

In parallel to the results reported in Winkowski and Kanold, 2013, Figure 7A and Figure 7B illustrate the correlation between the signal and noise correlations in layers 2/3 and 4, respectively. Consistent with the results of Winkowski and Kanold, 2013, the signal and noise correlations exhibit positive correlation in both layers, regardless of the method used. However, the correlation coefficients (i.e. slopes in the insets) identified by our proposed method are notably smaller than those obtained from Pearson correlations, in both layer 2/3 (Figure 7A) and layer 4 (Figure 7B). Comparing this result with our simulation studies suggests that the stronger linear trend between the signal and noise correlations observed using the Pearson correlation estimates is likely due to the mixing between the estimates of signal and noise correlations. As such, our method suggests that the signal and noise correlations may not be as highly correlated with one another as indicated in previous studies of layer 2/3 and 4 in mouse A1 (Winkowski and Kanold, 2013).

Next, to evaluate the spatial distribution of signal and noise correlations, we plotted the correlation values for pairs of neurons as a function of their distance for layer 2/3 (Figure 7C) and layer 4 (Figure 7D). The distances were discretized using bins of length 10μm. The scatter of the correlations along with their median at each bin are shown in all panels. Then, to examine the spatial trend of the correlations, we performed linear regression in each case. The slope of the model fit, the p-value for the corresponding t-test, and the R2 values are reported in Table 3 (the slope and p-values are also shown as insets in Figure 7C and D).
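The distance analysis can be sketched as below: a hypothetical helper that bins pairwise correlations into 10 μm distance bins, takes per-bin medians, and fits the linear trend with scipy.stats.linregress; the names and array shapes are illustrative.

```python
import numpy as np
from scipy import stats

def correlation_vs_distance(corr, positions, bin_um=10.0):
    """Bin pairwise correlations by cell-pair distance (bins of `bin_um`
    micrometers) and fit a linear trend, as a sketch of the Figure 7C/D
    analysis. corr: (N, N) matrix; positions: (N, 2) coordinates in um."""
    iu = np.triu_indices_from(corr, k=1)
    d = np.linalg.norm(positions[iu[0]] - positions[iu[1]], axis=1)
    c = corr[iu]
    fit = stats.linregress(d, c)  # spatial trend of correlation vs. distance
    bins = (d // bin_um).astype(int)
    medians = {b * bin_um: np.median(c[bins == b]) for b in np.unique(bins)}
    return fit.slope, fit.pvalue, medians
```

A significantly negative slope then corresponds to correlations that fall off with cell-pair distance, as reported in Table 3.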

Table 3
Linear regression statistics for the analysis of correlations vs. cell-pair distance.
Correlations          | Layer 2/3: Slope (p-value) | Layer 2/3: R2 | Layer 4: Slope (p-value) | Layer 4: R2
Proposed Signal Corr. | −9×10⁻⁵ (p=0.002) | 0.012 | −1×10⁻⁴ (p=3×10⁻⁶) | 0.023
Pearson Signal Corr.  | −5×10⁻⁵ (p=0.02)  | 0.007 | −3×10⁻⁵ (p=0.02)   | 0.005
Proposed Noise Corr.  | −1×10⁻⁴ (p=0.005) | 0.010 | −5×10⁻⁵ (p=0.06)   | 0.004
Pearson Noise Corr.   | −4×10⁻⁵ (p=0.1)   | 0.003 | −5×10⁻⁵ (p=0.02)   | 0.005

From Table 3 and Figure 7C and D (upper panels), it is evident that the signal correlations show a significant negative trend with respect to distance, using both methods and in both layers. However, the slope of these negative trends identified by our method (boldface numbers in Table 3) is notably steeper than those identified by Pearson correlations. On the other hand, the trends of the noise correlations with distance (bottom panels) are different between our proposed method and Pearson correlations: our proposed method shows a significant negative trend in layer 2/3, but not in layer 4, whereas the Pearson correlations of the two-photon data suggest a significant negative trend in layer 4, but not in layer 2/3. In addition, the slopes of these negative trends identified by our method (boldface numbers in Table 3) are steeper than or equal to those identified by Pearson correlations.

Our proposed estimates also indicate that noise correlations are sparser and less widespread in layer 4 (Figure 7D) than in layer 2/3 (Figure 7C). To further investigate this observation, we depicted the two-dimensional spatial spread of signal and noise correlations in both layers and for both methods in Figure 7E and F, by centering each neuron at the origin and overlaying the individual spatial spreads. The horizontal and vertical axes in each panel represent the relative dorsoventral and rostrocaudal distances, respectively, and the heat-maps represent the magnitude of correlations. Comparing the proposed noise correlation spread in Figure 7E with the corresponding spread in Figure 7F, we observe that the noise correlations in layer 2/3 are indeed more widespread and abundant than in layer 4, as expected from the more extensive intralaminar connections in layer 2/3 compared to layer 4 (Watkins et al., 2014; Meng et al., 2017a; Meng et al., 2017b; Kratz and Manis, 2015).

Figure 7 with 2 supplements see all
Comparison of signal and noise correlations across layers 2/3 and 4.

(A) Scatter-plot of noise vs. signal correlations (blue) for individual cell-pairs in layer 2/3, based on the proposed (left) and Pearson estimates (right). Data were normalized for comparison by computing z-scores. The linear model fits are shown in red, and the slope and p-value of the t-tests are indicated as insets. Panel (B) corresponds to layer 4 in the same organization as panel A. (C) Signal (top) and noise (bottom) correlations vs. cell-pair distance in layer 2/3, based on the proposed (left) and Pearson estimates (right). Distances were binned to 10μm intervals. The median of the distributions (black) and the linear model fit (red) are shown in each panel. The slope of the linear model fit, and the p-value of the t-test are also indicated as insets. Dashed horizontal lines indicate the zero-slope line for ease of visual comparison. Panel D corresponds to layer 4 in the same organization as panel C. (E) Spatial spread of signal (top) and noise (bottom) correlations in layer 2/3, based on the proposed (left) and Pearson estimates (right). The horizontal and vertical axes in each panel respectively represent the relative dorsoventral and rostrocaudal distances between each cell-pair, and the heat-map indicates the magnitude of correlations. Marginal distributions of the signal (blue) and noise (red) correlations along the dorsoventral and rostrocaudal axes for the proposed method (darker colors) and Pearson method (lighter colors) are shown at the top and right sides of the sub-panels. Panel F corresponds to layer 4 in the same organization as panel E.

The spatial spreads of signal and noise correlations based on the Pearson estimates are remarkably similar in both layers (Figure 7E and F, right panels), whereas they are saliently different for our proposed estimates (Figure 7E and F, left panels). This further corroborates our hypothesis on the possibility of high mixing between the signal and noise correlation estimates obtained by the Pearson correlation of two-photon data. To further examine the differences between the signal and noise correlations, the marginal distributions along the dorsoventral and rostrocaudal axes are shown in Figure 7E and F, selectively overlaid for ease of visual comparison. To quantify the differences between the spatial distributions of signal and noise correlations estimated by each method, we performed Kolmogorov-Smirnov (KS) tests on each pair of marginal distributions, which are summarized in Figure 7—figure supplement 1. Although the marginal distributions of signal and noise correlations are significantly different in all cases from both methods, the effect sizes of their difference (KS statistics) are higher for our proposed estimates compared to those of the Pearson estimates.
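The marginal-distribution comparison can be sketched with a two-sample Kolmogorov-Smirnov test via scipy.stats.ks_2samp; this simplifies the paper's procedure (marginals of a two-dimensional correlation spread) to a direct comparison of two one-dimensional samples.

```python
import numpy as np
from scipy import stats

def marginal_ks(signal_vals, noise_vals):
    """Two-sample Kolmogorov-Smirnov test between the marginal spatial
    distributions of signal and noise correlations. The KS statistic
    serves as an effect size for how different the two distributions
    are; the p-value assesses significance."""
    result = stats.ks_2samp(signal_vals, noise_vals)
    return result.statistic, result.pvalue
```

A larger KS statistic for the proposed estimates than for the Pearson estimates, as in Figure 7—figure supplement 1, indicates more distinct signal and noise spatial distributions.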

Finally, the spatial spreads of correlations for either method and in each layer suggest non-uniform angular distributions with possibly directional bias. To test this effect, we computed the angular marginal distributions and performed KS tests for non-uniformity, which are reported in Figure 7—figure supplement 2. These tests indicate that all distributions are significantly non-uniform. In addition, the angular distributions of both signal and noise correlations in layer 4 exhibit salient modes in the rostrocaudal direction, whereas they are less directionally selective in layer 2/3 (Figure 7—figure supplement 2).

In summary, the spatial trends identified by our proposed method are consistent with empirical observations of spatially heterogeneous pure-tone frequency tuning by individual neurons in auditory cortex (Winkowski and Kanold, 2013). The improved correspondence of our proposed method compared to results obtained using Pearson correlations could be the result of the demixing of signal and noise correlations in our method. As a result of the demixing, our proposed method also suggests that noise correlations have a negative trend with distance in layer 2/3, but are much sparser and spatially flat in layer 4. In addition, the spatial spread patterns of signal and noise correlations are more structured and remarkably more distinct for our proposed method than those obtained by the Pearson estimates.

Theoretical analysis of the bias and variance of the proposed estimators

Finally, we present a theoretical analysis of the bias and variance of the proposed estimator. Note that our proposed estimation method was developed as a scalable alternative to the intractable maximum likelihood (ML) estimation of the signal and noise covariances (see Overview of the proposed estimation method). In order to benchmark our estimates, we thus need to evaluate the quality of said ML estimates. To this end, we derived bounds on the bias and variance of the ML estimators of the kernel 𝐝j for j=1,…,N and the noise covariance 𝚺x. To simplify the treatment, we posit the following mild assumptions:

Assumption (1). We assume a scalar time-varying external stimulus (i.e. 𝐬t=st, and hence 𝐝j=dj, 𝐝=[d1,d2,…,dN]). Furthermore, we set the observation noise covariance to be 𝚺w=σw2𝐈, for notational convenience.

Assumption (2). We derive the performance bounds in the regime where T and L are large, and thus do not impose any prior distribution on the correlations, which are otherwise needed to mitigate overfitting (see Preliminary assumptions).

Assumption (3). We assume the latent trial-dependent process and stimulus to be slowly varying signals, and thus adopt a piece-wise constant model in which these processes are constant within consecutive windows of length W (i.e. 𝐱t,l=𝐱Wk,l and st=sWk, for (k-1)W+1 ≤ t ≤ kW and k=1,…,K, with Wk=(k-1)W+1 and KW=T) for our theoretical analysis, as is commonly done in spike count calculations for conventional noise correlation estimates.

Our main theoretical result is as follows:

Theorem 1 (Performance Bounds). Let q>1/64, 0<ϵ<1/2, and 0<η≤1/2 be fixed constants, and define σm2:=maxi(Σx)i,i and σs2:=(1/K)∑k=1K sWk2. Then, under Assumptions (1 - 3), the bias and variance of the maximum likelihood estimators d^ and Σ^x, conditioned on an event 𝒜W with P(𝒜W)≥1-η, satisfy:

$$
\begin{aligned}
\left|\operatorname{bias}_{\mathcal{A}_W}(\hat{d}_j)\right| &\leq \frac{1}{W^{\frac{1}{2}-\epsilon}}\, C_1\!\left(2\sigma_w\sqrt{1+\alpha^2}+1\right) + \tau_j,\\[4pt]
\operatorname{Var}_{\mathcal{A}_W}(\hat{d}_j) &\leq \frac{(\Sigma_x)_{j,j}}{K L\,\sigma_s^2\,(1-\eta)} + \frac{1}{W^{\frac{1}{2}-\epsilon}}\, C_2\!\left(2\sigma_w\sqrt{1+\alpha^2}+1\right) + \tilde{\tau}_j,\\[4pt]
\left|\operatorname{bias}_{\mathcal{A}_W}\!\big((\hat{\Sigma}_x)_{i,j}\big)\right| &\leq \frac{|(\Sigma_x)_{i,j}|}{K L (1-\eta)} + \frac{\log W}{W^{\frac{1}{2}-\epsilon}}\, C_3\!\left(14\sigma_w\sqrt{1+\alpha^2}+3\right) + \xi_{i,j},\\[4pt]
\operatorname{Var}_{\mathcal{A}_W}\!\big((\hat{\Sigma}_x)_{i,j}\big) &\leq \frac{(KL-1)\big((\Sigma_x)_{i,j}^2+(\Sigma_x)_{i,i}(\Sigma_x)_{j,j}\big)}{K^2 L^2 (1-\eta)} + \frac{\log W}{W^{\frac{1}{2}-\epsilon}}\, C_4\!\left(2\sigma_w\sqrt{1+\alpha^2}+1\right) + \tilde{\xi}_{i,j},
\end{aligned}
$$

for all i,j=1,2,…,N, if

$$
\log W \;\geq\; \max\!\left\{ \frac{\log(8KLN/\eta)}{q},\; \frac{32\sigma_m^2 q}{\epsilon^2},\; \frac{2\log(64q)}{1-2\epsilon},\; \frac{\max\!\left\{6.25,\; 4\big(\mu_x+\max_{k,j}|s_{W_k} d_j|\big)^2\right\}}{8q\sigma_m^2},\; \log 2 \right\},
$$

where τj and τ~j denote bounded terms that are O(σw2) or O(1/W), ξi,j and ξ~i,j denote bounded terms that are O(σw2) or O(1/W^(1-2ϵ)), and C1, C2, C3, and C4 are bounded constants given in Appendix 2.

Proof. The proof of Theorem 1 is provided in Appendix 2.

In order to discuss the implications of this theoretical result, several remarks are in order:

  • Remark 1: Achieving near Oracle performance

  • A common benchmark in estimation theory is the performance of the idealistic oracle estimator, in which an oracle directly observes the true latent process 𝐱t,l and the true kernel dj and forms the correlation estimates. In this case, the oracle would incur zero bias and variance of order 𝒪(1/KL) in estimating dj, and would output an estimate of 𝚺x with bias and variance of order 𝒪(1/KL). Theorem 1 indeed states that for sufficiently large W and small σw, the bias and variance of the ML estimators are arbitrarily close to those of the oracle estimator. Recall that our variational inference framework is in fact a solution technique for the regularized ML problem. Hence, the bounds in Theorem 1 provide a benchmark for the expected performance of the proposed estimators, by quantifying the excess bias and variance over the performance of the oracle estimator.

  • Remark 2: Effect of the observation noise and observation duration

  • As the assumed window of stationarity W (and hence the observation duration T) grows, the loss of performance of the proposed estimators depends only on σw2, the variance of the observation noise. As a result, at a given observation noise variance σw2, these bounds provide a sufficient upper bound on the time duration of the observations required to attain a desired level of estimation accuracy. It is noteworthy that σw2 is typically small in practice, as it pertains to the effective observation noise and is significantly diminished by pixel averaging of the fluorescence traces following cell segmentation.

  • Remark 3: Effect of the number of trials

  • Finally, note that the bounds in Theorem 1 contain terms that also vanish as the number of trials L grows. These terms in fact pertain to the performance of the oracle estimator: as the number of trials grows (L→∞), the oracle estimates become arbitrarily close to the true parameters 𝚺x and 𝐝j. Thus, our theoretical performance bounds also provide a sufficient upper bound on the number of trials L required for the oracle estimator to attain a desired level of estimation accuracy.

Discussion

We developed a novel approach for the joint estimation of signal and noise correlations of neuronal activities directly from two-photon calcium imaging observations and tested our method with experimental data. Existing widely used methods either take the fluorescence traces as surrogates of spiking activity, or first recover the unobserved spikes using deconvolution techniques, both followed by computing Pearson correlations or connectivity matrices. As such, they typically result in estimates that are highly biased and are heavily dependent on the choice of the spike deconvolution technique. We addressed these issues by explicitly relating the signal and noise covariances to the observed two-photon data via a multi-tier Bayesian model that accounts for the observation process and non-linearities involved in spiking activity. We developed an efficient estimation framework by integrating techniques from variational inference and state-space estimation. We also established performance bounds on the bias and variance of the proposed estimators, which revealed favorable scaling with respect to the observation noise and trial length.

We demonstrated the utility of our proposed estimation framework on both simulated and experimentally recorded data from the mouse auditory cortex. In our simulation studies, we evaluated the robustness of our proposed method with respect to several model mismatch conditions induced by the stimulus integration model, calcium decay, SNR level, firing rate, and temporally correlated observation noise. In all cases, we observed that our proposed estimates outperform the existing methods in recovering the signal and noise correlations.

There are two main sources for the observed performance gap between our proposed method and existing approaches. The first is the favorable soft decisions on the timing of spikes achieved by our method as a byproduct of the iterative variational inference procedure. An accurate probabilistic decoding of spikes results in better estimates of the signal and noise correlations, and conversely, more accurate estimates of the signal and noise covariances improve the probabilistic characterization of spiking events. This is in contrast with both the Pearson correlations computed from two-photon data and the two-stage methods: in computing Pearson correlations from two-photon data, spike timing is heavily blurred by the calcium decay; in the two-stage methods, erroneous hard decisions on the timing of spikes introduce biases that propagate to and contaminate the downstream signal and noise correlation estimation, resulting in significant errors.

The second source of performance improvement is the explicit modeling of the non-linear mapping from stimulus and latent covariates to spiking through a canonical point process model, which is in turn tied to a two-photon observation model in a multi-tier Bayesian fashion. Our theoretical analysis in Theorem 1 corroborates that this virtue of our proposed methodology results in robust performance under a limited number of trials. As we have shown in Appendix 1, as the number of trials L and trial duration T tend to infinity, conventional notions of signal and noise correlation indeed recover the ground truth signal and noise correlations, as the biases induced by non-linearities average out across trial repetitions. However, as exemplified in Figure 2—figure supplement 2, to achieve performance comparable to that attained by our method with few trials (e.g. L=20), the conventional correlation estimates require considerably more trials (e.g. L=1000).

Application to two-photon data recorded from the mouse primary auditory cortex showed that unlike the aforementioned existing methods, our estimates provide noise correlation structures that are expectedly invariant across spontaneous and stimulus-driven conditions within an experiment, while producing signal correlation structures that are largely distinct from those given by noise correlation. These results provide evidence for the involvement of distinct functional neuronal network structures in encoding the stimulus-dependent and stimulus-independent information.

Our analysis of the relationship between the signal and noise correlations in layers 2/3 and 4 in mouse A1 indicated a smaller correlation between signal and noise correlations than previously reported (Winkowski and Kanold, 2013). Thus, our proposed method suggests that the signal and noise correlations reflect distinct circuit mechanisms of sound processing in layers 2/3 vs 4. The spatial distribution of signal correlations obtained by our method was consistent with previous work showing significant negative trends with distance (Winkowski and Kanold, 2013). However, in addition, our proposed method revealed a significant negative trend of noise correlations with distance in layer 2/3, but not in layer 4, in contrast to the outcome of Pearson correlation analysis. The lack of a negative trend in layer 4 could be attributed to the sparse nature of the noise correlation spread in layer 4, as revealed by our analysis of two-dimensional spatial spreads. The latter analysis indeed revealed that the noise correlations in layer 2/3 are more widespread than those in layer 4, consistent with existing work based on whole-cell patch recordings (Meng et al., 2017a; Meng et al., 2017b).

The two-dimensional spatial spreads of signal and noise correlations obtained by our method are more distinct than those obtained by Pearson correlations. The spatial spreads also allude to directionality of the functional connectivity patterns, with a notable rostrocaudal preference in layer 4. This result seems surprising in light of existing evidence for quasi-rostrocaudal organization of the tonotopic axis in mouse A1 (Romero et al., 2020). However, given the heterogeneity of tuning in both layers 2/3 and 4, with a best frequency interquartile range of ∼1–1.5 octaves over the imaging field (Bowen et al., 2020), and given our use of supra-threshold tones, we expect that the tones will drive not only neurons with the corresponding best frequency, but also neurons tuned to neighboring frequencies. Moreover, there is high connectivity between layer 4 cells within a few hundred micrometers across the tonotopic axis (Kratz and Manis, 2015; Meng et al., 2017a), potentially amplifying and broadening the effect of supra-threshold tones.

Our proposed method can scale up favorably to larger populations of neurons, thanks to the underlying low-complexity variational updates in the inference procedure. Due to its minimal dependence on training data, our estimation framework is also applicable to single-session analysis of two-photon data with a limited number of trials and limited trial duration. Another useful byproduct of the proposed framework is access to approximate posterior densities in closed form, which allows further statistical analyses such as the construction of confidence intervals. Our proposed methodology can thus be used as a robust and scalable alternative to existing approaches for extracting neuronal correlations from two-photon calcium imaging data.

A potential limitation of our proposed generative model is the assumption that there is at most one spiking event per time frame for each neuron, given that typical two-photon imaging frame durations are in the range of 30–100 ms. Average spike rates of excitatory neurons in mouse A1 layers 2/3 and 4 are on the order of <10 Hz (Petrus et al., 2014; Forli et al., 2018), and thus our model is reasonable for the current study, although it might not be optimal during bursting activity. It is noteworthy that we relax this assumption in the inference framework by allowing the magnitude of putative spikes to be greater than one, thus alleviating the model mismatch during episodes of rapid increase in firing rate. This assumption could also be made more precise by adopting a Poisson model, but that would render closed-form variational density updates intractable.

Furthermore, in the regime of extremely low spiking rates and high observation noise, the proposed method may fail to capture the underlying correlations faithfully, and its performance degrades to that of existing methods based on Pearson correlations, as we have shown through our simulation studies. Nevertheless, our method addresses key limitations of conventional signal and noise correlation estimators that persist even in high spiking rate and high SNR conditions.

Our proposed estimation framework can be used as groundwork for incorporating other notions of correlation such as the connected correlation function (Martin et al., 2020), and to account for non-Gaussian and higher order structures arising from spatiotemporal interactions (Kadirvelu et al., 2017; Yu et al., 2011). Other possible extensions of this work include leveraging variational inference beyond the mean-field regime (Wang and Blei, 2013), extension to time-varying correlations that underlie rapid task-dependent dynamics, and extension to non-linear models such as those parameterized by neural networks (Aitchison et al., 2017). In the spirit of easing reproducibility, a MATLAB implementation of our proposed method as well as the data used in this work are made publicly available (Rupasinghe, 2020; Rupasinghe et al., 2021).

Materials and methods

Proposed forward model

Suppose we observe fluorescence traces of N neurons, for a total duration of T discrete-time frames, corresponding to L independent trials of a repeated stimulus. Let 𝐲t,l := [yt,l(1), yt,l(2), …, yt,l(N)]⊤, 𝐳t,l := [zt,l(1), zt,l(2), …, zt,l(N)]⊤, and 𝐧t,l := [nt,l(1), nt,l(2), …, nt,l(N)]⊤ be the vectors of noisy observations, intracellular calcium concentrations, and ensemble spiking activities, respectively, at trial l and frame t. We capture the dynamics of 𝐲t,l by the following state-space model:

𝐲t,l=𝐀𝐳t,l+𝐰t,l,𝐳t,l=α𝐳t-1,l+𝐧t,l,

where 𝐀 ∈ ℝ^{N×N} represents the scaling of the observations, 𝐰t,l is zero-mean i.i.d. Gaussian noise with covariance 𝚺w, and 0 ≤ α < 1 is the state transition parameter capturing the calcium dynamics through a first-order model. Note that this state-space model is non-Gaussian due to the binary nature of the spiking activity, that is, nt,l(j) ∈ {0,1}. We model the spiking data as a point process or Generalized Linear Model with Bernoulli statistics (Eden et al., 2004; Paninski, 2004; Smith and Brown, 2003; Truccolo et al., 2005):

nt,l(j) ∼ Bernoulli(λt,l(j)),  λt,l(j) = ϕ(xt,l(j), 𝐝j⊤𝐬t),

where λt,l(j) is the conditional intensity function (Truccolo et al., 2005), which we model as a non-linear function of the known external stimulus 𝐬t and the other latent intrinsic and extrinsic trial-dependent covariates, 𝐱t,l := [xt,l(1), xt,l(2), …, xt,l(N)]⊤. While we assume the stimulus 𝐬t ∈ ℝ^M to be common to all neurons, we model the distinct effect of this stimulus on the j𝗍𝗁 neuron via an unknown kernel 𝐝j ∈ ℝ^M, akin to the receptive field.

The non-linear mapping of our choice is the logistic link, which is also the canonical link for a Bernoulli process in the point process and Generalized Linear Model frameworks (Truccolo et al., 2005). Thus, we assume:

ϕ(xt,l(j),𝐝j𝐬t)=exp(xt,l(j)+𝐝j𝐬t)1+exp(xt,l(j)+𝐝j𝐬t).

Finally, we assume the latent trial-dependent covariates to form a Gaussian process 𝐱t,l ∼ 𝒩(𝝁x, 𝚺x), with mean 𝝁x := [μx(1), μx(2), …, μx(N)]⊤ and covariance 𝚺x.
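To make the forward model concrete, the generative process can be sketched in a few lines of Python/NumPy (a toy illustration with made-up dimensions and parameter values, not the authors' MATLAB implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, L, M = 3, 200, 5, 2                 # toy dimensions (not from the paper)
alpha = 0.95                              # calcium decay (state transition)
A = np.eye(N)                             # observation scaling
Sigma_w = 1e-4 * np.eye(N)                # observation noise covariance
mu_x = -4.5 * np.ones(N)                  # mean of latent trial-dependent covariates
Sigma_x = 0.1 * np.eye(N)                 # noise covariance (trial-to-trial variability)
D = rng.uniform(-0.5, 0.5, size=(M, N))   # receptive-field kernels d_j (columns)
s = rng.standard_normal((T, M))           # common external stimulus s_t

y = np.zeros((L, T, N))
for l in range(L):
    z = np.zeros(N)                       # calcium concentrations
    for t in range(T):
        x = rng.multivariate_normal(mu_x, Sigma_x)     # latent covariate x_{t,l}
        lam = 1.0 / (1.0 + np.exp(-(x + D.T @ s[t])))  # logistic link -> spike probability
        n = rng.binomial(1, lam)                       # Bernoulli spiking
        z = alpha * z + n                              # first-order calcium dynamics
        y[l, t] = A @ z + rng.multivariate_normal(np.zeros(N), Sigma_w)
```

Each trial draws latent covariates around 𝝁x, maps them together with the stimulus contribution 𝐝j⊤𝐬t through the logistic link, and filters the resulting Bernoulli spikes through the first-order calcium dynamics.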

The probabilistic graphical model in Figure 8 summarizes the main components of the aforementioned forward model. According to this forward model, the underlying noise covariance matrix that captures trial-to-trial variability can be identified as 𝚺x. The signal covariance matrix, representing the covariance of the neural activity arising from the repeated application of the stimulus 𝐬t, is given by 𝚺s := 𝐃⊤cov(𝐬t,𝐬t)𝐃, where 𝐃 := [𝐝1, 𝐝2, …, 𝐝N] ∈ ℝ^{M×N}. The signal and noise correlation matrices, denoted by 𝐒 and 𝐍, can then be obtained by standard normalization of 𝚺s and 𝚺x:

(𝐒)i,j := (𝚺s)i,j / √((𝚺s)i,i ⋅ (𝚺s)j,j),  (𝐍)i,j := (𝚺x)i,j / √((𝚺x)i,i ⋅ (𝚺x)j,j),  i,j = 1, 2, …, N.

The main problem is thus to estimate {𝚺x,𝐃} from the noisy and temporally blurred data {𝐲t,l}t=1,l=1T,L .
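The normalization above is the standard covariance-to-correlation scaling; a minimal helper (illustrative only) is:

```python
import numpy as np

def cov_to_corr(Sigma):
    """Normalize a covariance matrix to a correlation matrix."""
    d = np.sqrt(np.diag(Sigma))
    return Sigma / np.outer(d, d)

Sigma = np.array([[2.0, 0.6], [0.6, 0.5]])
C = cov_to_corr(Sigma)
```

The same scaling is applied to both 𝚺s and 𝚺x to obtain 𝐒 and 𝐍.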

Probabilistic graphical model of the proposed forward model.

The fluorescence observations at the t𝗍𝗁 time frame and l𝗍𝗁 trial, 𝐲t,l, are noisy surrogates of the intracellular calcium concentrations 𝐳t,l. The calcium concentration at time t is a function of the spiking activity 𝐧t,l and the calcium activity at the previous time point 𝐳t−1,l. The spiking activity is driven by two independent mechanisms: latent trial-dependent covariates 𝐱t,l, and contributions from the known external stimulus 𝐬t, which we model by 𝐃⊤𝐬t (in which the receptive field 𝐃 is unknown). We model 𝐱t,l as a Gaussian process with constant mean 𝝁x and unknown covariance 𝚺x. Finally, we assume the covariance 𝚺x to have an inverse Wishart prior distribution with hyper-parameters 𝝍x and ρx. Based on this forward model, the inverse problem amounts to recovering the signal and noise correlations by directly estimating 𝚺x and 𝐃 (top layer) from the fluorescence observations {𝐲t,l}t=1,l=1T,L (bottom layer).

Overview of the proposed estimation method

First, given a limited number of trials L from an ensemble with typically low spiking rates, we need to incorporate suitable prior assumptions to avoid overfitting. Thus, we impose a prior p𝗉𝗋(𝚺x) on the noise covariance to compensate for the sparsity of the data. A natural approach to estimating {𝚺x, 𝐃} in a Bayesian framework would be to maximize the observed data likelihood p({𝐲t,l}t,l=1T,L | 𝚺x, 𝐃), that is, maximum likelihood (ML); however, the marginalization over the hierarchy of latent processes renders this likelihood intractable. We therefore consider the joint likelihood of the observed data and the latent processes to perform Maximum a Posteriori (MAP) estimation:

(4) p(𝐲,𝐳,𝐱,𝚺x|𝐃) = p𝗉𝗋(𝚺x) ∏t,l=1T,L [1/√((2π)^N |𝚺w|)] exp(−½(𝐲t,l−𝐀𝐳t,l)⊤𝚺w⁻¹(𝐲t,l−𝐀𝐳t,l)) × ∏t,l,j=1T,L,N [exp(xt,l(j)+𝐝j⊤𝐬t)]^(zt,l(j)−αzt−1,l(j)) / [1+exp(xt,l(j)+𝐝j⊤𝐬t)] × ∏t,l=1T,L [1/√((2π)^N |𝚺x|)] exp(−½(𝐱t,l−𝝁x)⊤𝚺x⁻¹(𝐱t,l−𝝁x)).

Inspecting this MAP problem soon reveals that estimating 𝚺x and 𝐃 is a challenging task: (1) standard approaches such as Expectation-Maximization (EM) (Shumway and Stoffer, 1982) are intractable due to the complexity of the model, arising from the hierarchy of latent processes and the non-linearities involved in their mappings and (2) the temporal coupling of the likelihood in the calcium concentrations makes any potential direct solver scale poorly with T.

Thus, we propose an alternative solution based on Variational Inference (VI) (Beal, 2003; Blei et al., 2017; Jordan et al., 1999). VI is a method widely used in Bayesian statistics to approximate unwieldy posterior densities using optimization techniques, as a low-complexity alternative strategy to Markov Chain Monte Carlo sampling (Hastings, 1970) or empirical Bayes techniques such as EM. To this end, we treat {𝐱t,l}t,l=1T,L and 𝚺x as latent variables and {𝐳t,l}t,l=1T,L and 𝐃 as unknown parameters to be estimated. We introduce a framework to update the latent variables and parameters sequentially, with straightforward update rules. We will describe the main ingredients of the proposed framework in the following subsections. Hereafter, we use the shorthand notations 𝐲:={𝐲t,l}t,l=1T,L, 𝐳:={𝐳t,l}t,l=1T,L, and 𝐱:={𝐱t,l}t,l=1T,L.

Preliminary assumptions

For the sake of simplicity, we assume that the constants α, 𝐀, 𝚺w and 𝝁x are either known or can be consistently estimated from pilot trials. Next, we take p𝗉𝗋(𝚺x) to be an Inverse Wishart density:

𝚺x ∼ InvWishN(𝝍x, ρx),

which turns out to be the conjugate prior in our model. Thus, 𝝍x and ρx will be the hyper-parameters of our model. Procedures for hyper-parameter tuning and choosing the key model parameters are given in subsections Hyper-parameter tuning and Guidelines for model parameter settings, respectively.

Decoupling via Pólya-Gamma augmentation

Direct application of VI to problems containing both discrete and continuous random variables results in intractable densities. Specifically, finding a variational distribution for 𝐱t,l within a standard family is not straightforward, due to the complicated posterior arising from co-dependent Bernoulli and Gaussian random variables. In order to overcome this difficulty, we employ Pólya-Gamma (PG) latent variables (Pillow and Scott, 2012; Polson et al., 2013; Linderman et al., 2016). We observe from Equation 4 that the posterior density p(𝐱|𝐳,𝐃,𝚺x) is conditionally independent across t and l, with:

p(𝐱t,l|𝐳,𝐃,𝚺x) ∝ p(𝐱t,l|𝚺x) ∏j=1N [exp(xt,l(j)+𝐝j⊤𝐬t)]^(zt,l(j)−αzt−1,l(j)) / [1+exp(xt,l(j)+𝐝j⊤𝐬t)].

Thus, upon careful inspection, we see that this density has the desired form for the PG augmentation scheme (Polson et al., 2013). Accordingly, we introduce a set of auxiliary PG-distributed i.i.d. latent random variables 𝝎t,l := [ωt,l(1), ωt,l(2), …, ωt,l(N)]⊤, with ωt,l(j) ∼ PG(1,0) for 1 ≤ j ≤ N, 1 ≤ t ≤ T, and 1 ≤ l ≤ L, to derive the complete data log-likelihood:

(5) log p(𝐲,𝐳,𝐱,𝝎,𝚺x|𝐃) = −(TL/2) log|𝚺x| + log p𝗉𝗋(𝚺x) + Σt,l=1T,L { −½(𝐲t,l−𝐀𝐳t,l)⊤𝚺w⁻¹(𝐲t,l−𝐀𝐳t,l) − ½(𝐱t,l−𝝁x)⊤𝚺x⁻¹(𝐱t,l−𝝁x) + Σj=1N { (zt,l(j)−αzt−1,l(j)−½)(xt,l(j)+𝐝j⊤𝐬t) − ½ωt,l(j)(xt,l(j)+𝐝j⊤𝐬t)² + log pPG(1,0)(ωt,l(j)) } } + C,

where 𝝎:={𝝎t,l}t,l=1T,L and C accounts for terms not depending on 𝐲,𝐳,𝐱,𝝎, 𝚺x and 𝐃. The complete data log-likelihood is notably quadratic in 𝐳t,l, which as we show later admits efficient estimation procedures with favorable scaling in T.
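A useful consequence of the PG augmentation is that the mean of a PG(1, c) variable has the closed form tanh(c/2)/(2c), which is exactly the diagonal entry of Ω̃ appearing in the variational updates. This identity can be checked numerically against the truncated infinite-series representation of the PG(1, c) mean (a verification sketch, not part of the estimation pipeline):

```python
import numpy as np

def pg_mean_closed_form(c):
    """E[omega] for omega ~ PG(1, c), i.e. tanh(c/2)/(2c)."""
    return np.tanh(c / 2.0) / (2.0 * c)

def pg_mean_series(c, K=100_000):
    """Truncated infinite-series representation of the PG(1, c) mean:
    (1/(2*pi^2)) * sum_k 1/((k - 1/2)^2 + c^2/(4*pi^2))."""
    k = np.arange(1, K + 1)
    return np.sum(1.0 / ((k - 0.5) ** 2 + c ** 2 / (4.0 * np.pi ** 2))) / (2.0 * np.pi ** 2)
```

The two expressions agree up to truncation error, which decays as 1/K.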

Deriving the optimal variational densities

In this section, we outline the procedure of applying VI to the latent variables 𝐱 = {𝐱t,l}t,l=1T,L, 𝝎 = {𝝎t,l}t,l=1T,L, and 𝚺x, assuming that the parameter estimates 𝐳^ and 𝐃^ from the previous iteration are available. The methods that we propose for subsequently updating the parameters 𝐳^ and 𝐃^ are discussed in the next section.

The objective of variational inference is to posit a family of approximate densities 𝒬 over the latent variables, and to find the member of that family that minimizes the Kullback-Leibler (KL) divergence to the exact posterior:

q*(𝐱,𝝎,𝚺x|𝐳^,𝐃^) = arg min_{q∈𝒬} KL( q(𝐱,𝝎,𝚺x|𝐳^,𝐃^) ‖ p(𝐱,𝝎,𝚺x|𝐲,𝐳^,𝐃^) ).

However, evaluating the KL divergence is intractable, and it has been shown (Blei et al., 2017) that an equivalent result to this minimization can be obtained by maximizing the alternative objective function, called the evidence lower bound (ELBO):

ELBO(q) = 𝔼[log p(𝐱,𝝎,𝚺x,𝐲|𝐳^,𝐃^)] − 𝔼[log q(𝐱,𝝎,𝚺x|𝐳^,𝐃^)].

Further, we assume 𝒬 to be a mean-field variational family (Blei et al., 2017), resulting in the overall variational density of the form:

(6) q(𝐱,𝝎,𝚺x)=q(𝚺x)t,l=1T,L(q(𝐱t,l)j=1Nq(ωt,l(j))).

Under the mean-field assumption, the maximization of the ELBO can be carried out using the 'Coordinate Ascent Variational Inference' (CAVI) algorithm (Bishop, 2006; Blei et al., 2017). Accordingly, the optimal variational densities in Equation 6 take the forms:

log q*(𝐱t,l) ∝ 𝔼q(𝚺x)q(𝝎t,l)[log p(𝐱t,l|𝝎t,l,𝚺x,𝐲,𝐳^,𝐃^)],  log q*(ωt,l(j)) ∝ 𝔼q(𝐱t,l)[log p(ωt,l(j)|𝐱t,l,𝚺x,𝐲,𝐳^,𝐃^)],  log q*(𝚺x) ∝ 𝔼q(𝐱)[log p(𝚺x|𝐱,𝐲,𝐳^,𝐃^)].

Upon evaluation of these expectations, we derive the optimal variational distributions as:

q*(𝐱t,l) ∼ 𝒩(𝐦𝐱t,l, 𝐐𝐱t,l),  q*(ωt,l(j)) ∼ PG(1, ct,l(j)),  q*(𝚺x) ∼ InvWishN(𝐏x, γx),

whose parameters 𝐦𝐱t,l := [m𝐱t,l(1), m𝐱t,l(2), …, m𝐱t,l(N)]⊤, 𝐐𝐱t,l, ct,l(j), 𝐏x, and γx can be updated given the parameter estimates 𝐃^ and 𝐳^:

𝐐𝐱t,l = (𝛀̃t,l + γx𝐏x⁻¹)⁻¹,
𝐦𝐱t,l = 𝐐𝐱t,l(𝐳^t,l − α𝐳^t−1,l − ½𝟏 − 𝛀̃t,l𝐃^⊤𝐬t + γx𝐏x⁻¹𝝁x),
𝐏x := 𝝍x + Σt,l=1T,L{𝐐𝐱t,l + 𝐦𝐱t,l𝐦𝐱t,l⊤ − 𝝁x𝐦𝐱t,l⊤ − 𝐦𝐱t,l𝝁x⊤ + 𝝁x𝝁x⊤},
ct,l(j) = √((𝐐𝐱t,l)j,j + (m𝐱t,l(j) + 𝐝^j⊤𝐬t)²),

and γx := ρx + TL, with 𝛀̃t,l ∈ ℝ^{N×N} denoting a diagonal matrix with entries (𝛀̃t,l)j,j := (1/(2ct,l(j))) tanh(ct,l(j)/2), and 𝟏 ∈ ℝ^N denoting the vector of all ones.
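Putting these update rules together, one coordinate-ascent sweep for a single (t, l) can be sketched as follows (an illustrative NumPy fragment; the variable names mirror the symbols above and the toy values in the usage are placeholders):

```python
import numpy as np

def cavi_step(z_t, z_tm1, s_t, D_hat, mu_x, P_x, gamma_x, c, alpha=0.95):
    """One coordinate-ascent update of (Q, m, c) for a single (t, l)."""
    N = len(mu_x)
    # E[omega] under q(omega) = PG(1, c) gives the diagonal matrix Omega-tilde
    Omega = np.diag(np.tanh(c / 2.0) / (2.0 * c))
    P_inv = np.linalg.inv(P_x)
    Q = np.linalg.inv(Omega + gamma_x * P_inv)                # covariance of q(x_{t,l})
    m = Q @ (z_t - alpha * z_tm1 - 0.5 * np.ones(N)
             - Omega @ (D_hat.T @ s_t) + gamma_x * P_inv @ mu_x)  # mean of q(x_{t,l})
    c_new = np.sqrt(np.diag(Q) + (m + D_hat.T @ s_t) ** 2)    # updated PG tilting parameter
    return Q, m, c_new

# toy usage with placeholder values (N = 3 neurons, M = 2 stimulus dimensions)
Q, m, c_new = cavi_step(z_t=np.zeros(3), z_tm1=np.zeros(3), s_t=np.zeros(2),
                        D_hat=np.zeros((2, 3)), mu_x=-4.0 * np.ones(3),
                        P_x=np.eye(3), gamma_x=10.0, c=np.ones(3))
```

In the full procedure, these per-(t, l) quantities are accumulated into 𝐏x before updating q*(𝚺x).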

Low-complexity parameter updates

Note that even though 𝐳 is composed of the latent processes 𝐳t,l, we do not use VI for its inference, and instead consider it as an unknown parameter. This choice is due to the temporal dependencies arising from the underlying state-space model in Equation 4, which hinders a proper assignment of variational densities under the mean field assumption. We thus seek to estimate both 𝐳 and 𝐃 using the updated variational density q*(𝐱,𝝎,𝚺x).

First, note that the log-likelihood in Equation 5 is decoupled in l, which admits independent updates to {𝐳t,l}t=1T, for l=1,,L. As such, given an estimate 𝐃^, we propose to estimate {𝐳t,l}t=1T as:

{𝐳^t,l}t=1T = arg max_{{𝐳t,l}t=1T} 𝔼q*(𝐱,𝝎,𝚺x)[log p(𝐲,𝐳,𝐱,𝝎,𝚺x|𝐃^)] = arg min_{{𝐳t,l}t=1T} Σt=1T { ½(𝐲t,l−𝐀𝐳t,l)⊤𝚺w⁻¹(𝐲t,l−𝐀𝐳t,l) − Σj=1N (m𝐱t,l(j)+𝐝^j⊤𝐬t)(zt,l(j)−αzt−1,l(j)) },

under the constraints 0 ≤ zt,l(j) − αzt−1,l(j) ≤ 1, for t = 1, …, T and j = 1, …, N. These constraints are a direct consequence of nt,l(j) = zt,l(j) − αzt−1,l(j) being a Bernoulli random variable with 𝔼[nt,l(j)] ∈ [0,1]. While this problem is a quadratic program and can be solved using standard techniques, it is not readily decoupled in t, and thus standard solvers would not scale favorably in T.

Instead, we consider an alternative solution that admits a low-complexity recursive implementation by relaxing the constraints. To this end, we drop the constraint 𝐳t,l − α𝐳t−1,l ≤ 𝟏 and replace the constraint 𝐳t,l − α𝐳t−1,l ≥ 𝟎 by penalty terms proportional to |zt,l(j) − αzt−1,l(j)|. The resulting relaxed problem is thus given by:

(7) min{𝐳t,l}t=1Tt=1T{12(𝐲t,l-𝐀𝐳t,l)𝚺w-1(𝐲t,l-𝐀𝐳t,l)+j=1Nνt,l(j)|zt,l(j)-αzt-1,l(j)|},

where νt,l(j) := β|m𝐱t,l(j) + 𝐝^j⊤𝐬t|, with β ≥ 1 being a hyper-parameter. Given that typical spiking rates are quite low in practice, m𝐱t,l(j) + 𝐝^j⊤𝐬t is expected to be negative. Thus, we have assumed that −m𝐱t,l(j) − 𝐝^j⊤𝐬t = |m𝐱t,l(j) + 𝐝^j⊤𝐬t|.

The problem of Equation 7 pertains to compressible state-space estimation, for which fast recursive solvers are available (Kazemipour et al., 2018). The solver utilizes the Iteratively Re-weighted Least Squares (IRLS) (Ba et al., 2014) framework to transform the absolute value in the second term of the cost function into a quadratic form in 𝐳t,l, followed by Fixed Interval Smoothing (FIS) (Rauch et al., 1965) to find the minimizer. At iteration k, given a current estimate 𝐳[k-1], the problem reduces to a Gaussian state-space estimation of the form:

(8) 𝐲t,l=𝐀𝐳t,l+𝐰t,l,𝐳t,l=α𝐳t-1,l+𝐯t,l,

with 𝐰t,l ∼ 𝒩(0, 𝚺w) and 𝐯t,l ∼ 𝒩(0, 𝚺𝐯t,l[k]), where 𝚺𝐯t,l[k] ∈ ℝ^{N×N} is a diagonal matrix with (𝚺𝐯t,l[k])j,j := √((z^t,l(j)[k−1] − αz^t−1,l(j)[k−1])² + ε²)/νt,l(j), for some small constant ε > 0. This problem can be efficiently solved using FIS, and the iterations proceed for a total of K times or until a standard convergence criterion is met (Kazemipour et al., 2018). It is noteworthy that our proposed estimator of the calcium concentration 𝐳t,l can be thought of as soft spike deconvolution, which naturally arises from our variational framework, as opposed to the hard spike deconvolution step used in two-stage estimators.
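The forward-filter/backward-smoother pair used to solve Equation 8 is a standard Kalman fixed-interval smoother; a scalar sketch (with simplified scalar dynamics and without the outer IRLS re-weighting loop) is:

```python
import numpy as np

def fixed_interval_smoother(y, alpha, a, sig_w, sig_v):
    """Scalar Kalman filter + fixed-interval smoother for
    y_t = a*z_t + w_t,  z_t = alpha*z_{t-1} + v_t."""
    T = len(y)
    z_f, P_f = np.zeros(T), np.zeros(T)   # filtered means / variances
    z_p, P_p = np.zeros(T), np.zeros(T)   # one-step predictions
    z_prev, P_prev = 0.0, 1.0
    for t in range(T):
        z_p[t] = alpha * z_prev                        # predict
        P_p[t] = alpha ** 2 * P_prev + sig_v
        B = P_p[t] * a / (a * P_p[t] * a + sig_w)      # Kalman gain
        z_f[t] = z_p[t] + B * (y[t] - a * z_p[t])      # correct
        P_f[t] = (1.0 - B * a) * P_p[t]
        z_prev, P_prev = z_f[t], P_f[t]
    z_s = z_f.copy()
    for t in range(T - 2, -1, -1):                     # backward smoothing pass
        z_s[t] = z_f[t] + alpha * P_f[t] / P_p[t + 1] * (z_s[t + 1] - z_p[t + 1])
    return z_s

# with near-noiseless observations, a constant trace is recovered almost exactly
z_smooth = fixed_interval_smoother(np.ones(50), alpha=0.95, a=1.0, sig_w=1e-4, sig_v=1.0)
```

In the vector case of Algorithm 1, the same recursions run with matrices 𝐀, 𝚺w, and the iteration-dependent 𝚺𝐯t,l[k].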

Finally, given q*(𝐱,𝝎,𝚺x) and the updated 𝐳^, the estimate of 𝐝j for j=1,2,,N can be updated in closed-form by maximizing the expected complete log-likelihood 𝔼q*(𝐱,𝝎,𝚺x)[logp(𝐲,𝐳^,𝐱,𝝎,𝚺x|𝐃)]:

𝐝^j = (Σt,l=1T,L (𝛀̃t,l)j,j 𝐬t𝐬t⊤)⁻¹ (Σt,l=1T,L { (z^t,l(j) − αz^t−1,l(j) − ½)𝐬t − (𝛀̃t,l)j,j m𝐱t,l(j)𝐬t }).
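This update is a weighted least-squares solve; stacking the (t, l) indices into a single axis, it might be implemented as follows (illustrative sketch with hypothetical toy data):

```python
import numpy as np

def update_d(Omega_jj, s, dz, m_j):
    """Closed-form update of the stimulus kernel d_j.

    Omega_jj: (TL,) PG expectations; s: (TL, M) stimuli;
    dz: (TL,) putative spikes z - alpha*z_prev; m_j: (TL,) variational means."""
    G = (s * Omega_jj[:, None]).T @ s          # sum of Omega_jj * s_t s_t^T
    b = s.T @ (dz - 0.5 - Omega_jj * m_j)      # right-hand side
    return np.linalg.solve(G, b)

# toy data constructed so that the update recovers a known kernel
rng = np.random.default_rng(1)
s = rng.standard_normal((200, 2))
Omega_jj = 0.25 * np.ones(200)
m_j = rng.standard_normal(200)
d_true = np.array([0.3, -0.2])
dz = 0.5 + Omega_jj * m_j + Omega_jj * (s @ d_true)
d_hat = update_d(Omega_jj, s, dz, m_j)
```

By construction of the toy data, the right-hand side equals G 𝐝true, so the solve returns the generating kernel.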

The VI procedure iterates between updating the variational densities and parameters until convergence, upon which we estimate the noise and signal covariances as:

𝚺^x := mode{q*(𝚺x)} = 𝐏x/(γx + N + 1),  𝚺^s := 𝐃^⊤𝔼[𝐬t𝐬t⊤]𝐃^.

The overall combined iterative procedure is outlined in Algorithm 1. Furthermore, a MATLAB implementation of this algorithm is publicly available in Rupasinghe, 2020. It is worth noting that a special case of our proposed variational inference procedure can be used to estimate signal and noise correlations from electrophysiology recordings. Given that spiking activity, that is {𝐧t,l}t,l=1T,L, is directly observed in this case, the solution to the optimization problem in Equation (7) is no longer required. Thus, the parameters 𝚺x and 𝐃 can be estimated using a simplified variational procedure, which is outlined in Algorithm 2 in Appendix 3.

Guidelines for model parameter settings

There are several key model parameters that need to be set by the user prior to the application of our proposed method. Here, we provide our rationale and criteria for choosing these parameters, which could also serve as guidelines in facilitating the applicability and adoption of our method by future users. We will also provide the specific choices of these parameters used in our simulation studies and real data analyses.

Number of neurons selected for the analysis (N)

While our proposed method scales up well with the population size due to the low-complexity update rules involved, including neurons with negligible spiking activity in the analysis would only increase the complexity and potentially contaminate the correlation estimates. Thus, we performed an initial pre-processing step to extract the N neurons that exhibited at least one spiking event in at least half of the trials considered.

Stimulus integration window length (R)

The number of lags R considered in stimulus integration is a key parameter that can be set through data-driven approaches or using prior domain knowledge. Examples of common data-driven criteria include cross-validation, Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC), which balance the estimation accuracy and model complexity (Arlot and Celisse, 2010; Ding et al., 2018).

To quantify the effect of R on model complexity, we first describe the stimulus encoding model in our framework. Suppose that the onset of the p𝗍𝗁 tone in the stimulus set (p = 1, …, P, where P is the number of distinct tones) is given by a binary sequence ft(p) ∈ {0,1}. The choice of R implies that the response at time t post-stimulus depends only on the R most recent time lags. As such, the effective stimulus at time t corresponding to tone p is given by 𝐬t(p) := [ft(p), ft−1(p), …, ft−R+1(p)]⊤ ∈ ℝ^R. By including all P tones, the overall effective stimulus at the t𝗍𝗁 time frame is given by 𝐬t := [𝐬t(1)⊤, …, 𝐬t(P)⊤]⊤ ∈ ℝ^{RP}. The stimulus modulation vector 𝐝j is thus RP-dimensional. As a result, the number of parameters to be estimated (M = RP) increases linearly with R. Using additional domain knowledge, we chose R to be large enough to capture the stimulus effects, and at the same time small enough to control the complexity of the algorithm.
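To make the construction of the effective stimulus concrete, the lagged vectors 𝐬t can be assembled as follows (a hypothetical snippet in which `onsets` is a T×P binary matrix of tone onsets):

```python
import numpy as np

def effective_stimulus(onsets, R):
    """Build s_t = [f_t, f_{t-1}, ..., f_{t-R+1}] for each of P tones.

    onsets: (T, P) binary tone-onset indicators; returns a (T, R*P) matrix."""
    T, P = onsets.shape
    s = np.zeros((T, R * P))
    for p in range(P):
        for r in range(R):
            s[r:, p * R + r] = onsets[:T - r, p]   # lag-r copy of tone p's onsets
    return s

# a single tone onset at frame 2 reappears at lags 0, 1, 2 in subsequent frames
onsets = np.zeros((6, 1))
onsets[2, 0] = 1
s_eff = effective_stimulus(onsets, R=3)
```

Row t of the returned matrix is the effective stimulus 𝐬t⊤ entering the point process model.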

As an example, given that the typical tone response duration of mouse primary auditory neurons is <1 s (Linden et al., 2003; DeWeese et al., 2003; Petrus et al., 2014), with a sampling frequency of fs = 30 Hz, a choice of R ≤ 30 would suffice to capture the stimulus effects. By further examining the effect of varying R on the proposed correlation estimates in Figure 4—figure supplement 1, we chose R=25 for our real data analyses.

Observation noise covariance (𝚺w) and scaling matrix (𝐀)

We assumed that the observation noise covariance 𝚺w is diagonal, and estimated the diagonal elements using the background fluorescence in the absence of spiking events, for each neuron. We set 𝐀 = a𝐈, where 𝐈 ∈ ℝ^{N×N} is the identity matrix, and estimated a by considering the average increase in fluorescence after the occurrence of isolated spiking events. Specifically, we derived the average fluorescence activity of multiple trials triggered on the fluorescence rise onset, and set a as the increment in the magnitude of this average fluorescence immediately following the rise onset.

State transition parameter (α)

We chose α in the range [0.95, 0.98], which matches the slow dynamics of the calcium indicator in our data. We tested the robustness of our estimates under different choices of α in this range through the method outlined in Hyper-parameter tuning, and accordingly chose the optimal value of α.

Mean of the latent trial-dependent process (𝝁x)

We estimated 𝝁x as a constant that is proportional to the average firing rate. To this end, we parametrized each component of 𝝁x as μx(j) = −aμ + bμ (1/LT) Σt,l=1T,L yt,l(j), for j = 1, …, N. The constants aμ and bμ were chosen such that −10 ≤ μx(j) ≤ −2, which gives the range of baseline parameters compatible with observed firing rates in our experimental data.

Parameter choices for simulation study 1

In the first simulation study, we set α = 0.98, β = 8, 𝐀 = 0.1𝐈, 𝝁x = −4.5⋅𝟏 and 𝚺w = 2×10⁻⁴𝐈 (𝐈 ∈ ℝ^{8×8} is the identity matrix and 𝟏 ∈ ℝ^8 the vector of all ones), so that the SNR of the simulated data was in the same range as that of experimentally-recorded data. We used a 6𝗍𝗁 order autoregressive process with a mean of −1 as the stimulus (st), and considered R=2 (M=2) lags of the stimulus (i.e. 𝐬t = [st, st−1]⊤) in both the generative model and the inference procedure. The components of the linear and quadratic stimulus modulation vectors, that is 𝐝j, 𝐝~j,1 and 𝐝~j,2, were chosen uniformly at random in the range [−0.5, 0.5]. The variance of st was set in each case such that the average power of the overall signal component (𝐝j⊤𝐬t for the linear model, and 𝐝j⊤𝐬t + (𝐝~j,1⊤𝐬t)² + (𝐝~j,2⊤𝐬t)² for the non-linear model) was comparable to the average power of the noise component (xt,l(j)).

Algorithm 1 Estimation of Σx and D through the proposed iterative procedure
Inputs: Ensemble of fluorescence measurements {𝐲t,l}t,l=1T,L, constants α,𝐀,𝚺w and 𝝁x, hyper-parameters 𝝍x, ρx, β and ϵ, tolerance at convergence δ and the external stimulus 𝐬t
Outputs: 𝚺^x and 𝐃^
Initialization: Initial choice of 𝚺𝐯t,l, 𝛀̃t,l, 𝚺^x and 𝐃^; residual = 10δ, γx = ρx + LT
1: while residual ≥ δ do
Estimate calcium concentrations using Fixed Interval Smoothing
2:    for l=1,,L do
Forward filter:
3:      for t=1,…,T do
4:              𝐳(t|t−1),l = α𝐳(t−1|t−1),l
5:              𝐏(t|t−1),l = α²𝐏(t−1|t−1),l + 𝚺𝐯t,l
6:              𝐁t,l = 𝐏(t|t−1),l𝐀⊤(𝐀𝐏(t|t−1),l𝐀⊤ + 𝚺w)⁻¹
7:              𝐳(t|t),l = 𝐳(t|t−1),l + 𝐁t,l(𝐲t,l − 𝐀𝐳(t|t−1),l)
8:              𝐏(t|t),l = (𝐈 − 𝐁t,l𝐀)𝐏(t|t−1),l
9:      end for
Backward smoother:
10:      for t=T−1,…,1 do
11:              𝐳^t,l = 𝐳(t|t),l + α𝐏(t|t),l𝐏(t+1|t),l⁻¹(𝐳^t+1,l − 𝐳(t+1|t),l)
12:      end for
13:   end for
Update variational parameters
14:      for t=1,…,T and l=1,…,L do
15:              𝐐𝐱t,l = (𝛀̃t,l + γx𝐏x⁻¹)⁻¹
16:              𝐦𝐱t,l = 𝐐𝐱t,l(𝐳^t,l − α𝐳^t−1,l − ½𝟏 − 𝛀̃t,l𝐃^⊤𝐬t + γx𝐏x⁻¹𝝁x)
17:              νt,l(j) := β|m𝐱t,l(j) + 𝐝^j⊤𝐬t|
18:              for j=1,…,N do
19:                      ct,l(j) = √((𝐐𝐱t,l)j,j + (m𝐱t,l(j) + 𝐝^j⊤𝐬t)²)
20:                      (𝛀̃t,l)j,j := (1/(2ct,l(j))) tanh(ct,l(j)/2)
21:      end for
22:   end for
23:   𝐏x := 𝝍x + Σt,l=1T,L{𝐐𝐱t,l + 𝐦𝐱t,l𝐦𝐱t,l⊤ − 𝝁x𝐦𝐱t,l⊤ − 𝐦𝐱t,l𝝁x⊤ + 𝝁x𝝁x⊤}
Update IRLS covariance approximation
24:      for l=1,…,L, t=1,…,T and j=1,…,N do
25:              (𝚺𝐯t,l)j,j := √((z^t,l(j) − αz^t−1,l(j))² + ε²)/νt,l(j)
26:      end for
Update outputs and the convergence criterion
27:      for j=1,…,N do
28:              𝐝^j = (Σt,l=1T,L (𝛀̃t,l)j,j𝐬t𝐬t⊤)⁻¹(Σt,l=1T,L{(z^t,l(j) − αz^t−1,l(j) − ½)𝐬t − (𝛀̃t,l)j,j m𝐱t,l(j)𝐬t})
29:      end for
30:      (𝐃^)𝗉𝗋𝖾𝗏 = 𝐃^,  𝐃^ = [𝐝^1, 𝐝^2, …, 𝐝^N]
31:      (𝚺^x)𝗉𝗋𝖾𝗏 = 𝚺^x,  𝚺^x = 𝐏x/(γx + N + 1)
32:      residual = ‖(𝚺^x)𝗉𝗋𝖾𝗏 − 𝚺^x‖₂/‖(𝚺^x)𝗉𝗋𝖾𝗏‖₂ + ‖(𝐃^)𝗉𝗋𝖾𝗏 − 𝐃^‖₂/‖(𝐃^)𝗉𝗋𝖾𝗏‖₂
33:   end while
34:   Return 𝚺^x and 𝐃^

Parameter choices for simulation study 2

In the second simulation study, we set α = 0.98, 𝐀 = 0.1𝐈, 𝝁x = −4.5⋅𝟏 and 𝚺w = 10⁻⁴𝐈 (𝐈 ∈ ℝ^{30×30} is the identity matrix and 𝟏 ∈ ℝ^{30} the vector of all ones) when generating the fluorescence traces {𝐲t,l}t,l=1T,L, so that the SNR of the simulated data was in the same range as that of real calcium imaging observations. Furthermore, we simulated the spike trains based on a Poisson process (Smith and Brown, 2003) using the discrete time re-scaling procedure (Brown et al., 2002; Smith and Brown, 2003). Following the assumptions in Brown et al., 2002, we used an exponential link to simulate the observations:

nt,l(j) ∼ Poisson(λt,l(j)),  λt,l(j) = exp(xt,l(j)),

as opposed to the Bernoulli-logistic assumption in our recognition model. Then, we estimated the noise covariance 𝚺^x using Algorithm 1, with a slight modification. Since there are no external stimuli, we set 𝐬t = 𝟎 and 𝐃 = 𝟎. Accordingly, in Algorithm 1, we initialized 𝐃^ = 𝟎 and did not perform the update on 𝐃^ in the subsequent iterations.

Parameter choices for real data study 1

The dataset consisted of recordings from 371 excitatory neurons, from which we selected N=16 responsive neurons for the analysis. Each trial consisted of T=3600 time frames (the sampling frequency was 30 Hz, and each trial had a duration of 120 s), with the presentation of a random sequence of four tones (P=4). The spiking events were very sparse and infrequent, and hence this dataset fits our model assumption of at most one spiking event per time frame.

We considered R=25 (M=100) time lags in this analysis and further examined the effect of varying R in Figure 4—figure supplement 1. We set α = 0.95 and 𝐀 = 𝐈, with 𝐈 ∈ ℝ^{16×16} the identity matrix.

Parameter choices for real data study 2

Each trial consisted of T=765 frames (25.5 s) at a sampling frequency of 30 Hz. The A1 neurons studied here had low response rates (in both time and space), with only 10 neurons exhibiting spiking activity in at least half of the trials. Thus, we selected N=10 neurons and L=10 trials for the analysis, and chose R=25 lags of the stimulus (M=25) in the model for the stimulus-driven condition. We set α = 0.95 and 𝐀 = 0.75𝐈, with 𝐈 ∈ ℝ^{10×10} the identity matrix.

Parameter choices for real data study 3

Each experiment consisted of L=5 trials of P=9 different tone frequencies repeated at four different amplitude levels, resulting in each concatenated trial being 180 s long (see Bowen et al., 2020 for more details). We set the number of stimulus time lags to R=25 (M=225). For each layer, we analyzed fluorescence observations from six experiments. In each experiment, we selected the most responsive N ≈ 20 neurons for the subsequent analysis. We set α = 0.95 and 𝐀 = 𝐈.

Performance evaluation

Simulation studies

Since the ground truth is known in simulations, we directly compared the performance of each signal and noise correlation estimate with the ground truth signal and noise correlations, respectively. Suppose the ground truth correlations are given by the matrix 𝐗 and the estimated correlations are given by the matrix 𝐗^. To quantify the similarity between 𝐗 and 𝐗^, we defined the following two metrics:

Normalized Mean Squared Error (NMSE): The NMSE computes the mean squared error of 𝐗^ with respect to 𝐗 using the Frobenius Norm:

NMSE := ‖𝐗 − 𝐗^‖F² / ‖𝐗‖F².

Ratio between out-of-network power and in-network power (leakage): First, we identified the in-network and out-of-network components from the ground truth correlation matrix 𝐗, assuming that if the true correlation between the i𝗍𝗁 and j𝗍𝗁 neurons is non-zero, then |(𝐗)i,j| > δx for some δx > 0. Accordingly, we formed a matrix 𝐗𝗂𝗇 that masks the in-network components, by setting (𝐗𝗂𝗇)i,j = 1 if |(𝐗)i,j| > δx and (𝐗𝗂𝗇)i,j = 0 if |(𝐗)i,j| ≤ δx. Likewise, we formed a matrix 𝐗𝗈𝗎𝗍 that masks the out-of-network components, by setting (𝐗𝗈𝗎𝗍)i,j = 1 if |(𝐗)i,j| ≤ δx and (𝐗𝗈𝗎𝗍)i,j = 0 if |(𝐗)i,j| > δx. Then, using these two matrices, we quantified the leakage effect of 𝐗^ relative to 𝐗 by:

leakage := ‖𝐗^ ∘ 𝐗𝗈𝗎𝗍‖F² / ‖𝐗^ ∘ 𝐗𝗂𝗇‖F²,

where ∘ denotes element-wise multiplication.
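Both metrics are straightforward to compute from the definitions above; a minimal sketch:

```python
import numpy as np

def nmse(X, X_hat):
    """Normalized mean squared error in the Frobenius norm."""
    return np.linalg.norm(X - X_hat, 'fro') ** 2 / np.linalg.norm(X, 'fro') ** 2

def leakage(X, X_hat, delta=1e-6):
    """Out-of-network vs in-network power of X_hat, with networks defined by X."""
    in_mask = (np.abs(X) > delta).astype(float)    # X_in
    out_mask = 1.0 - in_mask                       # X_out
    return (np.linalg.norm(X_hat * out_mask, 'fro') ** 2
            / np.linalg.norm(X_hat * in_mask, 'fro') ** 2)
```

For instance, if the ground truth is diagonal and the estimate spills half-strength values onto the off-diagonal, the leakage captures exactly that off-diagonal power ratio.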

Real data studies

To quantify the similarity and dissimilarity between signal and noise correlation estimates, we used a statistic based on the Tanimoto similarity metric (Lipkus, 1999), denoted by Ts(𝐗,𝐘) for two matrices 𝐗 and 𝐘. For two vectors 𝐚 and 𝐛 with non-negative entries, the Tanimoto coefficient (Lipkus, 1999) is defined as:

T(𝐚,𝐛) := 𝐚⊤𝐛 / (𝐚⊤𝐚 + 𝐛⊤𝐛 − 𝐚⊤𝐛).

The Tanimoto similarity metric between two matrices can be defined in a similar manner, by vectorizing the matrices. Thus, we formulated a similarity metric between two correlation matrices 𝐗 and 𝐘 as follows. Let 𝐗+ := max{𝐗, 𝟎} and 𝐗− := max{−𝐗, 𝟎}, with the max{⋅,⋅} operator interpreted element-wise. Note that 𝐗 = 𝐗+ − 𝐗−, and 𝐗+, 𝐗− have non-negative entries. We then defined the similarity metric by combining those of the positive and negative parts as follows:

Ts(𝐗,𝐘):=εT(𝐗+,𝐘+)+(1-ε)T(𝐗-,𝐘-)

where ε ∈ [0,1] denotes the percentage of positive entries in 𝐗 and 𝐘. As a measure of dissimilarity, we used Td(𝐗,𝐘) := 1 − Ts(𝐗,𝐘). The values of Td(𝐒^,𝐍^) in Table 1, and of Ts(𝐍^𝗌𝗉𝗈𝗇,𝐍^𝗌𝗍𝗂𝗆) and Td(𝐒^𝗌𝗍𝗂𝗆,𝐍^𝗌𝗍𝗂𝗆) reported in Table 2, were obtained based on the foregoing definitions.
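The similarity statistic can be sketched as follows (an illustrative helper; the pooled fraction of positive entries used for ε is our reading of the definition above):

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient for vectors with non-negative entries."""
    ab = a @ b
    return ab / (a @ a + b @ b - ab)   # assumes a, b not both all-zero

def tanimoto_sim(X, Y):
    """Similarity between two correlation matrices via positive/negative parts."""
    Xp, Xn = np.maximum(X, 0).ravel(), np.maximum(-X, 0).ravel()
    Yp, Yn = np.maximum(Y, 0).ravel(), np.maximum(-Y, 0).ravel()
    # pooled fraction of positive entries in X and Y, weighting the two parts
    eps = (np.sum(X > 0) + np.sum(Y > 0)) / (X.size + Y.size)
    return eps * tanimoto(Xp, Yp) + (1 - eps) * tanimoto(Xn, Yn)
```

By construction, a matrix compared against itself yields a similarity of 1, and the dissimilarity Td is simply one minus this value.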

To further assess the statistical significance of these results, we performed the following randomized tests. To test the significance of Ts(𝐍^𝗌𝗉𝗈𝗇,𝐍^𝗌𝗍𝗂𝗆), for each comparison and each algorithm, we fixed the first matrix (i.e. 𝐍^𝗌𝗉𝗈𝗇) and randomly shuffled the entries of the second one (i.e. 𝐍^𝗌𝗍𝗂𝗆) while respecting symmetry. We repeated this procedure for 10,000 trials, to derive the null distributions that represent the probabilities of chance occurrence of similarities between two random groups of neurons.

To test the significance of Td(𝐒^,𝐍^) and Td(𝐒^𝗌𝗍𝗂𝗆,𝐍^𝗌𝗍𝗂𝗆), for each comparison and each algorithm, we again fixed the first matrix (i.e. signal correlations). Then, we formed the elements of the second matrix (akin to noise correlations) as follows. For each element of the second matrix, we assigned either the same element as in the signal correlations (in order to model the leakage effect) or random noise (with the same variance as the elements of the noise correlation matrix), with equal probability. As before, we repeated this procedure for 10,000 trials, to derive the null distributions that represent the probabilities of chance occurrence of dissimilarities between two matrices that have some leakage between them.

Hyper-parameter tuning

Request a detailed protocol

The hyper-parameters that directly affect the proposed estimation are the inverse Wishart prior hyper-parameters: ψx and ρx. Given that ρx appears in the form of γx:=TL+ρx, we will consider ψx and γx as the main hyper-parameters for simplicity. Here, we propose a criterion for choosing these two hyper-parameters in a data-driven fashion, which will then be used to construct the estimates of the noise covariance matrix 𝚺^x and weight matrix 𝐃^. Due to the hierarchy of hidden layers in our model, an empirical Bayes approach for hyper-parameter selection using a likelihood-based performance metric is not straightforward. Hence, we propose an alternative empirical method for hyper-parameter selection as follows.

For a given choice of 𝝍x and γx, we estimate 𝚺^x and 𝐃^ following the proposed method. Then, based on the generative model in Proposed forward model, and using the estimated values of 𝚺^x and 𝐃^, we sample an ensemble of simulated fluorescence traces 𝐲^={𝐲^t(l)}t,l=1T,L, and compute the metric d(𝝍x,γx):

d(ψx,γx):=Dfrob(cov(y^,y^),cov(y,y)),

where cov(·,·) denotes the empirical covariance and $D_{\mathsf{frob}}(\mathbf{X},\mathbf{Y}) := \|\mathbf{X}-\mathbf{Y}\|_F^2$. Note that $D_{\mathsf{frob}}(\mathbf{X},\mathbf{Y})$ is strictly convex in 𝐗. Thus, minimizing $D_{\mathsf{frob}}(\mathbf{X},\mathbf{Y})$ over 𝐗 for a given 𝐘 has a unique solution. Accordingly, we observe that d(𝝍x, γx) is minimized when cov(𝐲^, 𝐲^) is nearest to cov(𝐲, 𝐲). Therefore, the corresponding estimates 𝚺^x and 𝐃^ that generated 𝐲^ best match the second-order statistics of 𝐲 generated by the true parameters 𝚺x and 𝐃.
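The covariance-matching metric itself is straightforward to compute; a sketch is below, where the simulated and observed traces are assumed to be stacked into `(time points, neurons)` arrays (names ours):

```python
import numpy as np

def frob_dist(X, Y):
    """D_frob(X, Y) = squared Frobenius norm of X - Y."""
    return float(np.sum((X - Y) ** 2))

def d_metric(y_sim, y_obs):
    """d(psi_x, gamma_x): Frobenius distance between the empirical covariances
    of simulated and observed fluorescence. Inputs: (T*L, N) arrays with time
    points stacked across trials."""
    return frob_dist(np.cov(y_sim, rowvar=False), np.cov(y_obs, rowvar=False))
```

The metric is zero exactly when the two empirical covariances coincide, consistent with the strict convexity noted above.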

The typically low spiking rate of sensory neurons observed in practice may render the estimation problem ill-posed. It is thus important to have an accurate choice of the scale matrix 𝝍x in the prior distribution. However, an exhaustive search for optimal tuning of 𝝍x is not computationally feasible, given that it has N(N+1)/2 free variables. Thus, the main challenge here is finding the optimal choice of the scale matrix 𝝍x,𝗈𝗉𝗍.

To address this challenge, we propose the following method. First, we fix 𝝍x,𝗂𝗇𝗂𝗍=τ𝐈, where τ is a scalar and 𝐈N×N is the identity matrix. Next, given 𝝍x,𝗂𝗇𝗂𝗍 we find the optimal choice of γx as:

γx,𝗂𝗇𝗂𝗍=argminγx𝒮γd(𝝍x,𝗂𝗇𝗂𝗍,γx),

where 𝒮γ is a finite set of candidate solutions for γx > N−1. Let 𝚺^x,𝗂𝗇𝗂𝗍 denote the noise covariance estimate corresponding to the hyper-parameters (𝝍x,𝗂𝗇𝗂𝗍, γx,𝗂𝗇𝗂𝗍). We will next use 𝚺^x,𝗂𝗇𝗂𝗍 to find a suitable choice of 𝝍x. To this end, we first fix γx,𝗈𝗉𝗍 := TL + ρ̃x, for some N−1 < ρ̃x ≪ TL. Note that by choosing ρ̃x to be much smaller than TL, the final estimates become less sensitive to the choice of γx. Then, we construct a candidate set 𝒮ψ for 𝝍x,𝗈𝗉𝗍 by scaling 𝚺^x,𝗂𝗇𝗂𝗍 with a finite set of scalars η ∈ ℝ₊, i.e. 𝒮ψ := {η𝚺^x,𝗂𝗇𝗂𝗍 : η ∈ ℝ₊}. To select 𝝍x,𝗈𝗉𝗍, we match it with the choice of γx,𝗈𝗉𝗍 by solving:

𝝍x,𝗈𝗉𝗍=argmin𝝍x𝒮ψd(𝝍x,γx,𝗈𝗉𝗍).

Finally, we use these hyper-parameters (𝝍x,𝗈𝗉𝗍,γx,𝗈𝗉𝗍) to obtain the estimators 𝚺^x and 𝐃^ as the output of the algorithm.
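The two-stage search can be sketched as follows. Here `d_metric_fn(psi, gamma)` is a hypothetical stand-in for the full pipeline (run the estimator with prior hyper-parameters, simulate traces, and return the covariance-matching metric d together with the resulting noise covariance estimate); it is not a function from the paper's released code:

```python
import numpy as np

def select_hyperparams(N, d_metric_fn, gamma_grid, eta_grid, gamma_opt, tau=1.0):
    """Two-stage hyper-parameter search sketch.
    d_metric_fn(psi, gamma) -> (d_value, Sigma_hat) is assumed by this sketch."""
    # Stage 1: fix psi = tau * I and pick the best gamma from a finite grid.
    psi_init = tau * np.eye(N)
    stage1 = [d_metric_fn(psi_init, g) for g in gamma_grid]
    best = int(np.argmin([d for d, _ in stage1]))
    sigma_init = stage1[best][1]
    # Stage 2: fix gamma_opt and pick psi among scalings eta * sigma_init.
    stage2 = [d_metric_fn(eta * sigma_init, gamma_opt)[0] for eta in eta_grid]
    eta_best = eta_grid[int(np.argmin(stage2))]
    return eta_best * sigma_init, gamma_opt
```

This mirrors the text: an exhaustive search over the N(N+1)/2 free entries of 𝝍x is replaced by a one-dimensional search over scalings of a data-driven initial estimate.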

Experimental procedures

All procedures were approved by the University of Maryland Institutional Animal Care and Use Committee. Imaging experiments were performed on P60 (for real data study 1) and P83 (for real data study 2) female F1 offspring of the CBA/CaJ strain (The Jackson Laboratory; stock #000654) crossed with transgenic C57BL/6J-Tg(Thy1-GCaMP6s)GP4.3Dkim/J mice (The Jackson Laboratory; stock #024275) (CBA×Thy1), and F1 (CBA×C57) mice. The third real data study was performed on data from P66-P93 and P166-P178 mice (see Bowen et al., 2020 for more details). We used the F1 generation of the crossed mice because they have good hearing into adulthood (Frisina et al., 2011).

We performed cranial window implantation and two-photon imaging as previously described in Francis et al., 2018; Liu et al., 2019; Bowen et al., 2019. Briefly, we implanted a cranial window of 3 mm in diameter over the left auditory cortex. We used a scanning microscope (Bergamo II series, B248, Thorlabs) coupled to an Insight X3 laser (Spectra-Physics) (study 1) or a pulsed femtosecond Ti:Sapphire two-photon laser with dispersion compensation (Vision S, Coherent) (studies 2 and 3) to image GCaMP6s fluorescence from individual neurons in awake head-fixed mice, at excitation wavelengths of λ = 920 nm and λ = 940 nm, respectively. The microscope was controlled by ThorImageLS software. The size of the field of view was 370×370 μm. Imaging frames of 512×512 pixels (pixel size 0.72 μm) were acquired at 30 Hz by bidirectional scanning of an 8 kHz resonant scanner. The imaging depth was around 200 μm below the pia.

Data pre-processing

Request a detailed protocol

A circular ROI was manually drawn over each cell body to extract raw fluorescence traces from individual cells. Neuropil contamination subtraction and baseline correction were performed on the raw fluorescence traces of each cell (Francis et al., 2018; Liu et al., 2019; Bowen et al., 2020) according to (F𝖼𝖾𝗅𝗅 − αn F𝗇𝖾𝗎𝗋𝗈𝗉𝗂𝗅 − baseline)/baseline, where αn was set to 0.7 in real data study 1 (Francis et al., 2018), 0.8 in real data study 2 (Liu et al., 2019) and 0.9 in real data study 3 (Bowen et al., 2020). The two-photon observations {𝐲t,l}t,l=1T,L used in our analyses are the output of this pre-processing step.
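A minimal sketch of this correction step is shown below. The cited studies describe their own baseline estimation procedures; taking the baseline as a low percentile of the neuropil-corrected trace is an assumption of this sketch only:

```python
import numpy as np

def corrected_trace(f_cell, f_neuropil, alpha_n=0.7):
    """Neuropil-subtracted, baseline-normalized fluorescence:
    (F_cell - alpha_n * F_neuropil - baseline) / baseline.
    The baseline here is assumed to be the 10th percentile of the
    neuropil-corrected trace (an illustrative choice)."""
    f = f_cell - alpha_n * f_neuropil
    baseline = np.percentile(f, 10)
    return (f - baseline) / baseline
```

For a constant trace the output is identically zero, since the trace equals its own baseline.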

Stimuli for real data study 1

Request a detailed protocol

During imaging experiments, we presented four tones (4, 8, 16, and 32 kHz) at 70 dB SPL. The tones were 2 s in duration with an inter-trial silence of 4 s. For the sequence of tones, we first generated a randomized sequence that consisted of five repeats for each tone (20 tones in total) and then the same sequence was repeated for 10 trials.

Stimuli for real data study 2

Request a detailed protocol

During imaging experiments, we presented a 75 dB SPL 100 ms broadband noise (4–48 kHz) as the auditory stimulus. Each trial was 5.1 s long (1 s pre-stimulus silence + 0.1 s stimulus + 3 s post-stimulus silence), and the inter-trial duration was 3 s. Spontaneous neuronal activity was collected during randomly interleaved no-stimulus trials of the same duration (5.1 s), which were completely silent throughout.

Then, we extracted 50 such trials of each type and, by concatenating groups of five 5.1 s trials, formed L = 10 trials of 25.5 s duration (T = 765 frames) each for the subsequent analysis. This final step was performed to increase the effective trial duration.
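This concatenation step amounts to a simple reshape when the trials are stored as an array; a sketch (function name ours):

```python
import numpy as np

def concatenate_trials(trials, group_size=5):
    """Form longer effective trials by concatenating consecutive short trials
    along the time axis. trials: (n_trials, T_short, N) array; returns
    (n_trials // group_size, group_size * T_short, N)."""
    n, t, N = trials.shape
    usable = (n // group_size) * group_size
    return trials[:usable].reshape(n // group_size, group_size * t, N)
```

With 50 trials of 153 frames and `group_size=5`, this yields 10 trials of 765 frames, matching the numbers above (at 30 Hz, 5.1 s corresponds to 153 frames).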

Stimuli for real data study 3

Request a detailed protocol

During imaging experiments, sounds were played at four sound levels (20, 40, 60, and 80 dB SPL). Auditory stimuli consisted of sinusoidal amplitude-modulated (SAM) tones (20 Hz modulation, cosine phase), ranging from 3 to 48 kHz. The frequency resolution was two tones/octave (0.5 octave spacing) and each of these tonal stimuli was 1 s long, repeated five times with a 4−6 s inter-stimulus interval (see Bowen et al., 2020 for details).

Appendix 1

Relationship to existing definitions of Signal and Noise correlations

Recall that the conventional definitions of signal and noise covariance of spiking activity between the i𝗍𝗁 and j𝗍𝗁 neuron are (Lyamzin et al., 2015):

$$(\Sigma_s^{\mathsf{con}})_{i,j} = \operatorname{cov}\Big(\frac{1}{L}\sum_{l} n_{t,l}^{(i)},\ \frac{1}{L}\sum_{l} n_{t,l}^{(j)}\Big), \qquad (\Sigma_x^{\mathsf{con}})_{i,j} = \frac{1}{L}\sum_{l}\operatorname{cov}\Big(n_{t,l}^{(i)} - \frac{1}{L}\sum_{l'} n_{t,l'}^{(i)},\ n_{t,l}^{(j)} - \frac{1}{L}\sum_{l'} n_{t,l'}^{(j)}\Big),$$

where $\operatorname{cov}(u_t,v_t) := \frac{1}{T}\sum_{t=1}^{T}\big(u_t - \frac{1}{T}\sum_{t'=1}^{T} u_{t'}\big)\big(v_t - \frac{1}{T}\sum_{t'=1}^{T} v_{t'}\big)$ is the empirical covariance. The correlations are then derived by the standard normalization:

$$(9)\quad (\mathbf{S}^{\mathsf{con}})_{i,j} := \frac{(\Sigma_s^{\mathsf{con}})_{i,j}}{\sqrt{(\Sigma_s^{\mathsf{con}})_{i,i}\,(\Sigma_s^{\mathsf{con}})_{j,j}}}, \qquad (\mathbf{N}^{\mathsf{con}})_{i,j} := \frac{(\Sigma_x^{\mathsf{con}})_{i,j}}{\sqrt{(\Sigma_x^{\mathsf{con}})_{i,i}\,(\Sigma_x^{\mathsf{con}})_{j,j}}}, \qquad i,j = 1,2,\ldots,N.$$
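The conventional estimates above translate directly into a few lines of NumPy; a sketch (names ours), where the unbiased normalization used by `np.cov` differs from the 1/T convention above only by a scale factor that cancels in the final correlations:

```python
import numpy as np

def conventional_correlations(n):
    """Conventional signal and noise correlations from binned spiking activity.
    n: (L, T, N) array (trials x time x neurons)."""
    L = n.shape[0]
    trial_mean = n.mean(axis=0)                       # PSTH, shape (T, N)
    # Signal covariance: covariance over time of the trial-averaged response.
    sig_cov = np.cov(trial_mean, rowvar=False)
    # Noise covariance: trial-averaged covariance of the residuals over time.
    resid = n - trial_mean
    noise_cov = np.mean([np.cov(resid[l], rowvar=False) for l in range(L)], axis=0)
    def normalize(C):
        d = np.sqrt(np.diag(C))
        return C / np.outer(d, d)
    return normalize(sig_cov), normalize(noise_cov)
```

Both outputs are symmetric with unit diagonal, as correlation matrices must be.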

Suppose that the spiking events follow the forward model: 

$$n_{t,l}^{(j)} \sim \mathrm{Bernoulli}\big(\lambda_{t,l}^{(j)}\big), \qquad \lambda_{t,l}^{(j)} = \phi\big(x_{t,l}^{(j)},\ \mathbf{d}_j^\top \mathbf{s}_t\big),$$

where ϕ : ℝ² → [0,1] is a differentiable non-linear mapping. We assume 𝐱t,l and 𝐬t to be independent. Without loss of generality, let 𝔼[𝐬t] = 𝟎 and 𝔼[𝐱t,l] = 𝝁x. Further, we use the notation $X_t \asymp Y_t$ to denote almost sure equivalence, that is, $X_t \xrightarrow{\text{a.s.}} Z$ and $Y_t \xrightarrow{\text{a.s.}} Z$ for some random variable Z.

First, let us consider (𝐒𝖼𝗈𝗇)i,j. Noting that $\mathbb{E}[n_{t,l}^{(j)}] = \mathbb{E}[\lambda_{t,l}^{(j)}]$ and $\mathbb{E}[n_{t,l}^{(i)} n_{t,l}^{(j)}] = \mathbb{E}[\lambda_{t,l}^{(i)} \lambda_{t,l}^{(j)}]$, we conclude that as T → ∞:

$$(\Sigma_s^{\mathsf{con}})_{i,j} \asymp \operatorname{cov}\Big(\frac{1}{L}\sum_{l} \lambda_{t,l}^{(i)},\ \frac{1}{L}\sum_{l} \lambda_{t,l}^{(j)}\Big),$$

from the law of large numbers. Then, if we consider the Taylor series expansion of ϕ(xt, l(j),𝐝j𝐬t) around the mean (μx(j),0), we get:

$$(\Sigma_s^{\mathsf{con}})_{i,j} \asymp \operatorname{cov}\Bigg(\phi(\mu_x^{(i)},0) + \frac{1}{L}\sum_{l}\big(x_{t,l}^{(i)}-\mu_x^{(i)}\big)\frac{\partial\phi}{\partial x_{t,l}^{(i)}}\bigg|_{(\mu_x^{(i)},0)} + \big(\mathbf{d}_i^\top\mathbf{s}_t\big)\frac{\partial\phi}{\partial(\mathbf{d}_i^\top\mathbf{s}_t)}\bigg|_{(\mu_x^{(i)},0)} + \epsilon_{t,l}^{(i)},\ \ \phi(\mu_x^{(j)},0) + \frac{1}{L}\sum_{l}\big(x_{t,l}^{(j)}-\mu_x^{(j)}\big)\frac{\partial\phi}{\partial x_{t,l}^{(j)}}\bigg|_{(\mu_x^{(j)},0)} + \big(\mathbf{d}_j^\top\mathbf{s}_t\big)\frac{\partial\phi}{\partial(\mathbf{d}_j^\top\mathbf{s}_t)}\bigg|_{(\mu_x^{(j)},0)} + \epsilon_{t,l}^{(j)}\Bigg),$$

where ϵt,l(i) and ϵt,l(j) represent the higher order terms. Then, as L, we get:

$$(\Sigma_s^{\mathsf{con}})_{i,j} \asymp \operatorname{cov}\bigg(\big(\mathbf{d}_i^\top\mathbf{s}_t\big)\frac{\partial\phi}{\partial(\mathbf{d}_i^\top\mathbf{s}_t)}\bigg|_{(\mu_x^{(i)},0)},\ \big(\mathbf{d}_j^\top\mathbf{s}_t\big)\frac{\partial\phi}{\partial(\mathbf{d}_j^\top\mathbf{s}_t)}\bigg|_{(\mu_x^{(j)},0)}\bigg) + \epsilon_{t,l},$$

since $\lim_{L\to\infty} \frac{1}{L}\sum_{l=1}^{L} x_{t,l}^{(j)} = \mu_x^{(j)}$ by the law of large numbers. Thus, we see that:

$$(\Sigma_s^{\mathsf{con}})_{i,j} \asymp C_S\,\mathbf{d}_i^\top\operatorname{cov}(\mathbf{s}_t,\mathbf{s}_t)\,\mathbf{d}_j + \epsilon_{t,l} = C_S\,(\Sigma_s)_{i,j} + \epsilon_{t,l},$$

where CS is a constant and ϵt,l is typically small if the latent process 𝐱t,l and the stimulus 𝐬t are concentrated around their means. Then, the signal correlations are obtained by normalization of the signal covariance as in Equation 9, through which the scaling factor CS cancels and we get:

$$(\mathbf{S}^{\mathsf{con}})_{i,j} \asymp (\mathbf{S})_{i,j}.$$

Thus, as T, L → ∞, we see that 𝐒 is indeed the signal correlation matrix that the conventional definitions aim to approximate.

Next, let us consider (𝐍𝖼𝗈𝗇)i,j. Similar to the foregoing analysis of the signal covariance, as T → ∞ we get:

$$(\Sigma_x^{\mathsf{con}})_{i,j} \asymp \frac{1}{L}\sum_{l}\operatorname{cov}\Big(\lambda_{t,l}^{(i)} - \frac{1}{L}\sum_{l'}\lambda_{t,l'}^{(i)},\ \lambda_{t,l}^{(j)} - \frac{1}{L}\sum_{l'}\lambda_{t,l'}^{(j)}\Big).$$

Then, from a Taylor series expansion, we get:

$$(\Sigma_x^{\mathsf{con}})_{i,j} \asymp \frac{1}{L}\sum_{l}\operatorname{cov}\bigg(x_{t,l}^{(i)}\frac{\partial\phi}{\partial x_{t,l}^{(i)}}\bigg|_{(\mu_x^{(i)},0)} - \frac{1}{L}\sum_{l'}x_{t,l'}^{(i)}\frac{\partial\phi}{\partial x_{t,l'}^{(i)}}\bigg|_{(\mu_x^{(i)},0)} + \xi_{t,l}^{(i)},\ \ x_{t,l}^{(j)}\frac{\partial\phi}{\partial x_{t,l}^{(j)}}\bigg|_{(\mu_x^{(j)},0)} - \frac{1}{L}\sum_{l'}x_{t,l'}^{(j)}\frac{\partial\phi}{\partial x_{t,l'}^{(j)}}\bigg|_{(\mu_x^{(j)},0)} + \xi_{t,l}^{(j)}\bigg),$$

where ξt,l(i) and ξt,l(j) represent the higher order terms. Then, as L → ∞:

$$(\Sigma_x^{\mathsf{con}})_{i,j} \asymp \frac{1}{L}\sum_{l}\operatorname{cov}\bigg(\big(x_{t,l}^{(i)}-\mu_x^{(i)}\big)\frac{\partial\phi}{\partial x_{t,l}^{(i)}}\bigg|_{(\mu_x^{(i)},0)},\ \big(x_{t,l}^{(j)}-\mu_x^{(j)}\big)\frac{\partial\phi}{\partial x_{t,l}^{(j)}}\bigg|_{(\mu_x^{(j)},0)}\bigg) + \xi_{t,l},$$

from the law of large numbers. Accordingly, we see that:

$$(\Sigma_x^{\mathsf{con}})_{i,j} \asymp C_N\,\frac{1}{L}\sum_{l}\operatorname{cov}\big(x_{t,l}^{(i)}-\mu_x^{(i)},\ x_{t,l}^{(j)}-\mu_x^{(j)}\big) + \xi_{t,l} = C_N\,(\Sigma_x)_{i,j} + \xi_{t,l},$$

where CN is a constant and ξt,l is typically small if the latent process 𝐱t,l and the stimulus 𝐬t are concentrated around their means. Then, the noise correlations are derived by normalization of the noise covariance given in Equation 9. This cancels out the scaling factor CN, and we get:

$$(\mathbf{N}^{\mathsf{con}})_{i,j} \asymp (\mathbf{N})_{i,j}.$$

Thus, we similarly conclude that as T, L → ∞, the conventional definition of noise correlation 𝐍𝖼𝗈𝗇 indeed aims to approximate 𝐍.

As a numerical illustration, we demonstrated in Figure 2—figure supplement 2 that the conventional definitions of the correlations indeed approximate our proposed definitions, but require a much larger number of trials to be accurate. More specifically, in order to achieve performance comparable to our method using L = 20 trials, the conventional correlation estimates require L = 1000 trials.

Appendix 2

Proof of Theorem 1

In what follows, we present a comprehensive proof of Theorem 1. Recall the following key assumptions:

Assumption (1). We assume a scalar time-varying external stimulus (i.e. 𝐬t=st, and hence 𝐝j=dj,𝐝=[d1,d2,,dN] ). Furthermore, we set the observation noise covariance to be 𝚺w=σw2𝐈, for notational convenience.

Assumption (2). We derive the performance bounds in the regime where T and L are large, and thus do not impose any prior distribution on the correlations (i.e. p𝗉𝗋(𝚺x)1), which are otherwise needed to mitigate overfitting (see Preliminary assumptions).

Assumption (3). We assume the latent trial-dependent process and stimulus to be slowly varying signals, and thus adopt a piece-wise constant model in which these processes are constant within consecutive windows of length W (i.e., 𝐱t,l = 𝐱Wk,l and st = sWk for (k−1)W+1 ≤ t ≤ kW and k = 1,…,K, with Wk = (k−1)W+1 and KW = T) for our theoretical analysis, as is usually done in spike count calculations for conventional noise correlation estimates.

Proof of Theorem 1. First, recall the proposed forward model (see Proposed forward model) under Assumptions (1)–(3):

$$\mathbf{y}_{t,l} = \mathbf{A}\mathbf{z}_{t,l} + \mathbf{w}_{t,l}, \qquad \mathbf{z}_{t,l} = \alpha\,\mathbf{z}_{t-1,l} + \mathbf{n}_{t,l}, \qquad n_{t,l}^{(j)} \sim \mathrm{Bernoulli}\big(\phi(x_{W_k,l}^{(j)})\big), \qquad \mathbf{x}_{W_k,l} \sim \mathcal{N}\big(\boldsymbol{\mu}_x + s_{W_k}\mathbf{d},\ \Sigma_x\big),$$

where $\phi(z) := \frac{\exp(z)}{1+\exp(z)}$ is the logistic function. Note that we have re-defined the latent process 𝐱t,l by absorbing the stimulus activity st𝐝 into the mean of 𝐱t,l for notational convenience, without loss of generality. Hereafter, we also assume that 𝐀 = 𝐈 without loss of generality. For a truncation level B (to be specified later), consider the event

$$\mathcal{A}_W := \Big\{\,|x_{W_k,l}^{(j)}| \le B \ \text{ and }\ \frac{1}{2(1+\exp(B))} \le \bar n_{W_k,l}^{(j)} \le 1 - \frac{1}{2(1+\exp(B))},\ \text{ for } j=1,\ldots,N,\ k=1,\ldots,K \text{ and } l=1,\ldots,L\,\Big\},$$

such that n¯Wk,l=[n¯Wk,l(1),n¯Wk,l(2),,n¯Wk,l(N)]:=1Ww=1Wn(k1)W+w,l.

First, we derive convenient forms of the maximum likelihood estimators via Laplace's approximation and asymptotic expansions (Wong, 2001), through the following lemma:

Lemma 1

Conditioned on event AW, the maximum likelihood estimators of the stimulus kernel of the j𝑡ℎ neuron and the noise covariance between the i𝑡ℎ and j𝑡ℎ neurons take the forms:

$$\hat d_j = \tilde d_j\big(1+\mathcal{O}(\sigma_w^2)\big)\big(1+\mathcal{O}(\tfrac{1}{W})\big) \qquad\text{and}\qquad (\hat\Sigma_x)_{i,j} = (\tilde\Sigma_x)_{i,j}\big(1+\mathcal{O}(\sigma_w^2)\big)\big(1+\mathcal{O}(\tfrac{1}{W})\big),$$

where

$$\tilde d_j = \frac{1}{L\sum_{k=1}^{K}s_{W_k}^2}\sum_{k,l=1}^{K,L}s_{W_k}\big(\phi^{-1}(\tilde n_{W_k,l}^{(j)}) - \mu_x^{(j)}\big) \qquad\text{and}\qquad (\tilde\Sigma_x)_{i,j} = \frac{1}{KL}\sum_{k,l=1}^{K,L}\big(\phi^{-1}(\tilde n_{W_k,l}^{(i)}) - \mu_x^{(i)} - s_{W_k}\tilde d_i\big)\big(\phi^{-1}(\tilde n_{W_k,l}^{(j)}) - \mu_x^{(j)} - s_{W_k}\tilde d_j\big),$$

with

$$\tilde{\mathbf{n}}_{W_k,l} = \big[\tilde n_{W_k,l}^{(1)}, \tilde n_{W_k,l}^{(2)}, \ldots, \tilde n_{W_k,l}^{(N)}\big]^\top := \frac{1}{W}\sum_{w=1}^{W}\big(\mathbf{y}_{(k-1)W+w,l} - \alpha\,\mathbf{y}_{(k-1)W+w-1,l}\big) \qquad\text{and}\qquad \phi^{-1}(z) := \ln\big(z/(1-z)\big).$$

Proof of Lemma 1.

First, maximizing the data likelihood, we derive the estimators:

$$(10)\quad \hat d_j = \arg\max_{d_j}\, p(\mathbf{y}\,|\,\Sigma_x,\mathbf{d}) = \frac{\int\Big(\frac{1}{L\sum_{k=1}^{K}s_{W_k}^2}\sum_{k,l=1}^{K,L}s_{W_k}\big(x_{W_k,l}^{(j)}-\mu_x^{(j)}\big)\Big)\,p(\mathbf{y}|\mathbf{n})\,p(\mathbf{n}|\mathbf{x})\,p(\mathbf{x}|\Sigma_x,\mathbf{d})\,d\mathbf{n}\,d\mathbf{x}}{\int p(\mathbf{y}|\mathbf{n})\,p(\mathbf{n}|\mathbf{x})\,p(\mathbf{x}|\Sigma_x,\mathbf{d})\,d\mathbf{n}\,d\mathbf{x}},$$

and

$$(11)\quad (\hat\Sigma_x)_{i,j} = \arg\max_{(\Sigma_x)_{i,j}}\, p(\mathbf{y}\,|\,\Sigma_x,\mathbf{d}) = \frac{\int\Big(\frac{1}{KL}\sum_{k,l=1}^{K,L}\big(x_{W_k,l}^{(i)}-\mu_x^{(i)}-s_{W_k}\hat d_i\big)\big(x_{W_k,l}^{(j)}-\mu_x^{(j)}-s_{W_k}\hat d_j\big)\Big)\,p(\mathbf{y}|\mathbf{n})\,p(\mathbf{n}|\mathbf{x})\,p(\mathbf{x}|\Sigma_x,\mathbf{d})\,d\mathbf{n}\,d\mathbf{x}}{\int p(\mathbf{y}|\mathbf{n})\,p(\mathbf{n}|\mathbf{x})\,p(\mathbf{x}|\Sigma_x,\mathbf{d})\,d\mathbf{n}\,d\mathbf{x}},$$

where Wk = (k−1)W+1. Then, we simplify these integrals based on the saddle point method of asymptotic expansions (Wong, 2001). To that end, consider the numerator of Equation 10, denoted by $I^{(1)}_{\mathsf{num}}$. We first evaluate the integration in $I^{(1)}_{\mathsf{num}}$ with respect to the variable 𝐧. Note that:

$$I^{(1)}_{\mathsf{num}} = \int h^{(1)}_{\mathsf{num}}(\mathbf{n})\,\exp\big(A_1 f_1(\mathbf{n})\big)\,d\mathbf{n},$$

where $h^{(1)}_{\mathsf{num}}(\mathbf{n}) = \frac{1}{(2\pi\sigma_w^2)^{TNL/2}}\int \Big(\frac{1}{L\sum_{k=1}^{K} s_{W_k}^2}\sum_{k,l=1}^{K,L} s_{W_k}\big(x_{W_k,l}^{(j)}-\mu_x^{(j)}\big)\Big)\, p(\mathbf{n}|\mathbf{x})\, p(\mathbf{x}|\Sigma_x,\mathbf{d})\, d\mathbf{x}$, $A_1 = \frac{1}{\sigma_w^2}$, $f_1(\mathbf{n}) = -\frac{1}{2}\sum_{t,l,j}\big(y_{t,l}^{(j)} - \sum_{k=1}^{t}\alpha^{t-k}\, n_{k,l}^{(j)}\big)^2$, and $d\mathbf{n}$ is shorthand notation for the product measure of the discrete random vector 𝐧. Observing that $f_1(\hat{\mathbf{n}}) = 0$ attains the maximum of $f_1$ at $\hat{\mathbf{n}} := \{\hat{\mathbf{n}}_{t,l} = \mathbf{y}_{t,l} - \alpha\,\mathbf{y}_{t-1,l}\}_{t,l=1}^{T,L}$, using the method of asymptotic expansions, $I^{(1)}_{\mathsf{num}}$ can be evaluated as:

$$(12)\quad I^{(1)}_{\mathsf{num}} = h^{(1)}_{\mathsf{num}}(\hat{\mathbf{n}})\times\exp\big(A_1 f_1(\hat{\mathbf{n}})\big)\sqrt{\frac{(2\pi)^{TLN}}{-A_1|H(f_1)|}}\,\Big(1+\mathcal{O}\big(\tfrac{1}{A_1}\big)\Big),$$

where the determinant of the Hessian matrix |H(f1)|, is a negative function of α. Note that the covariance of this Gaussian integral (-(H(f1))-1) is a function of α(0,1), and hence is bounded. Thus, all higher order error terms in Equation 12 are also bounded, as higher order moments of Gaussian distributions are functions of the covariance.

Next, we simplify the integral hnum(1)(𝐧^) in Equation 12 using a similar procedure. We have:

$$(13)\quad h^{(1)}_{\mathsf{num}}(\hat{\mathbf{n}}) = \int r^{(1)}_{\mathsf{num}}(\mathbf{x})\,\exp\big(A_2 f_2(\mathbf{x})\big)\,d\mathbf{x},$$

where f2(𝐱)=k,l,j(n~Wk,l(j)xWk,l(j)-log(1+exp(xWk,l(j)))) with

n~Wk,l=[n~Wk,l(1),n~Wk,l(2),,n~Wk,l(N)]:=1Ww=1Wn^(k1)W+w,l,
$$r^{(1)}_{\mathsf{num}}(\mathbf{x}) = \frac{1}{(2\pi)^{\frac{(W+1)KLN}{2}}\,\sigma_w^{TNL}\,|\Sigma_x|^{\frac{KL}{2}}}\exp\Big(-\frac{1}{2}\sum_{k,l}\big(\mathbf{x}_{W_k,l}-\boldsymbol{\mu}_x-s_{W_k}\mathbf{d}\big)^\top\Sigma_x^{-1}\big(\mathbf{x}_{W_k,l}-\boldsymbol{\mu}_x-s_{W_k}\mathbf{d}\big)\Big)\times\Big(\frac{1}{L\sum_{k=1}^{K}s_{W_k}^2}\sum_{k,l=1}^{K,L}s_{W_k}\big(x_{W_k,l}^{(j)}-\mu_x^{(j)}\big)\Big),$$

and $A_2 = W$. Then, we note that the gradient of $f_2$ vanishes, i.e., $\nabla f_2(\hat{\mathbf{x}}) = \mathbf{0}$, for $\hat{\mathbf{x}} := \{\hat x_{W_k,l}^{(j)} = \phi^{-1}(\tilde n_{W_k,l}^{(j)})\}_{k,l,j=1}^{K,L,N}$, where $\phi^{-1}(z) := \operatorname{logit}(z) = \ln(z/(1-z))$. Accordingly, by re-applying the saddle point method of asymptotic expansions, we evaluate the integral in Equation 13 as:

$$(14)\quad h^{(1)}_{\mathsf{num}}(\hat{\mathbf{n}}) = r^{(1)}_{\mathsf{num}}(\hat{\mathbf{x}})\times\exp\big(A_2 f_2(\hat{\mathbf{x}})\big)\sqrt{\frac{(2\pi)^{KLN}}{-A_2|H(f_2(\hat{\mathbf{x}}))|}}\,\Big(1+\mathcal{O}\big(\tfrac{1}{A_2}\big)\Big),$$

where the determinant of the Hessian, $|H(f_2(\hat{\mathbf{x}}))| = -\prod_{k,l,j} \tilde n_{W_k,l}^{(j)}\big(1-\tilde n_{W_k,l}^{(j)}\big) < 0$ when conditioned on event 𝒜W. The higher order terms in Equation 14 will be bounded if the covariance of the saddle point approximation, $-(H(f_2(\hat{\mathbf{x}})))^{-1}$, is bounded, which we ensure by conditioning on event 𝒜W. This completes the evaluation of $I^{(1)}_{\mathsf{num}}$.

Following the same sequence of arguments, we evaluate the denominator of Equation 10 denoted by Iden(1). Accordingly, we derive:

$$(15)\quad I^{(1)}_{\mathsf{den}} = h^{(1)}_{\mathsf{den}}(\hat{\mathbf{n}})\times\exp\big(A_1 f_1(\hat{\mathbf{n}})\big)\sqrt{\frac{(2\pi)^{TLN}}{-A_1|H(f_1)|}}\,\Big(1+\mathcal{O}\big(\tfrac{1}{A_1}\big)\Big), \qquad h^{(1)}_{\mathsf{den}}(\hat{\mathbf{n}}) = r^{(1)}_{\mathsf{den}}(\hat{\mathbf{x}})\times\exp\big(A_2 f_2(\hat{\mathbf{x}})\big)\sqrt{\frac{(2\pi)^{KLN}}{-A_2|H(f_2(\hat{\mathbf{x}}))|}}\,\Big(1+\mathcal{O}\big(\tfrac{1}{A_2}\big)\Big),$$

where $r^{(1)}_{\mathsf{den}}(\mathbf{x}) = \frac{1}{(2\pi)^{\frac{(W+1)KLN}{2}}\,\sigma_w^{TNL}\,|\Sigma_x|^{\frac{KL}{2}}}\exp\big(-\frac{1}{2}\sum_{k,l}(\mathbf{x}_{W_k,l}-\boldsymbol{\mu}_x-s_{W_k}\mathbf{d})^\top\Sigma_x^{-1}(\mathbf{x}_{W_k,l}-\boldsymbol{\mu}_x-s_{W_k}\mathbf{d})\big)$. Finally, by combining Equation 12, Equation 14 and Equation 15, the maximum likelihood estimator in Equation 10 takes the form:

$$\hat d_j = \frac{I^{(1)}_{\mathsf{num}}}{I^{(1)}_{\mathsf{den}}} = \tilde d_j\,\frac{\big(1+\mathcal{O}(\tfrac{1}{A_1})\big)\big(1+\mathcal{O}(\tfrac{1}{A_2})\big)}{\big(1+\mathcal{O}(\tfrac{1}{A_1})\big)\big(1+\mathcal{O}(\tfrac{1}{A_2})\big)} = \tilde d_j\big(1+\mathcal{O}(\sigma_w^2)\big)\big(1+\mathcal{O}(\tfrac{1}{W})\big).$$

Further, following the same sequence of reasoning, simplifying the numerator (Inum(2)) and denominator (Iden(2)) of Equation 11 yields:

$$(\hat\Sigma_x)_{i,j} = \frac{I^{(2)}_{\mathsf{num}}}{I^{(2)}_{\mathsf{den}}} = (\tilde\Sigma_x)_{i,j}\,\frac{\big(1+\mathcal{O}(\tfrac{1}{A_1})\big)\big(1+\mathcal{O}(\tfrac{1}{A_2})\big)}{\big(1+\mathcal{O}(\tfrac{1}{A_1})\big)\big(1+\mathcal{O}(\tfrac{1}{A_2})\big)} = (\tilde\Sigma_x)_{i,j}\big(1+\mathcal{O}(\sigma_w^2)\big)\big(1+\mathcal{O}(\tfrac{1}{W})\big).$$

This concludes the proof of Lemma 1. ■

Given that ϕ⁻¹(z) is unbounded for z = 0 or z = 1, we consider another truncation: ϕB⁻¹(z) := min{max{ϕ⁻¹(z), −B′}, B′}, where B′ = 2log(2exp(B)+1). This choice of B′ guarantees that over 𝒜W, |ϕB⁻¹(n̄Wk,l(j))| < B′ for all j = 1,…,N, k = 1,…,K and l = 1,…,L, and thus ϕB⁻¹(n̄Wk,l(j)) = ϕ⁻¹(n̄Wk,l(j)) on 𝒜W.
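The closed-form estimators of Lemma 1 are simple enough to implement directly; a sketch follows, where the hard clipping of the window-averaged rates stands in for the truncation above (the clipping threshold and function names are illustrative choices of ours):

```python
import numpy as np

def lemma1_estimators(y, s_w, mu_x, alpha, W):
    """Sketch of the approximate ML estimators d_tilde and Sigma_tilde.
    y: (L, T, N) fluorescence; s_w: (K,) window stimulus values, K = T // W."""
    L, T, N = y.shape
    K = T // W
    # First-difference deconvolution: n_hat_t = y_t - alpha * y_{t-1} (y_0 = 0).
    n_hat = y - alpha * np.concatenate([np.zeros((L, 1, N)), y[:, :-1]], axis=1)
    # Window averages n_tilde, shape (L, K, N).
    n_tilde = n_hat[:, :K * W].reshape(L, K, W, N).mean(axis=2)
    n_tilde = np.clip(n_tilde, 1e-3, 1 - 1e-3)   # truncation to keep logit finite
    x_hat = np.log(n_tilde / (1 - n_tilde))      # phi^{-1} (logit)
    # d_tilde: stimulus-weighted average of the de-meaned logits.
    denom = L * np.sum(s_w ** 2)
    d_tilde = np.einsum('k,lkn->n', s_w, x_hat - mu_x) / denom
    # Sigma_tilde: empirical covariance of the stimulus-corrected logits.
    resid = x_hat - mu_x - s_w[None, :, None] * d_tilde[None, None, :]
    resid = resid.reshape(L * K, N)
    sigma_tilde = resid.T @ resid / (K * L)
    return d_tilde, sigma_tilde
```

On noiseless data generated exactly from the piece-wise constant model, the sketch recovers the stimulus kernel exactly and yields a vanishing noise covariance.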

From Lemma 1, the bias and variance of the maximum likelihood estimators, d^j and (𝚺^x)i,j are upper-bounded, if those of d~j and (𝚺~x)i,j are bounded:

$$(16)\quad \big|\mathrm{bias}(\hat d_j)\big| \le \big|\mathrm{bias}(\tilde d_j)\big| + \zeta_j, \qquad \mathrm{Var}(\hat d_j) \le \mathrm{Var}(\tilde d_j) + \tilde\zeta_j,$$

and

$$(17)\quad \big|\mathrm{bias}\big((\hat\Sigma_x)_{i,j}\big)\big| \le \big|\mathrm{bias}\big((\tilde\Sigma_x)_{i,j}\big)\big| + \upsilon_{i,j}, \qquad \mathrm{Var}\big((\hat\Sigma_x)_{i,j}\big) \le \mathrm{Var}\big((\tilde\Sigma_x)_{i,j}\big) + \tilde\upsilon_{i,j},$$

where ζj, ζ~j, υi,j and υ~i,j represent terms that are 𝒪(σw2) or 𝒪(1W) . Thus, we seek to derive the performance bounds of d~j and (𝚺~x)i,j.

Bounding the bias of d^j

Let us first consider d~j. Note that:

$$(18)\quad \big|\mathrm{bias}(\tilde d_j)\big| := \big|\mathbb{E}[\tilde d_j] - d_j\big| \overset{(a)}{=} \big|\mathbb{E}\big[\tilde d_j - (d_{\mathsf{Oracle}})_j\big]\big| \overset{(b)}{\le} \frac{1}{L\sum_{k=1}^{K}s_{W_k}^2}\sum_{k,l=1}^{K,L}|s_{W_k}|\,\mathbb{E}\big[\big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - x_{W_k,l}^{(j)}\big|\big],$$

where (a) holds since the Oracle estimator, (dOracle)j=1Lk=1KsWk2k,l=1K,LsWk(xWk,l(j)-μx(j)) (i.e., observing 𝐱t,l directly) is unbiased and (b) follows through the application of Jensen’s inequality and triangle inequality. To simplify this bound, the triangle inequality yields:

$$(19)\quad \mathbb{E}\big[\big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - x_{W_k,l}^{(j)}\big|\big] \le \mathbb{E}\big[\big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - \phi_B^{-1}(\bar n_{W_k,l}^{(j)})\big|\big] + \mathbb{E}\big[\big|\phi_B^{-1}(\bar n_{W_k,l}^{(j)}) - x_{W_k,l}^{(j)}\big|\big].$$

Then, to bound each of these terms, we establish a piece-wise linear Lipschitz-type bound on ϕB-1(z). First, consider the first term 𝔼[|ϕB-1(n~Wk,l(j))-ϕB-1(n¯Wk,l(j))|]. We seek to upper-bound this expectation by bounding |ϕB-1(n~Wk,l(j))-ϕB-1(n¯Wk,l(j))| via the following technical lemma:

Lemma 2. Conditioned on event AW, the following bound holds for all j=1,,N, k=1,,K and l=1,,L:

$$\epsilon\big(\tilde n_{W_k,l}^{(j)}, \bar n_{W_k,l}^{(j)}\big) := \big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - \phi_B^{-1}(\bar n_{W_k,l}^{(j)})\big| \le g(B)\,\big|\tilde n_{W_k,l}^{(j)} - \bar n_{W_k,l}^{(j)}\big|,$$

where

g(B)=max{4(1+exp(B))2,4exp(-B)log(2exp(B)+1)(1+(2exp(B)+1)2)}.

Proof of Lemma 2. First, consider the case n¯Wk,l(j)0.5. We bound the function ε(n~Wk,l(j),n¯Wk,l(j)) in a piece-wise fashion as follows. Note that ϕB-1(n~Wk,l(j)) is convex for n~Wk,l(j)0.5 and concave for n~Wk,l(j)0.5. Thus, it immediately follows that for n~Wk,l(j)n¯Wk,l(j), ε(n~Wk,l(j),n¯Wk,l(j)) is convex and hence:

(20) ε(n~Wk,l(j),n¯Wk,l(j))|B+ϕB1(n¯Wk,l(j))||n¯Wk,l(j)11+exp(B)|(n¯Wk,l(j)n~Wk,l(j)).

Furthermore, for n¯Wk,l(j)n~Wk,l(j)0.5, ε(n~Wk,l(j),n¯Wk,l(j)) is concave, and hence is bounded by the tangent at n¯Wk,l(j):

(21) ε(n~Wk,l(j),n¯Wk,l(j))1n¯Wk,l(j)(1n¯Wk,l(j))(n~Wk,l(j)n¯Wk,l(j)).

Finally, for the case of n~Wk,l(j)0.5, consider the line,

(22) h(n~Wk,l(j),n¯Wk,l(j)):=|BϕB1(n¯Wk,l(j))||11+exp(B)n¯Wk,l(j)|(n~Wk,l(j)n¯Wk,l(j)).

From the convexity of ε(n~Wk,l(j),n¯Wk,l(j)), h(n~Wk,l(j),n¯Wk,l(j)) upper bounds ε(n~Wk,l(j),n¯Wk,l(j)) for n~Wk,l(j)0.5, since h(0.5,n¯Wk,l(j))ε(0.5,n¯Wk,l(j)) for n¯Wk,l(j)0.5. Combining the piece-wise bounds in Equation 20, Equation 21 and Equation 22, we conclude that for n¯Wk,l(j)0.5:

(23) ε(n~Wk,l(j),n¯Wk,l(j))g~(n¯Wk,l(j),B)|n~Wk,l(j)-n¯Wk,l(j)|,

where

g~(n¯Wk,l(j),B)=max{1n¯Wk,l(j)(1-n¯Wk,l(j)),|B+ϕB-1(n¯Wk,l(j))||n¯Wk,l(j)-11+exp(B)|,|B-ϕB-1(n¯Wk,l(j))||11+exp(-B)-n¯Wk,l(j)|}.

Due to the symmetry of ε(n~Wk,l(j),n¯Wk,l(j)), the same bound in Equation 23 can be established for n¯Wk,l(j)>0.5 as well.

Then, using |ϕB-1(n¯Wk,l(j))|B and conditioning on event 𝒜W, we simplify this bound as:

g~(n¯Wk,l(j),B)max{4(1+exp(B))2,4B(1+exp(B))(1+exp(B))exp(B)-(2exp(B)+1)}.

Finally, based on the fact that B=2log(2exp(B)+1), the latter is further upper bounded as:

g~(n¯Wk,l(j),B)g(B),

where

g(B)=max{4(1+exp(B))2,4exp(-B)log(2exp(B)+1)(1+(2exp(B)+1)2)}.

This concludes the proof of Lemma 2. ■

Following Lemma 2, by conditioning on the event 𝒜W we have:

$$(24)\quad \mathbb{E}_{\mathcal{A}_W}\big[\big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - \phi_B^{-1}(\bar n_{W_k,l}^{(j)})\big|\big] \le g(B)\,\mathbb{E}_{\mathcal{A}_W}\big[\big|\tilde n_{W_k,l}^{(j)} - \bar n_{W_k,l}^{(j)}\big|\big].$$

Then, we note that:

$$(25)\quad \mathbb{E}\big[\big|\tilde n_{W_k,l}^{(j)} - \bar n_{W_k,l}^{(j)}\big|\big] \overset{(c)}{\le} \sqrt{\mathbb{E}\big[\big|\tilde n_{W_k,l}^{(j)} - \bar n_{W_k,l}^{(j)}\big|^2\big]} \overset{(d)}{=} \sigma_w\sqrt{\frac{1+\alpha^2}{W}},$$

where in (c) we have used the Cauchy-Schwarz inequality, and in (d) we have used the fact that the observation noise across the W time instances is i.i.d. and white. From the bounds in Equation 24 and Equation 25, we conclude that the first expectation in Equation 19, conditioned on event 𝒜W is bounded as:

$$(26)\quad \mathbb{E}_{\mathcal{A}_W}\big[\big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - \phi_B^{-1}(\bar n_{W_k,l}^{(j)})\big|\big] \le g(B)\,\mathbb{E}_{\mathcal{A}_W}\big[\big|\tilde n_{W_k,l}^{(j)} - \bar n_{W_k,l}^{(j)}\big|\big] \le \frac{g(B)\,\sigma_w}{P(\mathcal{A}_W)}\sqrt{\frac{1+\alpha^2}{W}}.$$

The foregoing sequence of reasoning similarly applies to $\mathbb{E}\big[|\phi_B^{-1}(\bar n_{W_k,l}^{(j)}) - x_{W_k,l}^{(j)}|\big]$, since $\frac{1}{1+\exp(B)} \le \phi(x_{W_k,l}^{(j)}) \le 1 - \frac{1}{1+\exp(B)}$ for k = 1,…,K, l = 1,…,L and j = 1,…,N (as a consequence of $|x_{W_k,l}^{(j)}| \le B$, conditioned on 𝒜W). Accordingly, we derive the upper bound on the second term in Equation 19, conditioned on event 𝒜W:

$$(27)\quad \mathbb{E}_{\mathcal{A}_W}\big[\big|\phi_B^{-1}(\bar n_{W_k,l}^{(j)}) - x_{W_k,l}^{(j)}\big|\big] \le g(B)\,\mathbb{E}_{\mathcal{A}_W}\big[\big|\bar n_{W_k,l}^{(j)} - \phi(x_{W_k,l}^{(j)})\big|\big] \overset{(e)}{\le} \frac{g(B)}{W\,P(\mathcal{A}_W)}\sqrt{\mathbb{E}\bigg[\Big(\sum_{w=1}^{W} n_{(k-1)W+w,l}^{(j)} - W\phi(x_{W_k,l}^{(j)})\Big)^2\bigg]} \overset{(f)}{=} \frac{g(B)}{W\,P(\mathcal{A}_W)}\sqrt{\mathbb{E}\big[W\phi(x_{W_k,l}^{(j)})\big(1-\phi(x_{W_k,l}^{(j)})\big)\big]} \overset{(g)}{\le} \frac{g(B)}{2\sqrt{W}\,P(\mathcal{A}_W)},$$

where (e) follows from the application of Jensen's inequality, (f) follows from the formula for the variance of a Binomial random variable, and (g) follows from the inequality $\phi(x_{W_k,l}^{(j)})\big(1-\phi(x_{W_k,l}^{(j)})\big) \le 1/4$ for $\phi(x_{W_k,l}^{(j)}) \in [0,1]$. Combining the results in Equation 26 and Equation 27, the overall expectation in Equation 19, conditioned on the event 𝒜W, is upper-bounded by:

$$(28)\quad \mathbb{E}_{\mathcal{A}_W}\big[\big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - x_{W_k,l}^{(j)}\big|\big] \le \frac{2g(B)}{\sqrt{W}}\Big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\Big),$$

where we have lower bounded the probability of the event 𝒜W by 1/2 (that is, $P(\mathcal{A}_W) > 1/2$). Thus, from Equation 18 and Equation 28 we derive:

$$\big|\mathrm{bias}_{\mathcal{A}_W}(\tilde d_j)\big| \le \frac{2g(B)}{\sqrt{W}}\Big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\Big)\,\frac{\sum_{k,l=1}^{K,L}|s_{W_k}|}{L\sum_{k=1}^{K}s_{W_k}^2} \overset{(h)}{\le} \frac{2g(B)}{\sigma_s\sqrt{W}}\Big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\Big),$$

where in (h) we have used the Cauchy-Schwarz inequality $\sum_{k=1}^{K}|s_{W_k}| \le \sqrt{K\sum_{k=1}^{K}s_{W_k}^2}$, while defining $\sigma_s^2 := \frac{1}{K}\sum_{k=1}^{K}s_{W_k}^2$.

Then, for B ≥ 2.5, we have g(B) = 4(1+exp(B))² and B′ = 2log(2exp(B)+1) ≤ 3B. Let B := σm√(8q log W) for some q > 1/64. Further, for some ε < 1/2, suppose that:

(29) logWmax{log(8KLN/η)q,32σm2qϵ2,2log(64q)12ϵ,max{6.25,4(μx+maxk,j{ |sWkdj| })2}8qσm2,log2}.

Under these conditions, 

$$(30)\quad g(B) \le 4\Big(1+\exp\big(\sigma_m\sqrt{8q\log W}\big)\Big)^2 \overset{(i)}{\le} 16\exp\big(2\sigma_m\sqrt{8q\log W}\big) \le 16\,W^{\epsilon},$$

where in (i) we have used the fact that $e^{-x} \le 1$ for $x \ge 0$. Thus, under the conditions in Equation 29, we have:

$$(31)\quad \big|\mathrm{bias}_{\mathcal{A}_W}(\tilde d_j)\big| \le \frac{32}{\sigma_s\sqrt{W^{1-2\epsilon}}}\Big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\Big).$$

Finally, from Equation 16 and Equation 31, we conclude that:

$$\big|\mathrm{bias}_{\mathcal{A}_W}(\hat d_j)\big| \le \frac{C_1}{\sqrt{W^{1-2\epsilon}}}\Big(2\sigma_w\sqrt{1+\alpha^2}+1\Big) + \mathcal{O}(\sigma_w^2) + \mathcal{O}\Big(\frac{1}{W}\Big),$$

where $C_1 := \frac{16}{\sigma_s}$.

Bounding the variance of d^j

Next, we prove the upper bound on the variance of the maximum likelihood estimator, d^j. To that end, we upper-bound the variance of d~j. First, using the Cauchy-Schwarz inequality, we have:

$$(32)\quad \mathrm{Var}(\tilde d_j) := \mathbb{E}\big[\big|\tilde d_j - \mathbb{E}[\tilde d_j]\big|^2\big] \le \bigg\{\sqrt{\mathbb{E}\big[\big|\tilde d_j - (d_{\mathsf{Oracle}})_j\big|^2\big]} + \sqrt{\mathrm{Var}\big((d_{\mathsf{Oracle}})_j\big)}\bigg\}^2.$$

Then, we upper-bound the conditional second moment of |d~j-(dOracle)j| using the same techniques as we used in bounding the first moment. Accordingly, we get:

$$(33)\quad \mathbb{E}_{\mathcal{A}_W}\big[\big|\tilde d_j - (d_{\mathsf{Oracle}})_j\big|^2\big] = \frac{1}{\big(L\sum_{k=1}^{K}s_{W_k}^2\big)^2}\,\mathbb{E}_{\mathcal{A}_W}\bigg[\Big|\sum_{k,l=1}^{K,L}s_{W_k}\big(\phi_B^{-1}(\tilde n_{W_k,l}^{(j)})-x_{W_k,l}^{(j)}\big)\Big|^2\bigg] \overset{(j)}{\le} \frac{1}{\big(L\sum_{k=1}^{K}s_{W_k}^2\big)^2}\bigg\{\sum_{k,l=1}^{K,L}|s_{W_k}|\sqrt{\mathbb{E}_{\mathcal{A}_W}\big[\big|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)})-x_{W_k,l}^{(j)}\big|^2\big]}\bigg\}^2 \overset{(k)}{\le} \bigg\{\frac{2g(B)}{\sigma_s\sqrt{W}}\Big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\Big)\bigg\}^2,$$

where in (j) we have used the Cauchy-Schwarz inequality, and (k) follows from $\mathbb{E}_{\mathcal{A}_W}\big[|\phi_B^{-1}(\tilde n_{W_k,l}^{(j)}) - x_{W_k,l}^{(j)}|^2\big] \le \frac{2(g(B))^2}{W}\big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\big)^2$, which can be proven by the same techniques as before.

Next, we note that the variance of the Oracle estimator (dOracle)j is given by:

$$(34)\quad \mathrm{Var}\big((d_{\mathsf{Oracle}})_j\big) = \frac{1}{\big(L\sum_{k=1}^{K}s_{W_k}^2\big)^2}\sum_{k,l=1}^{K,L}s_{W_k}^2\,\mathrm{Var}\big(x_{W_k,l}^{(j)}-\mu_x^{(j)}\big) = \frac{(\Sigma_x)_{j,j}}{L\sum_{k=1}^{K}s_{W_k}^2} = \frac{(\Sigma_x)_{j,j}}{LK\sigma_s^2}.$$

Combining Equation 32, Equation 33, and Equation 34, we can upper-bound the conditional variance of d~j following Equation 32 as:

$$\mathrm{Var}_{\mathcal{A}_W}(\tilde d_j) \le \frac{(\Sigma_x)_{j,j}}{KL\sigma_s^2(1-\eta)} + \frac{2g(B)}{\sigma_s\sqrt{W}}\Big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\Big).$$

Then, following Equation 16, under the conditions for W in Equation 29, we conclude the proof of the conditional variance of d^j:

$$(35)\quad \mathrm{Var}_{\mathcal{A}_W}(\hat d_j) \le \frac{(\Sigma_x)_{j,j}}{KL\sigma_s^2(1-\eta)} + \frac{C_2}{\sqrt{W^{1-2\epsilon}}}\Big(2\sigma_w\sqrt{1+\alpha^2}+1\Big) + \mathcal{O}(\sigma_w^2) + \mathcal{O}\Big(\frac{1}{W}\Big),$$

where $C_2 := \frac{8\sqrt{2}}{\sigma_s}$.

Bounding the bias of (𝚺^x)i,j

Next, following the foregoing techniques, we upper-bound the bias and variance of the noise covariance estimator (𝚺^x)i,j. To that end, we first note:

$$(36)\quad \big|\mathrm{bias}\big((\tilde\Sigma_x)_{i,j}\big)\big| := \big|\mathbb{E}\big[(\tilde\Sigma_x)_{i,j}\big] - (\Sigma_x)_{i,j}\big| \overset{(l)}{\le} \big|\mathbb{E}\big[(\tilde\Sigma_x)_{i,j} - (\Sigma_{\mathsf{Oracle}})_{i,j}\big]\big| + \big|\mathrm{bias}\big((\Sigma_{\mathsf{Oracle}})_{i,j}\big)\big|,$$

where (l) follows from the triangle inequality, with the Oracle noise covariance estimator (i.e., observing 𝐱t,l directly), being defined as:

$$(\Sigma_{\mathsf{Oracle}})_{i,j} = \frac{1}{KL}\sum_{k,l=1}^{K,L}\big(x_{W_k,l}^{(i)}-\mu_x^{(i)}-s_{W_k}(d_{\mathsf{Oracle}})_i\big)\big(x_{W_k,l}^{(j)}-\mu_x^{(j)}-s_{W_k}(d_{\mathsf{Oracle}})_j\big).$$

Then, to simplify the first term in Equation 36, we use similar techniques as before. Accordingly,

(37) |E[(Σ~x)i,j(ΣOracle)i,j]|=|E[1KLk,l=1KL(ϕB1(n~Wk,l(i))μx(i)sWkd~i )(ϕB1(n~Wk,l(j))μx(j)sWkd~j)1KLk,l=1KL(xWk,l(i)μx(i)sWk(dOracle)i)(xWk,l(j)μx(j)sWk(dOracle)j)]|(m)1KLk,l=1K,LE[|(ϕB1(n~Wk,l(i))μx(i) )(ϕB1(n~Wk,l(j))μx(j) )(xWk,l(i)μx(i))(xWk,l(j)μx(j))|]+1KL2k=1KsWk2E[|k,l=1K,LsWk(ϕB1(n~Wk,l(i))μx(i))k,l=1K,LsWk(ϕB1(n~Wk,l(j))μx(j))k,l=1K,LsWk(xWk,l(i)μx(i))k,l=1K,LsWk(xWk,l(j)μx(j))|],

where (m) follows through the application of Jensen’s inequality and triangle inequality. Next, conditioned on the event 𝒜W we have:

(38) E𝒜W[|(ϕB1(n~Wk,l(i))μx(i) )(ϕB1(n~Wk,l(j))μx(j) )(xWk,l(i)μx(i))(xWk,l(j)μx(j))|]E𝒜W[|ϕB1(n~Wk,l(i))ϕB1(n~Wk,l(j))xWk,l(i)xWk,l(j)|]+μx(j)E𝒜W[|ϕB1(n~Wk,l(i))xWk,l(i)|]+μx(i)E𝒜W[|ϕB1(n~Wk,l(j))xWk,l(j)|]E𝒜W[|ϕB1(n~Wk,l(i))ϕB1(n~Wk,l(j))ϕB1(n¯Wk,l(i))ϕB1(n¯Wk,l(j))|]+E𝒜W[|ϕB1(n¯Wk,l(i))ϕB1(n¯Wk,l(j))xWk,l(i)xWk,l(j)|]+2μmE𝒜W[|ϕB1(n~Wk,l(j))xWk,l(j)|]2g(B)σw1+α2W(g(B)σw1+α2W+4log(2exp(B)+1))+2g(B)W{g(B)4W+B}+4μmg(B)W(σw1+α2+12),

where μm=μx and we have used B=2log(2exp(B)+1). Similarly, conditioned on the event 𝒜W the second term in Equation 37 can be bounded as:

(39) E𝒜W[|k,l=1K,LsWk(ϕB1(n~Wk,l(i))μx(i) )k,l=1K,LsWk(ϕB1(n~Wk,l(j))μx(j) )k,l=1K,LsWk(xWk,l(i)μx(i))k,l=1K,LsWk(xWk,l(j)μx(j))|]k,k,l,l=1K,K,L,L|sWksWk|E𝒜W[|ϕB1(n~Wk,l(i))ϕB1(n~Wk,l(j))xWk,l(i)xWk,l(j)|]+μx(i)k,k,l,l=1K,K,L,L|sWksWk|E𝒜W[|ϕB1(n~Wk,l(j))xWk,l(j)|]+μx(j)k,k,l,l=1K,K,L,L|sWksWk|E𝒜W[|ϕB1(n~Wk,l(i))xWk,l(i)|]
(40) (Lk=1K|sWk|)2{2g(B)σw1+α2W(g(B)σw1+α2W+4log(2exp(B)+1))+2g(B)W{g(B)4W+B}+4μmg(B)W(σw1+α2+12)}.

Then, by combining the bounds in Equation 38 and Equation 40 and using an instance of Cauchy-Schwarz inequality (k=1K|sWk|)2Kk=1KsWk2, we see that the bound in Equation 37 conditioned on the event 𝒜W can be expressed as:

$$(41)\quad \big|\mathbb{E}_{\mathcal{A}_W}\big[(\tilde\Sigma_x)_{i,j} - (\Sigma_{\mathsf{Oracle}})_{i,j}\big]\big| \le \frac{4g(B)\sigma_w\sqrt{1+\alpha^2}}{\sqrt{W}}\bigg(\frac{g(B)\sigma_w\sqrt{1+\alpha^2}}{\sqrt{W}} + 4\log\big(2\exp(B)+1\big) + 2\mu_m\bigg) + \frac{4g(B)}{\sqrt{W}}\bigg(\frac{g(B)}{4\sqrt{W}} + B + \mu_m\bigg).$$

Next, we note that the oracle estimator follows a Wishart distribution, that is, $KL\,\Sigma_{\mathsf{Oracle}} \sim \mathrm{Wish}_N(\Sigma_x, KL-1)$. Therefore, we get:

$$\mathbb{E}\big[\Sigma_{\mathsf{Oracle}}\big] = \frac{KL-1}{KL}\,\Sigma_x.$$

Thus, the bias of the oracle estimator is given by:

$$(42)\quad \big|\mathrm{bias}\big((\Sigma_{\mathsf{Oracle}})_{i,j}\big)\big| = \frac{1}{KL}\big|(\Sigma_x)_{i,j}\big|.$$

Combining the results in Equation 41 and Equation 42, the bias of (Σ~x)i,j can be bounded as:

$$(43)\quad \big|\mathrm{bias}_{\mathcal{A}_W}\big((\tilde\Sigma_x)_{i,j}\big)\big| \le \frac{\big|(\Sigma_x)_{i,j}\big|}{KL(1-\eta)} + \frac{4g(B)\sigma_w\sqrt{1+\alpha^2}}{\sqrt{W}}\Big(4\log\big(2\exp(B)+1\big) + 2\mu_m\Big) + \frac{4g(B)}{\sqrt{W}}\big(B + \mu_m\big) + \mathcal{O}\Big(\frac{g(B)^2}{W}\Big).$$

Finally, under the conditions for W in Equation 29, the latter inequality simplifies to:

$$(44)\quad \big|\mathrm{bias}_{\mathcal{A}_W}\big((\tilde\Sigma_x)_{i,j}\big)\big| \overset{(n)}{\le} \frac{\big|(\Sigma_x)_{i,j}\big|}{KL(1-\eta)} + \frac{B\,g(B)}{\sqrt{W}}\Big(28\sigma_w\sqrt{1+\alpha^2} + 6\Big) + \mathcal{O}\Big(\frac{g(B)^2}{W}\Big) \overset{(o)}{\le} \frac{\big|(\Sigma_x)_{i,j}\big|}{KL(1-\eta)} + \frac{64\sigma_m\sqrt{2q\log W}}{\sqrt{W^{1-2\epsilon}}}\Big(14\sigma_w\sqrt{1+\alpha^2} + 3\Big) + \mathcal{O}\Big(\frac{1}{\sqrt{W^{1-2\epsilon}}}\Big),$$

where in (n) we have used $2\log(2\exp(B)+1) \le 3B$ and $B > 2\mu_m$, and in (o) we have used $B\,g(B) \le 16\,W^{\epsilon}\,\sigma_m\sqrt{8q\log W}$, which follows from Equation 30. Thus, following Equation 17, we derive the bound on the bias of the maximum likelihood estimator:

$$\big|\mathrm{bias}_{\mathcal{A}_W}\big((\hat\Sigma_x)_{i,j}\big)\big| \le \frac{\big|(\Sigma_x)_{i,j}\big|}{KL(1-\eta)} + \frac{\sqrt{\log W}}{\sqrt{W^{1-2\epsilon}}}\,C_3\Big(14\sigma_w\sqrt{1+\alpha^2}+3\Big) + \mathcal{O}(\sigma_w^2) + \mathcal{O}\Big(\frac{1}{\sqrt{W^{1-2\epsilon}}}\Big),$$

where $C_3 := 64\sigma_m\sqrt{2q}$.

Bounding the variance of (Σ^x)i,j

Next, we establish an upper bound on the variance of the maximum likelihood estimator of the noise covariance. To that end, we upper-bound the variance of (𝚺~x)i,j. First, using the Cauchy-Schwarz inequality, we get:

$$(45)\quad \mathrm{Var}\big((\tilde\Sigma_x)_{i,j}\big) := \mathbb{E}\Big[\big|(\tilde\Sigma_x)_{i,j} - \mathbb{E}[(\tilde\Sigma_x)_{i,j}]\big|^2\Big] \le \bigg\{\sqrt{\mathbb{E}\big[\big|(\tilde\Sigma_x)_{i,j} - (\Sigma_{\mathsf{Oracle}})_{i,j}\big|^2\big]} + \sqrt{\mathrm{Var}\big((\Sigma_{\mathsf{Oracle}})_{i,j}\big)}\bigg\}^2.$$

Then, we upper-bound the conditional second moment of |(Σ~x)i,j(ΣOracle)i,j| using the same techniques used in bounding its first moment. Accordingly, we derive:

(46) E𝒜W[|(Σ~x)i,j(ΣOracle)i,j|2]=1K2L2E𝒜W[{k,l=1K,L((ϕB1(n~Wk,l(i))μx(i)sWkd~i )(ϕB1(n~Wk,l(j))μx(j)sWkd~j )(xWk,l(i)μx(i)sWk(dOracle)i)(xWk,l(j)μx(j)sWk(dOracle)j))}2]1K2L2{k,l=1K,L{E𝒜W[((ϕB1(n~Wk,l(i))μx(i)sWkd~i )(ϕB1(n~Wk,l(j))μx(j)sWKd~j )(xWk,l(i)μx(i)sWk(dOracle)i)(xWk,l(j)μx(j)sWk(dOracle)j))2]}12}2,

where the last bound follows from the Cauchy-Schwarz inequality. Then, we derive:

(47) E𝒜W[{(ϕB1(n~Wk,l(i))μx(i)sWkd~i )(ϕB1(n~Wk,l(j))μx(j)sWkd~j )(xWk,l(i)μx(i)sWk(dOracle)i)(xWk,l(j)μx(j)sWk(dOracle)j)}2]=E𝒜W[{(ϕB1(n~Wk,l(i))μx(i)sWk1Lk=1KsWk2k,l=1K,LsWk(ϕB1(n~Wk,l(i))μx(i) ) )×(ϕB1(n~Wk,l(j))μx(j)sWk1Lk=1KsWk2k,l=1K,LsWk(ϕB1(n~Wk,l(j))μx(j) ) )(xWk,l(i)μx(i)sWk1Lk=1KsWk2k,l=1K,LsWk(xWk,l(i)μx(i) ) )×(xWk,l(j)μx(j)sWk1Lk=1KsWk2k,l=1K,LsWk(xWk,l(j)μx(j) ) )}2]=E𝒜W[{((ϕB1(n~Wk,l(i))μx(i) )(ϕB1(n~Wk,l(j))μx(j) )(xWk,l(i)μx(i))(xWk,l(j)μx(j)))sWkLk=1KsWk2k,l=1K,LsWk{(ϕB1(n~Wk,l(i))μx(i) )(ϕB1(n~Wk,l(j))μx(j) )(xWk,l(i)μx(i))(xWk,l(j)μx(j))}sWkLk=1KsWk2k,l=1K,LsWk{(ϕB1(n~Wk,l(i))μx(i) )(ϕB1(n~Wk,l(j))μx(j) )(xWk,l(i)μx(i))(xWk,l(j)μx(j))}+sWk2L2k,k=1K,KsWk2sWk2k,k,l,l=1K,LsWksWk{(ϕB1(n~Wk,l(i))μx(i) )(ϕB1(n~Wk,l(j))μx(j) )(xWk,l(i)μx(i))(xWk,l(j)μx(j))}}2]2(g(B))2W(σw1+α2+12)2(g(B)W(σw1+α2+12)+2(B+μm))2×(1+sWkLk=1KsWkLk=1KsWk2+sWkLk=1KsWkLk=1KsWk2+sWk2Lk=1KsWkLk=1KsWkL2k=1KsWk2k=1KsWk2)2.

Using the final bound of Equation 47 in Equation 46, we get:

(48) E𝒜W[|(Σ~x)i,j(ΣOracle)i,j|2]2g(B)W(σw1+α2+12)(g(B)W(σw1+α2+12)+2(B+μm))×1KL(KL+Lk=1K|sWk|k=1K|sWk|k=1KsWk2+Lk=1K|sWk|k=1K|sWk|k=1KsWk2+Lk=1KsWk2k=1K|sWk|k=1K|sWk|k=1KsWk2k=1KsWk2)42g(B)W(σw1+α2+12)(g(B)W(σw1+α2+12)+2(B+μm)),

where the last inequality follows from an instance of the Cauchy-Schwarz inequality, that is (k=1K|sWk|)2Kk=1KsWk2.

Then, following the observation $KL\,\Sigma_{\mathsf{Oracle}} \sim \mathrm{Wish}_N(\Sigma_x, KL-1)$, we derive the variance of $(\Sigma_{\mathsf{Oracle}})_{i,j}$:

$$(49)\quad \mathrm{Var}\big((\Sigma_{\mathsf{Oracle}})_{i,j}\big) = \delta_{i,j}^2 = \frac{(KL-1)\big((\Sigma_x)_{i,j}^2 + (\Sigma_x)_{i,i}(\Sigma_x)_{j,j}\big)}{K^2L^2}.$$

Combining Equation 45, Equation 48 and Equation 49, we express the upper bound on the conditional variance of (Σ~x)i,j as:

$$\mathrm{Var}_{\mathcal{A}_W}\big((\tilde\Sigma_x)_{i,j}\big) \le \frac{\delta_{i,j}^2}{1-\eta} + \frac{8\sqrt{2}\,g(B)}{\sqrt{W}}\big(B+\mu_m\big)\Big(\sigma_w\sqrt{1+\alpha^2}+\frac{1}{2}\Big) + \mathcal{O}\Big(\frac{g(B)^2}{W}\Big).$$

Then, following Equation 17 and the conditions in Equation 29, we conclude the proof of the upper bound on the conditional variance of (Σ^x)i,j:

$$\mathrm{Var}_{\mathcal{A}_W}\big((\hat\Sigma_x)_{i,j}\big) \le \frac{\delta_{i,j}^2}{1-\eta} + \frac{\sqrt{\log W}}{\sqrt{W^{1-2\epsilon}}}\,C_4\Big(2\sigma_w\sqrt{1+\alpha^2}+1\Big) + \mathcal{O}(\sigma_w^2) + \mathcal{O}\Big(\frac{1}{\sqrt{W^{1-2\epsilon}}}\Big),$$

where $C_4 := 384\,\sigma_m\sqrt{q}$.

Finally, it only remains to prove that the event 𝒜W occurs with high probability for sufficiently large W:

Lemma 3. The probability of occurrence of the event

$$\mathcal{A}_W = \Big\{\,|x_{W_k,l}^{(j)}| \le B \ \text{ and }\ \frac{1}{2(1+\exp(B))} \le \bar n_{W_k,l}^{(j)} \le 1 - \frac{1}{2(1+\exp(B))},\ \text{ for } j=1,\ldots,N,\ k=1,\ldots,K \text{ and } l=1,\ldots,L\,\Big\}$$

is lower-bounded as follows:

$$P(\mathcal{A}_W) \ge 1 - \eta,$$

for some constant 0 < η ≤ 1/2 satisfying the conditions of Equation 29.

Proof of Lemma 3.

First, using the union bound, we have:

$$(50)\quad P(\mathcal{A}_W) \ge 1 - \sum_{k,l,j=1}^{K,L,N}\bigg\{P\big(|x_{W_k,l}^{(j)}| > B\big) + P\Big(\bar n_{W_k,l}^{(j)} < \frac{1}{2(1+\exp(B))}\Big) + P\Big(\bar n_{W_k,l}^{(j)} > 1 - \frac{1}{2(1+\exp(B))}\Big)\bigg\}.$$

Next, we bound the probabilities on the right hand side using Chernoff’s inequality (Boucheron et al., 2013). First, note that:

$$P\big(x_{W_k,l}^{(j)} > B\big) = P\big(x_{W_k,l}^{(j)} - \mu_x^{(j)} - s_{W_k}d_j > B - \mu_x^{(j)} - s_{W_k}d_j\big) \overset{(p)}{\le} P\Big(x_{W_k,l}^{(j)} - \mu_x^{(j)} - s_{W_k}d_j > \frac{B}{2}\Big) \overset{(q)}{\le} \exp\Big(-\frac{B^2}{8\sigma_m^2}\Big),$$

where (p) follows if $B > 2\big(\mu_x + \max_{k,j}\{|s_{W_k} d_j|\}\big)$ (which will hold under the conditions in Equation 29), and (q) has been derived by applying the Chernoff bound to the Gaussian random variable $x_{W_k,l}^{(j)}$. From the same reasoning, we see that $P\big(x_{W_k,l}^{(j)} < -B\big) \le \exp\big(-\frac{B^2}{8\sigma_m^2}\big)$. Combining these two results, we get the upper bound:

$$(51)\quad P\big(|x_{W_k,l}^{(j)}| > B\big) \le 2\exp\Big(-\frac{B^2}{8\sigma_m^2}\Big).$$

Next, note that:

$$(52)\quad P\Big(\bar n_{W_k,l}^{(j)} < \frac{1}{2(1+\exp(B))}\Big) \overset{(r)}{\le} P\Big(\bar n_{W_k,l}^{(j)} - \phi(x_{W_k,l}^{(j)}) < -\frac{1}{2(1+\exp(B))}\Big) \overset{(s)}{\le} \exp\Big(-\frac{W}{16(1+\exp(B))^2}\Big),$$

where (r) follows from the observation $\frac{1}{1+\exp(B)} < \phi(x_{W_k,l}^{(j)})$ (which is a consequence of $|x_{W_k,l}^{(j)}| \le B$). Then, we note that the zero-mean random variable $\bar n_{W_k,l}^{(j)} - \phi(x_{W_k,l}^{(j)})$ is sub-Gaussian with variance factor $2/W$. Thus, using the Chernoff inequality for sub-Gaussian random variables (Boucheron et al., 2013), we derive the upper-bound (s) in Equation 52. In a similar fashion, based on the observation $\phi(x_{W_k,l}^{(j)}) < 1 - \frac{1}{1+\exp(B)}$, we conclude the bound:

$$(53)\quad P\Big(\bar n_{W_k,l}^{(j)} > 1 - \frac{1}{2(1+\exp(B))}\Big) \le \exp\Big(-\frac{W}{16(1+\exp(B))^2}\Big).$$

By combining the bounds in Equation 51, Equation 52, and Equation 53, the upper bound on P(𝒜W) in Equation 50 takes the form:

$$P(\mathcal{A}_W) \ge 1 - 2KLN\exp\Big(-\frac{W}{16(1+\exp(B))^2}\Big) - 2KLN\exp\Big(-\frac{B^2}{8\sigma_m^2}\Big).$$

Finally, under the assumptions in Equation 29, we further simplify this bound as:

$$P(\mathcal{A}_W) \geq 1 - 2KLN\exp\left(-\frac{W^{1-\epsilon}}{64}\right) - 2KLN\,W^{-q} \geq 1 - 4KLN\,W^{-q},$$

where we have used $W \geq 2$ (which gives $\log W \geq 2\log\log W$) and $\log W \geq \frac{2\log(64q)}{1-2\epsilon}$ to show that $\frac{W^{1-\epsilon}}{64} \geq q\log W$. Thus, $\log W \geq \frac{\log(8KLN/\eta)}{q}$ ensures that $P(\mathcal{A}_W) \geq 1-\eta$, for $0 < \eta \leq \frac{1}{2}$. ■
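The three logarithmic conditions at the end of the proof translate directly into an explicit window-length requirement. A small sketch that computes the smallest admissible $W$ for given problem sizes (all numeric values below are hypothetical, chosen only to illustrate the scaling):

```python
import math

def min_window_length(K, L, N, q, eps, eta):
    """Smallest integer W satisfying the three log-conditions used to close the proof:
    W >= 2, log W >= 2 log(64 q)/(1 - 2 eps), and log W >= log(8 K L N / eta)/q."""
    assert 0.0 < eta <= 0.5 and 0.0 < eps < 0.5 and q > 0.0
    lw1 = math.log(2.0)                                   # W >= 2
    lw2 = 2.0 * math.log(64.0 * q) / (1.0 - 2.0 * eps)    # gives W^{1-eps}/64 >= q log W
    lw3 = math.log(8.0 * K * L * N / eta) / q             # gives 4 K L N W^{-q} <= eta
    return math.ceil(math.exp(max(lw1, lw2, lw3)))

# Hypothetical ensemble: K=8 stimuli, L=10 trials, N=50 neurons
W = min_window_length(K=8, L=10, N=50, q=2.0, eps=0.1, eta=0.05)
print(f"minimum window length W = {W}")
# Verify the two resulting guarantees from the proof
assert 4 * 8 * 10 * 50 * W ** (-2.0) <= 0.05
assert W ** (1.0 - 0.1) / 64.0 >= 2.0 * math.log(W)
```

The requirement grows only polylogarithmically in the ensemble size $KLN$, while the condition on $\epsilon$ and $q$ typically dominates.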

This concludes the proof of Theorem 1.■

Appendix 3

Adapting the proposed signal and noise correlation estimates to spiking observations

While Algorithm 1 takes two-photon fluorescence observations as input and produces estimates of signal and noise correlations as output, it can also be adapted to spiking observations obtained from electrophysiology recordings. The resulting procedure, given below for completeness, follows by simplifying the variational inference steps of Algorithm 1:

Algorithm 2 Estimation of $\boldsymbol{\Sigma}_x$ and $\mathbf{D}$ from spiking observations
Inputs: Ensemble of spiking observations $\{\mathbf{n}_{t,l}\}_{t,l=1}^{T,L}$, constant $\boldsymbol{\mu}_x$, hyper-parameters $\boldsymbol{\psi}_x$ and $\rho_x$, tolerance at convergence $\delta$, and the external stimulus $\mathbf{s}_t$
Outputs: $\widehat{\boldsymbol{\Sigma}}_x$ and $\widehat{\mathbf{D}}$
Initialization: Initial choice of $\mathbf{P}_x$, $\widetilde{\boldsymbol{\Omega}}_{t,l}$, $\widehat{\boldsymbol{\Sigma}}_x$ and $\widehat{\mathbf{D}}$; $\mathsf{residual} = 10\delta$, $\gamma_x = \rho_x + LT$
1. while $\mathsf{residual} \geq \delta$ do
Update variational parameters
2.      for $t = 1,\ldots,T$ and $l = 1,\ldots,L$ do
3.           $\mathbf{Q}_{\mathbf{x}}^{t,l} = \left(\widetilde{\boldsymbol{\Omega}}_{t,l} + \gamma_x \mathbf{P}_x^{-1}\right)^{-1}$
4.           $\mathbf{m}_{\mathbf{x}}^{t,l} = \mathbf{Q}_{\mathbf{x}}^{t,l}\left(\mathbf{n}_{t,l} - \frac{1}{2}\mathbf{1} - \widetilde{\boldsymbol{\Omega}}_{t,l}\widehat{\mathbf{D}}\mathbf{s}_t + \gamma_x \mathbf{P}_x^{-1}\boldsymbol{\mu}_x\right)$
5.           for $j = 1,\ldots,N$ do
6.                $c_{t,l}(j) = \sqrt{\left(\mathbf{Q}_{\mathbf{x}}^{t,l}\right)_{j,j} + \left(\mathbf{m}_{\mathbf{x}}^{t,l}(j) + \widehat{\mathbf{d}}_j^{\top}\mathbf{s}_t\right)^2}$
7.                $\left(\widetilde{\boldsymbol{\Omega}}_{t,l}\right)_{j,j} := \frac{1}{2 c_{t,l}(j)}\tanh\left(\frac{c_{t,l}(j)}{2}\right)$
8.           end for
9.      end for
10.    $\mathbf{P}_x := \boldsymbol{\psi}_x + \sum_{t,l=1}^{T,L}\left\{\mathbf{Q}_{\mathbf{x}}^{t,l} + \mathbf{m}_{\mathbf{x}}^{t,l}{\mathbf{m}_{\mathbf{x}}^{t,l}}^{\top} - \boldsymbol{\mu}_x{\mathbf{m}_{\mathbf{x}}^{t,l}}^{\top} - \mathbf{m}_{\mathbf{x}}^{t,l}\boldsymbol{\mu}_x^{\top} + \boldsymbol{\mu}_x\boldsymbol{\mu}_x^{\top}\right\}$
Update outputs and the convergence criterion
11.    for $j = 1,\ldots,N$ do
12.         $\widehat{\mathbf{d}}_j = \left(\sum_{t,l=1}^{T,L}\left(\widetilde{\boldsymbol{\Omega}}_{t,l}\right)_{j,j}\mathbf{s}_t\mathbf{s}_t^{\top}\right)^{-1}\left(\sum_{t,l=1}^{T,L}\left\{\left(\mathbf{n}_{t,l}(j) - \frac{1}{2}\right)\mathbf{s}_t - \left(\widetilde{\boldsymbol{\Omega}}_{t,l}\right)_{j,j}\mathbf{m}_{\mathbf{x}}^{t,l}(j)\,\mathbf{s}_t\right\}\right)$
13.    end for
14.    $(\widehat{\mathbf{D}})_{\mathsf{prev}} = \widehat{\mathbf{D}}$, $\widehat{\mathbf{D}} = \left[\widehat{\mathbf{d}}_1, \widehat{\mathbf{d}}_2, \ldots, \widehat{\mathbf{d}}_N\right]^{\top}$
15.    $(\widehat{\boldsymbol{\Sigma}}_x)_{\mathsf{prev}} = \widehat{\boldsymbol{\Sigma}}_x$, $\widehat{\boldsymbol{\Sigma}}_x = \frac{\mathbf{P}_x}{\gamma_x + N + 1}$
16.    $\mathsf{residual} = \left\|(\widehat{\boldsymbol{\Sigma}}_x)_{\mathsf{prev}} - \widehat{\boldsymbol{\Sigma}}_x\right\|_2 / \left\|(\widehat{\boldsymbol{\Sigma}}_x)_{\mathsf{prev}}\right\|_2 + \left\|(\widehat{\mathbf{D}})_{\mathsf{prev}} - \widehat{\mathbf{D}}\right\|_2 / \left\|(\widehat{\mathbf{D}})_{\mathsf{prev}}\right\|_2$
17. end while
18. Return $\widehat{\boldsymbol{\Sigma}}_x$ and $\widehat{\mathbf{D}}$
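The updates above can be sketched in NumPy. This is a minimal illustrative implementation, not the authors' released MATLAB code: the function name, array shapes, iteration cap, numerical guard on the first residual, and the synthetic data at the bottom are all assumptions made for demonstration.

```python
import numpy as np

def estimate_sigma_d(n, s, mu_x, psi_x, rho_x, delta=1e-4, max_iter=200):
    """Variational estimation of the latent covariance Sigma_x and stimulus weights D
    from binary spiking observations, following the update rules of Algorithm 2.
    n: (T, L, N) spikes in {0, 1}; s: (T, M) stimulus; mu_x: (N,) prior mean;
    psi_x: (N, N) inverse-Wishart scale; rho_x: scalar hyper-parameter."""
    T, L, N = n.shape
    M = s.shape[1]
    gamma_x = rho_x + L * T
    P_x = psi_x.copy()                         # initialization
    Sigma_hat = P_x / (gamma_x + N + 1)
    D_hat = np.zeros((N, M))
    Omega = np.full((T, L, N), 0.25)           # diagonal variational parameters
    residual, it = 10.0 * delta, 0
    while residual >= delta and it < max_iter:
        P_inv = np.linalg.inv(P_x)
        P_x_new = psi_x.copy()
        m = np.zeros((T, L, N))
        for t in range(T):
            for l in range(L):
                # Lines 3-4: Gaussian variational posterior of the latent x_{t,l}
                Q = np.linalg.inv(np.diag(Omega[t, l]) + gamma_x * P_inv)
                m[t, l] = Q @ (n[t, l] - 0.5 - Omega[t, l] * (D_hat @ s[t])
                               + gamma_x * P_inv @ mu_x)
                # Lines 6-7: Polya-Gamma style update of the variational parameters
                c = np.sqrt(np.diag(Q) + (m[t, l] + D_hat @ s[t]) ** 2)
                Omega[t, l] = np.tanh(c / 2.0) / (2.0 * c)
                # Line 10 accumulation: Q + (m - mu)(m - mu)^T
                dm = m[t, l] - mu_x
                P_x_new += Q + np.outer(dm, dm)
        P_x = P_x_new
        D_prev, Sigma_prev = D_hat.copy(), Sigma_hat.copy()
        # Lines 12-13: weighted least-squares update of each row of D
        for j in range(N):
            A = np.zeros((M, M))
            b = np.zeros(M)
            for t in range(T):
                for l in range(L):
                    A += Omega[t, l, j] * np.outer(s[t], s[t])
                    b += (n[t, l, j] - 0.5 - Omega[t, l, j] * m[t, l, j]) * s[t]
            D_hat[j] = np.linalg.solve(A, b)
        # Line 15: posterior mean of the inverse-Wishart covariance
        Sigma_hat = P_x / (gamma_x + N + 1)
        # Line 16 (small guard added so the all-zero initial D does not divide by zero)
        residual = (np.linalg.norm(Sigma_prev - Sigma_hat) / np.linalg.norm(Sigma_prev)
                    + np.linalg.norm(D_prev - D_hat) / np.linalg.norm(D_prev + 1e-12))
        it += 1
    return Sigma_hat, D_hat

# Tiny synthetic example (illustrative dimensions only)
rng = np.random.default_rng(0)
T, L, N, M = 5, 3, 4, 2
s = rng.standard_normal((T, M))
n = (rng.random((T, L, N)) < 0.5).astype(float)
Sigma_hat, D_hat = estimate_sigma_d(n, s, mu_x=np.zeros(N),
                                    psi_x=np.eye(N), rho_x=N + 2.0, max_iter=15)
print(Sigma_hat.shape, D_hat.shape)
```

Because $\mathbf{P}_x$ is a sum of $\boldsymbol{\psi}_x$ and positive semi-definite terms, the returned covariance estimate stays symmetric positive definite throughout the iterations.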

Data availability

A MATLAB implementation of the proposed method has been archived on GitHub at https://github.com/Anuththara-Rupasinghe/Signal-Noise-Correlation (copy archived at https://archive.softwareheritage.org/swh:1:rev:7397cc8d751a128f41df81f8af160014b22974d6). The data used in this work have been deposited in the Digital Repository at the University of Maryland at http://hdl.handle.net/1903/26917.

The following data sets were generated
    1. Rupasinghe A
    2. Francis N
    3. Liu J
    4. Bowen Z
    5. Kanold PO
    6. Babadi B
    (2021) Digital Repository at the University of Maryland
    ID 1903/26917. Experimental Data from `Direct Extraction of Signal and Noise Correlations from Two-Photon Calcium Imaging of Ensemble Neuronal Activity'.

References

  1. Conference
    1. Aitchison L
    2. Russell L
    3. Packer AM
    4. Yan J
    5. Castonguay P
    6. Hausser M
    7. Turaga SC
    (2017)
    Model-based bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit
    Advances in Neural Information Processing Systems. pp. 3486–3495.
  2. Thesis
    1. Beal MJ
    (2003)
    Variational algorithms for approximate Bayesian inference, PhD thesis
    University of London, University College London (United Kingdom).
  3. Book
    1. Bishop CM
    (2006)
    Pattern Recognition and Machine Learning (Information Science and Statistics)
    Berlin, Heidelberg: Springer-Verlag.
  4. Book
    1. Boucheron S
    2. Lugosi G
    3. Massart P
    (2013)
    Concentration Inequalities: A Nonasymptotic Theory of Independence
    Oxford University Press.
  5. Book
    1. Keeley SL
    2. Aoi MC
    3. Yu Y
    4. Smith SL
    5. Pillow JW
    (2020) Identifying signal and noise structure in neural population activity with Gaussian process factor models
    In: Larochelle H, Ranzato M, Hadsell R, Balcan M. F, Lin H, editors. Advances in Neural Information Processing Systems. Curran Associates, Inc. pp. 1–84.
    https://doi.org/10.1101/2020.07.23.217984
  6. Conference
    1. Linderman S
    2. Adams RP
    3. Pillow JW
    (2016)
    Bayesian latent structure discovery from multi-neuron recordings
    Advances in Neural Information Processing Systems. pp. 2002–2010.
  7. Book
    1. Pillow JW
    2. Scott J
    (2012)
    Fully Bayesian inference for neural models with negative-binomial spiking
    In: Pereira F, Burges C. J. C, Bottou L, Weinberger K. Q, editors. Advances in Neural Information Processing Systems. Curran Associates, Inc. pp. 1898–1906.
  8. Report
    1. Rupasinghe A
    2. Francis N
    3. Liu J
    4. Bowen Z
    5. Kanold PO
    6. Babadi B
    (2021)
    Experimental Data From ‘Direct Extraction of Signal and Noise Correlations From Two-Photon Calcium Imaging of Ensemble Neuronal Activity’
    Digital Repository at the University of Maryland (DRUM).

Article and author information

Author details

  1. Anuththara Rupasinghe

    Department of Electrical and Computer Engineering, University of Maryland, College Park, United States
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-2143-8709
  2. Nikolas Francis

    1. The Institute for Systems Research, University of Maryland, College Park, United States
    2. Department of Biology, University of Maryland, College Park, United States
    Contribution
    Conceptualization, Resources, Data curation, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    Competing interests
    No competing interests declared
  3. Ji Liu

    1. The Institute for Systems Research, University of Maryland, College Park, United States
    2. Department of Biology, University of Maryland, College Park, United States
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    Competing interests
    No competing interests declared
  4. Zac Bowen

    1. The Institute for Systems Research, University of Maryland, College Park, United States
    2. Department of Biology, University of Maryland, College Park, United States
    Contribution
    Data curation, Investigation, Writing - review and editing
    Competing interests
    No competing interests declared
  5. Patrick O Kanold

    1. The Institute for Systems Research, University of Maryland, College Park, United States
    2. Department of Biology, University of Maryland, College Park, United States
    3. Department of Biomedical Engineering, Johns Hopkins University, Baltimore, United States
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Validation, Investigation, Project administration, Writing - review and editing
    Competing interests
    No competing interests declared
  6. Behtash Babadi

    Department of Electrical and Computer Engineering, University of Maryland, College Park, United States
    Contribution
    Conceptualization, Resources, Formal analysis, Supervision, Funding acquisition, Validation, Investigation, Visualization, Methodology, Writing - original draft, Project administration, Writing - review and editing
    For correspondence
    behtash@umd.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-9856-006X

Funding

National Science Foundation (1807216)

  • Behtash Babadi

National Science Foundation (2032649)

  • Behtash Babadi

National Institutes of Health (1U19NS107464)

  • Patrick O Kanold
  • Behtash Babadi

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

The authors thank Daniel E Winkowski for collecting the data in Bowen et al., 2019 that was also used in this work.

Ethics

Animal experimentation: All procedures, under Kanold lab protocol R-JAN-19-06, conformed to the guidelines of the University of Maryland Institutional Animal Care and Use Committee and the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health.

Version history

  1. Received: March 3, 2021
  2. Accepted: June 27, 2021
  3. Accepted Manuscript published: June 28, 2021 (version 1)
  4. Version of Record published: August 10, 2021 (version 2)

Copyright

© 2021, Rupasinghe et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.



Cite this article

  1. Anuththara Rupasinghe
  2. Nikolas Francis
  3. Ji Liu
  4. Zac Bowen
  5. Patrick O Kanold
  6. Behtash Babadi
(2021)
Direct extraction of signal and noise correlations from two-photon calcium imaging of ensemble neuronal activity
eLife 10:e68046.
https://doi.org/10.7554/eLife.68046

