Cortical state transitions and stimulus response evolve along stiff and sloppy parameter dimensions, respectively
Abstract
Previous research showed that spontaneous neuronal activity presents sloppiness: the collective behavior is strongly determined by a small number of parameter combinations, defined as ‘stiff’ dimensions, while it is insensitive to many others (‘sloppy’ dimensions). Here, we analyzed neural population activity from the auditory cortex of anesthetized rats while the brain spontaneously transited through different synchronized and desynchronized states and intermittently received sensory inputs. We showed that cortical state transitions were determined by changes in stiff parameters associated with the activity of a core of neurons with low responses to stimuli and high centrality within the observed network. In contrast, stimulus-evoked responses evolved along sloppy dimensions associated with the activity of neurons with low centrality and displaying large ongoing and stimulus-evoked fluctuations without affecting the integrity of the network. Our results shed light on the interplay among stability, flexibility, and responsiveness of neuronal collective dynamics during intrinsic and induced activity.
Introduction
How biological systems achieve a trade-off between stability and flexibility is a central question in biology. A candidate explanation for the coexistence of these two features is sloppiness (Machta et al., 2013; Transtrum et al., 2015). In general, sloppiness is a property of complex models exhibiting large parameter uncertainty when fit to data, meaning that different combinations of parameters lead to similar system behavior, while changes in a few critical parameters, called stiff parameters, modify it significantly. In this way, biological systems can either be robust to large fluctuations of input/environmental signals, whose effects are embedded in a high-dimensional subspace of insensitive parameters, or, on the contrary, by tuning a few parameters, be configured to be highly sensitive and selective to relevant signals.
Recently, it has been shown that the spontaneous activity of neural circuits presents sloppiness both in vitro and in vivo (Panas et al., 2015), suggesting that collective activity is stabilized by a subset of highly active and stable neurons, while the activity and coactivity of the remaining neurons present larger spontaneous fluctuations without strongly affecting the collective statistics. However, this view is challenged by extensive research showing that spontaneous cortical activity transits through different synchronized and desynchronized cortical states (Marguet and Harris, 2011; Harris and Thiele, 2011; Luczak et al., 2013; Pachitariu et al., 2015) that represent statistically different collective behaviors (Hahn et al., 2017) with different information processing capabilities (Pachitariu et al., 2015; Engel et al., 2016; Beaman et al., 2017). Moreover, how sensory inputs affect sloppiness is unknown, and this question is relevant for understanding how sensory stimuli change the network state in a way that ensures both responsiveness and stability. In the present study, we examined how changes in neural network parameters correlate with spontaneous transitions among cortical states and stimulus-evoked responses.
To answer these questions, we recorded the neuronal spiking activity in the primary auditory cortex (A1) of six anesthetized rats. We analyzed the joint activity of groups of neurons while the cortex spontaneously transited through different synchronized and desynchronized cortical states and intermittently received external acoustic stimuli. We used a statistical model to describe the joint spiking activity with a small number of parameters. We found that the estimated parameters of neuronal ensemble activity presented sloppiness and that sensory inputs and cortical state transitions evolved along different pathways in parameter space. Specifically, we found that cortical state transitions evolve along stiff dimensions, whereas sensory-evoked activity evolves along sloppy dimensions. Finally, we showed that stiff parameters are related to the activity and coactivity of neurons with high centrality within the functional network of the recorded neurons.
Results
We recorded spontaneous and stimulus-evoked population activity from the primary auditory cortex (A1) of urethane-anesthetized rats (n = 6) using multisite silicon microelectrodes (see Materials and methods). The data comprised activity from ${N}_{\text{pop}}$ well-isolated single units (${N}_{\text{pop}}=$ 44–147 neurons) and some spike trains from multi-unit activity (3–103 spike trains). Unless otherwise specified, the analyses presented here focused on single-unit activity only. We analyzed the data during spontaneous activity and in response to acoustic 'clicks' (5-ms square pulses; inter-stimulus interval, 2.5 or 3.5 s). To track the evolution of the neuronal activity, we divided each recording session into ${N}_{E}$ adjacent epochs of 100 s, each one containing 12–29 stimulus presentations. Within each 100-s epoch, the data were separated into spontaneous activity, that is, the activity during 1.5-s intervals preceding each stimulus (i.e., 18–43.5 s of spontaneous activity in total for each epoch), and stimulus-evoked activity, that is, the activity right after the stimulus onset (Figure 1A–B).
Description of spontaneous activity patterns using maximum entropy models
We first examined the temporal evolution of the spontaneous activity across the ${N}_{E}$ epochs. Because we were interested in the evolution of the statistics of ensemble activity, we described the collective activity of groups of $N$ single units using a maximum entropy model (MEM) (Schneidman et al., 2006; Shlens et al., 2009; Tkačik et al., 2015) in each epoch (see Materials and methods and Figure 1C–D). These models allowed us to describe the patterned activity with a small number of parameters. To fit the model, time was discretized in bins of dt = 10 ms. Within each time bin, the ensemble activity of $N$ neurons was described by a binary vector, $\overrightarrow{\sigma}=\left[{\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{N}\right]$, where ${\sigma}_{i}=+1$ if the i-th neuron fired a spike in that time bin and ${\sigma}_{i}=-1$ otherwise. The collective activity was determined by the probability distribution $P\left(\overrightarrow{\sigma}\right)$ over all ${2}^{N}$ possible binary patterns. The MEM fits ${P}_{\text{data}}\left(\overrightarrow{\sigma}\right)$ by finding a distribution ${P}_{\text{MEM}}\left(\overrightarrow{\sigma}\right)$ that maximizes its entropy under the constraint that the activation rates ($\langle {\sigma}_{i}\rangle$) and the pairwise correlations ($\langle {\sigma}_{i}{\sigma}_{j}\rangle$) found in the data are preserved in the model. It is known that the maximum entropy distribution consistent with these constraints is the Boltzmann distribution, $P\left(\overrightarrow{\sigma}\right)\propto {e}^{-E\left(\overrightarrow{\sigma}\right)}$, where $E\left(\overrightarrow{\sigma}\right)$ is the energy of the pattern $\overrightarrow{\sigma}$, given by: $E\left(\overrightarrow{\sigma}\right)=-\sum _{i=1}^{N}\left[{h}_{i}{\sigma}_{i}+\frac{1}{2}\sum _{j=1}^{N}{J}_{ij}{\sigma}_{i}{\sigma}_{j}\right]$ (Schneidman et al., 2006; Tkačik et al., 2015).
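For concreteness, the binarization and the pairwise energy function above can be sketched in Python (illustrative code with our own helper names, not the original analysis pipeline):

```python
import numpy as np
from itertools import product

def binarize(spike_times_per_neuron, t_start, t_stop, dt=0.01):
    """Return a (n_bins x N) array with sigma = +1 if the neuron spiked in the bin, -1 otherwise."""
    n_bins = int(round((t_stop - t_start) / dt))
    sigma = -np.ones((n_bins, len(spike_times_per_neuron)))
    for i, spikes in enumerate(spike_times_per_neuron):
        bins = ((np.asarray(spikes) - t_start) / dt).astype(int)
        bins = bins[(bins >= 0) & (bins < n_bins)]
        sigma[bins, i] = 1.0
    return sigma

def energy(sigma, h, J):
    """Pairwise MEM energy: E(sigma) = -sum_i h_i sigma_i - (1/2) sum_ij J_ij sigma_i sigma_j."""
    return -sigma @ h - 0.5 * sigma @ J @ sigma

def boltzmann_distribution(h, J):
    """Exact P(sigma) ∝ exp(-E(sigma)) over all 2^N patterns (tractable for small N)."""
    N = len(h)
    patterns = np.array(list(product([-1.0, 1.0], repeat=N)))
    logp = np.array([-energy(s, h, J) for s in patterns])
    p = np.exp(logp - logp.max())  # shift for numerical stability
    return patterns, p / p.sum()
```

For $N$ = 10 the exact enumeration over ${2}^{10}=1024$ patterns is cheap, which is what makes the epoch-wise fits tractable.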
The model parameter ${h}_{i}$ represents the intrinsic tendency of neuron i towards activation (${\sigma}_{i}=+1$) or silence (${\sigma}_{i}=-1$) and the parameter ${J}_{ij}$ represents the effective interaction between neurons i and j. Once we learned the parameters $\mathrm{\Omega}=\left\{\mathit{h},\mathit{J}\right\}$ using a gradient descent algorithm (see Materials and methods), the expected probability of any pattern is known. For each recording session and for each of the ${N}_{E}$ epochs, we fitted the model using the spontaneous binarized activity from an ensemble of $N$ = 10 randomly selected single neurons from the entire population of ${N}_{pop}$ single neurons. We chose $N$ = 10 because 100-s epochs provided around 5000 observed spontaneous patterns, which is a reasonable amount to estimate the distribution of the ${2}^{10}=1024$ possible patterns. To accurately estimate models of larger $N$, the epochs would need to be much longer, preventing us from investigating the temporal evolution of the model along the experiment. We finally repeated the process of randomly choosing $N$ = 10 single units $Q$ times for each experiment (for datasets 3 and 5: $Q=10$ ensembles; otherwise: $Q=20$). In summary, for each recording session, we built $Q\times {N}_{E}$ models, each composed of 10 units. Before studying the evolution of the model parameters $\mathrm{\Omega}\left(t\right)$ across epochs ($t=$ 0, 1, 2, ...), we first evaluated how well the MEM fitted the data.
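The gradient-descent fit can be sketched as follows: Boltzmann learning matches the model's first and second moments to those of the data, with the model moments computed by exact enumeration (feasible for $N$ = 10). The learning rate and iteration count below are illustrative, not the values used in the paper:

```python
import numpy as np
from itertools import product

def fit_mem(sigma_data, lr=0.1, n_iter=2000):
    """Boltzmann learning: adjust (h, J) until model <sigma_i> and
    <sigma_i sigma_j> match the empirical moments."""
    n_bins, N = sigma_data.shape
    m_data = sigma_data.mean(axis=0)              # empirical <sigma_i>
    C_data = sigma_data.T @ sigma_data / n_bins   # empirical <sigma_i sigma_j>
    patterns = np.array(list(product([-1.0, 1.0], repeat=N)))
    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(n_iter):
        # model distribution under current parameters
        logp = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        m_model = p @ patterns
        C_model = patterns.T @ (patterns * p[:, None])
        # ascend the log-likelihood: gradient is (data moment - model moment)
        h += lr * (m_data - m_model)
        dJ = lr * (C_data - C_model)
        np.fill_diagonal(dJ, 0.0)                 # sigma_i^2 = 1, no self-coupling
        J += dJ
    return h, J
```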
For each epoch, we used the Jensen–Shannon divergence (${D}_{JS}$, see Materials and methods) to measure the similarity between the probability distributions of the empirical and model binary patterns (Figure 2A–C). We compared this similarity to the distribution of binary patterns predicted from independent MEMs, for which only the activation rates were preserved (i.e., only $\mathit{h}$ was optimized). We found that the empirical distribution was well approximated by MEMs and that, for all recording sessions, the goodness-of-fit (i.e., ${1/D}_{JS}$) was orders of magnitude higher for MEMs than for independent MEMs (Figure 2C), leading to excellent model performances (i.e., Kullback–Leibler ratio equal to 0.95 ± 0.03 on average, see Materials and methods).
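A minimal implementation of the Jensen–Shannon divergence between two pattern distributions, assuming both are given over the same fully enumerated set of patterns (illustrative code):

```python
import numpy as np

def jensen_shannon(p, q, eps=1e-12):
    """D_JS(p, q): symmetrized KL divergence to the mixture m = (p+q)/2, in bits.
    Bounded between 0 (identical distributions) and 1 (disjoint supports)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # terms with a = 0 contribute zero by convention
        return np.sum(a[mask] * np.log2(a[mask] / (b[mask] + eps)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```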
Temporal evolution of activity observables, model parameters, and their sensitivity
We next analyzed the temporal evolution of the different spiking data statistics and the model parameters. We first measured the temporal variation of the activity observables (i.e., firing rates and pairwise correlations) by calculating the average Pearson correlation (or similarity $\gamma $; see Materials and methods) between the values in epoch $t$ and those in epoch $t+\mathrm{\Delta}t$ (Figure 3A). This similarity rapidly decayed with $\mathrm{\Delta}t$, indicating that the observables substantially changed over time. We next examined how much these variations influenced the evolution of the collective activity characterized by the distribution of binary patterns. For this, we evaluated how well the data in a given epoch $t$ could be explained by the MEM constructed using the data at time $t+\mathrm{\Delta}t$. Specifically, we calculated $\langle {D}_{JS}\rangle \left(\mathrm{\Delta}t\right)$, given by the average Jensen–Shannon divergence between the distribution of data binary patterns in epoch $t$, that is, ${P}_{\text{data},t}$, and the distribution of binary patterns predicted by the MEM constructed using the data in epoch $t+\mathrm{\Delta}t$, that is, ${P}_{\text{MEM},t+\mathrm{\Delta}t}$ (see Materials and methods). We found that $\langle {D}_{JS}\rangle \left(\mathrm{\Delta}t\right)$ increased as a function of $\mathrm{\Delta}t$, indicating that the collective activity changed during the recording session, so that the model parameters $\mathrm{\Omega}\left(t\right)$ at epoch $t$ did not predict the collective activity at epoch $t+\mathrm{\Delta}t$ (Figure 3B). Indeed, the model parameters substantially changed over time, with a rapidly decaying similarity (Figure 3C, orange and purple traces).
As shown in Panas et al. (2015), changes in model parameters can contribute differently to collective activity, since the model can be sensitive to changes in only a few combinations of parameters. Following this, we next evaluated the sensitivity of model parameters by calculating the Fisher information matrix (FIM, see Materials and methods) for each neuronal ensemble and each epoch. The FIM measures how much the model log-likelihood ${P}_{\text{MEM}}\left(\overrightarrow{\sigma}\mid \mathrm{\Omega}\right)$ changes with respect to changes in the parameters $\mathrm{\Omega}$. We first noticed that the FIM had the highest stability across time, compared to the data firing rates and correlations and the model parameters (Figure 3C, blue trace). Indeed, the similarity after half an hour was $\gamma =$ 0.882 ± 0.002 for the FIM, $\gamma =$ 0.732 ± 0.003 for the firing rates, $\gamma =$ 0.551 ± 0.004 for the biases, $\gamma =$ 0.364 ± 0.004 for the correlations, and $\gamma =$ 0.234 ± 0.003 for the couplings (F_{4,495} = 305.73, p<0.001, one-way ANOVA followed by Tukey's post hoc analysis) (Figure 3D). Altogether, these results show that the sensitivity of the model parameters remained relatively stable despite substantial changes in firing rates, correlations, collective activity, and the model parameters themselves.
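Because the MEM is an exponential-family model, its FIM equals the covariance matrix of the sufficient statistics ($\sigma_i$ and $\sigma_i\sigma_j$) under the model distribution, which can be computed exactly for small $N$. A sketch under that standard identity (our notation, not the paper's code):

```python
import numpy as np
from itertools import product

def fisher_information(h, J):
    """FIM of the pairwise MEM: covariance of the sufficient statistics
    (sigma_i for each neuron, sigma_i*sigma_j for each pair i<j) under the model."""
    N = len(h)
    patterns = np.array(list(product([-1.0, 1.0], repeat=N)))
    logp = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)
    p = np.exp(logp - logp.max())
    p /= p.sum()
    iu, ju = np.triu_indices(N, k=1)
    # sufficient statistics per pattern: N fields + N(N-1)/2 couplings
    T = np.hstack([patterns, patterns[:, iu] * patterns[:, ju]])
    mean_T = p @ T
    return (T * p[:, None]).T @ T - np.outer(mean_T, mean_T)
```

For an independent model ($h = 0$, $J = 0$), the statistics are uncorrelated with unit variance, so the FIM is the identity; structured couplings introduce the off-diagonal entries whose eigen-decomposition defines the stiff and sloppy dimensions.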
Spontaneous neuronal activity presents sloppiness
Having shown that the sensitivity of model parameters was relatively stable during the recording sessions, we next studied the structure of the FIMs. First, we noted that most elements of the FIM had near-zero values (Figure 4A), indicating that most of the parameters had a small effect on the model log-likelihood. In contrast, a small fraction of elements had values strongly different from zero, as revealed by the heavy tail of the distribution of FIM values (Figure 4A). To identify the parameter combinations that had the strongest effect on model behavior, we decomposed the FIM into eigenvectors and classified them according to their eigenvalues (Figure 4B). We observed that, except for a few, the FIM eigenvalues were small, corresponding to combinations of parameters that had little effect on model behavior. These unimportant parameter combinations defined the sloppy dimensions of the model. The few eigenvectors with large eigenvalues defined the stiff parameter dimensions, along which the model behavior was strongly affected.
In the following, we show that the temporal evolution of the model parameters occurred predominantly along the sloppy dimensions. For this, we projected the parameters $\mathrm{\Omega}\left({t}^{\prime}\right)$, calculated at time ${t}^{\prime}$, onto the eigenvectors of the FIM at time $t$, denoted ${\mathit{\nu}}_{t,1},{\mathit{\nu}}_{t,2},\dots ,{\mathit{\nu}}_{t,k},\dots $, where $k$ is the rank of the eigenvector (Figure 4C). For each dimension, or eigenvector, we obtained a distribution of projections of parameters $\mathrm{\Omega}\left({t}^{\prime}\right)$ (Figure 4D). To quantify how much the parameters varied along each eigenvector, we calculated the average variance of each projection as a function of the rank of the eigenvector. We found that the projection variance increased as a function of the eigenvector’s rank for all datasets (Figure 4E). This indicates that the model parameters predominantly evolved along sloppy dimensions (i.e., FIM eigenvectors of highest rank $k$), while they remained relatively stable along stiff dimensions (i.e., FIM eigenvectors of lowest rank $k$). Using stationary surrogate data, we verified that these parameter fluctuations were not fully explained by estimation errors and, furthermore, that parameter fluctuations along sloppy dimensions were those that deviated the most from the stationary case (see Figure 4—figure supplement 1). Nevertheless, we noted that the projection variance onto the stiff dimensions, albeit small, was not zero. This means that the model also evolved along parameter dimensions that had a strong impact on the collective activity. We hypothesized that changes in collective behavior, associated with changes in stiff parameters, were related to changes in cortical state.
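The projection analysis above can be sketched as follows, assuming a FIM and a history of fitted parameter vectors are available (illustrative code):

```python
import numpy as np

def projection_variance_by_rank(fim, theta_history):
    """Project parameter vectors Omega(t') onto the FIM eigenvectors
    (rank 1 = largest eigenvalue = stiffest dimension) and return the
    variance of the projections at each rank."""
    evals, evecs = np.linalg.eigh(fim)        # eigh returns ascending eigenvalues
    order = np.argsort(evals)[::-1]           # reorder: stiff dimensions first
    evecs = evecs[:, order]
    proj = np.asarray(theta_history) @ evecs  # (n_epochs x n_params) projections
    return proj.var(axis=0)
```

Under the paper's finding, this variance would be small at low ranks (stiff) and grow with rank (sloppy); the sign ambiguity of eigenvectors does not affect the variance.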
Cortical state transitions evolve along stiff dimensions
To test this hypothesis, we first measured the cortical state in each epoch $t$ using the silence density, $CS\left(t\right)$, defined as the fraction of 20-ms time bins with zero population activity, that is, no spikes from any neuron (see Materials and methods) (Luczak et al., 2013; Pachitariu et al., 2015; Mochol et al., 2015). To obtain the most accurate estimate of silence density, we used all the spikes from the merge of all the single units and multi-units in the calculation of $CS\left(t\right)$. During the course of the experiment, we observed large fluctuations in silence density, with low and high values associated with desynchronized and synchronized cortical states, respectively (Figure 5A). We found that differences in collective dynamics in different epochs, quantified by ${D}_{JS}\left(t,{t}^{\prime}\right)={D}_{JS}\left({P}_{\text{data},t};{P}_{\text{data},{t}^{\prime}}\right)$, significantly covaried with the changes in cortical state, given by $d=\left|CS\left(t\right)-CS\left({t}^{\prime}\right)\right|$ (average correlation coefficient 0.56 ± 0.08, p < 0.001) (Figure 5B–C). Thus, changes in collective behavior correlated with changes in cortical state.
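The silence-density computation reduces to binning the merged population spike train and counting empty bins; a minimal sketch:

```python
import numpy as np

def silence_density(spike_times, t_start, t_stop, dt=0.02):
    """Fraction of dt-bins (20 ms here) containing no spikes from the
    merged population activity: high values = synchronized state."""
    n_bins = int(round((t_stop - t_start) / dt))
    edges = t_start + dt * np.arange(n_bins + 1)
    counts, _ = np.histogram(spike_times, bins=edges)
    return np.mean(counts == 0)
```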
We next asked which activity observables, that is, the firing rate of each neuron and all pairwise correlations, related more to cortical state transitions. For this, we calculated the absolute correlation, $\left|{r}_{cs}\right|$, between the cortical state $CS\left(t\right)$ and the activity observables. We found that $\left|{r}_{cs}\right|$ was broadly distributed between 0 and 0.94; thus, some observables correlated more with the cortical state than others (Figure 5D, top panel). Next, to relate the sensitivity of model parameters (their stiffness) to the activity observables, we measured the sensitivity of a given parameter by its average contribution to the first eigenvector of the FIM and associated it with the corresponding observable (Panas et al., 2015). We defined the sensitivity ${s}_{\text{ne}}$ at the neuronal ensemble level and the sensitivity $s$ at the population level (see Materials and methods). Note that the ranges of the sensitivity of biases ($\mathit{h}$) and couplings ($\mathit{J}$) were similar (Figure 5D, bottom panel), and that sensitivities calculated in the first and second halves of the recording session were highly correlated (correlation coefficient > 0.82 for all rats; average: 0.89 ± 0.03). We found a significant positive correlation between the associated sensitivity ($s$) and the correlation with the cortical state ($\left|{r}_{cs}\right|$) in 5/6 datasets (Figure 5E–F). Thus, the observables that correlated more with the cortical state were those with the highest associated sensitivity. This result led us to separate the activity observables into two classes, called 'sloppy' and 'stiff', based on whether the associated sensitivity ($s$) was lower or higher than the median of $s$. We found that stiff variables were significantly more correlated with the cortical state than sloppy variables (p<0.01 for all datasets, paired t-test; Figure 5G).
This relationship was preserved when using an alternative, more general definition of sensitivity that considered the contribution to all eigenvectors of the FIM, instead of the contribution to the first eigenvector only (see Figure 5—figure supplement 1). Altogether, these results indicate that neuronal activity and coactivity preferentially evolved along sensitive (stiff) parameter dimensions during cortical state transitions.
Sensoryevoked activity evolves along sloppy dimensions
The above results indicate that, although intrinsic spontaneous dynamics predominantly evolved along sloppy dimensions (Figure 4F), cortical state transitions were governed by changes in stiff parameters (Figure 5G). We next investigated which parameter dimensions were explored when the neural network was driven by external sensory inputs, that is, during stimulus-evoked activity (Figure 6A). We observed that evoked responses (which could be increased or decreased with respect to the pre-stimulus baseline firing rate) were larger for sloppy neurons than for stiff neurons (Figure 6B–C). To quantify the responsiveness of each neuron, we calculated its modulation index (MI, see Materials and methods) in response to acoustic stimuli. We next calculated the relation between MI, calculated during evoked activity, and the sensitivity $s$ associated with firing rates, calculated during spontaneous activity as above. We found that the more responsive neurons were those with the lowest associated sensitivity (Figure 6D–E). This indicates that stimulus-evoked neuronal activity evolved mostly along sloppy dimensions. This result was replicated when using a more general definition of sensitivity that considered the contribution to all eigenvectors of the FIM (see Figure 6—figure supplement 1A). Finally, we evaluated the difference, denoted ΔMI, between the MI of sloppy and stiff neurons as a function of the cortical state $CS\left(t\right)$. Specifically, first, the MI values in each epoch were averaged according to different ranges of the silence density. Second, the MI values of sloppy and stiff neurons were compared within each range. We found that ΔMI was maximal during desynchronized activity and minimal during synchronized activity (Figure 6F, see also Figure 6—figure supplement 1B).
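The exact MI definition is given in the paper's Materials and methods; purely for illustration, a common contrast-style form (an assumption on our part, not necessarily the authors' definition) would be:

```python
import numpy as np

def modulation_index(rate_evoked, rate_baseline, eps=1e-12):
    """Hypothetical modulation index: normalized change of the evoked firing
    rate relative to the pre-stimulus baseline, bounded in [-1, 1].
    NOTE: illustrative assumption; the paper's definition may differ."""
    rate_evoked = np.asarray(rate_evoked, float)
    rate_baseline = np.asarray(rate_baseline, float)
    return (rate_evoked - rate_baseline) / (rate_evoked + rate_baseline + eps)
```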
Thus, cortical activity during stimulus response evolved predominantly along sloppy dimensions in the desynchronized cortical state, while, in the synchronized state, the dominance of sloppy fluctuations was reduced and stiff fluctuations became comparable.
Finally, the correlations between cortical state fluctuations and sensitivity, and between MI and sensitivity, could reflect a dependency between sensitivity and model estimation errors. To test this, we evaluated the mean error of the model estimation of the observables and tested its interaction with sensitivity, cortical state fluctuations, and MI (see Appendix 1 and Appendix 1—figure 1). We found that model estimation errors correlated with sensitivity, but they could not fully explain either the positive correlation between sensitivity and cortical state or the negative correlation between sensitivity and MI.
Stiff parameters were associated with central neurons within the neuronal network
In this section, we further investigated the properties of neurons and pairs of neurons with respect to their associated parameter sensitivity. As above, we separated the neurons and pairs of neurons into two classes, called 'sloppy units/pairs' and 'stiff units/pairs', based on whether the associated sensitivity ($s$) was lower or higher than the median $s$ (units were associated with parameters ${h}_{i}$, and pairs, or links, were associated with parameters ${J}_{ij}$). With this dichotomization, we found that stiff units were significantly more active than sloppy units (Figure 7A). We quantified this by performing receiver operating characteristic (ROC) analysis and used the area under the ROC curve (AUC) as a measure of how well the firing rate distributions of the two classes were separated (AUC = 0.961–0.998, p < 0.001, for all rats; Figure 7G). Stiff neurons were also significantly more correlated among themselves than sloppy neurons (AUC = 0.615–0.932, p < 0.001, for all rats; Figure 7B,G). The distributions of correlations remained well separated when calculated for the links, that is, pairs of neurons with associated parameters ${J}_{ij}$ (AUC = 0.541–0.766, p < 0.001, for all rats; Figure 7C,G).
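The AUC can be computed directly from the two samples as the normalized Mann–Whitney U statistic, without constructing the ROC curve explicitly; a sketch:

```python
import numpy as np

def auc(values_pos, values_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    probability that a random value from the first class exceeds a random
    value from the second class (ties count one half)."""
    pos = np.asarray(values_pos, float)[:, None]
    neg = np.asarray(values_neg, float)[None, :]
    greater = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return greater / (pos.size * neg.size)
```

An AUC of 0.5 means the two distributions are indistinguishable; 1.0 means perfect separation, as for the firing-rate distributions of stiff versus sloppy units reported above.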
To further investigate the structure of correlations, we evaluated the centrality of stiff and sloppy neurons within the observed network of neurons. For this, we used the betweenness centrality (BC), a measure of node centrality in a graph or network, which in our case was given by the functional connectivity matrix among the recorded neurons (see Materials and methods). The BC measures the extent to which a node in the graph tends to lie on the shortest paths between other nodes. Thus, a node with higher BC has more influence over the network, because more information passes through that node. We found that stiff neurons had significantly higher centrality in the functional connectivity graph than sloppy neurons (AUC = 0.740–0.831, p<0.001, for all rats; Figure 7D,G). This indicates that stiff neurons were part of the core of the graph, while sloppy neurons were part of the graph periphery, as clearly shown using graph visualization (Figure 7E; Fruchterman and Reingold, 1991). BC values were correlated with firing rates (correlation coefficient: 0.59 ± 0.11), which could suggest that differences in BC between stiff and sloppy neurons were simply a consequence of differences in firing rates. However, using surrogate data that preserved the observed firing rates and produced correlations through global modulations, we found that neither the structure of correlations nor the BC values could be trivially predicted by globally modulated firing rates; rather, they were suggestive of functional interactions (see Appendix 1 and Appendix 1—figure 2). Thus, in addition to different firing rates, different correlations and BC values were supplementary features of stiff and sloppy neurons.
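For an unweighted (e.g., thresholded) graph, BC can be computed with Brandes' shortest-path-counting algorithm; the sketch below is illustrative only (the paper's analysis is based on the actual functional connectivity matrix, see its Materials and methods):

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for unweighted betweenness centrality.
    `adj` maps each node to a list of its neighbours; returns unnormalized BC
    (for an undirected graph each path is counted from both endpoints)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, pred = [], {v: [] for v in adj}
        sigma = {v: 0.0 for v in adj}   # number of shortest paths from s
        sigma[s] = 1.0
        dist = {v: -1 for v in adj}
        dist[s] = 0
        queue = deque([s])
        while queue:                    # BFS from the source s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}   # dependency accumulation
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```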
Moreover, previous work has shown that cortical neurons differ in their coupling to the population activity, with neurons that activate most often when many others are active and neurons that tend to activate more frequently when others are silent (Okun et al., 2015). Thus, along with centrality, we calculated the neuron-to-population coupling, given by the Pearson correlation between the activity of each neuron $i$ and the number of coactive neurons (excluding neuron $i$; see Materials and methods). We found that stiff neurons were significantly more coupled to the population activity than sloppy neurons (AUC = 0.603–0.939, p < 0.001, for all rats; Figure 7F,G). In summary, stiff units were more active, more central, more strongly coupled to each other, and more coupled to the population activity than sloppy units. The same results were found when using a more general definition of sensitivity that considered the contribution to all eigenvectors of the FIM (Figure 7—figure supplement 1).
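A minimal sketch of the neuron-to-population coupling (0/1 binary activity assumed; illustrative code):

```python
import numpy as np

def population_coupling(binary_activity):
    """Pearson correlation between each neuron's binary activity and the
    number of co-active neurons in the same bin, excluding the neuron itself."""
    A = np.asarray(binary_activity, float)   # (n_bins x N), entries 0/1
    pop = A.sum(axis=1)                      # population count per bin
    coupling = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        coupling[i] = np.corrcoef(A[:, i], pop - A[:, i])[0, 1]
    return coupling
```

Excluding neuron $i$ from the population count avoids the trivial self-correlation that would otherwise inflate the coupling.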
Discussion
We here studied the changes in activity caused by intrinsic (i.e., cortical state) and extrinsic (i.e., stimulus-evoked) sources in A1 neuronal ensembles in an estimated parameter space. The parameter space was obtained using the maximum entropy principle, providing a small number of parameters describing the probability of all possible binary activity patterns. These parameters differed in their impact on collective activity, which was sensitive to a few combinations of parameters, called stiff dimensions, but insensitive to many others, called sloppy dimensions. Our results suggest that spontaneous cortical state transitions and stimulus-driven activity evolved along different parameter dimensions. Indeed, on the one hand, while most of the fluctuations during spontaneous activity evolved along sloppy dimensions, some residual ongoing fluctuations evolved along stiff dimensions, and these fluctuations were correlated with synchronized/desynchronized cortical state transitions. On the other hand, stimulus-induced activity was larger in sloppy dimensions than in stiff dimensions, an effect that was most prominent during the desynchronized cortical state. Note that the observation that both spontaneous and stimulus-driven activities predominantly evolve along sloppy dimensions results from the strong similarity of spontaneous and evoked activity, reported in several previous studies (Arieli et al., 1996; Kenet et al., 2003; MacLean et al., 2005; Luczak et al., 2009). Finally, by classifying the neurons as stiff versus sloppy neurons (i.e., those contributing more or less to the principal stiff dimension), we found that the firing rates and the functional connectivity topology significantly differed between the two classes of neurons. It should be noted, however, that, since sensitivity is a continuous variable, the two classes of neurons that we defined here do not represent two disjoint groups but rather two parts of a continuum.
The observation that stimulus-induced activity evolved along sloppy dimensions can have important functional implications. It suggests that a stimulus can modulate the activity of a subset of sloppy neurons without affecting the collective activity as a whole. This could be an efficient functional architecture to encode sensory information without perturbing other ongoing or memory-stored processes. Consistent with this view and with previous studies (Margolis et al., 2012; Mizuseki and Buzsáki, 2013; Panas et al., 2015), our results suggest that the integrity of the network is ensured by a core of highly active stiff neurons, which have strong functional connections among them (either through anatomical connections or common inputs), while topologically peripheral sloppy neurons (within the functional connectivity graph) can be largely modulated by external inputs. A similar subnetwork of highly active, interconnected neurons has recently been identified in the mouse neocortex (Yassin et al., 2010). Importantly, sensory input was not required to drive these cells. Previous studies of complex systems have derived general principles of core/periphery network structures: the network periphery is more variable, evolvable, and plastic than the network core, while the network core facilitates system robustness (Kitano, 2004; Csermely et al., 2013). Thus, we hypothesize that sloppy neurons could also be more affected by synaptic plasticity, allowing for network reconfiguration without loss of stability. Consistent with this, previous work on whole-brain fMRI has observed core stability and peripheral flexibility over the course of learning (Bassett et al., 2013), and recent analyses of functional networks from calcium imaging data recorded in mouse primary auditory cortex revealed a stable core and a variable periphery over time (Betzel et al., 2019).
Furthermore, we observed that stimulus responses evolved more pronouncedly along sloppy dimensions in the desynchronized state, while in the synchronized state fluctuations along sloppy and stiff dimensions were comparable (Figure 6F). This supports the view that responses along sloppy dimensions provide information processing benefits, since previous studies have shown that auditory stimuli in rodents (Marguet and Harris, 2011; Pachitariu et al., 2015) and visual stimuli in both rats (Goard and Dan, 2009) and monkeys (Beaman et al., 2017) are better represented in the desynchronized state as compared to the synchronized state.
The properties of spontaneous and induced cortical dynamics observed in the present anesthetized condition are likely to be relevant also during wakefulness. Indeed, several studies have reported the existence of synchronized cortical states during wakefulness (for review, see Zagha and McCormick, 2014), as well as global fluctuations resembling transitions between up and down periods during alert or quiescent wakefulness (Petersen et al., 2003; Luczak et al., 2007; Poulet and Petersen, 2008; Zagha et al., 2013; Tan et al., 2014; Engel et al., 2016) or even during task engagement (Sachidhanandam et al., 2013). Moreover, since sloppiness has been observed in asynchronous spontaneous activity under light anesthesia (Panas et al., 2015), we expect to observe a similar stiff-sloppy architecture in the awake state. However, we believe that the comparison of Fisher information matrices during wakefulness and during different levels of anesthesia could provide valuable information about the principles governing vigilance.
We found that stiff neurons were more tied to the observed neuronal population activity than sloppy neurons: they had higher centrality in the functional connectivity graph and higher coupling to the population activity. Previous research showed that neurons differ in their coupling to the population activity, with neurons that activate most often when many others are active, called ‘choristers’, and neurons that tend to activate more frequently when others are silent, called ‘soloists’ (Okun et al., 2015). Our results suggest that stiff and sloppy neurons are chorister and soloist neurons, respectively. In other words, changes in the activity of stiff/chorister neurons lead to changes in collective behavior (i.e., cortical states), while the activity of sloppy/soloist neurons can spontaneously fluctuate or respond to stimuli without strongly affecting the collective behavior. Thus, we believe that the roles of stiff/chorister neurons and sloppy/soloist neurons are important for understanding trade-offs between responsiveness and stability of the network. Furthermore, we here studied the evolution of neuronal activity on the time scale of hours and found that fluctuations along stiff parameter dimensions were the weakest and were related to cortical state transitions, whose time scale is on the order of tens of minutes (Hahn et al., 2017; Mochol et al., 2015). Previous studies have reported prominent changes in neuronal activity and tuning properties over days, but with stable decoding performance of population activity (Chestek et al., 2007; Ziv et al., 2013; Panas et al., 2015). However, we hypothesize that learning or adaptation to changing environments could lead to large changes in collective activity.
In that case, particular attention could be paid to the influence of higher-order areas on the activity of subsets of stiff and sloppy neurons in sensory areas, as top-down regulation might be a mechanism to control the stabilizing network core.
The existence of cortical neurons with different sensitivities (from sloppy to stiff neurons) provides new valuable architectural constraints for models of the brain state and its transitions. Several past studies have modeled the synchronized brain dynamics as transitions between two attractors. Depending on the specifics of the model, those transitions could be noise driven (Mejias et al., 2010; Mochol et al., 2015; Jercog et al., 2017) or caused by some fatigue mechanism (Compte et al., 2003; Hill and Tononi, 2005; Mattia and Sanchez-Vives, 2012). To make the system work in a desynchronized regime, it was enough to increase the background input to the network (Bazhenov et al., 2002; Hill and Tononi, 2005; Curto et al., 2009; Destexhe, 2009; Mochol et al., 2015). Given our present results, the models could be extended to include a network core/periphery architecture, a non-homogeneous background input preferentially targeting the network core, and different stimulus spatial distributions. Such a model would provide insights into the interplay between cortical state transitions and sensory representation. Moreover, our findings question the view that the mechanisms by which background and stimulus inputs impact the dynamics are similar, as assumed in the simple bistable rate model (Mochol et al., 2015).
Finally, we here described the patterned activity of small ($N$ = 10) neuronal ensembles using MEMs. It is known that MEMs of small sizes can present departures from the observed distribution of summed activities and higher-order correlations (Tkačik et al., 2014). Recent advances in learning algorithms allow the construction of MEMs of ∼100 neurons. However, these models cannot be used in a time-resolved manner, as we did here, due to the limited data in each epoch. Small model sizes are thus the price to pay to study the evolution of collective activity over time on a meaningful time scale (i.e., that of cortical state transitions).
Materials and methods
Ethics statement
All experiments were carried out in accordance with protocols approved by the Animal Ethics Committee of the University of Barcelona (Comité d’Experimentació Animal, Universitat de Barcelona, Ref 116/13).
Experimental techniques
We analyzed the neuronal activity recorded in the primary auditory cortex (A1) of 6 anesthetized rats (Sprague–Dawley; 250–400 g). The experimental and spike-sorting procedures have been previously described in Mochol et al. (2015). Briefly, rats were anesthetized with urethane (1.5 g/kg body weight) and silicon microelectrodes (Neuronexus) with 32 or 64 channels were inserted in deep layers (depth, 600–1,200 μm) of the primary auditory cortex. The spiking activity of single units and multi-units (i.e., neurons that were not well isolated) was simultaneously recorded during spontaneous activity and in response to acoustic ‘clicks’ (5 ms square pulses; inter-stimulus interval, 2.5 or 3.5 s; see Table 1). In some datasets, double clicks (5 ms square pulses; 50 or 100 ms inter-click interval) were also presented, but, in the present study, we analyzed only the responses to single clicks. The spiking data are publicly available here: https://github.com/adrianponce/Spont_stim_spiking_A1.
Cortical state
Long continuous recordings (mean, ∼2 h) were divided into ${N}_{E}$ epochs of 100 s, and cortical state was estimated in each epoch from the spontaneous pooled population activity, that is, the merged single- and multi-unit spike trains during the 1.5-s intervals preceding each stimulus presentation. Cortical state was quantified using silence density, defined as the fraction of 20-ms time bins with no population activity. Silent and active periods were obtained by merging consecutive empty and non-empty bins, respectively.
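As an illustration, silence density can be computed from pooled spike times as follows (a minimal Python sketch; the function name and interface are ours, not the authors' code):

```python
import numpy as np

def silence_density(spike_times, epoch_start, epoch_len=100.0, bin_size=0.02):
    """Fraction of empty time bins in the pooled population spike train.

    spike_times: 1-D array of pooled spike times (s) from all units.
    Returns the silence density of the epoch (0 = fully active, 1 = silent).
    """
    n_bins = int(round(epoch_len / bin_size))
    edges = epoch_start + bin_size * np.arange(n_bins + 1)
    counts, _ = np.histogram(spike_times, bins=edges)
    return float(np.mean(counts == 0))
```

Spikes outside the epoch are simply dropped by the histogram; consecutive empty bins would then be merged into silent periods.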
Maximum entropy models
The spontaneous spiking activity of ensembles of $N$ single neurons was studied using statistical modeling based on the maximum entropy principle. The ensemble activity was binarized in non-overlapping time bins of dt = 10 ms, during which neuron i either did (${\sigma}_{i}=+1$) or did not (${\sigma}_{i}=-1$) generate one or more spikes. The state of the neural ensemble is described by a binary pattern $\overrightarrow{\sigma}=\left[{\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{N}\right]$, and thus the collective activity is described by the probability distribution $P\left(\overrightarrow{\sigma}\right)$ over all $2^{N}$ possible binary patterns. We estimated $P\left(\overrightarrow{\sigma}\right)$ using a maximum entropy model (MEM). The MEM finds $P\left(\overrightarrow{\sigma}\right)$ by maximizing its entropy under the constraint that some empirical statistics are preserved. A pairwise MEM provides a solution under the constraint that the activation rates ($\langle{\sigma}_{i}\rangle$) and the pairwise correlations ($\langle{\sigma}_{i}{\sigma}_{j}\rangle$) are preserved. The maximum entropy distribution $P\left(\overrightarrow{\sigma}\right)$ that is consistent with these expectation values is given by the Boltzmann distribution (Schneidman et al., 2006; Tkačik et al., 2015):
where $E\left(\overrightarrow{\sigma}\right)$ is the energy of the pattern $\overrightarrow{\sigma}$, given by:
and $Z=\sum _{\{\overrightarrow{\sigma}\}}{e}^{-E\left(\overrightarrow{\sigma}\right)}$ is the partition function.
The model parameter ${h}_{i}$, called intrinsic bias, represents the intrinsic tendency of neuron i towards activation (${\sigma}_{i}=+1$) or silence (${\sigma}_{i}=-1$), and the parameter ${J}_{ij}$ represents the effective interaction between neurons i and j. The estimation of the model parameters $\mathrm{\Omega}=\left\{\mathit{h},\mathit{J}\right\}$ was achieved through a gradient descent algorithm (see below). For each recording session, we constructed models for $Q$ ensembles of $N=10$ randomly selected single neurons from the entire population of ${N}_{\text{pop}}$ single neurons and learned the model parameters using the spontaneous binarized activity within each 100-s epoch. Thus, for each recording session, we built $Q\times {N}_{E}$ models of 10 units. We were interested in the evolution of the model parameters over time, that is, $\mathrm{\Omega}\left(t\right)$. Note that, for a given model, the number of free parameters is the sum of intrinsic biases and effective couplings, $N+N(N-1)/2=55$, that is, $\mathrm{\Omega}=\left[{h}_{1},{h}_{2},\dots ,{h}_{N},{J}_{12},{J}_{13},\dots \right]$.
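For a small ensemble, the Boltzmann distribution defined by $\mathrm{\Omega}=\{\mathit{h},\mathit{J}\}$ can be evaluated exactly by enumerating all $2^{N}$ patterns. The following Python sketch (function name ours) uses the energy convention $E(\overrightarrow{\sigma})=-\sum_i h_i\sigma_i-\sum_{i<j}J_{ij}\sigma_i\sigma_j$ implied by the equations above:

```python
import itertools
import numpy as np

def mem_distribution(h, J):
    """Boltzmann distribution of a pairwise MEM over all 2^N patterns.

    h: (N,) intrinsic biases; J: (N, N) symmetric couplings (only the
    upper triangle is used). Returns (patterns, P) with patterns in {-1, +1}.
    """
    N = len(h)
    patterns = np.array(list(itertools.product([-1, 1], repeat=N)))
    # E(sigma) = -sum_i h_i sigma_i - sum_{i<j} J_ij sigma_i sigma_j
    E = -(patterns @ h) - np.einsum('ki,ij,kj->k',
                                    patterns, np.triu(J, 1), patterns)
    P = np.exp(-E - np.logaddexp.reduce(-E))   # e^{-E} / Z, computed stably
    return patterns, P
```

For $N=10$ this enumerates 1024 patterns; the paper instead samples the model with Metropolis Monte Carlo, which scales to cases where enumeration is infeasible.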
Estimation of MEM parameters
The MEM parameters $\mathrm{\Omega}=\left\{\mathit{h},\mathit{J}\right\}$ were iteratively adjusted to minimize the absolute difference between the empirical activation rates ($\langle{\sigma}_{i}\rangle$) and correlations ($\langle{\sigma}_{i}{\sigma}_{j}\rangle$) and those (${\langle{\sigma}_{i}\rangle}_{\text{model}}$, ${\langle{\sigma}_{i}{\sigma}_{j}\rangle}_{\text{model}}$) predicted by the model through Metropolis Monte Carlo simulations (100,000 samples). Specifically, each iteration is given by: ${h}_{i}^{new}={h}_{i}^{old}-\alpha \left({\langle{\sigma}_{i}\rangle}_{\text{model}}-\langle{\sigma}_{i}\rangle\right)$ and ${J}_{ij}^{new}={J}_{ij}^{old}-\alpha \left({\langle{\sigma}_{i}{\sigma}_{j}\rangle}_{\text{model}}-\langle{\sigma}_{i}{\sigma}_{j}\rangle\right)$, where α is the learning rate ($\alpha =$ 0.1). We stopped the re-estimation once the differences between the empirical and model values were less than a tolerance threshold (0.005), or once a maximum number of iterations (100) was reached.
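This Boltzmann-learning loop can be sketched as follows. For brevity, the illustrative implementation below (function name and defaults ours) computes the model expectations exactly by enumerating all $2^{N}$ patterns rather than by Metropolis sampling as in the paper:

```python
import itertools
import numpy as np

def fit_mem(sigma, n_iter=5000, alpha=0.1, tol=0.005):
    """Boltzmann learning for a pairwise MEM (exact expectations, small N).

    sigma: (T, N) array of binarized activity in {-1, +1}.
    Returns the fitted intrinsic biases h and symmetric couplings J.
    """
    T, N = sigma.shape
    emp_m = sigma.mean(0)                      # empirical <sigma_i>
    emp_c = (sigma.T @ sigma) / T              # empirical <sigma_i sigma_j>
    h, J = np.zeros(N), np.zeros((N, N))
    patterns = np.array(list(itertools.product([-1, 1], repeat=N)))
    for _ in range(n_iter):
        E = -(patterns @ h) - np.einsum('ki,ij,kj->k',
                                        patterns, np.triu(J, 1), patterns)
        P = np.exp(-E - np.logaddexp.reduce(-E))
        mod_m = P @ patterns                   # model <sigma_i>
        mod_c = (patterns.T * P) @ patterns    # model <sigma_i sigma_j>
        dh, dJ = mod_m - emp_m, mod_c - emp_c
        np.fill_diagonal(dJ, 0.0)
        if max(np.abs(dh).max(), np.abs(dJ).max()) < tol:
            break
        h -= alpha * dh                        # gradient-descent updates
        J -= alpha * dJ
    return h, J
```

The update direction matches the iteration rule above: a parameter is decreased when the model overestimates the corresponding empirical moment.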
MEM goodnessoffit
The goodness-of-fit of the MEMs was evaluated using the Jensen–Shannon divergence (D_{JS}) between the probability distributions of the empirical and model binary patterns (Marre et al., 2009). D_{JS} is a symmetric version of the Kullback–Leibler divergence (D_{KL}) and is given as:
where ${P}_{\text{MEM}}$ was given by the Boltzmann distribution of the model, ${P}_{\text{data}}$ was estimated from the $N$-dimensional binary patterns observed in the data, and:
The fit of the pairwise MEM (second-order model) was compared to that obtained using an independent MEM, that is, one in which only the activation rates ($\langle{\sigma}_{i}\rangle$) are preserved (i.e., only $\mathit{h}$ is optimized; first-order model). In this case, the pattern energy is given by: $E\left(\overrightarrow{\sigma}\right)=-\sum _{i=1}^{N}{h}_{i}{\sigma}_{i}$.
Furthermore, the performance of the model can be evaluated using the Kullback–Leibler ratio, $R$ (Shlens et al., 2009). This ratio compares the Kullback–Leibler divergence between the distribution ${P}_{1}$ of the first-order model (i.e., the independent MEM) and the distribution of the actual data, ${D}_{1}={D}_{KL}\left({P}_{1};{P}_{\text{data}}\right)$, with the Kullback–Leibler divergence between the distribution ${P}_{2}$ of the second-order model and the distribution of the actual data, ${D}_{2}={D}_{KL}\left({P}_{2};{P}_{\text{data}}\right)$. Specifically, the Kullback–Leibler ratio is defined as:
This ratio can range between 0 and 1, with 1 indicating the highest performance.
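These divergence measures can be sketched over explicit pattern distributions as follows (illustrative Python; function names ours, and we assume the common form $R=(D_1-D_2)/D_1$, which is 1 when the second-order model is perfect):

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """Kullback-Leibler divergence D_KL(p || q) in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                               # 0 * log(0/q) = 0 by convention
    return float(np.sum(p[mask] * np.log2(p[mask] / (q[mask] + eps))))

def js_div(p, q):
    """Jensen-Shannon divergence: symmetrized KL to the mixture (p + q)/2."""
    m = 0.5 * (np.asarray(p, float) + np.asarray(q, float))
    return 0.5 * kl_div(p, m) + 0.5 * kl_div(q, m)

def kl_ratio(p1, p2, p_data):
    """Kullback-Leibler ratio R = (D1 - D2) / D1, where D1 (D2) is the KL
    divergence of the first- (second-) order model from the data."""
    d1, d2 = kl_div(p_data, p1), kl_div(p_data, p2)
    return (d1 - d2) / d1
```

The base-2 logarithm is a conventional choice; any base gives the same ratio $R$.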
Fisher information matrix
Because, in the MEM, all the information about the collective activity is contained in the probability distribution of the binary patterns, $P\left(\overrightarrow{\sigma}\right)$, one can define the model parameter space as $P\left(\overrightarrow{\sigma}\mid\mathrm{\Omega}\right)$. We were interested in knowing which parameters, or combinations of parameters, have a strong effect on the collective activity. To measure how distinguishable two models with parameters $\mathrm{\Omega}$ and $\mathrm{\Omega}+\delta \mathrm{\Omega}$ are, based on their predictions, we used the Fisher information matrix (FIM). Indeed, the Kullback–Leibler divergence between the two models can be written as:
where 1 $\le k,l\le $ 55, and the matrix $FIM$ is given by:
The FIM represents the curvature of the log-likelihood of the model, $\mathrm{log}P\left(\overrightarrow{\sigma}\mid\mathrm{\Omega}\right)$, with respect to the model parameters. It quantifies the sensitivity of the model to changes in parameters. By calculating the eigenvalues of the FIM, we can determine which combinations of parameters most strongly affect the model's behavior.
In the case of the MEM, the FIM can be easily obtained using Equations 1, 2, and 7. As a result, the FIM is given by the covariance matrix of the observables associated with the parameters, which can be calculated from the model through Metropolis Monte Carlo simulations (500,000 steps), that is:
with 1 $\le k,l\le $ 55 and $\overrightarrow{x}=\left[{\sigma}_{1},{\sigma}_{2},\dots ,{\sigma}_{N},{\sigma}_{1}{\sigma}_{2},{\sigma}_{1}{\sigma}_{3},\dots \right]$.
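Given model-generated samples, the FIM thus reduces to the covariance of the observable vector $\overrightarrow{x}$, which can be sketched as (illustrative Python, function name ours):

```python
import numpy as np

def fisher_information_matrix(samples):
    """FIM of a pairwise MEM as the covariance of its observables.

    samples: (T, N) array of model-generated patterns in {-1, +1}
    (the paper uses Metropolis Monte Carlo samples of the fitted model).
    Returns the (N + N(N-1)/2)-dimensional covariance matrix of
    x = [s_1, ..., s_N, s_1 s_2, s_1 s_3, ...].
    """
    T, N = samples.shape
    iu, ju = np.triu_indices(N, k=1)           # all pairs i < j
    x = np.hstack([samples, samples[:, iu] * samples[:, ju]])
    return np.cov(x, rowvar=False)
```

For $N=10$ this yields the 55 × 55 matrix whose eigendecomposition defines the stiff and sloppy dimensions.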
Sensitivity measures
The FIM was calculated for every neuronal ensemble at every 100-s epoch and decomposed into eigenvectors, denoted ${\mathit{\nu}}_{t,1},{\mathit{\nu}}_{t,2},\dots ,{\mathit{\nu}}_{t,k},\dots $, where $k$ is the rank of the eigenvector and $t$ denotes the epoch. Following Panas et al. (2015), within each neuronal ensemble, we measured the sensitivity of a given parameter by its averaged contribution to the first eigenvector of the FIM, that is, the sensitivity of the ith parameter is given by ${s}_{\text{ne},i}=\frac{1}{{N}_{E}}{\sum}_{t}\left|{\nu}_{t,1}\left(i\right)\right|$, with $1\le i\le 55$.
We next constructed a sensitivity measure for the entire population of ${N}_{\text{pop}}$ neurons. For this, we defined the set of all single-neuron indices and all pairs of neurons $\mathit{I}=\left\{1,\dots ,{N}_{\text{pop}},\left(\mathrm{1,2}\right),\left(\mathrm{1,3}\right),\dots \right\}$. This set has $L={N}_{\text{pop}}+\frac{{N}_{\text{pop}}({N}_{\text{pop}}-1)}{2}$ elements. For each element $j$ of $\mathit{I}$, we defined the sensitivity ${s}_{j}$ as the average of ${s}_{\text{ne},i}$ over the neuronal ensembles that contained the $j$th single neuron or pair of neurons (i.e., those neuronal ensembles for which $i$ maps to $j$). In other words, ${s}_{\text{ne}}$ denotes the sensitivity within an ensemble of $N=10$ neurons and has 55 elements, and $s$ denotes the sensitivity within the entire population of ${N}_{\text{pop}}$ neurons and has $L$ elements. This allows comparison of $s$ with statistics derived from the population of ${N}_{\text{pop}}$ neurons.
Parameters that contributed less to the first eigenvector could in principle contribute to the other stiff dimensions (those with lower rank $k$, e.g., $k=$ 2). For this reason, we also considered an alternative definition of sensitivity that considers the weighted contribution to all eigenvectors of the FIM. For each neuronal ensemble and each 100s epoch $t$, we defined the weighted sensitivity of the parameter $i$ as the temporal average of its contribution to the eigenvectors of the FIM, weighted by the associated eigenvalues (${a}_{t,1},\dots ,{a}_{t,55}$):
As before, from ${s}_{\text{ne}}^{w}$ one can construct a weighted sensitivity ${s}^{w}$ at the population level.
Finally, we separated the activity observables into two classes, called “sloppy” and “stiff”, based on whether the associated sensitivity $s$ was lower or higher than the median sensitivity.
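The two per-ensemble sensitivity definitions can be sketched as follows (illustrative Python; normalizing the weighted sensitivity by the sum of eigenvalues is our reading of the text, not a formula stated in it):

```python
import numpy as np

def sensitivities(fims):
    """Per-parameter sensitivity from a sequence of FIMs (one per epoch).

    fims: (N_E, D, D) array of Fisher information matrices.
    Returns (s, s_w): the first-eigenvector sensitivity and the
    eigenvalue-weighted sensitivity of each of the D parameters.
    """
    s, s_w = [], []
    for F in fims:
        vals, vecs = np.linalg.eigh(F)            # ascending eigenvalues
        vals, vecs = vals[::-1], vecs[:, ::-1]    # descending: rank 1 first
        s.append(np.abs(vecs[:, 0]))              # |first eigenvector|
        w = np.clip(vals, 0.0, None)              # guard small negative noise
        s_w.append(np.abs(vecs) @ w / w.sum())    # weighted over all eigvecs
    return np.mean(s, axis=0), np.mean(s_w, axis=0)
```

Averaging over epochs yields ${s}_{\text{ne}}$ and ${s}_{\text{ne}}^{w}$; mapping ensemble entries to neurons and pairs then gives the population-level $s$ and ${s}^{w}$.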
Similarity measures
Temporal variations of model parameters and data statistics were quantified using the average correlation between the parameters/statistics at time $t$ and the parameters/statistics at time $t+\mathrm{\Delta}t$. For example, let $\overrightarrow{r}\left(t\right)$ be the average firing rates of the neurons during epoch $t$; the similarity measure is given by:
where ${N}_{E}$ is the number of epochs and $\rho $ is the Pearson correlation coefficient. In the case of the FIM, the matrix was vectorized to calculate $\rho $.
We also evaluated how well the data in a given epoch $t$ could be explained by the MEM constructed using the data at time $t+\mathrm{\Delta}t$. Specifically, we defined the similarity measure $\langle{D}_{JS}\rangle\left(\mathrm{\Delta}t\right)$, given by the average Jensen–Shannon divergence between the distribution of data binary patterns in epoch $t$, that is, ${P}_{\text{data},t}$, and the distribution of binary patterns predicted by the MEM constructed using the data in epoch $t+\mathrm{\Delta}t$, that is, ${P}_{\text{MEM},t+\mathrm{\Delta}t}$. This measure is given as:
In other words, $1/\langle{D}_{JS}\rangle\left(\mathrm{\Delta}t\right)$ quantifies how well, on average, the model with parameters $\mathrm{\Omega}\left(t+\mathrm{\Delta}t\right)$ represents the data from epoch $t$.
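The correlation-based similarity measure can be sketched as (illustrative Python, function name ours):

```python
import numpy as np

def similarity(X, dt):
    """Average Pearson correlation between statistics at epochs t and t + dt.

    X: (N_E, D) array, one D-dimensional statistics vector per epoch
    (e.g. firing rates, model parameters, or vectorized FIMs). dt: epoch lag.
    """
    rho = [np.corrcoef(X[t], X[t + dt])[0, 1] for t in range(len(X) - dt)]
    return float(np.mean(rho))
```

A similarity curve is obtained by evaluating this for each lag $\mathrm{\Delta}t$; the divergence-based measure $\langle{D}_{JS}\rangle(\mathrm{\Delta}t)$ follows the same lagged-averaging pattern with the JS divergence in place of $\rho$.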
Modulation index
We quantified the responsiveness of the neurons to sensory stimuli through the modulation index (MI), defined as:
where ${r}_{spon}$ is the pre-stimulus average spike count, calculated in the 0.5-s pre-stimulus interval, and ${r}_{stim}$ is the average spike count calculated from stimulus onset to 0.5 s after stimulus onset. With this definition, strongly increased or suppressed stimulus responses, with respect to pre-stimulus activity, lead to high MI values.
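A form of the MI consistent with this description (the displayed equation is not reproduced here, so this is an assumption) is $\mathrm{MI}=\left|{r}_{stim}-{r}_{spon}\right|/\left({r}_{stim}+{r}_{spon}\right)$; the following sketch assumes that form:

```python
import numpy as np

def modulation_index(r_spon, r_stim, eps=1e-12):
    """Modulation index, assuming MI = |r_stim - r_spon| / (r_stim + r_spon).

    r_spon: average pre-stimulus spike count; r_stim: average spike count
    after stimulus onset. Both increases and suppressions yield high MI;
    eps guards against division by zero for silent neurons.
    """
    return np.abs(r_stim - r_spon) / (r_stim + r_spon + eps)
```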
Betweenness centrality
For each recording session, we analyzed the network defined by the Pearson correlation matrix of the activities of all single units. The centrality of a neuron, or node, within the network was quantified using the betweenness centrality (BC) measure. BC is given by the number of shortest paths that pass through a given node. The correlation matrix was computed for all 100-s epochs and, for each matrix element, we tested whether the mean of the ${N}_{E}$ correlation values differed from 0 (t-test followed by Bonferroni correction), resulting in a binary graph $G$ with entries equal to 1 if the correlation was significantly different from zero (corrected p-value < 0.05) and 0 otherwise. The BC of each node of the graph was given by:
where $p\left(kl\right)$ is the total number of shortest paths from node $k$ to node $l$, and $p(kl;i)$ is the number of those paths that pass through $i$.
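BC can be computed with Brandes' accumulation scheme, sketched below for a binary adjacency matrix (illustrative Python, function name ours; pair counts are halved because the graph is undirected):

```python
from collections import deque
import numpy as np

def betweenness_centrality(G):
    """Betweenness centrality of each node of a binary undirected graph.

    G: (n, n) 0/1 adjacency matrix. BC(i) sums, over all node pairs (k, l)
    with k != l != i, the fraction of shortest k-l paths passing through i.
    """
    n = len(G)
    bc = np.zeros(n)
    for s in range(n):                           # Brandes' algorithm
        sigma = np.zeros(n); sigma[s] = 1        # shortest-path counts
        dist = -np.ones(n); dist[s] = 0
        preds = [[] for _ in range(n)]
        order, q = [], deque([s])
        while q:                                 # BFS from source s
            v = q.popleft(); order.append(v)
            for w in np.flatnonzero(G[v]):
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = np.zeros(n)
        for w in reversed(order):                # dependency accumulation
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc / 2.0                              # each pair counted twice
```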
Neuron-to-population coupling
To quantify the coupling of each neuron to the activity of the neuronal population, we calculated, for each epoch, the Pearson correlation between the activity of each neuron (${\sigma}_{i}$) and the number of coactive neurons (i.e., those with ${\sigma}_{j}=+1$) at each time bin ($dt$ = 10 ms) in the population of single units (excluding neuron $i$). The neuron-to-population coupling was given by the average of the correlation coefficient across epochs.
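This coupling measure can be sketched for a single epoch as (illustrative Python, function name ours):

```python
import numpy as np

def population_coupling(sigma):
    """Neuron-to-population coupling within one epoch.

    sigma: (T, N) binarized activity in {-1, +1} (10-ms bins).
    For each neuron, the Pearson correlation between its activity and
    the number of coactive neurons among the N - 1 others.
    """
    T, N = sigma.shape
    active = (sigma == 1).astype(float)
    total = active.sum(axis=1)
    coupling = np.empty(N)
    for i in range(N):
        pop = total - active[:, i]        # coactive count, excluding neuron i
        coupling[i] = np.corrcoef(active[:, i], pop)[0, 1]
    return coupling
```

Averaging the per-epoch coefficients across epochs gives the reported coupling value for each neuron.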
ROC analysis
We used the receiver operating characteristic (ROC) curve to evaluate the separation between the distributions of observables from the sloppy and stiff classes. Let ${X}_{\text{sloppy}}$ and ${X}_{\text{stiff}}$ be the sloppy variables, that is, those with associated sensitivity ($s$) lower than the median $s$, and the stiff variables, that is, those with associated sensitivity higher than the median $s$, respectively. The ROC curve, $f\left(c\right)$, is built by plotting the probability $P\left({X}_{\text{sloppy}}>c\right)$ against the probability $P\left({X}_{\text{stiff}}>c\right)$, for all thresholds $c$. The area under the ROC curve (AUC) is a measure of the separation between ${P(X}_{\text{sloppy}})$ and ${P(X}_{\text{stiff}})$, and it is given by:
AUC ranges between 0 and 1, with AUC = 0 if ${P(X}_{\text{sloppy}})$ and ${P(X}_{\text{stiff}})$ are completely separated and ${X}_{\text{sloppy}}>{X}_{\text{stiff}}$, AUC = 1 if ${P(X}_{\text{sloppy}})$ and ${P(X}_{\text{stiff}})$ are completely separated and ${X}_{\text{stiff}}>{X}_{\text{sloppy}}$, and AUC = 0.5 if ${P(X}_{\text{sloppy}})$ and ${P(X}_{\text{stiff}})$ are indistinguishable. We used a permutation test (1000 resamples), in which observables and classes were randomly associated, to identify AUC values significantly different from 0.5.
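With this orientation, the AUC equals the probability that a random stiff value exceeds a random sloppy one (the Mann–Whitney identity), which gives a compact sketch (illustrative Python, ties counted as 1/2):

```python
import numpy as np

def auc(x_sloppy, x_stiff):
    """Area under the ROC curve separating two sets of observables.

    Returns P(X_stiff > X_sloppy) + 0.5 * P(X_stiff == X_sloppy), so
    AUC = 1 when stiff values are all larger, 0 when all smaller,
    matching the convention in the text.
    """
    x_sloppy = np.asarray(x_sloppy, float)
    x_stiff = np.asarray(x_stiff, float)
    greater = (x_stiff[:, None] > x_sloppy[None, :]).mean()
    ties = (x_stiff[:, None] == x_sloppy[None, :]).mean()
    return greater + 0.5 * ties
```

The permutation test would shuffle the class labels, recompute the AUC on each resample, and compare the observed value to the resulting null distribution.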
Stationary surrogates
To construct the stationary surrogates, we first randomly selected a reference epoch $t$. Second, we generated binary data using the MEM estimated from the spiking data at this reference epoch, that is, using parameters $\mathrm{\Omega}\left(t\right)$, through Monte Carlo simulations of the model to obtain 5000 binary patterns. Third, we repeated the Monte Carlo simulations ${N}_{E}$ times. Finally, for each of the ${N}_{E}$ pieces of surrogate data, we estimated new MEM parameters, ${\mathrm{\Omega}}^{\prime}$, using gradient descent, and we calculated the corresponding FIM using 500,000 Monte Carlo steps, as described above. By construction, the obtained surrogate data were stationary and had the same length as the original spiking data. Thus, parameter fluctuations in the surrogate data were only due to model estimation errors.
Appendix 1
Estimation errors and sensitivity
Correlations between cortical state fluctuations and parameter sensitivity, and between MI and sensitivity, could reflect a dependency between sensitivity and model estimation errors. To test this, we evaluated the error in the model estimation of the observables (firing rates and correlations) as a function of sensitivity. For each epoch $t$, the firing rates and covariances estimated by the MEM can be calculated from the model parameters by means of the Boltzmann distribution. The estimated moments $\langle{\sigma}_{i}\rangle$ and $\langle{\sigma}_{i}{\sigma}_{j}\rangle$ are given by:
The estimated firing rates and covariances are given as ${r}_{i}=\left(\langle{\sigma}_{i}\rangle+1\right)/\left(2dt\right)$ and ${C}_{ij}=\langle{\sigma}_{i}{\sigma}_{j}\rangle-\langle{\sigma}_{i}\rangle\langle{\sigma}_{j}\rangle$ (Appendix 1—figure 1A,B). We found that the firing rates and covariances estimated from the data and those estimated by the MEM were highly correlated (average correlation coefficient for rates: 0.999 ± 0.001; for covariances: 0.956 ± 0.018, Appendix 1—figure 1C), indicating acceptable fits of the models. Indeed, the model performances, evaluated through the Kullback–Leibler ratio R, were also high (average equal to 0.95 ± 0.03, Appendix 1—figure 1D).
We next defined the estimation errors as the averaged squared difference between the values estimated from the data and those estimated by the model:
where $\mathrm{\Delta}{r}_{t,i}={\left({r}_{i}^{\text{data},t}-{r}_{i}^{\text{model},t}\right)}^{2}$ and $\mathrm{\Delta}{C}_{t,i,j}={\left({C}_{ij}^{\text{data},t}-{C}_{ij}^{\text{model},t}\right)}^{2}$, and ${\sigma}_{\mathrm{\Delta}r}$ and ${\sigma}_{\mathrm{\Delta}C}$ denote the standard deviations of $\mathrm{\Delta}{r}_{t,i}$ and $\mathrm{\Delta}{C}_{t,i,j}$, respectively; this normalization is needed because $\mathrm{\Delta}{r}_{t,i}$ and $\mathrm{\Delta}{C}_{t,i,j}$ had different magnitudes. Using the same procedure as for the sensitivity $s$, we obtained, for each dataset, a variable $Err$ that represented the mean estimation error over the ${N}_{\text{pop}}$ neurons and all pairwise combinations.
Since estimation errors correlated with sensitivity (average: 0.192 ± 0.055; Appendix 1—figure 1E), we asked whether the correlations between sensitivity and cortical state observed in Figure 5F could be fully predicted by correlations between errors and cortical state. We found that correlations between sensitivity $s$ and $\left|{r}_{cs}\right|$ were higher than correlations between estimation errors and $\left|{r}_{cs}\right|$ (0.252 ± 0.056 vs. 0.168 ± 0.017 on average; Appendix 1—figure 1F). We used Meng's z-test for dependent correlations to compare the correlations between sensitivity $s$ and $\left|{r}_{cs}\right|$ and those between estimation errors and $\left|{r}_{cs}\right|$, given the correlation between $s$ and errors, and found significant differences in 4/6 datasets (p < 0.05). This was the case for the datasets with the highest model performances (datasets 1–4).
Finally, we tested whether estimation errors for firing rates correlated with the modulation index (MI) of the neurons and compared the results with those obtained in Figure 6D. We found that correlations between sensitivity and MI were substantially stronger than correlations between estimation errors and MI (−0.506 ± 0.043 vs. −0.086 ± 0.017 on average; p < 0.05 for all datasets, Meng's z-test for dependent correlations; Appendix 1—figure 1G). We concluded that, although estimation errors correlated with sensitivity, they fully explained neither the correlation between sensitivity and cortical state (for the datasets with the highest model performance) nor the correlation between sensitivity and MI.
Non-homogeneous Poisson process surrogate data
In Figure 7A–D we found that stiff neurons were more active, more coupled to each other, and more central than sloppy neurons. Here we investigated the possibility that correlations and BC values could be trivially predicted by firing rates. We tested whether the structure of correlations could arise from spontaneous global fluctuations increasing or decreasing the firing of the neurons, instead of from functional connectivity among them. For this, we constructed surrogate data that preserved the mean firing rate of the neurons, and the number and duration of silent (i.e., no spikes, empty 20-ms time bins) and active (i.e., non-empty 20-ms time bins) periods in each epoch, but without other structured correlations among the units.
To do this, we generated non-homogeneous Poisson process (NHPP) surrogate data as follows. We first calculated the average spontaneous firing rates of the single neurons during active periods in the original data, $\overrightarrow{{r}_{e}}=\left[{r}_{e,1},{r}_{e,2},\dots ,{r}_{e,{N}_{pop}}\right]$, given by the number of spikes in a given epoch $e$ divided by the total duration of active periods within this epoch. During silent periods, the firing rate of all single units was zero. Next, for each unit $i$ and within each epoch $e$, we generated non-homogeneous Poisson spike trains using the estimated firing rate as the intensity ${\lambda}_{e,i}\left(t\right)$ of the point process, that is, ${\lambda}_{e,i}\left(t\right)$ was a step function with ${\lambda}_{e,i}\left(t\right)={r}_{e,i}$ for times $t$ within active periods, and ${\lambda}_{e,i}\left(t\right)=0$ otherwise. The resulting surrogate data preserved the firing rate of each neuron and the silent periods of the population activity (Appendix 1—figure 2A,E).
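Generating one unit's NHPP spike train with such a step intensity can be sketched as (illustrative Python; function name and interface are ours):

```python
import numpy as np

def nhpp_surrogate(rate, active_periods, rng):
    """Non-homogeneous Poisson surrogate spike train for one unit.

    rate: firing rate (Hz) during active periods; the intensity is zero
    elsewhere. active_periods: list of (start, end) times in seconds.
    Within each active period the intensity is constant, so the spike
    count is Poisson and spike times are uniform over the period.
    """
    spikes = []
    for start, end in active_periods:
        n = rng.poisson(rate * (end - start))   # spike count in this period
        spikes.append(rng.uniform(start, end, size=n))
    return np.sort(np.concatenate(spikes)) if spikes else np.array([])
```

Applying this per unit and per epoch, with rates estimated from the original data, yields surrogates that share the global silent/active structure but have no further pairwise coupling.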
Despite the units being independent during active periods, the NHPP data were correlated due to common global fluctuations across epochs and across silent and active periods. Indeed, the mean correlation values in each epoch, ${\langle C\rangle}_{e}$, from the original and the surrogate data were highly correlated (Appendix 1—figure 2B,E), although ${\langle C\rangle}_{e}$ was systematically lower in the surrogate data than in the original data. We tested how well this simple model predicted the structure of correlations by calculating the pairwise correlations, ${C}_{ij}$, and the BC values in the surrogate data, and comparing them to the values obtained in the original data (Appendix 1—figure 2C,D). We found that neither ${C}_{ij}$ nor BC was predicted by the NHPP: the average fraction of explained variance was equal to R^{2} = 0.148 ± 0.07 and R^{2} = 0.184 ± 0.07 for ${C}_{ij}$ and BC, respectively (Appendix 1—figure 2E). Moreover, the correlation between firing rates and BC values was substantially reduced in the NHPP data, with correlation coefficients equal to 0.59 ± 0.11 and 0.19 ± 0.15 for the original and the NHPP data, respectively (p < 0.01, t-test). We concluded that correlations and centrality measures were not fully predicted by globally modulated firing rates but were rather suggestive of functional interactions.
Data availability
We made the spiking data publicly available at: https://github.com/adrianponce/Spont_stim_spiking_A1.
References

Task-based core-periphery organization of human brain dynamics. PLOS Computational Biology 9:e1003171. https://doi.org/10.1371/journal.pcbi.1003171

Model of thalamocortical slow-wave sleep oscillations and transitions to activated states. The Journal of Neuroscience 22:8691–8704. https://doi.org/10.1523/JNEUROSCI.221908691.2002

Stability of spontaneous, correlated activity in mouse auditory cortex. PLOS Computational Biology 15:e1007360. https://doi.org/10.1371/journal.pcbi.1007360

Single-neuron stability during repeated reaching in macaque premotor cortex. Journal of Neuroscience 27:10742–10750. https://doi.org/10.1523/JNEUROSCI.095907.2007

Cellular and network mechanisms of slow oscillatory activity. Journal of Neurophysiology 89:2707–2725. https://doi.org/10.1152/jn.00845.2002

Structure and dynamics of core/periphery networks. Journal of Complex Networks 1:93–123. https://doi.org/10.1093/comnet/cnt016

Graph drawing by force-directed placement. Software: Practice and Experience 21:1129–1164. https://doi.org/10.1002/spe.4380211102

Basal forebrain activation enhances cortical coding of natural scenes. Nature Neuroscience 12:1444–1449. https://doi.org/10.1038/nn.2402

Spontaneous cortical activity is transiently poised close to criticality. PLOS Computational Biology 13:e1005543. https://doi.org/10.1371/journal.pcbi.1005543

Cortical state and attention. Nature Reviews Neuroscience 12:509–523. https://doi.org/10.1038/nrn3084

Modeling sleep and wakefulness in the thalamocortical system. Journal of Neurophysiology 93:1671–1698. https://doi.org/10.1152/jn.00915.2004

Gating of sensory input by spontaneous cortical activity. Journal of Neuroscience 33:1684–1695. https://doi.org/10.1523/JNEUROSCI.292812.2013

Reorganization of cortical population activity imaged throughout long-term sensory deprivation. Nature Neuroscience 15:1539–1546. https://doi.org/10.1038/nn.3240

State-dependent representation of amplitude-modulated noise stimuli in rat auditory cortex. Journal of Neuroscience 31:6414–6420. https://doi.org/10.1523/JNEUROSCI.577310.2011

Prediction of spatiotemporal patterns of neural activity from pairwise correlations. Physical Review Letters 102:138101. https://doi.org/10.1103/PhysRevLett.102.138101

Exploring the spectrum of dynamical regimes and timescales in spontaneous cortical activity. Cognitive Neurodynamics 6:239–250. https://doi.org/10.1007/s1157101191794

State-dependent population coding in primary auditory cortex. Journal of Neuroscience 35:2058–2073. https://doi.org/10.1523/JNEUROSCI.331814.2015

Sloppiness in spontaneously active neuronal networks. The Journal of Neuroscience 35:8480–8492. https://doi.org/10.1523/JNEUROSCI.442114.2015

Membrane potential correlates of sensory perception in mouse barrel cortex. Nature Neuroscience 16:1671–1677. https://doi.org/10.1038/nn.3532

The structure of large-scale synchronized firing in primate retina. Journal of Neuroscience 29:5022–5031. https://doi.org/10.1523/JNEUROSCI.518708.2009

Searching for collective behavior in a large network of sensory neurons. PLOS Computational Biology 10:e1003408. https://doi.org/10.1371/journal.pcbi.1003408

Perspective: sloppiness and emergent theories in physics, biology, and beyond. The Journal of Chemical Physics 143:010901. https://doi.org/10.1063/1.4923066

Neural control of brain state. Current Opinion in Neurobiology 29:178–186. https://doi.org/10.1016/j.conb.2014.09.010

Long-term dynamics of CA1 hippocampal place codes. Nature Neuroscience 16:264–266. https://doi.org/10.1038/nn.3329
Decision letter

Michael J Frank, Senior Editor; Brown University, United States

Brice Bathellier, Reviewing Editor; CNRS, France

Brice Bathellier, Reviewer; CNRS, France

Matthias H Hennig, Reviewer

Mark D Humphries, Reviewer; University of Nottingham, United Kingdom
In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.
Acceptance summary:
This study shows that auditory cortex dynamics can be decomposed into a small set of critical dimensions and a set of dimensions that rather account for small variations of the dynamics. In addition, remarkably, critical dimensions are not shared between spontaneous and evoked activity, suggesting that the two types of activity come from different processes. This is an exciting result that should motivate important refinements in current models of auditory cortex dynamics.
Decision letter after peer review:
Thank you for submitting your article "Cortical state transitions and stimulus response evolve along stiff and sloppy parameter dimensions, respectively" for consideration by eLife. Your article has been reviewed by three peer reviewers, including Brice Bathellier as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by Michael Frank as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Matthias H. Hennig (Reviewer #2); Mark D Humphries (Reviewer #3).
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
Summary:
This paper uses model-based analysis of extracellular recordings from the rat auditory cortex to provide evidence for the hypothesis that the stability of the population activity in cortical networks depends on a small number of neurons with highly sensitive parameters, while parameters associated with the majority of neurons are insensitive to changes. Moreover, the authors show that transitions between desynchronised and synchronised states are associated with changes in the sensitive parameters, while during sensory stimulation this is state-dependent. Finally, it is shown that neurons with sensitive parameters have hub-like properties in the network.
Overall these are very interesting and in fact surprising results. The paper is well written and presented, but a few complementary analyses/clarifications are needed to make the important but complex message of this study more accessible and convincing.
Essential revisions:
1) The central conclusions in this paper are based on analysis of the Fisher information matrix (FIM) for pairwise maximum entropy models fit to binarised multi-neuron patterns. One possible caveat here is that the models were fit on relatively small data sets, which could affect the analysis downstream as the insensitive directions in the FIM could just reflect poor parameter estimates. The roughly 5,000 patterns used here to fit the model are very few, and estimates of correlations used to constrain the model may be poor, given that cortical activity is quite sparse. Possible biases should be assessed more thoroughly. In particular it is important to check for the impact of estimation errors given the short epochs. One key measure to check is the sensitivity measure, as this defines both core results (spontaneous = stiff; evoked = sloppy). The authors should provide an estimate of the variation in "sensitivity" as a function of model estimation error.
Most importantly:
– Figure 5E–G, check how correlations between sensitivity and "state" change with errors in model estimation.
– Figure 6D–E, check how correlations between sensitivity and the modulation index change with errors in model estimation.
2) It is hard to follow what is 'stiff' and what is 'sloppy', as in most of the cases the frontier between the two categories is obviously not sharp. It is clear that there are two ends of the parameter distribution (one end is more stiff and the other more sloppy), but in between, there are many intermediates. The authors must be more explicit about this and make more precise wherever possible what they mean when they define these two categories.
– In Figure 4B they should display the boundary between stiff and sloppy with a shading or a vertical line. Also, there is a kink in the eigenvalue curve. Is it related to model/population size? Are the eigenvalues beyond this kink relevant?
– It should be explained why stiffness of individual neurons is chosen to be their contribution to the first eigenvalue only. Are further eigenvalues not relevant?
– Together it should be clarified in a dedicated paragraph that stiff and sloppy parameters/neurons are two parts of a wide distribution.
https://doi.org/10.7554/eLife.53268.sa1

Author response
Essential revisions:
1) The central conclusions in this paper are based on analysis of the Fisher information matrix (FIM) for pairwise maximum entropy models fit to binarised multi-neuron patterns. One possible caveat here is that the models were fit on relatively small data sets, which could affect the analysis downstream as the insensitive directions in the FIM could just reflect poor parameter estimates. The roughly 5,000 patterns used here to fit the model are very few, and estimates of correlations used to constrain the model may be poor, given that cortical activity is quite sparse. Possible biases should be assessed more thoroughly. In particular it is important to check for the impact of estimation errors given the short epochs. One key measure to check is the sensitivity measure, as this defines both core results (spontaneous = stiff; evoked = sloppy). The authors should provide an estimate of the variation in "sensitivity" as a function of model estimation error.
As suggested by the reviewers, in the revised manuscript we now evaluated the correlation between sensitivity and the error in the model estimation of the firing rates and covariances. This analysis is mentioned in the main article in subsection "Sensory-evoked activity evolves along sloppy dimensions" and described in detail in Appendix 1 and in Appendix—figure 1.
The estimated firing rates and covariances can be calculated from the model parameters by means of the model's Boltzmann distribution. The firing rates and covariances estimated from the data and those estimated by the MEM were highly correlated (average correlation: 0.999 ± 0.001 for rates; 0.956 ± 0.018 for covariances), indicating acceptable fits of the models (Appendix—figure 1A–C). This is in line with the reported high Kullback–Leibler ratios (0.95 ± 0.03 on average) that quantify model performance (Appendix—figure 1D).
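For a small population, the model moments mentioned above can be obtained by exact enumeration of the Boltzmann distribution. The following is a minimal illustrative sketch (not the authors' code; the function name `model_moments` and the standard 0/1 pairwise maximum-entropy parametrization are our assumptions):

```python
import itertools
import numpy as np

def model_moments(h, J):
    """Firing rates and covariances implied by a pairwise maximum-entropy model
    with biases h (length N) and symmetric zero-diagonal couplings J (N x N),
    under P(s) ~ exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), s_i in {0, 1}.
    Exact enumeration over 2^N states; feasible only for small N."""
    N = len(h)
    states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
    # 0.5 * s^T J s equals sum_{i<j} J_ij s_i s_j for symmetric, zero-diagonal J
    log_weights = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(log_weights - log_weights.max())  # subtract max for stability
    p /= p.sum()
    rates = p @ states                              # <s_i>
    second = (states * p[:, None]).T @ states       # <s_i s_j>
    cov = second - np.outer(rates, rates)
    return rates, cov
```

With all parameters zero, the model predicts independent neurons firing with probability 0.5, which offers a quick sanity check of the enumeration.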
We next defined the estimation errors as the averaged normalized squared difference between the values estimated from the data and those estimated by the model. We found a correlation between the estimation errors and sensitivity of 0.192 ± 0.055 (Appendix—figure 1E). We next asked whether the correlations between sensitivity and cortical state observed in Figure 5F could be fully predicted by correlations between errors and cortical state. We found that correlations between sensitivity and cortical state were higher than correlations between estimation errors and cortical state (0.252 ± 0.056 vs. 0.168 ± 0.017, Appendix—figure 1F). We used Meng's z-test for dependent correlations to compare the correlations between sensitivity and cortical state with those between estimation errors and cortical state, given the correlation between sensitivity and errors, and found significant differences in 4/6 datasets (p < 0.05). This was the case for the datasets with the highest model performances (datasets 1–4). Finally, we tested whether estimation errors for firing rates correlated with the modulation index (MI) of the neurons and compared the results with those obtained in Figure 6D. We found that correlations between sensitivity and MI were substantially stronger than correlations between estimation errors and MI (0.506 ± 0.043 vs. 0.086 ± 0.017 on average; p < 0.05 for all datasets, Meng's z-test; Appendix—figure 1G).
We concluded that, although estimation errors correlated with sensitivity, they fully explained neither the correlation between sensitivity and cortical state (for the datasets with highest model performance) nor the correlation between sensitivity and MI.
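Meng's z-test used above compares two dependent correlations that share one variable (here, sensitivity vs. state and errors vs. state, given the sensitivity–errors correlation). A minimal sketch of the published formula (Meng, Rosenthal and Rubin, 1992) is below; the function name is hypothetical and this is not the authors' implementation:

```python
import math

def meng_z_test(r1, r2, rx, n):
    """Meng's z-test for two dependent correlations sharing one variable y.

    r1, r2: correlations of predictors x1 and x2 with the common variable y
    rx:     correlation between the two predictors x1 and x2
    n:      sample size
    Returns (z, two-sided p-value under the standard normal).
    """
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher transforms
    rbar = (r1 ** 2 + r2 ** 2) / 2.0
    f = min((1.0 - rx) / (2.0 * (1.0 - rbar)), 1.0)  # f is capped at 1
    h = (1.0 - f * rbar) / (1.0 - rbar)
    z = (z1 - z2) * math.sqrt((n - 3) / (2.0 * (1.0 - rx) * h))
    p = math.erfc(abs(z) / math.sqrt(2.0))       # two-sided p-value
    return z, p
```

For equal correlations the statistic is zero by construction, while a large gap between r1 and r2 (relative to n) yields a significant z, matching the intended use of comparing the sensitivity-based and error-based correlations.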
Most importantly:
– Figure 5E–G, check how correlations between sensitivity and "state" change with errors in model estimation.
As mentioned above, estimation errors did not fully explain the correlation between sensitivity and cortical state.
– Figure 6D–E, check how correlations between sensitivity and the modulation index change with errors in model estimation.
As mentioned above, estimation errors did not fully explain the correlation between sensitivity and MI.
2) It is hard to follow what is 'stiff' and what is 'sloppy', as in most of the cases the frontier between the two categories is obviously not sharp. It is clear that there are two ends of the parameter distribution (one end is more stiff and the other more sloppy), but in between, there are many intermediates. The authors must be more explicit about this and make more precise wherever possible what they mean when they define these two categories.
– In Figure 4B they should display the boundary between stiff and sloppy with a shading or a vertical line.
Stiff and sloppy dimensions are those associated with low- and high-rank eigenvectors of the FIM, respectively. In sloppy systems there is no clear-cut boundary between stiff and sloppy dimensions, so it is not possible to draw a line separating the two; we therefore added a shading in Figure 4B. We focused on the first eigenvector to estimate the sensitivity of parameters, but we now also provide a more general definition of sensitivity that considers the weighted contribution to all eigenvectors (see below).
Also, there is a kink in the eigenvalue curve. Is it related to model/population size? Are the eigenvalues beyond this kink relevant?
In our study, the stiff dimension was defined as the first eigenvector of the FIM (see the next point for a discussion of an alternative definition). The kink in the eigenvalue curve in Figure 4B was observed in two datasets, for rat 1 (which is presented in the figure) and rat 2. For the other datasets, this kink was not clearly observed, as shown in Author response image 1. Thus, this observation is not systematic in our sample. The relation between the kink observed in rats 1 and 2 and the size of the population is not straightforward. One could think that the first eigenvectors relate to the bias parameters of the neurons, but, as shown in Figure 5D (bottom), biases and couplings contributed equally to the first eigenvector. We believe that a discussion of the kink observed in those datasets and its relation to model size is beyond the scope of this work, since our results are not based on this part of the spectrum but mainly focus on the first eigenvector. Concerning the relevance of eigenvectors beyond the first one, see the next point.
– It should be explained why stiffness of individual neurons is chosen to be their contribution to the first eigenvalue only. Are further eigenvalues not relevant?
Following the work of Panas et al., 2015, we measured the sensitivity of a given model parameter by its contribution to the first eigenvector of the FIM, averaged across time. However, it could happen that parameters contributing little to the first eigenvector do contribute to the other stiff dimensions (those with low rank, for example rank 2). To take this into account, in the new manuscript we added an alternative definition of sensitivity that considers the weighted contribution to all eigenvectors of the FIM. Specifically, for each 100-s epoch and each neuronal ensemble, we calculated the eigenvectors of the FIM ($v_{t,1}, \dots, v_{t,55}$) and their associated eigenvalues ($a_{t,1}, \dots, a_{t,55}$). We defined the weighted sensitivity of parameter $i$ as the temporal average of its weighted contribution to all eigenvectors of the FIM, with weights equal to the associated eigenvalues:
$$ s^{w}_{ne,i} = \frac{1}{N_E}\sum_{t=1}^{N_E}\frac{\sum_{k=1}^{55} a_{t,k}\, v^{i}_{t,k}}{a_{t,1}+a_{t,2}+\dots+a_{t,55}} $$
Just as for the original definition, one can construct a weighted sensitivity at the population level.
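The weighted sensitivity described above can be sketched in a few lines. This is an illustrative implementation under our own assumptions (the function name, the stacking of per-epoch FIMs into one array, and the use of the absolute value of eigenvector loadings, since eigenvector signs are arbitrary, are not taken from the manuscript):

```python
import numpy as np

def weighted_sensitivity(fims):
    """Weighted sensitivity of each model parameter.

    fims: array of shape (n_epochs, P, P) -- one Fisher information matrix
    per epoch (P = number of model parameters, e.g. 55 for 10 neurons:
    10 biases plus 45 couplings). For each epoch, each parameter's loadings
    on all eigenvectors are weighted by the associated eigenvalues
    (normalized to sum to 1); the result is averaged over epochs.
    """
    n_epochs, n_params, _ = fims.shape
    sens = np.zeros(n_params)
    for t in range(n_epochs):
        # The FIM is symmetric, so eigh applies; eigenvalues come out
        # in ascending order, but the weighting makes order irrelevant.
        eigvals, eigvecs = np.linalg.eigh(fims[t])
        weights = eigvals / eigvals.sum()
        # |v^i_{t,k}|: magnitude of parameter i's loading on eigenvector k
        sens += np.abs(eigvecs) @ weights
    return sens / n_epochs
```

As a sanity check, for a diagonal FIM the parameter with the larger curvature (eigenvalue) receives proportionally larger weighted sensitivity.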
Using this more general definition, all the results were preserved. This version of the sensitivity is described in Materials and methods (subsection “Sensitivity measures”) and the results are described in Figure 5—figure supplement 1, Figure 6—figure supplement 1, Figure 7—figure supplement 1, and in the main article in subsections “Cortical state transitions evolve along stiff dimensions”, “Sensoryevoked activity evolves along sloppy dimensions” and “Stiff parameters were associated to central neurons within the neuronal network”.
– Together it should be clarified in a dedicated paragraph that stiff and sloppy parameters/neurons are two parts of a wide distribution.
We explicitly mention this point in the discussion of the revised manuscript. See paragraphs one and five in the Discussion.
https://doi.org/10.7554/eLife.53268.sa2

Article and author information
Author details
Funding
European Commission (FLAG-ERA JTC PCI2018-092891)
 Adrian PonceAlvarez
 Gustavo Deco
Ministerio de Economía y Competitividad (Juan de la Cierva Fellowship IJCI-2014-21937)
 Gabriela Mochol
European Regional Development Fund (SAF2013-46717-R)
 Jaime de la Rocha
Ministerio de Economía y Competitividad (BES-2011-049131)
 Ainhoa HermosoMendizabal
Ministerio de Economía y Competitividad (SAF2010-15730)
 Jaime de la Rocha
European Regional Development Fund (SAF2010-15730)
 Jaime de la Rocha
Ministerio de Economía y Competitividad (SAF2013-46717-R)
 Jaime de la Rocha
Horizon 2020 Framework Programme (785907 HBP SGA1)
 Gustavo Deco
Horizon 2020 Framework Programme (785907 HBP SGA2)
 Gustavo Deco
Ministerio de Economía y Competitividad (PSI2016-75688-P)
 Gustavo Deco
Catalan Research Group Support (2017 SGR 1545)
 Gustavo Deco
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
APA and GD received funding from the FLAG-ERA JTC (PCI2018-092891). GD acknowledges funding from the European Union's Horizon 2020 FET Flagship Human Brain Project under Grant Agreement 785907 HBP SGA2, the Spanish Ministry Project PSI2016-75688-P (AEI/FEDER) and the Catalan Research Group Support 2017 SGR 1545. GM was supported by a Juan de la Cierva fellowship (IJCI-2014-21937) from the Spanish Ministry of Economy and Competitiveness. AHM received support from the Spanish Ministry of Economy and Competitiveness (BES-2011-049131). JR received funding from the Spanish Ministry of Economy and Competitiveness together with the European Regional Development Fund, Grants SAF2010-15730 and SAF2013-46717-R. We thank L Hollender for sharing her data.
Ethics
Animal experimentation: All experiments were carried out in accordance with protocols approved by the Animal Ethics Committee of the University of Barcelona (Comité d'Experimentació Animal, Universitat de Barcelona, Reference: 116/13).
Senior Editor
 Michael J Frank, Brown University, United States
Reviewing Editor
 Brice Bathellier, CNRS, France
Reviewers
 Brice Bathellier, CNRS, France
 Matthias H Hennig
 Mark D Humphries, University of Nottingham, United Kingdom
Version history
 Received: November 1, 2019
 Accepted: March 16, 2020
 Accepted Manuscript published: March 17, 2020 (version 1)
 Version of Record published: March 31, 2020 (version 2)
Copyright
© 2020, PonceAlvarez et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.