UPDOWN cortical dynamics reflect state transitions in a bistable network
Abstract
In the idling brain, neuronal circuits transition between periods of sustained firing (UP state) and quiescence (DOWN state), a pattern whose mechanisms remain unclear. Here we analyzed spontaneous cortical population activity from anesthetized rats and found that UP and DOWN durations were highly variable and that population rates showed no significant decay during UP periods. We built a network rate model with excitatory (E) and inhibitory (I) populations exhibiting a novel bistable regime between a quiescent and an inhibition-stabilized state of arbitrarily low rate. Fluctuations triggered state transitions, while adaptation in E cells paradoxically caused a marginal decay of E-rate but a marked decay of I-rate in UP periods, a prediction that we validated experimentally. A spiking network implementation further predicted that DOWN-to-UP transitions must be caused by synchronous high-amplitude events. Our findings provide evidence of bistable cortical networks that exhibit non-rhythmic state transitions when the brain rests.
https://doi.org/10.7554/eLife.22425.001

Introduction
A ubiquitous pattern of spontaneous cortical activity during synchronized brain states consists of the alternation between periods of tonic firing (UP states) and periods of quiescence (DOWN states) (Luczak et al., 2007; Steriade et al., 1993a; Timofeev et al., 2001). Cortical UP and DOWN dynamics take place during slow-wave sleep (SWS) (Steriade et al., 1993a) and can also be induced by a number of anesthetics (Steriade et al., 1993a). More recently, however, similar synchronous cortical dynamics have been observed not only in awake rodents during quiescence (Luczak et al., 2007; Petersen et al., 2003), but also in animals performing a perceptual task, both rodents (Sachidhanandam et al., 2013; Vyazovskiy et al., 2011) and monkeys (Engel et al., 2016).
Spontaneous activity resembling UP and DOWN states has been found in cortical slices in vitro (Cossart et al., 2003; Fanselow and Connors, 2010; Mann et al., 2009; Sanchez-Vives and McCormick, 2000), in slabs (Timofeev et al., 2000) and in vivo under extensive thalamic lesions (Steriade et al., 1993b). This suggests that the underlying mechanism has an intracortical origin. In such a scenario, the standard hypothesis postulates that during UP periods a fatigue mechanism of cellular origin – e.g. spike-frequency adaptation currents or synaptic short-term depression – decreases network excitability until the state of tonic firing can no longer be sustained and the cortical network switches into a DOWN state (Contreras et al., 1996; Sanchez-Vives and McCormick, 2000). During DOWN periods, in the absence of firing, the fatigue variables recover until the circuit becomes self-excitable and autonomously transitions into an UP state (Cunningham et al., 2006; Le Bon-Jego and Yuste, 2007; Poskanzer and Yuste, 2011; Sanchez-Vives and McCormick, 2000; Timofeev et al., 2000). This mechanism of activity-dependent negative feedback causing oscillatory UP-DOWN dynamics has been implemented in several computational models (Bazhenov et al., 2002; Benita et al., 2012; Chen et al., 2012; Compte et al., 2003b; Ghorbani et al., 2012; Hill and Tononi, 2005; Parga and Abbott, 2007). However, although commonly described as a slow oscillation, the rhythmicity of UP-DOWN dynamics has not been systematically quantified and seems to depend on the details of the preparation (Chauvette et al., 2011; Erchova et al., 2002; Lampl et al., 1999; Ruiz-Mejias et al., 2011).
Alternatively, there is strong evidence suggesting that UP-DOWN transitions in neocortical circuits are coupled with activity in subcortical and limbic areas. Thalamocortical neurons, for instance, can endogenously oscillate at low frequencies (Hughes et al., 2002; McCormick and Pape, 1990), cause cortical UP states when stimulated (Rigas and Castro-Alamancos, 2007) or modulate the UP-DOWN dynamics when suppressed (David et al., 2013; Lemieux et al., 2014), and their spontaneous activity correlates with UP state onset (Contreras and Steriade, 1995; Slézia et al., 2011; Ushimaru et al., 2012). Moreover, the timing of hippocampal sharp-wave ripples (Battaglia et al., 2004) or basal ganglia activity (Ushimaru et al., 2012) also tends to precede DOWN-to-UP transitions. Finally, intracortical stimulation can effectively cause UP-DOWN transitions (Beltramo et al., 2013; Shu et al., 2003) even when only a few dozen neurons are stimulated (Stroh et al., 2013). Taken together, these findings describe a system whose macroscopic UP-DOWN dynamics are sensitive to temporal fluctuations of both external inputs and local circuit activity. Such a network would in principle generate unpredictable and therefore irregular UP-DOWN dynamics, since transitions would no longer depend exclusively on local cortical internal dynamics.
The interplay of fatigue mechanisms and fluctuations causing transitions between two states has been theoretically studied in the developing spinal cord (Tabak et al., 2011; 2000), and in the context of UP-DOWN dynamics mostly in networks composed of excitatory units (Holcman and Tsodyks, 2006; Lim and Rinzel, 2010; Mattia and Sanchez-Vives, 2012; Mejias et al., 2010). Most models of spontaneous activity are, however, theoretically founded on the balance between excitatory (E) and inhibitory (I) populations (Amit and Brunel, 1997; van Vreeswijk and Sompolinsky, 1998), a dynamic state that can quantitatively mimic population spiking activity during desynchronized states (Renart et al., 2010). Analysis of cortical responses in the visual cortex suggests that cortical networks operate in the inhibition-stabilized regime, in which recurrent excitatory feedback alone is strong enough to destabilize the network activity but feedback inhibition maintains stability (Ozeki et al., 2009). In spite of growing evidence showing that the interaction between E and I populations is critical in generating spontaneous activity, the conditions under which an E-I network model can exhibit a robust bistability between a low-rate inhibition-stabilized state and a quiescent state are still not well understood (Latham et al., 2000). To develop such a model, we first performed population recordings of ongoing cortical activity during synchronized brain state epochs in rats under urethane anesthesia (Détári and Vanderwolf, 1987; Luczak et al., 2007; Murakami et al., 2005; Whitten et al., 2009). Analysis of population single-unit spiking dynamics showed irregular UP and DOWN periods and no decay of the average rate during UPs. Given these constraints, we built an E-I network model that, capitalizing on the firing threshold nonlinearity and the asymmetry of the E and I transfer functions, exhibited a novel type of bistability with a quiescent (DOWN) and a low-rate state (UP).
External input fluctuations into the network caused the irregular UP-DOWN transitions. Adaptation in E cells, in contrast, did not cause transitions and had a different effect on the E rate in each of the two states: while it exhibited recovery during DOWN periods, it showed almost no decay during UP periods due to the balanced nature of the UP dynamics. In addition, a spiking network implementation of the model revealed that external input fluctuations to neurons in the network cannot follow simple independent Gaussian statistics but must include stochastic, synchronous high-amplitude events that can trigger DOWN-to-UP transitions. Our model provides the first E-I network that exhibits stochastic transitions between a silent and a low-rate inhibition-stabilized attractor matching the statistics of UP and DOWN periods and population rate time courses observed in the cortex.
Results
To investigate the mechanisms underlying the generation of spontaneous cortical activity, we recorded the spiking activity from large populations of neurons (mean ± SD = 64 ± 23 cells) in deep layers of somatosensory cortex of urethane-anesthetized rats (n = 7 animals) (Barthó et al., 2004; Luczak et al., 2009). Because brain state under urethane can vary spontaneously (Détári and Vanderwolf, 1987; Luczak et al., 2007; Murakami et al., 2005; Whitten et al., 2009), we selected the most clearly synchronized epochs, characterized by the stable presence of high-amplitude, slow fluctuations in cortical local field potential (LFP) signals (Figure 1A; see Materials and methods) (Harris and Thiele, 2011; Steriade et al., 2001). During these epochs, the instantaneous population rate $R\left(t\right)$, calculated by merging all the recorded individual spike trains, displayed alternations between periods of tonic firing and periods of silence (Luczak et al., 2007), a signature of UP and DOWN states from an extracellular standpoint (Figure 1B–C) (Cowan and Wilson, 1994; Sanchez-Vives and McCormick, 2000; Steriade et al., 1993a). Despite the clear presence of UP and DOWN states, the population activity displayed no clear traces of rhythmicity, as revealed by the strongly damped oscillatory structure in the autocorrelograms of both the LFP and $R\left(t\right)$ (Figure 1D and E, respectively). Motivated by this, we hypothesized that the cortical circuit might transition between two network states in a random manner (Deco et al., 2009; Mejias et al., 2010; Mochol et al., 2015). Using a probabilistic hidden semi-Markov model (Chen et al., 2009), we inferred the instantaneous state of the circuit from the population rate $R\left(t\right)$ by extracting the sequence of putative UP (U) and DOWN (D) periods (Figure 1C, Materials and methods).
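The segmentation step can be illustrated with a simplified stand-in. The analysis above relies on a hidden semi-Markov model, but a rate-threshold segmentation of the binned population rate conveys the basic idea of extracting U and D periods; function names, the threshold, and the minimum-duration rule below are illustrative assumptions, not the original method.

```python
import numpy as np

def segment_up_down(rate, threshold, min_dur=3):
    """Label each time bin of a population rate as UP (1) or DOWN (0).

    Simple threshold segmentation (the paper uses a hidden semi-Markov
    model; this is only an illustrative stand-in). Runs shorter than
    `min_dur` bins are merged into the preceding state to suppress
    spurious flickering around the threshold.
    """
    state = (np.asarray(rate) > threshold).astype(int)
    change = np.flatnonzero(np.diff(state)) + 1
    edges = np.concatenate(([0], change, [len(state)]))
    for start, stop in zip(edges[:-1], edges[1:]):
        if stop - start < min_dur and start > 0:
            state[start:stop] = state[start - 1]
    return state

def period_durations(state, dt):
    """Return lists of UP and DOWN durations (seconds) from a state vector."""
    change = np.flatnonzero(np.diff(state)) + 1
    edges = np.concatenate(([0], change, [len(state)]))
    ups, downs = [], []
    for start, stop in zip(edges[:-1], edges[1:]):
        (ups if state[start] == 1 else downs).append((stop - start) * dt)
    return ups, downs
```

A real implementation would, as in the paper, fit explicit duration distributions to the two states rather than hard-thresholding the rate.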
UP and DOWN duration statistics during synchronized states
The statistics of U and D period durations showed skewed gamma-like distributions (Figure 2A and B right; Figure 2—figure supplement 1). The mean duration across different experiments displayed a wide range of values (Figure 2B left; mean ± SD: <U> = 0.43 ± 0.19 s, <D> = 0.46 ± 0.1 s, n = 7), whereas the coefficients of variation CV(U) and CV(D) of U and D periods, defined as the standard deviation divided by the mean of the period durations within experiments, were systematically high (Figure 2B middle, mean ± SD: CV(U) = 0.68 ± 0.09, CV(D) = 0.69 ± 0.1; median CV(U) = 0.64, CV(D) = 0.71). The irregularity in the U and D periods did not result from slow drifts in the mean U or D durations caused by variations of brain state, as confirmed by computing the CV_2 (Holt et al., 1996), a local measure of irregularity that is less affected by slow variations in the statistics (mean ± SD: CV_2(U) = 0.86 ± 0.13, CV_2(D) = 0.75 ± 0.17; see Materials and methods). The high variability of U and D periods is consistent with the non-periodicity of the dynamics revealed in the autocorrelation function (Figure 1D–E).
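The two variability measures used above are straightforward to compute. A minimal sketch (the CV_2 definition follows Holt et al., 1996; the input arrays are assumed to hold period durations in seconds):

```python
import numpy as np

def cv(durations):
    """Coefficient of variation: SD divided by the mean of the durations."""
    d = np.asarray(durations, dtype=float)
    return d.std(ddof=0) / d.mean()

def cv2(durations):
    """Local irregularity CV_2 (Holt et al., 1996): the mean of
    2|d[i+1] - d[i]| / (d[i+1] + d[i]) over consecutive periods.
    Being local, it is insensitive to slow drifts in the mean duration."""
    d = np.asarray(durations, dtype=float)
    return np.mean(2.0 * np.abs(np.diff(d)) / (d[1:] + d[:-1]))
```

For a slowly drifting mean duration, CV grows with the drift while CV_2 stays near its local value, which is why CV_2 is the appropriate control here.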

Figure 2—source data 1: https://doi.org/10.7554/eLife.22425.005
We then asked whether the lengths of U and D periods were independent, as if the transitions between the two network states reset the circuit's memory, or if in contrast they were correlated by a process impacting the variability of several consecutive periods. We computed the linear cross-correlation $Corr\left({U}_{i},{D}_{i+k}\right)$ (Figure 2C left, for k = 0) between pairs of periods separated in the DU sequence by a lag k (Figure 2C, right). The cross-correlation $Corr\left({U}_{i},{D}_{i+k}\right)$ displayed consistently nonzero values for k = 0 and k = 1 (mean ± SD: 0.21 ± 0.09, 0.17 ± 0.09, respectively; significant cross-correlation in 6/7 animals, p<0.05 permutation test), whereas it remained close to zero for the rest of the lags, showing that period duration correlation is limited to adjacent periods (Figure 2C–D). The positive correlation between adjacent periods was not due to slow changes in their duration, as we corrected for these using the correlation obtained from surrogate DU sequences generated by shuffling the original sequence within 30 s windows (see Materials and methods). Positive correlations between consecutive periods of activity and silence can be generated when fluctuation-driven transitions are combined with an adaptive process such as activity-dependent adaptation currents (Lim and Rinzel, 2010; Tabak et al., 2000): if a fluctuation terminates a U period prematurely without much build-up in adaptation, the consecutive D period also tends to be shorter, as there is little adaptation to recover from. However, a major role of adaptation currents in dictating UP-DOWN dynamics (Compte et al., 2003b) seems at odds with the lack of rhythmicity and the highly variable U and D durations, indicative of a stochastic mechanism causing the transitions between network states.
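The lagged duration correlation and its permutation test can be sketched as follows. Note that the paper shuffles within 30 s windows to control for slow drifts; for simplicity this version permutes the whole sequence, a looser null. Function names and test data are illustrative assumptions.

```python
import numpy as np

def lagged_corr(U, D, k):
    """Corr(U_i, D_{i+k}): correlation between each UP duration and the
    DOWN duration k periods later in the alternating D-U sequence."""
    U, D = np.asarray(U, float), np.asarray(D, float)
    if k > 0:
        U, D = U[:-k], D[k:]
    elif k < 0:
        U, D = U[-k:], D[:k]
    n = min(len(U), len(D))
    return np.corrcoef(U[:n], D[:n])[0, 1]

def permutation_pvalue(U, D, k=0, n_perm=500, seed=0):
    """Two-sided permutation test: shuffling U breaks the pairing while
    preserving the marginal duration statistics. (The paper instead
    shuffles within 30 s windows to preserve slow drifts.)"""
    rng = np.random.default_rng(seed)
    observed = lagged_corr(U, D, k)
    null = np.array([lagged_corr(rng.permutation(U), D, k)
                     for _ in range(n_perm)])
    return np.mean(np.abs(null) >= abs(observed))
```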
Spiking activity during UP and DOWN states
We next searched for more direct evidence of an adaptive process by examining the time course of the population firing rate $R\left(t\right)$ during U and D periods (see Figure 1C; see Materials and methods). The mean firing rate in U periods was low (mean ± SD: 3.72 ± 0.9 spikes/s, n = 7). Moreover, D periods displayed occasional spiking (mean ± SD rate 0.018 ± 0.007 spikes/s; see e.g. Figure 3A–B and Figure 3—figure supplement 1), in contrast with the idea that DOWN periods do not display spiking activity (Chauvette et al., 2010), but see (Compte et al., 2003b). Thus, our hypothesis was that adaptation currents, if present, would induce a decay in $R\left(t\right)$ during Us and an increase during Ds, and this impact on $R\left(t\right)$ dynamics should be more evident during longer periods due to a larger accumulation (during Us) or recovery (during Ds) of the adaptation. For each experiment, we aligned the rate $R\left(t\right)$ at the DOWN-to-UP (DU) and UP-to-DOWN (UD) transition times (Figure 3A). We then computed the average rates ${R}_{DU}\left(\tau \right)$ and ${R}_{UD}\left(\tau \right)$ across all DU and UD transitions, respectively, with $\tau =0$ representing the transition time (Figure 3B–C; mean across experiments = 598 transitions; range 472–768). Because Us and Ds had different durations, we selected long periods (U, D > 0.5 s) and compared ${R}_{DU}\left(\tau \right)$ and ${R}_{UD}\left(\tau \right)$ at the beginning and end of each period (mean number of Us 181, range 61–307; Ds 202, range 55–331). To specifically assess a change in rate during the U period, we compared the average ${R}_{DU}\left(\tau \right)$ in the time window τ = (50, 200) ms (beginning of U) with the average ${R}_{UD}\left(\tau \right)$ in the window τ = (−200, −50) ms (end of U), which we referred to as the U-onset and U-offset windows, respectively. The windows were chosen 50 ms away from τ = 0 to avoid the transient change due to the state transitions (Figure 3C–D).
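The window comparison above can be sketched as a simple masked average over the transition-aligned rate; the helper name is an assumption, and `t` is the τ axis in seconds with t = 0 at the transition.

```python
import numpy as np

def window_mean(rate, t, lo, hi):
    """Mean of a transition-aligned rate over bins with lo <= t < hi.

    For the U-onset window use (0.05, 0.20) on the DU-aligned average;
    for the U-offset window use (-0.20, -0.05) on the UD-aligned average,
    matching the 50-200 ms windows defined above.
    """
    mask = (t >= lo) & (t < hi)
    return rate[mask].mean()
```

The onset-minus-offset differences obtained this way, one per experiment, would then feed a paired test such as the Wilcoxon signed rank test used in the text.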
We found no significant mean difference between the population average rate in the U-onset and U-offset windows across our experiments (mean ± SD onset minus offset population rate 0.04 ± 0.40 spikes/s, p=1, Wilcoxon signed rank test, n = 7 animals). The equivalent analysis performed on D periods yielded a small but significant mean increase in the population rate between the D-onset and D-offset windows (mean ± SD −0.014 ± 0.013 spikes/s, p=0.047, Wilcoxon signed rank test). To examine in more detail the lack of population rate change during Us, we looked at the modulation of individual neuron rates normalized by the overall temporal average of each unit (Figure 3E). We found that the change between U-onset and U-offset averaged across all our neurons (n = 448 cells) was not significantly different from zero (Figure 3E right, mean ± SD of the onset vs offset difference of normalized rates 0.057 ± 1.163, p=0.12, Wilcoxon signed rank test) but that the recovery during D periods was significant (Figure 3E left; mean ± SD −0.015 ± 0.087, p=0.0002, Wilcoxon signed rank test). Some individual neurons, however, did show a significant modulation between U-onset and U-offset, but the decrease found in a fraction of the neurons was compensated by a comparable increase in another fraction of neurons (Figure 3E right). Thus, at the population level, spiking activity during U periods displayed a sustained time course with no significant traces of rate adaptation.

Figure 3—source data 1: https://doi.org/10.7554/eLife.22425.008
Rate model for UP and DOWN dynamics
To understand the network and cellular mechanisms underlying the generation of stochastic UD dynamics showing UD serial correlations and sustained rates during U periods, we analyzed a computational rate model composed of an excitatory (E) population recurrently coupled with an inhibitory (I) population (Latham et al., 2000; Ozeki et al., 2009; Tsodyks et al., 1997; Wilson and Cowan, 1972). The excitatory-inhibitory (E-I) network model described the dynamics of the mean instantaneous rates r_E and r_I of each population in the presence of fluctuating external inputs. In addition, the E population included an adaptation mechanism, an additive hyperpolarizing current a that grew linearly with the rate r_E (Figure 4A; see Materials and methods). We did not consider adaptation in the inhibitory population for simplicity, and because inhibitory neurons show little or no spike-frequency adaptation when depolarized with injected current (McCormick et al., 1985). Our aim was to search for a regime in which, in the absence of adaptation and external input fluctuations, the network exhibited bistability between a quiescent (D) and a low-rate (U) fixed point. Although bistability in low-dimensional E-I networks has been described since the seminal work of Wilson and Cowan (1972), previous models primarily sought to explain bistability between a low-rate and a high-rate state, and exploited the combination of expansive and contractive nonlinearities produced by the transfer function (Amit and Brunel, 1997; Renart et al., 2007; Wilson and Cowan, 1972), short-term synaptic plasticity (Hansel and Mato, 2013; Mongillo et al., 2008) or the divisive effect of inhibitory conductances (Latham et al., 2000) (see Discussion). We found that the expansive nonlinearity of the transfer function alone was sufficient to obtain bistability between D and U states.
Given this, we chose the simplest possible transfer function with a threshold: a threshold-linear function (Figure 4B, see Materials and methods). Our choice to use only an expansive threshold nonlinearity strongly constrained the way in which the network could exhibit bistability, as can be deduced by plotting the nullclines of the rates r_E and r_I (Figure 4C): only when the I-nullcline was shifted to the right of the E-nullcline and had a larger slope did the system exhibit two stable attractors (Equation 20 in Materials and methods). This configuration of the nullclines was readily obtained by setting the threshold and the gain of the I transfer function larger than those of the E transfer function (Figure 4B), a distinctive feature previously reported when intracellularly characterizing the f-I curve of pyramidal and fast-spiking interneurons in the absence of background synaptic activity (Cruikshank et al., 2007; Schiff and Reyes, 2012). This difference in gains and thresholds between the E and I populations was not a necessary condition for the bistability: alternatively, a proper selection of connectivity parameters with identical E and I transfer functions could satisfy the conditions for a similar bistability (see Materials and methods, Equations 20–22). This novel bistable regime yielded a quiescent D state and arbitrarily low firing rates for both E and I populations during U states, depending on the values of the thresholds and the synaptic weights (Figure 4C). This is remarkable, as in most bistable network models the rate of the sustained activity state is constrained to lie above a certain lower bound (see Discussion).
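These conditions can be checked numerically. In the suprathreshold regime a threshold-linear network is linear, so the U fixed point solves a 2x2 linear system, while (0, 0) is a D fixed point whenever both effective thresholds are positive. The parameter values below are hypothetical choices satisfying the conditions stated above (I cells with larger threshold and gain than E cells), not the paper's fitted values.

```python
import numpy as np

# hypothetical parameters: I threshold and gain exceed those of E,
# and recurrent excitation alone is unstable (gE * wEE > 1)
gE, gI = 1.0, 4.0                      # transfer-function gains
thE, thI = 1.0, 25.0                   # effective thresholds (thI > thE)
wEE, wEI, wIE, wII = 5.0, 1.0, 10.0, 0.5

# DOWN fixed point: r = (0, 0) is stable because the net input to each
# population (-thE and -thI) is subthreshold.

# UP fixed point: above threshold the system is linear, r = g (W r - th),
# which rearranges to the 2x2 system A r = b solved here.
A = np.array([[1.0 - gE * wEE, gE * wEI],
              [-gI * wIE, 1.0 + gI * wII]])
b = np.array([-gE * thE, -gI * thI])
rE, rI = np.linalg.solve(A, b)

# self-consistency: both populations must indeed sit above threshold
assert wEE * rE - wEI * rI - thE > 0
assert wIE * rE - wII * rI - thI > 0

# stability of the UP state: fast inhibition (tauI < tauE) stabilizes the
# otherwise unstable excitatory feedback (inhibition-stabilized regime)
tauE, tauI = 10.0, 2.0                 # ms
J = np.array([[(gE * wEE - 1.0) / tauE, -gE * wEI / tauE],
              [gI * wIE / tauI, -(1.0 + gI * wII) / tauI]])
assert np.all(np.linalg.eigvals(J).real < 0)
```

With these numbers the U rates come out low (r_E about 3.5), illustrating that the UP rate is not pinned to a high value by the bistability itself.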
Moreover, in this bistable regime the U state is an inhibition-stabilized state, a network dynamical condition in which the excitatory feedback is so strong that alone it would be unstable, but it is balanced by fast and strong inhibitory feedback that maintains the rates stable (Ozeki et al., 2009; Tsodyks et al., 1997) (see Materials and methods).
There are two ways in which transitions between U and D states can occur. On the one hand, transitions could be driven by external input fluctuations, which were modeled as a stochastic process with zero mean and a short time constant (Figure 4D). This fluctuating input reflected either afferents coming from other brain areas whose neuronal activity was stochastic and uncorrelated with the cortical circuit internal dynamics, or the stochasticity of the spiking during U periods that was not captured by the dynamics of the rates (Holcman and Tsodyks, 2006; Lim and Rinzel, 2010). On the other hand, in the absence of fluctuations, state transitions could also occur solely driven by adaptation currents (Figure 4E). Because the adaptation time constant was much longer than the time constants of the E and I rates, the dynamics of the rates r_E(t) and r_I(t), relaxing rapidly to their steady state, can be decoupled from the slow changes in a(t) (Latham et al., 2000; Rinzel and Lee, 1987). The network dynamics can be described in the phase plane (r_E(t), r_I(t)), with variations in a(t) causing a displacement of the E-nullcline. In particular, during U periods the build-up in adaptation produced a downward displacement of the E-nullcline (Figure 4E). If the adaptation strength β was sufficiently large, the displacement increased until the U state was no longer a fixed point and the network transitioned to the only stable fixed point, D. Recovery of adaptation during D periods shifted the E-nullcline upwards until the D state disappeared and there was a transition to the U state (Figure 4E). In this limit-cycle regime the network exhibited an oscillatory behavior with a frequency close to the inverse of the adaptation recovery time constant.
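A minimal Euler integration of such a rate model illustrates the bistability: with the noise amplitude `sigma` set to zero the network stays in whichever state it starts in, while increasing `sigma` produces fluctuation-driven transitions, and a large β instead produces the adaptation-driven limit cycle. All parameter values are illustrative assumptions chosen to satisfy the nullcline conditions described above, not the paper's.

```python
import numpy as np

def simulate(r0, T=2000.0, dt=0.1, sigma=0.0, seed=0):
    """Euler integration of a threshold-linear E-I rate model with
    additive adaptation on E. Hypothetical parameters: I cells have a
    larger threshold and gain than E cells, giving D/U bistability."""
    gE, gI = 1.0, 4.0                    # gains
    thE, thI = 1.0, 25.0                 # effective thresholds
    wEE, wEI, wIE, wII = 5.0, 1.0, 10.0, 0.5
    tauE, tauI, tauA = 10.0, 2.0, 500.0  # time constants (ms)
    beta = 0.5                           # adaptation strength
    rng = np.random.default_rng(seed)
    rE, rI, a = r0
    for _ in range(int(T / dt)):
        # common fluctuating external input (zero for sigma = 0)
        noise = sigma * rng.standard_normal() / np.sqrt(dt)
        inE = wEE * rE - wEI * rI - a - thE + noise
        inI = wIE * rE - wII * rI - thI + noise
        rE += dt / tauE * (-rE + gE * max(inE, 0.0))
        rI += dt / tauI * (-rI + gI * max(inI, 0.0))
        a += dt / tauA * (-a + beta * rE)   # slow adaptation on E only
    return rE, rI, a
```

Starting near the U fixed point the rates settle to a low-rate UP state (adaptation shifts it down only slightly), while starting at zero the network stays silent, the two attractors of the bistable regime.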
When the two types of transitions are combined, two types of stability in U and D states can be distinguished: (1) metastable, referring to a state that was stable under the dynamics of both the rates and the adaptation but could transition away due to input fluctuations; (2) quasi-stable, referring to a state that was stable under the fast rate dynamics but unstable under the slow adaptation dynamics, and that was also susceptible to fluctuation-driven transitions.
UP and DOWN state statistics in the model
To quantify the relative impact of activity fluctuations and adaptation in causing UD transitions in the data, we compared the dynamics of the model for different adaptation strengths β and different values of the E-cell effective threshold θ_E (defined as the difference between the activation threshold and the mean external current). The (θ_E, β) plane was divided into four regions with UD alternations, corresponding to the four combinations of metastability and quasi-stability (Figure 5A). Since only metastable states tend to give exponentially distributed durations with CV ≈ 1, the large variability found in both U and D durations (Figure 2B) constrained the model to the subregion where both states were metastable and UD and DU transitions were driven by fluctuations (red area in Figure 5A). The existence of serial correlations between consecutive U and D periods in the data (Figure 2C–D) ruled out an adaptation-free regime (β = 0), in which transitions were solely driven by fluctuations and the duration of each period was independent of previous durations (Figure 5B right). Thus, we explored a regime with β > 0 but still in the region where both states were metastable (Figure 5B, green square) and the input fluctuations produced alternation dynamics (Figure 5C top) with broad U and D duration distributions and relatively high CVs (Figure 5D top). The magnitude of the fluctuations was adjusted to obtain frequent transitions in this region and serial correlations quantitatively comparable with the data (Figure 5—figure supplement 1). Moreover, the rates showed an autocorrelation function qualitatively similar to the data, with negative side lobes but no clear traces of rhythmicity (Figure 5E). Adaptation introduced correlations across consecutive periods (Figure 5D bottom) because at the transition times the system kept a memory of the previous period in the adaptation value a(t).
For adaptation to introduce substantial correlations, $a(t)$ had to be variable at the transition times (Lim and Rinzel, 2010), a condition that required adaptation to be fast enough to vary within one period, but not so fast that it reached equilibrium (Figure 5C bottom trace). Thus, when a strong fluctuation caused a premature UD transition, i.e. a short ${U}_{k}$, adaptation had no time to build up and tended to be small, increasing the probability of a premature DU transition in the following D period, i.e. a short ${D}_{k+1}$. Conversely, a long ${U}_{k}$ recruited strong adaptation that required a long ${D}_{k+1}$ to recover (see highlighted examples in Figure 5C). In this regime, the dynamics of adaptation a(t) alone did not cause transitions but did strongly modulate the probability that an external fluctuation would cause a transition (Moreno-Bote et al., 2007). Altogether, this analysis suggests that the observed UD dynamics occurred in a regime with strong random fluctuations, that these fluctuations were necessary to cause the transitions, and that adaptation modulated the timing of the transitions and consequently introduced correlations between the durations of consecutive periods.
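This correlation mechanism can be caricatured with a two-state process whose fluctuation-driven transition hazards are modulated by an adaptation variable: leaving U becomes more likely as adaptation builds up, and leaving D becomes more likely as it recovers. This is a deliberately stripped-down sketch with arbitrary rates and parameters, not the paper's rate model.

```python
import numpy as np

def simulate_periods(beta=0.9, n_steps=200000, dt=0.005, tau_a=0.5, seed=1):
    """Two-state (U/D) telegraph process with adaptation-modulated
    hazards. A short U leaves `a` small, so the following D also tends
    to be short (little to recover from), producing Corr(U_k, D_k) > 0.
    All numbers are illustrative."""
    rng = np.random.default_rng(seed)
    state, a, t0 = 1, 0.0, 0
    ups, downs = [], []
    for t in range(n_steps):
        if state == 1:
            a += dt / tau_a * (1.0 - a)              # build-up during U
            hazard = dt * 2.0 * np.exp(4.0 * beta * (a - 0.5))
        else:
            a += dt / tau_a * (0.0 - a)              # recovery during D
            hazard = dt * 2.0 * np.exp(4.0 * beta * (0.5 - a))
        if rng.random() < hazard:                    # stochastic transition
            (ups if state == 1 else downs).append((t - t0) * dt)
            state, t0 = 1 - state, t
    n = min(len(ups), len(downs))                    # pair U_k with D_k
    return np.array(ups[:n]), np.array(downs[:n])
```

With these settings the mean durations land near the ~0.4-0.5 s range reported in the data, and the durations of each U and the D that follows it are positively correlated, as in Figure 2C.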
Dynamics of E and I populations during UP and DOWN states: model and data
According to the model, adaptation currents in the E population can parsimoniously account for the UD serial correlations, but this is in apparent contradiction with the fact that the population rate $R\left(t\right)$ in the data did not decrease significantly during U periods (Figure 3C–E). To reconcile these two seemingly contradictory observations we used the model with the parameters that matched the U and D statistics of the data (Figure 5C–E) to characterize the time course of the rates r_E(t) and r_I(t) averaged across DU and UD transitions. Interestingly, the average r_E(t) at the beginning and at the end of U periods did not show much difference, whereas the average r_I(t) showed a larger decrease over the U period (Figure 6A). Thus, although only the E and not the I population included intrinsic adaptation mechanisms, it was r_I(t) that exhibited the most pronounced decay during U periods. This was a direct consequence of the specific conditions that gave rise to bistability in our model: the difference in thresholds, that is, θ_I > θ_E, and the fact that the I-nullcline has a higher slope than the E-nullcline (Equation 21 in Materials and methods). These features imposed that, as adaptation built up during U periods, the downward displacement of the E-nullcline caused a greater decrease in r_I(t) than in r_E(t) (compare 'decay' colored bands in Figure 6B). With this arrangement the drop in r_E(t) could be made arbitrarily small by increasing the slope of the I-nullcline (Figure 6B). Note that this feature of the model does not depend on its specific regime of operation, as it would similarly apply in an adaptation-driven regime (Figure 4E). During D periods the average r_E(t) did show a substantial increase due to the recovery of adaptation, whereas r_I(t) did not. This was because in the D state the quiescent network behaved as isolated neurons, reflecting the dynamics of intrinsic adaptation, which was present only in E cells.
In sum, if the majority of the neurons that we recorded experimentally were excitatory, the model could explain why adaptation currents did not cause a significant decrease in the average rate during U periods (Figure 3C–D). The model in addition predicts that the rate of inhibitory neurons should exhibit a noticeable decrease during U periods.
Motivated by this prediction, we investigated the dynamics of the rates of excitatory and inhibitory neurons during U and D periods in the experimental data. Based on spike waveforms, isolated units from n = 5 experiments were classified into putative interneurons (I) and putative excitatory neurons (E), following previously described procedures (Barthó et al., 2004). The average rates for the E and I populations (${R}_{E}\left(t\right)$ and ${R}_{I}\left(t\right)$, respectively) displayed similar profiles across UD alternations, although higher values were observed for I cells during Us (see example experiment in Figure 6C). To assess the modulation of the rates during U periods, we looked at the normalized individual rates of all the E and I neurons (n = 330 and 21, respectively). As predicted by the model (Figure 6A–B), I cells displayed a significant rate decay during U periods that was not observed in E cells (Figure 6D; mixed-effects ANOVA with fixed factors neuron type (E/I) and onset/offset, and neuron identity and experiment as random factors: interaction neuron type x onset/offset F(1,349) = 6.3, p=0.013). During D periods, E cells also showed a significant increase in rate (Wilcoxon signed rank test p=0.0092), just like that observed in the whole cell population, whereas no rate change was found in I cells (not shown). Although these changes observed during D periods were also predicted by the model, properly testing the significance of this interaction would require a larger data set with more I cells. The validation of this prediction of counterintuitive emergent dynamics of E and I rates during U periods strongly suggests that the mechanism dissected by the model underlies the putative bistability observed in cortical circuit dynamics.
Dynamics of state transitions in a spiking EI network
To assess whether the mechanism for state transitions proposed by the rate model could generate UP-DOWN dynamics in a more biophysically realistic circuit, we built a network composed of N_E = 4000 excitatory and N_I = 1000 inhibitory leaky integrate-and-fire spiking units (Ricciardi, 1977) (all-to-all connectivity). We used current-based synapses (Brunel and Sergi, 1998) and introduced a spike-based afterhyperpolarization (AHP) current in E cells (Wang, 1998; La Camera et al., 2004). The E-I asymmetry in spike threshold and f-I gain described in the rate model was implemented and, using standard mean-field methods (Amit and Tsodyks, 1991), we found the same network bistability described above (compare Figures 7A and 4C): a saddle-node bifurcation gave rise to a quiescent branch (DOWN) coexisting with a low-rate branch (UP; Figure 7B). Numerical simulations showed that while the network was in the UP state the AHP current increased, moving the system along the upper branch towards the saddle node and causing only a small decrease in r_E. However, because we chose a small AHP amplitude so that the network operated in the bistable regime (see fixed points in Figure 7B), the adaptation build-up alone did not trigger an UP-to-DOWN transition. It was the current fluctuations produced by the irregular activity during UP periods that triggered UP-to-DOWN transitions. However, once the network was in the DOWN state, the external independent Gaussian inputs only caused subthreshold membrane fluctuations in E cells, which sat far away from the spiking threshold (voltage std. dev. 2.5 mV with a distance from resting voltage to threshold of 12.4 mV). In these conditions, there was no spiking activity during DOWN periods and the network could not transition to the UP state (not shown).
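The effect of the spike-based AHP current can be illustrated on a single leaky integrate-and-fire unit: each spike increments a slowly decaying hyperpolarizing current, so interspike intervals lengthen until the adaptation equilibrates. Parameter values are arbitrary illustrations, not those of the network model.

```python
import numpy as np

def lif_with_ahp(I_ext=1.8, T=500.0, dt=0.1):
    """Single leaky integrate-and-fire neuron with a spike-triggered
    afterhyperpolarizing (AHP) current, the adaptation mechanism given
    to the model's E cells. Units are arbitrary; parameters illustrative."""
    tau_m, tau_a = 20.0, 200.0    # membrane and AHP time constants (ms)
    v_th, v_reset = 1.0, 0.0      # spike threshold and reset
    g_a = 0.15                    # AHP increment per spike
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt / tau_m * (-v + I_ext - a)   # leaky integration
        a += dt / tau_a * (-a)               # AHP decay
        if v >= v_th:
            v = v_reset
            a += g_a                          # spike-triggered adaptation
            spikes.append(step * dt)
    return np.array(spikes)
```

In the network this current accumulates during UP periods and moves the population along the upper branch towards the saddle node, which is why its recovery during DOWN periods gates the effectiveness of incoming fluctuations.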
To make these subthreshold fluctuations effective in driving transitions, we first depolarized neurons so that their resting potential during the D state was closer to threshold and the recovery from adaptation alone was almost sufficient to cause the transitions (Figure 7—figure supplement 2A–E). This ad hoc depolarization was sufficient to generate UP-DOWN alternations but prevented the membrane potential from showing bimodality (Figure 7—figure supplement 2F), the intracellular signature of UP and DOWN states. Moreover, the alternations had a very small serial correlation between consecutive D and U periods, Corr(D, U) (Figure 7—figure supplement 2I), because adaptation at the time of the DOWN-to-UP transition was narrowly distributed, did not retain information about the length of the DOWN period, and could thus not constrain the duration of the following U period (Lim and Rinzel, 2010).
We thus reasoned that the most parsimonious way to cause a DOWN-to-UP transition without disrupting the bimodality of the membrane potential was to keep resting neurons hyperpolarized and to introduce stochastic, brief, external excitatory synchronous inputs that caused large-amplitude depolarizing voltage bumps in a targeted subpopulation of E and I neurons (DeWeese and Zador, 2006). The occurrence times of these bumps followed Poisson statistics (Tan et al., 2013) and their frequency (~2–3 Hz) and amplitude (~10–15 mV) were set so that (i) they could cause DOWN-to-UP transitions (Figure 7C–E) with non-rhythmic structure (Figure 7G) and yield U and D interval distributions similar to the data (Figure 7H), and (ii) their effectiveness in causing a transition was not all-or-none but depended on the population-averaged AHP current amplitude $I_{a}$ (Figure 7—figure supplement 1E). This dependence occurs because the distance to the saddle point limiting the basin of attraction of the DOWN state decreases with $I_{a}$ (Figure 7B, red dashed line). This meant that some kicks during DOWN periods failed to cause a transition, giving rise to sporadic sparse firing during DOWN periods, also seen as large bumps in the membrane potential of the targeted cells (see asterisks in Figure 7C; see Discussion). Because of this dependence of the transition probability on the recovery of the AHP current, the transition dynamics displayed serial correlations Corr(D,U) (Figure 7I), as observed in the data (Figure 2C–D). Once in the UP state, the external kicks caused an excess of excitatory and especially inhibitory activity that destabilized the UP state and generated a transition to the DOWN state. Because the effectiveness of the kicks causing these transitions also depended on the average AHP current (Figure 7—figure supplement 1F), there were significant serial correlations Corr(U,D) quantitatively comparable to the data (Figure 7I).
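For illustration, Poisson kick timing of this kind can be sketched as follows (an illustrative sketch only; the `poisson_kick_times` helper and its parameters are ours, not part of the published model code):

```python
import random

def poisson_kick_times(rate_hz, duration_s, seed=1):
    """Generate Poisson-timed synchronous 'kick' events by drawing
    exponential inter-event intervals with mean 1/rate_hz."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return times
        times.append(t)

# e.g. ~2.5 kicks/s over a 100 s simulation, within the text's ~2-3 Hz range
kicks = poisson_kick_times(2.5, 100.0)
```

Because the inter-event intervals are exponential, the kick times are non-rhythmic by construction, matching the Poisson statistics reported for spontaneous bumps (Tan et al., 2013).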
As explained by the rate model, the averaged population rate r_{E}(t) showed a very weak decrease along the UP period whereas r_{I}(t) decayed much more strongly, until the overshoot caused by kicks at the UP offset (Figure 7J). Neurons displayed a bimodal distribution of the membrane potential (Figure 7F; Figure 7—figure supplement 1E) and during UP periods they fired low-rate irregular spike trains (CV of the interspike interval was 0.74 for E cells and 0.94 for I cells). In sum, a spiking network model was able to reproduce the results described in the rate model, provided there exists a mechanism generating stochastic, synchronous, large-amplitude bumps.
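The adaptation mechanism at work here can be illustrated with a minimal single-neuron sketch: a leaky integrate-and-fire unit with a spike-triggered AHP current whose build-up slows firing. All units and parameter values below are dimensionless and purely illustrative, not those of the full network:

```python
def simulate_lif_ahp(T=1.0, dt=1e-4, I_ext=1.8, tau_m=0.02,
                     v_th=1.0, v_reset=0.0, tau_a=0.5, b=0.3):
    """Euler integration of a leaky integrate-and-fire neuron with a
    spike-triggered after-hyperpolarization (AHP) current I_a.
    Returns the list of spike times (illustrative parameters)."""
    v, I_a, spikes = 0.0, 0.0, []
    for i in range(int(T / dt)):
        # membrane: dv/dt = (-v + I_ext - I_a) / tau_m
        v += dt * (-v + I_ext - I_a) / tau_m
        # AHP decays between spikes: dI_a/dt = -I_a / tau_a
        I_a += dt * (-I_a / tau_a)
        if v >= v_th:
            v = v_reset
            I_a += b          # spike-triggered AHP increment
            spikes.append(i * dt)
    return spikes

spikes = simulate_lif_ahp()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
# interspike intervals lengthen as I_a builds up (adaptation)
```

The sketch shows only the fatigue mechanism; in the network model the AHP alone does not terminate UP states, it merely moves the system closer to the saddle so that fluctuations can do so.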
Discussion
Using cortical population recordings, we have shown that UP and DOWN period durations are irregular and show positive serial correlations, but that there is no significant decrease of population rate during UP periods. These findings seem inconsistent with one another, as some support, while others challenge, the idea that UP-DOWN dynamics are caused by cellular or synaptic adaptive mechanisms. Using a standard E-I rate model network, we have proposed a novel bistable regime based only on the expansive threshold nonlinearity of the transfer function and on a reported difference between E and I spiking thresholds. While fluctuations produce transitions between the quiescent state (D) and the inhibition-stabilized state of arbitrarily low rate (U), adaptation acting on the E population facilitates the effect of fluctuations causing the transitions. Paradoxically, because of the difference in E and I thresholds, adaptation causes a marginal decay of E rates but a significant decay of I rates during UP periods. This counterintuitive prediction, specific to our model, was validated in the experimental data.
Adaptive processes constitute the mechanistic hallmark for the generation of cortical UP and DOWN dynamics (Contreras et al., 1996; Sanchez-Vives and McCormick, 2000; Timofeev et al., 2000). This principle has been used in several computational models, by implementing synaptic short-term depression (Bazhenov et al., 2002; Benita et al., 2012; Ghorbani et al., 2012; Hill and Tononi, 2005; Holcman and Tsodyks, 2006; Mejias et al., 2010) or activity-dependent adaptation currents (Compte et al., 2003b; Destexhe, 2009; Latham et al., 2000; Mattia and Sanchez-Vives, 2012). Consistent with an adaptive process generating the dynamics, UP and DOWN states observed in vitro display clear rhythmicity with Gaussian-shaped UP and DOWN duration distributions (Mattia and Sanchez-Vives, 2012). An in vivo study using ketamine anesthesia in mice reported reduced UP and DOWN duration variability across multiple cortical areas, with CVs around 0.2–0.4 (Ruiz-Mejias et al., 2011). Moreover, a comparison of the UP and DOWN dynamics observed in the cat under ketamine anesthesia and those found in slow wave sleep (SWS) showed that the alternations were more rhythmic under ketamine (Chauvette et al., 2011). In contrast, our data displayed large variability (mean CV(U) ≈ CV(D) ≈ 0.7) and skewed distributions of UP and DOWN period durations (Figure 2B), in agreement with previous studies using urethane anesthesia (Dao Duc et al., 2015; Stern et al., 1997). Although a direct comparison between the UP-DOWN dynamics under urethane anesthesia and during natural sleep has not been made, urethane seems to mimic sleep in several aspects. First, it induces spontaneous alternations between synchronized and desynchronized states (Curto et al., 2009; Steriade et al., 1994), resembling the alternations between SWS and REM sleep (Clement et al., 2008; Whitten et al., 2009).
Second, the irregular UP-DOWN transitions observed under urethane anesthesia seem to resemble the variability observed in SWS (Ji and Wilson, 2007; Johnson et al., 2010). Preliminary analysis of rat and mouse prefrontal cortex during SWS with the same population-based UD detection methods used here (Materials and methods) showed that U periods had comparable mean length but were more irregular (CV ~1) than under urethane anesthesia (Figure 1B), whereas D periods were shorter (mean ~150 ms) and slightly more regular (CV ~0.5) (unpublished observations). Such an asymmetry in the duration and irregularity of U and D periods can be easily reproduced in our model by choosing parameters in the mixed region where U is metastable and D is quasi-stable (Figure 5A, light orange).
In addition, we found non-zero correlations between consecutive DU and UD period durations, a feature that was not observed previously in similar experimental conditions (Stern et al., 1997). Reduced statistical power (~30 UD/DU pairs were considered by Stern et al. (1997), versus a range of 462–758 pairs in our n = 7 experiments) and different UD detection methods (intracellular membrane potential thresholding) could be the reasons for this discrepancy.
Bistability in cortical networks at low firing rates
Bistability in a dynamical system refers to the coexistence of two possible steady states between which the system can alternate (Angeli et al., 2004). This principle has been used to interpret UP and DOWN states as two attractors of cortical circuits (Cossart et al., 2003; Shu et al., 2003) and seems to underlie higher cognitive functions (Compte, 2006; Durstewitz, 2009). In particular, multistability in recurrent cortical networks has been postulated to underlie the persistent activity observed during the delay period in working memory tasks (Amit and Brunel, 1997). Extensive theoretical work has shown that, based on the change in curvature of the neuronal f-I curve (that is, from expansive to contractive), recurrent network models generate two types of coexisting attractors: a spontaneous state with arbitrarily low rates (falling on the expansive part of the f-I curve) and a sustained activity attractor in which the reverberant activity of a subpopulation of neurons can be maintained at a rate on the contractive part of the f-I curve (Amit and Brunel, 1997; Brunel, 2000a; Wang, 2001). Thus, unless additional mechanisms are included, e.g. synaptic short-term depression and facilitation (Barbieri and Brunel, 2007; Hansel and Mato, 2013; Mongillo et al., 2012) or fine-tuned E-I balance (Renart et al., 2007), the rate of persistent states is lower-bounded by the rate at which the f-I curve changes from convex to concave (~10–20 spikes/s). Moreover, because of this, the sustained attractor operates in an unbalanced suprathreshold regime where spike trains tend to be more regular (i.e. lower interspike-interval CV; Barbieri and Brunel, 2007; Hansel and Mato, 2013; Renart et al., 2007) than those observed in the data (Compte et al., 2003b).
UP and DOWN states, in contrast, represent transitions between very different levels of activity: a quiescent state and a very low rate state. Given that we recorded neurons extracellularly, our estimate of the mean firing rate during UP periods (3.7 spikes/s) is most likely an overestimation. Whole-cell intracellular recordings have reported rates in the range 1–2 spikes/s (Constantinople and Bruno, 2011), 0.4 spikes/s in L2/3 pyramidal cells of the somatosensory cortex of awake mice (Gentet et al., 2012), 0.1 spikes/s in L2/3 pyramidal cells in somatosensory cortex during UP periods in anesthetized rats (Waters and Helmchen, 2006), and 0.1–0.3 spikes/s in V1 neurons of awake mice (Haider et al., 2013). Juxtacellular recordings have found values near 4–5 spikes/s (Massi et al., 2012; Sakata and Harris, 2009), whereas calcium imaging experiments report spontaneous rates < 0.1 spikes/s (Kerr et al., 2005). Despite UP rates being so low, rate models have commonly used the change in curvature of the transfer function to generate UP and DOWN dynamics (Curto et al., 2009; Lim and Rinzel, 2010; Mattia and Sanchez-Vives, 2012; Mochol et al., 2015). It is also for this reason that most spiking network models generating UP and DOWN transitions exhibit unrealistically high rates during U periods (in the range 10–40 spikes/s) with relatively regular firing (Bazhenov et al., 2002; Compte et al., 2003b; Destexhe, 2009; Hill and Tononi, 2005).
An alternative mechanism to generate bistability between UP and DOWN states has been the shunting or divisive effect of inhibitory synaptic conductances, a mechanism that can produce non-monotonic transfer functions and yield bistability between a zero-rate state and a state of very low rate (Kumar et al., 2008; Latham et al., 2000; Vogels and Abbott, 2005). Latham and colleagues (Latham et al., 2000) addressed the question of how to obtain a state of low firing rate (i.e. <1 spikes/s) in a recurrent E-I network and concluded that there were two alternative mechanisms: the most robust was to have a single attractor that relied on the excitatory drive from endogenously active neurons in the network or from external inputs. In fact, excitatory external inputs have been widely used to model low-rate tonic spontaneous activity (i.e. no DOWN states) in E-I networks of current-based spiking units (Amit and Brunel, 1997; Brunel, 2000b; Vogels and Abbott, 2005). Alternatively, in the absence of endogenous or external drive, a silent attractor appears and a second attractor can emerge at a low rate over a limited range of parameters if inhibition exerts a strong divisive influence on the excitatory transfer function (Latham et al., 2000). Based on this, a spiking network of conductance-based point neurons with no external/endogenous activity could alternate between UP (0.2 spikes/s) and DOWN (0 spikes/s) periods via spike-frequency adaptation currents. Although the authors did not characterize the statistics of UP and DOWN periods, this network could in principle generate positive correlations between consecutive U and D period durations, Corr(U,D), as long as rate fluctuations caused UP-to-DOWN transitions for sufficiently different adaptation values (Lim and Rinzel, 2010).
However, since DOWN-to-UP transitions were caused by recovery from adaptation, the duration of a D period could not influence the duration of the following U period, and their network could not produce correlations between consecutive DU periods (i.e. Corr(D,U) ≈ 0, as in the network shown in Figure 7—figure supplement 2). Moreover, the model lacked bimodality in the membrane voltage and did not specifically predict a distinct decay of r_{E} and r_{I} during UP periods.
Our model proposes a more parsimonious mechanism underlying UP-DOWN bistability: the ubiquitous expansive threshold nonlinearity of the transfer function, plus the asymmetry in threshold (θ_{I} > θ_{E}) and gain (larger for I than for E cells). We used a threshold-linear function for simplicity, but other more realistic choices (e.g. threshold-quadratic) produced the same qualitative results. The threshold asymmetry is supported by in vitro patch-clamp experiments revealing that the firing threshold of inhibitory fast-spiking neurons, measured as the lowest injected current causing spike firing, is higher than that of excitatory regular-spiking neurons (Cruikshank et al., 2007; Schiff and Reyes, 2012). Inhibition in this model becomes active when external inputs onto E cells during the DOWN state are strong enough to push the system above the separatrix (Figure 4C) and ignite the UP state. Once recruited, inhibition is necessary to stabilize the activity because, in its absence, the positive feedback would make the UP state unstable, a condition known as an Inhibition-Stabilized Network (Ozeki et al., 2009). In this regime, excitatory currents are suprathreshold but, when combined with inhibition, result in a net subthreshold input current yielding low-rate irregular firing (Figure 7—figure supplement 2).
A direct implication of the specific mechanism of bistability in our model was that intrinsic adaptation of excitatory neurons (McCormick et al., 1985) did not cause a noticeable decrease in r_{E} during UP periods but instead produced a significant decay in the inhibitory rate r_{I}. We confirmed this prediction in our data (Figure 6C–D). Interestingly, the same effect was also observed in ketamine-anesthetized animals, from both extracellular recordings (Luczak and Barthó, 2012) and intracellular recordings resolving synaptic conductances (Haider et al., 2006). During DOWN periods, in contrast, the network is not in a balanced state, and recovery from adaptation caused a significant increase in the rate of putative excitatory neurons, as predicted by the model. In sum, our results present the first E-I network model with linearly summed inputs exhibiting bistability between a quiescent state and an inhibition-stabilized state of arbitrarily low rate.
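A minimal sketch of the proposed bistable regime can be written down directly from the rate equations: a threshold-linear E-I model with θ_I > θ_E and g_I > g_E, and with g_E·w_EE > 1 so that the E subnetwork alone is unstable and the active state is inhibition-stabilized. All parameter values here are dimensionless and illustrative, not those of the paper:

```python
def phi(x, gain, theta):
    """Threshold-linear transfer function: gain * max(x - theta, 0)."""
    return gain * max(x - theta, 0.0)

def simulate(rE, rI, steps=30000, dt=0.001,
             wEE=2.0, wEI=1.0, wIE=1.0, wII=0.5,
             gE=1.0, gI=4.0, thE=1.0, thI=4.0,
             tauE=1.0, tauI=0.5):
    """Euler integration of the E-I rate dynamics. theta_I > theta_E and
    g_I > g_E, as in the text; gE*wEE = 2 > 1, so the UP fixed point is
    inhibition-stabilized. Illustrative parameters only."""
    for _ in range(steps):
        inE = wEE * rE - wEI * rI
        inI = wIE * rE - wII * rI
        rE += dt * (-rE + phi(inE, gE, thE)) / tauE
        rI += dt * (-rI + phi(inI, gI, thI)) / tauI
    return rE, rI

down = simulate(0.5, 0.5)    # small perturbation decays: quiescent attractor
up = simulate(12.0, 10.0)    # large kick settles onto the active branch
```

With these parameters the quiescent state (0, 0) and an active fixed point (r_E, r_I) = (13, 12) coexist; the same construction works for arbitrarily low active rates by rescaling, which is the point of the mechanism.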
The role and origin of fluctuations in UPDOWN switching
Our findings stress the role of input fluctuations in inducing transitions between the UP and DOWN network attractors, because noise-induced alternations generate periods with large variability, as found in the data (Figure 2A–B). Adaptation was also necessary to introduce positive serial correlations and to reproduce the observed gamma-like UP-DOWN duration distributions (compare Figure 2A–B with Figures 5D and 7H), because it caused a soft refractory period after each transition, decreasing the duration CVs below one (Figure 2B). In our rate model, fluctuations were simply introduced by a time-varying Gaussian input, so that in both DU and UD transitions the noise had the same external origin. In cortical circuits, however, these two transitions are very different: while in UP-to-DOWN transitions the fluctuations can originate in the stochasticity of the spiking activity during the UP period, DOWN-to-UP transitions depend either on local circuit mechanisms that do not need spiking activity or on external inputs to escape from a quiescent state. Our spiking E-I network model could use the stochasticity of the recurrent spiking activity to cause transitions from a low-rate UP state to a quiescent DOWN state, but needed synchronous input bumps to cause DOWN-to-UP transitions (Figure 7). Other models have proposed that synaptic noise (e.g. spontaneous miniatures) could cause the transitions from the quiescent state (i.e. DOWN to UP) (Bazhenov et al., 2002; Holcman and Tsodyks, 2006; Mejias et al., 2010). Our analysis shows, however, that to cause noise-driven transitions from a quiescent state using independent synaptic fluctuations into each cell, (1) neurons need to be depolarized unrealistically close to threshold and hence do not display bimodal voltage distributions, and (2) the magnitude of adaptation must be tuned such that it brings neurons close to threshold, allowing the sparse firing to trigger a transition.
In this condition, moreover, the network does not generate positive correlations between D and consecutive U intervals (Figure 7—figure supplement 2). For this reason we instead used synchronous external input kicks as the inducers of DOWN-to-UP transitions. These input kicks were also effective in driving UP-to-DOWN transitions, but they caused an excess of E and especially I activity at the UP offset (see UP-offset peaks in Figure 7J). This feature was not observed in our data but has been observed when triggering UP-to-DOWN transitions with electrical stimulation (Shu et al., 2003). When kicks were suppressed during UP periods, UP-to-DOWN transitions could be triggered by intrinsically generated fluctuations in the spiking activity and the offset peaks in the E and I rates were largely reduced (not shown). This suggests that the two types of transitions could be triggered by different types of events: DOWN-to-UP transitions would be triggered by synchronous bumps, whereas UP-to-DOWN transitions would be triggered by fluctuations in the rates of the two populations. Modeling such a mixed-factors network would require considering more realistic connectivity patterns (e.g. sparse and spatially organized) in order for the network to intrinsically generate more realistic spiking variability in the population (Amit and Brunel, 1997; Vogels and Abbott, 2005; Renart et al., 2010; Rosenbaum et al., 2017). For simplicity, we opted for an all-to-all connected network (as opposed to, e.g., sparse connectivity) because it was simpler to analyze theoretically and simulate numerically. In particular, the fluctuations of synaptic input in an all-to-all network are set as a fixed parameter, independent of the recurrent activity. This allowed us to find the appropriate network states and determine their stability using standard mean-field techniques (Roxin and Compte, 2016), and then adjust the magnitude of the fluctuations and kicks to reproduce the transition dynamics using numerical simulations.
We leave for future study the extension of these results to more realistic sparse connectivity patterns. In a sparse, randomly connected E-I network, for instance, it would be of interest to study the behavior of this type of bistability as the network size N increases and synaptic couplings are scaled as in a balanced network, that is, $J \sim 1/\sqrt{N}$ (Renart et al., 2010; van Vreeswijk and Sompolinsky, 1998). In the large-N limit, balanced networks linearly transform external inputs into population average output rate (van Vreeswijk and Sompolinsky, 1998). This implies that the larger the network, the more fine-tuning of the parameters would be necessary to generate this type of bistability.
Previous evidence supporting membrane voltage synchronous bumps
Evidence for temporally sparse synchronous inputs comes from intracellular membrane potential recordings under some types of anesthesia (pentobarbital or halothane) showing «presynaptic inputs [...] organized into quiescent periods punctuated by brief highly synchronous volleys, or ‘bumps’» (DeWeese and Zador, 2006). We postulate that these spontaneous bumps (DeWeese and Zador, 2006; Tan et al., 2013; Taub et al., 2013) (1) are caused by synchronous external inputs impinging on the neocortex, possibly from thalamocortical neurons (Crunelli and Hughes, 2010), since spontaneous bumps resemble sensory evoked responses (DeWeese and Zador, 2006), or from hippocampal sharp-wave ripples (Battaglia et al., 2004); (2) have occurrence times resembling a Poisson stochastic process rather than a rhythmic input (Tan et al., 2013); and (3) lie at the origin of the DOWN-to-UP transitions that we observe. Despite the fact that UP-DOWN-like activity can emerge in cortical slices in vitro (Cossart et al., 2003; Fanselow and Connors, 2010; Mann et al., 2009; Sanchez-Vives and McCormick, 2000), the intact brain can generate more complex UP-DOWN patterns than the isolated cortex, with subcortical activity in many areas correlating with transition times (Battaglia et al., 2004; Crunelli et al., 2015; Crunelli and Hughes, 2010; David et al., 2013; Lewis et al., 2015; Slézia et al., 2011; Ushimaru et al., 2012). A recent study, however, reported very large (>20 mV) non-periodic synchronous bumps in cortical slices in vitro (Graupner and Reyes, 2013), suggesting that these events could also be generated within local cortical circuits.
These arguments suggest that DOWN-to-UP transitions are, at least in part, caused by punctuated external synchronous inputs (Battaglia et al., 2004; Johnson et al., 2010), with slow intrinsic adaptation mechanisms modulating the probability that these events trigger a transition (Moreno-Bote et al., 2007). This complements the view that UP-DOWN dynamics reflect an endogenous oscillation of the neocortex and connects to the role of UP-DOWN states in memory consolidation: because in the active attractor (UP) the stationary activity is irregular and asynchronous (Renart et al., 2010), the existence of a silent attractor enables synchronous transient dynamics in the form of DOWN-to-UP transitions. These transients generate precise temporal relations among neurons in a cortical circuit (Luczak et al., 2007), which can drive the synaptic plasticity underlying learning and memory (Peyrache et al., 2009). We speculate that, while the transient dynamics are triggered by external inputs, adaptation, by introducing refractoriness in this process, parses transition events, preventing the temporal overlap of information packets (Luczak et al., 2015).
Materials and methods
Experimental procedures
This study involved analysis of previously published and new data. The previously published data (Barthó et al., 2004) were obtained under a protocol approved by the Rutgers University Animal Care and Use Committee. One new data set was collected in accordance with a protocol approved by the Animal Welfare Committee at the University of Lethbridge (protocol # 0907). All surgeries were performed under anesthesia, and every effort was made to minimize suffering. Adult male Sprague-Dawley rats (250–400 g) were anesthetized with urethane (1.5 g/kg), and supplemental doses of 0.15 g/kg were given when necessary, several hours after the initial dose. We also used an initial dose of ketamine (15–25 mg/kg) before the surgery to induce the anesthetized state quickly. We then performed a craniotomy over the somatosensory cortex, whose position was determined using stereotaxic coordinates. Next, 32- or 64-channel silicon microelectrodes (Neuronexus Technologies, Ann Arbor, MI) were slowly inserted into deep layers of the cortex (depth 600–1200 μm; lowering speed ~1 mm/hour). Probes had either eight shanks, each with eight staggered recording sites per shank (model Buzsaki64A64), or four shanks with two tetrode configurations in each (model A4 × 2tet5mm150200312A32). Neuronal signals were high-pass filtered (1 Hz), amplified (1,000X) using a 64-channel amplifier (Sensorium Inc., Charlotte, VT), recorded at 20 kHz sampling rate with 16-bit resolution using a PC-based data acquisition system (United Electronic Industries, Canton, MA) and custom-written software (Matlab Data Acquisition Toolbox, MathWorks), and stored on disk for further analysis.
Data analysis
Spike sorting was performed using previously described methods (Harris et al., 2000). Briefly, units were isolated by a semi-automatic algorithm (http://klustakwik.sourceforge.net) followed by manual clustering procedures (http://klusters.sourceforge.net). We defined the Population activity as the merged spike trains of all the well-isolated units.
Putative E/I neuronal classification
Isolated units were classified into narrow-spiking (I) and broad-spiking (E) cells based on three features extracted from their mean spike waveforms: spike width, asymmetry, and trough-to-peak distance. The two classes were grouped in feature space by k-means clustering (Barthó et al., 2004; Csicsvari et al., 1998; Sirota et al., 2008).
Synchronized state assessment
We classified the brain state based on the silence density, defined as the fraction of 20 ms bins with zero spikes in the Population activity within 10 s windows (Mochol et al., 2015; Renart et al., 2010). Epochs of consecutive windows with silence density above 0.4 and standard deviation below 0.1, lasting longer than 5 min, were considered sustained synchronized brain states and were used for further analysis (synchronized state durations, mean ± SD: 494 ± 58 s, n = 7 epochs).
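The silence-density criterion can be sketched as follows (illustrative code; the function name and interface are ours, not the published analysis code):

```python
def silence_density(spike_times, t0, window=10.0, bin_size=0.02):
    """Fraction of bin_size (20 ms) bins containing zero population
    spikes within the window [t0, t0 + window); times in seconds."""
    n_bins = int(round(window / bin_size))
    counts = [0] * n_bins
    for t in spike_times:
        i = int((t - t0) / bin_size)
        if 0 <= i < n_bins:
            counts[i] += 1
    return sum(1 for c in counts if c == 0) / n_bins

# per the text, a window is part of a synchronized epoch if density > 0.4
```

Applied in sliding 10 s windows, this yields the density trace whose mean and standard deviation define the synchronized-state epochs.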
UP and DOWN transitions detection
UP-DOWN phases have been commonly defined from intracellular recordings by detecting the crossing times of a heuristic threshold set on the membrane potential of individual neurons (Mukovski et al., 2007; Stern et al., 1997), from local field potential signals (Compte et al., 2008; Mukovski et al., 2007), or from these combined with the information provided by multi-unit activity (Haider et al., 2006; Hasenstaub et al., 2007). Defining UP-DOWN phases from single-unit recordings is more challenging because individual neurons fire at low rates, discharging very few action potentials in each UP phase (Constantinople and Bruno, 2011; Gentet et al., 2012; Waters and Helmchen, 2006). However, pooling the spiking activity of many neurons into a population spike train reveals the presence of co-fluctuations in the firing activity of the individual neurons and allows accurate detection of UP-DOWN phases (Luczak et al., 2007; Saleem et al., 2010). We used a discrete-time hidden semi-Markov probabilistic model (HMM) to infer the discrete two-state process that most likely generated the population activity (Chen et al., 2009). Thus, the population activity spike count was considered as a single stochastic point process whose rate was modulated by the discrete hidden state and the firing history of the ensemble of neurons recorded. To estimate the hidden state at each time, the method used the expectation-maximization (EM) algorithm for the estimation of the parameters of the statistical model (Chen et al., 2009). Although the discrete-time HMM provides a reasonable state estimate with a rather fast computing speed, the method is restricted to locating the UP and DOWN transitions with a time resolution given by the bin size T of the population activity spike count (10 ms in our case). The initial parameters used for the detection were: bin size T = 10 ms; number of history bins J = 2 (J sets the length of the memory, i.e. J = 0 is a pure Markov process); history-dependence weight β = 0.01 (i.e. β = 0 for a pure Markov process); transition matrix P_{DU} = P_{UD} = 0.9, P_{DD} = P_{UU} = 0.1; rate during UP periods α = 3; and rate difference between DOWN and UP periods μ = −2 (Chen et al., 2009). The algorithm gives an estimate of the state of the network in each bin T. Adjacent bins in the same state are then merged to obtain the series of putative UP (U) and DOWN (D) periods. The series is defined by the onset times ${\left\{{t}_{i}^{on}\right\}}_{i=1}^{M}$ and offset times ${\left\{{t}_{i}^{off}\right\}}_{i=1}^{M}$ of the Us, where M is the total number of Us, which determine the ith UP and DOWN period durations as (see Figure 1C):
$${U}_{i}={t}_{i}^{off}-{t}_{i}^{on},\qquad {D}_{i}={t}_{i}^{on}-{t}_{i-1}^{off}$$
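The bin-merging step described above can be sketched as follows (illustrative code; the per-bin state sequence would come from the HMM, which is not reimplemented here):

```python
def merge_states(states, T=0.01):
    """Merge adjacent bins with the same inferred state (0 = DOWN,
    1 = UP) into a series of U-period onset/offset times; T is the
    bin size in seconds (10 ms in the text)."""
    t_on, t_off = [], []
    prev = 0
    for i, s in enumerate(states):
        if s == 1 and prev == 0:
            t_on.append(i * T)          # DOWN-to-UP transition
        if s == 0 and prev == 1:
            t_off.append(i * T)         # UP-to-DOWN transition
        prev = s
    if prev == 1:                       # close a trailing UP period
        t_off.append(len(states) * T)
    return t_on, t_off
```

The resulting onset/offset series is the input to all the duration statistics below.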
Statistics of UP and DOWN durations
Request a detailed protocolThe mean and the coefficient of variation of U_{i} were defined as
where:
and equivalently for $<{D}_{i}>$ and $CV\left({D}_{i}\right)$. We controlled whether variability in U_{i} was produced by slow drifts by computing CV_{2} a measure of variability not contaminated by nonstationarities of the data (Compte et al., 2003a; Holt et al., 1996).
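These duration statistics can be sketched in a few lines (illustrative code; `durations`, `cv`, and `cv2` are our helper names):

```python
import statistics

def durations(t_on, t_off):
    """U_i = t_off[i] - t_on[i]; D durations are the DOWN periods
    between an offset and the next onset (as defined in the text)."""
    U = [off - on for on, off in zip(t_on, t_off)]
    D = [on - off for on, off in zip(t_on[1:], t_off)]
    return U, D

def cv(x):
    """Coefficient of variation: std / mean."""
    return statistics.pstdev(x) / statistics.fmean(x)

def cv2(x):
    """Local variability CV2 = mean of 2|x[i+1]-x[i]| / (x[i+1]+x[i]),
    insensitive to slow drifts (Holt et al., 1996)."""
    return statistics.fmean(2 * abs(b - a) / (b + a)
                            for a, b in zip(x, x[1:]))
```

CV compares the global spread to the mean, while CV2 compares only adjacent periods, which is why a slow drift inflates CV but not CV2.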
The serial correlation between ${U}_{i}$ and ${D}_{i+k}$, with k setting the lag in the UD series (e.g. k = 0 refers to the immediately previous DOWN period and k = 1 to the consecutive one), was quantified with the Pearson correlation coefficient defined as:
$$Corr\left({U}_{i},{D}_{i+k}\right)=\frac{Cov\left({U}_{i},{D}_{i+k}\right)}{{\sigma }_{U}{\sigma }_{D}}$$
where the covariance was defined as:
$$Cov\left({U}_{i},{D}_{i+k}\right)=\frac{1}{M}\sum _{i=1}^{M}\left({U}_{i}-<{U}_{i}>\right)\left({D}_{i+k}-<{D}_{i+k}>\right)$$
Values of ${U}_{i}$ and ${D}_{i}$ differing by more than 3 standard deviations from the mean were discarded from the correlation analysis (circles in Figure 2C). To remove correlations between ${U}_{i}$ and ${D}_{i}$ produced by slow drifts in the durations, we used resampling methods developed to remove slow correlations among spike trains (Amarasingham et al., 2012). We generated the $l$-th shuffled series of U periods ${\left\{\widehat{{u}_{i}^{l}}\right\}}_{i=1}^{M}$ by randomly shuffling the order of the Us in the original series ${\left\{{U}_{i}\right\}}_{i=1}^{M}$ within intervals of 30 s. The same was done to define the shuffled series of D periods ${\left\{\widehat{{d}_{i}^{l}}\right\}}_{i=1}^{M}$. The two shuffled series lack any correlation except that introduced by covariations in the statistics with a timescale slower than 30 s. We generated L = 1000 independent shuffled series ${\left\{\widehat{{u}_{i}^{l}}\right\}}_{i=1}^{M}$ and ${\left\{\widehat{{d}_{i}^{l}}\right\}}_{i=1}^{M}$ with $l$ = 1, 2, ..., L, computed the covariance $Cov\left({u}_{i}^{l},{d}_{i+k}^{l}\right)$ for each, and averaged over the ensemble, $Cov\left({u}_{i},{d}_{i+k}\right)=<Cov\left({u}_{i}^{l},{d}_{i+k}^{l}\right){>}_{l}$. Finally, the correlation due to co-fluctuations of Us and Ds faster than 30 s was computed by subtracting $Cov\left({u}_{i},{d}_{i+k}\right)$ from the covariance $Cov\left({U}_{i},{D}_{i+k}\right)$ defined above. Significance of the correlation function $Corr\left({U}_{i},{D}_{i+k}\right)$ was assessed by computing a pointwise confidence interval from a distribution of L correlograms $Corr\left({u}_{i}^{l},{d}_{i+k}^{l}\right)$, for l = 1...L (L = 10000), computed from each shuffled series in the same way as for the original series (gray dashed bands in Figure 2C).
To take into account the multiple comparisons introduced by the range of lags k, we obtained global confidence intervals (black dashed bands in Figure 2C) by finding the pointwise interval probability P for which only a fraction of the correlograms $Corr\left({u}_{i}^{l},{d}_{i+k}^{l}\right)$ crossed the interval bands at any lag k = −7...7 (see Fujisawa et al., 2008 for details).
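The within-segment shuffling correction can be sketched as follows (a simplified sketch for lag k = 0; the function names, segment-assignment convention, and number of shuffles are ours):

```python
import random
import statistics

def cov(a, b):
    """Population covariance of paired samples."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return statistics.fmean((x - ma) * (y - mb) for x, y in zip(a, b))

def shuffle_corrected_corr(U, D, starts, seg=30.0, n_shuffle=200, seed=0):
    """Corr(U_i, D_i) with slow co-drifts removed: durations are shuffled
    within seg-second segments (each period assigned by its start time in
    `starts`), and the average shuffled covariance is subtracted."""
    rng = random.Random(seed)
    segments = {}
    for i, t in enumerate(starts):
        segments.setdefault(int(t // seg), []).append(i)

    def shuffled(x):
        out = list(x)
        for idx in segments.values():
            vals = [out[i] for i in idx]
            rng.shuffle(vals)
            for i, v in zip(idx, vals):
                out[i] = v
        return out

    slow = statistics.fmean(cov(shuffled(U), shuffled(D))
                            for _ in range(n_shuffle))
    return (cov(U, D) - slow) / (statistics.pstdev(U) * statistics.pstdev(D))
```

Shuffling within 30 s segments preserves only co-variations slower than the segment length, so subtracting the shuffled covariance isolates the fast, period-by-period correlation of interest.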
Gamma parameter estimates for the distributions of U and D durations were computed using the Matlab built-in function gamfit.
Spike count statistics
Request a detailed protocolWe divided the time in bins of dt = 1 ms and defined the spike train of the jth neuron as:
The spike count of the jth neuron over the time window (tT/2, t+T/2) was obtained from
where $*$ refers to a discrete convolution and $K\left(t\right)$ is a square kernel which equals one in (T/2,T/2) and zero otherwise.
The instantaneous rate of the jth neuron was defined as:
and therefore the instantaneous population rate was defined as:
where N is the total number of well isolated and simultaneously recorded neurons. We have dropped the dependence on $T$ from ${r}_{i}\left(t\right)$ and $R\left(t\right)$ to ease the notation. We also defined the instantaneous Epopulation and Ipopulations rates, ${R}^{E}\left(t\right)$ and ${R}^{I}\left(t\right)$ respectively, as those computed using cells in the E and I subpopulations separately.
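The spike-count and rate definitions above can be sketched as follows (illustrative code at dt = 1 ms; window edges are simply truncated here, which slightly underestimates rates at the very edges):

```python
def instantaneous_rate(spike_train, T_bins):
    """Instantaneous rate (spikes/s) from a 0/1 spike train at dt = 1 ms:
    spike count in a square window of T_bins bins, divided by the
    nominal window length T = T_bins * 1 ms."""
    half = T_bins // 2
    n = len(spike_train)
    rate = []
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half)
        count = sum(spike_train[lo:hi])
        rate.append(count * 1000.0 / T_bins)   # 1 ms bins -> spikes/s
    return rate

def population_rate(trains, T_bins):
    """Average of single-cell instantaneous rates across N neurons."""
    rates = [instantaneous_rate(tr, T_bins) for tr in trains]
    n = len(trains)
    return [sum(r[t] for r in rates) / n for t in range(len(trains[0]))]
```

The loop over window positions is exactly the discrete convolution with the square kernel K described in the text.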
Population firing statistics during Us and Ds
Request a detailed protocolThe instantaneous population rate averaged across Us and Ds and aligned at the D to U transition (DU) was defined as:
where $\tau $ is the time to the DU transition. Because Us had different durations, for each $\tau \text{}\text{}0$, the sum only included the onset time ${t}_{i}^{on}$ if the subsequent period was longer than $\tau \text{}\text{}{U}_{i}$. By doing this we remove the trivial decay we would observe in ${R}_{DU}\left(\tau \right)$ as $\tau $increases due to the increasing probability to transition into a consecutive period ${D}_{i+1}$. For $\tau \text{}\text{}0$, ${R}_{DU}\left(\tau \right)$ reflecting the population averaged rate during the Ds, is obtained as in Equation 10 but including the times ${t}_{i}^{on}$ in the sum if the previous D was longer than $\tau \text{}\text{}{D}_{i1}$. Similarly, the average population rate aligned at the offset ${R}_{UD}\left(\tau \right)$ was defined equivalently by replacing ${\left\{{t}_{i}^{on}\right\}}_{i=1}^{M}$ by the series of offset times ${\left\{{t}_{i}^{off}\right\}}_{i=1}^{M}$. We also defined the onset and offsetaligned averaged population rate for excitatory (E) and inhibitory (I) populations, termed ${R}_{DU}^{E}{}_{\text{}}\left(\tau \right)$ and ${R}_{UD}^{E}{}_{\text{}}\left(\tau \right)$ for the E case and similarly for the I case. Moreover, the onset and offsetaligned averaged rate of the ith neuron ${r}_{DU}^{i}\left(\tau \right)$ and ${r}_{UD}^{i}\left(\tau \right)$ were defined similarly using the individual rate defined in Equation 8.
The autocorrelogram of the instantaneous population rate was defined as:
with the sum in t running over the L time bins in a window of size W. The average $<R\left(t\right){>}_{t}$ and the variance were computed across time in the same window. To avoid averaging out rhythmic structure in the instantaneous population rate due to slow drifts in the oscillation frequency, we computed $AC\left(\tau \right)$ in small windows W = 20 s, thus obtaining a more instantaneous estimate of the temporal structure. With the normalization used, the autocorrelograms give $AC\left(\tau =0\right)=1$ and the values with $\tau > 0$ can be interpreted as the Pearson correlation between the population rate at time t and the population rate at time $t+\tau$ (Figure 1D).
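The windowed, Pearson-normalized autocorrelogram can be sketched as follows; function and variable names are our own, and we average the per-window estimates, assuming this matches the intent of computing AC in small windows:

```python
import numpy as np

def rate_autocorrelogram(R, dt, max_lag, window):
    """Normalized autocorrelogram of the population rate, computed in
    short non-overlapping windows (e.g. window = 20 s) so that slow
    drifts of the oscillation frequency do not average out rhythmic
    structure. Returns lags and the window-averaged AC, with AC(0) = 1."""
    n_win = int(window / dt)
    n_lag = int(max_lag / dt)
    acs = []
    for start in range(0, len(R) - n_win + 1, n_win):
        seg = R[start:start + n_win] - np.mean(R[start:start + n_win])
        var = np.mean(seg ** 2)
        if var == 0:
            continue
        # covariance at each lag, normalized by the window variance
        ac = [np.mean(seg[:n_win - l] * seg[l:]) / var for l in range(n_lag)]
        acs.append(ac)
    return np.arange(n_lag) * dt, np.mean(acs, axis=0)
```

For a strictly periodic rate the AC oscillates between +1 and −1 at the oscillation period, while for non-rhythmic UP-DOWN alternations it decays without side lobes.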
Instantaneous rates at onset and offset intervals
To compare the population rates at the U-onset and U-offset (Figures 3 and 6), we computed for each neuron the mean of ${r}_{DU}^{i}\left(\tau \right)$ over the window $\tau$ = (50, 200) ms (U-onset) and the mean of ${r}_{UD}^{i}\left(\tau \right)$ over the window $\tau$ = (−200, −50) ms (U-offset). We positioned the windows 50 ms away from the DU and UD transitions in order to preclude contamination of the mean rate estimates by possible misalignments in the detection of U and D periods. In the averaging we used U and D periods longer than 0.5 s, so that onset and offset windows were always non-overlapping. Equivalent D-onset and D-offset windows were defined in order to compare individual rates during D periods. To make the distribution of mean rates across the cell population Gaussian, we normalized each of the rates ${r}_{DU}^{i}\left(\tau \right)$ and ${r}_{UD}^{i}\left(\tau \right)$ by the overall time-averaged rate of the neuron, ${r}_{i}=<{r}_{i}\left(t\right){>}_{t}$, finally obtaining onset- and offset-aligned normalized averaged rates (e.g. ${r}_{DU}^{i}\left(\tau \right)/{r}_{i}$). Despite this normalization, the distribution of the normalized rates in the D-onset and D-offset was non-Gaussian (most neurons fired no spikes). Thus we used the non-parametric two-sided Wilcoxon signed rank test to compare onset and offset rates (Figure 3E). To test the rate changes during U periods in E and I neurons we used a four-way mixed-effects ANOVA with fixed factors onset/offset and E/I, and random factors neuron index and animal. We compared the distribution of normalized averaged rate differences at the U-onset minus the U-offset (Figure 3E right, dark gray histogram) with a distribution obtained from the same neurons but randomly shuffling the onset and offset labels of the spike counts while preserving trial and neuron indices (Figure 3E right, light gray bands show 95% C.I. of the mean histograms across 1000 shuffles).
This surrogate data set represents the null hypothesis that none of the neurons shows any onset vs. offset modulation. The comparison shows that there are significant fractions of neurons showing a rate decrease and a rate increase, which compensate to yield no significant difference in the population averaged rate. The same procedure was followed with the normalized rates in the D-onset and D-offset, but the limited number of non-zero spike counts made the analysis inconclusive (Figure 3E left).
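The label-shuffling surrogate can be sketched as below. This is a simplified per-neuron sketch with illustrative names; the paper's shuffle also preserves trial indices, which we collapse here for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def shuffle_onset_offset(onset_rates, offset_rates, n_shuffles=1000):
    """Surrogate distribution for the null hypothesis that no neuron is
    modulated between U-onset and U-offset: for each neuron and each
    shuffle, the onset/offset labels are swapped at random while neuron
    identity is preserved. Returns the shuffled onset-minus-offset
    differences, shape (n_shuffles, n_neurons)."""
    onset = np.asarray(onset_rates, float)
    offset = np.asarray(offset_rates, float)
    diffs = np.empty((n_shuffles, onset.size))
    for s in range(n_shuffles):
        swap = rng.random(onset.size) < 0.5   # per-neuron label swap
        diffs[s] = np.where(swap, offset - onset, onset - offset)
    return diffs
```

Comparing the observed histogram of onset-minus-offset differences against the band spanned by these shuffles reveals neurons whose modulation exceeds the null.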
Computational rate model
We built a model describing the rate dynamics of recurrently connected excitatory (${r}_{E}$) and inhibitory (${r}_{I}$) populations receiving external inputs (Wilson and Cowan, 1972). In addition, the excitatory population had an additive negative feedback term $a\left(t\right)$, representing the firing adaptation experienced by excitatory cells (McCormick et al., 1985). The model dynamics were given by:
The time constants of the rates were ${\tau}_{E}$ = 10 ms and ${\tau}_{I}$ = 2 ms, while the adaptation time constant was ${\tau}_{a}$ = 500 ms. The synaptic couplings ${J}_{XY}$ > 0 (with X, Y = E, I), describing the strength of the connections from population Y to population X, were ${J}_{EE}$ = 5, ${J}_{EI}$ = 1, ${J}_{IE}$ = 10 and ${J}_{II}$ = 0.5 s. Because we are modeling low rates, the adaptation grows linearly with ${r}_{E}$, with strength $\beta$ = 0.5 s. The fluctuating part of the external inputs, $\sigma {\xi}_{X}\left(t\right)$, was modeled as two independent Ornstein–Uhlenbeck processes with zero mean, standard deviation $\sigma$ = 3.5 and time constant 1 ms for the E and I populations. Because population averaged firing rates during spontaneous activity fell in the range 0–10 spikes/s, we modeled the transfer functions φ_{X} as threshold-linear functions:
where the square brackets denote ${\left[z\right]}_{+}=z$ if $z > 0$ and zero otherwise, the gains were ${g}_{E}$ = 1 Hz and ${g}_{I}$ = 4 Hz, and the effective thresholds ${\theta}_{E}$ and ${\theta}_{I}$ represented the difference between the activation threshold and the mean external current into each population. We took ${\theta}_{I}$ = 25 a.u. and explored varying ${\theta}_{E}$ over a range of positive and negative values (Figure 5A–B). The choice of thresholds ${\theta}_{E}$ < ${\theta}_{I}$ and gains ${g}_{E} < {g}_{I}$, reflecting the asymmetry between the f-I curves of regular spiking neurons (E) and fast spiking interneurons (I) (Cruikshank et al., 2007; Nowak et al., 2003; Schiff and Reyes, 2012), facilitated the operation of the model in a bistable regime (see below).
Input–output transfer functions are typically described as sigmoidal functions (Haider and McCormick, 2009), capturing the nonlinearities due to spike threshold and firing saturation effects. Since we are interested in modeling spontaneous activity, where average population rates are low, we constrained the transfer functions to exhibit only an expanding nonlinearity reflecting the threshold, thus avoiding effects that can only occur at higher rates (the contracting nonlinearity tends to occur for rates >30 spikes/s; Anderson et al., 2000; Houweling et al., 2010; Nowak et al., 2003; Priebe and Ferster, 2008). In particular, we modeled φ_{X} as piecewise linear functions (Schiff and Reyes, 2012; Stafstrom et al., 1984), but the same qualitative bistable regime can be obtained by choosing, for instance, a threshold-quadratic function. The model equations (Equations 12–14) were numerically integrated using a fourth-order Runge–Kutta method with integration time step dt = 0.2 ms. U and D periods in the model were detected by a threshold-based method, finding the crossings of the variable r_{E} with the boundary 1 Hz, where periods shorter than a minimum duration of 50 ms were merged with neighboring periods (small changes in threshold and minimum duration did not qualitatively affect the results). The computational rate model was implemented in Matlab (MathWorks) using C++ MEX, and the source code is available at ModelDB (https://senselab.med.yale.edu/ModelDB, Jercog, 2017).
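A minimal Python sketch of Equations 12–14 with the parameter values quoted above may clarify the model structure. Euler integration (instead of the paper's fourth-order Runge–Kutta) and the choice θ_E = 2, inside the bistable range, are our own simplifications:

```python
import numpy as np

# Rate model sketch: E-I populations with adaptation on E and
# Ornstein-Uhlenbeck input fluctuations (Euler integration for brevity).
tauE, tauI, taua = 0.010, 0.002, 0.500        # time constants (s)
JEE, JEI, JIE, JII = 5.0, 1.0, 10.0, 0.5      # coupling strengths
beta, gE, gI = 0.5, 1.0, 4.0                  # adaptation strength, gains
thE, thI = 2.0, 25.0                          # thE chosen in bistable range
sigma, tau_ou = 3.5, 0.001                    # OU noise std and time constant
dt, T = 2e-4, 5.0
rng = np.random.default_rng(1)

def phi(z, g):                                # threshold-linear transfer
    return g * np.maximum(z, 0.0)

rE = rI = a = xiE = xiI = 0.0
rE_trace = np.empty(int(T / dt))
for n in range(rE_trace.size):
    # OU fluctuations of the external input (stationary std ~ sigma)
    xiE += dt / tau_ou * (-xiE) + sigma * np.sqrt(2 * dt / tau_ou) * rng.standard_normal()
    xiI += dt / tau_ou * (-xiI) + sigma * np.sqrt(2 * dt / tau_ou) * rng.standard_normal()
    dE = (-rE + phi(JEE * rE - JEI * rI - a - thE + xiE, gE)) / tauE
    dI = (-rI + phi(JIE * rE - JII * rI - thI + xiI, gI)) / tauI
    da = (-a + beta * rE) / taua
    rE, rI, a = rE + dt * dE, rI + dt * dI, a + dt * da
    rE_trace[n] = rE
```

Detecting crossings of `rE_trace` with the 1 Hz boundary, as described above, then segments the trace into U and D periods.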
Fixed points and stability
We start by characterizing the dynamics of the system in the absence of noise. Assuming that the rates evolve much faster than the adaptation, that is, ${\tau}_{E},{\tau}_{I}\ll {\tau}_{a}$, one can partition the dynamics of the full system into (1) the dynamics of the rates assuming adaptation is constant, and (2) the slow evolution of adaptation assuming the rates are at equilibrium. The nullclines of the 2D rate dynamics at fixed $a$ are obtained by setting the left-hand sides of Equations 12–13 to zero:
The intersections of the nullclines define the fixed points $({r}_{E}^{*}\left(a\right),{r}_{I}^{*}\left(a\right))$ of the 2D system, to which the rates evolve. Once there, adaptation varies slowly, with the rates maintained at $({r}_{E}^{*}\left(a\right),{r}_{I}^{*}\left(a\right))$, until it reaches its equilibrium at $a=\beta {r}_{E}^{*}\left(a\right)$.
The network has a fixed point at $({r}_{E},{r}_{I},a)=(0,0,0)$ if and only if θ_{E} $\ge$ 0 and θ_{I} $\ge$ 0, that is, when the mean external inputs are lower than the activation thresholds. The stability of this point, corresponding to the DOWN state, further requires θ_{E} > 0, thus preventing the activation of the network by small (infinitesimal) fluctuations in r_{E}. To find an UP state fixed point with nonzero rates we substitute in Equations 16–17 the value of adaptation at equilibrium, $a=\beta {r}_{E}$, assume the arguments of []_{+} are larger than zero and solve for (r_{E}, r_{I}), obtaining:
where $M={J}_{EI}{J}_{IE}-\left({J}_{EE}^{\prime}-\beta \right){J}_{II}^{\prime}$, ${J}_{EE}^{\prime}={J}_{EE}-\frac{1}{{g}_{E}}$ and ${J}_{II}^{\prime}={J}_{II}+\frac{1}{{g}_{I}}$.
The conditions for this UP state solution to exist are derived by imposing that the right-hand sides of Equations 18–19 are positive. The stability of this solution (Equation 21 below) implies that the determinant $M$ is positive, and that if ${r}_{I}$ is positive then ${r}_{E}$ is also positive. Thus, provided stability holds (Equations 21–22), the only condition for the solution to exist is that the right-hand side of Equation 19 is positive:
Given the separation of time scales described above, this fixed point is stable if the eigenvalues of the matrix of coefficients of Equations 16 and 17, without the term a (which we assume is constant), all have negative real part. Because the coefficient matrix is 2 × 2, this is equivalent to imposing that the matrix has a positive determinant and a negative trace. These conditions yield the following inequalities, respectively:
Equation 21 is equivalent to the condition that the I-nullcline of the 2D reduced system has a larger slope than the E-nullcline. From the U existence condition in Equation 20 and the D stability condition, it can also be derived that ${J}_{EE}^{\prime} > 0$, implying that at fixed inhibition the E subnetwork would be unstable (i.e. the slope of the E-nullcline is positive). In sum, the conditions for the existence of two stable U and D states imply that the U state would be unstable in the absence of feedback inhibition, but that feedback inhibition is strong enough to stabilize it. These are precisely the conditions that define an inhibition-stabilized network (Ozeki et al., 2009).
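The fixed-point formulas (Equations 18–19) and the stability conditions can be checked numerically for the quoted parameters; θ_E = 5 is our illustrative choice satisfying the existence condition of Equation 20:

```python
import numpy as np

# Check of the U fixed point and its stability for the quoted parameters.
JEE, JEI, JIE, JII = 5.0, 1.0, 10.0, 0.5
gE, gI, beta = 1.0, 4.0, 0.5
thE, thI = 5.0, 25.0           # thE = 5 is an illustrative choice
tauE, tauI = 0.010, 0.002

JEEp = JEE - 1.0 / gE          # J'_EE
JIIp = JII + 1.0 / gI          # J'_II
M = JEI * JIE - (JEEp - beta) * JIIp

# Closed-form UP rates (Cramer's rule on the linearized nullclines)
rE = (JEI * thI - JIIp * thE) / M
rI = ((JEEp - beta) * thI - JIE * thE) / M

# The rates must solve the nullcline equations with a = beta * rE
assert abs(rE - gE * max(JEE * rE - JEI * rI - beta * rE - thE, 0.0)) < 1e-9
assert abs(rI - gI * max(JIE * rE - JII * rI - thI, 0.0)) < 1e-9

# Stability of the fast 2D subsystem (adaptation frozen): Jacobian of
# the rate equations in the linear regime of the transfer functions.
J = np.array([[(gE * JEE - 1.0) / tauE, -gE * JEI / tauE],
              [gI * JIE / tauI, -(gI * JII + 1.0) / tauI]])
eig = np.linalg.eigvals(J)
```

With these numbers M > 0, both rates are positive, the eigenvalues have negative real part (stable U state), and gE*JEE > 1, so the E subnetwork alone would be unstable: the ISN condition.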
Phase plane analysis
In this section we determine the different operational regimes of the network in the (${\theta}_{E}$, β)-plane (Figure 5A). In the absence of noise, given that θ_{I} $\ge$ 0, a stable D state exists in the semiplane (Figure 5A, purple and red regions):
Provided that our choice of synaptic couplings ${J}_{XY}$ and time constants satisfies the stability conditions (Equations 21–22), the U state is stable in the semiplane given by Equation 20 (Figure 5A, orange and red regions):
In the intersection of these two semiplanes, both D and U are stable (bistable region, Figure 5A red). In contrast, in the region complementary to the two semiplanes, neither U nor D is stable (Figure 5A white region). There, a rhythmic concatenation of relatively long U and D periods is observed, in which the network stays transiently in each state until adaptation triggers a transition (see e.g. Figure 4E). Because of the separation of timescales, we refer to states that are stable with respect to the rate dynamics but not to the adaptation dynamics as quasi-stable.
The addition of noise renders some of the stable solutions metastable, meaning that the network can switch to a different state through the action of the noise (i.e. the external fluctuations in our model). This is the case in the bistable region (Figure 5A red), where fluctuations generate stochastic transitions between the two metastable U and D states (Figure 4D). In the region of D stability, ${\theta}_{E} > 0$, we find a new subregion with noise-driven transitions from a metastable D state to a quasi-stable U state, and back to D by the action of adaptation (Figure 5A light purple). This subregion is delimited by the condition that U is not stable (i.e. Equation 24 does not hold) solely because of the existence of adaptation; that is, Equation 24 holds if β = 0:
Equivalently, within the region of U stability, noise creates a new subregion with noise-driven transitions from a metastable U state to a quasi-stable D state, and back to U by the recovery from adaptation (Figure 5A light orange). This subregion is given by the condition that there is a negative effective threshold, ${\theta}_{E} < 0$ (i.e. caused by a suprathreshold mean external drive), but the adaptation ${a}^{U}$ recruited in the U state is sufficient to counterbalance it: ${a}^{U}+{\theta}_{E} > 0$. This makes the D state transiently stable until adaptation decays back to zero. Substituting ${a}^{U}=\beta {r}_{E}^{U}$ (Equation 14) and ${r}_{E}^{U}$ by the equilibrium rate at the U state given by Equation 18, the limit of this subregion can be expressed as (Figure 5A, light orange region):
Spiking network simulations
We used a network model of leaky integrate-and-fire neurons (Ricciardi, 1977), with ${N}_{E}$ = 4000 excitatory and ${N}_{I}$ = 1000 inhibitory neurons connected all-to-all. The membrane potential of a neuron $i$ from population $E$ or $I$ obeys
Whenever the membrane voltage of a neuron in population $X=\{E,I\}$ exceeds the threshold ${\theta}_{X}$ at time $t$, a spike is emitted and the membrane voltage is reset to ${V}_{r}^{X}$; that is, whenever ${V}_{i}^{X}\left({t}^{-}\right)\ge {\theta}_{X}$ then ${V}_{i}^{X}\left({t}^{+}\right)={V}_{r}^{X}$. We used ${V}_{r}^{E}$ = −51 mV, ${V}_{r}^{I}$ = −49.9 mV and a leak potential of ${V}_{L}$ = −57.4 mV. The thresholds were ${\theta}_{E}$ = −45 mV and ${\theta}_{I}$ = −43.9 mV, and we used no spike refractory time. The membrane time constants were ${\tau}_{E}$ = 20 ms and ${\tau}_{I}$ = 10 ms. The external input current ${I}_{ext,i}^{X}\left(t\right)=\sigma \sqrt{{\tau}_{X}}{\eta}_{i}\left(t\right)+{p}_{i}{I}_{kicks}\left(t\right)$ is composed of: (1) a Gaussian white noise term with std. dev. $\sigma$ = 2.5 mV, independent from neuron to neuron, that is, $<{\eta}_{i}\left(t\right){\eta}_{j}\left({t}^{\prime}\right)>={\delta}_{ij}\delta \left(t-{t}^{\prime}\right)$, which was necessary to generate uncorrelated firing across neurons given the all-to-all connectivity; and (2) a separate source of randomly occurring input pulses, also called ‘kicks’, impinging coherently on 10% of both E and I neurons in the network (p_{i} is a binary random variable with probability p = 0.1):
with K being the pulse amplitude (220 mV during D and 110 mV during U for E cells; 88 mV during D and 44 mV during U for I cells), ${\tau}_{k}$ the rise time of the pulses (0.5 ms) and ${\chi}_{\Delta}\left(t\right)$ the step function defined as 1 in the interval (0, Δ) and zero otherwise. We used duration Δ = 2 ms, with the amplitude K causing a depolarization of 16.1 mV (6.44 mV) during D (U) periods in kicked E neurons, and 12.4 mV (4.9 mV) during D (U) periods in kicked I neurons. These kicks were necessary to generate the synchronous bumps in the membrane potential that yield transitions to the UP state during DOWN periods in the absence of background activity (see Discussion). The amplitude of these events was kept constant for the sake of simplicity.
The recurrent input term consisted of inhibitory and excitatory synaptic currents, that is, ${I}_{rec}^{X}\left(t\right)={I}_{rec}^{XE}\left(t\right)+{I}_{rec}^{XI}\left(t\right)$, where ${I}_{rec}^{XY}={J}_{XY}{s}_{Y}\left(t\right)$ and ${J}_{XY}$ is the synaptic strength from neurons in population $Y$ to neurons in population $X$. The synaptic variable ${s}_{X}$ obeyed the following differential equation
where the summation runs over all spikes emitted by all neurons in population $X$ (all-to-all connectivity) and the factor $\overline{\tau}=1$ ms ensures that the area under the unitary synaptic event is constant regardless of the rise and decay time constants. The synaptic couplings were ${J}_{EE}$ = 1.4, ${J}_{EI}$ = −0.35, ${J}_{IE}$ = 5 and ${J}_{II}$ = −1 mV, and the rise $\left({\tau}_{r}^{X}\right)$ and decay $\left({\tau}_{d}^{X}\right)$ times of inhibitory synapses were both 1 ms, while those of excitatory synapses were 8 and 23 ms, respectively. These synaptic kinetics were chosen in order to reduce the magnitude of the fast oscillations during UPs. The delays ${d}_{j}^{X}$ were the same for all postsynaptic synapses belonging to the same neuron and uniformly distributed between 0 and 1 ms (0 and 0.5 ms) across E (I) neurons.
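The role of the normalization factor can be verified numerically. This sketch (our own, with illustrative names) injects a single spike into a double-exponential synaptic filter and checks that the area under the unitary event equals the factor $\overline{\tau}$, independent of the rise and decay time constants:

```python
# Double-exponential synapse: a delta input is low-pass filtered twice
# (rise stage x, decay stage s); the injected weight tau_bar / tau_r
# makes the integral of s(t) equal tau_bar for any tau_r, tau_d.
def unitary_synaptic_area(tau_r, tau_d, tau_bar=1.0, dt=0.01, T=500.0):
    """All times in ms; returns the integral of s(t) for one spike."""
    s = 0.0
    x = tau_bar / tau_r          # delta input filtered by the rise stage
    area = 0.0
    for _ in range(int(T / dt)):
        s += dt * (-s + x) / tau_d
        x += dt * (-x) / tau_r
        area += s * dt
    return area
```

Both the excitatory kinetics quoted above (rise 8 ms, decay 23 ms) and the 1 ms inhibitory kinetics give unit area, so changing the synaptic time course reshapes but does not rescale the total charge per spike.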
In addition, the excitatory neurons displayed an after-hyperpolarization (AHP) current ${I}_{a}$ that follows
with a slow adaptation time constant ${\tau}_{a}$ = 500 ms and adaptation strength $\beta $ = 300 mVms.
The population averaged AHP current is defined as
The E and I nullclines of the population averaged rates ${r}_{E}$ and ${r}_{I}$, respectively, are obtained from the equilibrium firing rate ${r}_{0X}$, which is given by the self-consistent mean-field equation (Ricciardi, 1977; Amit and Brunel, 1997)
where the average input current to a neuron in population $X$ is given by
and the standard deviation of the current is given by the std. dev. $\sigma$ of the external white noise. For this analysis the AHP current I_{a} was assumed to be constant at the averaged values observed either at the UP onset or at the UP offset (see dark and light red E-nullclines in Figure 7A, respectively). The bifurcation diagrams (Figure 7B and Figure 7—figure supplement 2B) were obtained by solving for the stationary states of the network from the Fokker–Planck equation describing the population dynamics and determining their stability using linear perturbation analysis (Brunel and Hakim, 1999; Richardson, 2007; Roxin and Compte, 2016).
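A numpy-only sketch of the Ricciardi transfer function may be useful here. The voltage parameters reuse the E-cell values quoted above; treating the mean input `mu` as a free argument and omitting the refractory period (as in the text) are our simplifications:

```python
import math
import numpy as np

def ricciardi_rate(mu, sigma, tau_m=0.020, v_r=-0.051, theta=-0.045):
    """Equilibrium firing rate (spikes/s) of a leaky integrate-and-fire
    neuron with mean input mu and white-noise std sigma (both in volts),
    no refractory period (Ricciardi, 1977):
    1/r = tau_m * sqrt(pi) * int_{(v_r-mu)/sigma}^{(theta-mu)/sigma}
          exp(u^2) * (1 + erf(u)) du."""
    u = np.linspace((v_r - mu) / sigma, (theta - mu) / sigma, 20001)
    f = np.exp(u ** 2) * (1.0 + np.vectorize(math.erf)(u))
    du = u[1] - u[0]
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * du   # trapezoid rule
    return 1.0 / (tau_m * math.sqrt(math.pi) * integral)
```

Sweeping the mean input of each population through the recurrent couplings and solving the resulting self-consistency equation for the rate yields the E and I nullclines of Figure 7A.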
Analysis of spiking network simulations
The spiking network model was implemented in C++, and the code is available at ModelDB (https://senselab.med.yale.edu/ModelDB, Jercog, 2017). Model equations were numerically integrated using a second-order Runge–Kutta method with integration time step dt = 0.05 ms; the total simulation length was 5000 s (50 simulations of 100 s each, where the first detected D and U periods of each simulation were discarded to eliminate possible initial transient effects). For the analysis of U and D statistics and the rate dynamics in the spiking network simulations we used methods analogous to those applied to the experimental data. In addition, U and D period detection was obtained by applying the HMM (Chen et al., 2009) to 100 randomly selected neurons, using the same parameters as for the experimental data. Onset- and offset-aligned population rates ${r}_{E}$ and ${r}_{I}$ (Figure 7J) were computed using 90 E and 10 I randomly sampled neurons, respectively, with a minimum U/D period duration of 0.8 s.
In order to study the statistics of UP and DOWN dynamics in the spiking network when transitions are caused by noise that is independent across cells (Figure 7—figure supplement 2), external kicks were not included and E cells were depolarized by 5.6 mV to keep their voltage just below the spike threshold. To keep the UP firing rate stabilized at low values, I cells were depolarized by 6 mV and the time constants of excitatory synapses were set to ${\tau}_{r}^{E}$ = 2 ms and ${\tau}_{d}^{E}$ = 3 ms. In addition, the adaptation strength was set to $\beta$ = 200 mVms, and the HMM detection parameters were chosen as α = 2 and μ = −1. The rest of the parameters are the same as those used in Figure 7.
References

Conditional modeling and the jitter method of spike resampling. Journal of Neurophysiology 107:517–531. https://doi.org/10.1152/jn.00633.2011
Quantitative study of attractor neural network retrieving at low spike rates: I. substrate—spikes, rates and neuronal gain. Network: Computation in Neural Systems 2:259–273. https://doi.org/10.1088/0954-898X_2_3_003
Irregular persistent activity induced by synaptic excitatory feedback. Frontiers in Computational Neuroscience 1:5. https://doi.org/10.3389/neuro.10.005.2007
Hippocampal sharp wave bursts coincide with neocortical "up-state" transitions. Learning & Memory 11:697–704. https://doi.org/10.1101/lm.73504
Model of thalamocortical slow-wave sleep oscillations and transitions to activated states. Journal of Neuroscience 22:8691–8704.
Synaptic depression and slow oscillatory activity in a biophysical network model of the cerebral cortex. Frontiers in Computational Neuroscience 6:64. https://doi.org/10.3389/fncom.2012.00064
Firing frequency of leaky integrate-and-fire neurons with synaptic current dynamics. Journal of Theoretical Biology 195:87–95. https://doi.org/10.1006/jtbi.1998.0782
Persistent activity and the single-cell frequency-current curve in a cortical network model. Network: Computation in Neural Systems 11:261–280. https://doi.org/10.1088/0954-898X_11_4_302
Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8:183–208.
Properties of slow oscillation during slow-wave sleep and anesthesia in cats. Journal of Neuroscience 31:14998–15008. https://doi.org/10.1523/JNEUROSCI.2339-11.2011
Interneuron-mediated inhibition synchronizes neuronal activity during slow oscillation. The Journal of Physiology 590:3987–4010. https://doi.org/10.1113/jphysiol.2012.227462
Temporally irregular mnemonic persistent activity in prefrontal neurons of monkeys during a delayed response task. Journal of Neurophysiology 90:3441–3454. https://doi.org/10.1152/jn.00949.2002
Cellular and network mechanisms of slow oscillatory activity (<1 Hz) and wave propagations in a cortical network model. Journal of Neurophysiology 89:2707–2725. https://doi.org/10.1152/jn.00845.2002
Spontaneous high-frequency (10–80 Hz) oscillations during up states in the cerebral cortex in vitro. Journal of Neuroscience 28:13828–13844. https://doi.org/10.1523/JNEUROSCI.2684-08.2008
Cellular basis of EEG slow rhythms: a study of dynamic corticothalamic relationships. Journal of Neuroscience 15:604–622.
Mechanisms of long-lasting hyperpolarizations underlying slow sleep oscillations in cat corticothalamic networks. The Journal of Physiology 494 (Pt 1):251–264. https://doi.org/10.1113/jphysiol.1996.sp021488
Spontaneous firing patterns and axonal projections of single corticostriatal neurons in the rat medial agranular cortex. Journal of Neurophysiology 71:17–32.
The thalamocortical network as a single slow wave-generating unit. Current Opinion in Neurobiology 31:72–80. https://doi.org/10.1016/j.conb.2014.09.001
Synaptic dynamics and neuronal network connectivity are reflected in the distribution of times in Up states. Frontiers in Computational Neuroscience 9:96. https://doi.org/10.3389/fncom.2015.00096
Essential thalamic contribution to slow waves of natural sleep. Journal of Neuroscience 33:19599–19610. https://doi.org/10.1523/JNEUROSCI.3169-13.2013
Effective reduced diffusion-models: a data driven approach to the analysis of neuronal dynamics. PLoS Computational Biology 5:e1000587. https://doi.org/10.1371/journal.pcbi.1000587
Non-Gaussian membrane potential dynamics imply sparse, synchronous activity in auditory cortex. Journal of Neuroscience 26:12206–12218. https://doi.org/10.1523/JNEUROSCI.2813-06.2006
Somatosensory cortical neuronal population activity across states of anaesthesia. European Journal of Neuroscience 15:744–752. https://doi.org/10.1046/j.0953-816x.2002.01898.x
Behavior-dependent short-term assembly dynamics in the medial prefrontal cortex. Nature Neuroscience 11:823–833. https://doi.org/10.1038/nn.2134
Synaptic input correlations leading to membrane potential decorrelation of spontaneous activity in cortex. Journal of Neuroscience 33:15075–15085. https://doi.org/10.1523/JNEUROSCI.0347-13.2013
Short-term plasticity explains irregular persistent activity in working memory tasks. Journal of Neuroscience 33:133–149. https://doi.org/10.1523/JNEUROSCI.3455-12.2013
Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. Journal of Neurophysiology 84:401–414.
Cortical state and attention. Nature Reviews Neuroscience 12:509–523. https://doi.org/10.1038/nrn3084
State changes rapidly modulate cortical neuronal responsiveness. Journal of Neuroscience 27:9607–9622. https://doi.org/10.1523/JNEUROSCI.2184-07.2007
Modeling sleep and wakefulness in the thalamocortical system. Journal of Neurophysiology 93:1671–1698. https://doi.org/10.1152/jn.00915.2004
The emergence of Up and Down states in cortical networks. PLoS Computational Biology 2:e23. https://doi.org/10.1371/journal.pcbi.0020023
Comparison of discharge variability in vitro and in vivo in cat visual cortex neurons. Journal of Neurophysiology 75:1806–1814.
Nanostimulation: manipulation of single neuron activity by juxtacellular current injection. Journal of Neurophysiology 103:1696–1704. https://doi.org/10.1152/jn.00421.2009
Coordinated memory replay in the visual cortex and hippocampus during sleep. Nature Neuroscience 10:100–107. https://doi.org/10.1038/nn1825
Imaging input and output of neocortical networks in vivo. Proceedings of the National Academy of Sciences 102:14063–14068. https://doi.org/10.1073/pnas.0506029102
The high-conductance state of cortical networks. Neural Computation 20:1–43. https://doi.org/10.1162/neco.2008.20.1.1
Minimal models of adapted neuronal response to in vivo-like input currents. Neural Computation 16:2101–2124. https://doi.org/10.1162/0899766041732468
Intrinsic dynamics in neuronal networks. I. Theory. Journal of Neurophysiology 83:808–827.
Persistently active, pacemaker-like neurons in neocortex. Frontiers in Neuroscience 1:123–129. https://doi.org/10.3389/neuro.01.1.1.009.2007
The impact of cortical deafferentation on the neocortical slow oscillation. Journal of Neuroscience 34:5689–5703. https://doi.org/10.1523/JNEUROSCI.1156-13.2014
Noise-induced transitions in slow wave neuronal dynamics. Journal of Computational Neuroscience 28:1–17. https://doi.org/10.1007/s10827-009-0178-y
Consistent sequential activity across diverse forms of UP states under ketamine anesthesia. European Journal of Neuroscience 36:2830–2838. https://doi.org/10.1111/j.1460-9568.2012.08201.x
Packet-based communication in the cortex. Nature Reviews Neuroscience 16:745–755. https://doi.org/10.1038/nrn4026
Exploring the spectrum of dynamical regimes and timescales in spontaneous cortical activity. Cognitive Neurodynamics 6:239–250. https://doi.org/10.1007/s11571-011-9179-4
Comparative electrophysiology of pyramidal and sparsely spiny stellate neurons of the neocortex. Journal of Neurophysiology 54:782–806.
Noise-induced alternations in an attractor network model of perceptual bistability. Journal of Neurophysiology 98:1125–1139. https://doi.org/10.1152/jn.00116.2007
Electrophysiological classes of cat primary visual cortical neurons in vivo as revealed by quantitative analyses. Journal of Neurophysiology 89:1541–1566. https://doi.org/10.1152/jn.00580.2002
Replay of rule-learning related neural patterns in the prefrontal cortex during sleep. Nature Neuroscience 12:919–926. https://doi.org/10.1038/nn.2337
Dissection of a model for neuronal parabolic bursting. Journal of Mathematical Biology 25:653–675. https://doi.org/10.1007/BF00275501
The spatial structure of correlated neuronal variability. Nature Neuroscience 20:107–114. https://doi.org/10.1038/nn.4433
Oscillations in the bistable regime of neuronal networks. Physical Review E 94:012410. https://doi.org/10.1103/PhysRevE.94.012410
Slow and fast rhythms generated in the cerebral cortex of the anesthetized mouse. Journal of Neurophysiology 106:2910–2921. https://doi.org/10.1152/jn.00440.2011
Membrane potential correlates of sensory perception in mouse barrel cortex. Nature Neuroscience 16:1671–1677. https://doi.org/10.1038/nn.3532
Methods for predicting cortical UP and DOWN states from the phase of deep layer local field potentials. Journal of Computational Neuroscience 29:49–62. https://doi.org/10.1007/s10827-010-0228-5
Cellular and network mechanisms of rhythmic recurrent activity in neocortex. Nature Neuroscience 3:1027–1034. https://doi.org/10.1038/79848
Repetitive firing in layer V neurons from cat neocortex in vitro. Journal of Neurophysiology 52:264–277.
Synchronized sleep oscillations and their paroxysmal developments. Trends in Neurosciences 17:201–207. https://doi.org/10.1016/0166-2236(94)90105-8
A novel slow (<1 Hz) oscillation of neocortical neurons in vivo: depolarizing and hyperpolarizing components. Journal of Neuroscience 13:3252–3265.
Intracellular analysis of relations between the slow (<1 Hz) neocortical oscillation and other sleep rhythms of the electroencephalogram. Journal of Neuroscience 13:3266–3283.
Natural waking and sleep states: a view from inside neocortical neurons. Journal of Neurophysiology 85:1969–1985.
Spontaneous subthreshold membrane potential fluctuations and action potential variability of rat corticostriatal and striatal neurons in vivo. Journal of Neurophysiology 77:1697–1715.
Quantifying the relative contributions of divisive and subtractive feedback to rhythm generation. PLoS Computational Biology 7:e1001124. https://doi.org/10.1371/journal.pcbi.1001124
Modeling of spontaneous activity in developing spinal cord using activity-dependent depression in an excitatory network. Journal of Neuroscience 20:3041–3056.
Cortical balance of excitation and inhibition is regulated by the rate of synaptic activity. Journal of Neuroscience 33:14359–14368. https://doi.org/10.1523/JNEUROSCI.1748-13.2013
Origin of slow cortical oscillations in deafferented cortical slabs. Cerebral Cortex 10:1185–1199. https://doi.org/10.1093/cercor/10.12.1185
Synchrony generation in recurrent networks with frequency-dependent synapses. Journal of Neuroscience 20:RC50.
Paradoxical effects of external modulation of inhibitory interneurons. Journal of Neuroscience 17:4382–4388.
Chaotic balanced state in a model of cortical circuits. Neural Computation 10:1321–1371. https://doi.org/10.1162/089976698300017214
Signal propagation and logic gating in networks of integrate-and-fire neurons. Journal of Neuroscience 25:10786–10795. https://doi.org/10.1523/JNEUROSCI.3508-05.2005
Calcium coding and adaptive temporal computation in cortical pyramidal neurons. Journal of Neurophysiology 79:1549–1566.
Synaptic reverberation underlying mnemonic persistent activity. Trends in Neurosciences 24:455–463. https://doi.org/10.1016/S0166-2236(00)01868-3
Background synaptic activity is sparse in neocortex. Journal of Neuroscience 26:8267–8277. https://doi.org/10.1523/JNEUROSCI.2152-06.2006
Article and author information
Author details
Funding
Ministerio de Economía y Competitividad (RYC-2011-08755)
 Alex Roxin
European Regional Development Fund (RYC-2011-08755)
 Alex Roxin
EU BIOTRACK contract (PCOFUND-GA-2008-229673)
 Alex Roxin
Hungarian Brain Research Program Grant (KTIA_NAP_13-2-2014-0016)
 Peter Barthó
Ministerio de Economía y Competitividad (BFU2009-09537)
 Albert Compte
European Regional Development Fund (BFU2009-09537)
 Albert Compte
Ministerio de Economía y Competitividad (BFU2012-34838)
 Albert Compte
European Regional Development Fund (BFU2012-34838)
 Albert Compte
Agència de Gestió d’Ajuts Universitaris i de Recerca (SGR14-1265)
 Albert Compte
Ministerio de Economía y Competitividad (SAF2010-15730)
 Jaime de la Rocha
European Regional Development Fund (SAF2010-15730)
 Jaime de la Rocha
Ministerio de Economía y Competitividad (RYC-2009-04829)
 Jaime de la Rocha
European Regional Development Fund (RYC-2009-04829)
 Jaime de la Rocha
Ministerio de Economía y Competitividad (SAF2013-46717-R)
 Jaime de la Rocha
European Regional Development Fund (SAF2013-46717-R)
 Jaime de la Rocha
European Commission (EU Marie Curie grant PIRG07-GA-2010-268382)
 Jaime de la Rocha
Ministerio de Economía y Competitividad (SAF2015-70324-R)
 Jaime de la Rocha
All Ministerio de Economía y Competitividad grants were co-funded by the European Regional Development Fund.
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Zhe Chen for sharing the HMM-based UP-DOWN detection scripts, Narani van Laarhoven for sharing code, and Kenneth Harris, Gyuri Buzsáki and the members of the Compte and de la Rocha labs for helpful discussions. This work was supported by the AGAUR of the Generalitat de Catalunya (Ref: SGR14-1265), the Spanish Ministry of Economy and Competitiveness together with the European Regional Development Fund (grants BFU2009-09537 and BFU2012-34838 to AC, RYC-2011-08755 to AR, SAF2010-15730, SAF2013-46717-R and RYC-2009-04829 to JR), the EU (Marie Curie grant PIRG07-GA-2010-268382 to JR and BIOTRACK contract PCOFUND-GA-2008-229673 to AR), and the Hungarian Brain Research Program (grant KTIA_NAP_13-2-2014-0016 to PB). Part of the work was carried out at the Esther Koplowitz Centre, Barcelona.
Ethics
Animal experimentation: This study involved analysis of previously published and new data. The previously published data (Barthó et al., J Neurophysiol. 2004, 92(1)) were obtained under a protocol approved by the Rutgers University Animal Care and Use Committee. One new data set was acquired in accordance with a protocol approved by the Animal Welfare Committee at the University of Lethbridge (protocol #0907). All surgeries were performed under anesthesia, and every effort was made to minimize suffering.
Copyright
© 2017, Jercog et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics

 6,279 views
 923 downloads
 112 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.