Neural criticality from effective latent variables
Abstract
Observations of power laws in neural activity data have raised the intriguing notion that brains may operate in a critical state. One example of this critical state is ‘avalanche criticality’, which has been observed in various systems, including cultured neurons, zebrafish, rodent cortex, and human EEG. More recently, power laws were also observed in neural populations in the mouse under an activity coarse-graining procedure, and they were explained as a consequence of the neural activity being coupled to multiple latent dynamical variables. An intriguing possibility is that avalanche criticality emerges due to a similar mechanism. Here, we determine the conditions under which latent dynamical variables give rise to avalanche criticality. We find that populations coupled to multiple latent variables produce critical behavior across a broader parameter range than those coupled to a single, quasi-static latent variable, but in both cases, avalanche criticality is observed without fine-tuning of model parameters. We identify two regimes of avalanches, both critical but differing in the amount of information carried about the latent variable. Our results suggest that avalanche criticality arises in neural systems in which activity is effectively modeled as a population driven by a few dynamical variables and these variables can be inferred from the population activity.
eLife assessment
This paper provides a simple example of a neurallike system that displays criticality, but not for any deep reason; it's just because a population of neurons are driven (independently!) by a slowly varying latent variable, something that is common in the brain. Moreover, criticality does not imply optimal information transmission (one of its proposed functions). The work is likely to have an important impact on the study of criticality in neural systems and is convincingly supported by the experiments presented.
https://doi.org/10.7554/eLife.89337.3.sa0
Introduction
The neural criticality hypothesis – the idea that neural systems operate close to a phase transition, perhaps for optimal information processing – is both ambitious and banal. Measurements from biological systems are limited in the range of spatial and temporal scales that can be sampled, not only because of the limitations of recording techniques but also due to the fundamentally non-stationary behavior of most, if not all, biological systems. These limitations make proving that an observation indicates critical behavior difficult. At the same time, the idea that brain networks are critical echoes the anthropic principle: tuned another way, a network becomes quiescent or epileptic, and in either state seems unlikely to support perception, thought, or flexible behavior; yet these observations do not explain how such fine-tuning could be achieved. Further muddying the water, researchers have reported multiple kinds of criticality in neural networks, including through analysis of avalanches (Beggs and Plenz, 2003; Plenz et al., 2021; O’Byrne and Jerbi, 2022; Girardi-Schappo, 2021) and of coarse-grained activity (Meshulam et al., 2019), as well as of correlations (Dahmen et al., 2019). How these flavors of critical behavior relate to each other or to any functional network mechanism is unknown.
The phenomenon that we will refer to as ‘avalanche criticality’ appears remarkably widespread. It was first observed in cultured neurons (Beggs and Plenz, 2003) and later studied in zebrafish (Ponce-Alvarez et al., 2018), turtles (Shew et al., 2015), rodents (Ma et al., 2019), monkeys (Petermann et al., 2009), and even humans (Poil et al., 2008). The standard analysis, described later, requires extracting power-law exponents from fits to the distributions of avalanche size and duration and assessing the relationship between exponents. There is debate over whether these observations reflect true power laws, but within the resolution achievable from experiments, neural avalanches exhibit power laws with exponent relationships predicted from theory developed in physical systems (Perkovic et al., 1995).
Avalanche criticality is not the only form of criticality observed in neural systems. Zipf’s law, in which the frequency of a network state is inversely proportional to its rank, appears in systems as diverse as fly motion estimation and the salamander retina (Mora and Bialek, 2011; Schwab et al., 2014; Aitchison et al., 2016). More recently, Meshulam et al., 2019 reported various statistics of population activity in the mouse hippocampus, including the eigenvalue spectrum of the covariance matrix and the activity variance. These were found to scale as populations were ‘coarse-grained’ through a procedure in which neural activities were iteratively combined based on similarity. Similar observations have been reported in spontaneous activity recorded across a wide range of brain areas in the mouse (Morales et al., 2023). Simple neural network models of such data explain neither Zipf’s law nor coarse-grained criticality (Meshulam et al., 2019).
Even though these three forms of criticality are observed through different analyses, they may originate from similar mechanisms. Numerous studies have reported relatively low-dimensional structure in the activity of large populations of neurons (Mazor and Laurent, 2005; Ahrens et al., 2012; Mante et al., 2013; Pandarinath et al., 2018; Stringer et al., 2019; Nieh et al., 2021), which can be modeled by a population of neurons that are broadly and heterogeneously coupled to multiple latent (i.e. unobserved) dynamical variables. Using such a model, we previously reproduced scaling under the coarse-graining analysis within experimental uncertainty (Morrell et al., 2021). Zipf’s law has been explained by a similar mechanism (Schwab et al., 2014; Aitchison et al., 2016; Humplik and Tkačik, 2017). A single quasi-static latent variable has been shown to produce avalanche power laws, but not the relationships expected between the critical exponents (Priesemann and Shriki, 2018), while a model including a global modulation of activity can generate avalanche criticality (Mariani et al., 2021) but has not been shown to produce coarse-grained criticality (Morrell et al., 2021). It is not known under what conditions the more general latent dynamical variable model generates avalanche criticality.
Here, we examine avalanche criticality in the latent dynamical variable model of neural population activity. We find that avalanche criticality is observed over a wide range of parameters, some of which may be optimal for information representation. These results demonstrate how criticality in neural recordings can arise from latent dynamics in neural activity, without the need for fine-tuning of network parameters.
Results
Critical exponent values and crackling noise
We begin by defining the metrics used to quantify avalanche statistics and briefly summarize experimental observations, which have been reviewed in detail elsewhere (Plenz et al., 2021; O’Byrne and Jerbi, 2022; Girardi-Schappo, 2021). Activity is recorded across a set of neurons and binned in time. Avalanches are then defined as maximal runs of contiguous time bins in which at least one neuron in the population is active. The duration of an avalanche is the number of contiguous time bins, and the size is the summed activity during the avalanche. The distributions of avalanche size and duration are fit to power laws ($P(S)\sim {S}^{-\tau}$ for size $S$, and $P(D)\sim {D}^{-\alpha}$ for duration $D$) using standard methods (Clauset et al., 2009).
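As a concrete illustration of this definition (a minimal sketch, not the authors' analysis code; the helper name is hypothetical):

```python
def extract_avalanches(counts):
    """Split a sequence of per-time-bin population spike counts into
    avalanches: maximal runs of consecutive bins with nonzero activity.
    Returns a list of (size, duration) pairs."""
    avalanches = []
    size, duration = 0, 0
    for c in counts:
        if c > 0:                      # avalanche continues
            size += c
            duration += 1
        elif duration > 0:             # silent bin ends the avalanche
            avalanches.append((size, duration))
            size, duration = 0, 0
    if duration > 0:                   # avalanche still open at the end
        avalanches.append((size, duration))
    return avalanches

# toy raster: two avalanches separated by silent bins
print(extract_avalanches([0, 2, 1, 3, 0, 0, 1, 0]))  # [(6, 3), (1, 1)]
```

The size and duration lists produced this way are the inputs to the power-law fitting procedure of Clauset et al., 2009.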
Power laws can be indicative of criticality, but they can also result from non-critical mechanisms (Touboul and Destexhe, 2017; Priesemann and Shriki, 2018). A more stringent test of criticality is the ‘crackling’ relationship (Perkovic et al., 1995; Touboul and Destexhe, 2017), which involves fitting a third power-law relationship, $\overline{S}(D)\sim {D}^{{\gamma}_{\text{fit}}}$, and comparing ${\gamma}_{\text{fit}}$ to the predicted exponent ${\gamma}_{\text{pred}}$, derived from the size and duration exponents, $\tau$ and $\alpha$:

$${\gamma}_{\text{pred}}=\frac{\alpha -1}{\tau -1}.$$
Previous work demonstrating approximate power laws in size and duration distributions through the mechanism of a slowly changing latent variable did not generate crackling (Touboul and Destexhe, 2017; Priesemann and Shriki, 2018).
Measuring power laws in empirical data is challenging: it generally requires setting a lower cutoff in the size and duration, and the power-law behavior has only limited range due to the finite size and duration of the recording itself. Nonetheless, there is some consensus (Shew et al., 2015; Fontenele et al., 2019; Ma et al., 2019) that even if $\tau$ and $\alpha$ vary over a wide range (1.5 to about 3) across recordings, the values of ${\gamma}_{\text{fit}}$ and ${\gamma}_{\text{pred}}$ stay in a relatively narrow range, from about 1.1 to 1.3.
Avalanche scaling in a latent dynamical variable model
We study a model of a population of neurons that are not coupled to each other directly but are driven by a small number of latent dynamical variables – that is, slowly changing inputs that are not themselves measured (Figure 1A). We are agnostic as to the origin of these inputs: they may be externally driven from other brain areas, or they may arise from large fluctuations in local recurrent dynamics. The model was chosen for its simplicity, and because we have previously shown that this model with at least about five latent variables can produce power laws under the coarsegraining analysis (Morrell et al., 2021). In this paper, we examine avalanche criticality in the same model.
Specifically, we model the neurons as binary units (${s}_{i}$) that are randomly (${J}_{i\mu}\sim N(0,1)$) coupled to dynamical variables ${h}_{\mu}(t)$. The probability of any pattern $\{{s}_{i}\}$, given the current state of the latent variables, is

$$P(\{{s}_{i}\}\mid \{{h}_{\mu}\})=\frac{1}{Z}\exp \left[\sum_{i=1}^{N}{s}_{i}\left(\eta \sum_{\mu =1}^{{N}_{F}}{J}_{i\mu}{h}_{\mu}(t)-\epsilon \right)\right],$$

where $Z$ normalizes the distribution, the parameter $\eta$ controls the scaling of the variables, and $\epsilon$ controls the overall activity level. We modeled each latent variable as an Ornstein–Uhlenbeck process with timescale ${\tau}_{F}$ (see Materials and methods). Thus, our model has four parameters: $\eta$ (input scaling), $\epsilon$ (activity threshold), ${\tau}_{F}$ (dynamical timescale), and ${N}_{F}$ (number of latent variables).
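The generative process can be sketched in a few lines (an illustrative sketch with assumed parameter values, not the authors' code; it assumes the per-neuron firing probability is the logistic function of the summed latent input minus the threshold, which follows from the conditionally independent binary-unit model):

```python
import numpy as np

rng = np.random.default_rng(0)

N, N_F = 128, 5                    # neurons, latent variables
eta, eps, tau_F = 4.0, 12.0, 1e4   # input scaling, threshold, OU timescale
J = rng.standard_normal((N, N_F))  # couplings J_{i mu} ~ N(0, 1)

def ou_step(h, tau, rng, dt=1.0):
    """Euler-Maruyama step of an Ornstein-Uhlenbeck process with zero
    mean, unit stationary variance, and correlation time tau."""
    return h - h * dt / tau + np.sqrt(2.0 * dt / tau) * rng.standard_normal(h.shape)

def sample_spins(h, J, eta, eps, rng):
    """Conditionally independent binary units: P(s_i = 1 | h) is a
    logistic function of the latent input eta * (J h)_i minus eps."""
    p = 1.0 / (1.0 + np.exp(-(eta * J @ h - eps)))
    return (rng.random(p.shape) < p).astype(int)

h = rng.standard_normal(N_F)       # start from the stationary distribution
raster = []
for _ in range(1000):
    h = ou_step(h, tau_F, rng)
    raster.append(sample_spins(h, J, eta, eps, rng))
raster = np.array(raster)          # shape (time steps, neurons)
```

Summing `raster` across neurons at each time step gives the per-bin population counts on which the avalanche analysis operates.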
Distributions of avalanche size and avalanche duration within this model followed approximate power laws (Figure 1C; see Materials and methods). In the example shown (${N}_{F}=5$, ${\tau}_{F}={10}^{4}$, $\eta =4$, and $\epsilon =12$), we found exponents $\tau =1.89\pm 0.02$ (size) and $\alpha =2.11\pm 0.02$ (duration). Further, the average size of avalanches with fixed duration scaled as $S\sim {D}^{\gamma}$, with the fitted ${\gamma}_{\text{fit}}=1.24\pm 0.02$, in agreement with the predicted value ${\gamma}_{\text{pred}}=1.24\pm 0.02$. Thus, our model could generate avalanche scaling, at least for some parameter choices. In the following sections, we examine how avalanche scaling depends on model parameters (${N}_{F}$, ${\tau}_{F}$, $\eta$, and $\epsilon$; see Table 2). We first focus on two sets of simulations: one set with ${N}_{F}=1$ latent variable, which does not generate scaling under coarse-graining (Morrell et al., 2021), and one set with ${N}_{F}=5$ latent variables, which can generate such scaling for some values of parameters ${\tau}_{F}$, $\eta$, and $\epsilon$ (Morrell et al., 2021; Table 1).
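The crackling prediction for these example exponents can be checked directly:

```python
# size and duration exponents from the example simulation in the text
tau, alpha = 1.89, 2.11
gamma_pred = (alpha - 1) / (tau - 1)
print(round(gamma_pred, 2))  # 1.25, consistent with gamma_fit = 1.24 +/- 0.02
```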
Avalanche scaling depends on the number of latent variables
We analyzed avalanches from one- and five-variable simulations, each with fixed latent dynamical timescale (${\tau}_{F}=5\times {10}^{3}$ time steps; see Table 2 for parameters). In the following sections, time is measured in simulation time steps; see Materials and methods for converting time steps to seconds. We used established methods for measuring empirical power laws (Clauset et al., 2009). The minimum cutoffs for size (${S}_{\text{min}}$) and duration (${D}_{\text{min}}$) are indicated by vertical lines in Figure 2. For the population coupled to a single latent variable, the avalanche size distribution was not well fit by a power law (Figure 2A). With a sufficiently high minimum cutoff (${D}_{\text{min}}$), the duration distribution was approximately power-law (Figure 2B).
We next assessed whether the simulation produced crackling. If so, the value ${\gamma}_{\text{fit}}$ obtained by fitting $\overline{S}(D)\sim {D}^{{\gamma}_{\text{fit}}}$ would be similar to ${\gamma}_{\text{pred}}=\frac{\alpha -1}{\tau -1}$. In many cases, such as the one-variable example shown in Figure 2C, the full range of avalanche durations was not fit by a single power law. Therefore, we determined the largest range over which a power law was a good fit to the simulated observations. In this case, slightly over two decades of apparent scaling were observed, starting from avalanches with minimum duration slightly less than 100 time steps (Figure 2C), with a best-fit value of ${\gamma}_{\text{fit}}\in [1.69,1.74]$. As we did not find a power law in the size distribution, calculating ${\gamma}_{\text{pred}}$ is meaningless. If we do it anyway, we obtain ${\gamma}_{\text{pred}}=0.83\pm 0.03$ (yellow line in Figure 2C), which clearly deviates from the fitted value of $\gamma$. Thus, for the single latent dynamical variable model (${\tau}_{F}=5000$), power-law fits are poor, and there is no crackling.
The five-variable model produces a different picture. We now find avalanches for which size and duration distributions are much better fit by power-law models, starting from very low minimum cutoffs (Figure 2D–E, Figure 2—figure supplement 2). Average size scaled with duration, again over more than two decades, with ${\gamma}_{\text{fit}}=1.27\pm 0.03$, in close agreement with ${\gamma}_{\text{pred}}=1.25\pm 0.02$ (Figure 2F). Holding other parameters constant, we thus found that scaling relationships and crackling arise in the multi-variable model but not the single-variable model.
Avalanche scaling depends on the time scale of latent variables
Based on simulations in the previous section, we surmised that the five-variable simulation generated scaling more readily because it creates an ‘effective’ latent variable with slower dynamics than any individual latent variable. We reasoned that at any moment in time, the latent variable state ${h}_{\mu}(t)$ is a vector in the latent space. Because coupling to the latent variables is random throughout the population, only the length ($\sim \sqrt{{N}_{F}}$) and not the direction of this vector matters, and the timescale of changes in this length is much slower than ${\tau}_{F}$, the timescale of each of the components ${h}_{\mu}(t)$. We therefore speculated that increasing the timescale of the latent dynamics should eventually lead to scaling and crackling in the single-variable model as well as the five-variable one. To examine the dependence of avalanche scaling on this timescale, we simulated one-variable and five-variable networks at fixed $\eta$ and $\epsilon$, coupled to latent variables whose Ornstein–Uhlenbeck dynamics had correlation times ${\tau}_{F}\in [{10}^{3},{10}^{5}]$ time steps, spanning from a factor of 10 faster to a factor of 10 slower than the original ${\tau}_{F}$ in Figure 1. Simulations were replicated five times at each combination of parameters by drawing new latent variable couplings (${J}_{i\mu}$), as well as new latent variable dynamics and instances of neural firing. For simulations that passed the criteria to be fitted by power laws, we plot the fitted values of $\tau$, $\alpha$, ${\gamma}_{\text{fit}}$, and ${\gamma}_{\text{fit}}-{\gamma}_{\text{pred}}$ (Figure 2G–J). Missing points are those for which distributions did not pass the power-law fit criteria.
In the single-variable model, best-fit exponents changed abruptly at a latent variable timescale around ${\tau}_{F}={10}^{4}$ (Figure 2G and H), while in the five-variable model, exponents tended to increase gradually (Figure 2G and H, red). The discontinuity in the single-variable case reflected a change in the lower cutoff values of the power-law fits: size and duration distributions generated with faster latent dynamics could be fit reasonably well to a power law by using a high value of the lower cutoff (Figure 2—figure supplement 3). For timescales greater than $\sim {10}^{4}$, the minimum cutoffs dropped, and the single-variable model generated power-law distributed avalanches and crackling (Figure 2J), similar to the five-variable model. In summary, in the latent dynamical variable model, introducing multiple variables generated scaling at faster timescales. However, by slowing the timescale of the latent dynamics, the model generated signatures of critical avalanche scaling for both multi- and single-variable simulations.
Avalanche criticality, input scaling, and firing threshold
In the previous section, we found that a very slow single latent dynamical variable generated avalanche criticality in the simulated population. Thus, from now on, we simplify the model in order to characterize avalanche statistics across values of the input scaling $\eta$ and firing threshold $\epsilon$. Specifically, we modeled a population of $N=128$ neurons coupled to a single quasi-static latent variable. We simulated ${10}^{3}$ segments of ${10}^{4}$ steps each and drew a new value of the latent variable ($h\sim N(0,1)$) for each segment. Ten replicates of the simulation were generated at each combination of $\eta$ and $\epsilon$ (see Materials and methods).
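This quasi-static protocol is straightforward to sketch (illustrative values; the segment count is reduced here for brevity, and the logistic per-neuron firing probability is assumed as before):

```python
import numpy as np

rng = np.random.default_rng(1)
N, eta, eps = 128, 4.0, 6.0
J = rng.standard_normal(N)   # couplings to the single latent variable

def simulate_segment(h, n_steps, J, eta, eps, rng):
    """Per-time-bin population spike counts with the latent variable
    frozen at h for the whole segment (the quasi-static limit)."""
    p = 1.0 / (1.0 + np.exp(-(eta * J * h - eps)))   # per-neuron rates
    return (rng.random((n_steps, len(J))) < p).sum(axis=1)

# the text uses 10^3 segments of 10^4 steps; 10 segments shown here,
# with h redrawn from N(0, 1) for each segment
counts = np.concatenate([
    simulate_segment(rng.standard_normal(), 10_000, J, eta, eps, rng)
    for _ in range(10)
])
```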
Almost independently of $\eta$ and $\epsilon$, we found high-quality power-law fits and crackling. Figure 3 shows the average (across $n=10$ network realizations) of the exponents extracted from size ($\tau$, Figure 3A) and duration ($\alpha$, Figure 3C) distributions. At small firing threshold ($\epsilon =2$), we do not observe scaling because the system is always active, and all avalanches merge into one. At large firing threshold $\epsilon$ and low input scaling $\eta$, we do not observe scaling because activity is so sparse that all avalanches are small. At intermediate values of the parameters, the simulations generated plausible scaling relationships in size and duration. The difference between ${\gamma}_{\text{fit}}$ and ${\gamma}_{\text{pred}}$ was typically less than 0.1 (Figure 4D–F), consistent with previously reported differences between fit and predicted exponents (Ma et al., 2019). Thus, there appears to be no need for fine-tuning to generate apparent scaling in this model, at least in the limit of (near) infinite observation time. Wherever $\eta$ and $\epsilon$ generate avalanches, there are approximate power-law distributions and crackling.
To determine where avalanches occur, we derive the avalanche rate across values of the latent variable $h$. In the quasi-static model, the probability of avalanche initiation is the probability of a transition from the quiet state to an active state. Because all neurons are conditionally independent, this is ${P}_{\text{ava}}={P}_{\text{silence}}(1-{P}_{\text{silence}})$. The expected number of avalanches ${\hat{N}}_{\text{ava}}$ is then obtained by integrating ${P}_{\text{ava}}$ over $h$ at each value of $\eta$ and $\epsilon$:

$${\hat{N}}_{\text{ava}}\propto \int_{-\infty}^{\infty}{P}_{\text{ava}}(h;\eta ,\epsilon )\,p(h)\,dh,$$

where $p(h)$ is the standard normal distribution. This probability tracks the observed number of avalanches across simulations (Figure 4A).
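This integral is easy to evaluate numerically. A sketch for a single latent variable, again assuming logistic neurons (the overall normalization of ${\hat{N}}_{\text{ava}}$ by recording length is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
J = rng.standard_normal(N)   # couplings to the single latent variable

def p_avalanche(h, eta, eps, J):
    """P_ava(h) = P_silence * (1 - P_silence) for conditionally
    independent logistic neurons at latent-variable value h."""
    p_fire = 1.0 / (1.0 + np.exp(-(eta * J * h - eps)))
    p_sil = np.prod(1.0 - p_fire)
    return p_sil * (1.0 - p_sil)

def avalanche_rate(eta, eps, J, n_grid=2001, h_max=6.0):
    """Integrate P_ava(h) against the standard normal density p(h)."""
    hs = np.linspace(-h_max, h_max, n_grid)
    ph = np.exp(-hs ** 2 / 2) / np.sqrt(2 * np.pi)
    pa = np.array([p_avalanche(h, eta, eps, J) for h in hs])
    return float(np.sum(pa * ph) * (hs[1] - hs[0]))

# at high threshold, stronger input scaling makes avalanches more likely
print(avalanche_rate(eta=4.0, eps=12.0, J=J) > avalanche_rate(eta=1.0, eps=12.0, J=J))
```

Because ${P}_{\text{ava}}\le 1/4$ pointwise, the integral is bounded by $1/4$ as well.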
To gain intuition for the conditions under which avalanches occur, we show two slices of the avalanche probability, at fixed $\eta$ (Figure 4B) and at fixed $\epsilon$ (Figure 4C). Black regions indicate where avalanches are likely to occur. If, for a given value of $\epsilon$ and $\eta$, there is no overlap between the high avalanche probability regions and the distribution of $h$, then there will be no avalanches. For large $\epsilon$, avalanches occur because neurons with large coupling to the latent variable ($\eta {J}_{i}\gg 1$; recall ${J}_{i}\sim N(0,1)$) are occasionally activated by a value of the latent variable $h$ sufficient to exceed $\epsilon$ (Figure 4B). Thus, the scaling parameter $\eta$ controls the value of $h$ for which avalanches occur most frequently (Figure 4C). As $\epsilon$ decreases, avalanches occur for smaller and smaller $h$, until avalanches primarily occur when $h=0$.
To calculate the probability of avalanches, we must integrate over all values of $h$, but we can gain a qualitative understanding of which avalanche regime the system is in by examining the probability of avalanches at $h=0$. At $h=0$, the avalanche probability (see Materials and methods) is

$${P}_{\text{ava}}(h=0)={\left(1+{e}^{-\epsilon}\right)}^{-N}\left[1-{\left(1+{e}^{-\epsilon}\right)}^{-N}\right],$$

which is maximized at ${\epsilon}_{0}=-\mathrm{log}({2}^{1/N}-1)$, independent of ${J}_{i}$ and $\eta$. After some algebra, we find that ${\epsilon}_{0}\sim \mathrm{log}N$ for large $N$. The dependence on $N$ reflects the fact that a larger threshold is required for larger networks: large networks ($N\to \mathrm{\infty}$) are unlikely to achieve complete network silence, preventing avalanches from occurring. Similarly, small networks ($N\sim 1$) are unlikely to fire in consecutive time bins and thus are unlikely to avalanche.
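The location of this maximum is easy to verify numerically (a sanity check, not taken from the paper's code; at $h=0$, each neuron fires with probability $1/(1+{e}^{\epsilon})$, so ${P}_{\text{silence}}={(1+{e}^{-\epsilon})}^{-N}$ and the optimum sits where ${P}_{\text{silence}}=1/2$):

```python
import numpy as np

def p_ava_h0(eps, N):
    """Avalanche probability at h = 0: the couplings drop out, so
    P_silence = (1 + e^-eps)^(-N) and P_ava = P_sil * (1 - P_sil)."""
    p_sil = (1.0 + np.exp(-eps)) ** (-N)
    return p_sil * (1.0 - p_sil)

N = 128
eps_grid = np.linspace(0.1, 15.0, 20001)
eps_star = eps_grid[np.argmax(p_ava_h0(eps_grid, N))]   # numerical optimum
eps_0 = -np.log(2 ** (1.0 / N) - 1)                     # analytic: P_sil = 1/2

print(round(eps_star, 2), round(eps_0, 2))  # both ~5.22 for N = 128
```

For large $N$, ${2}^{1/N}-1\approx (\mathrm{ln}2)/N$, which gives the $\mathrm{log}N$ growth quoted above.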
We plot ${P}_{\text{ava}}(\epsilon ,\eta ;{J}_{i},N,h=0)$ as a function of $\epsilon$ in Figure 4D. The peak at ${\epsilon}_{0}$ divides the space into two regions. For $\epsilon <{\epsilon}_{0}$, a power law is only observed in the large-size avalanches, which are rare (Figure 4E, green). By contrast, when $\epsilon >{\epsilon}_{0}$, minimum size cutoffs are low (Figure 4F, orange). Both regions, $\epsilon <{\epsilon}_{0}$ and $\epsilon >{\epsilon}_{0}$, exhibit crackling noise scaling. If observation times are not sufficiently long (estimated in Figure 4—figure supplement 1), then scaling will not be observed in the $\epsilon <{\epsilon}_{0}$ region, whose scaling relations arise from rare events. Insufficient observation times may explain experiments and simulations in which avalanche scaling was not found.
Inferring the latent variable
Our analysis of ${P}_{\text{ava}}(\epsilon ,\eta ,h)$ at $h=0$ suggested that there are two types of avalanche regimes: one with high activity and high minimum cutoffs in the power-law fit (Type 1), and the other with lower activity and lower size cutoffs (Type 2). Further, when ${P}_{\text{ava}}$ drops to zero, avalanches disappear because the activity is either too high or too low. We now examine how the information about the value of the latent variables represented in the network activity relates to the activity type. To delineate these types, we calculated numerically ${\epsilon}^{\ast}(\eta )$, the value of $\epsilon$ for which the probability of avalanches is maximized, as well as the contours of ${P}_{\text{ava}}$. Curves for ${\epsilon}^{\ast}(\eta )$, ${\epsilon}_{0}$, and ${P}_{\text{ava}}={10}^{-3}$ are shown in Figure 5A and B.
We expect that the more cells fire, the more information they convey, until the firing rate saturates and inferring the value of the latent variable becomes impossible. Figure 5B supports this prediction: generally, information is higher in regions with more activity (lower $\epsilon$, higher $\eta$), but only up to a limit: as $\epsilon \to 0$, information decreases. This decrease begins approximately where the probability of avalanches drops to nearly zero (dashed black lines, Figure 5B–E), because all of the activity merges into a few very large avalanches. In other words, the Type 1 avalanche region coincides with the highest information about the latent variable.
The critical brain hypothesis suggests that the brain operates in a critical state, whose functional role may be to optimize information processing (Beggs, 2008; Chialvo, 2010). Under this hypothesis, we would expect the information conveyed by the network to be maximized in the regions where we observe avalanche criticality. However, we see that critical regions do not always have optimal information transmission. In Figure 5, the region that displays crackling noise is the region where avalanches exist (${P}_{\text{ava}}>0.001$), which corresponds to any $\eta$ value and $\epsilon \gtrsim 3$. This avalanche region encompasses both networks with high information transmission and networks with low information transmission. In summary, observing avalanche criticality in a system does not imply a high-information network state. However, the scaling can be seen at smaller cutoffs, and hence with shorter recordings, in the high-information state. This parallels the discussion by Schwab et al., 2014, who noticed that Zipf’s law always emerges in neural populations driven by quasi-stationary latent fields, but it emerges at smaller system sizes when the information about the latent variable is high.
Discussion
Here, we studied systems with distributed, random coupling to latent dynamical variables, and we found that avalanche criticality is nearly always observed, with no fine-tuning required. Avalanche criticality was surprisingly robust to changes in input gain and firing rate threshold. Loss of avalanche criticality could occur if the latent process was not well-sampled, either because the simulation was not long enough or because the dynamics of the latent variables were too fast. Finally, while information about the latent variables in the network activity was higher where avalanches were generated compared to where they were not, there was a range of information values across the critical avalanche regime. Thus, avalanche criticality alone was not a predictor of optimal information transmission.
Explaining experimental exponents
A wide range of critical exponents has been found in ex vivo and in vivo recordings from various systems. For instance, the seminal work on avalanche statistics in cultured neuronal networks by Beggs and Plenz, 2003 found size and duration exponents of 1.5 and 2.0, respectively, along with $\gamma =2$, when time was discretized with a time bin equal to the average inter-event interval in the system. A subset of the in vivo recordings analyzed from anesthetized cat (Hahn et al., 2010) and macaque monkeys (Petermann et al., 2009) also exhibited a size distribution exponent close to 1.5. By contrast, a survey of many in vivo and ex vivo recordings found power-law size distributions with exponents ranging from 1 to 3, depending on the system (Fontenele et al., 2019). Separately, Ma et al., 2019 reported recordings in freely moving rats with size exponents ranging from 1.5 to 2.7. In these recordings, when the crackling relationship held, the reported value of $\gamma$ was near 1.2 (Fontenele et al., 2019; Ma et al., 2019).
A standard model for generating avalanche criticality is a critical branching process (Beggs and Plenz, 2003), which predicts size and duration exponents of 1.5 and 2 and a scaling exponent $\gamma$ of 2. However, there are alternatives: Lombardi et al., 2023 showed that avalanche criticality may also be produced by an adaptive Ising model in the subcritical regime, and in this case, the scaling exponent $\gamma$ was not 2 but close to 1.6. Our model, across the tested parameters that produced exponents consistent with the scaling relationship, generated $\tau$ values ranging from 1.9 to about 2.5. Across those simulations, we found values of $\gamma$ within a narrow band, from 1.1 to 1.3 (see Figure 2I and J and Figure 3H). While the exponent values our model produces are inconsistent with a critical branching process ($\gamma =2$), they match the ranges of exponents estimated from experiments and reported by Fontenele et al., 2019. In this context, it might be useful to explore whether our model and that of Lombardi et al., 2023 are related, with the adaptive feedback signal of the latter viewed as an effective latent variable of the former.
A genuine challenge in comparing exponents estimated from different experiments with different recording modalities (spiking activity, calcium imaging, LFP, EEG, or MEG) arises from differences in spatial and temporal scale specific to a particular recording, as well as the myriad decisions made in avalanche analysis, such as defining thresholds or binning in time. Thus, one possible reason for differences in exponents across studies may derive from how the system is subsampled in space or coarsegrained in time, both of which systematically change exponents $\tau$ and $\alpha$ (Beggs and Plenz, 2003; Shew et al., 2015) and could account for differences in $\gamma$ (Capek et al., 2023). The model we presented here could be used as a test bed for examining how specific analysis choices affect exponents estimated from recordings.
A second possible explanation for differences in exponents is that different experiments study similar, but distinct biological phenomena. For instance, networks that were cultured in vitro may differ from those that were not, whether they are in vivo or ex vivo (i.e. brain slices), and sensoryprocessing networks may have different dynamics from networks with different processing demands. It is possible that certain networks develop connections between neurons such that they truly do produce dynamics that approximate a critical branching process, while other networks have different structure and resulting dynamics and thus can be better understood as coupled neurons receiving feedback (Lombardi et al., 2023) or as a system coupled to latent dynamical variables. This is especially true in sensory systems, where coupling to (latent) external stimuli in a way that the neural activity can be used to infer the stimuli is the reason for the networks’ existence (Schwab et al., 2014).
Relationship to past modeling work
Our model is not the first to produce approximate power-law size and duration distributions for avalanches from a latent variable process (Touboul and Destexhe, 2017; Priesemann and Shriki, 2018). In particular, Priesemann and Shriki, 2018 derived the conditions under which an inhomogeneous Poisson process could produce such approximate scaling. The basic idea is to generate a weighted sum of exponentially distributed event sizes, each of which is generated from a homogeneous Poisson process. How each process is weighted in this sum determines the approximate power-law exponent, allowing one to tune the system to obtain the critical values of 1.5 and 2. Interestingly, this model did not generate nontrivial scaling of size with duration ($S\sim {D}^{\gamma}$): instead of the predicted $\gamma =2$, they found $\gamma =1$. Our results differ significantly, in that $\gamma$ was typically between 1.1 and 1.3 and was nearly always close to the prediction from $\alpha$ and $\tau$. We speculate that this is due to nonlinearity in the mapping from latent variable to spiking activity: doubling the latent field $h$ does not double the population activity, whereas doubling the rate of a homogeneous Poisson process does double the expected spike count. As biological networks are likely to have such nonlinearities in their responses to common inputs, this scenario may be more applicable to certain kinds of recordings.
Summary
Latent variables – whether they are emergent from network dynamics (Clark et al., 2023; Sederberg and Nemenman, 2020) or derived from shared inputs – are ubiquitous in large-scale neural population recordings. This fact is reflected most directly in the relatively low-dimensional structure of large-scale population recordings (Stringer et al., 2019; Pandarinath et al., 2018; Nieh et al., 2021). We previously used a model based on this observation to examine signatures of neural criticality under a coarse-graining analysis and found that coarse-grained criticality is generated by systems driven by many latent variables (Morrell et al., 2021). Here, we showed that the same model also generates avalanche criticality, and that when information about the latent variables can be inferred from the network, avalanche criticality is also observed. Crucially, finding signatures of avalanche criticality required long observation times, such that the latent variable was well-sampled. Previous studies showed that Zipf’s law appears generically in systems coupled to a latent variable that changes slowly relative to the sampling time, and that the Zipf behavior is easier to observe in the higher-information regime (Schwab et al., 2014; Aitchison et al., 2016). However, this also suggests that observation of either scaling at modest data set sizes indeed points to some fine-tuning – namely, to the increase of the information in the individual neurons (and, since neurons in these models are conditionally independent, also in the entire network) about the value of the latent variables. In other words, one would expect a sensory part of the brain, if adapted to the statistics of the external stimuli, to exhibit all of these critical signatures at relatively modest data set sizes.
In monocular deprivation experiments, when the activity in the visual cortex is transiently not adapted to its inputs, scaling disappears, at least for recordings of typical duration, and is restored as the system adapts to the new stimulus (Ma et al., 2019). We speculate that the recovery of criticality observed by Ma et al., 2019 could be driven by neurons adapting to the reduced-stimulus state, for instance, by adjusting $\eta$ (input scaling) and $\epsilon$ (firing rate threshold).
Taken together, these results suggest that critical behavior in neural systems – whether based on Zipf's law, avalanches, or coarse-graining analysis – is expected whenever neural recordings exhibit some latent structure in population dynamics and this latent structure can be inferred from observations of the population activity.
Materials and methods
Simulation of dynamic latent variable model
We study a model from Morrell et al., 2021, originally constructed to describe large populations of neurons in mouse hippocampus. In the original version of the model, neurons are non-interacting, receiving inputs reflecting place-field selectivity as well as input current arising from a random projection of a small number of latent dynamical variables, which represent inputs shared across the population that are not directly measured or controlled. In the current paper, we include only the latent variables (no place variables), and we assume that every cell is coupled to every latent variable with a randomly drawn coupling strength.
The probability of observing a certain population state $\{s_i\}$ given latent variables $h_{\mu}(t)$ at time $t$ is

$$P(\{s_i\} \mid \{h_\mu(t)\}) = \frac{1}{Z} e^{-H(\{s_i\})},$$

where $Z$ is the normalization, and $H$ is the 'energy':

$$H(\{s_i\}) = -\sum_{i=1}^{N} \left( \eta \sum_{\mu=1}^{N_F} J_{i\mu} h_\mu(t) - \epsilon \right) s_i.$$

The latent variables $h_{\mu}(t)$ are Ornstein-Uhlenbeck processes with zero mean, unit variance, and time constant $\tau_F$. Couplings $J_{i\mu}$ are drawn from the standard normal distribution.
Parameters for each figure are laid out in Tables 1–3. For the infinite time constant simulation, we draw a value $h\sim \mathcal{N}(0,1)$ and simulate for 10000 time steps, then repeat for 1000 draws of $h$.
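As an illustration, the generative model can be sketched in a few lines. This is a minimal sketch, not the paper's simulation code: the function name, default parameter values, and the Euler-Maruyama discretization of the Ornstein-Uhlenbeck process are our choices, and the logistic conditional firing probability is the form implied by an energy that is linear in the spins.

```python
import numpy as np

def simulate_latent_model(N=100, M=5, T=10000, tau=100.0, eta=4.0, eps=8.0, seed=0):
    """Sketch of the latent-variable model: N conditionally independent
    binary neurons driven by M Ornstein-Uhlenbeck latent variables."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((N, M))        # couplings J_{i,mu} from N(0, 1)
    h = np.zeros(M)                        # zero-mean, unit-variance OU variables
    spikes = np.zeros((T, N), dtype=np.int8)
    for t in range(T):
        # Euler-Maruyama step (dt = 1 time bin) of an OU process with time constant tau
        h += -h / tau + np.sqrt(2.0 / tau) * rng.standard_normal(M)
        # conditionally independent neurons: firing probability is a logistic
        # function of the summed latent input, scaled by eta, with threshold eps
        p = 1.0 / (1.0 + np.exp(eps - eta * (J @ h)))
        spikes[t] = rng.random(N) < p
    return spikes
```

For the infinite-time-constant (quasi-static) case, the OU update would be replaced by a single draw of $h$ held fixed for the whole block of time steps.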
Time step units
Most results are presented in arbitrary time units: all times (i.e. $\tau_F$ and avalanche duration $D$) are measured in units of an unspecified time step. Specifying a time bin converts the probability of firing into an actual firing rate, in spikes per second, and this choice determines which part of the $\eta$-$\epsilon$ phase space is most relevant to a given experiment.
The time step is the temporal resolution at which activity is discretized, which varies from several to hundreds of milliseconds across experimental studies (Beggs and Plenz, 2003; Fontenele et al., 2019; Ma et al., 2019). In physical units and assuming a bin size of 3 ms to 10 ms, our choice of $\eta$ and $\epsilon$ in Figure 2 would yield physiologically realistic firing rates (Hengen et al., 2016), with high-firing neurons reaching average rates of 20-50 spikes/second and median firing-rate neurons around 1-2 spikes/second. The latent-variable timescales examined range from about 3 s to 3000 s, assuming 3-ms bins. Inputs with such timescales may arise from external sources, such as sensory stimuli, or from internal sources, such as changes in physiological state.
Simulations were carried out for the same number of time steps ($2\times 10^{6}$), corresponding to approximately 1 to 2 hours, a reasonable duration for in vivo neural recordings. Note that at large values of $\tau_F$, the latent variable space is not well sampled during this time period.
Analysis of avalanche statistics
Setting the threshold for observing avalanches
In our model, we count avalanches as periods of continuous activity (in any subset of neurons) bookended by time bins with no activity in the entire simulated network. For real neural populations of modest size, this method fails because there are no periods of quiescence. The typical solution is to set a threshold and to count avalanches only when the population activity exceeds that threshold, with the hope that results are relatively robust to this choice. In our model, this operation is equivalent to changing $\epsilon$, which shifts the probability of firing up or down by a constant amount across all cells, independent of inputs. Our results in Figure 3 show that $\alpha$ and $\tau$ decrease as the threshold for detection is increased (equivalent to large $\epsilon$), but that the scaling relationship is maintained. The model predicts that $\gamma_{\mathrm{pred}} - \gamma_{\mathrm{fit}}$ would initially increase slightly with the detection threshold before decreasing back to near zero.
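The avalanche-extraction step can be sketched as follows; the function name and the (time bins × neurons) raster convention are our illustrative choices. An avalanche is a maximal run of bins with nonzero population activity, bookended by fully silent bins.

```python
import numpy as np

def avalanche_stats(spikes):
    """Extract avalanche sizes and durations from a (T, N) binary raster.
    An avalanche is a maximal run of time bins with nonzero population
    activity, bookended by fully silent bins."""
    activity = spikes.sum(axis=1)              # population activity per time bin
    active = activity > 0
    # pad with silence and find the edges of runs of active bins
    edges = np.diff(np.concatenate(([0], active.astype(int), [0])))
    starts = np.where(edges == 1)[0]
    ends = np.where(edges == -1)[0]            # exclusive end indices
    sizes = np.array([activity[a:b].sum() for a, b in zip(starts, ends)])
    durations = ends - starts
    return sizes, durations
```

Size is the total spike count within the run and duration is its length in bins, matching the definitions used below.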
Following the algorithm laid out in Clauset et al., 2009, we fit power laws to the size and duration distributions from simulations generating avalanches. We use least-squares fitting to estimate $\gamma_{\mathrm{fit}}$, the scaling exponent for size with duration, assessing the consistency of the fit across decades.
Reading power laws from data
We want, from each simulation, a quantification of the quality of scaling (how many decades, minimally) and an estimate of the scaling exponents ($\tau$ for the size distribution, $\alpha$ for the duration distribution). We first compile all avalanches observed in the simulation and, for each avalanche, calculate its size (total activity across the population during the avalanche) and its duration (number of time bins). Following the steps outlined by Clauset et al., 2009, we use the maximum-likelihood estimator of the scaling exponent, which is the solution to the transcendental equation

$$\frac{\zeta'(\hat{\alpha}, x_{\text{min}})}{\zeta(\hat{\alpha}, x_{\text{min}})} = -\frac{1}{n} \sum_{i=1}^{n} \ln x_i,$$

where $\zeta(\alpha, x_{\text{min}})$ is the Hurwitz zeta function, $\zeta'$ is its derivative with respect to the first argument, and $x_i$ are the observations, that is, either the size or the duration of each avalanche $i$. For values of $x_{\text{min}}<6$, a numerical lookup table based on the built-in Hurwitz zeta function in the symbolic math toolbox was used (MATLAB 2019b). For $x_{\text{min}}>6$, we use the approximation (Clauset et al., 2009)

$$\hat{\alpha} \simeq 1 + n \left[ \sum_{i=1}^{n} \ln \frac{x_i}{x_{\text{min}} - \tfrac{1}{2}} \right]^{-1}.$$
To determine $x_{\text{min}}$, we computed the maximum absolute difference between the empirical cumulative distribution function $S(x)$ and the model's cumulative distribution function $P(x)$ (the Kolmogorov-Smirnov (KS) statistic, $D=\max_{x\ge x_{\text{min}}}|S(x)-P(x)|$). The KS statistic was computed for power-law models with scaling parameter $\hat{\alpha}$ and cutoff $x_{\text{min}}$, and the value of $x_{\text{min}}$ that minimizes the KS statistic was chosen. Occasionally the KS statistic had two local minima (as in Figure 2—figure supplement 1), indicating two different power laws. In these cases, the minimum size and duration cutoffs were the smallest values that were within 10% of the absolute minimum of the KS statistic. Note that the statistic is computed for each model only on the power-law portion of the CDF (i.e. $x_i \ge x_{\text{min}}$). We do not attempt to determine an upper cutoff value.
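A simplified stand-in for this procedure (not the paper's MATLAB pipeline) can be sketched using the large-$x_{\text{min}}$ approximation from Clauset et al., 2009 and SciPy's Hurwitz zeta function; function names and tolerances are our choices.

```python
import numpy as np
from scipy.special import zeta  # Hurwitz zeta: zeta(s, q)

def fit_discrete_powerlaw(x, x_min):
    """Approximate MLE for the exponent of a discrete power law,
    valid for x_min of roughly 6 or more (Clauset et al., 2009)."""
    x = np.asarray(x, dtype=float)
    x = x[x >= x_min]
    alpha_hat = 1.0 + len(x) / np.sum(np.log(x / (x_min - 0.5)))
    return alpha_hat, len(x)

def ks_statistic(x, alpha, x_min):
    """KS distance between the empirical CDF of x >= x_min and the
    discrete power-law CDF, normalized by the Hurwitz zeta function."""
    x = np.sort(np.asarray(x)[np.asarray(x) >= x_min])
    support = np.arange(x_min, x.max() + 1)
    pmf = support ** (-alpha) / zeta(alpha, x_min)
    model_cdf = np.cumsum(pmf)
    emp_cdf = np.searchsorted(x, support, side="right") / len(x)
    return np.max(np.abs(emp_cdf - model_cdf))
```

Scanning `ks_statistic` over candidate cutoffs and keeping the minimizing $x_{\text{min}}$ reproduces the cutoff-selection step described above.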
To assess the quality of the power-law fit, Clauset et al., 2009 compared the empirical observations to surrogate data generated from a semi-parametric power-law model. The semi-parametric model sets the value of the CDF equal to the empirical CDF values up to $x=x_{\text{min}}$ and then according to the power-law model for $x>x_{\text{min}}$. If the KS statistic for the real data (relative to its fitted model) is within the distribution of the KS statistics for surrogate datasets relative to their respective fitted models, the power-law model was considered a reasonable fit.
Strict application of this methodology can give misleading results, largely because of the loss of statistical power when the minimum cutoff is so high that the number of observations drops. For instance, in the simulations shown in Figure 2, the one-variable duration distribution passed the Clauset et al., 2009 criterion, with a minimum KS statistic of 0.03 when the duration cutoff was 18 time steps. However, for the five-variable simulation in Figure 2, a power law would be narrowly rejected for both size and duration, despite much smaller KS statistics: for $\tau$, the KS statistic was 0.0087 (simulation range: 0.0008 to 0.0082; number of avalanches observed: 58,787) and for $\alpha$ it was 0.0084 (simulation range: 0.0011 to 0.0075). Below we discuss this problem in more detail.
Determining range over which avalanche size scales with duration
For fitting $\gamma$, our aim was to find the longest sampled range over which we have apparent power-law scaling of size with duration. Because the sampled duration values have linear spacing, error estimates are skewed if a naive goodness-of-fit criterion is used. We therefore devised the following algorithm. First, the simulation must have at least one avalanche of size 500. We fit $S=cD^{\gamma}$ over one decade at a time. We chose as the lower duration cutoff the value of minimum duration for which the largest number of subsequent (longer-duration) fits produced consistent fit parameters (Figure 2—figure supplements 3 and 4, top row). Next, with the minimum duration set, we gradually increased the maximum duration cutoff and determined whether there was a significant bias in the residual over the first decade of the fit. We selected the highest duration cutoff for which there was no bias. Finally, over this range, we refit the power-law relationship and extracted confidence intervals.
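The core fitting step can be illustrated by a stripped-down version that fits $\log S$ against $\log D$ over a fixed duration window, omitting the decade-wise consistency and residual-bias checks described above; the function name and windowing convention are ours.

```python
import numpy as np

def fit_gamma(sizes, durations, d_min, d_max):
    """Least-squares estimate of gamma in S ~ c * D^gamma, fit to the
    mean avalanche size at each duration within [d_min, d_max]."""
    durations = np.asarray(durations)
    sizes = np.asarray(sizes, dtype=float)
    d_vals = np.arange(d_min, d_max + 1)
    mean_s = np.array([sizes[durations == d].mean() if np.any(durations == d)
                       else np.nan for d in d_vals])
    keep = ~np.isnan(mean_s)                   # skip unsampled durations
    gamma, log_c = np.polyfit(np.log(d_vals[keep]), np.log(mean_s[keep]), 1)
    return gamma, np.exp(log_c)
```

Averaging sizes within each duration before fitting reduces the weight that the many short avalanches would otherwise carry in a naive point-by-point regression.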
Our analysis focused on finding the apparent power-law relationship that held over the largest log-scale range. A common feature across simulation parameters ($\tau_F$, $N_F$) was the existence of two distinct power-law regimes. This is apparent in Figure 2I, which shows that for $N_F=1$ at small $\tau_F$, the best-fit $\gamma$ (that showing the largest range with power-law-consistent scaling) is much larger (approximately 1.7), and then above $\tau_F \sim 3000$, the best-fit $\gamma$ drops to around 1.3.
Statistical power of powerlaw tests
In several cases, we found examples of power-law fits that passed the rejection criteria commonly used to determine avalanche scaling relationships only because of the limited number of observations. A key example is the single-latent-variable simulation shown in Figure 2B, where we could not reject a power law for the duration distribution. Conversely, strict application of the surrogate criteria would reject a power law for distributions that were quantitatively much closer to a power law (i.e. lower KS statistic), but for which we had many more observations and thus a much stronger surrogate test (Figure 2). This points to the difficulty of applying a single criterion to determine a power-law fit. In this work, we adhere to the criteria set forth in Clauset et al., 2009, with a modification to control for the unreasonably high statistical power of simulated data: the number of avalanches used for fitting and for surrogate analysis was capped at 500,000, drawn randomly from the entire pool of avalanches.
Additionally, we found examples in which a short simulation was rejected, but increasing the simulation time by a factor of five yielded excellent power-law fits. We speculate that this arises from insufficient sampling of the latent space. These observations raise an important biological point. Simulations provide the luxury of assuming the network is unchanging for as long as the simulator cares to keep drawing samples. In a biological network, this is not the case. Over the course of hours, the effective latent degrees of freedom could change drastically (e.g. due to circadian effects [Aton et al., 2009], changes in behavioral state [Fu et al., 2014], plasticity [Hooks and Chen, 2020], etc.), and the network itself (synaptic scaling, firing thresholds, etc.) could be plastic (Hengen et al., 2016). All these factors can be modeled in our framework by choosing appropriate cutoffs (duration of recording, time step size, activity distributions) based on specific experimental timescales.
Calculation of avalanche regimes
In the quasi-static model, we derive the dependence of the avalanche rate on $\eta$, $\epsilon$, and the number of neurons $N$, finding that there are two distinct regimes in which avalanches occur. Each time bin is independent, conditioned on the value of $h$. For an avalanche to occur, the probability of silence in the population (i.e. all $s_i=0$) must not be too close to 0 (or there are no breaks in activity) or too close to 1 (or there is no activity). At fixed $h$, the probability of silence is

$$P_{\text{silence}}(\epsilon, \eta; \{J_i\}, N, h) = \prod_{i=1}^{N} \frac{1}{1 + e^{\eta J_i h - \epsilon}}.$$

An avalanche occurs when a silent time bin is followed by an active bin, which has probability $P_{\text{ava}}(\epsilon ,\eta ;\{J_i\},N,h)=P_{\text{silence}}\,(1-P_{\text{silence}})$.
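Numerically, this quantity is straightforward to evaluate; in the sketch below the parameter defaults are illustrative, and the logistic per-neuron firing probability is the form implied by the model's energy function.

```python
import numpy as np

def p_avalanche_start(h, J, eta=4.0, eps=8.0):
    """Probability per time bin that an avalanche begins at fixed latent
    value h: a silent bin followed by an active bin (bins are independent
    given h)."""
    p_fire = 1.0 / (1.0 + np.exp(eps - eta * J * h))   # per-neuron firing prob
    p_silence = np.prod(1.0 - p_fire)                  # all N neurons silent
    return p_silence * (1.0 - p_silence)
```

Because $P_{\text{silence}}(1-P_{\text{silence}})$ is maximal at $P_{\text{silence}}=1/2$ and vanishes at both 0 and 1, the avalanche rate is highest in the intermediate regime between saturated activity and total quiescence.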
Information calculation
Maximumlikelihood decoding
For large populations coupled to a single latent variable, we estimated the information between population spiking activity and the latent variable as the information between the maximum-likelihood estimator $h^{\ast}$ of the latent variable $h$ and the latent variable itself. This approximation fails at extremes of network activity (low or high).
Specifically, we approximated the log-likelihood of $h$ near its maximum $h^{\ast}$ by $\log L(h \mid h^{\ast}) \approx \log L_{\max} - \frac{1}{2}\frac{(h - h^{\ast})^{2}}{\sigma_{h^{\ast}}^{2}}$. Thus we assume that $h^{\ast}$ is normally distributed about $h_{\text{true}}$ with variance $\sigma^{2}(h_{\text{true}})$, which is derived from the curvature of the log-likelihood at the maximum. The information between two Gaussian variables, here $P(h^{\ast} \mid h)=\mathcal{N}(h,\sigma_{h^{\ast}}^{2})$ and $p(h)=\mathcal{N}(0,1)$, is

$$I(h^{\ast}; h) = \left\langle \frac{1}{2} \log_2\!\left(1 + \frac{1}{\sigma_{h^{\ast}}^{2}}\right) \right\rangle,$$

where the average is taken over $h_{\text{true}}\sim \mathcal{N}(0,1)$.
Given a set of $T$ observations of the neurons $\{s_i\}$, the likelihood is

$$L(h) = \prod_{t=1}^{T} \prod_{i=1}^{N} p_i(h)^{s_{it}} \left(1 - p_i(h)\right)^{1-s_{it}}, \qquad p_i(h) = \frac{1}{1 + e^{\epsilon - \eta J_i h}}.$$

Maximizing the log-likelihood gives the following condition:

$$\sum_{i} \eta J_i \overline{s}_i = \sum_{i} \eta J_i \, p_i(h^{\ast}),$$

where $\overline{s}_i=\frac{1}{T}\sum_{t}s_{it}$ is the average over observations $t$. The uncertainty in $h^{\ast}$ is $\sigma_{h^{\ast}}$, which was calculated from the second derivative of the log-likelihood:

$$\sigma_{h^{\ast}}^{2} = \left[ T \sum_{i} (\eta J_i)^{2} \, p_i(h^{\ast}) \left(1 - p_i(h^{\ast})\right) \right]^{-1}.$$

This expression depends on the observations $\overline{s}_i$ only through the maximum-likelihood estimate $h^{\ast}$. When $h^{\ast}\to h_{\text{true}}$, the variance is

$$\sigma_{h^{\ast}}^{2} = \left[ T \sum_{i} (\eta J_i)^{2} \, p_i(h_{\text{true}}) \left(1 - p_i(h_{\text{true}})\right) \right]^{-1}.$$
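The decoding step can be sketched numerically as follows. The parameter defaults, the root-finding bracket, and the logistic form of $p_i(h)$ are our assumptions based on the model definition; since the gradient of the log-likelihood is monotonically decreasing in $h$, a bracketing root finder suffices.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import expit  # numerically stable logistic function

def decode_latent(s_bar, J, T, eta=4.0, eps=8.0):
    """Maximum-likelihood estimate of a single latent variable h from mean
    activities s_bar over T observations, with curvature-based uncertainty."""
    p = lambda h: expit(eta * J * h - eps)       # P(s_i = 1 | h)
    # stationarity condition: sum_i eta*J_i*(s_bar_i - p_i(h)) = 0
    grad = lambda h: np.sum(eta * J * (s_bar - p(h)))
    h_star = brentq(grad, -20.0, 20.0)           # grad decreases monotonically in h
    # variance = inverse of the negative second derivative of the log-likelihood
    curvature = T * np.sum((eta * J) ** 2 * p(h_star) * (1.0 - p(h_star)))
    return h_star, 1.0 / np.sqrt(curvature)
```

Averaging $\frac{1}{2}\log_2(1 + 1/\sigma_{h^{\ast}}^{2})$ over draws of $h_{\text{true}}\sim\mathcal{N}(0,1)$ then gives the information estimate described above.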
To generate Figure 5, we evaluated the information expression above using the variance in the limit $h^{\ast}\to h_{\text{true}}$.
Code availability
Simulation code was adapted from our previous work (Morrell et al., 2021). Code to run simulations and perform the analyses presented in this paper is uploaded as Source code 1 and is also available from https://github.com/ajsederberg/avalanche (copy archived at Sederberg, 2024).
Data availability
The current manuscript is a computational study, so no data were generated for this manuscript. Modelling code is uploaded as Source code 1.
References

Zipf’s law arises naturally when there are underlying, unobserved variables. PLOS Computational Biology 12:e1005110. https://doi.org/10.1371/journal.pcbi.1005110

Neuronal avalanches in neocortical circuits. The Journal of Neuroscience 23:11167–11177. https://doi.org/10.1523/JNEUROSCI.233511167.2003

The criticality hypothesis: how local cortical networks might optimize information processing. Philosophical Transactions of the Royal Society A 366:329–343. https://doi.org/10.1098/rsta.2007.2092

Dimension of activity in random neural networks. Physical Review Letters 131:118401. https://doi.org/10.1103/PhysRevLett.131.118401

Criticality between cortical states. Physical Review Letters 122:208101. https://doi.org/10.1103/PhysRevLett.122.208101

Neuronal avalanches in spontaneous activity in vivo. Journal of Neurophysiology 104:3312–3322. https://doi.org/10.1152/jn.00953.2009

Probabilistic models for neural populations that naturally capture global coupling and criticality. PLOS Computational Biology 13:e1005763. https://doi.org/10.1371/journal.pcbi.1005763

Neuronal avalanches across the rat somatosensory barrel cortex and the effect of single whisker stimulation. Frontiers in Systems Neuroscience 15:709677. https://doi.org/10.3389/fnsys.2021.709677

Coarse graining, fixed points, and scaling in a large population of neurons. Physical Review Letters 123:178103. https://doi.org/10.1103/PhysRevLett.123.178103

Are biological systems poised at criticality? Journal of Statistical Physics 144:268–302. https://doi.org/10.1007/s1095501102294

How critical is brain criticality? Trends in Neurosciences 45:820–837. https://doi.org/10.1016/j.tins.2022.08.007

Avalanches, Barkhausen noise, and plain old criticality. Physical Review Letters 75:4528–4531. https://doi.org/10.1103/PhysRevLett.75.4528

Self-organized criticality in the brain. Frontiers in Physics 9:639389. https://doi.org/10.3389/fphy.2021.639389

Can a time varying external drive give rise to apparent criticality in neural systems? PLOS Computational Biology 14:e1006081. https://doi.org/10.1371/journal.pcbi.1006081

Zipf’s law and criticality in multivariate data without fine-tuning. Physical Review Letters 113:068102. https://doi.org/10.1103/PhysRevLett.113.068102

Adaptation to sensory input tunes visual cortex to criticality. Nature Physics 11:659–663. https://doi.org/10.1038/nphys3370
Article and author information
Author details
Funding
Simons Foundation (SimonsEmory Consortium on Motor Control)
 Ilya Nemenman
National Science Foundation (BCS/1822677)
 Ilya Nemenman
National Institute of Neurological Disorders and Stroke (2R01NS084844)
 Ilya Nemenman
National Institute of Mental Health (1RF1MH130413)
 Audrey Sederberg
Simons Foundation (Investigator Program)
 Ilya Nemenman
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
IN was supported in part by the Simons Foundation Investigator program, the SimonsEmory Consortium on Motor Control, NSF grant BCS/1822677 and NIH grant 2R01NS084844. AS was supported in part by NIH grant 1RF1MH13041301 and by startup funds from the University of Minnesota Medical School. The authors acknowledge the Minnesota Supercomputing Institute (MSI) at the University of Minnesota for providing resources that contributed to the research results reported within this paper.
Version history
 Sent for peer review: May 30, 2023
 Preprint posted: June 8, 2023 (view preprint)
 Preprint posted: September 12, 2023 (view preprint)
 Preprint posted: February 2, 2024 (view preprint)
 Version of Record published: March 12, 2024 (version 1)
Cite all versions
You can cite all versions using the DOI https://doi.org/10.7554/eLife.89337. This DOI represents all versions, and will always resolve to the latest one.
Copyright
© 2023, Morrell et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.