Global organization of neuronal activity only requires unstructured local connectivity
Abstract
Modern electrophysiological recordings simultaneously capture single-unit spiking activities of hundreds of neurons spread across large cortical distances. Yet, this parallel activity is often confined to relatively low-dimensional manifolds. This implies strong coordination also among neurons that are most likely not even connected. Here, we combine in vivo recordings with network models and theory to characterize the nature of mesoscopic coordination patterns in macaque motor cortex and to expose their origin: We find that heterogeneity in local connectivity supports network states with complex long-range cooperation between neurons that arises from multisynaptic, short-range connections. Our theory explains the experimentally observed spatial organization of covariances in resting-state recordings as well as the behaviorally related modulation of covariance patterns during a reach-to-grasp task. The ubiquity of heterogeneity in local cortical circuits suggests that the brain uses the described mechanism to flexibly adapt neuronal coordination to momentary demands.
Editor's evaluation
This is a thorough study showing that long-range correlations in the brain can arise without common input drive or long-range anatomical connections. These long-range correlations are modulated by the animal's behavioral state, a surprising finding that suggests a computational role for control of this kind of correlation. The paper details some analytical methods for modeling this behavior in disordered systems. The work will be of broad interest to neuroscientists, computational biologists, and biophysicists.
https://doi.org/10.7554/eLife.68422.sa0

Introduction
Complex brain functions require coordination between large numbers of neurons. Unraveling mechanisms of neuronal coordination is therefore a core ingredient towards answering the longstanding question of how neuronal activity represents information. Population coding is one classical paradigm (Georgopoulos et al., 1983) in which entire populations of similarly tuned neurons behave coherently, thus leading to positive correlations among their members. The emergence and dynamical control of such population-averaged correlations has been studied intensely (Ginzburg and Sompolinsky, 1994; Renart et al., 2010; Helias et al., 2014; Rosenbaum and Doiron, 2014). More recently, evidence accumulated that neuronal activity often evolves within more complex low-dimensional manifolds, which imply more involved ways of neuronal activity coordination (Gallego et al., 2017; Gallego, 2018; Gallego et al., 2020): A small number of population-wide activity patterns, the neural modes, are thought to explain most variability of neuronal activity. In this case, individual neurons do not necessarily follow a stereotypical activity pattern that is identical across all neurons contributing to a representation. Instead, the coordination among the members is determined by more complex relations. Simulations of recurrent network models indeed indicate that networks trained to perform a realistic task exhibit activity organized in low-dimensional manifolds (Sussillo et al., 2015). The dimensionality of such manifolds is determined by the structure of correlations (Abbott et al., 2011; Mazzucato et al., 2016) and tightly linked to the complexity of the task the network has to perform (Gao, 2017) as well as to the dimensionality of the stimulus (Stringer et al., 2019).
Recent work has started to decipher how neural modes and the dimensionality of activity are shaped by features of network connectivity, such as heterogeneity of connections (Smith et al., 2018; Dahmen et al., 2019), block structure (Aljadeff et al., 2015; Aljadeff et al., 2016), and low-rank perturbations (Mastrogiuseppe and Ostojic, 2018) of connectivity matrices, as well as connectivity motifs (Recanatesi et al., 2019; Dahmen et al., 2021; Hu and Sompolinsky, 2020). Yet, these works neglected the spatial organization of network connectivity (Schnepel et al., 2015), which becomes ever more important as current experimental techniques allow the simultaneous recording of ever more neurons. How distant neurons that are likely not connected can still be strongly coordinated to participate in the same neural mode remains an open question.
To answer this question, we combine analyses of parallel spiking data from macaque motor cortex with the analytical investigation of a spatially organized neuronal network model. We here quantify coordination by Pearson correlation coefficients and pairwise covariances, which measure how temporal departures of the neurons’ activities away from their mean firing rate are correlated. We show that, even with only unstructured and short-range connections, strong covariances across distances of several millimeters emerge naturally in balanced networks if their dynamical state is close to an instability within a ‘critical regime’. This critical regime arises from strong heterogeneity in local network connections that is abundant in brain networks. Intuitively, it arises because activity propagates over a large number of different indirect paths. Heterogeneity, here in the form of sparse random connectivity, is thus essential to provide a rich set of such paths. While mean covariances are readily accessible by mean-field techniques and have been shown to be small in balanced networks (Renart et al., 2010; Tetzlaff et al., 2012), explaining covariances on the level of individual pairs requires methods from statistical physics of disordered systems. With such a theory, here derived for spatially organized excitatory-inhibitory networks, we show that large individual covariances arise at all distances if the network is close to the critical point. These predictions are confirmed by recordings of macaque motor cortex activity. The long-range coordination found in this study is not merely determined by the anatomical connectivity, but depends substantially on the network state, which is characterized by the individual neurons’ mean firing rates. This allows the network to adjust the neuronal coordination pattern in a dynamic fashion, which we demonstrate through simulations and by comparing two behavioral epochs of a reach-to-grasp experiment.
Results
Macaque motor cortex shows long-range coordination patterns
We first analyze data from motor cortex of macaques during rest, recorded with $4\times 4{\mathrm{mm}}^{2}$, 100-electrode Utah arrays with 400 µm inter-electrode distance (Figure 1A). The resting condition of motor cortex in monkeys is ideal to assess intrinsic coordination between neurons during ongoing activity. In particular, our analyses focus on true resting-state data, devoid of movement-related transients in neuronal firing (see Materials and methods). Parallel single-unit spiking activity of $\approx 130$ neurons per recording session, sorted into putative excitatory and inhibitory cells, shows strong spike-count correlations across the entire Utah array, well beyond the typical scale of the underlying short-range connectivity profiles (Figure 1B and D). Positive and negative correlations form patterns in space that are furthermore seemingly unrelated to the neuron types. All populations show a large dispersion of both positive and negative correlation values (Figure 1C).
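The spike-count correlations referenced here can be estimated from binned parallel spike trains. The following minimal sketch illustrates the computation on synthetic Poisson-like data; the unit count matches the paper, but the recording duration, bin size, and firing-rate range are hypothetical stand-ins, not the actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for parallel spiking data: 130 units, 100 s of activity
# at 1 ms resolution, independent Poisson-like spiking (hypothetical rates).
n_units, duration_ms, bin_ms = 130, 100_000, 1000
rates_hz = rng.uniform(1.0, 10.0, size=n_units)
spikes = rng.random((n_units, duration_ms)) < rates_hz[:, None] / 1000.0

# Spike counts in 1 s bins; Pearson correlation of counts across bins.
counts = spikes.reshape(n_units, -1, bin_ms).sum(axis=2)
corr = np.corrcoef(counts)
```

For independent units, the off-diagonal entries of `corr` scatter around zero; in the recorded data they instead show the broad, spatially structured dispersion of Figure 1C and D.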
The classical view on pairwise correlations in balanced networks (Ginzburg and Sompolinsky, 1994; Renart et al., 2010; Pernice et al., 2011; Pernice et al., 2012; Tetzlaff et al., 2012; Helias et al., 2014) focuses on averages across many pairs of cells: average correlations are small if the network dynamics is stabilized by an excess of inhibitory feedback, giving rise to dynamics known as the ‘balanced state’ (van Vreeswijk and Sompolinsky, 1996; Amit and Brunel, 1997; van Vreeswijk and Sompolinsky, 1998): Negative feedback counteracts any coherent increase or decrease of the population-averaged activity, preventing the neurons from fluctuating in unison (Tetzlaff et al., 2012). Breaking this balance in different ways leads to large correlations (Rosenbaum and Doiron, 2014; Darshan et al., 2018; Baker et al., 2019). Can the observation of significant correlations between individual cells across large distances be reconciled with the balanced state? In the following, we provide a mechanistic explanation.
Multisynaptic connections determine correlations
Connections mediate interactions between neurons. Many studies therefore directly relate connectivity and correlations (Pernice et al., 2011; Pernice et al., 2012; Trousdale et al., 2012; Brinkman et al., 2018; Kobayashi et al., 2019). From direct connectivity, one would expect positive correlations between excitatory neurons, negative correlations between inhibitory neurons, and a mix of negative and positive correlations only for excitatory-inhibitory pairs. Likewise, a shared input from inside or outside the network only imposes positive correlations between any two neurons (Figure 2A). The observations that excitatory neurons may have negative correlations (Figure 1D), as well as the broad distribution of correlations covering both positive and negative values (Figure 1C), are not compatible with this view. In fact, the sign of correlations appears to be independent of the neuron types. So how do negative correlations between excitatory neurons arise?
The view that equates connectivity with correlation implicitly assumes that the effect of a single synapse on the receiving neuron is weak. This view, however, regards each synapse in isolation. Could there be states in the network where, collectively, many weak synapses cooperate, as perhaps required to form low-dimensional neuronal manifolds? In such a state, interactions may not only be mediated via direct connections but also via indirect paths through the network (Figure 2B). Such effective multisynaptic connections may explain our observation that far apart neurons that are basically unconnected display considerable correlation of arbitrary sign.
Let us here illustrate the ideas first and corroborate them in subsequent sections. Direct connections yield correlations of a predefined sign, leading to correlation distributions with multiple peaks, for example, a positive peak for excitatory neurons that are connected and a peak at zero for neurons that are not connected. Multisynaptic paths, however, involve both excitatory and inhibitory intermediate neurons, which contribute to the interaction with different signs (Figure 2B). Hence, a single indirect path can contribute to the total interaction with arbitrary sign (Pernice et al., 2011). If indirect paths dominate the interaction between two neurons, the sign of the resulting correlation becomes independent of their type. Given that the connecting paths in the network are different for any two neurons, the resulting correlations can fall in a wide range of both positive and negative values, giving rise to the broad distributions for all combinations of neuron types in Figure 1C. This provides a hypothesis why there may be no qualitative difference between the distribution of correlations for excitatory and inhibitory neurons. In fact, their widths are similar and their mean is close to zero (see Materials and methods for exact values); the latter being the hallmark of the negative feedback that characterizes the balanced state. The subsequent model-based analysis will substantiate this idea and show that it also holds for networks with spatially organized heterogeneous connectivity.
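The sign argument can be made concrete in a minimal linear-response sketch. In the standard zero-frequency formula $C=(\mathbb{1}-W)^{-1}D(\mathbb{1}-W)^{-\top}$ used throughout this literature, a three-neuron motif (the motif and the weight value are hypothetical choices for illustration) shows how two excitatory neurons without any direct or shared connection acquire a negative covariance through a disynaptic path via an inhibitory neuron:

```python
import numpy as np

# Neurons: 0 = excitatory, 1 = excitatory, 2 = inhibitory (hypothetical motif).
# W[i, j] is the weight of the connection j -> i.
w = 0.5
W = np.zeros((3, 3))
W[2, 0] = +w   # E0 excites the inhibitory neuron,
W[1, 2] = -w   # which in turn inhibits E1; E0 and E1 are unconnected

D = np.eye(3)                          # independent white-noise input
B = np.linalg.inv(np.eye(3) - W)       # propagator summing all direct and indirect paths
C = B @ D @ B.T                        # zero-frequency covariance matrix

print(C[0, 1])   # -w**2 = -0.25: negative E-E covariance from the indirect path
```

The entry $C_{01}=-w^{2}$ carries the product of the signs along the path, exactly the mechanism sketched in Figure 2B; with many heterogeneous paths superimposed, the resulting covariance can take either sign for any pair of neuron types.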
To take this hypothesis further, an important consequence of the dominance of multisynaptic connections could be that correlations are not restricted to the spatial range of direct connectivity. Through interactions via indirect paths the reach of a single neuron could effectively be increased. But the details of the spatial profile of the correlations in principle could be highly complex, as it depends on the interplay of two antagonistic effects: On the one hand, signal propagation becomes weaker with distance, as the signal has to pass several synaptic connections. Along these paths mean firing rates of neurons are typically diverse, and so are their signal transmission properties (de la Rocha et al., 2007). On the other hand, the number of contributing indirect paths between any pair of neurons proliferates with their distance. With single neurons typically projecting to thousands of other neurons in cortex, this leads to involved combinatorics; intuition here ceases to provide a sensible hypothesis on the effective spatial profile and range of coordination between neurons. It is also unclear which parameters these coordination patterns depend on. The model-driven and analytical approach of the next section will provide such a hypothesis.
Networks close to instability show shallow exponential decay of covariances
We first note that the large magnitude and dispersion of individual correlations in the data and their spatial structure primarily stem from features in the underlying covariances between neuron pairs (Appendix 1—figure 1). Given the close relationship between correlations and covariances (Appendix 1—figure 1D and E), in the following we analyze covariances, as these are less dependent on single neuron properties and thus analytically simpler to treat. To gain an understanding of the spatial features of intrinsically generated covariances in balanced critical networks, we investigate a network of excitatory and inhibitory neurons on a two-dimensional sheet, where each neuron receives external Gaussian white noise input (Figure 3A). We investigate the covariance statistics in this model with the help of linear-response theory, which has been shown to approximate spiking neuron models well (Pernice et al., 2012; Trousdale et al., 2012; Tetzlaff et al., 2012; Helias et al., 2013; Grytskyy et al., 2013; Dahmen et al., 2019). To allow for multapses, the connections between two neurons are drawn from a binomial distribution, and the connection probability decays with interneuronal distance on a characteristic length scale $d$ (for more details see Materials and methods). Previous studies have used linear-response theory in combination with methods from statistical physics and field theory to gain analytic insights into both mean covariances (Ginzburg and Sompolinsky, 1994; Lindner et al., 2005; Pernice et al., 2011; Tetzlaff et al., 2012) and the width of the distribution of covariances (Dahmen et al., 2019). Field-theoretic approaches, however, were so far restricted to purely random networks devoid of any network structure and thus not suitable to study spatial features of covariances.
To analytically quantify the relation between the spatial ranges of covariances and connections, we therefore here develop a theory for spatially organized random networks with multiple populations. The randomness in our model is based on the sparseness of connections, which is one of the main sources of heterogeneity in cortical networks in that it contributes strongly to the variance of connections (see Appendix 1 Section 15).
A distance-resolved histogram of the covariances in the spatially organized E-I network shows that the mean covariance is close to zero but the width or variance of the covariance distribution stays large, even for large distances (Figure 3C). Analytically, we derive that, despite the complexity of the various indirect interactions, both the mean and the variance of covariances follow simple exponential laws in the long-distance limit (see Appendix 1 Sections 4–12). These laws are universal in that they do not depend on details of the spatial profile of connections. Our theory shows that the associated length scales are strikingly different for means and variances of covariances. They each depend on the reach of direct connections and on specific eigenvalues of the effective connectivity matrix. These eigenvalues summarize various aspects of network connectivity and signal transmission into a single number: Each eigenvalue belongs to a ‘mode’, a combination of neurons that act collaboratively, rather than independently, coordinating neuronal activity within a one-dimensional subspace. To start with, there are as many such subspaces as there are neurons. But if the spectral bound in Figure 3B is close to one, only a relatively small fraction of them, namely those close to the spectral bound, dominate the dynamics; the dynamics is then effectively low-dimensional. Additionally, the eigenvalue quantifies how fast a mode decays when transmitted through a network. The eigenvalues of the dominating modes are close to one, which implies a long lifetime. The corresponding fluctuations thus still contribute significantly to the overall signal, even if they have passed through many synaptic connections. Therefore, indirect multisynaptic connections contribute significantly to covariances if the spectral bound is close to one, and in that case we expect to see long-range covariances.
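This behavior can be reproduced numerically in a reduced setting. The sketch below uses a one-dimensional ring instead of the two-dimensional sheet, Bernoulli instead of binomial connectivity, and hypothetical parameter values throughout; it only illustrates the qualitative effect of the spectral bound on the distance dependence of the covariance dispersion, not the model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D-ring analogue of the model: N neurons, 80% excitatory,
# connection probability decaying with ring distance on a scale d0.
N, d0, p0 = 300, 5.0, 0.5
idx = np.arange(N)
dist = np.abs(idx[:, None] - idx[None, :])
dist = np.minimum(dist, N - dist)
sign = np.where(idx < int(0.8 * N), 1.0, -4.0)     # inhibition-dominated weights

def covariances(R):
    """Zero-frequency covariances of a linear network rescaled to spectral bound R."""
    A = (rng.random((N, N)) < p0 * np.exp(-dist / d0)) * sign[None, :]
    np.fill_diagonal(A, 0.0)
    A *= R / np.abs(np.linalg.eigvals(A)).max()    # rescaling also ensures stability
    B = np.linalg.inv(np.eye(N) - A)
    return B @ B.T                                  # unit-variance input noise

def var_by_distance(C, lo, hi):
    mask = (dist >= lo) & (dist < hi)
    return C[mask].var()

C_low, C_high = covariances(R=0.5), covariances(R=0.95)

# Ratio of far-distance to near-distance dispersion of covariances:
# far from criticality it is tiny, close to criticality it stays sizable.
decay_low = var_by_distance(C_low, 60, 100) / var_by_distance(C_low, 1, 15)
decay_high = var_by_distance(C_high, 60, 100) / var_by_distance(C_high, 1, 15)
```

Even though direct connections essentially vanish beyond a few multiples of `d0`, the near-critical network (`R=0.95`) retains a broad dispersion of covariances at distances an order of magnitude larger.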
To quantify this idea, for the mean covariance $\overline{c}$ we find that the dominant behavior is an exponential decay $\overline{c}\sim \mathrm{exp}(-x/\overline{d})$ on a length scale $\overline{d}$. This length scale is determined by a particular eigenvalue, the population eigenvalue, corresponding to the mode in which all neurons are excited simultaneously. Its position solely depends on the ratio between excitation and inhibition in the network and becomes more negative in more strongly inhibition-dominated networks (Figure 3B). We show in Appendix 1 Section 9.4 that this leads to a steep decay of mean covariances with distance. The variance of covariances, however, predominantly decays exponentially on a length scale $d_{\mathrm{eff}}$ that is determined by the spectral bound $R$, the largest real part among all eigenvalues (Figure 3B and D). In inhibition-dominated networks, $R$ is determined by the heterogeneity of connections. For $R\lesssim 1$ we obtain the effective length scale $d_{\mathrm{eff}}\propto d/\sqrt{1-R^{2}}$ (Equation 1).
What this means is that precisely at the point where $R$ is close to one, when neural activity occupies a low-dimensional manifold, the length scale $d_{\mathrm{eff}}$ on which covariances decay exceeds the reach of direct connections by a large factor (Figure 3D). As the network approaches instability, which corresponds to the spectral bound $R$ going to one, the effective decay constant diverges (Figure 3D inset) and so does the range of covariances.
Our population-resolved theoretical analysis, furthermore, shows that the larger the spectral bound the more similar the decay constants between different populations, with only marginal differences for $R\lesssim 1$ (Figure 3E). This holds strictly if connection weights only depend on the type of the presynaptic neuron but not on the type of the postsynaptic neuron. Moreover, we find a relation between the squared effective decay constants and the squared anatomical decay constants of the form $d_{\mathrm{eff},E}^{2}-d_{\mathrm{eff},I}^{2}=\mathcal{O}(1)\cdot \left(d_{E}^{2}-d_{I}^{2}\right)$ (Equation 2).
This relation is independent of the eigenvalues of the effective connectivity matrix, as the constant of order $\mathcal{O}(1)$ depends only on the choice of the connectivity profile. For $R\simeq 1$, this means that even though the absolute value of both effective length scales on the left-hand side is large, their relative difference is small because it equals the small difference of anatomical length scales on the right-hand side.
Pairwise covariances in motor cortex decay on a millimeter scale
To check if these predictions are confirmed by the data from macaque motor cortex, we first observe that, indeed, covariances in the resting state show a large dispersion over almost all distances on the Utah array (Figure 4). Moreover, the variance of covariances agrees well with the predicted exponential law: Performing an exponential fit reveals length constants above 1 mm. These large length constants have to be compared to the spatial reach of direct connections, which is about an order of magnitude shorter, in the range of 100–400 μm (Schnepel et al., 2015), so below the 400 μm inter-electrode distance of the Utah array. The shallow decay of the variance of covariances is, next to the broad distribution of covariances, a second indication that the network is in the dynamically balanced critical regime, in line with the prediction by Equation (1).
The population-resolved fits to the data show a larger length constant for excitatory covariances than for inhibitory ones (Figure 4A). This is qualitatively in line with the prediction of Equation (2), given the tendency toward a longer reach of excitatory connections compared to inhibitory ones, as derived from morphological constraints (Reimann et al., 2017, Fig. S2). In the dynamically balanced critical regime, however, the predicted difference in slope for all three fits is practically negligible. Therefore, we performed a second fit where the slope of the three exponentials is constrained to be identical (Figure 4B). The error of this fit is only marginally larger than those of the fits with individual slopes (Figure 4C). This shows that differences in slopes are hardly detectable given the empirical evidence, thus confirming the predictions of the theory given by Equation (1) and Equation (2).
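The exponential fit of the variance of covariances against distance can be sketched as follows. The construction of the synthetic data, the bin edges, and the true length constant of 1.2 mm are hypothetical choices for illustration; the point is the fitting procedure: bin pairs by distance, estimate the variance of covariances per bin, and fit a line to the log-variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic covariance samples whose dispersion decays exponentially with
# distance on a known length scale (1.2 mm here, a hypothetical value).
d_true_mm = 1.2
dists = rng.uniform(0.4, 4.0, size=20_000)                 # pair distances (mm)
covs = rng.normal(0.0, np.exp(-dists / (2 * d_true_mm)))   # variance ~ exp(-d / d_true)

# Bin by distance, estimate the variance per bin, fit a line to log-variance.
edges = np.arange(0.4, 4.01, 0.4)
centers = 0.5 * (edges[:-1] + edges[1:])
var = np.array([covs[(dists >= lo) & (dists < hi)].var()
                for lo, hi in zip(edges[:-1], edges[1:])])
slope, _ = np.polyfit(centers, np.log(var), 1)
d_fit_mm = -1.0 / slope   # recovered length constant
```

On the array data, the same log-linear fit is what yields the length constants above 1 mm quoted in the text.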
Firing rates alter connectivitydependent covariance patterns
Since covariances measure the coordination of temporal fluctuations around the individual neurons’ mean firing rates, they are determined by how strongly a neuron transmits such fluctuations from input to output (Abeles, 1991). To leading order this is explained by linear-response theory (Ginzburg and Sompolinsky, 1994; Lindner et al., 2005; Pernice et al., 2011; Tetzlaff et al., 2012): How strongly a neuron reacts to a small change in its input depends on its dynamical state, foremost the mean and variance of its total input, called the ‘working point’ in the following. If a neuron receives almost no input, a small perturbation in the input will not be able to make the neuron fire. If the neuron receives a large input, a small perturbation will not change the firing rate either, as the neuron is already saturated. Only in the intermediate regime is the neuron susceptible to small deviations of the input. Mathematically, this behavior is described by the gain of the neuron, which is the derivative of the input-output relation (Abeles, 1991). Due to the nonlinearity of the input-output relation, the gain vanishes for very small and very large inputs and is nonzero in the intermediate regime. How strongly a perturbation in the input to one neuron affects one of the subsequent neurons therefore not only depends on the synaptic weight $\mathit{J}$ but also on the gain $\mathit{S}$ and thereby the working point. This relation is captured by the effective connectivity $\mathit{W}=\mathit{S}\cdot \mathit{J}$. What is the consequence of the dynamical interaction among neurons depending on the working point? Can it be used to reshape the low-dimensional manifold, the collective coordination between neurons?
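For a ReLU input-output relation, the construction of the effective connectivity $W=S\cdot J$ reduces to a few lines. The weights and working points below are hypothetical numbers chosen only to show the mechanism: the gain is the derivative of the activation function at the working point, and it scales each neuron's incoming weights.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical anatomical weights J (fixed) and mean inputs mu (state-dependent).
n = 5
J = rng.normal(0.0, 0.3, size=(n, n))
mu = np.array([1.0, 0.2, -0.5, 0.8, -0.1])   # working points of the neurons

# For a ReLU input-output relation phi(x) = max(x, 0), the gain is the
# derivative phi'(mu): 1 above threshold, 0 below.
S = (mu > 0).astype(float)

# Effective connectivity: each incoming weight is scaled by the
# postsynaptic neuron's gain, W[i, j] = S[i] * J[i, j].
W = S[:, None] * J

print(W[2])   # a row of zeros: neuron 2 sits below threshold (mu[2] < 0)
```

Shifting `mu` thus rewires the effective network without touching `J`, which is exactly the working-point dependence exploited in the following simulations.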
The first part of this study finds that long-range coordination can be achieved in a network with short-range random connections if effective connections are sufficiently strong. Alteration of the working point, for example by a different external input level, can affect the covariance structure: The pattern of coordination between individual neurons can change, even though the anatomical connectivity remains the same. In this way, routing of information through the network can be adapted dynamically on a mesoscopic scale. This crucially distinguishes such coordination from coordination imprinted by complex but static connection patterns.
Here, we first illustrate this concept by simulations of a network of 2000 sparsely connected threshold-linear (ReLU) rate neuron models that receive Gaussian white noise inputs centered around neuron-specific nonzero mean values (see Materials and methods and Appendix 1 Section 14 for more details). The ReLU activation function thereby acts as a simple model for the vanishing gain of neurons with too low input levels. Note that in cortex-like scenarios with low firing rates, neuronal working points are far away from the high-input saturation discussed above, which is therefore neglected by the choice of the ReLU activation function. For independent and stationary external inputs covariances between neurons are solely generated inside the network via the sparse and random recurrent connectivity. External inputs only have an indirect impact on the covariance structure by setting the working point of the neurons.
We simulate two networks with identical structural connectivity and identical external input fluctuations, but small differences in mean external inputs between corresponding neurons in the two simulations (Figure 5A). These small differences in mean external inputs create different gains and firing rates and thereby differences in effective connectivity and covariances. Since mean external inputs are drawn from the same distribution in both simulations (Figure 5B), the overall distributions of firing rates and covariances across all neurons are very similar (Figure 5E1, F1). But individual neurons’ firing rates do differ (Figure 5E2). For the simple ReLU activation used here, we in particular observe neurons that switch between nonzero and zero firing rate between the two simulations. This resulting change of working points substantially affects the covariance patterns (Figure 5F2): Differences in firing rates and covariances between the two simulations are significantly larger than the differences across two different epochs of the same simulation (Figure 5C). The larger the spectral bound, the more sensitive are the intrinsically generated covariances to the changes in firing rates (Figure 5D). Thus, a small offset of individual firing rates is an effective parameter to control network-wide coordination among neurons. As the input to the local network can be changed momentarily, we predict that in the dynamically balanced critical regime coordination patterns should be highly dynamic.
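The logic of this two-simulation experiment can be condensed into a linear-response sketch: one fixed coupling matrix, two slightly different mean-input vectors, and the resulting covariance patterns compared. All parameters below are hypothetical and the network is kept comfortably stable (far below criticality) so that the ReLU fixed-point iteration is guaranteed to converge; near the spectral bound the same perturbation would reshape the pattern far more strongly.

```python
import numpy as np

rng = np.random.default_rng(4)

# One fixed coupling matrix J; two mean-input vectors that differ only slightly.
n = 200
J = rng.normal(0.0, 0.4 / np.sqrt(n), size=(n, n))
mu_a = rng.normal(1.0, 0.5, size=n)
mu_b = mu_a + rng.normal(0.0, 0.3, size=n)   # small shift of the working points

def fixed_point(mu):
    # ReLU rate network: the fixed point solves x = J @ max(x, 0) + mu.
    x = mu.copy()
    for _ in range(200):
        x = J @ np.maximum(x, 0.0) + mu
    return x

def covariance_pattern(x):
    S = (x > 0).astype(float)            # ReLU gains at the working point
    W = S[:, None] * J                   # effective connectivity
    B = np.linalg.inv(np.eye(n) - W)
    C = B @ B.T                          # covariances for unit input noise
    return C[~np.eye(n, dtype=bool)]     # off-diagonal pattern

x_a, x_b = fixed_point(mu_a), fixed_point(mu_b)
flips = int(np.sum((x_a > 0) != (x_b > 0)))   # neurons switching on/off
pattern_similarity = np.corrcoef(covariance_pattern(x_a),
                                 covariance_pattern(x_b))[0, 1]
```

Even with identical `J`, the shifted inputs switch some neurons across threshold (`flips > 0`), so the effective connectivity and hence the covariance pattern differ between the two conditions (`pattern_similarity < 1`).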
Coordination patterns in motor cortex depend on behavioral context
In order to test the theoretical prediction in experimental data, we analyze parallel spiking activity from macaque motor cortex, recorded during a reach-to-grasp experiment (Riehle et al., 2013; Brochier et al., 2018). In contrast to the resting state, where the animal was in an idling state, here the animal is involved in a complex task with periods of different cognitive and behavioral conditions (Figure 6A). We compare two epochs in which the animal is requested to wait and is sitting still but which differ in cognitive conditions. The first epoch is a starting period (S), where the monkey has self-initiated the behavioral trial and is attentive because it is expecting a cue. The second epoch is a preparatory period (P), where the animal has just received partial information about the upcoming trial and is waiting for the missing information and the GO signal to initiate the movement.
Within each epoch, S or P, the neuronal firing rates are mostly stationary, likely due to the absence of arm movements which create relatively large transient activities in later epochs of the task, which are not analyzed here (see Appendix 1 Section 3). The overall distributions of the firing rates are comparable for epochs S and P, but the firing rates are distributed differently across the individual neurons: Figure 6C shows one example session of monkey N, where the changes in firing rates between the two epochs are visible in the spread of markers around the diagonal line in panel C2. To assess the extent of these changes, we split each epoch, S and P, into two disjoint subperiods, S1/S2 and P1/P2 (Figure 6A). We compute the correlation coefficient between the firing rate vectors of two subperiods of different epochs (‘between’ markers in Figure 6E) and compare it to the correlation coefficient between the firing rate vectors of two subperiods of the same epoch (‘within’ markers): Firing rate vectors in S1 are almost perfectly correlated with firing rate vectors in S2 ($\rho \approx 1$ for all of the five/eight different recording sessions from different recording days for monkey E/N, similarly for P1 and P2), confirming stationarity investigated in Appendix 1 Section 3. Firing rate vectors in S1 or S2, however, show significantly lower correlation to firing rate vectors in P1 and P2, confirming a significant change in network state between epochs S and P (Figure 6E).
The mechanistic model in the previous section shows a qualitatively similar scenario (Figure 5C and E). By construction it produces different firing rate patterns in the two simulations. While the model is simplistic and in particular not adapted to quantitatively reproduce the experimentally observed activity statistics, its simulations and our underlying theory make a general prediction: Differences in firing rates impact the effective connectivity between neurons and thereby evoke even larger differences in their coordination if the network is operating in the dynamically balanced critical regime (Figure 5D). To check this prediction, we repeat the correlation analysis between the two epochs, which we described above for the firing rates, but this time for the covariance patterns. Despite similar overall distributions of covariances in S and P (Figure 6D1), covariances between individual neuron pairs are clearly different between S and P: Figure 6B shows the covariance pattern for one representative reference neuron in one example recording session of monkey N. In both epochs, this covariance pattern has a salt-and-pepper structure as for the resting state data in Figure 1D. Yet, neurons change their individual coordination: a large number of neuron pairs even changes from positive covariance values to negative ones and vice versa. These neurons fire cooperatively in one epoch of the task while they show antagonistic firing in the other epoch. The covariances of all neuron pairs of that particular recording session are shown in Figure 6D2. Markers in the upper left and lower right quadrant show neuron pairs that switch the sign of their coordination (45% of all neuron pairs). The extent of covariance changes between epochs is again quantified by correlation coefficients between the covariance patterns of two subperiods (Figure 6F).
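The pattern-correlation analysis used here amounts to correlating the off-diagonal covariance entries estimated in two subperiods. The sketch below runs it on synthetic Gaussian surrogate data (the unit count, bin number, and the construction of the two epoch-specific covariance structures are hypothetical), reproducing the qualitative within-epoch versus between-epoch contrast.

```python
import numpy as np

rng = np.random.default_rng(5)

def pattern_correlation(counts_1, counts_2):
    """Pearson correlation between the off-diagonal covariance patterns
    estimated from two (units x bins) count arrays."""
    n = counts_1.shape[0]
    off = ~np.eye(n, dtype=bool)
    return np.corrcoef(np.cov(counts_1)[off], np.cov(counts_2)[off])[0, 1]

# Synthetic stand-in: 100 units; epochs S and P impose different covariance
# structures via two independent mixing matrices (hypothetical construction).
n, bins = 100, 400
L_s = rng.normal(0.0, 0.3, (n, n))
L_p = rng.normal(0.0, 0.3, (n, n))
s1 = L_s @ rng.normal(size=(n, bins))   # subperiod S1
s2 = L_s @ rng.normal(size=(n, bins))   # subperiod S2, same structure as S1
p1 = L_p @ rng.normal(size=(n, bins))   # subperiod P1, different structure

within = pattern_correlation(s1, s2)    # same epoch: high similarity
between = pattern_correlation(s1, p1)   # different epochs: low similarity
```

As in Figure 6F, `within` stays well below 1 purely because of estimation noise in the finite subperiods, while `between` drops much further when the underlying coordination structure changes.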
As for the firing rates, we find rather large correlations between covariance patterns in S1 and S2 as well as between covariance patterns in P1 and P2. Note, however, that correlation coefficients are around 0.8 rather than 1, presumably since covariance estimates from 200 ms periods are noisier than firing rate estimates. The covariance patterns in S1 or S2 are, however, significantly more distinct from covariance patterns in P1 and P2, with correlation coefficients around 0.5 (Figure 6F). This more pronounced change of covariances compared to firing rates is predicted by a network whose effective connectivity has a large spectral bound, in the dynamically balanced critical state. In particular, the theory provides a mechanistic explanation for the different coordination patterns between neurons on the mesoscopic scale (range of a Utah array), which are observed in the two states S and P (Figure 6B). The coordination between neurons is thus considerably reshaped by the behavioral condition.
Discussion
In this study, we investigate coordination patterns of many neurons across mesoscopic distances in macaque motor cortex. We show that these patterns have a salt-and-pepper structure, which can be explained by a network model with a spatially dependent random connectivity operating in a dynamically balanced critical state. In this state, cross-covariances are shaped by a large number of parallel, multisynaptic pathways, leading to interactions reaching far beyond the range of direct connections. Strikingly, this coordination on the millimeter scale is only visible if covariances are resolved on the level of individual neurons; the population mean of covariances quickly decays with distance and is overall very small. In contrast, the variance of covariances is large and predominantly decreases exponentially on length scales of up to several millimeters, even though direct connections typically only reach a few hundred micrometers.
Since the observed coordination patterns are determined by the effective connectivity of the network, they are dynamically controllable by the network state; for example, due to modulations of neuronal firing rates. Parallel recordings in macaque motor cortex during resting state and in different epochs of a reach-to-grasp task confirm this prediction. Simulations indeed exhibit a high sensitivity of coordination patterns to weak modulations of the individual neurons’ firing rates, providing a plausible mechanism for these dynamic changes.
Models of balanced networks have been investigated before (van Vreeswijk and Sompolinsky, 1996; Brunel, 2000; Renart et al., 2010; Tetzlaff et al., 2012) and experimental evidence for cortical networks operating in the balanced state is overwhelming (Okun and Lampl, 2008; Reinhold et al., 2015; Dehghani et al., 2016). Excess of inhibition in such networks yields stable and balanced population-averaged activities as well as low average covariances (Tetzlaff et al., 2012). Recently, the notion of balance has been combined with criticality in the dynamically balanced critical state that results from large heterogeneity in the network connectivity (Dahmen et al., 2019). Here, we focus on another ubiquitous property of cortical networks, their spatial organization, and study the interplay between balance, criticality, and spatial connectivity in networks of excitatory and inhibitory neurons. We show that in such networks, heterogeneity generates disperse covariance structures between individual neurons on large length scales with a salt-and-pepper structure.
Spatially organized balanced network models have been investigated before in the limit of infinite network size, as well as under strong and potentially correlated external drive, as is the case, for example, in primary sensory areas of the brain (Rosenbaum et al., 2017; Baker et al., 2019). In this scenario, intrinsically generated contributions to covariances are much smaller than external ones. Population-averaged covariances then fulfill a linear equation, called the ‘balance condition’ (van Vreeswijk and Sompolinsky, 1996; Hertz, 2010; Renart et al., 2010; Rosenbaum and Doiron, 2014), which predicts a non-monotonous change of population-averaged covariances with distance (Rosenbaum et al., 2017). In contrast, we here consider covariances on the level of individual cells in finite-size networks receiving only weak inputs. While we cannot strictly rule out that the observed covariance patterns in motor cortex are a result of very specific external inputs to the recorded local network, we believe that the scenario of weak external drive is more suitable for non-sensory brain areas, such as, for example, the motor cortex in the resting state conditions studied here. Under such conditions, covariances have been shown to be predominantly generated locally rather than by external inputs: Helias et al., 2014 investigated intrinsic and extrinsic sources of covariances in ongoing activity of balanced networks and found that for realistic sizes of correlated external populations the major contribution to covariances is generated by local network interactions (Figure 7a in Helias et al., 2014). Dahmen et al., 2019 investigated the extreme case where the correlated external population is of the same size as the local population (Fig. S6 in Dahmen et al., 2019).
Despite sizable external input correlations projected onto the local circuit via potentially strong afferent connections, the dependence of the statistics of covariances on the spectral bound of the local recurrent connectivity is predicted well by the theory that neglects correlated external inputs (see supplement section 3 in Dahmen et al., 2019).
Our analysis of covariances on the single-neuron level goes beyond the balance condition and requires the use of field-theoretical techniques to capture the heterogeneity in the network (Dahmen et al., 2019; Helias and Dahmen, 2020). It relies on linear-response theory, which has previously been shown to faithfully describe correlations in balanced networks of nonlinear (spiking) units (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019). These studies mainly investigated population-averaged correlations with small spectral bounds of the effective connectivity. Subsequently, Dahmen et al., 2019 showed the quantitative agreement of this linear-response theory for covariances between individual neurons in networks of spiking neurons across the whole range of spectral bounds, including the dynamically balanced critical regime. The long-range coordination studied in the current manuscript requires the inclusion of spatially non-homogeneous coupling to analyze excitatory-inhibitory random networks on a two-dimensional sheet with spatially decaying connection probabilities. This new theory allows us to derive expressions for the spatial decay of the variance of covariances. We primarily evaluate these expressions in the long-range limit, which agrees well with simulations for distances $x>2d\sim \mathcal{O}(1\,\mathrm{mm})$, a condition fulfilled for most distances on the Utah array (Figure 3, Appendix 1—figure 7). For these distances, we find that the decay of covariances is dominated by a simple exponential law. Unexpectedly, its decay constant is essentially determined by only two measures: the spectral bound of the effective connectivity and the length scale of direct connections. The length scale of covariances diverges when approaching the breakdown of linear stability.
In this regime, differences in covariances induced by differences in length scales of excitatory and inhibitory connections become negligible. The predicted emergence of a single length scale of covariances is consistent with our data.
This study focuses on local and isotropic connection profiles to show that long-range coordination does not rely on specific connection patterns but can result from the network state alone. Alternative explanations for long-range coordination are based on specifically imprinted network structures: Anisotropic local connection profiles have been studied and shown to create spatiotemporal sequences (Spreizer et al., 2019). Likewise, embedded excitatory feedforward motifs and cell assemblies via excitatory long-range patchy connections (DeFelipe et al., 1986) can create positive covariances at long distances (Diesmann et al., 1999; Litwin-Kumar and Doiron, 2012). Yet, these connections cannot provide an explanation for the large negative covariances between excitatory neurons at long distances (see e.g. Figure 1D). Long-range connectivity, for example arising from a salt-and-pepper organization of neuronal selectivity with connections preferentially targeting neurons with equal selectivity (Ben-Yishai et al., 1995; Hansel and Sompolinsky, 1998; Roxin et al., 2005; Blumenfeld et al., 2006), would produce salt-and-pepper covariance patterns even in networks with small spectral bounds, where interactions are only mediated via direct connections. However, in this scenario, one would expect neurons with similar selectivity to consistently show positive covariance, due to their mutual excitatory connections and the correlated input they receive. Yet, when analyzing two different epochs of the reach-to-grasp task, we find that a large fraction of neuron pairs actually switches from being significantly positively correlated to negatively correlated and vice versa (see Figure 6D2, upper left and lower right quadrant).
This state dependence of covariances is in line with the mechanism of long-range coordination by indirect interactions suggested here: Such indirect interactions depend on the effective strengths of various connections and can therefore change considerably with network state. In contrast, correlations due to imprinted network structures are static, so that a change in the gain of the neurons will either strengthen or weaken the specific activity propagation, but it will not lead to the change of the sign of covariances that we see in our data. The static impact of these connectivity structures on covariances could nevertheless in principle be included in the presented formalism. Long-range coordination can also be created from short-range connections with random orientations of anisotropic local connection profiles (Smith et al., 2018). This finding can be linked to the emergence of tuning maps in the visual cortex. The mechanism is similar to ours in that it uses nearly linearly unstable modes that are determined by spatial connectivity structures and heterogeneity. Given the different source of heterogeneity, the modes and corresponding covariance patterns differ from the ones discussed here: Starting from fully symmetric networks with corresponding symmetric covariance patterns, Smith et al., 2018 found that increasing heterogeneity (anisotropy) yields more randomized, but still patchy, regions of positive and negative covariances that are in line with low-dimensional activity patterns found in visual cortex. In motor cortex, we instead find salt-and-pepper patterns that can be explained in terms of heterogeneity through sparsity.
We provide the theoretical basis and an explicit link between connectivity eigenspectra and covariances, and show that heterogeneity through sparsity is sufficient to generate the dynamically balanced critical state as a simple explanation for the broad distribution of covariances in motor cortex, the salt-and-pepper structure of coordination, its long spatial range, and its sensitive dependence on the network state. Note that both mechanisms of long-range coordination, the one studied in Smith et al., 2018 and the one presented here, rely on the effective connectivity for the network to reside in the dynamically balanced critical regime. The latter regime is, however, not just a single point in parameter space, but an extended region that can be reached via a multitude of control mechanisms for the effective connectivity, for example by changing neuronal gains (Salinas and Sejnowski, 2001a; Salinas and Sejnowski, 2001b), synaptic strengths (Sompolinsky et al., 1988), and network microcircuitry (Dahmen et al., 2021).
What are possible functional implications of the coordination on mesoscopic scales? Recent work demonstrated activity in motor cortex to be organized in low-dimensional manifolds (Gallego et al., 2017; Gallego, 2018; Gallego et al., 2020). Dimensionality reduction techniques, such as PCA or GPFA (Yu et al., 2009), employ covariances to expose a dynamical repertoire of motor cortex that is comprised of neuronal modes. Previous work started to analyze the relation between the dimensionality of activity and connectivity (Aljadeff et al., 2015; Aljadeff et al., 2016; Mastrogiuseppe and Ostojic, 2018; Dahmen et al., 2019; Dahmen et al., 2021; Hu and Sompolinsky, 2020), but only in spatially unstructured networks, where each neuron can potentially be connected to any other neuron. The majority of connections within cortical areas, however, stems from local axonal arborizations (Schnepel et al., 2015). Here, we add this biological constraint and demonstrate that these networks, too, support a dynamically balanced critical state. This state in particular exhibits neural modes which are spanned by neurons spread across the experimentally observed large distances. In this state, a small subset of modes that are close to the point of instability dominates the variability of the network activity and thus spans a low-dimensional neuronal manifold. As opposed to specifically designed connectivity spectra via plasticity mechanisms (Hennequin et al., 2014) or low-rank structures embedded into the connectivity (Mastrogiuseppe and Ostojic, 2018), the dynamically balanced critical state is a mechanism that only relies on the heterogeneity which is inherent to sparse connectivity and abundant across all brain areas.
While we here focus on covariance patterns in stationary activity periods, the majority of recent works studied transient activity during motor behavior (Gallego et al., 2017). How are stationary and transient activities related? During stationary ongoing activity states, covariances are predominantly generated intrinsically (Helias et al., 2014). Changes in covariance patterns therefore arise from changes in the effective connectivity via changes in neuronal gains, as demonstrated here in the two periods of the reach-to-grasp experiment and in our simulations for networks close to criticality (Figure 5D). During transient activity, on top of gain changes, correlated external inputs may directly drive specific neural modes to create different motor outputs, thereby restricting the dynamics to certain subspaces of the manifold. In fact, Elsayed et al., 2016 reported that the covariance structures during movement preparation and movement execution are unrelated and correspond to orthogonal spaces within a larger manifold. Also, Luczak et al., 2009 studied auditory and somatosensory cortices of awake and anesthetized rats during spontaneous and stimulus-evoked conditions and found that neural modes of stimulus-evoked activity lie in subspaces of the neural manifold spanned by the spontaneous activity. Similarly, visual areas V1 and V2 seem to exploit distinct subspaces for processing and communication (Semedo et al., 2019), and motor cortex uses orthogonal subspaces capturing communication with somatosensory cortex or behavior-generating dynamics (Perich et al., 2021). Gallego, 2018 further showed that manifolds are not identical, but to a large extent preserved across different motor tasks due to a number of task-independent modes.
This leads to the hypothesis that the mechanism described here for long-range cooperation in the dynamically balanced critical state provides the basis for low-dimensional activity by creating such spatially extended neural modes, whereas transient correlated inputs lead to their differential activation for the respective target outputs. The spatial spread of the neural modes thereby leads to a distributed representation of information that may be beneficial for integrating information into different computations that take place in parallel at various locations. Further investigation of these hypotheses is an exciting endeavor for the years to come.
Materials and methods
Experimental design and statistical analysis
Two adult macaque monkeys (monkey E, female, and monkey N, male) are recorded in behavioral experiments of two types: resting state and reach-to-grasp. The recordings of neuronal activity in motor and premotor cortex (hand/arm region) are performed with a chronically implanted $4\times 4\,{\mathrm{mm}}^{2}$ Utah array (Blackrock Microsystems). Details on surgery, recordings, spike sorting, and classification of behavioral states can be found in Riehle et al., 2013; Riehle et al., 2018; Brochier et al., 2018; Dąbrowska et al., 2020. All animal procedures were approved by the local ethical committee (C2EA 71; authorization A1/10/12) and conformed to the European and French government regulations.
Resting state data
During the resting state experiment, the monkey is seated in a primate chair without any task or stimulation. Registration of electrophysiological activity is synchronized with a video recording of the monkey’s behavior. Based on this, periods of ‘true resting state’ (RS), defined as no movements and eyes open, are chosen for the analysis. Eye movements and minor head movements are included. Each monkey is recorded twice, with a session lasting approximately 15 and 20 min for monkeys E (sessions E1 and E2) and N (sessions N1 and N2), respectively, and the behavior is classified by visual inspection with single-second precision, resulting in 643 and 652 s of RS data for monkey E and 493 and 502 s of RS data for monkey N.
Reachtograsp data
In the reach-to-grasp experiment, the monkeys are trained to perform an instructed delayed reach-to-grasp task to obtain a reward. Trials are initiated by the monkey closing a switch (TS, trial start). After 400 ms, a diode is illuminated (WS, warning signal), followed by a cue after another 400 ms (CUE-ON), which provides partial information about the upcoming trial. The cue lasts 300 ms and its removal (CUE-OFF) initiates a 1 s preparatory period, followed by a second cue, which also serves as the GO signal. Two epochs within such defined trials, each divided into 200 ms subperiods, are chosen for analysis: the first 400 ms after TS (starting period, S1 and S2), and the 400 ms directly following CUE-OFF (preparatory period, P1 and P2) (Figure 6a). Five selected sessions for monkey E and eight for monkey N provide a total of 510 and 1111 correct trials, respectively. For detailed numbers of trials and single units per recording session, see Appendix 1—table 1.
Separation of putative excitatory and inhibitory neurons
Offline spike-sorted single units (SUs) are separated into putative excitatory (broad-spiking) and putative inhibitory (narrow-spiking) based on their spike waveform width (Barthó et al., 2004; Kaufman et al., 2010; Kaufman et al., 2013; Peyrache, 2012; Peyrache and Destexhe, 2019). The width is defined as the time (number of data samples) between the trough and peak of the waveform. Widths of all average waveforms from all selected sessions (both resting state and reach-to-grasp) per monkey are collected. Thresholds for ‘broadness’ and ‘narrowness’ are chosen based on the monkey-specific distribution of widths, such that intermediate values stay unclassified. For monkey E the thresholds are 0.33 ms and 0.34 ms, and for monkey N 0.40 ms and 0.41 ms. Next, a two-step classification is performed session by session. Firstly, the thresholds are applied to average SU waveforms. Secondly, the thresholds are applied to SU single waveforms, and the percentage of single waveforms pre-classified as the same type as the average waveform is calculated. SUs for which this percentage is high enough are marked as classified. All remaining SUs are grouped as unclassified. We verify the robustness of our results with respect to changes in the spike sorting procedure in Appendix 1 Section 2.
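The two-threshold rule can be sketched as follows; a minimal illustration assuming the 30 kHz sampling rate of the recording system and monkey E's thresholds from the text (the trough-to-peak width measure follows the description above, while the helper names are ours):

```python
import numpy as np

FS = 30000.0            # sampling rate (Hz), i.e. 1/30 ms resolution
NARROW_MAX = 0.33e-3    # widths <= this: narrow-spiking (putative inhibitory)
BROAD_MIN = 0.34e-3     # widths >= this: broad-spiking (putative excitatory)

def waveform_width(waveform):
    """Trough-to-peak width in seconds (trough assumed to precede the peak)."""
    trough = np.argmin(waveform)
    peak = trough + np.argmax(waveform[trough:])
    return (peak - trough) / FS

def classify(width):
    if width <= NARROW_MAX:
        return "narrow"        # putative inhibitory
    if width >= BROAD_MIN:
        return "broad"         # putative excitatory
    return "unclassified"      # intermediate widths stay unclassified
```

Applying `classify` first to the average waveform and then to each single waveform reproduces the two-step procedure described above.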
Synchrofacts, that is, spike-like synchronous events across multiple electrodes at the sampling resolution of the recording system (1/30 ms) (Torre, 2016), are removed. In addition, only SUs with a signal-to-noise ratio (Hatsopoulos et al., 2004) of at least 2.5 and a minimal average firing rate of 1 Hz are considered for the analysis, to ensure sufficient clean data for valid statistics.
Statistical analysis
Request a detailed protocolAll RS periods per resting state recording are concatenated and binned into 1 s bins. Next, pairwise covariances of all pairs of SUs are calculated according to the following formula:
with b_{i}, b_{j}  binned spike trains, ${\mu}_{i}$, ${\mu}_{j}$ being their mean values, $l$ the number of bins, and $\u27e8x,y\u27e9$ the scalar product of vectors $x$ and $y$. Obtained values are broadly distributed, but low on average in every recorded session: in session E1 EE pairs: $0.19\pm 1.10$ (M±SD), EI: $0.24\pm 2.31$, II: $0.90\pm 4.19$, in session E2 EE: $0.060\pm 1.332$, EI $0.30\pm 2.35$, II $1.0\pm 4.5$, in session N1 EE $0.24\pm 1.13$, EI $0.66\pm 2.26$, II $2.4\pm 4.9$, in session N2 EE $0.41\pm 1.47$, EI $1.0\pm 3.1$, II $3.9\pm 7.3$.
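The estimator described above amounts to the standard unbiased sample covariance of the two binned spike trains; a minimal sketch (the function name is ours):

```python
import numpy as np

def pairwise_covariance(b_i, b_j):
    """Unbiased pairwise covariance of two binned spike trains."""
    l = len(b_i)                          # number of bins
    mu_i, mu_j = b_i.mean(), b_j.mean()
    return (np.dot(b_i, b_j) - l * mu_i * mu_j) / (l - 1)
```

For any pair of equal-length count vectors this agrees with `np.cov(b_i, b_j)[0, 1]`.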
To explore the dependence of covariance on the distance between the considered neurons, the obtained values are grouped according to the distances between the electrodes on which the neurons are recorded. For each distance, the average and variance of the obtained distribution of cross-covariances are calculated. The variance is additionally corrected for bias due to a finite number of measurements (Dahmen et al., 2019). In most cases, the correction does not exceed 0.01%.
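The distance-resolved statistics can be sketched as follows (a simplified illustration; the finite-sample bias correction of Dahmen et al., 2019 is omitted here, which per the text changes the variance by less than 0.01% in most cases):

```python
import numpy as np
from collections import defaultdict

def distance_resolved_stats(covs, distances):
    """Group pairwise covariances by inter-electrode distance and return
    per-distance mean and variance. covs, distances: equal-length 1-D arrays."""
    groups = defaultdict(list)
    for c, d in zip(covs, distances):
        groups[d].append(c)
    dists = np.array(sorted(groups))
    means = np.array([np.mean(groups[d]) for d in dists])
    variances = np.array([np.var(groups[d]) for d in dists])
    return dists, means, variances
```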
In the following step, exponential functions $y=a\,{e}^{-x/d}$ are fitted to the obtained distance-resolved variances of cross-covariances ($y$ corresponding to the variance and $x$ to the distance between neurons), which yields a pair of values $(a,d)$. The least squares method implemented in the Python scipy.optimize module (SciPy v.1.4.1) is used. Firstly, three independent fits are performed to the data for excitatory-excitatory, excitatory-inhibitory, and inhibitory-inhibitory pairs. Secondly, analogous fits are performed with the constraint that the decay constant $d$ must be the same for all three curves.
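The constrained fit with a shared decay constant can be sketched with `scipy.optimize.least_squares` as follows (the parameter layout and starting values are our choices, not taken from the paper):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_shared_decay(x, y_ee, y_ei, y_ii):
    """Fit y = a * exp(-x/d) to three curves with one shared decay constant d.

    Returns (a_ee, a_ei, a_ii, d)."""
    def residuals(params):
        a_ee, a_ei, a_ii, d = params
        return np.concatenate([
            y_ee - a_ee * np.exp(-x / d),
            y_ei - a_ei * np.exp(-x / d),
            y_ii - a_ii * np.exp(-x / d),
        ])
    res = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0])
    return res.x
```

The unconstrained variant is the same fit applied to each curve separately with its own $(a,d)$.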
Covariances in the reach-to-grasp data are calculated analogously but with a different time resolution. For each chosen subperiod of a trial, data are concatenated and binned into 200 ms bins, meaning that the number of spikes in a single bin corresponds to a single trial. The mean of these counts, normalized to the bin width, gives the average firing rate per SU and subperiod. The pairwise covariances are calculated according to Equation (3). To assess the similarity of neuronal activity in different periods of a trial, Pearson product-moment correlation coefficients are calculated on vectors of SU-resolved rates and pair-resolved covariances. Correlation coefficients from all recording sessions per monkey are separated into two groups: those using subperiods of the same epoch (within-epoch), and those using subperiods of different epochs of a trial (between-epochs). These groups are tested for differences with significance level $\alpha =0.05$. Firstly, to check whether the assumptions for parametric tests are met, the normality of each obtained distribution is assessed with a Shapiro-Wilk test, and the equality of variances with an F-test. Secondly, a t-test is applied to compare within- and between-epochs correlations of rates or covariances. Since there are two within- and four between-epochs correlation values per recording session, the number of degrees of freedom equals $df=({N}_{\mathrm{sessions}}\cdot 2-1)+({N}_{\mathrm{sessions}}\cdot 4-1)$, which is 28 for monkey E and 46 for monkey N. To estimate the confidence intervals for the obtained differences, the mean difference between groups $m$ and their pooled standard deviation $s$ are calculated for each comparison
$m={m}_{\mathrm{within}}-{m}_{\mathrm{between}},\qquad s=\sqrt{\frac{({N}_{\mathrm{within}}-1)\,{s}_{\mathrm{within}}^{2}+({N}_{\mathrm{between}}-1)\,{s}_{\mathrm{between}}^{2}}{{N}_{\mathrm{within}}+{N}_{\mathrm{between}}-2}},$

with ${m}_{\mathrm{within}}$ and ${m}_{\mathrm{between}}$ being the means, ${s}_{\mathrm{within}}$ and ${s}_{\mathrm{between}}$ the standard deviations, and ${N}_{\mathrm{within}}$ and ${N}_{\mathrm{between}}$ the numbers of within- and between-epochs correlation coefficient values, respectively.
This results in 95 % confidence intervals $m\pm t(df)\cdot s$ of $0.192\pm 0.093$ for rates and $0.32\pm 0.14$ for covariances in monkey E and $0.19\pm 0.14$ for rates and $0.26\pm 0.17$ for covariances in monkey N.
For both monkeys, the within-epoch rate-correlation distribution does not fulfill the normality assumption of the t-test. We therefore perform an additional non-parametric Kolmogorov-Smirnov test for the rate comparison. The differences are again significant; for monkey E $D=1.00$, $p=6.66\cdot 10^{-8}$; for monkey N $D=1.00$, $p=8.87\cdot 10^{-13}$.
For all tests we use the implementations from the Python scipy.stats module (SciPy v.1.4.1).
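The test cascade described above can be sketched with `scipy.stats` as follows (an illustrative helper of our own; the F-test for equality of variances is omitted in this sketch, and the input arrays of per-session correlation coefficients are placeholders):

```python
from scipy import stats

def compare_groups(within, between, alpha=0.05):
    """Shapiro-Wilk normality check, then t-test if both groups pass,
    otherwise a non-parametric two-sample Kolmogorov-Smirnov test."""
    _, p_within = stats.shapiro(within)
    _, p_between = stats.shapiro(between)
    normal = p_within > alpha and p_between > alpha
    if normal:
        res = stats.ttest_ind(within, between)
    else:
        res = stats.ks_2samp(within, between)
    return res.statistic, res.pvalue, normal
```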
Mean and variance of covariances for a two-dimensional network model with excitatory and inhibitory populations
The mean and variance of covariances are calculated for a two-dimensional network consisting of one excitatory and one inhibitory population of neurons. The connectivity profile $p(\mathit{x})$, describing the probability of a neuron having a connection to another neuron at distance $\mathit{x}$, decays with distance. We assume periodic boundary conditions and place the neurons on a regular grid (Figure 3A), which imposes translation and permutation symmetries that enable the derivation of closed-form solutions for the distance-dependent mean and variance of the covariance distribution. These simplifying assumptions are common practice, and simulations show that they do not alter the results qualitatively.
Our aim is to find an expression for the mean and variance of covariances as functions of the distance between two neurons. While the theory in Dahmen et al., 2019 is restricted to homogeneous connections, understanding the spatial structure of covariances here requires us to take into account the spatial structure of connectivity. Field-theoretic methods, combined with linear-response theory, allow us to obtain expressions for the mean covariance $\overline{\mathit{c}}$ and variance of covariances $\overline{\delta {\mathit{c}}^{2}}$
with identity matrix 1, mean $\mathit{M}$ and variance $\mathit{S}$ of connectivity matrix $\mathit{W}$, input noise strength $D$, and spectral bound $R$. Since $\mathit{M}$ and $\mathit{S}$ have a similar structure, the mean and variance can be derived in the same way, which is why we only consider variances in the following.
To simplify Equation (4), we need to find a basis in which $\mathit{S}$, and therefore also $\mathit{A}=\mathbf{1}-\mathit{S}$, is diagonal. Due to invariance under translation, the translation operators $\mathit{T}$ and the matrix $\mathit{S}$ have common eigenvectors, which can be derived using the fact that translation operators satisfy ${\mathit{T}}^{N}=\mathbf{1}$, where $N$ is the number of lattice sites in the $x$- or $y$-direction (see Appendix 1). Projecting onto a basis of these eigenvectors shows that the eigenvalues ${s}_{\mathbf{K}}$ of $\mathit{S}$ are given by a discrete two-dimensional Fourier transform of the connectivity profile, ${s}_{\mathbf{K}}\propto \sum _{\mathbf{x}}p(\mathbf{x})\,{\mathrm{e}}^{\mathrm{i}\mathbf{K}\cdot \mathbf{x}}$.
Expressing ${\mathit{A}}^{-1}$ in the eigenvector basis yields ${\mathit{A}}^{-1}(\mathit{x})=\mathbf{1}+\mathit{B}(\mathit{x})$, where $\mathit{B}(\mathit{x})$ is a discrete inverse Fourier transform of the kernel ${s}_{\mathbf{K}}/(1-{s}_{\mathbf{K}})$. Assuming a large network with respect to the connectivity profiles allows us to take the continuum limit
As we are only interested in the long-range behavior, which corresponds to $\mathit{x}\to \mathrm{\infty}$, or $\mathbf{K}\to 0$, respectively, we can approximate the Fourier kernel around $\mathbf{K}\approx 0$ by a rational function, quadratic in the denominator, using a Padé approximation. This allows us to calculate the integral, which yields
where ${K}_{0}(x)$ denotes the modified Bessel function of the second kind and zeroth order (Olver et al., 2010), and the effective decay constant ${d}_{\mathrm{eff}}$ is given by Equation (1). In the long-range limit, the modified Bessel function behaves like ${K}_{0}(x)\simeq \sqrt{\pi /(2x)}\,{\mathrm{e}}^{-x}$.
Writing Equation (4) in terms of $B(\mathit{x})$ gives
with the double asterisk denoting a two-dimensional convolution. $(B\ast \ast B)(\mathit{x})$ is proportional to the modified Bessel function of the second kind and first order (Olver et al., 2010), which has the same dominant long-range behavior, ${K}_{1}(x)\simeq \sqrt{\pi /(2x)}\,{\mathrm{e}}^{-x}$.
Hence, the effective decay constant of the variances is given by ${d}_{\mathrm{eff}}$. Note that further details of the above derivation can be found in Appendix 1, Sections 4 to 12.
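As a numerical sanity check (our own, not part of the paper), the long-range behavior discussed above can be verified with `scipy.special`: both $K_0$ and $K_1$ approach $\sqrt{\pi /(2x)}\,e^{-x}$ for large arguments, so a covariance profile proportional to $K_n(x/d_{\mathrm{eff}})$ is dominated by the exponential ${e}^{-x/{d}_{\mathrm{eff}}}$:

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

x = np.array([5.0, 10.0, 20.0])
asymptote = np.sqrt(np.pi / (2.0 * x)) * np.exp(-x)
ratio_k0 = kv(0, x) / asymptote  # approaches 1 as x grows
ratio_k1 = kv(1, x) / asymptote  # approaches 1 as x grows
```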
Network model simulation
The explanation of the network state dependence of covariance patterns presented in the main text is based on linear-response theory, which has been shown to yield results quantitatively in line with nonlinear network models, in particular networks of spiking leaky integrate-and-fire neurons (Tetzlaff et al., 2012; Trousdale et al., 2012; Pernice et al., 2012; Grytskyy et al., 2013; Helias et al., 2013; Dahmen et al., 2019). The derived mechanism is thus largely model independent. We here chose to illustrate it with a particularly simple nonlinear input-output model, the rectified linear unit (ReLU). In this model, a shift of the network’s working point can turn some neurons completely off while activating others, thereby leading to changes in the effective connectivity of the network. In the following, we describe the details of the network model simulation.
We performed a simulation with the neural simulation tool NEST (Jordan, 2019) using the parameters listed in Appendix 1—table 4. We simulated a network of $N$ inhibitory neurons (threshold_lin_rate_ipn, Hahne, 2017), which follow the dynamical equation

$\tau \frac{\mathrm{d}{z}_{i}(t)}{\mathrm{d}t}=-{z}_{i}(t)+\sum _{j}{J}_{ij}\,{\nu}_{j}(t)+{\mu}_{\mathrm{ext},i}+\sqrt{\tau}\,{\sigma}_{\mathrm{noise},i}\,{\xi}_{i}(t),$
where ${z}_{i}$ is the input to neuron $i$ and ${\nu}_{i}=\varphi ({z}_{i})$ the output firing rate, with the threshold-linear activation function $\varphi (z)=\mathrm{max}(z,0)$,
time constant $\tau$, connectivity matrix $\mathit{J}$, a constant external input ${\mu}_{\mathrm{ext},i}$, and uncorrelated Gaussian white noise $\langle {\xi}_{i}(t)\rangle =0$, $\langle {\xi}_{i}(s){\xi}_{j}(t)\rangle ={\delta}_{ij}\delta (s-t)$, with noise strength $\sqrt{\tau}{\sigma}_{\mathrm{noise},i}$. The neurons were connected using the fixed_indegree connection rule, with connection probability $p$, indegree $K=p\cdot N$, and delta synapses (rate_connection_instantaneous) of weight $w$.
The constant external input ${\mu}_{\mathrm{ext},i}$ to each neuron was normally distributed, with mean ${\mu}_{\mathrm{ext}}$, and standard deviation ${\sigma}_{\mathrm{ext}}$. It was used to set the firing rates of neurons, which, via the effective connectivity, influence the intrinsically generated covariances in the network. The two parameters ${\mu}_{\mathrm{ext}}$ and ${\sigma}_{\mathrm{ext}}$ were chosen such that, in the stationary state, half of the neurons were expected to be above threshold. Which neurons are active depends on the realization of ${\mu}_{\mathrm{ext},i}$ and is therefore different for different networks.
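The model just described can be sketched in a few lines of numpy (our own Euler-Maruyama illustration, not the actual NEST implementation; all parameter values, including the weight scaling, are placeholder choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 20                    # neurons, fixed indegree
tau, dt, steps = 10.0, 0.1, 2000  # time constant, time step, iterations
R = 0.8                           # illustrative target spectral bound
w = -R / np.sqrt(K)               # inhibitory weight (heuristic scaling, ours)
sigma_noise, sigma_ext = 0.1, 1.0

# fixed-indegree inhibitory connectivity matrix
J = np.zeros((N, N))
for i in range(N):
    J[i, rng.choice(N, size=K, replace=False)] = w

mu_ext = rng.normal(0.0, sigma_ext, size=N)  # heterogeneous external input

def phi(z):
    """Threshold-linear activation."""
    return np.maximum(z, 0.0)

# Euler-Maruyama integration of
# tau*dz = (-z + J@phi(z) + mu_ext) dt + sqrt(tau)*sigma_noise*dW
z = np.zeros(N)
for _ in range(steps):
    drift = -z + J @ phi(z) + mu_ext
    z += dt / tau * drift + sigma_noise * np.sqrt(dt / tau) * rng.normal(size=N)
rates = phi(z)  # stationary rates; units with net positive input are active
```

Which units end up active depends on the realization of `mu_ext`, mirroring the dependence on ${\mu}_{\mathrm{ext},i}$ described above.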
To assess the distribution of firing rates, we first considered the static variability of the network and studied the stationary solution of the noise-averaged input ${\langle z\rangle}_{\mathrm{noise}}$, which follows from Equation (5) as

${\langle {z}_{i}\rangle}_{\mathrm{noise}}=\sum _{j}{J}_{ij}\,{\langle {\nu}_{j}\rangle}_{\mathrm{noise}}+{\mu}_{\mathrm{ext},i}.$
Note that ${\langle {\nu}_{j}\rangle}_{\mathrm{noise}}={\langle \varphi ({z}_{j})\rangle}_{\mathrm{noise}}$, through the nonlinearity $\varphi$, in principle depends on the fluctuations of the system. This dependence is, however, small for the chosen threshold-linear $\varphi$, which is nonlinear only at the point $z=0$.
The derivation of ${\mu}_{\mathrm{ext}}$ is based on the following mean-field considerations: according to Equation (6), the mean input to a neuron in the network is given by the sum of the external input and the recurrent input
The variance of the input is given by
The mean firing rate can be calculated using the diffusion approximation (Tuckwell, 2009; Amit and Tsodyks, 2009), which assumes a normal distribution of inputs due to the central limit theorem, and the fact that a threshold-linear neuron only fires if its input is positive
where $P$ denotes the probability density of the firing rate $\nu $. The variance of the firing rates is given by
The number of active neurons is the number of neurons with a positive input, which we set to be equal to $N/2$
which is only fulfilled for $\mu =0$. Inserting this condition simplifies the equations above and leads to
For the purpose of relating synaptic weight $w$ and spectral bound $R$, we can view the nonlinear network as an effective linear network with half the population size (only the active neurons). In the latter case, we obtain
For a given spectral bound $R$, this relation allows us to derive the value
that, for an arbitrarily fixed ${\sigma}_{\mathrm{ext}}$ (here ${\sigma}_{\mathrm{ext}}=1$), makes half of the population active. We were aiming for an effective connectivity with only weak fluctuations in the stationary state. Therefore, we fixed the noise strength for all neurons to the small value ${\sigma}_{\mathrm{noise}}=0.1\ll {\sigma}_{\mathrm{ext}}$ compared to the external input, such that the noise fluctuations did not have a large influence on the calculation above that determines which neurons were active.
To show the effect of a change in the effective connectivity on the covariances, we simulated two networks with identical connectivity, but supplied them with slightly different external inputs. This was realized by choosing
with
$\epsilon \ll 1$, and $\alpha \in \{1,2\}$ indexing the two networks. The main component ${\mu}_{\mathrm{ext},i}$ of the external input was the same for both networks, but the small component ${\mu}_{\mathrm{ext},i}^{(\alpha )}$ was drawn independently for the two networks. This choice ensures that the two networks have a similar external input distribution (Figure 5B1), but with the external inputs distributed differently across the single neurons (Figure 5B2). How similarly the external inputs are distributed across the single neurons is determined by $\epsilon$.
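A minimal sketch of this input construction (the Gaussian components and parameter values are illustrative assumptions, not taken from the paper): both networks share one main input component and receive independent small perturbations, so the correlation of inputs across neurons is slightly below one.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 10_000, 0.1   # eps << 1 controls how differently the inputs are distributed

mu_shared = rng.normal(0.0, 1.0, size=N)                  # identical for both networks
mu_ext = [mu_shared + eps * rng.normal(0.0, 1.0, size=N)  # independent small components
          for _ in range(2)]

# both networks see a similar input distribution, but the inputs are
# distributed differently across neurons: the correlation is 1/(1 + eps^2) < 1
r = np.corrcoef(mu_ext[0], mu_ext[1])[0, 1]
assert abs(r - 1.0 / (1.0 + eps**2)) < 0.01
```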
The two networks have a very similar firing rate distribution (Figure 5E1), but, akin to the external inputs, the way the firing rates are distributed across the single neurons differs between the two networks (Figure 5E2). As the effective connectivity depends on the firing rates
this leads to a difference in the effective connectivities of the two networks and therefore to different covariance patterns, as discussed in Figure 5.
We performed the simulation for spectral bounds ranging from 0.1 to 0.9 in increments of 0.1. We calculated the correlation coefficient of firing rates and the correlation coefficient of time-lag integrated covariances between ${N}_{\mathrm{sample}}$ neurons in the two networks (Figure 5D) and studied the dependence on the spectral bound.
To check whether the simulation was long enough to yield a reliable estimate of the rates and covariances, we split each simulation into two halves and calculated the correlation coefficient between the rates and covariances from the first half of the simulation and those from the second half. They were almost perfectly correlated (Figure 5C). Then, we calculated the correlation coefficients comparing all halves of the first simulation with all halves of the second simulation, showing that the covariance patterns changed much more than the rate patterns (Figure 5C).
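The split-half consistency check can be sketched as follows for a generic stationary process (the ground-truth covariance model below is hypothetical; the paper applies the same logic to simulated network activity):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 20, 100_000
G = rng.normal(size=(N, N)) / np.sqrt(N)
true_cov = G @ G.T + np.eye(N)                 # well-conditioned ground-truth covariance
x = rng.multivariate_normal(np.zeros(N), true_cov, size=T)

def offdiag(c):
    """Upper-triangular (cross-covariance) entries of a covariance matrix."""
    return c[np.triu_indices_from(c, k=1)]

# estimate the covariance pattern separately on the two halves of the data
c_first = offdiag(np.cov(x[: T // 2].T))
c_second = offdiag(np.cov(x[T // 2:].T))
r_split = np.corrcoef(c_first, c_second)[0, 1]   # close to 1 for reliable estimates
```

A split-half correlation near one indicates that the estimation noise is small compared to the structure in the covariance pattern.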
Appendix 1
1 Correlations and covariances
A typical measure for the strength of neuronal coordination is the Pearson correlation coefficient, here applied to spike counts in $1\,\mathrm{s}$ bins. Correlation coefficients, however, comprise features of both auto- and cross-covariances. From a theoretical point of view, it is simpler to study cross-covariances separately. Indeed, linear-response theory has been shown to faithfully predict cross-covariances in spiking leaky integrate-and-fire networks (Tetzlaff et al., 2012; Pernice et al., 2012; Trousdale et al., 2012; Helias et al., 2013; Dahmen et al., 2019; Grytskyy et al., 2013). Appendix 1—figure 1 justifies the investigation of cross-covariances instead of correlation coefficients for the purpose of this study. It shows that the spatial organization of correlations closely matches the spatial organization of cross-covariances.
2 Robustness to E/I separation
The analysis of the experimental data involves a number of preprocessing steps, which may affect the resulting statistics. In our study, one such critical step is the separation of putative excitatory and inhibitory units, which is partially based on setting thresholds on the widths of spike waveforms, as described in the Methods section. We tested the robustness of our conclusions with respect to these thresholds.
As mentioned in the Methods, two thresholds for the width of a spike waveform are chosen, based on all SU average waveforms: A width larger than the ‘‘broadness’’ threshold indicates a putative excitatory neuron. A width lower than the ‘‘narrowness’’ threshold indicates a putative inhibitory neuron. Units with intermediate widths are unclassified. Additionally, to increase the reliability of the classification, we perform it in two steps: first on the SU’s average waveform, and second on all its single waveforms. We calculate the percentage of single waveforms classified as either type. Finally, only SUs showing a high enough percentage of single waveforms classified the same as the average waveform are sorted as the respective type. The minimal percentage required, referred to as consistency $c$, is initially set to the lowest value which ensures no contradictions between average and single-waveform thresholding results. While the ‘‘broadness’’ and ‘‘narrowness’’ thresholds are chosen based on all available data for a given monkey, the required consistency is determined separately for each recording session. For monkey N, $c$ is set to 0.6 in all but one session: in resting state session N1 it is increased to 0.62. For monkey E, the value of $c$ equals 0.6 in the resting state recordings and takes the following values in the five analyzed reach-to-grasp sessions: 0.6, 0.89, 0.65, 0.61, 0.64.
The only step of our analysis for which the separation of putative excitatory and inhibitory neurons is crucial is the fitting of exponentials to the distance-resolved covariances. This step only involves resting state data. To test the robustness of our conclusions, we manipulate the required consistency value for sessions E1, E2, N1, and N2 by setting it to 0.75. Appendix 1—figure 2 and Appendix 1—table 1 summarize the resulting fits.
It turns out that increasing $c$ to 0.75, which implies disregarding about 20–25 percent of all data, does not have a strong effect on the fitting results. The obtained decay constants are smaller than for a lower $c$ value, but they stay in a range about an order of magnitude larger than that of the anatomical connectivity. We furthermore see that fitting individual slopes to different populations in some sessions leads to unreliable results (cf. yellow lines in Appendix 1—figure 2A, I and blue lines in Appendix 1—figure 2C,D,K,L). Therefore, the data is not sufficient to detect differences in decay constants for different neuronal populations. Fitting instead a single decay constant yields trustworthy results (cf. yellow lines in Appendix 1—figure 2E,M and blue lines in Appendix 1—figure 2G,H,O,P). Our data thus clearly expose that decay constants of covariances are in the millimeter range.
3 Stationarity of behavioral data
The linear-response theory, with the aid of which we develop our predictions about the covariance structure in the network, assumes that the processes under examination are stationary in time. However, this assumption is not necessarily met in experimental data, especially in motor cortex during active behavioral tasks. For this reason, we analyzed the stationarity of average single-unit firing rate and pairwise zero-time-lag covariance throughout a reach-to-grasp trial, similarly to Dahmen et al., 2019. Although the spiking activity becomes highly nonstationary during the movement, those epochs that are chosen for the analysis in our study (S and P) show only moderate variability in time (Appendix 1—figure 3). An analysis on the level of single-unit resolved activity also shows that the majority of neurons have stationary activity statistics within the relevant epochs S and P, especially when comparing to their whole dynamic range that is explored during movement transients towards the end of the task (Appendix 1—figure 5). Appendix 1—figure 6 shows that there are, however, a few exceptions (e.g. units 11, 84 in this session) that show moderate transients also within an epoch. Nevertheless, these transients are small compared to changes between the two epochs S and P.
Thus, both the population-level and single-unit-level analyses are in line with the second test for stationarity that we show in Figure 6. There we compare the firing rate and covariance changes between two 200 ms segments of the same epoch to the firing rate and covariance changes between two 200 ms segments of different epochs. If the neural activity were not stationary within an epoch, then we would not obtain correlation coefficients of almost one between firing rates in Figure 6E and correlation coefficients up to 0.9 between covariance patterns within one epoch in Figure 6F. In summary, these analyses together make us confident that assuming stationarity within an epoch is a good approximation to show that there are significant behaviorally related changes in covariances across epochs of the reach-to-grasp experiment.
4 Network model
We consider neuronal network models with isotropic, distance-dependent connection profiles. Ultimately, we are interested in describing cortical networks with a two-dimensional sheet-like structure. For developing the theory, however, we first consider the simpler case of a one-dimensional ring and subsequently extend the theory to a two-dimensional torus, ensuring periodic boundary conditions in both cases. $N$ equidistantly distributed neurons form a grid on these manifolds. The position of neuron $i\in \{1,\dots,N\}$ is described by the vector ${\mathit{r}}_{i}\in {\mathbb{R}}^{D}$, with $D\in \{1,2\}$. The connections ${W}_{ij}$ from neuron $j$ to neuron $i$ are drawn randomly with a connection probability that decays with the distance $\Vert {\mathit{r}}_{i}-{\mathit{r}}_{j}\Vert$ between neurons, described by the normalized connectivity profile $p(\mathit{r})$, $\int p(\mathit{r})\,{\mathrm{d}}^{D}r=1$, which we assume to obey radial symmetry. The connection probability decays on a characteristic length scale $d$. As we are working on discrete lattices, we introduce the probability ${p}_{ij}$ of two neurons being connected, defined by the relation $p({\mathit{r}}_{i}-{\mathit{r}}_{j})={\lim}_{a\to 0}{p}_{ij}/a$, with lattice spacing $a$. We set the synaptic weights for connections of a single type to a fixed value $w$, but allow for multiple connections between neurons, that is ${W}_{ij}\in \{0,w,2w,\dots\}$, or ${W}_{ij}={n}_{ij}\cdot w$ for all sending neurons $j$ of a given type, where ${n}_{ij}$ is binomially distributed. Such multapses are required to simultaneously meet biological constraints on neuronal indegrees, neuron densities, and spatial ranges of connections. If one instead assumed Bernoulli connectivity, an analysis analogous to Eq. 7 of Senk et al., 2018 would yield a connection probability exceeding unity.
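A minimal sketch of drawing such a connectivity matrix on a one-dimensional ring, assuming an exponential profile and illustrative parameter values (not those of the paper; self-connections are kept for simplicity):

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, w, d = 200, 40, 0.1, 5.0   # neurons, mean indegree, weight, decay length (lattice units)

# ring distances and a normalized, exponentially decaying connection profile
x = np.arange(N)
dist = np.abs(x[:, None] - x[None, :])
dist = np.minimum(dist, N - dist)            # periodic boundary conditions
p = np.exp(-dist / d)
p /= p.sum(axis=1, keepdims=True)            # connection probabilities p_ij, rows sum to 1

# multapses: n_ij ~ Binomial(K, p_ij), so W_ij = n_ij * w takes values in {0, w, 2w, ...}
n = rng.binomial(K, p)
W = w * n

# the mean indegree equals K by construction
assert abs(n.sum(axis=1).mean() - K) < 2.0
```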
We introduce two populations of neurons, excitatory (E) and inhibitory (I) neurons. The number of neurons of a given population $a\in \{\mathrm{E},\mathrm{I}\}$ is ${N}_{a}$, and their ratio is $q={N}_{\mathrm{E}}/{N}_{\mathrm{I}}$, which, for convenience, we assume to be an even number (see permutation symmetry below). The connection from population $b$ to population $a$ has the synaptic weight ${w}_{ab}$ and characteristic decay length of the connectivity profile ${d}_{ab}$. The average number of inputs drawn per neuron is fixed to ${K}_{ab}$. In order to preserve translation symmetry, $q$ excitatory neurons and one inhibitory neuron are put onto the same lattice point, as shown in Figure 3A in the main text.
Linear-response theory has been shown to faithfully capture the statistics of fluctuations in asynchronous irregular network states (Lindner et al., 2005). Here we follow Grytskyy et al., 2013, who show that different types of neuronal network models can be mapped to an Ornstein–Uhlenbeck process and that the low-frequency limit of this simple rate model describes spike count covariances of spiking models well (Tetzlaff et al., 2012). In particular, Dahmen et al., 2019 showed quantitative agreement of linear-response predictions for the statistics of spike-count covariances in leaky integrate-and-fire networks for the full range of spectral bounds $R\in [0,1)$. Therefore, we consider a network of linear rate neurons, whose activity $\mathit{x}\in {\mathbb{R}}^{N}$ is described by
with uncorrelated Gaussian white noise $\mathit{\xi}$, $\langle {\xi}_{i}(t)\rangle =0$, $\langle {\xi}_{i}(s){\xi}_{j}(t)\rangle ={D}_{i}{\delta}_{ij}\delta (s-t)$. The solution to this differential equation can be found by multiplying the whole equation with the left eigenvectors ${\mathit{u}}_{\alpha}$ of $\mathit{W}$
where ${y}_{\alpha}={\mathit{u}}_{\alpha}\cdot \mathit{x}$, ${\xi}_{\alpha}={\mathit{u}}_{\alpha}\cdot \mathit{\xi}$, and ${\lambda}_{\alpha}$ denotes the corresponding eigenvalue of $\mathit{W}$. Neglecting the noise term, the solutions are given by
with Heaviside function $\mathrm{\Theta}(t)$. These are the eigenmodes of the linear system and they are linear combinations of the individual neuronal rates
Note that the weights ${\left({u}_{\alpha}\right)}_{i}$ of these linear combinations depend on the details of the effective connectivity matrix $\mathit{W}$. The stability of an eigenmode is determined by the corresponding eigenvalue ${\lambda}_{\alpha}$. If $\mathrm{Re}\left({\lambda}_{\alpha}\right)<1$, the eigenmode is stable and decays exponentially. If $\mathrm{Re}\left({\lambda}_{\alpha}\right)>1$, the eigenmode is unstable and grows exponentially. If $\mathrm{Im}\left({\lambda}_{\alpha}\right)\ne 0$, the eigenmode is oscillatory with an exponential envelope. $\mathrm{Re}\left({\lambda}_{\alpha}\right)=1$ is here referred to as the critical point. This type of stability is also called linear stability to stress that these considerations are only valid in the linear approximation. Realistic neurons have a saturation at high rates, which prevents activity from diverging indefinitely. A network is called linearly stable if all modes are stable. This is determined by the real part of the largest eigenvalue of $\mathit{W}$, called the spectral bound $R$. In inhibition-dominated networks, the spectral bound is determined by the heterogeneity in connections, and $R\lesssim 1$ defines the dynamically balanced critical state (Dahmen et al., 2019).
The different noise components ${\xi}_{\alpha}$ excite the corresponding eigenmodes of the system and act as a driving force. A noise vector $\mathit{\xi}$ that is not parallel to a single eigenvector ${\mathit{V}}_{\alpha}$ excites several eigenmodes, each with the corresponding strength ${\xi}_{\alpha}$.
Note that the different eigenmodes do not interact, which is why the total activity $\mathit{x}$ is given by a linear combination, or superposition, of the eigenmodes
where ${\mathit{V}}_{\alpha}$ denotes the $\alpha$-th right eigenvector of the connectivity matrix $\mathit{W}$.
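The role of the spectral bound can be illustrated with a random coupling matrix (iid Gaussian entries are an assumption for illustration; the paper's connectivity is spatially structured):

```python
import numpy as np

rng = np.random.default_rng(3)
N, g = 500, 0.8                        # network size and coupling strength
W = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

# spectral bound: largest real part of the eigenvalues of W
R = np.linalg.eigvals(W).real.max()

# for iid couplings the spectrum fills a disk of radius g (circular law),
# so R is close to g; every mode evolves as exp((lambda_alpha - 1) t) and
# hence decays as long as R < 1
assert abs(R - g) < 0.1
```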
5 Covariances
Time-lag integrated covariances ${c}_{ij}=\int \mathrm{d}\tau \,\left[\langle {x}_{i}(t){x}_{j}(t+\tau )\rangle -\langle {x}_{i}(t)\rangle \langle {x}_{j}(t+\tau )\rangle \right]$ can be computed analytically for the linear dynamics (Gallego et al., 2020). They follow from the connectivity $\mathit{W}$ and the noise strength $D$ as (Pernice et al., 2011; Trousdale et al., 2012; Grytskyy et al., 2013; Lindner et al., 2005)
with identity matrix $\mathbf{1}$. These covariances are equivalent to covariances of spike counts in large time windows, given by the zero-frequency component of the Fourier transform of $\mathit{x}$ (sometimes referred to as the Wiener–Khinchin theorem, Gardiner, 1985; even though the theorem proper applies in cases where the Fourier transforms of the signals $\mathit{x}$ do not exist). Spike count covariances (Figure 1B in the main text) can be computed from trial-resolved spiking data (Dahmen et al., 2019). This equivalence allows us to directly relate theoretical predictions for covariances to the experimentally observed ones.
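Equation (10) can be verified numerically by comparing it with the integral of the lagged covariance function of the corresponding Ornstein–Uhlenbeck process, obtained from the Lyapunov equation. A sketch with an arbitrary stable coupling matrix (parameters are ours, for illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
N = 50
W = rng.normal(0.0, 0.5 / np.sqrt(N), size=(N, N))   # stable: spectral bound ~ 0.5 < 1
D = np.diag(rng.uniform(0.5, 1.5, size=N))           # input noise intensities

# Equation (10): time-lag integrated covariances c = (1 - W)^{-1} D (1 - W)^{-T}
A = np.eye(N) - W
c_theory = np.linalg.solve(A, np.linalg.solve(A, D.T).T)   # A^{-1} D A^{-T}

# cross-check via the OU process: the equal-time covariance C0 solves
# (W - 1) C0 + C0 (W - 1)^T = -D, and integrating the lagged covariance
# exp((W - 1) tau) C0 over all time lags gives A^{-1} C0 + C0 A^{-T}
C0 = solve_continuous_lyapunov(W - np.eye(N), -D)
X = np.linalg.solve(A, C0)          # A^{-1} C0
c_integrated = X + X.T              # X.T = C0 A^{-T}, since C0 is symmetric
assert np.allclose(c_theory, c_integrated)
```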
While Equation (10) provides the full information on covariances between any two neurons in the network, this information is not available in the experimental data. Only a small subset of neuronal activities can be recorded, such that inference of connectivity parameters from Equation (10) is infeasible. We recently proposed in Dahmen et al., 2019 to instead consider the statistics of covariances as the basis for comparison between models and data. Using Equation (8) and Equation (10) as a starting point, field-theoretical techniques allow the derivation of equations for the mean $\overline{\mathit{c}}$ and variance $\overline{\delta {\mathit{c}}^{2}}$ of cross-covariances in relation to the mean $\mathit{M}$ and variance $\mathit{S}$ of the connectivity matrix $\mathit{W}$ (Dahmen et al., 2019):
$\mathit{M}$and $\mathit{S}$ are defined in the subsequent section. The renormalized input noise strength is given by
with input noise covariance $\mathit{D}$, and the all-ones vector $\mathit{I}={(1,\dots,1)}^{\mathrm{T}}\in {\mathbb{R}}^{N}$. Note that Equation (12) only holds for cross-covariances ($i\ne j$). The diagonal terms ${\left[\overline{\delta {\mathit{c}}^{2}}\right]}_{ii}$, that is, the variances of autocovariances, do get a second contribution, which is negligible for the cross-covariances considered here.
6 Cumulant generating function of connectivity matrix
For calculating the mean and variance of the covariances of the network activity ((11) and (12)) we need mean $\mathit{M}$ and variance $\mathit{S}$ of connectivity $\mathit{W}$. In the following, we derive the cumulant generating function (Gardiner, 1985) of ${W}_{ij}$.
The number of connections $n$ from neuron $j$ to neuron $i$ is a binomial random variable with $K$ trials and success probability ${p}_{ij}$ (in the following, for brevity, we suppress the index $i$, ${p}_{ij}\equiv {p}_{j}$)
The average number of connections from neuron $j$ to neuron $i$ is ${K}_{j}={p}_{j}K$, which ensures the correct average total indegree
The moment generating function of a connectivity matrix element ${W}_{j}\equiv {W}_{ij}\in \{0,w,2w,\mathrm{\dots}\}$ is given by
In a realistic network, $K$ is very large. In the limit $K\to \mathrm{\infty}$, while keeping $Kp=\mathrm{const}.$, the binomial distribution converges to a Poisson distribution and we can write
Taking the logarithm leads to the cumulant generating function
and the first two cumulants
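The Poisson limit and the resulting first two cumulants can be checked by sampling (parameter values are illustrative): in the limit $K\to\infty$ with $Kp$ fixed, all cumulants of $n$ equal $Kp$, so the mean of ${W}_{j}=n\,w$ is $Kpw$ and its variance is $Kp{w}^{2}$.

```python
import numpy as np

rng = np.random.default_rng(5)
K, p, w = 10_000, 0.002, 0.1     # K large with K*p fixed: binomial -> Poisson
n = rng.binomial(K, p, size=1_000_000)
W = w * n                        # weight entries W = n * w

# in the Poisson limit all cumulants of n equal K*p, so the first two
# cumulants of W are mean = K*p*w and variance = K*p*w^2
assert abs(W.mean() - K * p * w) < 2e-3
assert abs(W.var() - K * p * w**2) < 2e-3
```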
7 Note on derivation of variance of covariances
Note that $\mathit{M}$ and $\mathit{S}$ have an identical structure determined by the connectivity profile, and the structure of the covariance equation is identical for the mean (Equation (11)) and the variance (Equation (12)) as well. This is why in the following we only derive the results for the mean of covariances. The results for the variance of covariances are obtained by substituting ${w}^{2}$ for $w$ and ${\mathit{D}}_{\mathrm{r}}^{2}$ for ${\mathit{D}}_{\mathrm{r}}$. As we show, divergences in expressions related to the mean covariances arise if the population eigenvalue ${\lambda}_{0}$ of the effective connectivity matrix approaches one. In expressions related to the variance of covariances, the divergences are caused by the squared spectral bound ${R}^{2}$ being close to one. In general expressions, we sometimes write $\zeta $ to denote either the population eigenvalue or the spectral bound, depending on whether the context is the mean or the variance of covariances.
8 Utilizing symmetries to reduce dimensionality
For real neuronal networks, the anatomical connectivity is never known completely, let alone the effective connectivity. This is why we consider disorder-averaged systems. They are described by the mean $\mathit{M}$ and variance $\mathit{S}$ of the connectivity. The latter inherit the underlying symmetries of the network, such as the same radially symmetric connectivity profile for all neurons of one type. As neuronal networks are high-dimensional systems, calculating covariances from Equation (11) and Equation (12) at first seems like a daunting task. But leveraging the aforementioned symmetries, similarly as in Kriener et al., 2013, allows for an effective reduction of the dimensionality of the system, thereby rendering the problem manageable.
As a demonstrative example of how this is done, consider a random network of $N$ neurons on a one-dimensional ring, in which a neuron can form a connection with weight $w$ to any other neuron with probability ${p}_{0}$. In that case, $\mathit{M}$ is a homogeneous matrix, with all entries given by the same average connectivity weight
This corresponds to an alltoall connected ring network. Due to the symmetry of the system, moving all neurons by one lattice constant does not change the system. The translation operator $\mathit{T}$, representing this operation mathematically, is defined via its effect on the vector of neuron activity $\mathit{x}$
Applying $\mathit{T}$ $N$ times yields the identity operation
Hence, its eigenvalues are given by the complex roots of unity
with $L=Na$ denoting the circumference of the ring. This shows that $\mathit{T}$ has $N$ one-dimensional eigenspaces. Since the system is invariant under translation, $\mathit{M}$ is invariant under the transformation $\displaystyle \mathit{T}\mathit{M}{\mathit{T}}^{-1}=\mathit{M}\phantom{\rule{thinmathspace}{0ex}},$ and thus $\mathit{M}$ and $\mathit{T}$ commute. As $\mathit{M}$ leaves eigenspaces of $\mathit{T}$ invariant (if $\mathit{V}$ is an eigenvector of $\mathit{T}$, $\mathit{M}\mathit{V}$ is an eigenvector with the same eigenvalue, so they need to be multiples of each other), all eigenvectors of $\mathit{T}$ must be eigenvectors of $\mathit{M}$. Accordingly, knowing the eigenvectors of $\mathit{T}$ allows diagonalizing $\mathit{M}$. The normalized (left and right) eigenvectors of $\mathit{T}$ are given by
We get the eigenvalues of $\mathit{M}$ by multiplying it with the eigenvectors of $\mathit{T}$
which is always zero, except for $l=0$, which corresponds to the population eigenvalue ${\lambda}_{0}:={m}_{{k}_{0}}=N{p}_{0}w$ of $\mathit{W}$ (Figure 3C in the main text). Now, we can simply write down the diagonalized form of $\mathit{M}$
and we have effectively reduced the $N$-dimensional problem to a one-dimensional one. Inverting $\mathit{A}:=\mathbf{1}-\mathit{M}$ in Equation (11) is now straightforward, since it is diagonal in the new basis. Its eigenvalues can be written as $\displaystyle {a}_{k}=1-{m}_{k}\phantom{\rule{thinmathspace}{0ex}},$ where we suppressed the index $l$. Therefore its inverse is given by
The renormalized noise can be evaluated using that the all-ones vector occurring in Equation (13) is the eigenvector ${\mathit{V}}_{0}$ of $\mathit{S}$. After identifying the eigenvalue ${s}_{0}$ with the squared spectral bound ${R}^{2}$, we find
which allows us to express the mean cross-covariances $\overline{c}$ (see Equation (11)) and the variance of cross-covariances $\overline{\delta {c}^{2}}$ (see Equation (12)) in terms of the eigenvectors of $\mathit{M}$ and $\mathit{S}$, respectively
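The diagonalization argument of this section can be condensed into a short numerical check: for a translation-invariant (circulant) mean-connectivity matrix, the Fourier modes are eigenvectors and the eigenvalues are the discrete Fourier transform of the first row. The profile and parameters below are illustrative, not those of the paper.

```python
import numpy as np

N, d, lam0 = 64, 5.0, 0.8
x = np.arange(N)
dist = np.minimum(x, N - x)                # ring distance to neuron 0
profile = np.exp(-dist / d)
profile *= lam0 / profile.sum()            # first row of M; row sum = population eigenvalue

# translation symmetry makes M circulant: row i is the profile shifted by i
M = np.stack([np.roll(profile, i) for i in range(N)])

# the Fourier modes diagonalize M; the eigenvalues are the DFT of the profile
m_k = np.fft.fft(profile).real             # real because the profile is symmetric
k = 3
v = np.exp(2j * np.pi * k * x / N) / np.sqrt(N)
assert np.allclose(M @ v, m_k[k] * v)      # v is an eigenvector with eigenvalue m_k[k]
assert np.isclose(m_k[0], lam0)            # k = 0 recovers the population eigenvalue
```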
9 One-dimensional network with one population
The simplest network with spatial connectivity is a one-dimensional ring with a single population of neurons. Following Section 6, the mean connectivity matrix has the form
As ${p}_{ij}$ only depends on the distance of two neurons, the rows in $\mathit{M}$ are identical, but shifted by one index.
9.1 Dimensionality reduction
We follow the procedure developed in Section 8, as the system is invariant under translation as well. Suppressing the subscripts of $k$, we get the eigenvalues of $\mathit{M}$
where the sum over $x$ denotes a sum over all lattice sites. We used the translational symmetry from the first to the second line. The change of sign in the exponential from line two to three is due to the fact that we are summing over the second index of ${p}_{ij}$. Thus, the eigenvalues are effectively given by the discrete Fourier transform of the connectivity profile. Expressing ${\mathit{A}}^{-1}$ using the eigenvectors ${\mathit{V}}_{k}$ of $\mathit{M}$ leads to
where we extracted an identity for later convenience, and we defined ${\mu}_{ij}$.
Next, we consider the renormalized noise, which is given by Equation (13). Using that the all-ones vector $\mathit{I}$ in the second term is the eigenvector of $\mathit{S}$ corresponding to $k=0$, we get
Again, we identify ${s}_{0}$ with the squared spectral bound ${R}^{2}$, and find
Inserting Equation (14) and Equation 15 into Equation (11) yields
9.2 Continuum limit
As we assume the lattice constant to be small, the connectivity profile is sampled densely, and we may take the continuum limit. Therefore, we write
Note that ${\lim}_{a\to 0}{\sum}_{j}{p}_{j}={\lim}_{a\to 0}{\sum}_{j}p({x}_{i}-{x}_{j})\,a=\int \mathrm{d}x\,p(x)$, because we are summing over the second index $j$. If the decay constant $d$ of the connectivity profile is small compared to the size $L$ of the network, we can take $L$ to infinity and finally end up with
Analogously, we find
where we defined
with
Finally, we get
where the asterisk denotes the convolution.
9.3 Prediction of exponential decay of covariance statistics
Note that the integral in Equation (18) can be interpreted as an integral in the complex plane. According to the residue theorem, the solution to this integral is a weighted sum of exponentials, evaluated at the poles of ${[1-m(k)]}^{-1}$. As $\mu (x)$ appears in the equation for the mean covariances, and the convolution of two exponentials is an exponential with a prefactor of the form $(\mathrm{const.}+|x|)$, we expect the dominant behavior to be an exponential decay in the long-range limit, with decay constants given by the inverse imaginary parts of the poles. The poles closest to zero are the ones that lead to the shallowest and thereby dominant decay. A nonzero real part of the poles leads to oscillations in $\mu (x)$.
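This pole structure can be checked numerically for the exponential connectivity profile of Section 9.4, for which $m(k)=\lambda_0/(1+d^2k^2)$ and the relevant pole lies at $k=i\sqrt{1-\lambda_0}/d$. The sketch below is our own illustration (parameter values are arbitrary); it compares a direct numerical inverse Fourier transform with the residue prediction.

```python
import numpy as np

d, lam0 = 1.0, 0.8
dk = 1e-3
k = np.arange(-200.0, 200.0, dk)
m_k = lam0 / (1.0 + (d * k) ** 2)          # Fourier transform of an exponential profile

def mu_numeric(x):
    """Inverse Fourier transform of mu(k) = m(k) / (1 - m(k)), cf. Equation (18)."""
    return (np.exp(1j * k * x) * m_k / (1.0 - m_k)).sum().real * dk / (2.0 * np.pi)

def mu_residue(x):
    """Residue-theorem result: exponential decay set by the pole k = i*kappa."""
    kappa = np.sqrt(1.0 - lam0) / d
    return lam0 / (2.0 * d**2 * kappa) * np.exp(-kappa * abs(x))

# the numerically evaluated integral decays exponentially, as predicted
assert np.isclose(mu_numeric(5.0), mu_residue(5.0), rtol=1e-2)
```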
9.4 Long-range limit
We cannot expect to solve the integral in Equation 17 for arbitrary connectivity profiles. To continue our analysis, we make use of the Padé method, which approximates arbitrary functions as rational functions (Basdevant, 1972). We approximate $\mu (k)$ around $k=0$ using a Padé approximation of order (0,2)
with
This allows us to calculate the approximate poles of $\mu (k)$
As $2m(0)/{m}^{\prime \prime}(0)$ will be negative, due to the factor ${i}^{2}$ from the second derivative of the Fourier integral, we write
Closing the integral contour in Equation (18) in the upper half plane for $x>0$, and in the lower half plane for $x<0$, we get
where we defined the effective decay constant for the mean covariances
with $m(0)={\lambda}_{0}$ and ${m}^{\prime \prime}(0)=-{\lambda}_{0}\langle {x}^{2}\rangle$, since $m(k)$ is the Fourier transform of the connectivity profile, Equation (16). Note that ${\lambda}_{0}=Kw$ again is the population eigenvalue of the effective connectivity matrix $\mathit{W}$. For evaluating Equation (11) and Equation (12), we need to calculate the convolution of $\mu$ with itself
The final expression for the mean covariances is
Equivalently, for the variance of covariances we obtain the final result
where
Note that the quality of the Padé approximation depends on the outlier eigenvalue and the spectral bound. For the variances, the approximation works best for spectral bounds $R$ close to 1. The reason for this is that we are approximating the position of the poles in the complex integral Equation (18). We make an approximation around $k=0$ and Equation (22) shows that the position of the complex poles moves closer to $k=0$ as $s(0)\equiv {R}^{2}\to 1$.
General results:
Using Equation (21)
we find
with
For the variance we use
to get
with
Exponential connectivity profile:
Using an exponential connectivity profile given by
we find $\langle {x}^{2}\rangle =2{d}^{2}$ and
with ${\lambda}_{0}=Kw$ for the mean, and ${R}^{2}=K{w}^{2}$ for the variance.
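Combining the Padé pole position with these identifications gives an effective decay constant ${d}_{\mathrm{eff}}=\sqrt{\zeta \langle {x}^{2}\rangle /(2(1-\zeta ))}$, with $\zeta ={\lambda}_{0}$ for the mean and $\zeta ={R}^{2}$ for the variance of covariances. The following sketch (the helper function and parameter values are ours, for illustration) shows how short-range connectivity with a spectral bound close to one produces decay constants far beyond the anatomical scale:

```python
import numpy as np

def decay_constant(d, zeta, profile="exponential"):
    """Effective decay length from the (0,2) Pade poles:
    d_eff = sqrt(zeta * <x^2> / (2 * (1 - zeta))), with <x^2> = 2*d^2 for an
    exponential profile and <x^2> = d^2 for a Gaussian one; zeta is lambda_0
    for the mean and R^2 for the variance of covariances."""
    x2 = 2.0 * d**2 if profile == "exponential" else d**2
    return np.sqrt(zeta * x2 / (2.0 * (1.0 - zeta)))

d = 0.2                                          # connectivity scale (illustrative units)
d_eff = {R: decay_constant(d, R**2) for R in (0.5, 0.9, 0.99)}
# the decay constant diverges as R -> 1: with d = 0.2, the value at R = 0.99
# already exceeds 1, i.e. several-fold above the connectivity scale
assert d_eff[0.99] > 1.0 > d_eff[0.9]
```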
Gaussian connectivity profile:
Analogously, using a Gaussian connectivity profile given by
we find $\langle {x}^{2}\rangle ={d}^{2}$, and get
10 One-dimensional network with two populations
Realistic neuronal networks consist of excitatory and inhibitory neurons, so we need to introduce a second population to our network. Typically, there are more excitatory than inhibitory neurons in the brain. Therefore, we introduce $q$ excitatory neurons for each inhibitory neuron. We place $q$ excitatory neurons and one inhibitory neuron together in one cell. The cells are distributed equally along the ring. For convenience, we define $N\equiv {N}_{\mathrm{I}}$.
The structure of the connectivity matrix depends on the choice of the activity vector $\mathit{x}$. For later convenience we choose
where ${\mathit{x}}_{i}^{(\mathrm{E})}$ is a $q$-dimensional vector denoting the activity of the $q$ excitatory neurons in cell $i$. $\mathit{M}$ is a $(q+1)N\times (q+1)N$ matrix, which qualitatively has the structure
Note that ${(\mathrm{EE})}_{ij}$ are $q\times q$ matrices, ${(\mathrm{EI})}_{ij}$ are $q\times 1$ matrices, ${(\mathrm{IE})}_{ij}$ are $1\times q$ matrices, and ${(\mathrm{II})}_{ij}$ are $1\times 1$ matrices. The entries ${(ab)}_{ij}$ describe the connectivities from population $b$ in cell $j$ to population $a$ in cell $i$. The entries are given by
The difference stems from the fact that we have $q$ times as many excitatory neurons. As the total indegree from excitatory neurons should be given by ${K}_{a\mathrm{E}}$, we need to introduce a reducing factor of $1/q$, as the connection probability is normalized to one.
10.1 Dimensionality reduction
In the following, we will reduce the dimensionality of $\mathit{M}$ as done before in the case with one population. First, we make use of the symmetry within the cells. All entries in $\mathit{M}$ corresponding to connections coming from excitatory neurons of the same cell need to be the same. For that reason, we change the basis to
where $\mathit{I}$ denotes a $q$dimensional vector containing only ones. For a full basis, we need to include all the vectors with $\mathit{I}$ being replaced by a vector containing all possible permutations of equal numbers of ±1. In this basis $\mathit{M}$ is block diagonal
and ${\mathit{M}}^{\prime}$ is a $2N\times 2N$ matrix, which has the same qualitative structure as shown in Equation (24), but the submatrices ${\left(\mathrm{ab}\right)}_{ij}$ are replaced by
Next, we use translational symmetry of the cells. The translation operator is defined by
As the system is invariant under moving each cell to the next lattice site, ${\mathit{M}}^{\prime}$ is invariant under the transformation
Again, the eigenvalues of $\mathit{T}$ can be determined using ${\mathit{T}}^{N}=\mathbf{1}$, and they are the same as in the case of one population. Note, however, that here the eigenspaces corresponding to the single eigenvalues are two-dimensional. The eigenvectors
belong to the same eigenvalue. In this basis, ${\mathit{M}}^{\prime}$ is block diagonal, with each block consisting of a 2×2 matrix, corresponding to one value of ${k}_{l}=\frac{2\pi l}{L}$, $l\in \{0,\dots,N-1\}$
Since all block matrices can be treated equally, we further reduced the problem to diagonalizing a 2×2 matrix. The submatrices take the form
with the discrete Fourier transform
Note that $x$ and $k$ are still discrete here, but we could take the continuum limit at this point. The eigenvalues of ${\mathit{M}}_{k}$ are given by
The corresponding eigenvectors are
with normalization ${N}_{\pm}$. The eigenvectors written in the Fourier basis are given by
and we can get the eigenvectors ${\stackrel{~}{\mathit{V}}}_{\pm}(k)$ in the basis we started with by extending ${\mathit{V}}_{k}^{(\mathrm{E})}$ and ${\mathit{V}}_{k}^{(\mathrm{I})}$ to vectors similar to Equation (25), where the elements corresponding to excitatory neurons are repeated $q$ times. Note that the normalization of the original basis leads to an additional factor $1/\sqrt{q}$ in the first term of Equation (29).
Analogously, we can find the left eigenvectors of $\mathit{M}$ by conducting the same steps with the transpose of $\mathit{M}$
and the vectors in the original basis ${\stackrel{~}{\mathit{U}}}_{\pm}(k)$ are obtained similarly to the right eigenvectors. The normalization ${N}_{\pm}$ is chosen such that
which leads to
Now, we can express ${\mathit{A}}^{-1}$ in terms of the eigenvalues and eigenvectors of $\mathit{M}$
which leads to
where we defined $\mathit{\mu}(k)$ similar to Equation (19). Let $E$ and $I$ be the sets of indices referring to excitatory or inhibitory neurons respectively. We find
with
and
10.2 General results
The renormalized noise is evaluated using the same trick as in the one-population case. We express the all-ones vector using eigenvectors of the variance matrix $\mathit{S}$
Evaluating the coefficients $a$ and $b$ and inserting the corresponding solutions into Equation (13) yields
with
with the eigenvalues ${s}_{ab}(k)$ of $\mathit{S}$. We again identified the spectral bound
The mean covariances can be written as
where $\mathit{\mu}=\mathit{\mu}(x)$. We can distinguish three different kinds of covariances depending on the type of neurons involved
with
10.3 Long-range limit
From here on, we consider the special case in which the synaptic connections only depend on the type of the presynaptic neuron and not on the type of the postsynaptic neuron. This is in agreement with network parameters used in established cortical network models (Potjans and Diesmann, 2014; Senk et al., 2018), in which the connection probabilities to both types of target neurons in the same layer are usually of the same order of magnitude. In that case, all expressions become independent of the first population index, ${A}_{ab}\equiv {A}_{b}$, and the only expressions we need to evaluate become
with
and
After taking the continuum limit, we can again make a (0,2) Padé approximation
which leads to the poles
or the effective decay constant of the mean covariances
Using
we get
after introducing relative parameters
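To illustrate how the pole of a (0,2) Padé approximant determines the effective decay constant, consider the following sketch for a Gaussian profile with unit decay constant (the value of $g$, which plays the role of the spectral bound here, is illustrative and not a parameter from this appendix):

```python
import numpy as np
from scipy.interpolate import pade

# (0,2) Pade approximation of the kernel f(u) = 1 / (1 - g * exp(-u/2)) in the
# variable u = k^2, for a Gaussian profile with unit decay constant d = 1
# (illustrative setup). The pole at negative u gives the effective decay constant.
g = 0.95                                     # plays the role of the spectral bound
a0 = 1.0 / (1.0 - g)
alpha = g / (2.0 * (1.0 - g))
beta = -g / (8.0 * (1.0 - g))
# Taylor coefficients of f(u) around u = 0
an = [a0, -a0 * alpha, a0 * (alpha**2 - beta)]

p, q = pade(an, 2)                           # numerator degree 0, denominator degree 2
u_star = np.min(np.real(q.roots))            # negative root -> pole at imaginary k
d_eff_pade = 1.0 / np.sqrt(-u_star)

# exact pole of 1 - g * exp(-u/2): u = 2 ln(g) < 0 for g < 1
d_eff_exact = 1.0 / np.sqrt(2.0 * np.log(1.0 / g))
assert abs(d_eff_pade - d_eff_exact) / d_eff_exact < 0.01
```

For $g$ close to one, the low-order Padé pole almost coincides with the exact pole, consistent with the effective decay constant growing as the spectral bound approaches one.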
The renormalized noise in Equation (13) reduces to
The mean covariances are
with
and
Note that expressions from both populations contribute to each kind of covariance. Therefore, all mean covariances contain a part that decays with either of the decay constants we just determined. If, for example, the inhibitory decay constant is much larger than the excitatory one, $\overline{{c}_{\mathrm{EI}}}(x)$ will decay with the larger decay constant in the long-range limit
Exponential connectivity profile:
Just as in Section 9.4 we get
with ${\lambda}_{0}={w}_{\mathrm{E}}{K}_{\mathrm{E}}+{w}_{\mathrm{I}}{K}_{\mathrm{I}}$ for the decay constant of the mean covariances, and ${R}^{2}={w}_{\mathrm{E}}^{2}{K}_{\mathrm{E}}+{w}_{\mathrm{I}}^{2}{K}_{\mathrm{I}}$ for the decay constant of the variances.
Gaussian connectivity profile:
Similarly to Section 9.4, we get
11 Two-dimensional network with one population
In the following, we consider two-dimensional networks, which are meant to mimic a single-layered cortical network. Neurons are positioned on a two-dimensional lattice (an ${N}_{x}\times {N}_{y}$ grid) with periodic boundary conditions in both dimensions (a torus). We define the activity vector to be of the form
The connectivity matrix is defined correspondingly.
11.1 Dimensionality reduction
In two dimensions, we have to define two translation operators that move all neurons one step in the $x$-direction or the $y$-direction, respectively. They are defined via their action on $\mathit{x}$
Similar reasoning as in one dimension leads to the eigenvalues
and similarly for the $y$-direction. The eigenvectors can be inferred from the recursion relations
where entries ${v}_{\alpha \beta}$ of the vector $\mathit{V}$ are defined analogously to Equation (38). The eigenvectors are given by
where we suppressed the subscripts of ${k}^{(x)}$ and ${k}^{(y)}$ again. Using the fact that these eigenvectors are also eigenvectors of $\mathit{M}$ yields the eigenvalues of $\mathit{M}$
In the continuum limit, this becomes the two-dimensional Fourier transform
The inverse of $\mathit{A}$ is given by
with the inverse two-dimensional Fourier transform
The expression for the renormalized noise is the same as in the one-dimensional case with one population. Hence, the mean covariances are given by
which is the one-dimensional expression, except that the convolution is replaced by its two-dimensional analog, denoted here by the double asterisk.
11.2 Long-range limit
Employing the symmetry of the connectivity kernel, we rewrite the integral in $\mu (\mathit{x})$ using polar coordinates
with $r=|\mathit{x}|$, and make a Padé approximation of order (0,2) of the integration kernel
Following (Goldenfeld, 1992, p.160f), we can interpret this as calculating the Green’s function of the heat equation
which can be solved using the fact that $\mu (\mathit{x})$ can only be a function of the radial distance $r$, due to the symmetry of the kernel. Rewriting leads to
with the effective decay constant
and $\mathrm{\Gamma}=2m{(0)}^{2}/{m}^{\prime \prime}(0)$. Defining $\rho \equiv r/\overline{d}$, $\stackrel{~}{\mu}(\rho )\equiv \mu (\rho \overline{d})$, and using $\delta (\rho \overline{d})={\overline{d}}^{-2}\delta (\rho )$, we get
The solution to this equation is given by the modified Bessel function of the second kind and zeroth order, ${K}_{0}$
Reinserting the defined variables yields
Note that the modified Bessel functions of the second kind decay exponentially for long distances
Note, however, that the inverse square root of the distance appears as a prefactor of the exponential. Formally, this is the one-dimensional result. The only difference here is that $m(k)$ is a two-dimensional Fourier transform instead of a one-dimensional one, and $\mu (r)$ contains modified Bessel functions of the second kind instead of exponentials.
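This asymptotic decay, ${K}_{0}(x)\sim \sqrt{\pi /(2x)}\,{e}^{-x}$, can be checked directly (a quick numerical aside using SciPy, not part of the original derivation):

```python
import numpy as np
from scipy.special import k0

# K0(x) ~ sqrt(pi / (2x)) * exp(-x) for large x: exponential decay with an
# inverse-square-root prefactor.
x = np.array([10.0, 20.0, 40.0])
asymptotic = np.sqrt(np.pi / (2 * x)) * np.exp(-x)
rel_err = np.abs(k0(x) - asymptotic) / k0(x)
assert np.all(rel_err < 0.02)   # leading correction to the asymptotics is -1/(8x)
```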
In order to evaluate the expression for the mean covariances, Equation 42, one needs to calculate the two-dimensional convolution of a modified Bessel function of the second kind with itself, for which we use the following trick
where $\mathcal{F}$ denotes the Fourier transform, $\mathscr{H}$ denotes the Hankel transform, and $\beta ={\overline{d}}^{2}$. The last step can be found in Abramowitz and Stegun, 1964, 9.6.27.
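As a numerical cross-check (not part of the original derivation): for $\overline{d}=1$, the Fourier/Hankel trick above implies that the two-dimensional self-convolution of ${K}_{0}$ equals $\pi r{K}_{1}(r)$, which an FFT-based convolution reproduces (grid size and tolerance below are illustrative):

```python
import numpy as np
from scipy.special import k0, k1

# Numerical check of the convolution trick for d_bar = 1: the two-dimensional
# self-convolution of K0 should evaluate to (K0 ** K0)(r) = pi * r * K1(r).
L, n = 40.0, 512
h = L / n
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
f = k0(np.maximum(R, h / 2))   # regularize the logarithmic singularity at r = 0

# circular convolution via FFT; the box is large enough that wrap-around is negligible
conv = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(f)) ** 2))) * h**2

for r in (2.5, 5.0):           # chosen to lie exactly on the grid (r = 32h, 64h)
    numeric = conv[n // 2, n // 2 + int(round(r / h))]
    analytic = np.pi * r * k1(r)
    assert abs(numeric - analytic) / analytic < 0.05
```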
The mean covariances are given by
Using
we get the effective decay constant
Exponential connectivity profile:
Using a two-dimensional exponential connectivity profile
leads to $\langle {r}^{2}\rangle =6{d}^{2}$, and we get
with ${\lambda}_{0}=Kw$, and ${R}^{2}=K{w}^{2}$.
Gaussian connectivity profile:
Using a two-dimensional Gaussian connectivity profile
leads to $\langle {r}^{2}\rangle =2{d}^{2}$, and we get
11.3 Note on higher order approximation
While the (0,2) Padé approximation seems to yield good results in the one-dimensional cases, in two dimensions the results only coincide for large spectral radii (Appendix 1—figure 7). One can extract a higher-order approximation of the poles of the integration kernel of $\mu (\mathit{x})$, and thereby the effective decay constant ${d}_{\mathrm{eff}}$, using the DLog-Padé method, for which one calculates an $(n,n+1)$ Padé approximation of the logarithmic derivative of the integration kernel around zero (Pelizzola, 1994). Using a (1,2) Padé approximation leads to
which coincides with our previous results in the limit $m(0)\to 1$, and thus for large spectral radii. Note that this expression contains the fourth moment of the connectivity kernel, ${m}^{\prime \prime \prime \prime}(0)=wK\langle {x}^{4}\rangle $.
12 Two-dimensional network with two populations
Finally, we consider a two-dimensional network with two populations of neurons. As in the one-dimensional case, the neurons are gathered in cells, which contain one inhibitory and $q$ excitatory neurons. Again, they are placed on a two-dimensional lattice with periodic boundary conditions. The activity vector takes the form
where ${\mathit{x}}_{i,j}^{(\mathrm{E})}$ denotes a $q$-dimensional vector.
12.1 Dimensionality reduction
We apply the procedure developed so far, which leads to the results we found in the one-dimensional case with two populations, with Fourier transforms and convolutions replaced by their two-dimensional analogs, and with modified Bessel functions of the second kind instead of exponentials. So, we end up with
and ${\mu}_{ab}(x)$ given by Equation (33) and the two-dimensional Fourier transform
The renormalized noise is given by Equation 34 with spectral bound Equation 35, with the eigenvalues ${s}_{ab}(k)$ replaced by the two-dimensional Fourier transforms ${s}_{ab}(\mathit{K})$.
12.2 Long-range limit
Again, considering the special case in which the synaptic connections only depend on the type of the presynaptic neuron and not on the type of the postsynaptic neuron, the expressions simplify to
with
A Padé approximation of the Fourier kernel, integration using Goldenfeld, 1992 (p. 160f), and suppressing the zero arguments of $\zeta $ and ${m}_{a}$ leads to
with
After introducing the same relative parameters as in Section 10.3, we find
The two-dimensional convolutions are given by
The renormalized noise simplifies to Equation 37. The mean covariances are given by
Remember that the result for the variances of the covariances is obtained by replacing ${D}_{\mathrm{r}}$ and ${w}_{a}$ (or $\omega $, respectively) by their squares, and setting $\zeta ={R}^{2}$.
Equation (2) in the main text can be proven by inserting the result for
Using an exponential connectivity profile yields $\mathrm{const.}=3$; a Gaussian connectivity profile yields $\mathrm{const.}=1$.
Exponential connectivity profile:
Using the results from Section 11.2, we find
with ${\lambda}_{0}={w}_{\mathrm{E}}{K}_{\mathrm{E}}+{w}_{\mathrm{I}}{K}_{\mathrm{I}}$, and ${R}^{2}={w}_{\mathrm{E}}^{2}{K}_{\mathrm{E}}+{w}_{\mathrm{I}}^{2}{K}_{\mathrm{I}}$.
Gaussian connectivity profile:
Using the results from Section 11.2, we find
12.3 Higher order approximation
Using a (1,2) DLog-Padé method as in Section 11.3 yields
which again contains the fourth moments of the connectivity kernels.
13 Validation of theory
In order to validate our results, we performed simulations in which an effective connectivity matrix $\mathit{W}$ of a two-dimensional network was drawn randomly, and covariances were calculated using the result from Pernice et al., 2011; Trousdale et al., 2012; and Lindner et al., 2005
The elements of the different components ${\mathit{W}}_{ab}$ of the effective connectivity matrix, similar to Equation (24), were drawn from a binomial distribution with ${K}_{b}$ trials and a success probability of ${\gamma}_{b}{p}_{b}(\mathit{x})$, with ${\gamma}_{b}$ given by Equation (36) and $\mathit{x}$ denoting the distance between the neurons.
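A minimal sketch of this procedure in one dimension (illustrative parameters; the actual validation uses a two-dimensional torus with excitatory and inhibitory populations, and we use zero-mean weight fluctuations here as a simplification to keep the spectral bound below one):

```python
import numpy as np

# Sketch of the validation procedure on a ring (illustrative parameters).
# Covariances follow from c = (1 - W)^(-1) D (1 - W)^(-T)
# (Pernice et al., 2011; Trousdale et al., 2012; Lindner et al., 2005).
rng = np.random.default_rng(0)
N, K, d = 200, 10, 20.0                       # neurons, expected in-degree, profile decay constant

dist = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
dist = np.minimum(dist, N - dist)             # distance on the ring
prob = np.exp(-dist / d)
prob *= K / prob.sum(axis=1, keepdims=True)   # normalize to expected in-degree K

w = 0.7 / np.sqrt(K * (1 - K / N))            # weight scale keeping the spectral bound below one
# zero-mean connectivity fluctuations (simplification; the paper uses E-I balance instead)
W = w * (rng.binomial(1, np.clip(prob, 0.0, 1.0)) - prob)
np.fill_diagonal(W, 0.0)

R = np.max(np.abs(np.linalg.eigvals(W)))      # realized spectral bound
assert R < 1.0                                # linearly stable regime

B = np.linalg.inv(np.eye(N) - W)
C = B @ B.T                                   # covariance matrix for unit input noise D = 1
assert np.allclose(C, C.T)
```

As noted below for the full simulations, the realized spectral bound jitters around its target value from one random realization to the next.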
We compared the results to the predictions of our discrete theory, the continuum theory, and the long-range limit. We did this for all cases presented above: one dimension with one population, one dimension with two populations, two dimensions with one population, and two dimensions with two populations. In the two-population cases, we considered only the special case of synaptic connections depending solely on the type of the presynaptic neuron. The first three cases are not reported here. We simulated several sets of parameters, varying the number of neurons, the number of inputs, the decay constants, and the spectral bound; we report only the set using the parameters listed in Appendix 1—table 3, because the results do not differ qualitatively. Using
and choosing
we calculated the synaptic weights
The comparison of simulation and discrete theory is shown in Appendix 1—figure 7a. Simulation and theory match almost perfectly. The continuum theory, which is shown in Figure 3D,E of the main text, matches as well as the discrete theory (not shown here). The slight shift in the $y$-direction in Appendix 1—figure 7a is due to the fact that in a random realization of the network the spectral bound does not exactly match the desired value, but differs slightly for each realization and is distributed around the chosen value. This jitter around the targeted spectral bound is more pronounced as $R\to 1$. Note that the simulated networks were small compared to the decay constant of the connectivity profile, in order to keep simulation times reasonable. This is why the variances do not fall off linearly in the semi-log plot. The kink and the related spreading starting around $x/d=1.5$ is a finite-size effect due to periodic boundary conditions: The maximal distance between two neurons along the axes, in units of spatial decay constants, is $(N/2)/d\approx 1.5$. Because of the periodic boundary conditions, the covariance between two neurons increases once the distance between them exceeds the maximal distance along an axis. This, together with the fact that the curve is the result of the discrete Fourier transform of Equation 52, implies a zero slope at the boundary. This holds for any direction in the two-dimensional plane, but the maximal distance between two neurons is larger for directions not aligned with an axis and depends on the precise direction, which explains the observed spreading.
In order to validate the long-range limit, we compared our discrete theory with the result from the Padé approximation at large distances (Appendix 1—figure 7b). We do not expect the Padé approximation to hold at small distances. We are mainly interested in the slope of the variance of covariances, because the slope determines how fast typical pairwise covariances decay with increasing inter-neuronal distance. The slope at large distances for the (0,2) Padé approximation is smaller than the prediction of our theory, but the higher-order approximation matches our theory very well (Appendix 1—figure 7c). In the limit $R\to 1$, both Padé predictions yield similar results. The absolute value of the covariances in the Padé approximation can be obtained from a residue analysis. The (0,2) Padé approximation yields absolute values with a small offset, analogous to the slope results. Calculating the residues for the (1,2) DLog-Padé approximation would lead to a better approximation. Note that for plotting the higher-order prediction in Appendix 1—figure 7b, we just inserted Equation (57) into Equation (53) and Equation (55).
14 Parameters of NEST simulation
15 Sources of heterogeneity
Sparseness of connections is a large source of heterogeneity in cortical networks. It contributes strongly to the variance of effective connection weights, which determines the spectral bound, the quantity that controls stability of balanced networks (Sompolinsky et al., 1988; Dahmen et al., 2019): Consider the simple model ${W}_{ij}={a}_{ij}{\zeta}_{ij}$ for the effective connection weights ${W}_{ij}$, where ${\zeta}_{ij}\in \{0,1\}$ are independent Bernoulli variables, which are 1 with probability $p$ and 0 with probability $1-p$, and ${a}_{ij}$ are independently distributed amplitudes. The ${\zeta}_{ij}$ encode the sparseness of connections, and the ${a}_{ij}$ encode the experimentally observed distributions of synaptic amplitudes and single-neuron heterogeneities that lead to different neuronal gains. Since ${a}_{ij}$ and ${\zeta}_{ij}$ are independent, the variance of ${W}_{ij}$ is
$$\mathrm{Var}({W}_{ij})=p\,\mathrm{Var}({a}_{ij})+p(1-p)\,{\langle {a}_{ij}\rangle}^{2}.$$
For low connection probabilities as observed in cortex ($p(1-p)\approx p$), assessing the different contributions to the variance thus amounts to comparing the mean and standard deviation of the amplitudes ${a}_{ij}$. Even though synaptic amplitudes are broadly distributed in cortical networks, one typically finds that their mean and standard deviation are of the same magnitude (see e.g. Sayer et al., 1990, Tab. 1; Feldmeyer et al., 1999, Tab. 1; Song et al., 2005, Fig. 1; Lefort et al., 2009, Tab. 2; Ikegaya et al., 2013, Fig. 1; Loewenstein et al., 2011, Fig. 2). Sparseness of connections (second term on the right-hand side) is thus one of the dominant contributors to the variance of connections. For simplicity, the other sources, in particular the distribution of synaptic amplitudes, are omitted in this study. They can, however, be straightforwardly added to the model and the theoretical formalism, because the latter only depends on $\mathrm{Var}({W}_{ij})$.
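A quick Monte-Carlo check of this variance decomposition for diluted weights (the lognormal amplitude distribution and all parameters are illustrative; $\sigma \approx 0.83$ makes the mean and standard deviation of the amplitudes approximately equal, in line with the experimental reports cited above):

```python
import numpy as np

# Monte-Carlo check of the variance of diluted weights W = amp * zeta, with
# zeta ~ Bernoulli(p) and independent amplitudes amp (illustrative lognormal
# choice; sigma ~ 0.83 gives mean(amp) approximately equal to std(amp)).
rng = np.random.default_rng(2)
n, p = 1_000_000, 0.1
amp = rng.lognormal(mean=0.0, sigma=0.83, size=n)
zeta = rng.binomial(1, p, size=n)
W = amp * zeta

# Var(W) = p * Var(amp) + p * (1 - p) * <amp>^2   (independence of amp and zeta)
analytic = p * amp.var() + p * (1 - p) * amp.mean() ** 2
assert abs(W.var() - analytic) / analytic < 0.05

# for small p, the dilution term is comparable to the amplitude term
assert p * (1 - p) * amp.mean() ** 2 > 0.3 * p * amp.var()
```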
Data availability
All code and data required to reproduce the figures are available in a public Zenodo repository at https://zenodo.org/record/5524777. Source data/code files are also attached as zip folders to the individual main figures of this submission.
References

Book: Abramowitz M, editors. Handbook of Mathematical Functions. Applied Mathematics Series. National Bureau of Standards. pp. 55–57.

Transition to chaos in random networks with cell-type-specific connectivity. Physical Review Letters 114:088101. https://doi.org/10.1103/PhysRevLett.114.088101

Low-dimensional dynamics of structured random networks. Physical Review E 93:022302. https://doi.org/10.1103/PhysRevE.93.022302

Correlated states in balanced neuronal networks. Physical Review E 99:052414. https://doi.org/10.1103/PhysRevE.99.052414

The Padé Approximation and its Physical Applications. Fortschritte der Physik 20:283–331. https://doi.org/10.1002/prop.19720200502

Neural network model of the primary visual cortex: from functional architecture to lateral connectivity and back. Journal of Computational Neuroscience 20:219–241. https://doi.org/10.1007/s108270066307y

Predicting how and when hidden neurons skew measured synaptic interactions. PLOS Computational Biology 14:e1006490. https://doi.org/10.1371/journal.pcbi.1006490

Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. Journal of Computational Neuroscience 8:183–208. https://doi.org/10.1023/a:1008925309027

On the Complexity of Resting State Spiking Activity in Monkey Motor Cortex. Cerebral Cortex Communications 2:tgab033. https://doi.org/10.1093/texcom/tgab033

Strength of Correlations in Strongly Recurrent Neuronal Networks. Physical Review X 8:031072. https://doi.org/10.1103/PhysRevX.8.031072

Long-range focal collateralization of axons arising from corticocortical cells in monkey sensory-motor cortex. The Journal of Neuroscience 6:3749–3766.

Book: Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences. No. 13 in Springer Series in Synergetics. Springer.

Interruption of motor cortical discharge subserving aimed arm movements. Experimental Brain Research 49:327–340. https://doi.org/10.1007/BF00238775

Theory of correlations in stochastic neural networks. Physical Review E 50:3171–3191. https://doi.org/10.1103/PhysRevE.50.3171

A unified view on weakly correlated recurrent networks. Frontiers in Computational Neuroscience 7:131. https://doi.org/10.3389/fncom.2013.00131

Decoding continuous and discrete motor behaviors using motor and premotor cortical ensembles. Journal of Neurophysiology 92:1165–1174. https://doi.org/10.1152/jn.01245.2003

Echoes in correlated neural systems. New Journal of Physics 15:023002. https://doi.org/10.1088/13672630/15/2/023002

The correlation structure of local neuronal networks intrinsically results from recurrent dynamics. PLOS Computational Biology 10:e1003428. https://doi.org/10.1371/journal.pcbi.1003428

Book: Statistical Field Theory for Neural Networks. Cham: Springer International Publishing. https://doi.org/10.1007/9783030464448

Cross-correlations in high-conductance states of a model cortical network. Neural Computation 22:427–447. https://doi.org/10.1162/neco.2009.0608806

Roles of monkey premotor neuron classes in movement preparation and execution. Journal of Neurophysiology 104:799–810. https://doi.org/10.1152/jn.00231.2009

The roles of monkey M1 neuron classes in movement preparation and execution. Journal of Neurophysiology 110:817–825. https://doi.org/10.1152/jn.00892.2011

Reconstructing neuronal circuitry from parallel spike trains. Nature Communications 10:4468. https://doi.org/10.1038/s41467019122252

How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime. Frontiers in Computational Neuroscience 7:187. https://doi.org/10.3389/fncom.2013.00187

Slow dynamics and high variability in balanced cortical networks with clustered connections. Nature Neuroscience 15:1498–1505. https://doi.org/10.1038/nn.3220

Stimuli Reduce the Dimensionality of Cortical Activity. Frontiers in Systems Neuroscience 10:11. https://doi.org/10.3389/fnsys.2016.00011

Cluster variation method, Padé approximants, and critical behavior. Physical Review E 49:R2503–R2506. https://doi.org/10.1103/PhysRevE.49.R2503

How structure determines correlations in neuronal networks. PLOS Computational Biology 7:e1002059. https://doi.org/10.1371/journal.pcbi.1002059

Recurrent interactions in spiking networks with arbitrary topology. Physical Review E 85:031916. https://doi.org/10.1103/PhysRevE.85.031916

Electrophysiological monitoring of inhibition in mammalian species, from rodents to humans. Neurobiology of Disease 130:104500. https://doi.org/10.1016/j.nbd.2019.104500

Dimensionality in recurrent spiking networks: Global trends in activity and local origins in connectivity. PLOS Computational Biology 15:e1006446. https://doi.org/10.1371/journal.pcbi.1006446

Distinct recurrent versus afferent dynamics in cortical visual processing. Nature Neuroscience 18:1789–1797. https://doi.org/10.1038/nn.4153

Behavioral Context Determines Network State and Variability Dynamics in Monkey Motor Cortex. Frontiers in Neural Circuits 12:52. https://doi.org/10.3389/fncir.2018.00052

The spatial structure of correlated neuronal variability. Nature Neuroscience 20:107–114. https://doi.org/10.1038/nn.4433

Correlated neuronal activity and the flow of neural information. Nature Reviews Neuroscience 2:539–550. https://doi.org/10.1038/35086012

The time course and amplitude of EPSPs evoked at synapses between pairs of CA3/CA1 neurons in the hippocampal slice. The Journal of Neuroscience 10:826–836.

Physiology and Impact of Horizontal Connections in Rat Neocortex. Cerebral Cortex 25:3818–3835. https://doi.org/10.1093/cercor/bhu265

Distributed network interactions and their emergence in developing neocortex. Nature Neuroscience 21:1600–1608. https://doi.org/10.1038/s4159301802475

Chaos in random neural networks. Physical Review Letters 61:259–262. https://doi.org/10.1103/PhysRevLett.61.259

A neural network that finds a naturalistic solution for the production of muscle activity. Nature Neuroscience 18:1025–1033. https://doi.org/10.1038/nn.4042

Decorrelation of neural-network activity by inhibitory feedback. PLOS Computational Biology 8:e1002596. https://doi.org/10.1371/journal.pcbi.1002596

Synchronous Spike Patterns in Macaque Motor Cortex during an Instructed-Delay Reach-to-Grasp Task. Journal of Neuroscience 36:8329–8340. https://doi.org/10.1523/JNEUROSCI.437515.2016

Impact of network structure and cellular response on spike time correlations. PLOS Computational Biology 8:e1002408. https://doi.org/10.1371/journal.pcbi.1002408

Book: Introduction to Theoretical Neurobiology. Cambridge University Press. https://doi.org/10.1017/CBO9780511623271

Chaotic balanced state in a model of cortical circuits. Neural Computation 10:1321–1371. https://doi.org/10.1162/089976698300017214

Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Journal of Neurophysiology 102:614–635. https://doi.org/10.1152/jn.90941.2008
Article and author information
Author details
Funding
Helmholtz Association (VH-NG-1028)
 David Dahmen
 Moritz Helias
European Commission (HBP (785907 945539))
 David Dahmen
 Moritz Layer
 Lukas Deutz
 Nicole Voges
 Michael von Papen
 Markus Diesmann
 Sonja Grün
 Moritz Helias
Deutsche Forschungsgemeinschaft (368482240/GRK2416)
 Moritz Layer
 Markus Diesmann
 Moritz Helias
Agence Nationale de la Recherche (GRASP)
 Thomas Brochier
 Alexa Riehle
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was partially supported by the HGF young investigator's group VH-NG-1028, the European Union's Horizon 2020 research and innovation program under Grant agreements No. 785907 (Human Brain Project SGA2) and No. 945539 (Human Brain Project SGA3), ANR grant GRASP, and partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) 368482240/GRK2416. We are grateful to our colleagues in the NEST and Elephant developer communities for continuous collaboration. All network simulations were carried out with NEST 2.20.0 (http://www.nest-simulator.org). All data analyses were performed with Elephant (https://neuralensemble.org/elephant/). We thank Sebastian Lehmann for help with the design of the figures.
Ethics
All animal procedures were approved by the local ethical committee (C2EA 71; authorization A1/10/12) and conformed to the European and French government regulations.
Version history
 Preprint posted: July 15, 2020 (view preprint)
 Received: March 15, 2021
 Accepted: November 18, 2021
 Version of Record published: January 20, 2022 (version 1)
Copyright
© 2022, Dahmen et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.