Abstract
Neuronal spiking activity in cortical circuits is often temporally structured by collective rhythms. Rhythmic activity has been hypothesized to regulate temporal coding and to mediate the flexible routing of information flow across the cortex. Spiking neuronal circuits, however, are nonlinear systems that, through chaotic dynamics, can amplify insignificant microscopic fluctuations into network-scale response variability. In nonlinear systems in general, rhythmic oscillatory drive can induce chaotic behavior or boost the intensity of chaos. Thus, neuronal oscillations could disrupt rather than facilitate cortical coding functions by flooding the finite population bandwidth with chaotically boosted noise. Here we tackle a fundamental mathematical challenge: characterizing the dynamics on the attractor of effectively delayed network models. We find that delays introduce a transition to collective oscillations, below which ergodic-theory measures have a stereotypical dependence on the delay so far described only in scalar systems and low-dimensional maps. We demonstrate that the emergence of internally generated oscillations induces a complete dynamical reconfiguration, increasing the dimensionality of the chaotic attractor, the speed at which nearby trajectories separate from one another, and the rate at which the network produces entropy. We find that periodic input drive leads to a dramatic increase of chaotic measures at the resonance frequency of the recurrent network. Transient oscillatory input, however, only moderately affects the collective dynamics. Our results suggest that simple temporal dynamics of the mean activity can have a profound effect on the structure of the spiking patterns and therefore on the information processing capability of neuronal networks.
Cortical spiking patterns are unreliable and vary from trial to trial in response to identical stimuli (1, 2). Multiple factors contribute to this noise entropy – variability unrelated to the stimulus or to the network state – from network-based deterministic chaos (3–6) to stochasticity in action potential generation (7) and transmission (8). Noise entropy necessarily interferes with the robust representation of spiking patterns and code-word-based information flow, and raises the question of which mechanisms cortical networks might implement to represent and transmit information in the presence of this undesired variability. One possibility is that cortex integrates each cell's spikes over a window of a few milliseconds to represent stimuli. In such a rate code, spurious changes in spiking patterns contribute to asynchronous irregular dynamics with low correlations. However, if precisely timed spike patterns are the code used by cortical circuits, spiking variability is an undesired feature, and other network mechanisms should exist to control it.
Neuronal oscillations have been proposed to be a backbone organizing neuronal firing (9) that presumably quenches spike-pattern variability by reducing the number of possible network configurations. Transient oscillatory (but not strongly synchronous) activity can aid the flexible gating of information flow (10), but whether any type of oscillatory activity is able to tame noise entropy is still an unanswered question. In fact, from a theoretical perspective, whether noise entropy due to chaotic dynamics is intensified or quenched by collective oscillations is not easily predictable. First, the internally generated population oscillation can be thought of as a common harmonic drive to all the cells in the network. Generally, whether a harmonic drive will intensify or quench chaos in a given system cannot be known a priori, and both effects have been observed (11–14). Second, oscillatory activity has been theoretically (15–20) and experimentally (21–24) traced back to delayed recurrent inhibition, whose effect is also difficult to predict: On the one hand, delayed interactions can induce highly synchronized and therefore highly structured activity (25–27), possibly shrinking the dimensionality of the dynamics and quenching variability. On the other hand, the dimensionality of the chaotic attractor increases linearly with increasing delays (28, 29) in simple systems, and similar dependencies may well occur in more complex systems than the ones studied so far.
Here we investigate whether the emergence of collective oscillations at the population level quenches noise entropy by synchronizing neuronal firing, or instead contributes to an expansion of the number of possible spike-pattern configurations explored by the circuit by intensifying chaos. To do so, we advance the tractability of large spiking networks of exactly solvable neuronal models by developing a strategy that allows analyzing the dynamics of delayed spiking networks in a system with fixed and finite degrees of freedom. By building a two-stage model of action potential generation and transmission, we map the infinite-dimensional system to a finite one and semi-analytically compute the entire spectrum of Lyapunov exponents, whose positive sum is a direct proxy for the network's noise entropy (6). We find that when delays are small, the dimensionality of the attractor increases linearly with the delay while the noise entropy stays constant, analogously to what has been reported in simplified delayed systems (28, 29). Larger delays destabilize the asynchronous irregular dynamics of the spiking network, giving rise to population oscillations in the beta frequency range (∼ 20 Hz). We show that this transition occurs via a Hopf bifurcation of the mean activity at a critical set of parameters that can be computed exactly, analogous to what is reported in network models without a dynamical mechanism for action potential generation (15–18). Unexpectedly, the noise entropy, together with the dimensionality of the chaotic attractor, is boosted at the oscillatory instability. To investigate the mechanism underlying this effect, we drive a network with a periodic external input that is, to first order, analogous to the one the network receives at the oscillatory transition. We find that the external periodic drive dramatically boosts noise entropy in the neighborhood of the resonant frequency of the network, and quenches it otherwise. A more realistic drive, in which collective activity has short transients of oscillatory activity with drifting frequencies, leads to only moderate changes in the noise entropy. Our results indicate that simple dynamics of the mean activity can have a profound effect on the chaotic dynamics of the network, and that realistic transient synchrony signals could, beyond flexibly gating information transmission, prevent the boosting of undesired variability in the cortex.
Results
Delayed networks with a finite-dimensional phase space
Spiking networks have an innate source of instability, the generation of the action potential, which leads to chaotic dynamics even when the statistics of the population firing rate are independent of time (3, 6). The noise entropy generated by these networks, present even in the absence of any stochastic source of variability, can be computed through the spectrum of Lyapunov exponents (6) (see Methods). Therefore, given a set of parameters that control the transition to oscillations in the network, we can ask: does the onset of collective rhythms imply a reduction of the intensity of chaos and a more robust representation of the network's inputs? Or does it imply the opposite?
One well-known parameter controlling the transition to collective oscillations in brain-like inhibitory-dominated networks is the synaptic delay (30, 31) (Fig. 1 a-b). This parameter, despite being inescapable in biology, introduces a new degree of complexity: even simple scalar differential equations (Fig. 1c) are transformed into an infinite-dimensional system when incorporating delayed feedback (Fig. 1d). Whether a discrete spectrum of Lyapunov exponents generally exists in delayed systems, and whether it can be computed exactly, remains poorly understood. The most general strategy proposed so far suggests binning the delay period and expanding the dimensionality of the system by the number of bins considered (32). However, this procedure becomes arbitrarily costly with increasing resolution, even in scalar equations.
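To make the cost of the binning strategy concrete, the following minimal sketch (Python; not part of the original analysis) integrates a scalar delayed equation by discretizing the delay line into bins, so that the effective state has 1 + n_bins degrees of freedom. The equation, parameters and function names are illustrative only:

```python
import numpy as np

def simulate_dde(J=-2.0, tau=1.0, delta=1.0, n_bins=100, T=50.0):
    """Scalar delayed equation tau*dx/dt = -x + J*tanh(x(t - delta)),
    integrated by discretizing the delay line into n_bins bins.
    The effective state has 1 + n_bins degrees of freedom."""
    dt = delta / n_bins               # bin width sets the integration step
    steps = int(T / dt)
    history = np.full(n_bins, 0.1)    # circular buffer: x over one delay period
    x, head = 0.1, 0
    traj = np.empty(steps)
    for s in range(steps):
        x_delayed = history[head]     # value from delta time units ago
        x += dt * (-x + J * np.tanh(x_delayed)) / tau
        history[head] = x             # overwrite the oldest entry
        head = (head + 1) % n_bins
        traj[s] = x
    return traj
```

Halving the bin width doubles the dimensionality of the effective system, which is why this route becomes arbitrarily costly with increasing resolution.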
We developed a strategy that allows studying the impact of collective oscillations on the properties of the network's chaos, using delayed recurrent inhibition as the control parameter, in a numerically exact and efficient way. We include delays dynamically by designing a two-stage single-neuron model composed of a soma, modeled as a quadratic integrate-and-fire neuron (ϕ), and a single-compartment axon (SCA, ξ; see Fig. 2a, Methods, and Fig. S1 of the Supplementary Information). The SCA is modeled as an excitable spiking unit, resting in a steady state, which depolarizes at spike arrival. Post-synaptic delays are introduced as the time it takes for the axon to spike and relay its input. This strategy allows mapping the infinite-dimensional system to a system with 2N dimensions, under the assumption that the inter-spike interval is much larger than the delay δ. Numerically exact, event-based simulations of a network of inhibitory neurons with short delays (see below for E-I networks) exhibit the characteristic dynamics of non-delayed balanced-state networks, with asynchronous irregular spiking patterns (Fig. 2b), broad firing-rate distributions (Fig. 2c) and a large coefficient of variation of the inter-spike intervals (Fig. 2d).
We compute the entire spectrum of Lyapunov exponents by deriving an analytical expression for the Jacobian of the effectively delayed network, which is evaluated numerically at each spike time. The Jacobian's non-trivial elements characterize either the change in the state of a post-synaptic soma in response to an infinitesimal change in the state of the pre-synaptic SCA (Fig. 2e, top) or the change in the state of an SCA in response to an infinitesimal change in the state of the pre-synaptic soma (Fig. 2e, bottom; see Methods and Supplementary Information for an in-depth description).
The spectrum of exponents is shown in Figure 2g (see Fig. S2 of the Supplementary Information for different parameter choices of the SCA and Fig. S3 for convergence). The spectrum is invariant to network size N, hinting at a continuous density of Lyapunov exponents for infinitely large networks (Fig. 2h), with a network-size-independent first Lyapunov exponent (Fig. 2i). The attractor dimension (Fig. 2j) and the metric entropy, a proxy for the noise entropy in the network (6) (Fig. 2j), grow linearly with network size, consistent with a network-size-invariant spectrum, which invites defining intensive measures of dimension and metric entropy, the latter in units of bits per second.
Boosting and collapse of chaos
As expected, sufficiently long delays destabilize the asynchronous irregular state in balanced-state networks of QIF neurons and lead to a state of collective oscillations in which the irregular dynamics at the single cell level is preserved (Fig. 3a). The mean square deviation (MSD, see Methods) of the mean firing rate has a threshold-linear dependence on the delay, which is the signature of a Hopf bifurcation at the level of the population mean, and is consistent with previously described oscillatory transitions to analogous states in related systems (15, 18, 19, 31, 34). The critical delay δMSD that indicates the onset of oscillatory activity depends on the parameters of the network, decreasing with increasing connection strength and target mean firing rate, and becoming arbitrarily small for large in-degree (see below). A second instability, now to full network synchrony, occurs at a second critical delay. The collective oscillation, in the beta range up to that point, collapses to a fully synchronous state with a frequency defined by the network’s target mean firing rate of 5 Hz (Fig. 3c).
Surprisingly, we find that, contrary to the intuition of increased regularity in oscillatory networks, chaos is boosted at the (first) oscillatory transition. The maximum Lyapunov exponent (LE) decreases monotonically with increasing delay up to the oscillatory instability (Fig. 3d), together with around 6% of the LEs (Fig. 3e; see also Fig. S4 of the Supplementary Information). Simultaneously, the number of positive LEs increases monotonically (Fig. S4a). These competing effects produce the remarkable fact that the metric entropy remains independent of the delay up until the oscillatory instability (Fig. 3f, compare to dashed line). The attractor dimension, on the other hand, is dominated by the number of positive LEs and increases monotonically up to the transition (Fig. 3g). Interestingly, up to the oscillatory instability the ergodic properties of the pulse-coupled high-dimensional network studied here match the dependencies reported for low-dimensional systems (and also those of rate networks, see below): with increasing delay, the maximum Lyapunov exponent decreases and the attractor dimension increases, while the metric entropy is independent of the delay (28, 35, 36).
At the critical delay, all chaotic measures intensify. Examining a fraction of the Lyapunov spectrum as a function of the delay (Fig. 3e) reveals that (i) all the positive LEs increase in value after the transition, and (ii) the total number of positive LEs initially increases after the first oscillatory transition but later decreases (Fig. S4a). Given the increase in both the magnitude and the number of positive LEs at the transition, the entropy and the attractor dimension can only increase there (Fig. 3f,g). Although the positive LEs grow larger in magnitude, their number decreases soon after the transition; the uneven contribution of these opposing effects gives rise to the non-monotonic behavior of the entropy and the attractor dimension. These dependencies are not a finite-size effect (see Fig. S3d), and we have not observed signs of bistability (37).
We compare the original, infinite-dimensional delayed system with the equivalent finite system studied here by self-consistently computing the instability boundary in parameter space (following Ref. (38), see Methods). We compute the critical value of the delay at which the mean firing rate (in the original, delayed system) undergoes a Hopf bifurcation, and compare it to the values estimated numerically from the dependence on the delay of the MSD, δMSD (Fig. 4c), the first LE, δLE (Fig. 4d), and the metric entropy, δH (Fig. 4e). Concretely, we compute the delay at which the derivative of the maximum LE with respect to the delay changes from negative to positive, the delay at which the derivative of the metric entropy departs from zero, and the delay at which the derivative of the attractor dimension departs from a constant value. The accuracy of the match in panels (c-e) depends both on the estimation of the critical parameters from ergodic-theory quantities, which are sensitive to simulation size (see Fig. 4), and on details of the implementation of the self-consistent estimation of the critical delay in the original system (see also Fig. S5 in the Supplementary Information). Despite the assumption of fixed in-degree implicit in the semi-analytic method used here to compute the critical delay (38), which is broken in our simulations, we find good agreement.
Does the emergence of collective oscillations always lead to a boosting of chaos? We test two other mechanisms that induce oscillations in the network. First, as suggested in Fig. 4, increasing the average degree (K) can induce oscillations in this network even in the absence of delayed interactions. Supplementary Figure S6 demonstrates that all indicators of chaos (λ1, d and h) also increase at the oscillatory transition as a function of the mean degree K. Second, we find that in non-delayed excitatory-inhibitory networks there is a transition to a balanced oscillatory regime with increasing inhibitory time constant, in agreement with what is reported in binary networks (39). We find that, right after the transition to collective oscillations, all indicators of chaos are intensified. Taken together, our results suggest that the transition from asynchronous irregular activity to collective oscillations, although only slightly changing the single-neuron spiking statistics, induces a full reconfiguration of the network dynamics by simultaneously increasing the maximum Lyapunov exponent, the metric entropy and the dimension of the chaotic attractor.
Single-cell heterogeneity perpetuates chaos
In a network in which all neurons are identical and receive statistically identical inputs, the only source of disorder is the sparse connectivity. Increasing delays in such networks pushes neurons to align their activity to the ongoing rhythm until the dynamics of the network collapses to a limit cycle (Fig. 3, δ ≈ 2.1 ms).
How robust is this pathological state of full synchrony? Would it be observable in cortex, where cell and cell-type heterogeneity are abundant? We find that the inclusion of even moderate amounts of single-cell heterogeneity not only curbs the synchronization capability of the network (16, 40, 41) (Fig. 5a), but also safeguards a biophysically plausible regime by eliminating the second oscillatory instability that leads to a pathological full-synchrony state (Fig. 5b). In this case, the maximum LE (and also the coefficient of variation; see Figs. S9 and S10 of the Supplementary Information) peaks at the delay that would have led to a full-synchrony state in the absence of heterogeneity. The network, instead of collapsing to a limit cycle, retains the irregular firing patterns and the microscopic chaos associated with them for delays as long as the membrane time constant (Fig. 5b-h; see also Fig. S8 in the Supplementary Information). The ghost of the second oscillatory transition marks a regime in which increasing delays can only further align the neurons to the ongoing rhythm, resulting in a vanishingly small metric entropy for large delays (Fig. 5f). Interestingly, for large delays the maximum LE can have the same value as with no delay while the metric entropy undergoes a ten-fold decrease, indicating that simple numerical estimates of the maximum LE cannot reveal the full extent of the dynamical reconfiguration the network undergoes after the second oscillatory transition. These results are robust across all network parameters (connection strength, target mean firing rate, and average degree K), as shown in Figures S9 and S10 of the Supplementary Information.
The presence of strong recurrent excitation introduces a new source of network instability. Stable cortical circuits that would be unstable in the absence of feedback inhibition are called inhibition stabilized (42, 43), with balanced networks being one subclass of those. We next asked whether this more general class of model would also exhibit boosting of chaos at the oscillatory transition. We study the two-cell-type case by including recurrent excitation in a four-to-one ratio, as observed in cortex (44) (Fig. 5e). We require identical input current statistics for inhibitory and excitatory neurons (see Methods), and identical delays within and across the populations. To guarantee the existence of a balanced state, we used a parametrization (3) that relies on the strength of the E-to-I connections, ϵ. We find that the strength of the E-I loop has a stabilizing effect on the asynchronous irregular regime, requiring increasingly longer delays for the oscillatory instability to occur. We note that when the inputs to the different cell-type populations are identical, as in this case, the network synchronizes without a lag between the E and I populations (19). With 80% of the neurons replaced by excitatory neurons, the network is comparatively less chaotic for moderate delays (Fig. 5g-i, compare dashed dark-grey line with colored ones). On the other hand, for a fixed number of inhibitory neurons, increasing the intensity of the E-I feedback loop ϵ intensifies chaos for all delays (Fig. 5g-i, compare dashed light-grey line with colored ones; see also Fig. S11 of the Supplementary Information).
Absence of oscillatory boosting in rate networks
Next, we asked whether rate networks, a fundamentally different type of neuronal network model, would exhibit a transition to collective oscillations with delayed interactions, and if so, whether this transition would lead to an increase in noise entropy. We analyze a variety of both classic (45) and purely inhibitory rate networks (46). We incorporate delays perturbatively by expanding the delayed activity around the non-delayed activity in powers of the delay up to order k (see Methods and Section 3 of the Supplementary Information); we refer to the resulting models as weakly delayed rate networks. This expansion increases the degrees of freedom of a non-delayed network of N neurons by N additional variables per expansion order. A scheme of the effective network after a second-order expansion is shown in Supplementary Figure S13a.
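As an illustration of the perturbative treatment, the sketch below (Python) uses a simplified first-order closure rather than the auxiliary-variable construction of Section 3 of the Supplementary Information; the rate equation, nonlinearity and names are assumptions for illustration only:

```python
import numpy as np

def step_weakly_delayed(x, J, delta, dt, phi=np.tanh):
    """One Euler step of dx/dt = -x + J @ phi(x(t - delta)), with the
    delayed activity expanded to first order in the delay:
    x(t - delta) ~ x(t) - delta * dx/dt, closed with the zeroth-order drift."""
    drift0 = -x + J @ phi(x)            # zeroth-order (non-delayed) drift
    x_delayed = x - delta * drift0      # first-order estimate of x(t - delta)
    return x + dt * (-x + J @ phi(x_delayed))
```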
We analyze the dependence on the delay for the balanced inhibitory network and find that, although it has a transition from a chaotic regime (Fig. S14a, top panel) to clock-like synchrony (Fig. S14a, bottom) after a critical delay, as in the spiking network without heterogeneity, we did not observe a coexistence of chaos with a collective oscillation at intermediate delays (unlike Ref. (47), in the presence of feedback). We computed the entire spectrum of Lyapunov exponents of the classic (Fig. S13) and the inhibitory network (Fig. S14) in the standard way (see Ref. (48)) as a function of the synaptic delay. We find that, before the collapse to a limit cycle in the inhibitory network, and for all the delays analyzed in the classic network, the maximum Lyapunov exponent decreases (Figs. S13e and S14c), the metric entropy is constant (Figs. S13g and S14e), and the attractor dimension increases linearly with the delay (Figs. S13f and S14d), as we observed in spiking networks before the first oscillatory instability, and as was described for scalar differential equations and low-dimensional maps (28, 29).
Resonant boosting of chaos
If the mere existence of collective oscillations were a sufficient condition for an intensification of all chaotic measures, then driving the network with an inhomogeneous Poisson process, harmonically modulated at frequencies below the cutoff (38, 49), should induce oscillations at the population level and intensify chaos as quantified by all the ergodic measures described above.
To test this hypothesis, we first include an external harmonic drive to a non-delayed, purely inhibitory network. Each neuron receives, besides the recurrent input, an external input composed of K spike trains sampled from an inhomogeneous Poisson process with a harmonically modulated rate of baseline I0, amplitude I1, and frequency f (Fig. 6a, left). We find that increasing the amplitude of the harmonic drive successfully induces population oscillations (Fig. 6a, right panels), as apparent from the raster plots. Interestingly, for low frequencies, even if there is a collective rhythm, chaos is not intensified but reduced instead. The larger the amplitude of the oscillatory modulation of the incoming spike trains, the more strongly the first Lyapunov exponent (Fig. 6b), the metric entropy (Fig. 6c) and the attractor dimension (Fig. 6d) are reduced. Reduction of chaos by an external noisy input has been observed previously in systems very similar to the one studied here (6) and has been extensively characterized both analytically and numerically in rate networks (14, 48, 50–52).
The network has a non-monotonic response to oscillatory inputs of increasing frequency, as indicated by the pronounced peak of the synchronization index in Fig. ??b. If we were to increase the coupling strength of the network to a critical value of K, this peak would develop into a full resonance (38). The self-sustained oscillations at the population level shown in Fig. 3 emerge when the drive that generates this resonance is itself internally generated. Here we show that, for coupling weaker than the critical value, driving the network with an oscillatory input already reveals signatures of this developing resonance, and that being in the neighborhood of the resonance is a sufficient condition for chaos to be intensified.
The network response to the weakest amplitude of the modulation (peach line, I1 = 0.1) is the scenario that best captures the activity of the network at the onset of the oscillatory instability in the non-driven network. In that case, the maximum Lyapunov exponent (Fig. 6b), the metric entropy (Fig. 6c) and the attractor dimension (Fig. 6d) all peak at a drive frequency of 21 Hz (black guiding line).
The frequencies of the external input at which the maximum Lyapunov exponent, the metric entropy and the attractor dimension switch from being suppressed by the external drive to being boosted by it differ across measures. This can be better understood by examining how the entire Lyapunov spectrum depends on the drive frequency (Supplementary Fig. S12c). With increasing frequency of the external drive, the positive Lyapunov exponents become more positive, but the negative ones become more negative already at lower frequencies and grow more strongly than the positive ones, resulting in a sharp decrease of the attractor dimension (fewer negative exponents are needed to cancel the sum that defines the dimension; see Fig. 2f).
Oscillatory activity in the cortex is not perfectly periodic; it is usually confined to short episodes with frequencies that fluctuate over time (53). This type of transient oscillatory activity can be leveraged to flexibly gate information between areas (10), but whether a network driven by such transient oscillatory activity exhibits enhanced or reduced chaos remained unknown. Here we asked whether driving a network with transient oscillatory bursts would lead to an increase in noise entropy, or whether the broad spectrum of these signals would recruit suppressive responses, resulting either in weak modulation or in overall suppression. We generated an inhomogeneous Poisson process with a rate modulated by transient oscillatory bursts, simulated as described in (10), and drove a non-delayed network with this drive (Fig. 6d, left). We changed the peak frequency of the signals by re-scaling the temporal axis (see Methods). We found that this type of drive largely cancels the intensification of chaos at the resonant frequency. The dependence of the maximum Lyapunov exponent on the peak frequency of the oscillatory bursts of the input drive is shown in Figure 6fii: the maximum Lyapunov exponent is only significantly modulated in the neighborhood of the resonant frequency, showing only a 5% increase rather than the almost 40% increase of the harmonic-drive case. The metric entropy is also only weakly modulated by this type of drive. Because the bursts are transient and their frequency is not fixed, this input drive recruits suppressive responses from low frequencies, leading to an overall weak modulation of the metric entropy (Fig. 6g) and a reduction of the attractor dimension (Fig. 6h).
Discussion
Delayed interactions in dynamical systems
We developed a framework that allowed us to compute, for the first time, the entire spectrum of Lyapunov exponents in an effectively delayed high-dimensional system, and therefore to exactly characterize the dynamics on the attractor. We note that delayed systems explore an infinite-dimensional phase space, and therefore a discrete spectrum of Lyapunov exponents is not guaranteed to exist. We mapped the pulse-coupled spiking network with delayed interactions to an equivalent system with fixed and finite degrees of freedom. We found that the Lyapunov spectrum is invariant to network size, giving rise to extensive chaos with a finite attractor dimension at finite network size (Fig. 2), as also found in other systems with an infinite-dimensional phase space (54). We showed that, prior to the oscillatory instability, the attractor dimension increases linearly with the interaction delay, the top 5% of the LEs (including the maximum LE) decrease in magnitude, and the number of positive LEs increases. This tension between a reduction in magnitude and an increase in the number of positive LEs leads to the remarkable fact that the metric entropy of the network is independent of the delay up to the oscillatory instability (Fig. 3). The same dependencies were found in classic studies of delayed dynamical systems, for either low-dimensional maps or scalar differential equations (28, 29, 35, 36). Remarkably, we find that these dependencies are also present in weakly delayed rate networks (Figs. S12-S13), where the transition to oscillations is accompanied by a complete collapse of chaos. These findings suggest that, in the absence of a mean-field bifurcation, the reshaping of the attractor dynamics with increasing delay is of a universal nature.
Chaos in spiking networks
Balanced networks, as originally described for binary neurons, are chaotic with an infinitely large maximum Lyapunov exponent (39). Although the statistics of balanced-state networks appear to be largely independent of the single-neuron model (3, 39, 55), dynamical stability is not common or intrinsic to the balanced state. Random networks of pulse-coupled, inhibitory leaky integrate-and-fire (LIF) neurons have stable dynamics (55, 56): after a transient of irregular activity whose duration scales exponentially with network size, the dynamics settle into a periodic orbit (56, 57). However, finite-size perturbations reveal a phase space of simultaneously diverging and contracting trajectories (58, 59). Trajectories initially separated on average by less than a critical value converge to each other exponentially (local stability), while those separated further diverge exponentially. The inclusion of longer time scales, via synaptic dynamics, can lead to a loss of dynamical stability, resulting in extensive chaotic dynamics beyond a critical value of the synaptic time constant (56, 60, 61).
Incorporating a dynamical mechanism for the generation of the action potential, as studied here for the quadratic integrate-and-fire neuron, leads to extensive chaos for sufficiently large external input (3, 6), even in purely inhibitory pulse-coupled networks. Here we showed that the inclusion of delays does not affect the extensive nature of chaos, just as including delays does not lead to the development of deterministic chaos in networks of LIF neurons (56). We have not observed any transition to dynamical stability (i.e., to a negative maximum Lyapunov exponent) that coexists with spiking variability; chaos only collapses when the dynamics converge to clock-like synchrony. Future work will need to determine whether oscillatory activity can actively modulate the number of neurons contributing to the most stable directions at any given point in time, given that those neurons can respond reliably to repeated presentations of the same stimulus despite the chaotic dynamics (5, 6).
Collective oscillations and chaos
Our results demonstrate that the emergence of sparse, collective oscillations induces a complete dynamical reconfiguration in models of the cortical circuitry. We find that increasing the recurrent delayed inhibition, either by increasing the synaptic delay δ (Figs. 3-5) or the number of presynaptic inhibitory inputs (Supplementary Fig. S6), leads to a transition from the asynchronous irregular state characteristic of balanced-state networks to a state with irregular activity at the single-cell level but collective rhythms at the population level. This transition occurs via a Hopf bifurcation, analogous to that described previously in models lacking a dynamical mechanism for the generation of the action potential (15, 16, 31). Recent theoretical work has described the transition to collective oscillations of QIF networks with increasing in-degree (62, 63), and other work has described the mean-field dynamics of networks similar to ours, reporting rate chaos (chaos in the mean-field equations) with increasing input current to the cells (64). This is very different from the chaos reported here, in which the population rates are not chaotic. Previous work has found a coexistence of chaotic dynamics with sparse oscillations in networks of LIF neurons with slow synapses (4), but how oscillatory activity impacts the chaotic dynamics of the network remained elusive.
We used a powerful approach (38) to compare the equivalent system studied here and the true, infinite-dimensional delayed system, and found that at the theoretically predicted value of the synaptic delay for the onset of collective oscillations (Fig. 4), chaos is greatly intensified: the maximum Lyapunov exponent, the dimension of the chaotic attractor and the metric entropy increase simultaneously. This finding is insensitive to the presence of single-cell (Figs. 5, S8, S9) and cell-type (Figs. 5, S11) heterogeneity, and holds true in the entire region of the parameter space analyzed (Fig. S10). We found that this reconfiguration is only possible in models that explicitly incorporate a dynamical mechanism for the generation of the action potential; it is not present in rate models (Figs. S12 and S13), in which we have not found chaotic, asynchronous activity coexisting with collective oscillations (but see (65) for oscillatory rate networks in the presence of structured connectivity).
To investigate the mechanisms underlying this intensification of chaos, we studied a non-delayed network driven by a harmonically modulated Poisson process. We found that chaos is reduced for low drive frequencies, analogously to what is found in noise-driven rate networks (66). Nevertheless, for frequencies in the neighborhood of the resonant frequency, the chaotic measures are intensified. This boosting of chaos by an external, asynchronous oscillatory drive is, to our knowledge, without precedent, and bears a strong resemblance to what is found in classic maps driven by a harmonic input. This finding is in stark contrast to rate network models (14, 50), for which a sufficiently strong harmonic drive is able to completely abolish chaos at all frequencies of the harmonic drive.
Transient oscillations lead to moderate boost of network chaos
Oscillatory activity in cortex is far from perfectly periodic; cortical rhythms are in fact characterized by transient oscillatory bursts of drifting frequency (53, 67) that occur in a seemingly stochastic fashion. Theoretical work has shown how to flexibly route information by leveraging those noise-induced transients of rhythmic activity (10). Our results show that when the input spike trains to the network are modulated by a transient oscillatory amplitude, the network's response recruits suppressive responses from lower frequencies, leading to a very modest change in the maximum Lyapunov exponent and in the metric entropy, and a mild decrease in the attractor dimension. These effects are in strong contrast with the periodic-drive case studied here, in which the maximum Lyapunov exponent and the metric entropy undergo a 40% increase compared to the non-driven case. Future work will need to elucidate whether input patterns generated by a chaotic network with transient oscillatory bursts would lead to chaos suppression, as seen here with Poisson spiking patterns, or instead to an increase of chaos in the target network.
We emphasize that generally, in dissipative systems (including our networks, and possibly in contrast with Hamiltonian models (68–70)), a positive entropy rate is possible despite the existence of an invariant measure (71) (i.e., despite the fact that the probability density over the attractor does not change over time), because the measure is singular and therefore a "bottomless source of entropy" (72, 73).
Materials and Methods
Network Model
We considered a network of N quadratic integrate-and-fire (QIF) neurons,
$$\tau_i \dot{v}_i = v_i^2 + I_i(t),$$
where $\tau_i$ is the membrane time constant and $v_i$ the voltage of neuron $i$. The synaptic term of the input current, $I_i^{\mathrm{syn}}(t) = \tau_i \sum_j J_{ij} \sum_s \delta\!\left(t - t_j^{(s)} - \delta\right)$, describes the current that neuron $i$ receives at time $t$ given that its pre-synaptic neighbors $j$ emitted spikes at times $t_j^{(s)}$, each arriving after a delay $\delta$. The connectivity $J_{ij}$ has a binomial distribution with probability $K/N$, where $1 \ll K \ll N$. When a connection exists, its strength takes a constant value depending on the type of the pre- and post-synaptic cell: $J_{ij} = J_{xy}/\sqrt{K}$ for $x, y = E, I$. The second term is a constant external input $\sqrt{K}\,X_0$, with $X_0 = E_0, I_0$ for excitatory and inhibitory neurons respectively (see Section A in the Supplementary Information).
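A minimal sketch of this connectivity (Python; the purely inhibitory case, with the balanced-state scaling $-J_0/\sqrt{K}$ assumed from the parametrization used elsewhere in the Methods):

```python
import numpy as np

def make_connectivity(N, K, J0=1.0, seed=0):
    """Sparse inhibitory connectivity: each entry present with probability
    K/N, weight -J0/sqrt(K) (assumed balanced-state scaling; J0 = 1
    as in the driven-network simulations)."""
    rng = np.random.default_rng(seed)
    mask = rng.random((N, N)) < K / N
    np.fill_diagonal(mask, False)        # no autapses
    return -J0 / np.sqrt(K) * mask
```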
A. Phase reduction for integrate-and-fire models
In networks of pulse-coupled units receiving constant external inputs, and whose voltage dynamics have a defined threshold $x_t$ and reset value $x_r$, the evolution between network spike times $t_s$ is defined by a propagator function, or map (25, 37, 74–76),
$$x_i(t_{s+1}^-) = f\!\left(x_i(t_s^+),\, t_{s+1} - t_s\right).$$
The function $f$ evolves the state of neuron $i$ from just after the last spike in the network, $x_i(t_s^+)$, to the state just before the next spike at $t_{s+1}$. If neuron $i$ is not postsynaptic to the neuron $j^*$ that produces a spike at $t_{s+1}$, its state variable is left unaffected, $x_i(t_{s+1}^+) = x_i(t_{s+1}^-)$. If, on the contrary, neuron $i^*$ is postsynaptic to $j^*$, its state variable has to be further updated:
$$x_{i^*}(t_{s+1}^+) = g\!\left(x_{i^*}(t_{s+1}^-)\right).$$
Eqs. 3 and 4 form the basis of the iterative evolution of the network from spike to spike. The equations for the QIF neuron used in this paper were derived in Ref. (3); a detailed derivation can also be found in Section B of the Supplementary Information.
B. Delayer single-compartment-axon
Synaptic and axonal delays consistent with the framework specified above were incorporated into the network by introducing delayer variables, chosen to be QIF neurons in the excitable (non-periodically spiking) regime.
The solution of Eq. 5 takes a different closed form in each of three regimes (i)-(iii); the transition between the regimes (i)-(iii) is dynamically forbidden. We can define a change of variables that simplifies the solution of Eq. 5.
In this representation, a propagator function η and an update function γ, analogous to f and g of Eqs. (3, 4), can likewise be defined (Eqs. 9).
The ξ variable, when initialized below the threshold value ξt = −1 (ξ0 < −1), takes a finite time to spike (see Figure S1). If the reset value is set to ξr = 0, there are no dynamics after the spiking event. If, while in its resting state, the SCA receives an incoming spike from neuron j∗, its state is modified according to γ(0) and depends only on the parameters of the system. If the incoming spike has a sufficiently large weight, the ξ variable is pushed beyond its unstable fixed point and emits a spike after a time given by Eq. 10 before relaying it to the next neuron. More concretely, the value of Jl∗j∗ has to be larger than the distance between the fixed points of Eq. 5. For the case in which Jl∗j∗ is independent of the neuronal pair, the delay introduced by the SCA depends only on its own parameters, and then δl∗j∗ = δl∗. The single-compartment axon (SCA) is thus defined by Eqs. 9, with reset value ξr = 0, in the "exact delay" framework. If the SCA reset value instead lies between the threshold ξt = −1 and zero, the delay depends on the dynamics of the network and changes from spike to spike; the delay in that case is given by Eq. 11. For J > 0, Eq. 11 has solutions over a restricted range of reset values. The smaller the reset value, the larger the range of ξ to which the SCA can be reset, and the faster the function in the logarithm reaches a value that is only weakly dependent on ξ. This configuration is what we call the "dynamic delay" framework.
In both cases, when the SCA is arranged post-synaptically to each neuron in a network, it will delay the transmission of the spike by an amount given by Eq. 10 or 11, while preserving the desirable features of iterable maps.
C. Equivalent delayed network
A diagram of the network architecture of a delayed network with balanced-state properties is shown in Fig. 2a. A spiking neuron j∗ (orange, left), described by its phase variable ϕj, receives independent, large inputs proportional to the square root of the average number of connections per neuron, K. Its spike is instantaneously transmitted to the SCA (variable ξl, green). The parameters of the SCA define the time it takes for the spike to be transmitted from the SCA to the postsynaptic neurons i, with phase variables ϕi.
The connectivity matrix 𝕁 then has the form in Eq. 12. For ease of notation, indices i and j (from 1 to N) are reserved for phase neurons and l and m (from N+1 to M = 2N) for the SCAs. The connectivity matrix is a block matrix describing an Erdős–Rényi random graph, with weights proportional to $J_0/\sqrt{K}$, in the upper right block, and a diagonal matrix in the bottom left:
The event-based simulation of the system defined in Eqs. (1, 2) proceeds by, after each spike, evolving the network dynamics, propagating both the somas (Eq. S10) and the SCAs (Eqs. 9) up to the next spike time. If the next spike comes from a soma, the postsynaptic SCAs are updated according to Eqs. 9; if it comes from an SCA, the postsynaptic somas are updated as in Eq. S10.
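To illustrate the event-based scheme, the following skeleton (Python) is a strong simplification in which the SCA is replaced by an exact-delay timer and the soma is a linear phase with a generic phase-response update; all names and the interface are hypothetical:

```python
import numpy as np

def event_based_run(phi, omega, post, prc, delay, n_events):
    """phi: phases of the N somas (spike at pi, reset to -pi);
    omega: free angular velocities; post[j]: postsynaptic targets of j;
    prc: phase response applied at spike arrival; delay: axonal delay."""
    pending = []                                  # (arrival time, presynaptic index)
    t = 0.0
    for _ in range(n_events):
        dt_soma = (np.pi - phi) / omega           # time for each soma to reach threshold
        j = int(np.argmin(dt_soma))
        t_spike = t + dt_soma[j]
        if pending and pending[0][0] < t_spike:   # a delayed spike arrives first
            t_arr, src = pending.pop(0)
            phi += omega * (t_arr - t)            # propagate all phases to arrival time
            for i in post[src]:
                phi[i] += prc(phi[i])             # update postsynaptic somas
            t = t_arr
            continue
        phi += omega * (t_spike - t)              # propagate to the somatic spike time
        phi[j] = -np.pi                           # reset the spiking soma
        pending.append((t_spike + delay, j))      # hand the spike to the axon stage
        pending.sort()
        t = t_spike
    return phi
```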
D. Characterization of network activity
The firing rate νi of each neuron was calculated by summing all its spikes and dividing by the time spanned between the first and the last spike; the mean population rate is then $\bar\nu$. The coefficient of variation of neuron $i$ is the standard deviation of its inter-spike intervals times its rate, $cv_i = \nu_i \sqrt{\langle \Delta t_i^2 \rangle - \langle \Delta t_i \rangle^2}$. The level of synchrony of the network was assessed by the synchronization index
$$\chi = \frac{\sigma_{\bar\phi}}{\sqrt{\tfrac{1}{N}\sum_i \sigma_{\phi_i}^2}},$$
where $\bar\phi(t)$ is the mean activity in the phase representation, $\sigma_{\bar\phi}$ is the standard deviation over time of the mean phase, and $\sigma_{\phi_i}$ that of the phase trace of each individual neuron $i$. The χ coefficient is bounded to the unit interval, 0 < χ < 1, with vanishing values indicating asynchronous dynamics. The amplitude A (Hz) and the frequency f (Hz) of the population activity were calculated from a histogram of the spikes of the entire network with 1 ms bins. This quantity, normalized by the bin size and the number of neurons, defines a multi-unit-like signal (MUA). The mean peak height defines the amplitude A (Hz). The frequency of the MUA was obtained as the inverse of the lag of the first autocorrelation peak.
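A sketch of these two measures (Python; array shapes and names are illustrative only):

```python
import numpy as np

def cv_of_isi(spike_times):
    """Coefficient of variation of the inter-spike intervals of one neuron."""
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

def synchrony_index(V):
    """Golomb-Hansel-style synchrony measure chi from phase (or voltage)
    traces V with shape (neurons, time): fluctuation of the population mean
    relative to the mean single-cell fluctuation; 0 = asynchrony, 1 = sync."""
    var_mean = V.mean(axis=0).var()      # variance over time of the mean trace
    mean_var = V.var(axis=1).mean()      # average single-neuron variance
    return np.sqrt(var_mean / mean_var)
```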
E. Simulation specifics and parameters
The simulations used the default parameters given in Table 1. Deviations from these defaults are indicated below. The simulations are event-based, using the iterative map described in the previous sections and following Ref. (3). In short, one needs to calculate the next spike time in the network, coming either from a soma or from an SCA. For that, it is sufficient to find the soma with the largest phase and the SCA with the value closest to 1. For each simulation, random topologies (Erdős–Rényi) for the neuronal block (Eq. 12), random initial conditions, and an initial guess of the input currents to the neurons are generated. By means of root-finding algorithms (regula falsi and Ridders' method) (77), a guess of the input current I needed to reach a target firing rate was made from Eq. S3. After a simulation lasting SR spikes per neuron, the rate was calculated and a new guess of the currents was made. This procedure was iterated until the target mean firing rate was found with 1% precision. The multiplying factor that measures the distance to the balance condition is defined as Is = I/I0. Once the appropriate value of the current I is found, the network is warmed up, to discard transients, which can be long in delayed systems, for a time equivalent to SW spikes per neuron. Finally, a random orthonormal matrix Q is chosen and the QR algorithm described in the theory section below is warmed up for a duration equivalent to SWONS spikes. This guarantees some degree of alignment of the first Lyapunov vector with the first vector of the orthonormal system. The simulation then runs for a time equivalent to SC spikes per neuron.
Deviations from the defaults not indicated explicitly are as follows: in Figure 5, N = 8000 for panels f-j. Figure S2 used N = 400, while in Fig. S10(g-h), (p-q), (y-z), N was 2000, and in those cases SR = SQ = SWONS = 600. In Figure S10, N = 5000.
In Figures 3, 4, S3, S4, S6, S7, no heterogeneity was present (i.e., all neurons had identical parameters).
The driven-network simulations of Fig. 6 were performed as follows: each neuron in the recurrent network receives an independent stream of inhibitory external input spikes, generated from an inhomogeneous Poisson process whose rate is modulated either (I) sinusoidally with frequency f and amplitude I1 around a constant rate $\nu_{\rm in} = K\bar\nu$, or (II) by the LFP obtained from simulations of networks with transient synchrony (identical to that in Fig. 1 of (10)). We implemented the sinusoidally modulated input drive for each neuron by drawing the inter-spike intervals of the inhomogeneous Poisson input as $\Delta t_{\rm ISI} = -\log(x) / \left(\nu_{\rm in}\,(1 + I_1 \sin(2\pi f t))\right)$, where x is uniformly distributed between 0 and 1, I1 is the relative amplitude of the input modulation, f is the frequency of the input modulation, and νin is the external input rate, which we chose to be K times the mean firing rate $\bar\nu$. We set J0 = 1. This implementation results precisely in the desired sinusoidally modulated Poisson rate for a relative input modulation strength I1 ≪ 1 and f ≪ νin. By adapting the constant external input current iteratively using the bisection scheme described above, we obtained a target firing rate of 5 Hz. We chose $\nu_{\rm in} = K\bar\nu = 500$ Hz, so that on average neurons in the recurrent network receive the same number of recurrent spikes as external input spikes. After a simulation lasting SR = 10³ spikes per recurrent neuron, the rate was calculated and a new guess of the currents was made. This procedure was iterated until the target mean firing rate was found with 1% precision. Once the appropriate value of the current I was found, the orthonormalization procedure described below was performed iteratively, with the interval size adapted to achieve a condition number of the Q matrix between 1e2 and 1e6, with an initial guess of 5000 spikes. We performed a warmup of both the recurrent network state and the state of the orthonormal system. The simulation ran for 60 s. Results in Fig. 6 show averages over 10 Erdős–Rényi network realizations.
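The ISI sampling rule above can be implemented directly; a minimal sketch (Python), valid in the stated regime I1 ≪ 1 and f ≪ νin:

```python
import numpy as np

def modulated_poisson_train(nu_in, I1, f, T, seed=0):
    """Spike times with sinusoidally modulated Poisson rate, drawn via
    Delta_t = -log(x) / (nu_in * (1 + I1*sin(2*pi*f*t)));
    accurate for I1 << 1 and f << nu_in."""
    rng = np.random.default_rng(seed)
    t, spikes = 0.0, []
    while t < T:
        rate = nu_in * (1.0 + I1 * np.sin(2.0 * np.pi * f * t))
        t += -np.log(rng.random()) / rate
        spikes.append(t)
    return np.array(spikes[:-1])      # drop the spike that overshoots T
```

An exact alternative for arbitrary modulation strengths would be thinning (rejection sampling against the peak rate).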
F. Calculation of the critical delay
In the regime in which each neuron receives a large number of inputs per integration time, each making only a small contribution to the net input, and under the assumption that all neurons are identical, the dynamics of the network can be analyzed under the so-called diffusion approximation (15, 31, 38, 78). We used the method described in Ref. (38) for this calculation; it is detailed in Section C of the Supplementary Information.
G. Ergodic theory measures characterizing the attractor
The dimensionality D of the attractor of the delayed circuit can then be computed by means of a classic conjecture by Kaplan and Yorke (79). The conjecture states that the attractor dimension is given by the linear interpolation of the number of Lyapunov exponents needed for their cumulative sum to vanish (see Fig. 2f, top, and Sect. A of the Supplementary Information):
$$D = l + \frac{\sum_{i=1}^{l} \lambda_i}{|\lambda_{l+1}|},$$
where $l$ is the largest index such that $\sum_{i=1}^{l} \lambda_i \geq 0$ and $\sum_{i=1}^{l+1} \lambda_i < 0$.
The metric entropy H is an upper bound on the rate at which the chaotic dynamics contributes to the Kolmogorov–Sinai entropy; in other words, an upper bound on the average information gained by making a new measurement in a sequence of measurements of the state of the dynamical system (32) (see also Sect. A of the Supplementary Information). We compute this metric entropy via the Pesin identity (80), which equates the metric entropy with the sum of the positive Lyapunov exponents (see also Fig. 2f):
$$H = \sum_{\lambda_i > 0} \lambda_i.$$
In Figs. 3, 5 and 6, we show the intensive metric entropy in bits per second, normalized by the mean firing rate of the network.
Keeping in mind that the measures derived from the Lyapunov spectrum are upper bounds and that no rigorous statement can be made in favor of the equalities, we nevertheless refer to them as the metric entropy and the attractor dimension.
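For concreteness, a short sketch (Python; not the paper's code) of how both quantities follow from a computed Lyapunov spectrum:

```python
import numpy as np

def attractor_dimension(lyap):
    """Kaplan-Yorke dimension from a Lyapunov spectrum (any order)."""
    lam = np.sort(lyap)[::-1]
    cum = np.cumsum(lam)
    if cum[0] < 0:
        return 0.0
    l = int(np.max(np.where(cum >= 0)))  # largest index with cum sum >= 0
    if l == len(lam) - 1:
        return float(len(lam))           # cumulative sum never turns negative
    return (l + 1) + cum[l] / abs(lam[l + 1])

def metric_entropy(lyap):
    """Pesin upper bound: sum of positive exponents, converted from
    nats to bits per unit time."""
    return sum(l for l in lyap if l > 0) / np.log(2)
```

Dividing both outputs by the number of neurons N gives the intensive measures used in the figures.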
H. Lyapunov spectrum of delayed spiking neuronal networks
Given an exactly solvable neuronal model F(Vi) and an exactly solvable model for the SCA, the network can be propagated between spikes and updated by Eqs. S10 and S7. From these equations, a Jacobian 𝕃(x(n)) at each spike time can be obtained, from which the Lyapunov spectrum, and thereby the attractor dimension D (Eq. 13) and the metric entropy H (Eq. 15), can be estimated via QR decomposition. The terms of the Jacobian and their interpretation can be found in Section B of the Supplementary Information.
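A sketch of the standard reorthonormalization procedure (Python; `jacobians` and `dts` stand for the per-spike Jacobians and inter-event times produced by the event-based simulation, and are assumed given):

```python
import numpy as np

def lyapunov_spectrum(jacobians, dts):
    """QR (reorthonormalization) scheme: accumulate the log of the diagonal
    of R from QR factorizations of each single-spike Jacobian acting on an
    orthonormal frame Q; dts are the corresponding inter-event times."""
    M = jacobians[0].shape[0]
    Q = np.eye(M)
    log_r = np.zeros(M)
    T = float(sum(dts))
    for L in jacobians:
        Q, R = np.linalg.qr(L @ Q)
        sign = np.sign(np.diag(R))
        Q = Q * sign                     # fix signs so that diag(R) > 0
        log_r += np.log(np.abs(np.diag(R)))
    return np.sort(log_r / T)[::-1]      # exponents in decreasing order
```

In practice (as described above), Q is warmed up before accumulation so that it aligns with the leading Lyapunov vectors.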
Acknowledgements
We thank M. Monteforte and M. Puelma Touzel for discussions, and L. Abbott and T. Nguyen for feedback on this manuscript. This work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) through SFB 1528, SFB 889, SFB 1286, SPP 2205, DFG 436260547 in relation to NeuroNex (National Science Foundation 2015276) and under Germany's Excellence Strategy - EXC 2067/1-390729940; by the Leibniz Association (project K265/2019); and by the Niedersächsisches Vorab of the VolkswagenStiftung through the Göttingen Campus Institute for Dynamics of Biological Networks. This work was partially supported by the Federal Ministry for Education and Research (BMBF) under grant no. 01GQ1005B (to A.P., R.E., F.W.), by a GGNB Excellence Stipend of the University of Göttingen (to A.P.), by a Swartz Fellowship for Theory in Neuroscience (A.P.) through CRC 889 by the Deutsche Forschungsgemeinschaft, and by the VolkswagenStiftung under grant no. ZN2632 (to F.W.).
Supporting Information Text
1. Delayed spiking networks
Here we describe the methods for Figs. 1-7. All spiking network simulations were performed in an event-based manner as described in the sections below.
A. Network Model
We considered a network of N quadratic integrate-and-fire (QIF) neurons,
$$\tau_i \dot{v}_i = v_i^2 + I_i(t), \tag{S1}$$
where $\tau_i$ is the membrane time constant and $v_i$ the voltage of neuron $i$. The total input current $I_i(t)$ consists of two terms. The first, $I_i^{\mathrm{syn}}(t) = \tau_i \sum_j J_{ij} \sum_s \delta\!\left(t - t_j^{(s)} - \delta\right)$, describes the synaptic current that neuron $i$ receives at time $t$ given that its pre-synaptic neighbors $j$ emitted spikes at times $t_j^{(s)}$; every spike takes a time δ to arrive at its post-synaptic targets. The connectivity $J_{ij}$ has a binomial distribution with probability $K/N$, where $1 \ll K \ll N$. When a connection exists, its strength takes a constant value that depends only on whether the presynaptic and postsynaptic cells are excitatory or inhibitory: $J_{ij} = J_{xy}/\sqrt{K}$ for $x, y = E, I$. The second term is a constant external input $\sqrt{K}\,X_0$, with $X_0 = E_0, I_0$ for excitatory and inhibitory neurons respectively.
The mean firing rates of the E and I populations, $\nu_E$ and $\nu_I$, are determined self-consistently by the balance condition (1, 2). The mean input current to population $x$ is set by the external drive $X_0$, a positive (excitatory) input current: $E_0$ for the excitatory population and $I_0$ for the inhibitory one. The variances of the input currents are given by the sums over presynaptic populations of the squared synaptic weights times the presynaptic rates.
For the particular case of networks of inhibitory neurons, the condition of a finite mean input current requires the recurrent inhibition to cancel the external drive to leading order; in the limit of large K the firing rate must then satisfy the corresponding balance equation. For networks of excitatory and inhibitory neurons, we require that the first moments of the firing-rate distributions are the same ($\nu_E = \nu_I$) and that the variances of the input currents are equal. Together, these conditions impose further constraints on the synaptic weights. The weight matrix is written as a function of three parameters: $J_0$ and the weight ratios $\eta < 1$ and $\epsilon$ (5). The stable solutions with positive rates are found for values of η and ϵ satisfying the conditions given in Ref. (5).
B. Phase reduction for integrate-and-fire models
In networks of pulse-coupled units receiving constant external inputs, and whose voltage dynamics have a defined threshold $x_t$ and reset value $x_r$, the evolution between network spike times $t_s$ is defined by a propagator function, or map (6–10),
$$x_i(t_{s+1}^-) = f\!\left(x_i(t_s^+),\, t_{s+1} - t_s\right). \tag{S4}$$
The function $f$ evolves the state of neuron $i$ from just after the last spike in the network, $x_i(t_s^+)$, to the state just before the next spike at $t_{s+1}$. If neuron $i$ is not postsynaptic to the neuron $j^*$ that produces a spike at $t_{s+1}$, its state variable is left unaffected, $x_i(t_{s+1}^+) = x_i(t_{s+1}^-)$. If, on the contrary, neuron $i^*$ is postsynaptic to $j^*$, its state variable has to be further updated:
$$x_{i^*}(t_{s+1}^+) = g\!\left(x_{i^*}(t_{s+1}^-)\right). \tag{S5}$$
Eqs. S4 and S5 form the basis of the iterative evolution of the network from spike to spike. Their precise form depends on the neuron model considered, as does the expression for the time to the next spike.
Analytically solvable neuron models which, in the absence of any recurrent input, have periodic spiking activity can be mapped to a phase variable that evolves linearly in time. This variable, ϕ, has a propagator function f(ϕ) that is a linear function of the inter-spike times. All the information about the particular neuron model is condensed in the form of the update function g.
The QIF neuron defined in Eq. S1 is analytically solvable in either its supra-threshold or its excitable regime; different solutions of Eq. S1 are obtained depending on the sign of $I_i$ (in the absence of recurrent connections). In the supra-threshold regime (I > 0), the solution to Eq. S1 with initial condition $V(t_0) = V_0$ is
$$V(t) = \sqrt{I}\,\tan\!\left(\frac{\sqrt{I}}{\tau}\,(t - t_0) + \arctan\frac{V_0}{\sqrt{I}}\right).$$
The propagator function f and the update function g in the voltage representation for the supra-threshold QIF are then:
We see from Eq. S7 that the variable change $\phi = 2\arctan\!\left(V/\sqrt{I}\right)$ reduces the dynamics to a linearly evolving phase, ϕ ∈ (−π, π). The free period can be obtained by requesting that $V_t = +\infty$ and $V_r = -\infty$: $T = \pi\tau/\sqrt{I}$. This yields the frequency $\omega = 2\sqrt{I}/\tau$. The propagator and update functions for ϕ follow from Eqs. S4 and S5, with the update written in terms of the phase response curve (PRC).
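For concreteness, the free period and the phase map of the supra-threshold QIF can be written down directly (Python; a minimal check of the expressions above):

```python
import numpy as np

def free_period(tau, I):
    """Free period of the supra-threshold QIF, tau*dV/dt = V^2 + I (I > 0):
    the time from V = -inf to V = +inf is T = pi*tau/sqrt(I)."""
    return np.pi * tau / np.sqrt(I)

def voltage_to_phase(V, I):
    """Change of variables V = sqrt(I)*tan(phi/2): the phase phi in
    (-pi, pi) then advances linearly at rate 2*sqrt(I)/tau."""
    return 2.0 * np.arctan(V / np.sqrt(I))
```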
C. Calculation of the critical delay
Here we describe the method used to characterize the transition to population oscillations and to obtain the critical delay, following the approach of Richardson (11, 12). The single-neuron dynamics are given by
$$\tau \dot{v} = f(v) + E(t) + \sigma\sqrt{\tau}\,\eta_i(t).$$
Here, $E(t) = \tau K J \nu(t - \delta)$, and σ is calculated in the standard manner under the assumption of uncorrelated activity in the asynchronous irregular state (2–4); $f(v) = v^2$ for the QIF neuron considered here, and $\eta_i(t)$ is white Gaussian noise. If the number of inputs per neuron K is fixed, the dynamics of the distribution of voltages P(v, t) are described by a Fokker–Planck equation,
$$\partial_t P(v,t) = -\partial_v J(v,t),$$
where J(v, t) is the probability current and the voltage-independent diffusion coefficient is set by the input variance σ². Richardson (11, 12) developed a numerical method to (i) evaluate numerically the steady-state solution $P_0(v)$ and find the steady-state firing rate self-consistently, and (ii) compute the linear response to parameter modulations, which allows one to self-consistently calculate the critical parameter at which the asynchronous irregular state loses stability to a synchronous irregular one. Here we give a brief summary of how to compute, for an otherwise fixed set of parameters, the critical delay δc. If E and σ are constant, a steady state is reached in which the probability distribution is independent of time. In this case, the stationary firing rate can be found self-consistently from the normalization condition of the probability distribution. This was done analytically in (3, 4) for the leaky integrate-and-fire (LIF) neuron and in (13) for the quadratic integrate-and-fire (QIF). Although the general time-dependent solution of the nonlinear equations is inaccessible, significant progress can be made by studying the linear response to a harmonically modulated parameter, $\alpha = \alpha_0 + \alpha_1 e^{i\omega t}$.
We first solve for the response to a harmonic modulation of the input current E(t) in a network of uncoupled neurons, following Ref. (11), and then impose self-consistency. To do this, we decompose the probability current $J = J_0 + J_1 e^{i\omega t}$, the density $P = P_0 + P_1 e^{i\omega t}$, the input current $E = E_0 + E_1 e^{i\omega t}$, and the firing rate $r = r_0 + r_1 e^{i\omega t}$. Inserting these into Eq. S12 and separating the contributions to the modulation (see Sections 3.3 and 4.1 in Ref. (11) for details), we obtain two equations. One defines the $E_0$ that satisfies the steady state with the target firing rate $r_0$. The other relates $r_1 = \hat{A}(\omega)\, E_1$, where $\hat{A}(\omega)$ is a function of the frequency ω obtained using the method of Ref. (11).
Calculation of the critical delay
To compute the critical delay that destabilizes the asynchronous irregular state, we require that the input modulation E1, so far considered external to the network, arises from the network activity itself. This means that $E(t) = \tau K J \nu(t-\delta) \approx E_s r_0 + E_s r_1 e^{i\omega(t-\delta)}$, where $E_s = \tau K J < 0$ (for inhibitory networks). The self-consistency condition (analogous to Eq. (52) in Ref. (11), reduced to the delta-coupled network case) is $1 = -|E_s|\, e^{-i\omega\delta}\, \hat{A}(\omega)$. Writing the (known) function $\hat{A} = |A(\omega)|\, e^{i\phi(\omega)}$ and separating real and imaginary parts, we effectively have two equations with two unknowns:
$$|E_s|\,|A(\omega)| = 1, \qquad \omega\delta = \phi(\omega) + \pi \ (\mathrm{mod}\ 2\pi),$$
from which we obtain the value of the oscillation frequency ω and the critical delay δc that destabilizes the asynchronous irregular state.
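Numerically, once Â(ω) is available from the linear-response calculation, the two conditions can be solved one after the other. In the minimal sketch below (our own helper), A_hat stands for a user-supplied routine returning the complex response function, and the bracket [w_lo, w_hi] must straddle the root of the modulus condition:

```python
import numpy as np
from scipy.optimize import brentq

def critical_point(A_hat, Es, w_lo, w_hi):
    """Solve 1 = -|Es| exp(-i w delta) A_hat(w): the modulus condition
    |Es||A(w)| = 1 fixes the frequency w, then the phase condition
    phi(w) - w*delta = pi (mod 2*pi) gives the smallest positive delay."""
    f = lambda w: abs(Es) * abs(A_hat(w)) - 1.0
    w_c = brentq(f, w_lo, w_hi)              # root of the modulus condition
    phi = np.angle(A_hat(w_c))
    delta_c = ((phi - np.pi) % (2.0 * np.pi)) / w_c
    return w_c, delta_c
```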
2. Ergodic theory of effectively delayed spiking neuronal networks
A. Background and definitions
The simplest measure of the dimensionality of a chaotic attractor is the Kolmogorov capacity or box-counting dimension (14). It is a metric-based measure in the sense that it does not take into account the density of orbits on the attractor. Given parallelepipeds of side ϵ, the number N(ϵ) of them needed to cover the set of points comprising the attractor A is expected to satisfy N(ϵ) ∼ ϵ^{−DC}. The capacity of the set can then be defined as DC = lim_{ϵ→0} log N(ϵ) / log(1/ϵ). If, instead of the number of hypercubes of side ϵ needed to cover the attractor, we consider the number N(ϵ, ϑ) needed to cover a fraction ϑ of the attractor, then the ϑ-capacity can be defined as DC(ϑ) = lim_{ϵ→0} log N(ϵ, ϑ) / log(1/ϵ). This measure, which for ϑ = 1 is directly the capacity and otherwise will be called Dµ, satisfies Dµ ≤ DC. Finally, a point-wise dimension Dp(x) can also be defined by estimating the exponent with which the total probability within a ball of radius ϵ centered at x decreases as the radius vanishes (14): Dp(x) = lim_{ϵ→0} log µ(Bϵ(x)) / log ϵ.
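In numerical practice, the capacity can be estimated from a long trajectory sampled on the attractor. The following minimal sketch (our own illustration, not the paper's code) counts occupied boxes at several resolutions ϵ and fits the scaling exponent:

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Estimate the capacity D_C of a point cloud sampled from an attractor:
    count occupied boxes N(eps) for each box size eps and fit
    log N(eps) against log(1/eps)."""
    counts = []
    for eps in eps_list:
        # assign each point to a box index tuple and count distinct boxes
        boxes = np.unique(np.floor(points / eps), axis=0)
        counts.append(len(boxes))
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope
```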
For a definition of an attractor dimension that is related to the measure of the attractor, it is necessary to define a partition ℬ = {Bi} that covers the phase space. At each measurement, the trajectory of the system is found in one Bi, such that for sufficiently long times a frequency of occurrence Pi can be assigned to each Bi. When the size of the elements of the partition tends to zero, P defines a probability density whose sum over the attractor is equivalent to the measure of the attractor µ(A). The frequency of occurrence can then be written as Pi = µ(Bi).
When observing the state of the dynamical system at a given point in time, the information gained about the state depends on the resolution of the instrument, i.e., on the diameter of the partition used, which is given by the largest size of the elements Bi of the chosen partition. If the partition has diameter ϵ, then the information gained by making a measurement is I(ℬ(ϵ)) = −∑i µ(Bi(ϵ)) log(µ(Bi(ϵ))). If, from all the possible partitions, we choose the one that minimizes this expression, then the information dimension DI is defined as DI = lim_{ϵ→0} I(ℬ(ϵ)) / log(1/ϵ).
Kaplan and Yorke (15) defined a measure of an attractor, the Lyapunov dimension, as a function of the Lyapunov exponents:
D = l + (1/|λ_{l+1}|) ∑_{i=1}^{l} λi,   (Eq. S15)
where l is the largest integer such that ∑_{i=1}^{l} λi ≥ 0 and ∑_{i=1}^{l+1} λi < 0. Kaplan and Yorke (15) conjectured initially that the Lyapunov dimension D was generally equal to the fractal (box-counting) dimension, or a lower bound to it (14). This form of the Kaplan-Yorke conjecture can be found in the literature (15, 16) and in textbooks (17, 18). In further work by Kaplan and Yorke (19) and by Yorke and colleagues (14), the conjecture was updated to a form that includes information about the density of orbits over the attractor: they specifically conjecture that D = DC(ϑ) for ϑ < 1, i.e., D = Dµ, and Ref. (14) further conjectured that this equality and the original Kaplan-Yorke conjecture both hold. Rigorous results show that DP = DI = lim_{ϑ→1} DC(ϑ) ≤ D for invertible smooth maps (as reviewed in (14, 20)). The conjecture has been shown to hold whenever a Sinai-Ruelle-Bowen (SRB) measure (defined as a measure that is absolutely continuous along the unstable manifolds) exists.
Regarding the relation of the metric entropy to the Lyapunov exponents, Ruelle proved (20) that
H ≤ ∑_{i : λi > 0} λi.   (Eq. S16)
The equality, known as the Pesin identity, was shown to hold for SRB measures. If the system is hyperbolic, the tangent space along the trajectory can be decomposed into the direct sum of its linear stable and unstable subspaces. Hyperbolicity implies the existence of an SRB measure, and hence that the Pesin identity and the Kaplan-Yorke conjecture hold. If hyperbolicity cannot be established, however, no statement can be made regarding the existence of an SRB measure. Keeping in mind that the measures derived from the Lyapunov spectrum are upper bounds and that no rigorous statement can be made in favor of the equalities, we will nevertheless refer to them as the metric entropy and the attractor dimension.
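In practice, both quantities follow directly from the ordered Lyapunov spectrum. A minimal sketch of the two estimators of Eqs. S15 and S16 (our own helper functions, not the paper's code):

```python
import numpy as np

def lyapunov_dimension(lyap):
    """Kaplan-Yorke (Lyapunov) dimension, Eq. S15:
    D = l + (sum_{i<=l} lam_i) / |lam_{l+1}|, with l the largest index
    at which the cumulative sum of ordered exponents is non-negative."""
    lam = np.sort(np.asarray(lyap))[::-1]      # descending order
    csum = np.cumsum(lam)
    if csum[0] < 0:
        return 0.0                             # no expanding directions
    l = int(np.max(np.where(csum >= 0)[0])) + 1
    if l == lam.size:
        return float(l)                        # cumulative sum never turns negative
    return l + csum[l - 1] / abs(lam[l])

def metric_entropy(lyap):
    """Upper bound on the metric entropy, Eq. S16 (equality for SRB measures):
    the sum of the positive Lyapunov exponents."""
    lam = np.asarray(lyap)
    return lam[lam > 0].sum()
```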
B. Lyapunov spectrum of delayed spiking neuronal networks
Given a neuronal model F(Vi) and a model for the SCA that are exactly solvable, the network can be propagated between spikes and updated at spike arrivals via Eqs. S10 and S7. From these equations, a Jacobian 𝕃(x(n)) at each spike time can be obtained, from which the Lyapunov dimension D (Eq. S15) and the metric entropy H (Eq. S16) can be estimated via QR decomposition.
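Schematically, the QR-based estimation proceeds as follows. This sketch assumes the single-spike Jacobians derived in the next subsection are available as dense matrices (function and variable names are ours):

```python
import numpy as np

def lyapunov_spectrum(jacobians, total_time):
    """Lyapunov spectrum from a sequence of single-spike Jacobians via
    repeated QR decomposition: the log of the diagonal of R accumulates the
    expansion rates along orthonormalized directions. `total_time` is the
    time spanned by the spike sequence; in practice a warm-up stretch of
    the sequence is discarded first. A nonzero SCA reset (Sec. B.1) keeps
    the Jacobians invertible, so the diagonal of R never vanishes."""
    n = jacobians[0].shape[0]
    Q = np.eye(n)
    lam_sum = np.zeros(n)
    for L in jacobians:
        Q, R = np.linalg.qr(L @ Q)
        d = np.diag(R)
        Q = Q * np.sign(d)             # fix signs so expansion factors are positive
        lam_sum += np.log(np.abs(d))
    return lam_sum / total_time        # exponents per unit time, in descending order
```

The resulting spectrum can then be fed to the helpers above to obtain D and H.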
B.1. Derivation of the single spike Jacobian
In the following, we will focus on the derivation of 𝕃 (x(n)) for the delayed system of neuronal types that allow for a phase representation. In this case, the Jacobian elements can be written in terms of the phase response curve (PRC) (defined in Eq. S10 for the QIF case).
When an SCA with associated variable ξm∗ spikes at τs+1 ∈ [ts, ts+1], the soma units that are not postsynaptic to the axon evolve independently of the exact spike time, whereas the ones that are postsynaptic to it are both propagated and updated (see Eq. S10).
Equivalently, when a spike is emitted by a neuron, it is received only by its own associated SCA. The incoming spike then updates the state of that SCA through its update function, while SCAs that are not postsynaptic to the spiking neuron evolve freely (see Methods).
In order to calculate the single-spike Jacobian, we first summarize the equations for the phase neuron and the SCA. We define:
where the superscript ξ was added to emphasize that these constants are only meaningful for the SCA.
The propagation and update functions for the neurons, f and g, and for the SCAs, η and γ, in a network of QIF neurons with QIF SCAs are:
Soma equations
Propagator
With derivatives
Update function
With derivatives
Where the phase response curve was defined in Eq. S10.
SCA equations
Propagator
With derivatives
Update function
With derivatives
For a generic neuron model with state variable x, which evolves with function f and updates with function g, the derivative with respect to some other neuron's variable y (possibly governed by a different neuron model) can be written as:
where i0 indexes the non-postsynaptic neurons, i∗ the postsynaptic ones, and j∗ the spiking neuron. Note that if the spiking neuron is reset, then only the last term (Eq. S26e) survives, given that the reset value is a constant and therefore has null derivative. The derivative of the spike time, ∂τs+1/∂yj∗(ts), can be extracted from the fact that f(yj∗(ts), τs+1 − ts) = XT, and therefore:
∂τs+1/∂yj∗(ts) = −(∂f/∂y)/(∂f/∂t), evaluated at (yj∗(ts), τs+1 − ts).
In the case we are analyzing here,
The terms of the Jacobian for the delayed system can be summarized as follows:
- Non-postsynaptic somas: contribute to the Jacobian only with diagonal terms, from Eq. S26b.
- Postsynaptic somas: contribute non-diagonal terms (when receiving a spike from a delayer SCA ξm∗), from Eqs. S26d and S26e.
- Spiking neuron: only one term in the Jacobian, from Eq. S26e; the terms from Eqs. S26c and S26d vanish.
- Non-postsynaptic SCAs: contribute diagonal terms, from Eq. S26b.
- Postsynaptic SCAs: contribute non-diagonal terms (when receiving a spike from a neuron ϕj∗), from Eqs. S26d and S26e.
- Spiking SCA: only one term in the Jacobian, from Eq. S26e; the terms from Eqs. S26c and S26d cancel each other.
If one chooses ξR = 0, then the delay introduced in the network is exact, and the corresponding Jacobian entry vanishes for the delayer SCA; the Jacobian is then singular. If the reset value is small but nonzero, an invertible Jacobian is obtained. In this last case, it is necessary to ensure that the reset value lies past the singularity at ξl = 1 − Alj (see Eq. S24), so that an incoming spike takes a finite time to drive the SCA to threshold (i.e., Eq. S35 has a solution, see below). It is then necessary that ξR > 1 − Alj.
C. The direct method for the estimation of the first LE
We computed the first Lyapunov exponent via the direct method in order to numerically corroborate the equations derived in the previous subsection. For this, after a long warm-up, we perturbed the network by adding a random vector of norm ϵ. After a short simulation of T = 100 ms, the norm βn of the difference between the perturbed and the unperturbed final states, simulated separately, is stored. Iterating this procedure N times leads to an estimate of the first Lyapunov exponent via the following formula:
λ1 ≈ (1/(N T)) ∑_{n=1}^{N} ln(βn/ϵ).
The comparison between the direct method and the method derived in the subsection above is shown in Figure S8b, for ϵ = 10^{−10} and N = 5000, showing good agreement.
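The direct method itself is straightforward to implement. The sketch below (ours) assumes a black-box simulate(x, T) routine, a hypothetical placeholder for the event-based network simulation:

```python
import numpy as np

def first_le_direct(simulate, x0, eps=1e-10, T=100.0, n_iter=5000, seed=None):
    """Direct-method estimate of the largest Lyapunov exponent: repeatedly
    perturb the state by a random vector of norm eps, evolve the perturbed
    and unperturbed copies for T ms, and average the log expansion of their
    separation. Result is in 1/ms if T is in ms."""
    rng = np.random.default_rng(seed)
    logs = []
    x = x0
    for _ in range(n_iter):
        d = rng.standard_normal(x.shape)
        x_pert = x + eps * d / np.linalg.norm(d)
        x_new = simulate(x, T)
        x_pert_new = simulate(x_pert, T)
        beta = np.linalg.norm(x_new - x_pert_new)
        logs.append(np.log(beta / eps))
        x = x_new                      # continue along the unperturbed orbit
    return np.mean(logs) / T
```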
3. Delayed rate networks
We study two examples of rate networks. The activity xi of each unit i is defined by
τ ẋi(t) = −xi(t) + Fi(x(t − δ)),   (Eq. S38)
where τ is the time constant, Fi is a scalar continuous function, x(t) = {x1(t), …, xN(t)} is the vector of all activities, and δ is the interaction delay.
A. Classic rate network without Dale’s law
We modify a classic network of N rate units, originally studied in (21), to incorporate delays. In this case,
Fi(x(t − δ)) = (g/√N) ∑_{j=1}^{N} Jij ϕ(xj(t − δ)),
where ϕ is a nonlinear input-output function that we choose to be ϕ(x) = tanh(x), g is a gain parameter, N is the number of neurons, and Jij is the connectivity matrix, whose elements are Gaussian distributed with zero mean and unit variance. In the case of δ = 0, the system has a stable trivial fixed point for g < 1. At g = 1, the system loses linear stability and a chaotic attractor is created. Because the fixed point is the same for each neuron, the linearized delayed system can be studied by proposing a solution of the form x(t) ∝ uk e^{νk t}, where uk are the eigenvectors of the effective coupling matrix (g/√N)J. The relationship between its eigenvalues µk and the decay rates νk determining the linear stability of the delayed system is given by (see also (22–24)):
νk + 1 − µk e^{−νk δ} = 0,   (Eq. S39)
with time and delay measured in units of τ.
Then, for each mode, the stability is given by the solutions of H(ν) = ν + 1 − µ e^{−νδ} = 0. The stability boundary is no longer the line g = 1 but is instead defined by off-centered Archimedean spirals (23) that always contain the point g = 1. If the elements of J are correlated, the eigenvalue distribution of J becomes ellipsoidal (25), and the system can lose stability via intersections of the stability spiral with the eigenvalue spectrum, giving rise to limit-cycle oscillations (24). Here we focus on the chaotic regime, g > 1, and for simplicity we use uncorrelated configurations of J; correlated configurations do not seem to affect our results. We note that the mean-field theory developed in (21) would not be changed by including delayed interactions, and we therefore do not expect the delay to alter the dynamics at that level of description.
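For concreteness, the delayed network of Eq. S38 can be integrated with a simple Euler scheme and a circular buffer holding the delayed activity. The following sketch is our own illustration, with illustrative parameter values (not those used in the paper):

```python
import numpy as np

def simulate_delayed_scs(N=200, g=2.0, delta=0.5, tau=1.0, dt=0.01, T=200.0, seed=0):
    """Euler integration of tau * dx_i/dt = -x_i + (g/sqrt(N)) sum_j J_ij tanh(x_j(t - delta)),
    with a circular buffer of the last delta/dt states providing x(t - delta)."""
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((N, N))            # zero mean, unit variance
    n_d = max(1, int(round(delta / dt)))       # delay in time steps
    buf = 0.1 * rng.standard_normal((n_d, N))  # random initial history
    x = 0.1 * rng.standard_normal(N)
    traj = np.empty((int(T / dt), N))
    for t in range(traj.shape[0]):
        x_del = buf[t % n_d].copy()            # activity at time t - delta
        buf[t % n_d] = x                       # slot is reused for the current state
        x = x + (dt / tau) * (-x + (g / np.sqrt(N)) * J @ np.tanh(x_del))
        traj[t] = x
    return traj
```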
B. Inhibitory rate network in the balanced state
Following Ref. (26), we study a network of N inhibitory units which we modify to incorporate delayed interactions. In this case,
where m0 is the target mean rate, and Jij is defined as in the classic SCS case. The nonlinearity in this case is given by the threshold-linear function ϕ(x) = [x]+. The size of the network N, the sparsity K, and the strength of the inhibitory connections J0 enter through the effective variables g0 and the mean input, via the relations given in Ref. (26).
C. Small delay approximation
We study the system above by making a small-delay approximation. We define y(t) = x(t − δ) and approximate it by a Taylor expansion of order 𝒪 in the delay. Each time derivative of x(t − δ) appearing in the expansion defines a new variable, which we call yk(t). The infinite-dimensional system defined in Eqs. (S38, S40) can then be approximated by the (𝒪 + 1)N-dimensional system given in Eq. S41.
We note that the small-delay approximation amounts to nothing more than expanding the term e^{−νδ} in Eq. S39 in powers of the delay δ. When comparing the eigenvalues of the exact delayed system with those of the system under the small-delay approximation, we therefore compare the solutions of Eq. S39 with the solutions of
ν + 1 − µ ∑_{k=0}^{𝒪} (−νδ)^k / k! = 0.
The difference between the two is shown in Figure S13a.
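The comparison of Figure S13a can be reproduced schematically by solving the truncated polynomial with a standard root finder and then refining the exact transcendental roots from those initial guesses. A sketch with our own helper functions:

```python
import numpy as np
from math import factorial
from scipy.optimize import fsolve

def truncated_roots(mu, delta, order):
    """Roots of the order-`order` Taylor truncation of Eq. S39,
    nu + 1 - mu * sum_{k=0}^{order} (-nu*delta)^k / k! = 0."""
    coeffs = np.zeros(order + 1, dtype=complex)     # descending powers of nu
    for k in range(order + 1):
        coeffs[order - k] -= mu * (-delta) ** k / factorial(k)
    coeffs[order - 1] += 1.0                        # the nu term
    coeffs[order] += 1.0                            # the constant term
    return np.roots(coeffs)

def exact_root(mu, delta, nu0):
    """A solution of the exact characteristic equation
    H(nu) = nu + 1 - mu * exp(-nu*delta) = 0 near the initial guess nu0,
    obtained by solving the real and imaginary parts jointly."""
    def h(p):
        nu = p[0] + 1j * p[1]
        val = nu + 1.0 - mu * np.exp(-nu * delta)
        return [val.real, val.imag]
    r = fsolve(h, [np.real(nu0), np.imag(nu0)])
    return r[0] + 1j * r[1]
```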
4. Ergodic theory of chaos for weakly delayed rate networks
Defining the N-dimensional matrix 𝔸(t) with components Aij(t) = ∂Fi/∂xj evaluated at x(t − δ), the Jacobian 𝕃 of the (𝒪 + 1)N-dimensional system at each point in time can be written as:
where 𝕀 is the N-dimensional identity matrix. Concretely, for the delayed SCS system defined in Eq. S38 we have Aij(t) = (g/√N) Jij ϕ′(xj(t − δ)) = (g/√N) Jij (1 − tanh²(xj(t − δ))),
whereas for the purely inhibitory network ϕ′(x) = Θ(x) is the Heaviside step function, so that Aij(t) ∝ Jij Θ(xj(t − δ)).
Defining now the vector z(t) = (x(t), y0(t), y1(t), …, y𝒪−1(t)), the evolution of the system defined in Eq. S41 can be rewritten by defining G such that żi = Gi(z).
The calculation of the Lyapunov exponents can be done following Engelken et al. (27).
References
- 1.The variable discharge of cortical neurons: implications for connectivity, computation, and information codingJ. Neurosci 18:3870–96
- 2.The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPsJ. Neurosci 13:334–50
- 3.Dynamical entropy production in spiking neuron networks in the balanced statePhys. Rev. Lett 105:1–4
- 4.Collective dynamics in sparse networksPhys. Rev. Lett 109:1–5
- 5.Chaos and reliability in balanced spiking networks with temporal drivePhys. Rev. E 87:1–15
- 6.Structured chaos shapes spike-response noise entropy in balanced neural networksFront. computational neuroscience 8:1–10
- 7.Reliability of spike timing in neocortical neuronsScience 268:1503–6
- 8.From the stochasticity of molecular processes to the variability of synaptic transmissionNat. Rev. Neurosci 12:375–387
- 9.Neuronal Gamma-Band Synchronization as a Fundamental Process in Cortical ComputationAnnu. Rev. Neurosci 32:209–224
- 10.Flexible information routing by transient synchronyNat. Neurosci 20:1014–1022
- 11.Period-doubling cascades and devil's staircases of the driven van der Pol oscillatorPhys. Rev. A 36:1428–1434
- 12.Using small perturbations to control chaosNature 363:411–417
- 13.Taming chaotic dynamics with weak periodic perturbationsPhys. Rev. Lett 66:2545–2548
- 14.Stimulus-dependent suppression of chaos in recurrent neural networksPhys. Rev. E 82:1–5
- 15.Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing RatesNeural Comput 11:1621–1671
- 16.Synchronization properties of networks of electrically coupled neurons in the presence of noise and heterogeneitiesJ. Comput. Neurosci 26:369–392
- 17.Firing rate of the noisy quadratic integrate-and-fire neuronNeural computation 15:2281–2306
- 18.Contributions of Intrinsic Membrane Dynamics to Fast Network Oscillations With Irregular Neuronal DischargesJ. Neurophysiol 94:4344–4361
- 19.What Determines the Frequency of Fast Network Oscillations With Irregular Neural Discharges? I. Synaptic Dynamics and Excitation-Inhibition BalanceJ. Neurophysiol 90:415–430
- 20.Cortical Enlightenment: Are Attentional Gamma Oscillations Driven by ING or PING?Neuron 63:727–732
- 21.Driving fast-spiking cells induces gamma rhythm and controls sensory responsesNature 459:663–7
- 22.Subcortical Source and Modulation of the Narrowband Gamma Oscillation in Mouse Visual CortexNeuron 93:315–322
- 23.Cortical gamma band synchronization through somatostatin interneuronsNat. Neurosci 20:951–959
- 24.A neural circuit for gamma-band coherence across the retinotopic map in mouse visual cortexeLife 7:1–17
- 25.Synchronization induced by temporal delays in pulse-coupled oscillatorsPhys. Rev. Lett 74:1570–1573
- 26.Rapid phase locking in systems of pulse-coupled oscillators with delaysPhys. Rev. Lett 76:1755–1758
- 27.Enhancement of neural synchrony by time delayPhys. Rev. Lett 92
- 28.Conjecture on the dimensions of chaotic attractors of delayed-feedback dynamical systemsPhys. Rev. A 35:4020–4022
- 29.Chaotic attractors of an infinite-dimensional dynamical systemPhys. D: Nonlinear Phenom 4:366–393
- 30.Mutual information, Fisher information, and population codingNeural computation 10:1731–57
- 31.Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neuronsNeurocomputing :307–312
- 32.Information Dimension and the Probabilistic Structure of Chaos
- 33.Unstable Attractors: Existence and Robustness in Networks of Oscillators With Delayed Pulse CouplingNonlinearity 18
- 34.Sparsely synchronized neuronal oscillationsChaos 18
- 35.High-dimensional chaos in delayed dynamical systemsPhys. D: Nonlinear Phenom 70:235–249
- 36.Scaling of Lyapunov exponents in chaotic delay systemsarXiv preprint arXiv:1210.3528
- 37.Coexistence of regular and irregular dynamics in complex networks of pulse-coupled oscillatorsPhys. Rev. Lett 89
- 38.Spike-train spectra and network response functions for non-linear integrate- and-fire neuronsBiol. Cybern 99:381–392
- 39.Chaotic balanced state in a model of cortical circuitsNeural computation 10:1321–71
- 40.Dynamics of globally coupled inhibitory neurons with heterogeneityPhys. Rev. E 48:4810–4814
- 41.Synchronization and oscillatory dynamics in heterogeneous, mutually inhibited neuronsJ. Comput. Neurosci 5:5–16
- 42.Inhibitory Stabilization of the Cortical Network Underlies Visual Surround SuppressionNeuron 62:578–592
- 43.Dynamical models of cortical circuitsCurr. Opin. Neurobiol 25:228–236
- 44.GABAergic Interneurons in the Neocortex: From Cellular Properties to CircuitsNeuron 91:260–292
- 45.Chaos in random neural networksPhys. Rev. Lett 61:259–262
- 46.Transition to chaos in random neuronal networksPhys. Rev. X 5:1–28
- 47.Predictive coding in balanced neural networks with noise, chaos and delaysAdv. Neural Inf. Process. Syst :1–12
- 48.Lyapunov spectra of chaotic recurrent neural networksbioRxiv
- 49.Effects of synaptic noise and filtering on the frequency response of spiking neuronsPhys. Rev. Lett 86:2186–2189
- 50.Input correlations impede suppression of chaos and learning in balanced rate networks:1–20
- 51.Optimal Sequence Memory in Driven Random NetworksPhys. Rev. X 8
- 52.Suppressing chaos in neural networks by noisePhys. Rev. Lett 69:3717–3719
- 53.Searching for autocoherence in the cortical network with a time-frequency analysis of the local field potentialJ. Neurosci 30:4033–4047
- 54.Negatively invariant sets of compact maps and an extension of a theorem of CartwrightJ. Differ. Equations 22:331–348
- 55.Stable irregular dynamics in complex neural networksPhys. Rev. Lett 100:2–5
- 56.How Chaotic is the Balanced State?Front. Comput. Neurosci 3
- 57.Very long transients, irregular firing, and chaotic dynamics in networks of randomly connected inhibitory integrate-and-fire neuronsPhys. Rev. E 79:1–13
- 58.Dynamic flux tubes form reservoirs of stability in neuronal circuitsPhys. Rev. X 2
- 59.Statistical mechanics of spike events underlying phase space partitioning and sequence codes in large-scale models of neural circuitsPhys. Rev. E 99:1–16
- 60.Collective chaos in pulse-coupled neural networksEPL (Europhysics Letters) 92
- 61.Ph.D. thesis
- 62.Transition from Asynchronous to Oscillatory Dynamics in Balanced Spiking Networks with Instantaneous SynapsesPhys. Rev. Lett 121
- 63.Coherent oscillations in balanced neural networks driven by endogenous fluctuationsChaos 32
- 64.Asynchronous and Coherent Dynamics in Balanced Excitatory-Inhibitory Spiking NetworksFront. Syst. Neurosci 15:1–33
- 65.Coherent chaos in a recurrent neural network with structured connectivityPLoS Comput. Biol 14:1–27
- 66.Noise dynamically suppresses chaos in neural networksarXiv 22:1–5
- 67.Is gamma-band activity in the local field potential of V1 cortex a “clock” or filtered noise?J. Neurosci 31:9658–9664
- 68.Dynamical criticality in the collective activity of a population of retinal neuronsPhys. Rev. Lett 114:1–13
- 69.Weak pairwise correlations imply strongly correlated network states in a neural populationNature 440:1007–1012
- 70.Collective Behavior of Place and Non-place Neurons in the Hippocampal NetworkNeuron 96:1178–1191
- 71.What are SRB measures, and which dynamical systems have them?J. Stat. Phys 108:733–754
- 72.Smooth dynamics and new theoretical ideas in nonequilibrium statistical mechanicsJ. Stat. Phys 95
- 73.Positivity of entropy production in nonequilibrium statistical mechanicsJ. Stat. Phys 85:1–23
- 74.Topological Speed Limits to Network SynchronizationPhys. Rev. Lett 92
- 75.The simplest problem in the collective dynamics of neural networks: is synchrony stable?Nonlinearity 21:1579–1599
- 76.Synchronization of pulse-coupled biological oscillatorsSIAM J. on Appl. Math 50:1645–1662
- 77.Numerical Recipes: The Art of Scientific ComputingCambridge University Press
- 78.Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drivePhys. Rev. E 76:1–15
- 79.Chaotic behavior of multi-dimensional difference equationsIn: Functional Differential Equations and Approximation of Fixed Points (H.-O. Peitgen, H.-O. Walther, eds.), Lecture Notes in Mathematics 730:204–227
- 80.Ergodic theory of chaos and strange attractorsRev. Mod. Phys 57:617–656
Supplementary references
- 1.Chaos in neuronal networks with balanced excitatory and inhibitory activityScience 274:1724–6
- 2.Chaotic balanced state in a model of cortical circuitsNeural computation 10:1321–71
- 3.Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing RatesNeural Comput 11:1621–1671
- 4.Phase diagrams of sparsely connected networks of excitatory and inhibitory spiking neuronsNeurocomputing :307–312
- 5.Dynamical entropy production in spiking neuron networks in the balanced statePhys. Rev. Lett 105:1–4
- 6.Topological Speed Limits to Network SynchronizationPhys. Rev. Lett 92
- 7.The simplest problem in the collective dynamics of neural networks: is synchrony stable?Nonlinearity 21:1579–1599
- 8.Coexistence of regular and irregular dynamics in complex networks of pulse-coupled oscillatorsPhys. Rev. Lett 89
- 9.Synchronization induced by temporal delays in pulse-coupled oscillatorsPhys. Rev. Lett 74:1570–1573
- 10.Synchronization of pulse-coupled biological oscillatorsSIAM J. on Appl. Math 50:1645–1662
- 11.Spike-train spectra and network response functions for non-linear integrate-and-fire neuronsBiol. Cybern 99:381–392
- 12.Firing-rate response of linear and nonlinear integrate-and-fire neurons to modulated current-based and conductance-based synaptic drivePhys. Rev. E 76:1–15
- 13.Firing rate of the noisy quadratic integrate-and-fire neuronNeural computation 15:2281–2306
- 14.The dimension of chaotic attractorsPhys. D: Nonlinear Phenom 7:153–180
- 15.Chaotic behavior of multi-dimensional difference equationsIn: Functional Differential Equations and Approximation of Fixed Points (H.-O. Peitgen, H.-O. Walther, eds.), Lecture Notes in Mathematics 730:204–227
- 16.On determining the dimension of chaotic flowsPhys. D: Nonlinear Phenom 3:605–617
- 17.Chaos: From Simple Models to Complex SystemsWorld Scientific Publishing Co
- 18.Deterministic Chaos: An Introduction: Fourth EditionWiley-VCH Verlag GmbH & Co
- 19.The liapunov dimension of strange attractorsJ. Differ. Equations 49:185–207
- 20.Ergodic theory of chaos and strange attractorsRev. Mod. Phys 57:617–656
- 21.Chaos in random neural networksPhys. Rev. Lett 61:259–262
- 22.Will a large complex system with time delays be stable?Phys. Rev. Lett 93:1–4
- 23.Synchronization in networks with random interactions: Theory and applicationsChaos 16:1–21
- 24.Instability to a heterogeneous oscillatory state in randomly connected recurrent networks with delayed interactionsPhys. Rev. E 94
- 25.Spectrum of large random asymmetric matricesPhys. Rev. Lett 60:1895–1898
- 26.Transition to chaos in random neuronal networksPhys. Rev. X 5:1–28
- 27.Lyapunov spectra of chaotic recurrent neural networksbioRxiv
- 28.Parabolic Bursting in an Excitable System Coupled with a Slow OscillationSIAM J. on Appl. Math 46:233–253
Copyright
© 2023, Palmigiano et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.