Fundamental challenges of the study of oscillations due to delayed interactions.

(a) Neuronal network without synaptic delays. Whenever the unit with dynamical variable ϕj spikes, its spike is relayed immediately. (b) Delayed network. Each time a neuron j is active, it takes a time δ for the spike to propagate to the postsynaptic neurons. In the case of delayed pulse-coupled oscillators, this network can be mapped to a finite-size system with variable dimension (33). (c) An N-dimensional deterministic dynamical system is defined by N scalar initial conditions. (d) In a delayed dynamical system, each of the N variables is initialized by a history function; the system is therefore infinite-dimensional.

Extensive chaos in delayed balanced circuits.

(a) Circuit diagram of the two-stage neuronal model. Each neuron (brown) is composed of a soma (orange) linked to its single-compartment axon (SCA, green). The link weight Jlj is such that when the soma spikes (i.e. ϕj reaches threshold), the SCA (ξl) is released from its fixed-point steady state, relaying its input after a delay δ exactly defined by the connection weight (see Eq. 10, Methods). Each soma receives on average the activity of K neighbors with weight J0/√K, and external inputs of order √K. (b) Raster plot of the network's activity. (c) Firing rate distribution (left) and (d) CV distribution (right) for a small (1 ms) delay. (e) Terms involved in the calculation of the single-spike Jacobian. When an SCA (soma) spikes, the state of the postsynaptic soma (SCA) is propagated via the function f (η) from the previous spike time ts to the new spike time τs+1, then updated via g (γ), and then propagated up to the next spike time. Then the limit τs+1 → ts+1 is taken (see Methods). (f) Chaos descriptors: Kaplan-Yorke dimension (top) and metric entropy (bottom). (g) Spectrum of Lyapunov exponents. The shaded area marks the positive exponents. (h) Spectrum for different network sizes as a function of the normalized index of the exponent. A system-size-invariant Lyapunov spectrum indicates extensive chaos. (i) First Lyapunov exponent, (j) attractor dimension, and (k) metric entropy as a function of system size N.
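The two chaos descriptors in panel (f) can both be read off an ordered Lyapunov spectrum: the Kaplan-Yorke dimension interpolates where the cumulative sum of exponents changes sign, and the metric entropy is bounded above by the sum of the positive exponents (Pesin). A minimal sketch of both computations (function names are ours, not the paper's):

```python
import numpy as np

def kaplan_yorke_dimension(spectrum):
    """Interpolated (Kaplan-Yorke) dimension from a Lyapunov spectrum."""
    lam = np.sort(np.asarray(spectrum, dtype=float))[::-1]  # descending order
    csum = np.cumsum(lam)
    if csum[0] < 0:                      # no expanding direction at all
        return 0.0
    k = int(np.max(np.where(csum >= 0)[0]))  # last index with csum >= 0
    if k == len(lam) - 1:                # cumulative sum never turns negative
        return float(len(lam))
    # linear interpolation between k and k+1
    return (k + 1) + csum[k] / abs(lam[k + 1])

def metric_entropy(spectrum):
    """Pesin-type upper bound: sum of the positive exponents."""
    lam = np.asarray(spectrum, dtype=float)
    return float(lam[lam > 0].sum())
```

For the spectrum (1, 0, -2) this gives a dimension of 2.5 (two full directions plus half of the contracting one) and an entropy of 1.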

Boosting and collapse of chaos with increasing delayed inhibition.

(a) Diagram of an inhibitory network with dynamic delays; each neuron is as in Fig. 2a. The mean firing rate of the network is kept constant at 5 Hz by adjusting the homogeneous external input that each neuron receives. (b) Left: Mean squared deviation from the mean population firing rate (MSD; vanishing values indicate an asynchronous state) as a function of the synaptic delay for different values of the average in-degree K. The larger K, the smaller the delay δ needed to destabilize the asynchronous irregular state. Numbered circles indicate the points for which raster plots are shown. (c) The frequency of the oscillation is in the beta range and decreases monotonically with the delay. After a K-dependent critical delay δc, the network collapses to a limit cycle (complete synchrony) and oscillates at the target firing rate ν (see Methods). (d) Maximum Lyapunov exponent as a function of δ for different values of K. (e) A third of the spectrum of Lyapunov exponents as a function of the delay for K = 75. The blue line in panel (d) is the bottom row of this panel. (f) Intensive metric entropy (h = H/N) divided by the mean firing rate ν, giving units of bits per spike, and (g) attractor dimension d = D/N, as a function of the delay. Both the metric entropy and the attractor dimension depend non-monotonically on the delay prior to the collapse to a limit cycle.
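The MSD in panel (b) is, in essence, the temporal variance of the instantaneous population rate around its time average. A minimal sketch under that reading (the binning and normalization conventions here are our assumptions, not the paper's exact definition):

```python
import numpy as np

def population_msd(spike_counts, bin_size, n_neurons):
    """Mean squared deviation of the instantaneous population rate from
    its time average. `spike_counts` holds the total number of spikes of
    the population per time bin; a vanishing MSD signals asynchrony."""
    rate = np.asarray(spike_counts, dtype=float) / (bin_size * n_neurons)
    return float(np.mean((rate - rate.mean()) ** 2))
```

A perfectly asynchronous state yields a flat population rate and MSD = 0; a collective oscillation makes the rate (and hence the MSD) fluctuate strongly.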

Chaos is boosted at the oscillatory instability.

(a) We compare the infinite-dimensional system (the original delayed system) with the fixed, finite-degree-of-freedom system studied here. (b) Phase diagram (computed semi-analytically following (38)) showing the critical delay at which the asynchronous irregular state loses stability to collective oscillations, as a function of the mean degree K and the target mean firing rate. (c) Comparison between the critical delay estimated from simulations as the point where the MSD departs from zero (δMSD) and the one from the true delayed system (δA). (d) Same as (c), but with the critical delay estimated in simulations from the point where the first Lyapunov exponent increases. (e) Same as (c), but with the critical delay estimated from the increase in the metric entropy (δH).

Single cell and cell-type heterogeneity perpetuate chaos.

(a) Network of heterogeneous inhibitory neurons. (b) Maximum Lyapunov exponent, (c) network-size-intensive metric entropy normalized by the firing rate, and (d) intensive attractor dimension as a function of the delay for different values of heterogeneity. The homogeneous case (of Fig. 3) is shown for reference. (e) Spiking patterns at different values of the synaptic delay. Notice how, despite the seemingly strong synchronization for large delays, the network remains strongly chaotic. (f) Diagram of a network of N neurons, NE = 0.8N excitatory and NI = 0.2N inhibitory, in the balanced state. The connectivity matrix has excitatory entries JEEϵη and JIEϵη, the corresponding inhibitory entries JEI and JII, and η = 0.9. (g) Maximum LE as a function of the delay for different strengths of the excitation-to-inhibition connectivity ϵ. Dashed dark gray is a network of N inhibitory neurons. (h) Intensive metric entropy as a function of the delay for different values of ϵ, and for a network of purely inhibitory neurons of size N (dashed dark gray) and NI (dashed light gray). (i) Intensive attractor dimension as a function of the delay for different values of ϵ, and for a network of purely inhibitory neurons of size N (dashed dark gray) and NI (dashed light gray). (j) Spiking patterns. Red spikes belong to E neurons, blue to I neurons.

Resonant boosting of chaos is quenched by transient synchrony.

(a, left) Diagram of a non-delayed inhibitory network driven by harmonic external input. Each neuron receives, besides its recurrent input, K external spike trains sampled from an inhomogeneous Poisson process with a harmonically modulated rate with baseline I0, amplitude I1, and frequency f, designed to mimic the oscillatory input after the transition to oscillations in Figure 3. (a, right) Raster plots of the network activity driven by external inputs of frequencies f = {1, 20} Hz. (b) Maximum Lyapunov exponent, (c) metric entropy, and (d) attractor dimension, as a function of the frequency f of the cosine-modulated Poisson input, for increasing values of the input drive's oscillatory amplitude I1. The top dashed line in panel (c) is the value of the entropy with a constant input (not Poisson, i.e. that of Fig. 3). The bottom dashed line is that of a non-harmonically modulated Poisson drive. Notice how the external drive reduces chaos proportionally to its intensity for smaller frequencies, and how this behavior is reversed in the neighborhood of the resonant frequency of the network. (e, left) Diagram of a non-delayed inhibitory network driven by an external input with transient oscillations. Each neuron receives K external spike trains sampled from an inhomogeneous Poisson process with a rate given by transient oscillations of a different spiking network model (see (10) and Methods) with baseline I0, amplitude I1, and a given peak frequency. (e, right) Raster plots of the network activity driven by external inputs with different peak frequencies. (f) Maximum Lyapunov exponent, (g) metric entropy, and (h) attractor dimension, as a function of the peak frequency of the transient-oscillation-modulated Poisson input, for increasing values of the input drive's oscillatory amplitude I1. Notice how, compared to the harmonic case, a drive with transient synchrony and drifting frequencies recruits responses of the network that can either reduce or intensify chaos.
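An inhomogeneous Poisson train with a harmonically modulated rate, as in panel (a), can be generated by thinning a homogeneous Poisson process. A sketch assuming the rate r(t) = I0 + I1 cos(2πft) with I1 ≤ I0 (the function name and the thinning choice are ours):

```python
import numpy as np

def harmonic_poisson_train(I0, I1, f, T, rng=None):
    """Spike times in [0, T) from an inhomogeneous Poisson process with
    rate r(t) = I0 + I1*cos(2*pi*f*t), generated by thinning a homogeneous
    process of rate I0 + I1 (requires I1 <= I0 so r(t) stays non-negative)."""
    rng = np.random.default_rng() if rng is None else rng
    r_max = I0 + I1                      # upper bound on the rate
    n = rng.poisson(r_max * T)           # candidate spikes of the bounding process
    t = np.sort(rng.uniform(0.0, T, n))
    accept = rng.uniform(0.0, r_max, n) < I0 + I1 * np.cos(2 * np.pi * f * t)
    return t[accept]
```

Over an integer number of modulation periods the accepted train has mean rate I0, so the drive's oscillation amplitude I1 can be swept without changing the mean input.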

List of default network parameters.

Single-compartment axon unit.

(a) Three possible representations of the dynamics of the QIF neuron. (Left) A quadratic integrate-and-fire (QIF) neuron has two fixed points, a stable one (black) and an unstable one (white). This neuron is solvable analytically either in the tonic-spiking regime or in the excitable regime, but not in both. The neuron model diverges in finite time, emitting a spike. (Middle) A classic variable transformation was introduced by (28) and is known as the theta neuron. (Right) We introduce a variable transformation, the ξ representation, tailored to the SCA. This representation is analytically solvable. The SCA at rest at ξ = 0 is pulled towards −∞ when receiving a spike, and evolves back towards zero, emitting a spike at ξ = −1. (b) Phase space of the SCA ξ (top) and the canonical QIF representation (bottom). (c) Time evolution without reset, with the fixed point ξr = ξt = −1. When the ξ variable crosses −1 (top), the voltage in the voltage representation diverges (bottom). (d) Time evolution when resetting the SCA to its stable fixed point, i.e. ξr = 0, in both representations.
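For reference, the middle (theta-neuron) representation can be simulated directly. The sketch below uses the standard Ermentrout-Kopell canonical form dθ/dt = (1 − cos θ) + (1 + cos θ)I, obtained from the QIF via V = tan(θ/2), with a spike registered when θ crosses π; the Euler discretization and function name are our choices:

```python
import numpy as np

def theta_neuron_spikes(I, T, dt=1e-4):
    """Forward-Euler integration of the theta neuron
    d(theta)/dt = (1 - cos theta) + (1 + cos theta) * I,
    the phase form of the QIF under V = tan(theta/2).
    A spike is registered each time theta crosses pi."""
    theta, spikes = 0.0, []
    for step in range(int(T / dt)):
        theta += dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I)
        if theta > np.pi:
            spikes.append(step * dt)
            theta -= 2.0 * np.pi
    return spikes
```

In the tonic regime (I > 0) the phase advances forever and the inter-spike interval is π/√I, which makes the representation convenient precisely because nothing diverges, unlike the raw voltage.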

Dependence on the reset values of the single compartment axon

(see Eq. S35 and Methods); ξr = 0 corresponds to the exact-delay framework. (a) Full spectrum for different values of the reset. The spectrum is composed of two branches (left). The first one (middle) characterizes the biologically relevant degrees of freedom (up to i = N, where N is the number of soma units in the network). The second branch (right) is defined by the time it takes for the axon to relax back to its fixed point. In the “exact delay” framework where ξr = 0, that time constant is zero and therefore the second branch of the Lyapunov exponents diverges to −∞. If the reset point is close to the fixed point, the SCA relaxes faster than what is dictated by its time constant τm,ξ. As can be seen, the first part of the spectrum is independent of the choice of ξr, provided that the time constant is chosen such that the dynamics are not visibly affected. This results in identical dependences of the normalized metric entropy h (b) and the attractor dimension (c) on the value of the delay.

Convergence of the Lyapunov spectrum and finite size effects in the first Lyapunov exponent.

(a) Example traces of Lyapunov exponents for different values of the delay. (b) The differences |Δλi| between LE traces for different instantiations of the random connectivity shrink with time; over the course of seconds-long simulations, each LE converges to its true value. (c) Distributions of the coefficient of variation and firing rate distributions for the example rasters in Figure 3 of the main text. (d) First Lyapunov exponent as a function of the delay for different network sizes N. The larger the network, the clearer the downward dependence of the first exponent on the delay before the transition. (e) MSD, indicating that the increase in the exponent arises from the mean-field Hopf bifurcation.
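The convergence behavior in panels (a-b) reflects how Lyapunov exponents are estimated in general: as a running time average of local expansion rates, which settles onto its asymptotic value as the simulation lengthens. A toy illustration on a 1-D map (not the network model itself), where the largest exponent is simply the trajectory average of log|f′(x)|:

```python
import numpy as np

def map_lyapunov(f, df, x0, n_steps, n_transient=1000):
    """Largest Lyapunov exponent of a 1-D map as the long-time average
    of log|f'(x)| along the trajectory. The running average converges
    with the number of iterations, as the LE traces do with time."""
    x = x0
    for _ in range(n_transient):       # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n_steps):
        acc += np.log(abs(df(x)))
        x = f(x)
    return acc / n_steps
```

For the fully chaotic logistic map f(x) = 4x(1 − x), the estimate approaches the known value ln 2.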

Structure of the Lyapunov Spectrum (K=50).

(a) Fraction of positive Lyapunov exponents. (b) Slope of the λi computed via the nth difference, with n < 10 (before the transition occurs), where λi(0) is the spectrum of the non-delayed system. (c) Cumulative sum of the magnitude computed in (b). The minimum indicates the fraction of LEs that have an initially negative dependence on the delay, as computed by taking the nth difference. Approximately the first 5% of LEs decrease in magnitude with increasing delay before the transition. (d) All biologically relevant Lyapunov exponents as a function of the delay for a single realization of the random connectivity. After the transition at δc = 3 ms, the maximum Lyapunov exponent vanishes (the trivial exponent is present because of the translation invariance of the system) and the rest of the spectrum relaxes to the inverse of the membrane time constant τm = 10 ms. After the instability, all positive Lyapunov exponents increase in magnitude (the LE that was initially zero is dashed), while negative LEs decrease in magnitude. (e) Fraction of LEs that have a negative slope with the delay before the transition, as a function of the nth difference. (f) Slope of the λi computed via the nth difference for n after the oscillatory transition. Right after the transition, most LEs have a larger value than in the non-delayed case. (g) Cumulative sum of the magnitude computed in (f).

Numerical precision of Richardson's method.

(a) The threshold-integration method developed by Richardson (11) involves integrating the first derivative of the voltage probability distribution, and that of the current, backwards from threshold to a lower bound of the voltage. The QIF neuron used here has neither a threshold nor a reset value, given that it diverges in finite time; the critical delay computed with this method will therefore depend on the choice of the integration bounds. Empirically, nevertheless, we see that the estimate changes very little beyond a certain threshold value. Shown is the critical delay for a network with K = 50, target firing rate ν = 5 Hz, and other parameters as default, as a function of dV (the integration step) and the value of the threshold chosen for integration. (b-c) Cross sections of (a).

Transition to collective oscillations as a function of the average indegree K.

Because of the balanced-state scaling, in which the connection weights scale as 1/√K, increasing K also makes each connection weaker. (a) Inhibitory network with delays. (b) Maximum Lyapunov exponent as a function of K for different values of the delay. The critical value of the in-degree at which the asynchronous irregular state loses stability to a collective oscillation is computed as in Figure 4 of the main text and indicated by vertical arrows in matching colors. (c) Metric entropy as a function of K for different values of the delay. (d) Attractor dimension as a function of K for different values of the delay. Even though the attractor dimension decreases with K up to the oscillatory instability, it suddenly increases at the transition. All measures of chaos increase at the oscillatory transition.
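The 1/√K weight scaling keeps input fluctuations of order one while the mean recurrent input cancels against an O(√K) external drive. A quick numerical check of this cancellation, under illustrative parameter choices (J0, ν, and the Bernoulli activity model are our assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
J0, nu = 1.0, 0.1   # illustrative coupling scale and presynaptic activity level
for K in (100, 400, 1600):
    # K inhibitory inputs of weight J0/sqrt(K), each active with probability nu,
    # balanced against an excitatory feedforward drive of order sqrt(K)
    activity = rng.binomial(1, nu, size=(10_000, K))
    net_input = np.sqrt(K) * J0 * nu - (J0 / np.sqrt(K)) * activity.sum(axis=1)
    print(f"K={K}: mean={net_input.mean():+.3f}, var={net_input.var():.3f}")
```

The printed mean stays near zero and the variance stays near J0²ν(1 − ν) for every K, illustrating why increasing K weakens individual connections without quenching the input fluctuations that drive irregular firing.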

(a) Non-delayed E-I network in the balanced state. The asynchronous irregular state loses stability with increasing inhibitory time constant, as described theoretically in (2). We see that at the onset of collective oscillations, all chaotic measures increase. (b) Maximum Lyapunov exponent as a function of the inhibitory time constant τI. (c) Raster plots for different values of the inhibitory time constant. (d) Metric entropy as a function of τI. (e) Attractor dimension as a function of τI.

Single cell heterogeneity perpetuates chaos.

(a) Network of inhibitory neurons with heterogeneity, and raster plots of the network activity for different delays; observe the difference in the spiking pattern after the first oscillatory transition (middle bottom panel) and after the second one (bottom panel). (b) Direct computation of the maximum LE, obtained by numerically perturbing the trajectory, compared with the method developed here (N = 5000), showing good agreement. (c) MSD and (d) maximum LE for different network sizes as a function of the delay for an example heterogeneous network with σ = 0.5. The dashed line in (c) indicates the MSD of a homogeneous network for reference. The maximum exponent grows with network size and converges to a size-independent value, indicating that finite-size fluctuations only lead to an underestimation of the chaotic nature of the circuit.

Impact of heterogeneity.

(a) Amplitude and (b) frequency of the population oscillation as a function of the delay for different values of membrane-time-constant heterogeneity (uniform distribution with mean 10 ms and the indicated standard deviation). (d) The maximum Lyapunov exponent and (e) the coefficient of variation have similar dependences on the delay. (f) Ratio between the input current the neurons need to reach the target firing rate and the one predicted theoretically from the balanced-state condition, Eq. S3.

Robustness of the findings.

(a-i) Robustness for different values of the mean firing rate. (a) Amplitude of the time-dependent population rate (also multi-unit activity, MUA) as a function of the synaptic delay δ. (b) Frequency of the oscillation vs δ. Values are depicted only when the MUA amplitude departed from baseline. (c) Standard deviation of the firing rate distribution vs δ. (d) Ratio between the input current the neurons need to reach the target firing rate and the one predicted theoretically from the balanced-state condition, Eq. S3. This number quantifies the departure from the balanced-state equation. (e) Maximum Lyapunov exponent vs δ. (f) Coefficient of variation vs δ. (g) Metric entropy vs δ. Notice that it is not normalized by the firing rate, so the units are bits per second. (h) Lyapunov dimension. Parameters: defaults as in Table 1 (see Methods), except for panels (g-h), where N = 2000, and the changes indicated in the figure. (i) Top: dependence of the critical delay on the mean rate ν. Top-middle: frequency at the onset of collective oscillations as a function of ν. Bottom-middle: maximum value of the maximum Lyapunov exponent as a function of ν. Bottom: value of the delay at which the maximum Lyapunov exponent peaks vs the value of the delay at which the coefficient of variation peaks. (j-r) Robustness for different values of the connectivity strength J0; panels identical to (a-i). (s-zz) Robustness for different values of K; panels identical to (a-i).

Dependence of the maximum Lyapunov exponent on the intensity of the EI loop.

(a) The maximum Lyapunov exponent is invariant under small changes in the excitation-ratio parameter η (see Eq. A). (b) Larger values of the feedback-loop strength ϵ delay the transition to collective rhythms and push the peak of the exponent towards larger delays. Parameters: defaults, except for N = 5000.

(a) Harmonically driven case. (b) Synchronization index as a function of the frequency f of the harmonic drive. (c) Entire LE spectrum of the driven network for different values of the frequency of the harmonic input drive (left), and a third of the spectrum for drive frequencies f between 15 and 30 Hz. Notice how, when the maximum Lyapunov exponent starts to increase, the negative exponents become more negative, resulting, unexpectedly, in a decrease instead of an increase of the attractor dimension for frequencies smaller than the critical one.

Classic rate networks behave like classic scalar delayed systems.

(a) Diagram of the small-delay approximation network. There are 𝒪·N extra units for an approximation of order 𝒪 in a network of N neurons (see Methods). (b) Distribution of perturbation decay rates νk, associated with the eigenvalues λk of the random matrix J, for the exact delayed system (purple) and for the small-delay approximation (turquoise) of order 𝒪 = 2 (left column) or 𝒪 = 4 (right column). (c) Lyapunov spectrum of the approximate system for orders 2, 3, and 4 at an intermediate delay. (d) Mean squared deviation, (e) maximum Lyapunov exponent, (f) attractor dimension, and (g) metric entropy as a function of the synaptic delay. Notice that the metric entropy is in bits per second.
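One way to picture the 𝒪 extra units per neuron is the classic linear-chain trick: a delay δ is approximated by a cascade of 𝒪 first-order low-pass filters with time constant δ/𝒪, whose group delays sum to δ. A sketch of that idea (our illustration, not the paper's exact construction):

```python
import numpy as np

def delay_chain(signal, delta, order, dt):
    """Approximate signal(t - delta) by passing it through `order`
    cascaded first-order low-pass filters with time constant delta/order
    (linear-chain trick); forward-Euler integration."""
    tau = delta / order
    stages = np.zeros(order)             # state of each chain unit
    out = np.empty(len(signal))
    for i, s in enumerate(signal):
        inp = s
        for k in range(order):
            stages[k] += dt * (inp - stages[k]) / tau
            inp = stages[k]              # feed each stage into the next
        out[i] = inp
    return out
```

A unit step passed through the chain reaches half height roughly δ later, and the approximation sharpens as the order grows, which is why higher orders in panel (c) track the exact delayed system more closely.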

Inhibitory delayed rate networks behave like classic scalar delayed systems.

(a) Diagram of the small-delay approximation network. There are 𝒪·N extra units for an approximation of order 𝒪 in a network of N neurons (see Methods). Both the delayed rate network and the approximate “weakly delayed” networks used here show a transition from fully developed chaos (top) to clock-like synchrony. (b) The mean squared deviation from the mean is vanishingly small before the critical delay, indicating asynchronous dynamics. (c) First Lyapunov exponent as a function of the delay. (d) Attractor dimension. (e) Metric entropy.