Boosting of neural circuit chaos at the onset of collective oscillations

  1. Center for Theoretical Neuroscience, Zuckerman Institute, Columbia University, New York, USA
  2. Göttingen Campus Institute for Dynamics of Biological Networks, Göttingen, Germany
  3. Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
  4. Bernstein Center for Computational Neuroscience, Göttingen, Germany
  5. Institute for Dynamics of Complex Systems, Georg-August University, Göttingen, Germany
  6. Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany

Peer review process

Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.


Editors

  • Reviewing Editor
    Julijana Gjorgjieva
    Technical University of Munich, Freising, Germany
  • Senior Editor
    Panayiota Poirazi
    FORTH Institute of Molecular Biology and Biotechnology, Heraklion, Greece

Reviewer #1 (Public Review):
Summary:
Cortical activity displays high trial-to-trial variability and oscillatory transients. These dynamical features have implications for how information is encoded and transmitted in the brain. While trial-to-trial variability in asynchronous dynamical states has been widely studied via mathematical models, work investigating variability in synchronous states is scarcer. In this study, the authors characterise the nature of the chaotic attractor underlying neural activity at the onset of oscillations induced by transmission delays. They find that variability is boosted by delay-induced oscillations in comparison to the asynchronous state.

Strengths:

1. Quantifying the chaotic nature of high-dimensional neural activity is a hard mathematical challenge. This work builds upon prior theoretical work to study how spike chaos is affected by oscillatory mean activity, a phenomenon frequently observed in the cortex.

2. The evidence supporting all findings appears to be highly robust.

3. The manuscript is well written.

Weaknesses:

1. The core contribution of the paper is a description of chaotic activity as delays are increased (Fig. 2). Within the main text, it is noted that two instabilities leading to oscillatory activity emerge. However, the definition and nature of these two transitions lack some clarity. In particular, whether the two transitions are "real" (meaning that they separate three distinct regimes of activity), or whether they rather correspond to different measures of the same underlying instability, remains opaque.

2. While the mathematical aspects of the analysis are discussed in detail, the biological implications of the findings remain less clear. In particular, a discussion of the implications of the findings for cortical coding is missing. Furthermore, while the authors have made an effort to contextualize their findings within the dynamical systems and applied mathematics literature, the relationship with the corresponding neuroscience literature is less developed.

3. The connection with biology is also hindered by the fact that the measures used to characterise trial-to-trial variability (metric entropy and Kaplan-Yorke dimension) differ significantly from those commonly used in the analysis of experimental data, and these measures are not contextualized within the manuscript (see the illustrative sketch after this list).

4. The text contains a significant amount of undefined mathematical jargon.

5. For the purpose of the mathematical analysis, the original delayed model is replaced with an effectively delayed version. The authors convincingly demonstrate an alignment between the outcomes of the two models. This alignment appears to be unaffected by variations in the reset parameter of the effective model (Fig. S2). Nonetheless, a systematic discussion of the efficacy and limitations of this replacement seems absent. Under what circumstances are the two models equivalent? Conversely, when does their correspondence break down?
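Regarding point 3: the two attractor measures in question are conventionally derived from the Lyapunov spectrum, with the Kaplan-Yorke (Lyapunov) dimension estimating the attractor's dimensionality and the sum of positive exponents giving the Pesin upper bound on the metric (Kolmogorov-Sinai) entropy. The following is a minimal illustrative sketch using a made-up spectrum, not the manuscript's calculation:

```python
import numpy as np

def kaplan_yorke_dimension(lyapunov_spectrum):
    """Kaplan-Yorke (Lyapunov) dimension estimated from a Lyapunov spectrum."""
    lam = np.sort(np.asarray(lyapunov_spectrum, dtype=float))[::-1]  # sort descending
    partial_sums = np.cumsum(lam)
    nonneg = np.where(partial_sums >= 0)[0]
    if nonneg.size == 0:
        return 0.0                        # no expanding directions at all
    k = nonneg[-1] + 1                    # largest k with a non-negative partial sum
    if k == lam.size:
        return float(k)
    return k + partial_sums[k - 1] / abs(lam[k])

def metric_entropy_bound(lyapunov_spectrum):
    """Pesin upper bound on metric (Kolmogorov-Sinai) entropy: sum of positive exponents."""
    lam = np.asarray(lyapunov_spectrum, dtype=float)
    return lam[lam > 0].sum()

spectrum = [1.2, 0.4, -0.1, -0.8, -2.0]   # illustrative values, units of 1/s
print(kaplan_yorke_dimension(spectrum))    # -> 4.35
print(metric_entropy_bound(spectrum))      # -> 1.6 (nats per second)
```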

Reviewer #2 (Public Review):

Summary:
The authors investigate the effect of oscillatory activity on the chaotic dynamics of high-dimensional networks. The network oscillations are internally generated by synaptic delays, which are known to produce oscillations. The authors demonstrate that the intensity of the chaos and the dimension of the chaotic attractor peak at a particular delay value. A similar effect is found when an external input drives the network; in this case, these quantities peak at the network's resonant frequency. This shows that the intensity of the chaotic dynamics can be boosted by internally or externally generated oscillations.

Strengths:
The paper is technically solid. The authors introduce a novel method to compute the Lyapunov spectrum of networks with delays, which are formally infinite-dimensional, by effectively transforming them into finite-dimensional networks. The conclusions of the paper are supported by strong analytical calculations and by novel, intensive numerical methods.
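As background for readers unfamiliar with full-spectrum Lyapunov calculations: the generic (Benettin-style) procedure propagates a set of orthonormal perturbation vectors with the Jacobian along a reference trajectory and re-orthonormalizes them by QR decomposition, accumulating the logarithms of the diagonal of R. The sketch below assumes user-supplied step and jacobian maps; it illustrates the general method, not the authors' specific single-spike Jacobian construction:

```python
import numpy as np

def lyapunov_spectrum(step, jacobian, x0, n_exponents, n_steps, dt):
    """Benettin-style estimate of the leading Lyapunov exponents of a map.

    step(x)     -> state after one iteration (e.g. one time step or one spike)
    jacobian(x) -> Jacobian of that map evaluated at x
    """
    x = np.asarray(x0, dtype=float)
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((x.size, n_exponents)))  # orthonormal perturbations
    log_growth = np.zeros(n_exponents)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(jacobian(x) @ Q)   # evolve perturbations, then re-orthonormalize
        log_growth += np.log(np.abs(np.diag(R)))
        x = step(x)                            # evolve the reference trajectory
    return log_growth / (n_steps * dt)         # exponents per unit time
```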

Weaknesses:
The main weakness is that it is difficult to see the relevance of the paper's findings to neuroscience. It is not clear to me that measures such as the entropy production rate of a chaotic attractor in spiking networks, its dimension, and its Lyapunov spectrum are experimentally relevant. Moreover, the authors make little to no attempt to provide interpretations for these quantities or to place their work in the broader context of systems neuroscience. The paper is also written in an overly technical way, at times using jargon that may be difficult to follow for non-experts in mean-field theory and statistical physics.

Reviewer #3 (Public Review):

Summary:
In this work, the authors propose a novel method for analyzing spiking network models with delays. By modeling the delay as an additional axonal component that relays spikes, the infinite-dimensional system of the delayed network is transformed into a system of finite dimensions. This allows the calculation of the entire spectrum of Lyapunov exponents, which provides information on the dimensionality of the attractor and the noise entropy of network responses. The authors demonstrate that chaos intensifies at the onset of oscillations as the synaptic delay increases. This is surprising, since network oscillations have been thought to indicate regular firing activity. The authors find similar results in different types of networks and in networks driven by oscillatory inputs, suggesting that the boosting of chaos by oscillations may be a general feature of spiking networks.

Strengths:
This work builds on the authors' past work characterizing chaos in spiking networks and extends it to include synaptic delays. The transformation of a delayed network into a network of two-compartment neurons, modeling spike generation and transmission, is novel and interesting. This allows an analytical expression for the single-spike Jacobian of the network dynamics, which can be used to calculate the full spectrum of Lyapunov exponents.

The analysis is rigorous and the parameter study is comprehensive.

Weaknesses:
Because the delayed interaction is spike-triggered, it effectively requires only N variables, one per neuron, tracking the time since that neuron's last spike. The axonal component only implements the delay in transmitting a spike and does not interact with other neurons. It seems that the axonal component could simply be modeled as a variable counting the time since the last spike, rather than as a QIF model. Is there any advantage to modeling the axonal component as a QIF model? Supplementary Figure S2 considers the case of a "dynamic delay", where the delay time can depend on network activity, but the Lyapunov exponents seem to be largely independent of the reset parameter.
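For concreteness, the alternative raised here amounts to simple per-neuron countdown bookkeeping with no axonal QIF dynamics. The sketch below is hypothetical (it assumes at most one spike in transit per neuron) and is not the authors' two-compartment construction:

```python
import numpy as np

def register_spikes(time_to_delivery, spikers, delay):
    """Start a countdown of length `delay` for each neuron that just spiked."""
    time_to_delivery[spikers] = delay

def advance_delays(time_to_delivery, dt):
    """Decrement active countdowns; return indices whose delayed spike arrives now."""
    in_transit = time_to_delivery >= 0.0          # -1 marks "no spike in transit"
    time_to_delivery[in_transit] -= dt
    delivered = np.where(in_transit & (time_to_delivery <= 0.0))[0]
    time_to_delivery[delivered] = -1.0            # clear delivered spikes
    return delivered

countdown = -np.ones(5)                            # 5 neurons, nothing in transit
register_spikes(countdown, [1, 3], delay=0.003)    # neurons 1 and 3 just spiked
print(advance_delays(countdown, dt=0.001))         # -> [] (spikes still in transit)
```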

In most of the results, the network mean firing rate is kept at a fixed value while the delay parameter is varied. What would the results be if only the delay parameter changed? It would be helpful if the authors could explain why keeping the network rate constant provides a better comparison.

The majority of the neurons have a CV below 1 (Fig. 2d and Fig. S3c). This indicates that many neurons are in the mean-driven regime, which differs from balanced networks, where CVs are around 1. It would be helpful for the authors to comment on this discrepancy.
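For reference, the CV quoted here is the standard interspike-interval statistic: the ISI standard deviation divided by the ISI mean, which is close to 1 for Poisson-like irregular firing and below 1 for more regular, mean-driven firing. A minimal sketch:

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of the interspike intervals of one spike train."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    if isi.size < 2:
        return np.nan                 # too few intervals to estimate variability
    return isi.std() / isi.mean()

print(isi_cv(np.arange(0.0, 1.0, 0.02)))                      # regular firing -> ~0
print(isi_cv(np.cumsum(np.random.exponential(0.02, 2000))))   # Poisson-like -> ~1
```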
