Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.
Read more about eLife’s peer review process.

Editors
- Reviewing Editor: Friedemann Zenke, Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Senior Editor: Panayiota Poirazi, FORTH Institute of Molecular Biology and Biotechnology, Heraklion, Greece
Reviewer #1 (Public review):
Summary:
The aim of this paper is to model the spontaneous emergence of sequences in networks of plastic spiking neurons. By spontaneous, the authors mean that the inputs have no structure and no sequences, yet the network nevertheless generates sequences. To obtain this, they assume several synaptic and single-neuron plasticity rules. The primary findings are that sequences can emerge, that they slowly drift over time, and that weights also change constantly over time, although very strong weights are more stable. The main drivers of these results are the assumed plasticity rules.
Strengths:
The paper is based on simulations of a relatively large network of conductance-based integrate-and-fire neurons. Two different pair-based STDP rules are assumed, one for excitatory-to-excitatory synapses and one for inhibitory-to-excitatory synapses. In addition, weights are normalized, and the spiking threshold adapts through intrinsic plasticity. The network is analyzed via simulations and data processing akin to what would be done for physiological data. The simulations are extensive, and the analysis seems rigorous.
Weaknesses:
There are several fundamental problems with the paper:
(1) The plasticity mechanisms used assume that pair-based STDP is sufficient to account for synaptic plasticity in vivo. This is unrealistic. Various papers have shown that pair-based STDP models do not account well for experimental data. If this model is a simulation of the visual cortex (which is unclear), firing rates can be high enough that rates matter more than spike times; we have known that firing rates matter since the original Markram et al. paper from 1997. Even if pair-based STDP is used, we know from Bi and Poo (1998) that synaptic plasticity is weight-dependent, such that strong weights potentiate less and decay more. This additional assumption alone might completely change the results of this study. We do not really know how to model realistic synaptic plasticity, but we know that pair-based STDP is a poor model. Would these results be robust to a change in the learning rule, for example, to a triplet-based, calcium-based, or voltage-based rule? Are the results robust even to slight modifications of the learning rule, for example, adding weight dependence to pair-based STDP?
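For concreteness, the weight dependence referred to here can be sketched as a single exponent interpolating between the additive rule (as used in most pair-based models) and a multiplicative rule in which strong weights potentiate less and depress more. All parameter values below are illustrative, not the authors' values:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0,
                w_max=1.0, mu=1.0):
    """Pair-based STDP update with tunable weight dependence.

    mu = 0 recovers the additive (weight-independent) rule;
    mu = 1 gives a multiplicative rule in which strong weights
    potentiate less and depress more (cf. Bi & Poo 1998).
    dt = t_post - t_pre in ms; parameters are illustrative.
    """
    if dt > 0:   # pre before post: potentiation, scaled by headroom
        return w + a_plus * (w_max - w) ** mu * np.exp(-dt / tau)
    else:        # post before pre: depression, scaled by current weight
        return w - a_minus * w ** mu * np.exp(dt / tau)

# Under mu = 1, a strong weight gains less from the same pairing:
weak_gain = stdp_update(0.1, 5.0, mu=1.0) - 0.1
strong_gain = stdp_update(0.9, 5.0, mu=1.0) - 0.9
assert weak_gain > strong_gain > 0
```

Since the stability of the strongest weights is one of the paper's main results, checking whether it survives even this one-parameter modification would be informative.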
(2) The first stage of training, in which the network reaches a steady state, is unclear. What type of activity is exhibited in this network? Does most of it arise from the external inputs? What firing rates are obtained? What are the spike statistics? This is important because this activity is responsible for generating the emergent sequences, and also depends (I think) on the plasticity mechanisms. Does the 'spontaneous activity' in the network depend strongly on the external input? Figure 1E is where we see a raster plot, but it shows only neurons within a sequence, and those neurons seem to fire almost only once. Before showing sequences, the more general structure of the spiking activity and how it evolves should be explained and quantified.
(3) Do these sequences really emerge without structured inputs? Is there any evidence to suggest that such sequences emerge without structured input? If yes, please cite it. It is plausible that they would, because the time scale of these sequences is much faster than the sensory or behavioral time scale. However, experimental evidence to support this would make the paper much more interesting.
(4) This paper is a phenomenological paper. It does not really say what these sequences might be good for, except for a citation or two, and it does not model any specific experiment. There is a medium here (a plastic spiking network) that generates a phenomenon (sequences). It also generates other measurable phenomena, such as connectivity motifs. Such motifs have been quantified in animals. It would be natural to compare the motif statistics found here to motifs characterized experimentally. This would make these results more substantial.
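The comparison suggested here is straightforward to set up: count the fan-in/fan-out triads in the learned connectivity and compare against a null expectation. A minimal sketch, assuming a binary adjacency matrix and an Erdős–Rényi null (the authors would use their own network and a degree-matched null):

```python
import numpy as np

def fan_motif_counts(adj):
    """Count fan-out (one source, two targets) and fan-in
    (two sources, one target) triads in a binary adjacency
    matrix with adj[i, j] = 1 iff i -> j. Illustrative only."""
    out_deg = adj.sum(axis=1)
    in_deg = adj.sum(axis=0)
    fan_out = int((out_deg * (out_deg - 1) // 2).sum())
    fan_in = int((in_deg * (in_deg - 1) // 2).sum())
    return fan_out, fan_in

# Compare an observed network against the Erdős–Rényi expectation:
rng = np.random.default_rng(0)
n, p = 100, 0.1
adj = (rng.random((n, n)) < p).astype(int)
np.fill_diagonal(adj, 0)
observed = fan_motif_counts(adj)
expected = n * (n - 1) * (n - 2) / 2 * p**2  # for each fan type
```

An over-representation ratio computed this way could be placed directly next to published motif statistics from cortical slice experiments.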
(5) There are implicit predictions in the work. For example, about the stability of strong vs. weak efficacies or the stability of different motifs. Such predictions should be made more explicit.
Reviewer #2 (Public review):
Summary:
This paper investigates how a combination of spike-timing-dependent plasticity rules in recurrent spiking networks leads to the spontaneous emergence of repeating neuronal sequences. The authors show that despite the weight distribution reaching a steady state, individual synaptic connections undergo constant turnover with timescales that depend on connection strength. The plasticity rules promote fan-in/out connectivity motifs that appear to support sequence generation.
Strengths:
The question addressed is important and biologically relevant. The most interesting finding of the paper is the coexistence of a stable weight distribution with constant turnover of individual synaptic connections. The simulations seem to be carefully executed.
Weaknesses:
The paper does not make a sufficient attempt to explain why the observed phenomena arise under the specific learning rules employed. There is no theoretical reduction, no analytical argument, and no mechanistic intuition. As it stands, this reads as a descriptive simulation study.
It is never made clear which results reflect robust qualitative phenomena and which are specific to the particular hyperparameter choices of these simulations. Specific percentages and parameter values are reported throughout the main text without justification of their importance or generality.
The finding that sequence composition undergoes continual turnover while the global weight distribution remains stable is interesting, but the authors should more carefully situate this result within the existing theoretical literature on synaptic drift and sequence stability under ongoing plasticity. Several modeling papers have addressed related phenomena, and the novelty of the present contribution relative to this body of work is not clearly established.
Reviewer #3 (Public review):
Summary:
This modelling study connects synaptic plasticity, connectivity motifs, and representational drift. The authors combine excitatory and inhibitory STDP with weight normalization and intrinsic plasticity in a recurrent spiking network of AdEx neurons. This combination generates heavy-tailed synaptic weight distributions and supports repeating spike sequences under both unstructured and structured inputs. While global network statistics stabilize over time, individual synapses continue to change, creating a form of drift. Structured inputs further stabilize sequences, yet the network retains flexibility to learn new patterns.
Strengths:
(1) Multi-scale turnover analysis:
The authors study the evolution of individual synapses, 3-neuron motifs, follower neurons, and entire neuronal sequences, revealing distinct turnover timescales.
(2) Fan-in/out motif analysis:
A specific connectivity motif (fan-in/out) is shown to be over-represented in the network and preferentially stabilised by the plasticity rules compared to other possible motifs. This generates interesting insights and testable predictions.
(3) Connection to representational drift:
The connection of ongoing synaptic plasticity to drift is timely and interesting, reproducing observations of macro-level stability and synapse-level turnover with a relatively simple mechanism.
(4) Rigour and thoroughness:
The overall quality of the numerical experiments performed in this study is high, with extensive supplementary material performing various controls to solidify the claims.
Weaknesses:
(1) Limited connection to network function:
Sequence detection relies on a rather artificial protocol (forced spiking of a single neuron 1,000 times), which I suspect mostly tests whether the lognormal tail of the weight distribution can propagate activity. This risks being circular. Performing the same sequence analysis on a random network, or on a network with the same weight distribution but shuffled weights, would help disentangle what comes from a generic heavy-tailed weight distribution from what comes from the particular weights potentiated by the plasticity rules used here.
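The control I have in mind is cheap to implement: keep the connectivity graph and the weight distribution exactly as learned, but permute which existing synapse carries which weight. A sketch (hypothetical interface, not the authors' code):

```python
import numpy as np

def shuffle_weights(w, seed=None):
    """Keep the connectivity graph and the (lognormal) weight
    distribution, but randomly permute which existing synapse
    carries which weight. Running the same sequence analysis on
    the shuffled network isolates what a generic heavy tail can
    do without the specific weights potentiated by plasticity."""
    rng = np.random.default_rng(seed)
    w_shuf = w.copy()
    idx = np.flatnonzero(w_shuf)          # existing synapses only
    w_shuf.flat[idx] = rng.permutation(w_shuf.flat[idx])
    return w_shuf
```

If the shuffled network produces comparable sequences, the phenomenon is attributable to the heavy-tailed distribution itself rather than to the specific synapses that plasticity strengthened.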
The network, which would classically be evaluated as a memory network, is not assessed on this aspect. While the authors do not overclaim, this limits the impact.
Relatedly, the relearning experiment (Figure 5G) shows catastrophic forgetting. This is acknowledged in the discussion, but the suggested solutions (alternating patterns, plastic readout) are speculative without supporting simulations. This limits the applicability of the model as a memory model or, more broadly, as a model of a brain region/function.
Additionally, in the sequence learning experiments with structured input, the ability to learn seems tied to the very specific timescale of pattern presentation (~10 ms per pattern, comparable to the STDP kernel time constants), arguably faster than the timescale of external stimuli. The stability of sequences may also owe more to the normalization scheme than to STDP per se.
(2) Novelty claims and positioning within the literature:
On page 16, the authors write: "Our results demonstrate that spiking sequences can be generated in randomly connected networks trained by synaptic plasticity even under unstructured inputs, which supports STDP being the main actor, while stabilizing mechanisms such as weight normalization and intrinsic plasticity play a complementary role." (c1).
Several aspects of this work are less novel than the presentation suggests:
(a) The fact that STDP can create sequence-like dynamics/asymmetric connectivity matrices in recurrent networks has been studied theoretically [1,2] and in simulations [3,4,5]. While [3] is cited, the manuscript underplays the similarity. [4] (uncited) considers e+iSTDP with a different homeostatic term to represent sequential stimuli in large recurrent spiking networks. [5] (uncited) also considers a recurrent spiking network with several STDP-like rules and shows that many combinations can store and recall sequential inputs.
(b) Lognormal weight distributions emerging from STDP-based plasticity and the autonomous emergence of connectivity structures have extensive literature. While many of these articles are already cited in the manuscript, I fail to see what this work brings to this matter compared to existing work (particularly [6]).
(c) Several published works challenge the manuscript's implicit claim (c1) that sequences require their particular combination of rules. Many other plasticity mechanisms can create sequences [3,4,5,7,8,9]. Some interpretations may also need to be dialed down: [10] (uncited) showed that sequences can be stored and retrieved using EI and IE plasticity alone. iSTDP may be doing more computational work than acknowledged, which complicates the interpretation of which mechanisms are truly driving the phenomena.
Overall, most of the relevant work is already cited in the manuscript, but not necessarily acknowledged adequately.
(3) Justification of plasticity model/robustness analysis:
The parameters in Tables 1 and 2 are quite specific without strong justification (for instance, different sparsity values for each connection type and specific normalization factors). Without parameter sweeps, it is difficult to know whether the key findings are robust or overfit to this particular network configuration. Given the number of parameters, exhaustive sweeps are out of the question, and the argument made previously would still prevent the proposed rule combination from being considered more than one possible mechanism for sequence generation among many others. However, this deserves to be acknowledged, and potentially a few sweeps should be run (e.g., over the LTP/LTD ratio, the normalization threshold, and the network size). I do not think that Figure S12, which shows that removing any component of the model causes it to break down in some way, is enough to cover alternative plasticity rules.
A related concern is that the network is small by current standards (1,200E + 240I neurons), especially with sparse connectivity (6-20%). Small networks with few connections are susceptible to synchronization (other studies typically consider networks of at least 10k neurons). The authors should discuss whether the phenomena they observe would persist at larger scales and under more biologically realistic connectivity. Specifically, are the intrinsic and normalization plasticity terms as crucial in this case?
(4) Fan-in/out motif evidence is correlational:
The evidence linking the fan-in/out motif to sequence stability appears to be correlational. Properly establishing causality would require targeted ablations or rewiring of fan-in/out connections. While designing a clean causal intervention may be difficult, the correlational nature of the evidence should be stated explicitly.
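One candidate intervention, even if imperfect, would be to lesion exactly the edges that complete fan-out triads and rerun the sequence analysis. A sketch of what I mean (hypothetical helpers, not the authors' code):

```python
import numpy as np

def fan_out_edges(adj):
    """Collect directed edges participating in fan-out triads
    (one source projecting to two or more targets)."""
    edges = set()
    for i in range(adj.shape[0]):
        targets = np.flatnonzero(adj[i])
        if len(targets) >= 2:
            edges.update((i, int(j)) for j in targets)
    return edges

def ablate_edges(adj, edges):
    """Zero out a chosen set of (pre, post) edges so the sequence
    analysis can be rerun on the lesioned network."""
    lesioned = adj.copy()
    for pre, post in edges:
        lesioned[pre, post] = 0
    return lesioned
```

Comparing sequence statistics before and after such a lesion, against a control lesion of an equal number of random edges, would move the fan-in/out claim from correlational toward causal.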
Conclusion:
To summarize, the manuscript would benefit from:
(1) Reframing the contribution:
Present the multi-scale turnover analysis and the discussion of representational drift as the core novelties. I would reposition sequence emergence and lognormal distributions as reproducing known results under a specific plasticity model and analysis method.
(2) Acknowledging that many rule combinations could produce equivalent outcomes, and not suggesting that the combination chosen here is special.
(3) Adding parameter sensitivity analysis or, at a minimum, discussing robustness.
References:
[1] Kempter, Gerstner and van Hemmen, Hebbian learning and spiking neurons, 1999, PRE
[2] Ocker, Litwin-Kumar and Doiron, Self-organization of microcircuits in networks of spiking neurons with plastic synapses, 2015, plos CB
(Theoretical account of STDP in spiking networks and motifs, though it only looks at 2-synapse motifs (not fan-in/fan-out)).
[3] Fiete et al., Spike-Time-Dependent Plasticity and Heterosynaptic Competition Organize Networks to Produce Long Scale-Free Sequences of Neural Activity, 2010, Neuron
[4] Duarte and Morrison, Dynamic stability of sequential stimulus representations in adapting neuronal networks, 2014, Frontiers in Comp Neuro
[5] Confavreux et al., Memory by a thousand rules: Automated discovery of functional multi-type plasticity rules reveals variety and degeneracy at the heart of learning, 2025, bioRxiv
[6] Zheng, Dimitrakakis and Triesch, Network Self-Organization Explains the Statistics and Dynamics of Synaptic Connection Strengths in Cortex, 2013, plos CB
[7] Zheng and Triesch, Robust development of synfire chains from multiple plasticity mechanisms, 2014, Front Comp Neuro
[8] Ravid Tannenbaum and Burak, Shaping Neural Circuits by High Order Synaptic Interactions, 2016, plos CB
[9] Bell, Duffy, and Fairhall, Discovering plasticity rules that organize and maintain neural circuits, 2024, NeurIPS
[10] Gong and Brunel, Inhibitory Plasticity Enhances Sequence Storage Capacity and Retrieval Robustness, 2024, bioRxiv