Single spikes drive sequential propagation and routing of activity in a cortical network

  1. Juan Luis Riquelme
  2. Mike Hemberger
  3. Gilles Laurent
  4. Julijana Gjorgjieva (corresponding author)
  1. Max Planck Institute for Brain Research, Germany
  2. School of Life Sciences, Technical University of Munich, Germany

Abstract

Single spikes can trigger repeatable firing sequences in cortical networks. The mechanisms that support reliable propagation of activity from such small events and their functional consequences remain unclear. By constraining a recurrent network model with experimental statistics from turtle cortex, we generate reliable and temporally precise sequences from single spike triggers. We find that rare strong connections support sequence propagation, while dense weak connections modulate propagation reliability. We identify sections of sequences corresponding to divergent branches of strongly connected neurons which can be selectively gated. Applying external inputs to specific neurons in the sparse backbone of strong connections can effectively control propagation and route activity within the network. Finally, we demonstrate that concurrent sequences interact reliably, generating a highly combinatorial space of sequence activations. Our results reveal the impact of individual spikes in cortical circuits, detailing how repeatable sequences of activity can be triggered, sustained, and controlled during cortical computations.

Editor's evaluation

This is an important study of the role of spike timing in the turtle cortex. The authors provide compelling evidence that single spikes evoke motifs via strong connections, and that those motifs can be reliably routed by weaker connections. The work is careful and clear and makes intuitive predictions about how motifs are generated. It will be especially interesting to determine to what extent the results apply to the mammalian cortex.

https://doi.org/10.7554/eLife.79928.sa0

eLife digest

Neurons in the brain form thousands of connections, or synapses, with one another, allowing signals to pass from one cell to the next. For a neuron to fire its own signal, an ‘action potential’, the inputs it receives must be strong enough to push it past a threshold. However, the accepted view of signal transmission assumes that the great majority of synapses are too weak to activate neurons on their own, meaning that simultaneous inputs from many neurons are often required to trigger a single neuron’s activation. Such coordination is likely unreliable, as neurons can react differently to the same stimulus depending on the circumstances. An alternative way of transmitting signals has been reported in turtle brains, where impulses from a single neuron can trigger activity across a network of connections. Furthermore, these responses are reliably repeatable, activating the same neurons in the same order.

Riquelme et al. set out to understand the mechanism that underlies this type of neuron activation using a mathematical model based on data from the turtle brain. These data showed that the neural network in the turtle’s brain had many weak synapses but also a few, rare, strong synapses. Simulating this neural network showed that those rare, strong synapses promote the signal’s reliability by providing a consistent route for the signal to travel through the network. The numerous weak synapses, on the other hand, have a regulatory role in providing flexibility to how the activation spreads. This combination of strong and weak connections produces a system that can reliably promote or stop the signal flow depending on the context.

Riquelme et al.’s work describes a potential mechanism for how signals might travel reliably through neural networks in the brain, based on data from turtles. Experimental work will need to address whether strong connections play a similar role in other animal species, including humans. In the future, these results may be used as the basis to design new systems for artificial intelligence, building on the success of neural networks.

Introduction

Experimental and modeling studies have proposed that cortical circuits rely on firing rates to convey information reliably in the presence of irregular activity and intrinsic sources of noise (London et al., 2010; Renart et al., 2010; Shadlen and Newsome, 1994). This suggests that individual spikes are effectively superfluous for computation. On the other hand, even a single spike can noticeably increase network firing rates in rat barrel cortex (London et al., 2010) and trigger reliable sequences of spikes in human and turtle cortex (Hemberger et al., 2019; Molnár et al., 2008). Indeed, repeatable and temporally precise patterns of action potentials in neural circuits have long suggested that the precise timing of spikes may play an important role (Bair and Koch, 1996; Bolding and Franks, 2017; Fellous et al., 2004; Hahnloser et al., 2002; Kumar et al., 2010; Wehr and Laurent, 1996; Wehr and Zador, 2003). How relevant, therefore, are single spikes for cortical function? Because spikes are the main form of neuronal communication, understanding how networks respond to single spikes is crucial to define the building blocks of cortical computation (Brette, 2015).

The influence of single neurons has been documented in rat and mouse cortex, where single-cell stimulation has meso- and macroscopic effects on network activity, brain state, and even behavior (Brecht et al., 2004; Doron et al., 2014; Houweling and Brecht, 2008; Kwan and Dan, 2012; Wolfe et al., 2010). Similarly, recent experiments in turtle cortex have shown that one to three spikes of a single neuron can trigger activity in the surrounding network (Hemberger et al., 2019). Containing only three layers, the turtle visual cortex is architectonically similar to mammalian olfactory cortex or hippocampus and evolutionarily linked to the six-layered mammalian neocortex (Fournier et al., 2015; Tosches et al., 2018). Moreover, it lends itself to long ex vivo experiments in which local connectivity is intact. Recent experiments found that the activity triggered by electrically evoked single spikes in the turtle cortex is reliable in three ways: responses are repeatable across trials, they involve the same neurons, and their activations respect the same temporal order (Hemberger et al., 2019). Repeatable sequences of spikes have been reported across various systems in vivo, such as in replay or pre-play in rat hippocampus (Buzsáki and Tingley, 2018; Diba and Buzsáki, 2007; Dragoi and Tonegawa, 2011), rat auditory and somatosensory cortex (Luczak et al., 2015; Luczak and Maclean, 2012), mouse visual and auditory cortex (Carrillo-Reid et al., 2015; Dechery and MacLean, 2017), and human middle temporal lobe (Vaz et al., 2020). Although often linked to behavioral or sensory cues, the network mechanisms that underlie such sequences are unknown. Even in the ex vivo turtle cortex, the electrical distance between the multielectrode array (MEA) and the pyramidal cell layer has limited the observation of sequence propagation within the excitatory neuron population (Fournier et al., 2018; Shein-Idelson et al., 2017).
How sequences propagate, even under irregular and seemingly noisy activity, remains a puzzle, yet it may be key to understanding the computational role of sequences and cortical computations more generally (Buzsáki and Tingley, 2018).

Several candidate theoretical frameworks based on structured connectivity could explain the reliable propagation of activity during sequences. One example is synfire chains, where divergent–convergent connectivity might connect groups of neurons that fire synchronously (Abeles, 1991; Diesmann et al., 1999; Kumar et al., 2010), or polychronous chains, where transmission delays require neurons to fire in precise time-locked patterns (Izhikevich, 2006). However, experimental evidence for these specialized connectivity structures is limited (Egger et al., 2020; Ikegaya et al., 2004; Long et al., 2010). Alternatively, structured connectivity leading to sequences has been proposed to arise via the training of connections (Fiete et al., 2010; Maes et al., 2020; Rajan et al., 2016). Finally, models based on the turtle cortex, in particular, have investigated network-wide propagation of waves or the statistical properties of population activity in the form of neuronal avalanches (Nenadic et al., 2003; Shew et al., 2015). Overall, these models of cortical propagation focus primarily on coordinated population activity and have not investigated how single-neuron activation can trigger reliable sequences.

To mechanistically investigate sequence generation from single spikes, we built and explored a model network constrained by previously reported experimental measurements of the turtle cortex (Hemberger et al., 2019). Our model readily generates reliable sequences in response to single-neuron activations without any fine-tuning. Analyzing the properties of sequences as a function of model parameters, we found that few strong connections support sequence propagation while many weak connections provide sequence flexibility. We examined how sparse contextual input modulates sequences, modifying the paths of propagation of network activity. Our results suggest that single spikes can reliably trigger and modify sequences and thus provide reliable yet flexible routing of information within cortical networks.

Results

Model of the turtle visual cortex with experimentally defined constraints

To explore the network mechanisms that might lead to reliable activity propagation from single spikes, we built a recurrent network model with single-cell properties and connectivity constrained to previously obtained experimental measurements from the turtle visual cortex (Hemberger et al., 2019). The network model is composed of 100,000 neurons (93% excitatory and 7% inhibitory), equivalent, by neuronal density, to a 2 × 2 mm slab of turtle visual cortex (Figure 1A).

A biologically inspired model of turtle visual cortex.

(A) Schematic of the network with the size, proportion of E and I populations, and connection probabilities within a disc of 200 μm radius (see C). Mean number of outgoing connections per cell: E-to-E: 750, I-to-I: 110, E-to-I: 190, I-to-E: 2690. Blue triangles: excitatory neurons. Red circles: inhibitory neurons. (B) Top left: example of fit (black) of membrane potential responses to current injection in a recorded neuron (blue). Bottom left and right: distributions of fitted membrane capacitance (Cm) and leak conductance (gL) (n = 3886 traces). Top right: distribution of adaptive indices for 145 recorded neurons. Arrowheads indicate parameters of model neurons (median). (C) Gaussian profiles of distance-dependent connection probabilities for different connection types (top: exc, bottom: inh). The profiles were fitted from the fraction of pairs of patched neurons that showed a connection from all tested pairs. Vertical bars below (data) indicate connected (colored) or disconnected (gray) pairs of neurons for different connection types. (D) Lognormal fit to peak excitatory synaptic conductances. Top: cumulative distribution function (cdf). Bottom: probability density function (pdf). Conductances were estimated from excitatory postsynaptic potential (EPSP) amplitudes experimentally obtained from recorded pairs of connected neurons (gray dots). Inset: example of modeled EPSPs for different synaptic weights (top arrowheads) in an isolated postsynaptic neuron at resting potential. (E) Probability mass function (pmf) of number of strong (top 0.3%) or weak (bottom 90%) excitatory-to-excitatory connections per model neuron. (F) Bootstrap estimate of the probability of an excitatory neuron having at least one strong excitatory-to-excitatory connection. Colored area: 95% confidence interval. Arrowhead: fit with the original dataset in (D). (G) Experimentally measured mean and standard deviation of the membrane potential of 168 patched neurons in ex vivo preparations. 
Arrowheads indicate medians. (H) Membrane potential of two model neurons (blue: exc; red: inh) under different white noise current parameters. Note the presence of action potentials and EPSPs under a high mean current (μin). (I) Distributions of membrane potential mean and standard deviation for model neurons (blue: exc; red: inh) under the white noise current parameters in (H).

We modeled neurons as adaptive exponential integrate-and-fire units (Brette and Gerstner, 2005), based on experimental evidence that excitatory (pyramidal) neurons show adaptive spiking (Hemberger et al., 2019). We used previously found adaptation current parameters to capture the median of the experimental distribution of the adaptation index (0.3) in the turtle cortex. We fitted membrane capacitance and leak conductance parameters to match membrane potential responses from previous current-clamp recordings and used the median values for our model (Figure 1B).
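The adaptive exponential integrate-and-fire dynamics can be sketched with forward-Euler integration. All parameter values below are illustrative placeholders, not the fitted medians from the recordings:

```python
import numpy as np

def simulate_adex(I, dt=1e-4, C=200e-12, gL=10e-9, EL=-70e-3,
                  VT=-50e-3, DeltaT=2e-3, a=2e-9, b=60e-12,
                  tau_w=200e-3, Vr=-60e-3, Vcut=-30e-3):
    """Forward-Euler integration of an adaptive exponential
    integrate-and-fire neuron. Parameter values are illustrative
    placeholders, not the paper's fitted medians."""
    V, w = EL, 0.0
    spikes = []
    for i, Ii in enumerate(I):
        dV = (-gL * (V - EL)
              + gL * DeltaT * np.exp((V - VT) / DeltaT)  # spike-initiation term
              - w + Ii) / C
        dw = (a * (V - EL) - w) / tau_w                  # adaptation current
        V += dt * dV
        w += dt * dw
        if V >= Vcut:          # spike: reset voltage, increment adaptation
            spikes.append(i * dt)
            V = Vr
            w += b
    return spikes
```

The adaptation variable `w` grows with each spike and decays with time constant `tau_w`, which reproduces the adaptive (decelerating) spiking of the pyramidal neurons.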

As suggested by axonal arbor reconstructions of turtle cortex neurons (Hemberger et al., 2019), we connected model neurons with probabilities that decay spatially. We fitted a Gaussian profile for the spatial decay to connection probability estimates that had been obtained from paired whole-cell patch-clamp recordings at multiple distances (Hemberger et al., 2019; Figure 1C). Because our estimates of connection probabilities (Figure 1A) are derived from ‘paired-patch’ recordings of nearby neurons, we scaled the decay profile to match population-specific probabilities in any given disc of 200 μm radius.
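Distance-dependent Bernoulli connectivity with a Gaussian profile can be sketched as follows; `sigma` and `p_peak` are hypothetical values, whereas the paper scales the profile so that the mean probability inside a 200 μm disc matches the paired-patch estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

def connect_gaussian(pos_pre, pos_post, sigma, p_peak):
    """Bernoulli connectivity with a Gaussian distance-dependent
    profile p(d) = p_peak * exp(-d^2 / (2 sigma^2)). Positions are
    (n, 2) arrays in micrometers. Dense O(n^2) sketch for small
    populations; sigma and p_peak are illustrative."""
    d = np.linalg.norm(pos_pre[:, None, :] - pos_post[None, :, :], axis=-1)
    p = p_peak * np.exp(-d**2 / (2 * sigma**2))
    return rng.random(p.shape) < p          # adjacency matrix (pre x post)
```

At full network scale (100,000 neurons), the same profile would be sampled sparsely rather than through a dense pairwise matrix.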

Direct and indirect evidence from paired-patch experiments in turtle cortex indicates the presence of rare but powerful excitatory synapses, resulting in a long-tailed distribution of excitatory postsynaptic potential (EPSP) amplitudes (Hemberger et al., 2019). We thus generated excitatory synaptic conductances in our model from a lognormal distribution fitted to the experimental data (Figure 1D, see Methods, Neuron and synapse models). We truncated the distribution at the conductance value corresponding to the largest recorded EPSP amplitude in our paired-patch experiments (21 mV). Based on these experimental constraints, each model excitatory neuron connects to other neurons with a majority of weak synapses and about two connections sufficiently strong to bring the postsynaptic neuron from rest to near firing threshold (Figure 1D, inset, E). The actual efficacy of these rare but powerful connections depends on the conductance state of their postsynaptic neurons and, thus, on ongoing network activity and the state of adaptation currents. To verify that our lognormal fit is not strongly biased by the experimental dataset, we bootstrapped our estimate for the probability of a neuron having at least one strong connection under our connectivity assumptions (Figure 1F). We found a heavily skewed distribution, with 39% of bootstrapped fits showing a probability even higher than our model.
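Sampling conductances from a lognormal distribution truncated at the largest observed amplitude can be sketched with rejection-resampling; the `mu`, `sigma`, and `g_max` values here are illustrative, not the fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_conductances(n, mu=-1.0, sigma=1.5, g_max=30.0):
    """Draw excitatory peak conductances (nS) from a lognormal
    distribution truncated at a maximum value (the paper truncates at
    the conductance of the largest recorded EPSP). Parameters are
    illustrative, not the fitted values."""
    g = rng.lognormal(mean=mu, sigma=sigma, size=n)
    while np.any(g > g_max):          # rejection-resample the excess tail
        mask = g > g_max
        g[mask] = rng.lognormal(mu, sigma, mask.sum())
    return g
```

The long tail of this distribution is what produces the rare connections strong enough to bring a postsynaptic neuron from rest to near threshold.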

We generated inhibitory connections from a scaled-up copy of the distribution of excitatory conductances, resulting in a skewed distribution, as observed in other systems (Iascone et al., 2020; Kajiwara et al., 2020; Rubinski and Ziv, 2015). We assumed that strengths for excitatory and inhibitory connections are independent of distance. For each connection, we added a random delay between 0.5 and 2 ms to account for the short and reliable latency between presynaptic action potential and monosynaptic EPSPs in our paired-patch experiments. Besides the spatial connectivity profiles and the lognormal distribution of synaptic strengths for excitatory and inhibitory connections, we otherwise assumed random connectivity.

Neurons recorded in ex vivo conditions generate spontaneous action potentials once every few seconds and exhibit a noisy resting membrane potential (Hemberger et al., 2019; Figure 1G). We modeled the source of this spontaneous activity as a white noise current sampled independently but with the same statistics for every simulated neuron. In every simulation, we sampled current parameters uniformly between 50 and 110 pA mean (μin) and 0–110 pA standard deviation (σin). These currents produce various levels of membrane potential fluctuations and spontaneous firing, similar to those observed experimentally (Figure 1H). We observed that current mean has a more substantial effect than current variance on the membrane potential variance of model neurons (Figure 1I). This results from self-sustained firing in the network at sufficiently depolarizing currents (note spikes in Figure 1H, bottom left).
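The per-simulation noise drive can be sketched as below. The per-step scaling of a discretized white-noise current depends on the integration convention; here it is shown simply as step-wise Gaussian noise with the quoted standard deviation, which is a modeling choice on our part:

```python
import numpy as np

rng = np.random.default_rng(2)

def noise_current(n_steps):
    """Spontaneous-activity drive: mean and std are drawn once per
    simulation from the ranges given in the text, then a discrete
    noise trace is generated (one independent trace per neuron).
    Step-wise Gaussian scaling is an assumed convention."""
    mu = rng.uniform(50e-12, 110e-12)      # 50-110 pA mean
    sigma = rng.uniform(0.0, 110e-12)      # 0-110 pA standard deviation
    return mu + sigma * rng.standard_normal(n_steps)
```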

In summary, we built a model where we constrained single-neuron properties and connection strength distributions by biological data but otherwise assumed random connectivity. The model, which reproduces experimental cortical data, displays highly heterogeneous connectivity and provides a testbed to investigate the mechanistic underpinnings of sequence generation and propagation observed in the biological networks.

Single spikes trigger reliable sequential activity in a biologically constrained network model

Next, we examined whether our biologically constrained model produces reliable sequential neuron activations triggered by a single spike in a randomly chosen pyramidal neuron. We first generated 300 random networks and randomly selected an excitatory neuron in each one (the ‘trigger neuron’). For each network, we generated 20 simulations under different levels of spontaneous activity, yielding a total of 6000 simulations (Figure 2—figure supplement 1A). In each simulation, we caused the trigger neuron to fire 100 action potentials at 400-ms intervals (each action potential defining a trial) and measured, in all the other neurons, the resulting firing rate modulation (∆FR, Figure 2A). To identify reliably activated model neurons, we performed a statistical test on the distribution of ∆FR, assuming a Poisson process of constant firing rate as the null distribution (Figure 2—figure supplement 1E). We call ‘followers’ those model neurons displaying statistically high ∆FR (p = 10−7, Figure 2B, see Methods, Definition of followers). We found followers in 94.6% of our 6000 simulations.

Figure 2 with 1 supplement.
Repeatable sequential activation of follower neurons in the model network.

(A) Schematic of the stimulation protocol in the model. One neuron (top) was driven to produce one action potential, and all other neurons in the network were tested for resulting changes in their firing rate. (B) Distribution of single-cell firing rate modulation (∆FR) for one representative simulation. Top: cumulative mass function of the null distribution. Dashed: threshold for follower classification (p = 10−7). Bottom: ∆FR for followers (yellow) and other neurons in the network (blue: exc, red: inh). ∆FR is normalized to the modulation of a perfect follower (exactly one spike per trial). (C) Left: sequence of spikes from followers in two consecutive trials in an example simulation. Y-axis: followers sorted by median spike time across all trials. Same sorting in both trials. Spikes from non-followers are not shown. Right: same for a different simulation with a higher mean firing rate (left: 0.017 spk/s, right: 0.051 spk/s). See Figure 2—figure supplement 1C for full rasters. (D) Normalized spike rank entropy of sequences with at least 10 followers (black: mean; gray: std), compared to shuffled sequences (red). (E) Left: spatial evolution of the center-of-mass of follower activations during the first 100 ms of the first sequence in (C). Trigger neuron in purple outline. Exc followers in blue outlines. Right: spatial location of excitatory followers pooled from all simulations, colored by local density (n = 117,427). All simulations are aligned so that the trigger neuron is in the center. Stippled square: size of the MEA used in experiments. (F) Number of followers detected for each simulation as a function of the mean level of activity in the network. Blue: exc; red: inh; thick line: moving average; gray dots: sequences in (C). (G) Probability of generating a minimum number of followers for excitatory and inhibitory populations in high- and low-activity simulations. Dots: experimental ex vivo estimates. 
(H) Statistics of excitatory- or inhibitory-follower activations by activity level. Left: distance from trigger neuron to farthest detected follower in each simulation (side: half-width of the model network; MEA: half-width of MEA used in experiments). Middle: delay to median spike time of the last activated follower in each simulation (trial: maximum detectable duration under protocol in A). Right: standard deviation of follower spike times, averaged over all followers in each simulation (rand: expected standard deviation of randomly distributed spike times). Boxes: median and [25th, 75th] percentiles.

When ordering the followers in the model network by activation delay from the trigger neuron spike, we observed reliable spike sequences as seen in experiments (Figure 2C, Figure 2—figure supplement 1B). We used an entropy-based measure to quantify the variability of follower identity per rank in a sequence, and observed similar results as in the experiments, with follower ordering being most predictable in the first four ranks of the sequence (see Methods, Entropy of follower rank, Figure 2D). As observed in experiments, we found that sequences in our model evolve over space, spreading away from the trigger neuron (Figure 2E, left). Furthermore, around 18% of all followers in the model fall outside the frame used in the experimental recordings (defined by a 1.3 × 1.3 mm MEA, stippled line, Figure 2E, right), suggesting a possible underestimation of the number of followers and sequence length in the original experiments.
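The rank-entropy measure can be sketched as below; normalizing by the maximum possible entropy, log2 of the number of followers, is our assumption for the ‘normalized’ in the text:

```python
import numpy as np

def rank_entropy(sequences, n_followers):
    """Normalized entropy of follower identity at each spike rank,
    pooled over trials. sequences: list of per-trial follower-ID lists
    in firing order. 0 = the same neuron always occupies that rank;
    1 = maximally variable identity. Normalization convention assumed."""
    max_len = max(len(s) for s in sequences)
    H = []
    for r in range(max_len):
        ids = [s[r] for s in sequences if len(s) > r]
        _, counts = np.unique(ids, return_counts=True)
        p = counts / counts.sum()
        h = -(p * np.log2(p)).sum()
        H.append(h / np.log2(n_followers))   # normalize by max entropy
    return np.array(H)
```

With this measure, low values at the first few ranks correspond to the highly predictable sequence onsets reported above.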

While the electrical distance between the MEA and the pyramidal cell layer limited the experimental observation of excitatory followers in the turtle cortex, our model predicts that their number far exceeds that of inhibitory followers (Figure 2F). We tested the effect of spontaneous activity in the network on the number of followers. Our model generates almost no followers when spontaneous firing rates are close to zero. The number of model followers increases rapidly with increasing spontaneous firing rates, peaking at low firing rates (0.007 spk/s mean firing rate for the top 1% of simulations by follower count), and slowly decreases as spontaneous firing rates increase further. We classified our simulations into low- and high-activity groups using spontaneous mean firing rate ranges observed in experiments (Hemberger et al., 2019): [0, 0.05] spk/s ex vivo, 41% of simulations; [0.02, 0.09] spk/s in vivo, 33% of simulations ([25th, 75th] percentiles).

We found sequences in both groups but with notable differences in the behavior of excitatory and inhibitory model followers. The probability of a low-activity simulation displaying at least 1 or 10 inhibitory followers (74% and 19%, respectively) shows an excellent agreement with experimental data (77% and 20%, respectively, Figure 2G). Interestingly, we observed that the probability of producing inhibitory followers drops in high-activity simulations (4% prob. of ≥10 inhibitory followers). Excitatory neurons show the opposite trend: the probability of having at least 10 excitatory followers is higher in high-activity (69%) than in low-activity simulations (50%).

We saw that the sequences of activations of excitatory followers often last more than 150 ms and reach the spatial boundaries of our model circuit (2 × 2 mm, 100k neurons) in both low- and high-activity simulations (Figure 2H, left). Inhibitory followers in the model network, however, often fire early in the sequence and are spatially closer to the trigger neuron than excitatory ones (Figure 2H, middle and right). These differences are more pronounced in high- than low-activity simulations (Figure 2H, gray and colored). Importantly, inhibitory followers display near-random levels of temporal jitter in high-activity regimes (Figure 2H, foll. jitter, standard deviation of spike delays). Consequently, we predict that followers should be detectable in experiments in vivo in the turtle cortex but mainly within the pyramidal layer.

In summary, our biologically constrained model produces repeatable firing sequences across groups of neurons in response to single spikes in randomly chosen excitatory neurons with properties very similar to those observed in ex vivo experiments (low firing rates) that constrained the model network. Our simulations further provide an experimentally testable prediction: that sequences may occur under in vivo levels of spontaneous firing rate in the turtle cortex. In these conditions of higher spontaneous activity, the activation sequences are mainly composed of excitatory followers, whereas inhibitory followers produce less reliable and temporally jittered responses.

Strong connections provide reliability of activity propagation while weak connections modulate it

To better understand the mechanisms behind follower activation, we examined how spikes propagate through our model networks. We found that single strong sparse connections drive reliable excitatory responses, while convergent weak connections control the level of spontaneous network activity and drive inhibition.

In a random subset of our 6000 simulations (n = 900), we searched for instances when a neuron's spike successfully traversed a connection, that is, when the postsynaptic neuron fired within 100 ms of the presynaptic one. We call this successful activation of a connection a ‘spike transfer’ from the pre- to the postsynaptic neuron. We thus combined spikes with recurrent connectivity to produce a directed acyclic graph of all spike transfers for each simulation (Figure 3A). Note that we studied spike transfers among all connected neurons in the network, allowing us to characterize the connections that are most frequently traversed. Most excitatory-to-excitatory spike transfers in low-activity simulations show a delay from pre- to postsynaptic spike of 6–8 ms (Figure 3B), matching delays measured in turtle cortex (Hemberger et al., 2019). Interestingly, excitatory-to-inhibitory spike transfers display consistently shorter delays than their excitatory-to-excitatory counterparts, even at higher firing rates (Figure 3B, inset), possibly reflecting the more depolarized state of inhibitory neurons (Figure 1I).
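Extracting spike transfers from spike trains plus the connectivity can be sketched as follows; pairing each presynaptic spike with the first postsynaptic spike inside the window is our simplifying assumption:

```python
import numpy as np

def spike_transfer_graph(spikes, edges, max_delay=0.1):
    """Build the edges of a spike-transfer graph: for every synaptic
    connection (pre -> post), pair each spike of pre with the first
    spike of post that follows within max_delay seconds.
    spikes: dict neuron -> sorted spike-time array (seconds);
    edges: iterable of (pre, post) connections.
    Returns (pre, t_pre, post, t_post, delay) tuples."""
    transfers = []
    for pre, post in edges:
        t_post = spikes.get(post, np.empty(0))
        for t in spikes.get(pre, ()):
            # index of first post spike strictly after t
            k = np.searchsorted(t_post, t, side="right")
            if k < len(t_post) and t_post[k] - t <= max_delay:
                transfers.append((pre, t, post, t_post[k], t_post[k] - t))
    return transfers
```

Aggregating these tuples per simulation yields the directed graph whose motif statistics are analyzed next.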

Figure 3 with 1 supplement.
Connectivity underlying activity propagation in the network.

(A) Schematic. Left: the recurrent network structure and spiking activity are combined to produce a directed graph of spike transfers. Right: motifs of spike transfers detected in this graph (conv: convergence). (B) Distribution of delays of excitatory-to-inhibitory (red) and excitatory-to-excitatory (blue) spike transfers. Average of all low-activity simulations ([0, 0.05] spk/s). Arrowheads: mode of each distribution. Inset: most common delay as a function of mean firing rate. (C) Number of motifs leading to an excitatory (blue) or inhibitory (red) neuron spike. Motifs defined in (A). Each dot represents the mean for all spikes in each simulation. Low-activity simulations. Boxes: median and [25th, 75th] percentiles. (D) Same as (C) for high-activity simulations. (E) Histograms of strengths of synapses leading to excitatory (blue) or inhibitory (red) neuron spikes. Insets: zoomed-in tails. Each line averages 25–100 simulations grouped in equal bins of mean firing rate (color bar). (F) Top: schematic of alternative network models and their truncated distribution of synaptic strengths. Bottom: number of detected excitatory followers per simulation (n = 2000 each). Boxes: median and [25th, 75th] percentiles. (G) Input–output curves for the full (purple) and strong-only (orange) models. (H) Difference between the number of followers detected in full and strong-only models under the same input. Brackets indicate regimes where the presence of weak connections increases (full >str) or decreases (full <str) the follower count. 2000 simulations.

To understand the connectivity structure underlying these spike transfers, we examined connectivity motifs in the graph (Figure 3C, D). The possible number of motifs increases with the number of neurons involved. Due to this combinatorial nature, we extracted only low-order motifs with depths of up to two spike transfers and involving ≤4 spikes. In low-activity simulations, we found that excitatory spikes are rarely triggered by convergence or fan motifs (Figure 3C, top, conv.), but they are rather the result of one-to-one spike transfers (Figure 3C, top, single). By contrast, we saw that spikes from inhibitory neurons result from more spike transfers, with a prevalence of convergence motifs (Figure 3C, bottom, conv.). Although spike transfers by convergence are more common in high-activity simulations for both excitatory and inhibitory populations (Figure 3D), the increase is greater for motifs leading to inhibitory spikes. Indeed, inhibitory spikes in high-activity simulations rarely involve single spike transfers (Figure 3D, bottom, single). This analysis reveals that different excitatory and inhibitory connectivity motifs underlie activity propagation in the networks.
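The lowest-order distinction, single versus convergent spike transfers onto a given spike, can be sketched by counting incoming transfers per postsynaptic spike. Here we assume spike transfers given as (pre, t_pre, post, t_post, delay) tuples; higher-order motifs (fans, depth-two chains) would require walking the graph further:

```python
from collections import defaultdict

def classify_spikes(transfers):
    """Classify each postsynaptic spike, keyed by (neuron, spike time),
    by the number of spike transfers converging onto it: one incoming
    transfer = 'single', several = 'convergence'. A low-order sketch
    of the motif extraction."""
    incoming = defaultdict(int)
    for pre, t_pre, post, t_post, delay in transfers:
        incoming[(post, t_post)] += 1
    return {key: ("single" if n == 1 else "convergence")
            for key, n in incoming.items()}
```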

Finally, we extracted the synaptic strengths involved in these spike transfers (Figure 3E). We found that the strength distribution of the connections underlying spike transfers matches the shape of the full connectivity, with a peak of very weak connections followed by a long tail (as in Figure 1D). Interestingly, the distribution for excitatory-to-excitatory spike transfers displays a secondary peak indicating an over-representation of strong connections (Figure 3E, top inset). By contrast, the absence of this secondary peak and the much higher primary peak (~1.5 million spike transfers) for excitatory-to-inhibitory spike transfers suggest that inhibitory spikes are primarily the result of weak convergence (Figure 3E, bottom). The difference between the two populations is again greater at higher levels of activity.

This motif analysis allowed us to make several key predictions for how excitatory and inhibitory neurons participate in different types of motifs and with different synaptic strengths to propagate sequences. First, inhibitory neurons are usually activated by the convergent activation of multiple weak connections. The low fraction of inhibitory neurons and the high E-to-I connection probability (Figure 1A) makes inhibitory neurons strongly susceptible to spontaneous network activity. Consequently, we suggest that inhibitory neurons are less selective to their excitatory drivers, limiting their capacity to become followers. Second, we found that excitatory neurons are less susceptible to ongoing network activity than their inhibitory counterparts. Hence, single strong excitatory inputs are the dominant drive behind excitatory spikes and, thus, the likely conduit of sequence propagation.

To confirm the role of strong connections in sequence propagation, we built variations of our model in which we removed excitatory-to-excitatory connections according to their strengths (Figure 3F, top). We did not alter connectivity involving inhibitory neurons. We defined weak and strong connections as, respectively, the bottom 90% or top 0.3% of all connections, representing the two modes of the distribution of synaptic strengths of excitatory-to-excitatory spike transfers (Figure 3E, top). We found that networks with only strong connections result in very sparse excitatory connectivity, where 61.4% of excitatory neurons receive ≤2 excitatory connections (Figure 1E). We then re-ran a random subset of all simulations using the modified models (n = 2000). We observed that networks with only weak connections require a much stronger drive than their intact counterparts to produce any activity (Figure 3—figure supplement 1A) and produce fewer excitatory followers (0% prob. of ≥10 followers, Figure 3—figure supplement 1C), in contrast with those with only strong connections (75%) (Figure 3F, bottom). Hence, we conclude that sparse connections strong enough to trigger a postsynaptic spike are necessary and sufficient to produce repeatable sequences in our model.
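Pruning the excitatory-to-excitatory weights to the bottom 90% (weak-only) or top 0.3% (strong-only) can be sketched with quantile thresholds; the function name and zeroing-in-place representation are ours:

```python
import numpy as np

def prune_by_strength(weights, keep="strong", weak_q=0.90, strong_q=0.997):
    """Zero out excitatory-to-excitatory weights outside the chosen band:
    keep='weak' retains the bottom 90% of connections, keep='strong' the
    top 0.3% (the two modes discussed in the text).
    weights: 1D array of E-to-E synaptic strengths."""
    lo = np.quantile(weights, weak_q)     # weak/strong boundary
    hi = np.quantile(weights, strong_q)   # strong-tail boundary
    mask = weights <= lo if keep == "weak" else weights >= hi
    return np.where(mask, weights, 0.0)
```

Applied to a lognormal weight vector, the strong-only variant leaves only a very sparse backbone, consistent with most excitatory neurons receiving two or fewer strong inputs.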

Interestingly, the model containing only strong connections often produces an excess of followers compared to the full model, suggesting that weak connections limit the reliability of those neurons under normal circumstances. Indeed, eliminating weak connections reduces the slope of the network’s input/output (I/O) curve (Figure 3G). Consequently, networks with only strong connections remain at low firing rates for a broader range of inputs, in which more neurons behave as reliable followers (Figure 2G). Eliminating weak connections also causes a shift of the I/O curve to the right, increasing the range of inputs over which output activity is close to zero. This shift defines a narrow range of inputs where weak connections increase rather than decrease the number of followers (Figure 3H).

In summary, our model suggests that rare but strong and common but weak connections play different roles in the propagation of activity: the former promote reliable responses to single spikes, while the latter amplify spontaneous network activity and drive recurrent inhibition, effectively modulating the reliability of those responses.

Sequences are composed of sub-sequences that correspond to sub-networks of strong connections

To better understand the regulation of sequential neuronal activations, we examined when and how sequences fail to propagate. We found that sequences could fail partially, with different sections of the same sequence failing to propagate independently of one another.

We chose a simulation with an intermediate firing rate (0.034 spk/s), and follower count (25) closest to the average for that firing rate. We observed that followers (Figure 4A, rows) can fail to activate over repeated trials with the same trigger (Figure 4A, columns). As a result, the exact spiking composition of a sequence can vary from one trial to the next, consistent with our definition of followers as neurons activated in a significant fraction of, but not necessarily all, trials. Because follower activations depend mainly on rare strong connections (Figure 3F), we reasoned that the activation or failure of any follower could affect the unfolding of a sequence. Consequently, we performed k-modes clustering of followers based on their activation over trials (Figure 4B, left). Our analysis reveals clusters composed of followers that tend to activate together in the same trials (Figure 4C, left). Here, we detected followers and performed clustering exclusively by activity, not connectivity. Nonetheless, when we mapped these follower clusters onto the graph of connections, we found that the connections within each cluster are strong and belong to the tail of the distribution of connection strengths (Figure 4B, right, C, right). We thus call each one of these follower clusters ‘sub-networks’ and their corresponding sections of the spiking sequence ‘sub-sequences’. We observed similar decompositions into sub-sequences across simulations with different trigger neurons and numbers of followers (Figure 4D). These sub-networks persist despite random connectivity without hand-tuning of network structure. Across all simulations, strong connections (in the top 0.3% of the distribution) are more frequent within than between sub-networks, providing a mechanistic substrate for sub-network segregation (Figure 4E). 
Importantly, the few equally strong connections between sub-networks do not always guarantee the reliable propagation of activity between them due to unpredictable network effects resulting from recurrent interactions (Figure 4F).
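The clustering step above operates purely on the binary follower-by-trial activation matrix. A minimal k-modes sketch (the `kmodes_followers` function is a hypothetical illustration; the initialization scheme shown is a deterministic farthest-point choice and may differ from the one used in the analysis) groups followers by the trials in which they activate, using Hamming distance and per-trial majority-vote modes:

```python
import numpy as np

def kmodes_followers(X, k, n_iter=20):
    """Minimal k-modes clustering of a binary follower-by-trial matrix.

    X[i, t] = 1 if follower i spiked in trial t. Followers that tend to
    activate in the same trials end up in the same cluster.
    """
    X = np.asarray(X, dtype=int)
    # deterministic farthest-point initialization: greedily pick rows
    # that are maximally dissimilar under Hamming distance
    modes = [X[0]]
    for _ in range(1, k):
        d = np.array([min((row != m).sum() for m in modes) for row in X])
        modes.append(X[d.argmax()])
    modes = np.array(modes)

    for _ in range(n_iter):
        # assign each follower to the nearest mode (Hamming distance)
        dists = (X[:, None, :] != modes[None, :, :]).sum(axis=2)
        labels = dists.argmin(axis=1)
        # update each mode to the per-trial majority vote of its members
        for j in range(k):
            members = X[labels == j]
            if len(members):
                modes[j] = (members.mean(axis=0) >= 0.5).astype(int)
    return labels
```

Note that the clustering uses only coactivation across trials; the correspondence between the resulting clusters and sub-networks of strong connections is an emergent finding, not an input to the method.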

Clusters of sequential spikes reflect sub-networks of strongly connected followers.

(A) Left: matrix of followers (rows) and trials (columns) of a representative simulation indicating follower activation (dark entries). Colored arrowheads indicate trials in (C). Right: graph of excitatory follower-to-follower connections colored by strength. Trigger neuron in purple. Neurons identified as followers but not directly connected are aligned below. (B) Left: matrix in (A) after sorting followers and trials according to k-modes clustering. Right: same graph as in (A) with only top 5% connections. Colors according to follower clustering. Note that members of orange, green, and brown clusters are connected to one another. (C) The three trials indicated in (A) and (B). Left: firing sequence with spikes colored according to follower cluster. Right: same graph as in (B) with inactive followers in gray. (D) Two trials (columns) of three different simulations to illustrate selective sub-network activation. Top to bottom: different trigger neurons with 28, 68, and 145 followers. Only the top 5% connections are shown. Followers were clustered as in (B). Green and orange: example active sub-networks. Blue: other active sub-networks. Gray: inactive followers. (E) Relationship between connectivity and follower clusters. Left: schematic illustrating connections between (purple) and within (blue) clusters. Right: estimated probability (measured frequency) of strong (top 0.3%) excitatory-to-excitatory connection within or between clusters for 10,472 clusters of at least 5 followers pooled across all 6000 simulations. (F) Estimated probability (measured frequency) of postsynaptic activation conditioned on presynaptic activation in the same trial, for excitatory-to-excitatory connections, pooled across all 6000 simulations. Boxes: median and [25th, 75th] percentiles. (G) Relationship between entropy and baseline firing rate. Left: schematic of four possible propagation scenarios in a simplified scheme where only two sub-networks are considered per simulation. 
Right: entropy over trials for each simulation with at least two sub-networks (n = 5371) as a function of network mean firing rate.

To further examine the potential interdependence of sub-network activation, we selected the two largest sub-networks of every simulation (a and b) and classified each trial as one of four possible outcomes: full failure, a and b together, a alone, or b alone (Figure 4G, top). We defined a sub-network as activated if at least 40% of its followers fire at least once in that trial. We then extracted the entropy of these outcomes as a function of network mean firing rate across all simulations with at least two sub-networks (Figure 4G, bottom). The entropy is about 1 bit at low firing rates, indicating two equally likely outcomes (full failure or partial propagation). As spontaneous activity increases, entropy rises towards its maximum (2 bits), where all four outcomes become equally likely. This trend towards maximal entropy is consistent with recurrent connectivity and spontaneous activity in the model network being random, which, on average, avoids biases of activation of one sub-network over the other. In short, despite dense recurrent connectivity, the state of the network can turn on or off some sections of a sequence without altering other sections.
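The outcome entropy described above can be computed directly from per-trial activation fractions of the two sub-networks, using the stated 40% activation criterion. A sketch (hypothetical function name):

```python
import numpy as np

def subnetwork_outcome_entropy(fractions_a, fractions_b, threshold=0.4):
    """Entropy (bits) over the four joint outcomes of two sub-networks.

    fractions_a, fractions_b: per-trial fractions of each sub-network's
    followers that fired at least once. A sub-network counts as active
    in a trial when its fraction is >= threshold (the 40% criterion).
    """
    a = np.asarray(fractions_a) >= threshold
    b = np.asarray(fractions_b) >= threshold
    # encode the four outcomes: 0 = neither, 1 = b alone, 2 = a alone, 3 = both
    outcomes = a.astype(int) * 2 + b.astype(int)
    p = np.bincount(outcomes, minlength=4) / len(outcomes)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Two equally likely outcomes yield 1 bit, matching the low-rate regime; four equally likely outcomes yield the 2-bit maximum reached as spontaneous activity increases.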

In summary, we found that sequence decomposition based on coactivity reveals sub-networks of strongly connected followers. The few strong connections linking different sub-networks do not always reliably propagate activity. We predict that this apparent failure, in fact, adds flexibility, enabling the sub-networks to act as independent paths of propagation.

Sparse external input can halt or facilitate the propagation of activity

We hypothesized that the connections between sub-networks could be critically important for sequence propagation. Indeed, the excitability and state of a sub-network's entry point should affect that sub-network's activation; hence, we refer to these entry-point neurons as ‘gates’. By injecting a single extra spike into these gates, we could control the activation of sub-sequences and thus the path of propagation of the sequence.

We first identified the gates of every sub-network of every simulation based on activity alone, as the neurons within each sub-network with the shortest median spike delay from the trigger.
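Under this definition, gate identification reduces to a median-delay comparison within each sub-network. A minimal sketch (the `find_gate` helper is a hypothetical illustration; delays are measured in ms from the trigger spike):

```python
import numpy as np

def find_gate(spike_delays):
    """Return the gate of a sub-network: the follower whose median spike
    delay from the trigger is shortest across trials.

    spike_delays: dict mapping follower id -> list of delays (ms), one
    per trial in which that follower spiked.
    """
    medians = {f: np.median(d) for f, d in spike_delays.items() if len(d)}
    return min(medians, key=medians.get)
```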

To evaluate how the excitability of gate neurons might affect propagation in the rest of the sub-network, we varied the amount of excitatory or inhibitory conductances they received. We again ran a random selection of our original simulations (n = 2000), but this time applying an additional spike from an external source to a randomly selected gate (Figure 5A). For the conductance of this external input, we chose the maximum value from either the excitatory or inhibitory synaptic conductance distribution (Figure 1D) while keeping the level of spontaneous activity in the rest of the network unchanged. We defined the level of activation of each sub-network (a) as the fraction of trials (out of 100) in which the sub-network is active and then computed the change of activation relative to control (a0) under the altered conductance state of the gate (∆a/a0, %) (Figure 5B). We found that external inputs to gate neurons are highly effective in controlling propagation through a sub-network: a single external inhibitory-input spike halves the probability of sub-network activation in 74% of the simulations and entirely halts its activation in 26% of the simulations. By contrast, a single external excitatory-input spike doubles baseline sub-network activation probability in 55% of the simulations (Figure 5B, Figure 5—figure supplement 1A).
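The activation change ∆a/a0 reported above is a simple fold change over trials. A sketch (hypothetical function; inputs are boolean per-trial activation vectors):

```python
import numpy as np

def activation_fold_change(control_active, perturbed_active):
    """Fold change of sub-network activation relative to control (in %).

    Both arguments are boolean per-trial vectors marking whether the
    sub-network activated. Returns 100 * (a - a0) / a0, so -100 means
    propagation was fully halted and +100 means activation doubled.
    """
    a0 = np.mean(control_active)      # baseline activation fraction
    a = np.mean(perturbed_active)     # activation under the extra input
    return 100.0 * (a - a0) / a0
```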

Figure 5 with 1 supplement see all
Sub-sequence activations depend on the state of gate neurons.

(A) Voltage traces for six neurons in a sequence (network and colors as in Figure 4B) in two trials of simulations where the first neuron in the green sub-network receives a single excitatory (left, +I) or inhibitory (right, −I) input from an external source (arrows). (B) Fold change of sub-network activation above baseline (∆a/a0, %) for sub-networks randomly selected across all 6000 original simulations. The model gate neuron of each sub-network received an additional single input from an external source at the beginning of each trial. (C) Schematic of protocol and map of the fold change in activation (∆a/a0, %) for a sub-network when manipulating its gate (same network as Figure 4A). Arrow in schematic indicates stimulated gate. Solid outlines indicate combinations of strength and timing leading to halting (bottom, −50% ∆a/a0), facilitation (top right, +50% ∆a/a0), and adaptation (top left, −50% ∆a/a0). ∆t indicates the delay from trigger spike to external input. Dashed lines indicate the median spike time of the gate. (D) Same as (C) for a different sub-network. (E) Average map of the change in activation (∆a/a0, %, same as C) computed from 16 sub-networks of 8 different networks spanning 5–462 followers and 0.01–0.1 spk/s baseline firing rate (networks in C) and (Figure 5—figure supplement 1B). Maps were aligned relative to trigger spike as in (C) before averaging. Solid outlines indicate ±50% ∆a/a0. Color bar as in (C). (F) Same as (E) but maps were aligned to each gate median spike time (dashed line in C) before averaging. (G) Same as (C) for the orange sub-network when manipulating the green gate. (H) Same as (D) for the green sub-network when manipulating the orange gate.

To examine the effect of timing and strength of the external input on sub-network activation, we focused on the network in Figure 4A. We varied the sign (depolarizing vs. hyperpolarizing) and amplitude of the synaptic conductance of the external input to a gate neuron and its timing relative to trigger activation. We explored only conductances corresponding to single connections. We observed three different effects on the activation of the respective sub-network (∆a/a0, Figure 5C, D). First, an inhibitory input of any strength in a temporal window of ~100 ms can acutely reduce sub-network activation, often halting propagation. Second, an excitatory input of intermediate strength can facilitate propagation. The temporal window of facilitation is centered on the expected spike time of the gate neuron. Lastly, an excitatory input of large amplitude can impair propagation if it occurs much earlier than the activation of the trigger neuron. This sub-network behavior results from the single-cell adaptive properties indicated by experimental data (Hemberger et al., 2019): the external input, by triggering the gate and some of the neurons in its sub-network, activates adaptive currents and thus raises their threshold for subsequent activation by the trigger neuron. We found equivalent effects on simulated networks with number of followers and baseline firing rates spanning orders of magnitude (Figure 5—figure supplement 1B, C). Computing the average across all tested sub-networks, we conclude that halting is most effective in a narrower time window (relative to trigger activation) than facilitation (70 and 100 ms, respectively) (Figure 5E, left). When considered relative to the expected spike time of the gate, we observed that the temporal windows are wider (Figure 5F, right). 
Importantly, while the halting window appears to have a sharp temporal boundary determined by the expected spike time of the gate in the sequence, the facilitation window widens with increasing input amplitude.

Finally, we examined how modulating the gate to one sub-network affects the activation of another (Figure 5G, H). Interestingly, this relationship between sub-networks is not symmetrical: in the single-network example, while orange sub-network activation is primarily independent of the state of the green sub-network (Figure 5G), gating of the orange sub-network unexpectedly facilitates green sub-network activation (Figure 5H), suggesting that the former typically exerts a net inhibitory influence on the latter. Indeed, strong excitatory inputs that might activate the orange sub-network early also reduce green sub-network activation (Figure 5H). More generally, we observed various complex interactions between pairs of sub-networks when examining other networks (Figure 5—figure supplement 1B, right). The sub-network interactions do not follow any particular trend (Figure 5—figure supplement 1C), suggesting that they depend on the particular realization of the random connectivity in the network.

In sum, in a model based on the turtle cortex, we identified gate neurons in different sub-networks whose activation is critical for the activation of their sub-network. External single spikes can control the state of gate neurons and thus halt or facilitate the activation of individual sub-networks. The effect of these spikes depends on their timing relative to the sequence trigger. Finally, the activation of a sub-network may influence other sub-networks via recurrent connectivity, leading to complex and non-reciprocal interactions.

Sequences from multiple triggers reliably activate combinations of followers

To examine how sequences interact, we studied a new set of simulations where multiple trigger neurons were activated simultaneously. We found that sequences from coactivated trigger neurons often interact, but they do so reliably, resulting in a combinatorial definition of followers.

We first simulated 100 trials by inducing a single spike from each of 2000 randomly selected trigger neurons in a representative network (as in Figure 4A, though see Figure 6—figure supplement 1 for multiple network instantiations). Randomly pairing these trigger neurons, we estimated a very low probability that two excitatory neurons share followers (Figure 6A, B). Two triggers share at least 1 follower in only 3.12% of all tested pairs (n = 5000), and this probability drops to 0% for at least 11 followers.

Figure 6 with 1 supplement see all
Interactions of sequences from multiple triggers reliably route activity.

(A) Left: schematic of follower classes as a function of trigger neuron (purple: follower to a; red: follower to b; blue: follower to both a and b). Right: example sequences produced by the activation of a or b. Followers are sorted by trigger class (left) and median spike time. Triggers do not share any followers (blue trigger class). (B) Number of followers for each trigger class in (A) for all random pairs of simulations (n = 5000). Boxes: median and [25th, 75th] percentiles. (C) Left: schematic of follower classes as a function of trigger: single-trigger neuron (a or b) or simultaneous coactivation (ab). Blue, purple, and red as in (A). Right: same trials as in (A), and a trial under coactivation. Followers are sorted by trigger class (left) and median spike time. Same follower sorting in all trials. (D) Number of followers present under trigger coactivation as a function of the sum of followers for single triggers. Dashed line: diagonal. Solid line: linear fit with zero intercept. (E) Number of followers for each trigger class in (C) for all random pairs of triggers (n = 5000). (F) Median follower spike time for each trigger class in (C) after pooling followers from all simulations (5000 simulations; 532,208 followers). Lines indicate median. (G) Top: trigger neuron (purple), followers, strong connections between them (arrows), and other neurons in the network (dark blue). Follower colors as in Figure 4B. Bottom: sequences triggered under simultaneous coactivation of trigger (purple) and one additional contextual neuron that facilitates propagation in both sub-networks (left), only in the green sub-network (middle), or only in the orange sub-network (right). (H) Effect on green and orange sub-network activation (G top) when the trigger neuron is coactivated with each of 2000 randomly selected contextual neurons. Dashed lines indicate baseline activation of each sub-network. Neurons are colored by local density. Colored arrows correspond to sequences in (G).

Next, we generated a new simulation for each randomly chosen pair of trigger neurons while activating both simultaneously (100 trials). Comparing the followers resulting from these double-trigger activations to the followers from single-trigger activations reveals three sets of followers (Figure 6C): followers responsive to the activation of only one of the triggers (a), followers responsive to the activation of only the other trigger (b), and followers responsive to their coactivation (a&b).

The number of followers obtained under a&b coactivation is smaller than the sum of the number of followers of a and b alone (Figure 6D, Figure 6—figure supplement 1A). Since a and b rarely share followers, the resulting sub-linear activation suggests frequent lateral inhibition, as we had observed between sub-networks (Figure 5H). Indeed, followers to a single trigger can be lost when that trigger is coactivated with another (a only and b only, Figure 6E, Figure 6—figure supplement 1B). Other followers, however, can emerge when two trigger neurons are coactivated but are absent in response to the activation of either trigger alone (ab only). We call these ‘combination-specific followers’ because their activation depends on whether multiple trigger neurons are silent or active (a only, b only, and ab only). Combination-specific followers represent approximately half of all followers in the coactivation protocol (Figure 6—figure supplement 1C). Finally, some followers fire independently of whether the second trigger neuron is silent or active (a&ab and b&ab). We call these ‘core followers’. We found that the excitatory and inhibitory compositions of core and combination-specific followers do not differ (Figure 6—figure supplement 1D). Core followers typically fire early in a sequence, whereas combination-specific followers become activated later (Figure 6F). This is consistent with experimental evidence that follower identity is more reliable early than late in a sequence (Hemberger et al., 2019), as captured by our model (Figure 2D). By contrast, the late activation of combination-specific followers suggests that modulation of sequences via intra-cortical interactions operates best on followers synaptically distant from the trigger neuron.
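The follower classes described above follow from set comparisons across the three conditions. A minimal sketch (the `classify_followers` helper is a hypothetical illustration operating on sets of follower ids):

```python
def classify_followers(followers_a, followers_b, followers_ab):
    """Partition followers by their response to trigger a, trigger b,
    and the coactivation ab (each argument: a set of follower ids).

    Core followers fire regardless of the other trigger's state;
    combination-specific followers depend on the exact combination.
    """
    a, b, ab = set(followers_a), set(followers_b), set(followers_ab)
    return {
        "a_only": a - ab,         # lost when a is coactivated with b
        "b_only": b - ab,
        "ab_only": ab - a - b,    # emerge only under coactivation
        "core_a": a & ab,         # robust to the state of b
        "core_b": b & ab,         # robust to the state of a
    }
```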

These findings prompted us to examine the effect of circuit interactions on sub-network activation in a known sequence. We used the trigger in the network in Figure 4A as our control with two known sub-networks and coactivated it with each of 2000 randomly selected excitatory neurons in the network, which we call ‘contextual’ neurons. By contrast with random spontaneous activity, these additional activations formed a context of activity that remained constant across trials. We then evaluated the effect of coactivation on the sub-networks of our control sequence (Figure 6G, H). We found that most contextual neurons excite or inhibit both sub-networks equally (diagonal trend, Figure 6H). Consistent with the tendency towards lateral inhibition (Figure 6D), 87.8% of the tested contextual neurons cause some inhibition of both sub-networks (bottom left quadrant, Figure 6H). Lateral excitation is also present but rarer (6.7%, top right quadrant, Figure 6H). The symmetry along the diagonal is likely a consequence of our random connectivity despite biological constraints and reflects our earlier observation that specific pairs of sub-networks can strongly influence each other, with the average effect being balanced (Figure 5—figure supplement 1B, right, Figure 5—figure supplement 1C). We saw that some contextual neurons prevent activation of one sub-network while facilitating the other (top left and bottom right quadrants, Figure 6H, 2.7%, and 2.9%, respectively, Figure 6—figure supplement 1E). The small percentage (~6%) of contextual neurons that can determine which sub-network becomes active suggests a highly specific form of routing. This result is consistent with our previous finding that excitatory followers generally tolerate ongoing network activity (Figure 3C–E). Given the size of our model network (2 × 2 mm, 100k neurons, Figure 1A), we estimate that in a given trial, the path of activation through the network can be influenced by the activation of ~5000 contextual neurons, providing a large reservoir for computation through routing.

Together, our results indicate that overlap of sequences initiated by different triggers is rare but that interactions between the sequences are common. Importantly, we find that sequence interactions are reliable from trial to trial: they define groups of followers that activate reliably while being specific to the combined activation or silence of multiple triggers. While inhibitory interactions are most common, the activation of a sub-sequence can be facilitated through a small percentage of neurons within the recurrent network. Consequently, the state of just a few contextual neurons can selectively route cortical activity along alternative paths of propagation. Together, our results suggest that cortical computations may rely on reliable yet highly specific recurrent interactions between sequences.

Discussion

We developed a model constrained by experimental data from the turtle visual cortex to investigate the effect of single spikes in recurrent networks with otherwise random connectivity. Our model produces reliable sequences from single spikes as experimentally measured ex vivo and predicts their existence in vivo in the turtle cortex, where firing rates are higher, with marked differences between excitatory and inhibitory neurons. We found that strong but rare connections form a substrate for reliable propagation in the network, while dense but weak connections can enhance or disrupt propagation. Our model reveals that sequences may be decomposed into multiple sub-sequences, each reflecting a sub-network of strongly connected followers, which emerge without any hand-tuning of network structures. We showed that activation of individual sub-sequences is sensitive to the state of a few gate neurons and can thus be controlled through ongoing recurrent activity or external inputs. We observed that different trigger neurons rarely share followers, but their sequences often interact, suggesting that downstream circuits may access the combined information from different sources. Indeed, the selectivity of followers to the combinatorial activation of multiple triggers suggests a mechanism that can produce a wide repertoire of activations while remaining reliable and specific. Finally, we found that the exact path of propagation is influenced by contextual activity provided by the activation of a small percentage of other neurons in the network. In summary, our biologically constrained model allows us to dissect the mechanisms behind the reliable and flexible propagation of single spikes, yielding insights and predictions into how cortical networks may transmit and combine streams of information.

Routing

The sequences in our model represent a form of propagating activity. They are related to, but distinct from, previous frameworks: temporary breaking of excitatory/inhibitory balance, synfire chains, neuronal avalanches, and stochastic resonance.

Our work suggests that the disruption of propagating spiking sequences may be a hallmark of flexibility, where apparent failures to propagate activity are the consequence of routing it (Figure 7). For instance, changes in contextual activity across trials may frequently route propagation away from certain neurons, yielding what we would identify as followers with low firing rate modulation. Different replay sequences of place cells in the same environment have been observed in rat hippocampus and linked to flexible planning in goal-directed and memory tasks, and our model provides a plausible circuit mechanism for this flexible trajectory sequence generation (Pfeiffer and Foster, 2013; Widloski and Foster, 2022). We propose that the dynamic routing of activity relies on two elements: the existence, for each propagation path, of few points of failure and specific mechanisms to manipulate them. The strength of some connections in our model enables reliable propagation, while their sparsity defines groups of neurons with few entry connections (gates). Precisely timed external inputs can selectively open or close these gates. In addition, ongoing parallel sequences can also influence the path of propagation in the form of lateral inhibition (competition) or excitation (cooperation) between sub-networks in the recurrent network. We expect that complex computations can be built from this proposed form of gating. Previous theoretical work on signal propagation in recurrent networks with irregular background activity found that the combination of parallel pathways and gating can implement different logic statements, illustrating how gating can form the basis for more complex computations (Vogels and Abbott, 2005). Indeed, recent advances in machine learning result from leveraging gating to control the flow of information more explicitly.
For instance, gating can implement robust working memory in recurrent networks (Chung et al., 2014), prevent catastrophic forgetting through context-dependent learning (Masse et al., 2018), and produce powerful attention mechanisms (Vaswani et al., 2017).

Routing using sparse strong connectivity.

As activity propagates in cortex, a spike will preferentially propagate through strong connections. The strength of those connections ensures reliable propagation, while their sparsity enables flexibility, creating sub-networks with gates where propagation can be halted or facilitated. The halting of activity can appear as partial failures of the expected sequence of spikes, depending on the history of neuron activations. The spiking state of other neurons in the recurrent network, together with external inputs, forms a context that determines the final path for each spike.

Although a single spike can reliably trigger many others in the network, we never observed explosive amplification of activity in our model. On the contrary, multiple excitatory spikes result in a sub-linear summation of responses (Figure 6D), likely reflecting the frequent lateral inhibition onto sub-sequences (Figure 6H). Together, this suggests that sequences operate in a regime where local inhibition often balances excitatory signals. We did not explicitly construct our model to display an excitatory/inhibitory balance. Still, this form of lateral inhibition and routing via the alteration of excitatory and inhibitory conductances in gate neurons are reminiscent of ‘precise balance’, a form of balance within short temporal windows (Bhatia et al., 2019). Hence, our work is consistent with and extends previous theoretical work proposing that the gating of firing rate signals could be arbitrated via the disruption of excitatory/inhibitory balance in local populations (Vogels and Abbott, 2009).

Synfire chains are a classical model of reliable propagation where pools of 100–250 neurons are connected sequentially in a divergent–convergent pattern. Near-synchronous activation of one pool can thus guarantee postsynaptic spikes in the next pool even under partial failure, so reliable synchronous activation recovers in later pools (Abeles, 1991; Diesmann et al., 1999; Kumar et al., 2010). While the biological evidence for synfire chains is inconclusive (Abeles et al., 1993; Fiete et al., 2010; Long et al., 2010; Oram et al., 1999; Prut et al., 1998), theoretical studies have embedded them in recurrent networks yielding insights into the specific network structures needed for effective propagation and gating of activity (Aviel et al., 2003; Kremkow et al., 2010; Kumar et al., 2010; Mehring et al., 2003). A related structure to synfire chains is that of polychronous chains, where synchrony is replaced by precise time-locked patterns that account for the distribution of synaptic delays (Egger et al., 2020; Izhikevich, 2006). These time-locked patterns, or polychronous groups, self-reinforce through spike-timing-dependent plasticity, resulting in stronger synapses. Contrary to classical synfire chains, connectivity in our network is unstructured, and activity propagates over synaptic chains that are sparse (≤2 connections, Figures 2E and 3F) yet powerful enough to make the synchronous firing of large groups of neurons unnecessary. This sparsity enables the selective halting of propagation by even small inputs (Figure 5A) or sequence-to-sequence interactions (Figure 6C) and causes late followers to be the most prone to modulation, that is, least reliable (Figures 2D and 6H). 
Interestingly, the unstructured random connectivity in our model captures very well the experimental finding that 80% of excitatory neurons trigger sequences even when activated in isolation (Hemberger et al., 2019; Figure 2G), suggesting that sparse chains of strong connections are indeed common, at least in turtle cortex. Consequently, reliable propagation may involve far fewer neurons and spikes than initially predicted by synfire models, with significant experimental and computational implications.

In our model, a single spike could trigger multiple others without runaway excitation, suggesting that sequences share some properties with neuronal avalanches (Beggs and Plenz, 2003). Criticality, a dynamical regime that can maximize information transmission in recurrent networks (Beggs, 2008), has been associated with the ex vivo turtle cortex (the preparation whose data constrained our model) after observations of scale-free statistics of population activity (Shew et al., 2015). Although cortical criticality does not require reliability, it does not preclude it. Indeed, repeatable and precise LFP avalanches have been found in slice cultures, possibly reflecting the existence of underlying repeatable spiking sequences (Beggs and Plenz, 2004). Our finding of spike sequences as the expression of routing by strong connections may thus be an alternative view, compatible with the hypothesis of cortical criticality. Importantly, we interpret the halting of sequences as a form of contextual gating. While avalanches may statistically describe sequence sizes as a stochastic branching process, our simulations with fixed external input (Figure 5) and contextual activity (Figure 6) suggest that reliable control of sub-sequences is possible. Consequently, network states that remain constant from trial to trial will produce sequence statistics that differ from pure branching-process statistics (e.g., power laws). Experimentally, this might be achieved pharmacologically or by increasing the number of trigger neurons that are coactivated (Figure 6G, H).

Lastly, stochastic resonance is a phenomenon in which noise can enhance the response of a system (Collins et al., 1995a). In particular, long-tailed distributions of strengths may create the opportunity for aperiodic stochastic resonance between weak and strong spike inputs (Collins et al., 1995b; Teramae et al., 2012). Indeed, our simulations with and without weak connections reveal narrow ranges of input where weak connections could increase or decrease reliability (Figure 3H). This global control mechanism contrasts with the selectivity of single sub-networks to the activation of a few neurons within the recurrent circuit (Figure 6H). As with criticality and synfire chains, our results suggest that routing mechanisms need not be stochastic and operate only at the population level, but can instead be reliable and fine-grained.

Connectivity

Our modeling results show that rare but strong connections are key to the generation of sequential activity from single spikes. Such sparse and strong connections are a common feature of brain connectivity. Indeed, long-tailed distributions of connection strengths are found in rat visual cortex (Song et al., 2005), mouse barrel and visual cortices (Cossell et al., 2015; Lefort et al., 2009), rat and guinea pig hippocampus (Ikegaya et al., 2013; Sayer et al., 1990), and human cortex (Shapson-Coe et al., 2021). Modeling and in vitro studies in hippocampus and cortex have suggested that sparse strong inputs substantially affect their postsynaptic partners and network dynamics (Franks and Isaacson, 2006; Ikegaya et al., 2013; Setareh et al., 2017). Furthermore, repeatable sequences of action potentials have also been reported in most of these mammalian cortical structures (Buzsáki and Tingley, 2018; Carrillo-Reid et al., 2015; Dechery and MacLean, 2017; Diba and Buzsáki, 2007; Dragoi and Tonegawa, 2011; Luczak et al., 2015; Luczak and Maclean, 2012; Vaz et al., 2020). Thus, could strong connections underlie cortical sequences more generally, and are sequences an ancient feature of cortical computation? We based our model on data from the turtle cortex, yet other species may introduce variations that might be relevant, including differences in operating firing rates or neuronal populations. For instance, the reliability of sequence propagation may depend on connectivity features we did not explore here, such as the presence of different connectivity motifs or differences in the distance-, layer-, and population-specific connection probabilities (Jiang et al., 2015; Song et al., 2005). Further modeling and experimental work on the cortical responses of diverse species will be needed to provide a comprehensive comparative perspective (Laurent, 2020).

Our study identified distinct roles for strong and weak connections, providing reliability and flexibility, respectively. Experimental distributions of trial-to-trial variability of EPSP sizes in rat and mouse cortices show that the amplitude of large EPSPs is less variable than that of small EPSPs, supporting the greater reliability of strong connections (Buzsáki and Mizuseki, 2014; Ikegaya et al., 2013; Lefort et al., 2009). Recent electron microscopy evidence from mouse primary visual cortex (V1) shows that the distribution of synapse sizes between L2/3 pyramidal neurons is best described by the combination of two lognormal distributions (Dorkenwald et al., 2019). This binarization of excitatory synapse types may be the morphological equivalent of our modeling predictions.

Although we based many model parameters on experimental data from the turtle visual cortex (connection strengths, single-neuron properties, connection probabilities), our model has otherwise random connectivity. Consequently, our model displays a high degree of heterogeneity, with sub-networks emerging without fine-tuned connectivity, allowing us to make general predictions about sequence propagation and the impact of single spikes. However, other features of cortical organization, such as structured connectivity, neuronal diversity, and dendritic interactions, might affect activity propagation and routing. Indeed, certain nonrandom features may be expected in turtle cortical connectivity, such as axonal projection biases (Shein-Idelson et al., 2017), differences between apical and basal dendritic connectivity, gradients of connectivity across the cortical surface (Fournier et al., 2015), or plasticity-derived structures. Transcriptomic and morphological evidence suggests that turtle cortex contains diverse neuronal types, as seen in mammalian cortex (Nenadic et al., 2003; Tosches et al., 2018). Different neuronal types may play different roles in information routing. For instance, somatostatin- and parvalbumin-expressing neurons in mouse V1 may play different roles in controlling trial-to-trial reliability (Rikhye et al., 2021). Finally, our model represents neurons as single compartments, but complex input interactions and nonlinearities may occur at the level of dendritic arbors. For instance, synapses may cluster in short dendritic segments to effectively drive postsynaptic spiking (Kirchner and Gjorgjieva, 2021; Scholl et al., 2021); these inputs may be locally balanced by inhibition (Iascone et al., 2020); and sequential activation within clusters may be particularly effective at triggering somatic spiking (Ishikawa and Ikegaya, 2020). The consequences of these aspects of cortical organization on information routing via strong connections remain to be explored.

Function

Evidence from rodents has often associated spiking sequences with memory or spatial tasks (Buzsáki and Tingley, 2018; Diba and Buzsáki, 2007; Dragoi and Tonegawa, 2011; Modi et al., 2014; Vaz et al., 2020) or reported sequences within the spontaneous or stimulus-evoked activity in sensory areas (Carrillo-Reid et al., 2015; Dechery and MacLean, 2017; Fellous et al., 2004; Luczak et al., 2015; Luczak and Maclean, 2012). More generally, spiking sequences have been proposed as parts of an information-coding scheme for communication within cortex (Luczak et al., 2015) or as content-free structures that can reference experiences distributed across multiple cortical modules (Buzsáki and Tingley, 2018). Although spiking sequences were observed in the ex vivo turtle cortex, we predict the existence of sequences under in vivo levels of baseline activity. We propose that reliable routing of sequential activity might implement the visual tuning of cortical neurons. The dorsal cortex of turtles is a visual processing area that shows no clear retinotopy (Fournier et al., 2018) but displays stimulus-evoked wave patterns (Nenadic et al., 2003; Prechtl et al., 1997). Most neurons have receptive fields covering the entire visual field and display a wide range of orientation selectivity (Fournier et al., 2018). The presence of orientation tuning in the turtle cortex and our result that excitatory neurons are mainly activated via strong connections (Figure 3C–E) predict that like-tuned neurons should share strong connections. Interestingly, long-tailed distributions of connection strengths between L2/3 neurons of mouse V1 play a prominent role in orientation tuning in vivo (Cossell et al., 2015; but see opposite evidence in ferret V1, Scholl et al., 2021). In mouse V1, stimulus presentation triggers broadly tuned depolarization via a majority of weak connections, while sparse strong connections determine orientation-selective responses (Cossell et al., 2015).
This untuned depolarization via weak connections is consistent with our finding of activity regimes where weak connections can modulate the reliability of propagation (Figure 3H). Overall, the dual roles in routing played by weak and strong connectivity may have corresponding roles in what we describe as tuning. ‘Tuning’ is an operational term describing a slice of the recurrent pipeline of transformations linking stimulus to perception or behavior. Routing of spikes and tuning thus likely represent different ways of describing the same processes.

Our model goes beyond generating reliable sequences from single spikes in many other ways. For example, experiments in the turtle cortex with multiple-trigger neuron activation revealed non-additive effects on follower composition (Hemberger et al., 2019). We can now mechanistically explain these as interactions between sub-networks of strong connections. In our model, the scale of the temporal windows for effective single-sub-network gating is about twice as long as the standard deviation of follower spike times (approximately 100 and 50 ms, respectively, Figures 2H and 5C). Consequently, parallel-running sequences (e.g., as observed in the mouse auditory cortex; Dechery and MacLean, 2017) may have the temporal precision to interact consistently across trials. Indeed, our model produces late-activated followers that are reliable and specific to the coactivation of two trigger neurons (Figure 6E). The presence of combination-specific followers in our model suggests that the tuning of a neuron may be expressed as the result of logical operations on the tuning properties of other neurons in the recurrent circuit. Under this view, lateral inhibition between sub-networks, mediated through the convergence of weak connections (Figure 3D, E), may perform nontrivial computations on the properties of stimuli in a sensory cortex. Indeed, lateral inhibition triggered by early responding principal neurons implements concentration-invariant coding in mouse olfactory cortex (Bolding and Franks, 2018). In line with our observation of common lateral inhibition between sub-networks (Figures 5H and 6H), activations of single neurons in mouse V1 tend to inhibit neurons with overlapping tuning (feature competition) (Chettih and Harvey, 2019). Similarly, highly specific lateral excitation between sub-networks (Figure 6H) may act as a mechanism to bind together stimulus properties (Abeles et al., 2004).
Given the density of excitatory neurons and the frequent presence of strong connections (Figure 1A, E), such circuits may display a high combinatorial capacity (Bienenstock, 1995).

In conclusion, our results establish a mechanistic link between three elements common to most cortical circuits: sparse firing, spiking sequences, and long-tailed distributions of connection strengths. We provide a computational interpretation for spiking sequences as the expression of reliable but flexible routing of single spikes over sets of neurons connected by strong connections (Figure 7). By dissecting how a cortex responds to single spikes, we illustrate the relevance of this elementary unit of communication between neurons in cortical computations.

Methods

Neuron and synapse models

We simulated neurons as adaptive exponential integrate-and-fire (see Table 1 for parameters) (Brette and Gerstner, 2005). We used the NEST model ‘aeif_cond_exp’ (see Methods, Simulations), which implements a conductance-based exponential integrate-and-fire neuron model where the membrane potential is given by:

C dV/dt = −gL (V − EL) + gL ΔT exp((V − VT)/ΔT) − ge (V − Ee) − gi (V − Ei) − w + Ie

τw dw/dt = a (V − EL) − w
Table 1
Neuron and synapse model parameters.

Asterisk (*) indicates parameters fitted from experimental data (Hemberger et al., 2019). Parameters of excitatory synaptic conductance are for a truncated lognormal distribution.

Variable: Value
Excitatory reversal potential (Ee): 10 mV
Inhibitory reversal potential (Ei): −75 mV
Synaptic conductance time constant*: 1.103681 ms
Excitatory synaptic conductance (mean)*: 3.73 nS
Excitatory synaptic conductance (std)*: 6.51 nS
Excitatory synaptic conductance (max)*: 67.8 nS
Leak reversal potential (EL): −70.6 mV
Spike detection threshold: 0 mV
Membrane reset potential: −60 mV
Spike initiation threshold (VT): −50.4 mV
Membrane capacitance* (C): 239.8 pF
Leak conductance* (gL): 4.2 nS
Subthreshold adaptation (a): 4 nS
Spike-triggered adaptation (b): 80.5 pA
Adaptation time constant (τw): 144 ms
Slope factor (ΔT): 2 mV
Refractory period: 2 ms

At each firing time, the variable w is increased by an amount b (spike-triggered adaptation; Table 1). Inhibitory and excitatory synaptic conductances (gi and ge) are implemented as exponential decay kernels (see NEST documentation, Linssen et al., 2018).
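The dynamics above can be sketched with a simple forward-Euler integration. This is an illustrative reimplementation using the Table 1 parameters, not the NEST ‘aeif_cond_exp’ code; synaptic conductances and the refractory period are omitted for brevity:

```python
import numpy as np

# Parameters from Table 1 (mV, nS, pF, pA, and ms form a consistent unit system)
EL, VT, DT = -70.6, -50.4, 2.0        # leak reversal, spike initiation threshold, slope factor
C, gL = 239.8, 4.2                    # membrane capacitance, leak conductance
a, b, tau_w = 4.0, 80.5, 144.0        # adaptation parameters
V_DETECT, V_RESET = 0.0, -60.0        # spike detection and reset potentials

def adex_spike_count(I_ext, T=1000.0, dt=0.1):
    """Count spikes of an isolated AdEx neuron under a constant current I_ext (pA)."""
    V, w, n_spikes = EL, 0.0, 0
    for _ in range(int(T / dt)):
        # exponential term; argument clamped to avoid numerical overflow near a spike
        exp_term = gL * DT * np.exp(min((V - VT) / DT, 20.0))
        dV = (-gL * (V - EL) + exp_term - w + I_ext) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_DETECT:             # spike: reset V, apply spike-triggered adaptation b
            V, w, n_spikes = V_RESET, w + b, n_spikes + 1
    return n_spikes
```

Consistent with the fitted rheobase of roughly 150 pA, constant currents well below that value produce no spikes in this sketch, while currents well above it do.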

To estimate membrane properties, we used least squares to match the output of our neuron model to the membrane potential of real neurons in whole-cell clamp under different levels of current injection (Hemberger et al., 2019). An example of the fit can be seen in Figure 1B. We first fit the model membrane leak conductance (resistance) using the steady state of the experimentally measured membrane potential after current injection. We then fit the model membrane capacitance (which determines the membrane time constant) using the experimentally measured potential immediately after the onset of current injection. We fitted all traces independently (n = 3886), and used the median of the distributions for all neurons in the model. Membrane parameters resulted in a rheobase current of 150 pA for a model neuron in isolation.

We fitted synaptic time constants to rise times of EPSPs (n = 382) obtained in paired-patch recordings (same as Figure 1D; Hemberger et al., 2019) using least squares and used the median for our model synapses. We converted experimentally measured EPSP amplitudes (same n = 382) to synaptic conductances (Figure 1D) using least squares and fitted a lognormal distribution via maximum likelihood estimation. When drawing strengths from the resulting lognormal distribution, we discarded and re-sampled conductances higher than the maximum conductance of all fitted EPSP amplitudes (67.8 nS, 21 mV). Inhibitory synaptic conductances were drawn from the same truncated lognormal distribution and scaled up by 8, resulting in currents 2.5 times stronger and a maximum IPSP amplitude of −21 mV on an isolated model neuron that has been depolarized to −50 mV.
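Sampling from the truncated lognormal can be done by redrawing any conductance above the cutoff. A minimal sketch follows; only the 67.8 nS maximum comes from the text, while the lognormal parameters below are illustrative stand-ins rather than the fitted values:

```python
import numpy as np

G_MAX = 67.8  # nS, maximum conductance among all fitted EPSP amplitudes

def sample_truncated_lognormal(mu, sigma, size, rng):
    """Draw conductances from a lognormal; redraw any sample above G_MAX."""
    g = rng.lognormal(mean=mu, sigma=sigma, size=size)
    while (bad := g > G_MAX).any():
        g[bad] = rng.lognormal(mean=mu, sigma=sigma, size=bad.sum())
    return g

rng = np.random.default_rng(0)
# mu/sigma parameterize the underlying normal distribution; illustrative values only
g = sample_truncated_lognormal(mu=0.7, sigma=1.0, size=10_000, rng=rng)
```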

Network

Our network model consisted of 93,000 excitatory neurons and 7000 inhibitory neurons that were randomly assigned a location on a square of side 2000 μm.

Neurons were connected randomly. The probability of connection between two given neurons changed as a function of Euclidean distance in the plane, assuming periodic boundary conditions. We first estimated connection probability at regular distance intervals from 918 paired whole-cell patch-clamp recordings (Figure 1C; Hemberger et al., 2019). We then fitted these estimates with a Gaussian profile using least squares. Finally, we scaled the profile so that the expected probability within a 200-μm radius matched population-specific probabilities measured experimentally (Figure 1A). The degrees of connectivity were not fixed, so inhomogeneities in neuron location and randomness of connectivity meant that neurons received a variable number of connections. For instance, the distributions of numbers of E–E weak and strong connections are given in Figure 1E.
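The construction described above can be sketched as follows. The network size and the profile parameters (p0, sigma) here are illustrative, scaled down from the full 100,000-neuron model:

```python
import numpy as np

def torus_distance(xy_a, xy_b, side=2000.0):
    """Euclidean distance with periodic boundary conditions (distances in um)."""
    d = np.abs(xy_a - xy_b)
    d = np.minimum(d, side - d)
    return np.hypot(d[..., 0], d[..., 1])

def connection_prob(dist, p0, sigma):
    """Gaussian connectivity profile; p0 scales the profile so that the expected
    probability within a 200-um radius can be matched to measured values."""
    return p0 * np.exp(-dist**2 / (2 * sigma**2))

rng = np.random.default_rng(1)
pos = rng.uniform(0, 2000, size=(500, 2))             # random neuron locations (um)
d = torus_distance(pos[:, None, :], pos[None, :, :])  # pairwise distances
p = connection_prob(d, p0=0.3, sigma=150.0)           # illustrative parameters
np.fill_diagonal(p, 0.0)                              # no autapses
adj = rng.random(p.shape) < p                         # Bernoulli draw per pair
```

Because connections are drawn independently per pair, in-degrees are not fixed and vary across neurons, as in the model.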

All strengths and delays were assigned independently for each connection. Neurons did not have autapses.

Simulations

Simulations were run using NEST 2.16 (Linssen et al., 2018) with a time step of 0.1 ms. Each simulation, containing 100 trials, took between 30 and 45 min on an Intel(R) Xeon(R) Gold 6152 CPU @ 2.10 GHz.

We instantiated 300 networks and generated 20 simulations of each instantiation, yielding a batch of 6000 simulations. Each neuron in each simulation received a random current sampled independently every 1 ms from a Gaussian distribution N(μin, σin). Each simulation had randomly assigned input parameters (μin and σin). Additionally, a single-trigger neuron was randomly chosen in each network and driven to spike at regular intervals (see Methods, Definition of followers). Whenever we re-ran a simulation, we used the same input statistics and trigger neuron unless otherwise stated.

Certain mean input levels (μin) could lead to self-sustaining activity (Figure 1H) but only after a few spikes had occurred in the network, possibly resulting in different activity early and late in a simulation. Consequently, we kick-started all our simulations with a volley of spikes sent to a random selection of 500 excitatory neurons in the first 100 ms and then discarded the initial 1000 ms.

We analyzed all 6000 original simulations to identify sequences of spikes (Figure 2) and sub-sequences (Figure 4). A random selection of 900 simulations was analyzed to extract connectivity motifs (Figure 3A–D). A random selection of 2000 simulations was re-run with truncated connectivity (Figure 3F–H). To study the effect of external spikes on sub-sequences (Figure 5A, B), we re-ran a random selection of 2000 simulations where the gate of one randomly chosen sub-network received an additional external spike. To study how external spike parameters affected sub-sequences (Figure 5C–H, Figure 5—figure supplement 1), we selected 8 simulations and re-ran each one between 1142 and 3655 times while varying external spike strength and timing. To study the effect of coactivation of multiple-trigger neurons (Figure 6), we selected a representative simulation (same as Figure 4A) and re-ran it 7000 times with new randomly chosen triggers (but the same input parameters, μin and σin). We repeated this analysis on another 50 randomly chosen simulations, each re-run 800 times (Figure 6—figure supplement 1).

Simulation firing rates

We focused on capturing the mean firing rates from ex vivo and in vivo spontaneous activity ([0, 0.05] and [0.02, 0.09] spk/s, Hemberger et al., 2019) because they provide a clear baseline of uncorrelated variability for new incoming spikes. To compute the mean firing rate of each simulation, we took the average during the baseline periods (100 ms before each spike injection). Our averages include all neurons (excitatory and inhibitory) equivalent to a 2 × 2 mm slab of tissue (Figure 1A). Since experimental estimates are limited to those high-rate neurons that can be accessed and spike sorted in extracellular recordings (Fournier et al., 2018; Hemberger et al., 2019), comparing them to our model is not trivial. Along those lines, most excitatory neurons in our model produced none-to-few spikes (Figure 2—figure supplement 1E), and excluding them increases mean rates 10-fold (to 0.22–1.04 spk/s).

As well as low average firing rates, our model also produces fluctuations of higher instantaneous firing rates reminiscent of the waves observed experimentally (Prechtl et al., 1997; Shew et al., 2015; Figure 2—figure supplement 1C, D). Convolving spike trains in our simulations with a 250-ms Gaussian window yields peak instantaneous firing rates one order of magnitude higher than their average (maximum firing rate per neuron, averaged across all neurons, 0.53–1.36 exc. spk/s, 1.7–3.7 inh. spk/s), and comparable to those observed in vivo under visual stimulation (Fournier et al., 2018).
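The instantaneous-rate estimate can be sketched as a Gaussian convolution of binned spike trains. Only the 250-ms window comes from the text; the 1-ms bin size and the normalization are our choices for this illustration:

```python
import numpy as np

def instantaneous_rate(spike_times, t_max, sigma_ms=250.0, dt=1.0):
    """Instantaneous firing rate (spk/s) of one spike train (times in ms),
    obtained by convolving the binned train with a Gaussian window."""
    edges = np.arange(0.0, t_max + dt, dt)
    counts, _ = np.histogram(spike_times, bins=edges)
    t = np.arange(-4 * sigma_ms, 4 * sigma_ms + dt, dt)
    kernel = np.exp(-t**2 / (2 * sigma_ms**2))
    # normalize so that the output is in spk/s and integrates to the spike count
    kernel /= kernel.sum() * dt / 1000.0
    return np.convolve(counts, kernel, mode='same')
```

For a regular 10 spk/s train, the estimate hovers around 10 spk/s away from the edges; peaks in irregular trains can exceed the average rate by an order of magnitude, as described above.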

Definition of followers

Every 400 ms, for 100 trials, we set the voltage of the trigger model neuron above the spike detection threshold, generating an instantaneous spike. For each trial, we defined a window before spike injection (100 ms) and a window after spike injection (300 ms). We computed the mean firing rate for all windows before and after spike injection for all other neurons in the network and defined the difference in firing rate before and after as the firing rate modulation of each neuron (∆FR). We normalized ∆FR by dividing it by that expected from a neuron that never spiked before spike injection and that spiked precisely once after spike injection.

We defined a null distribution for firing rate modulation (∆FR) as the difference of two random samples from two Poisson random variables with the same underlying rate. The rates of the two Poisson random variables were adjusted to account for the different duration of the time windows before and after spike injection. The underlying rate was taken as the mean firing rate of all model neurons in the windows before spike injection. The resulting null distribution was centered at zero but displayed a non-zero variance. We used p < 10^−7 as an upper boundary to the null distribution to segregate spurious firing rate increases from reliable neurons. Since we tested all 10^5 neurons in our model (except the trigger neuron), we expected one false positive on average every 100 simulations. Due to large differences in firing rates, we computed different null distributions for the excitatory and inhibitory populations. Due to the very low baseline firing rates in our networks, we could not detect the opposite of followers: neurons that decreased their firing rate during our spike induction protocol. However, our analysis of the coactivation of multiple triggers (Figure 6) showed that such neurons do exist.
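The follower criterion can be sketched as follows, with a Monte Carlo null distribution in place of the analytical Poisson difference and a deliberately permissive cutoff (the paper's p < 10^−7 would require far more null samples than this illustration draws):

```python
import numpy as np

rng = np.random.default_rng(0)
T_BEFORE, T_AFTER, N_TRIALS = 100.0, 300.0, 100  # window durations (ms), trials

def delta_fr(counts_before, counts_after):
    """Trial-averaged firing-rate modulation, normalized so that a neuron silent
    before injection that spikes exactly once after injection has dFR = 1."""
    rate_before = counts_before.mean() / T_BEFORE
    rate_after = counts_after.mean() / T_AFTER
    return (rate_after - rate_before) * T_AFTER

def null_delta_fr(base_rate, n_samples, rng):
    """Null dFR: spike counts before and after are Poisson with the same
    underlying rate, adjusted for the two window durations."""
    before = rng.poisson(base_rate * T_BEFORE, size=(n_samples, N_TRIALS))
    after = rng.poisson(base_rate * T_AFTER, size=(n_samples, N_TRIALS))
    return (after.mean(axis=1) / T_AFTER - before.mean(axis=1) / T_BEFORE) * T_AFTER

null = null_delta_fr(base_rate=0.05e-3, n_samples=100_000, rng=rng)  # 0.05 spk/s
threshold = np.quantile(null, 1 - 1e-4)  # illustrative cutoff, not p < 1e-7
```

A perfectly reliable follower (silent before, exactly one spike after in every trial) has dFR = 1 and sits far above this null threshold.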

A higher number of trials per simulation would have allowed a better estimation of ∆FR and the detection of extra followers. However, we found that 100 trials detected the most reliable followers while limiting simulation times (see Methods, Simulations) and keeping our protocol and predictions experimentally relevant.

Lognormal fit bootstrap

We generated 50,000 bootstrapped samples of the probability that a neuron had at least one strong excitatory-to-excitatory connection (Figure 1F). A connection was strong if it fell between 50.6 and 67.8 nS, corresponding, respectively, to the 99.7th percentile of the original fitted lognormal distribution (Figure 1D) and to the maximum possible strength.

The number of strong connections sij of a model neuron i in bootstrap step j follows a Binomial distribution B(ni, pj), where ni is the total number of connections of neuron i, and pj is the probability that a connection falls within the strong weight range. To estimate pj we sampled 122 times, with replacement, from a set of 122 experimentally obtained EPSP amplitudes and fitted a new truncated lognormal. Our model neurons received a variable number of connections ni (see Methods, Network), which was well described by a discretized Gaussian distribution N(μ = 745, σ = 27). For each bootstrap step j, we thus generated 1000 values of ni, drew one sample sij of each B(ni, pj), and calculated the fraction of all sij greater than or equal to 1.
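One bootstrap step might look like the sketch below. It uses synthetic stand-in data for the 122 measured conductances, a plain lognormal MLE that ignores the truncation, and Monte Carlo integration in place of the lognormal CDF:

```python
import numpy as np

rng = np.random.default_rng(2)
# illustrative stand-in for the 122 experimentally measured conductances (nS)
epsp_g = rng.lognormal(mean=0.7, sigma=1.0, size=122).clip(max=67.8)
STRONG_LO, STRONG_HI = 50.6, 67.8  # nS, strong-connection range

def bootstrap_p_strong(data, rng):
    """One bootstrap step: resample the measured conductances with replacement,
    refit a lognormal by MLE (log-mean/log-std; truncation ignored here),
    and return the probability that a connection falls in the strong range."""
    sample = rng.choice(data, size=data.size, replace=True)
    mu, sigma = np.log(sample).mean(), np.log(sample).std()
    draws = rng.lognormal(mu, sigma, size=100_000)  # Monte Carlo in place of the CDF
    return ((draws > STRONG_LO) & (draws < STRONG_HI)).mean()

def p_neuron_has_strong(p_conn, rng, n_neurons=1000):
    """Fraction of neurons with at least one strong connection, given the
    per-connection probability and Gaussian-distributed connection counts."""
    n_conn = rng.normal(745, 27, size=n_neurons).round().astype(int)
    s = rng.binomial(n_conn, p_conn)  # strong connections per neuron
    return (s >= 1).mean()

p_j = bootstrap_p_strong(epsp_g, rng)
frac = p_neuron_has_strong(p_j, rng)
```

Repeating these two steps 50,000 times yields the bootstrap distribution shown in Figure 1F.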

Entropy of follower rank

To compare the reliability of the order of follower activation to that seen in experiments, we used the same entropy-based measure as used first in experiments (Hemberger et al., 2019; Figure 2D). For every trial, followers were ordered by the time of their first spike and we calculated the entropy of follower identity for each rank:

Hk = −Σ(i=1..n) pik log2(pik)

where n is the number of followers and pik is the frequency with which follower i fired in rank k. The entropy Hk was normalized by the entropy of a uniform distribution:

Hu = −Σ(i=1..n) (1/n) log2(1/n) = log2(n)

A value of Hk = 0 indicates that the same neuron fired in rank k in all trials. Note that this measure is especially sensitive to activations or failures of single followers early in a trial since they affect the ranks of all subsequent followers in that same trial. Due to our exploration of different firing rate levels, we often observed sequence failure (Figure 4) and numbers of followers (Figure 2F) much higher than the number of trials (100), both factors that limited our capacity to estimate order entropy accurately. Consequently, we limited our estimates to simulations with at least as many trials as followers and to trials where at least 25% of followers were present.
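The rank entropy can be computed as in the sketch below, which assumes every follower fires in every trial; trials with missing followers would need the exclusion criteria described above:

```python
import numpy as np

def rank_entropy(first_spike_times):
    """Normalized entropy of follower identity per rank.
    first_spike_times: (n_trials, n_followers) array of first-spike times;
    assumes every follower fired in every trial."""
    n_trials, n_followers = first_spike_times.shape
    h_uniform = np.log2(n_followers)
    order = np.argsort(first_spike_times, axis=1)  # follower identity per rank, per trial
    entropies = []
    for k in range(n_followers):
        _, counts = np.unique(order[:, k], return_counts=True)
        p = counts / n_trials                      # frequency of each identity in rank k
        entropies.append(-(p * np.log2(p)).sum() / h_uniform)
    return np.array(entropies)
```

A perfectly repeatable order gives zero entropy at every rank; fully shuffled orders give values approaching 1.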

Spatial center-of-mass

To visualize the spatial propagation of sequences, we calculated the center-of-mass of follower activations over time, similarly to the experimental data (see Figure 5A in Hemberger et al., 2019). We took the average follower XY location for those followers activating in a sliding window of 5 ms. The resulting X and Y traces were smoothed using a Gaussian window of 15 ms.

K-modes clustering

For each simulation, we summarized the activity of each follower as a single vector of 100 binary entries representing the presence (at least one spike) or silence of that follower in each of the trials (100 trials per simulation). We then applied k-modes clustering on each simulation separately. K-modes clustering is an unsupervised method for assigning data samples to K clusters. It is similar to k-means clustering, but distances are measured as the number of mismatching values between sample vectors (Hamming distance) and each cluster centroid is the per-dimension mode of its members, so it performs well for categorical (binary) data. Manual inspection suggested that followers were well clustered in groups of 5–10 neurons. Since we applied unsupervised clustering to 6000 simulations with very different numbers of followers each (see examples in Figure 6D), we took K = n/6 for each simulation, where n is the total number of followers detected in that simulation. The algorithm had access only to spiking activity and not to the connectivity between followers. We used the implementation provided in the python package k-modes (https://pypi.org/project/kmodes).
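A minimal k-modes loop illustrates the idea. The study used the kmodes package; this sketch is our own reimplementation and supports an explicit initialization for reproducibility:

```python
import numpy as np

def k_modes(X, k, n_iter=20, init=None, rng=None):
    """Minimal k-modes clustering for binary data: assign each sample to the
    centroid with the smallest Hamming distance, then update each centroid to
    the per-column mode of its members."""
    rng = rng or np.random.default_rng(0)
    if init is None:
        centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    else:
        centroids = init.copy()
    for _ in range(n_iter):
        # Hamming distance from every sample to every centroid
        d = (X[:, None, :] != centroids[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = (members.mean(axis=0) >= 0.5).astype(X.dtype)
    return labels, centroids
```

In the paper's setting, X would be the (followers x 100 trials) binary activity matrix of one simulation and k = n/6.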

Graph visualization

All graphs of strong connections (Figures 4B–D and 6G) were laid out using Graphviz dot and do not represent the spatial location of model neurons unless otherwise specified.

Electrophysiology

All electrophysiological recordings used to fit the models were described in the original experimental study (Hemberger et al., 2019).

The adaptation index was computed as the ratio of the inter-spike interval between the last two spikes to the inter-spike interval between the first two spikes in whole-cell patch current-clamp recordings under a constant current injection for 1 s. The current value was two to five times the current that elicited a single spike for each particular neuron.

Data and software availability

All code used for model simulations and analysis has been deposited at https://github.com/comp-neural-circuits/tctx (copy archived at swh:1:rev:4fcd1a195a64015e777e55f582e8df35c4913993; Riquelme, 2022). All experimental data have been previously published in Hemberger et al., 2019. Data used to fit the model parameters have been deposited in https://doi.org/10.6084/m9.figshare.19763017.v1.

Data availability

The following data sets were generated
    1. Riquelme JL
    (2022) figshare
    Turtle cortex whole-cell patch-clamp recordings.
    https://doi.org/10.6084/m9.figshare.19763017.v1

References

    1. Beggs JM
    (2008) The criticality hypothesis: how local cortical networks might optimize information processing
    Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 366:329–343.
    https://doi.org/10.1098/rsta.2007.2092
    1. Collins JJ
    2. Chow CC
    3. Imhoff TT
    (1995b) Aperiodic stochastic resonance in excitable systems
    Physical Review. E, Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics 52:R3321–R3324.
    https://doi.org/10.1103/physreve.52.r3321

Decision letter

  1. Peter Latham
    Reviewing Editor; University College London, United Kingdom
  2. Laura L Colgin
    Senior Editor; University of Texas at Austin, United States
  3. Abigail Morrison
    Reviewer; Forschungszentrum Jülich, Germany

Our editorial process produces two outputs: i) public reviews designed to be posted alongside the preprint for the benefit of readers; ii) feedback on the manuscript for the authors, including requests for revisions, shown below. We also include an acceptance summary that explains what the editors found interesting or important about the work.

Decision letter after peer review:

Thank you for submitting your article "Single spikes drive sequential propagation and routing of activity in a cortical network" for consideration by eLife. Your article has been reviewed by 3 peer reviewers, one of whom is a member of our Board of Reviewing Editors, and the evaluation has been overseen by Laura Colgin as the Senior Editor. The following individual involved in the review of your submission has agreed to reveal their identity: Abigail Morrison (Reviewer #3).

The reviewers have discussed their reviews with one another, and the Reviewing Editor has drafted this to help you prepare a revised submission.

Essential revisions:

In a surprising display of unanimity, all three reviewers very much liked the paper, and all three reviewers had one main comment, which is related to the exceptionally low firing rates in the ex-vivo turtle cortex. Our question is: would you see repeated patterns if the firing rate were higher? Extrapolation of your simulations suggests not:

In the "Connectivity" section, starting on line 688, you ask the question "could strong connections underlie cortical sequences more generally, and are sequences an ancient feature of cortical computation?" You don't exactly answer that, but doesn't Figure 2F, which shows that the number of followers falls off rapidly with firing rates, tell us that the answer is likely to be no?

We hesitate to ask for more simulations, but without them, we're not sure this work can be extrapolated beyond the ex-vivo turtle prep, as you appear to have done:

Line 212: "Our simulations further predict that sequences can occur under in vivo levels (high firing rates) of spontaneous activity".

Lines 594-5: "Our model produces reliable sequences from single spikes as experimentally measured ex vivo and predicts their existence in vivo, where firing rates are higher, with marked differences between excitatory and inhibitory neurons."

We looked at your Ref. 30, and Figure 8C of that reference shows firing rates in an awake turtle prep ranging from 0 to 5 spikes/s following a visual stimulus, with a mean of about 2.5 (and a max of 20). This is much higher than the 0.02-0.09 spikes/s you used in your high firing rate simulations (from line 191). We might be misreading Ref. 30, but certainly, your firing rates are 1-2 orders of magnitude lower than what's found in the mammalian cortex.

Along the same lines, on lines 468-73 you say: "In sum, we identified gate neurons in different sub-networks whose activation is critical for the activation of the sub-network. External single spikes can control the state of gate neurons and thus halt or facilitate the activation of individual sub-networks. The effect of these spikes depends on their timing relative to the sequence trigger. Finally, the activation of a sub-network may influence other sub-networks via recurrent connectivity leading to complex and non-reciprocal interactions."

We're not sure this result will be robust in a noisier network at higher rates; that should be clear.

On a secondary note, which relates to clarity, we will admit that there was somewhat of a split. Reviewer 2 found the paper "exceptionally clear and well-written", whereas Reviewer 1 found the paper often confusing. Here is a summary of what Reviewer 1 said:

My number one rule of writing is that the reader should never have to ask "why am I being told this?". However, I had to ask that question a lot while I was reading the paper. That's because I wasn't told what to expect up front. Instead, the style was "here's an observation; here's what it means". This sounds good on paper, but it means I have to absorb all sorts of information in isolation, and then put it together. There's a simple fix to that: give us the whole story at the beginning. We do get a pretty good summary, but not until lines 316-9:

"In summary, our model suggests that rare but strong and common but weak connections play different roles in the propagation of activity: the former promotes reliable responses to single spikes, while the latter amplify spontaneous network activity and drive recurrent inhibition, effectively modulating the reliability of those responses."

It would help a huge amount to get a summary like that upfront -- but possibly expanded, because that's only part of the story. That way, for each of the various manipulations you made -- and there were a lot of them! -- it would have been easy to tell why. As it was, for most of them I had a hard time figuring that out, and I ended up getting pretty lost.

For what it's worth, my take on your results is as follows. As you point out on lines 102-4 "each model excitatory neuron connects to other neurons with a majority of weak synapses and about two connections sufficiently strong to bring the postsynaptic neuron from rest to near firing threshold". This implies a rapid growth in the number of spikes (by a factor of 2 per relevant timestep), with saturation caused by inhibition. This is an intrinsic feature of E-I networks (London et al. (your ref 2) showed this in rat barrel cortex). But there are two differences in the turtle cortex relative to the mammalian cortex: 1) about two connections from each excitatory neuron are strong enough so that a single presynaptic spike can trigger a postsynaptic spike, and 2) the firing rate was very low (about 0.1 Hz). The low firing rate is important; as it increased, repeatability dropped, especially for inhibitory neurons (Figure 2F). So maybe we should think of the weak connectivity as controlling the level of noise.

I'm not 100% sure this is correct, but it is what I extracted!

Of course, my rule isn't everybody's rule, so the extent to which you implement this is up to you. But in my view, the easier a paper is to read, the more impact it has.

Besides that, we have lots of comments, mainly about clarity and figures. They're more or less collated from the three reviewers, so they're not necessarily in a totally sensible order, although we did try. To reduce the trend toward longer and longer replies to reviewers, you do not need to reply to all of these. We'll leave it to your judgment which ones you do reply to. Maybe just the substantive ones you disagree with? They're all pretty minor, so as far as we're concerned you can just implement them (or at least the ones you agree with), and we'll be happy.

1. In Figure 1A, you should be clear about what the percentages mean. We _think_ they refer to connection probabilities, but that's not mentioned in the figure caption. Under that assumption, the connectivity (per neuron) is as follows:

E-E: 93,000 x 0.14 = 13,020

E-I: 7,000 x 0.46 = 3,220

I-E: 93,000 x 0.49 = 45,570

I-I: 7,000 x 0.26 = 1,820

These connectivity numbers should be in the paper since they're relevant. And 45,570 connections per neuron seems high -- is that consistent with experiments? Or are we doing something wrong?
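For concreteness, here is the arithmetic we did, as a short sketch. It assumes the percentages in Figure 1A are connection probabilities and that each entry gives the expected number of outgoing connections per presynaptic neuron (both assumptions are our reading, not confirmed by the caption):

```python
# Per-neuron connection counts, assuming the Figure 1A percentages are
# connection probabilities (our reading; please confirm in the caption).
populations = {"E": 93_000, "I": 7_000}
probabilities = {  # (pre, post) -> connection probability
    ("E", "E"): 0.14,
    ("E", "I"): 0.46,
    ("I", "E"): 0.49,
    ("I", "I"): 0.26,
}

# Expected outgoing connections per presynaptic neuron:
# (number of potential postsynaptic targets) x (connection probability)
connections = {
    pair: round(populations[pair[1]] * p) for pair, p in probabilities.items()
}

for (pre, post), n in connections.items():
    print(f"{pre}-{post}: {n:,}")
```

Under this reading, each inhibitory neuron would contact about 45,570 excitatory neurons, which is the number that strikes us as high.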

2. Figure 1D: a log scale would, we think, make the figure easier to read.

3. Figure 1H and I: we're guessing blue and red are E and I, respectively (since that was the color coding for the triangles in panel A). But to avoid any possibility of confusion, this should be stated in the figure caption (and/or on the plots).

4. Line 168-9: "In each simulation, we caused the trigger neuron to fire 100 action potentials at long, regular intervals". We suggest that you tell us the interval (which, from Methods, was 400 ms). We wondered what "long, regular intervals" meant when we read that.

5. You use a Poisson process as a null model. Since E-E networks tend to oscillate, we believe (although we're not 100% sure) that a Poisson process will underestimate variability. You might want to address this (assuming it's relevant, which we're not sure it is).

6. In Figure 2C, what's the y-axis? Do slight displacements indicate different spikes? This should be clear.

7. It would be really nice to show spike rasters after a trigger spike. Presumably, they won't show much of anything, but it would be important to know that -- it's possible that one can see a slight increase in firing rate.

8. Please define normalized entropy (line 177) in the main text, at least qualitatively. And in Methods, it should be defined quantitatively (currently the reader is referred to a paper).

9. We don't understand the left panel in Figure 2E.

10. Lines 257-60: "Interestingly, excitatory-to-inhibitory spike transfers are consistently shorter than their excitatory-to-excitatory counterparts, even at higher firing rates (Figure 3B inset), possibly reflecting the more depolarized state of inhibitory neurons (Figure 1H right)."

Did you mean Figure 1I, not Figure 1H?

11. It would be nice to expand on the implications of Figure 3E. If we understand things correctly (a big if), you're showing the connections between any two neurons in which the postsynaptic one fired within 100 ms of the presynaptic one. Is that correct? If so, it means there are a lot of random coincidences -- they could even happen for unconnected neurons -- which makes it hard for us to interpret what the plot means.

12. line 475: "(network and colors as in Figure 4A)". We couldn't figure out all the colors from Figure 4A.

13. Figures 6B and 6D seem inconsistent: Figure 6B shows that a&b together generate very few followers, whereas Figure 6D shows that a&b together generate a lot of followers. What are we missing?

14. lines 636-8: "The response to simultaneous activation of multiple triggers in our model suggests that sequences operate under excitatory/inhibitory balance, where local inhibition cancels excitatory signals (5) (Figure 6D, H)."

Why do Figures 6D and 6H imply that sentence? More explanation would be helpful.

15. In Methods, please include equations. At least some readers (including one of us) will want to know what they are, and the reader shouldn't have to reconstruct them from the explanations (especially since the definition of the exponential integrate-and-fire neuron probably isn't standard). Especially important is the average number of connections per neuron.

16. In table 1 it says that the synaptic conductance time constant is 1.103681 ms. Is it really that short? That seems strange, given the long membrane time constant. And why so many significant figures?

17. The result that strong sparse connections support sequential network activity is not particularly surprising. Perhaps more emphasis could be placed on the result that the weak, dense part of the network is necessary for supporting flexible sequential activation. Could a figure be added that shows the ideal ratio or range of ratios of strong to weak connections? Perhaps this is in the results already and just needs to be brought to the foreground.

18. The matching to the data from the turtle cortex is an excellent foundation for this work and grounds the model. Still, it might be nice to know how the results change as these parameters move around. In particular, it would be nice to know which aspects of the turtle's visual cortical network and cellular properties are necessary and sufficient parameters for the observed reliable sequence generation. All this is optional, so use your best judgment as to whether you want to include any of it. And some of it might simply be a point for the Results or Discussion.

a. If the ratio of E/I in the network is changed, do the results hold?

b. Are there other parameter changes that substantively affect the results? It seems like single-cell adaptation could be an important factor in pattern generation in the network.

c. The connection pattern in the turtle cortex might also be critical for these results. This isn't modeled explicitly in this work, but it could be an important feature for the experimental results that ground the paper. Does turtle cortex "look like" rodent cortex, or are some of the connections sparser, or do they fall off spatially faster or slower? It would be good to see a discussion of this in the paper.

d. In the model, connections are random but have a log-normal distribution. How does changing this affect the results and how well does that fit rodent cortical data?

19. The Results/Discussion seem to be missing a stronger take on what this kind of patterned activity might be useful for. A concrete example of a downstream computation that relies on the reliable combinatorial activation of sub-network sequences would be useful.

20. The gating results seem very exciting! It might be nice to highlight connections to the recent explosion of ML papers on gating and to put that part of the work more front and center in the abstract.

21. Line 183: "the number of excitatory followers far exceeds the number of inhibitory followers". This seems obvious in a network with 93% excitatory neurons. Are we missing something?

22. Line 187: Hz is a unit for oscillations and spiking activity is not oscillatory. Might we suggest spikes/s?

23. Figure 2E: The MEA square is an excellent example of the sort of thing that this reviewer cannot see when printed at the intended size. Please have mercy on older eyes.

24. Line 250-254: We struggled a bit to understand the concept of spike transfers. It seems like this is any link in any sequence where the presynaptic neuron is excitatory. Could you expand this explanation, please?

25. Line 265: "very few motifs lead to the activation of excitatory neurons". This confused us because in Line 183 it says "the number of excitatory followers far exceeds the number of inhibitory followers". We are clearly missing something important; please clarify.

26. Line 305: How strong do the connections have to be? We're guessing strong enough so a single presynaptic spike causes a postsynaptic spike, but this should be stated.

27. Line 316-319: We feel that the role for strong connections is more clearly demonstrated than the role for weak connections. Obviously, if there are only a very few strong connections then the neurons will have a lower mean membrane potential and will be less responsive to spiking input. Is it that simple, or is there more to it?

28. Figure 3H: We lack the visual acuity to interpret this plot. Perhaps an alternative representation would be clearer?

29. Line 366: two "due to"s in this sentence.

30. Figure 4E-G: there are numbers on these plots that we definitely can't read, and the connectivity in 4G is not only tiny but also a very pale grey in places.

31. Figure 4E, F: this is a measured frequency rather than a probability, right?

32. Line 618: landmark -> hallmark.

33. Line 651: it seems to us that your sequence structures have got a lot in common with those reported in your ref 34 (Polychronous groups, Izhikevich), which also depends on very strong synapses. Can you expand this section with some comparison to that study?

34. Line 731: "may not be linked to behavior". Well, it's ex vivo, so we guess certainly not linked to behavior. Perhaps you can rephrase to make your meaning clearer?

35. Line 818: "variable number of connections". It would be nice to know the statistics of this and how it compares with biology.

36. Line 822: Nest -> NEST. (And we appreciate you stating and citing the version used!)

https://doi.org/10.7554/eLife.79928.sa1

Author response

Essential revisions:

In a surprising display of unanimity, all three reviewers very much liked the paper, and all three reviewers had one main comment, which is related to the exceptionally low firing rates in the ex-vivo turtle cortex. Our question is: would you see repeated patterns if the firing rate were higher? Extrapolation of your simulations suggests not:

In the "Connectivity" section, starting on line 688, you ask the question "could strong connections underlie cortical sequences more generally, and are sequences an ancient feature of cortical computation?" You don't exactly answer that, but doesn't Figure 2F, which shows that the number of followers falls off rapidly with firing rates, tell us that the answer is likely to be no?

We hesitate to ask for more simulations, but without them, we're not sure this work can be extrapolated beyond the ex-vivo turtle prep, as you appear to have done:

Line 212: "Our simulations further predict that sequences can occur under in vivo levels (high firing rates) of spontaneous activity".

Lines 594-5: "Our model produces reliable sequences from single spikes as experimentally measured ex vivo and predicts their existence in vivo, where firing rates are higher, with marked differences between excitatory and inhibitory neurons."

We looked at your Ref. 30, and Figure 8C of that reference shows firing rates in an awake turtle prep ranging from 0 to 5 spikes/s following a visual stimulus, with a mean of about 2.5 (and a max of 20). This is much higher than the 0.02-0.09 spikes/s you used in your high firing rate simulations (from line 191). We might be misreading Ref. 30, but in any case, your firing rates are 1-2 orders of magnitude lower than what's found in the mammalian cortex.

Along the same lines, on lines 468-73 you say: "In sum, we identified gate neurons in different sub-networks whose activation is critical for the activation of the sub-network. External single spikes can control the state of gate neurons and thus halt or facilitate the activation of individual sub-networks. The effect of these spikes depends on their timing relative to the sequence trigger. Finally, the activation of a sub-network may influence other sub-networks via recurrent connectivity leading to complex and non-reciprocal interactions."

We're not sure this result will be robust in a noisier network at higher rates; that caveat should be made clear.

We address the reviewers' questions separately: first, whether simulations based on ex vivo turtle measurements can make predictions about in vivo turtle activity; second, whether these predictions can inform our understanding of the in vivo mammalian cortex.

In vivo turtle firing rates

We have re-analyzed some of our simulated networks and provide further details on their firing rates in a new Figure 2 —figure supplement 1, which we include and describe below and in our Results section (lines 181-200). Here we clarify the type of experimental data that guided our modeling, the activity our model produces, the limitations of experimental estimates, and the predictions of our model.

First, we clarify the experimental context for the ranges of firing rates generated in our simulations. The estimates of the mean firing rate for the in vivo turtle cortex ([0.02, 0.09] spk/s) that we used were based on the data from Fournier et al., 2018 (our previous Ref. 30) but re-analyzed for and first reported in Hemberger et al., 2019 (Figure 2 —figure supplement 1A). We have updated our citation to direct the reader to Hemberger et al., 2019 (lines 212-213) as the correct source. For that analysis, mean firing rates were extracted from all spike-sorted units detected over multiple 2-3 hour sessions in 4 lightly anesthetized turtles. Importantly, these firing rates were estimated from periods of spontaneous activity, not from periods of visual stimulation, as in Figure 8C of Fournier et al., 2018, mentioned by the reviewers. Spontaneous activity provides an estimate of the intrinsic variability of network activity in the behaving turtle without structured input. In our manuscript, we use a model to study network responses to new, externally-triggered single spikes. Hence, we focused on capturing the mean firing rates from in vivo spontaneous activity as it provides a clear baseline of uncorrelated variability for new incoming spikes, which would occur with the onset of visual stimulation. Our model reproduces this spontaneous activity through a random current (Figure 1G-I), resulting in network activity that is fully uncorrelated with the triggered spikes. To investigate the role of network firing rates in sequence propagation, we explored a range of network mean firing rates, including those compatible with the spontaneous in vivo data estimates reported in Hemberger et al., 2019 (Figure 2 —figure supplement 1A).

Second, our model captures the low average firing rates of the turtle cortex, but it also produces fluctuations of higher instantaneous firing rates (Figure 2 —figure supplement 1CD). These fluctuations are also observed in the turtle cortex, often in the form of waves (Prechtl et al., 1997; Wright and Wessel, 2017). The low averages result from the strongly adaptive network dynamics and hours-long recordings (Fournier et al., 2018; Hemberger et al., 2019). In our model, convolving spike trains with a 250 ms Gaussian window (as in Fournier et al., 2018) yields peak instantaneous firing rates on the same order of magnitude as the experiments mentioned by the reviewers (maximum firing rate per neuron, averaged across all neurons: 0.53-1.36 exc. spk/s, 1.7-3.7 inh. spk/s). Note, however, that the rates in Figure 8C of Fournier et al., 2018 are averages from repeated visual stimulation, while peak rates in our model result from random fluctuations representing spontaneous activity.

Third, since it is simulated, our cortical network model allows us to access every neuron. In contrast, experimental estimates are biased by which neurons can be accessed, often those with the highest rates. For example, Fournier et al., 2018 and Hemberger et al., 2019 used extracellular recordings where spike sorting necessarily excludes most non-responding neurons from the average, potentially yielding artificially high mean rates (Levenstein and Okun, 2022; Shoham et al., 2006). For ex vivo recordings, most recorded neurons were inhibitory due to the position of the MEA. The electrical distance of excitatory neurons to the MEA limited access to the excitatory population. This bias towards inhibitory neurons in the experimental characterization of sequences was precisely one of the motivations for our modeling effort (Introduction, lines 54-56). By contrast, in our model networks, we match the density of a 2x2 mm slab of tissue and estimate the average firing rates of all neurons (100k). We find that the excitatory population in our model displays much lower firing than the inhibitory population, with most excitatory neurons producing none-to-few spikes (Figure 2 —figure supplement 1E). As a simple test: removing neurons with fewer than 10 spikes increases average firing rates 10-fold (from 0.02-0.13 to 0.22-1.04 spk/s) and pushes peak rates even further (to 2.2-2.3 exc. spk/s, 2.3-3.7 inh. spk/s), suggesting we might, in fact, be studying rates that are too high relative to the experimental estimates, which are constrained by subsampling and spike sorting.

Finally, we do find model sequences even at high rates, and we interpret them as a prediction for turtle experiments. Indeed, Figure 2F shows a rapid fall in the number of followers at increased rates. Still, this number remains on the order of tens of followers, even for firing rates well above estimates of in vivo spontaneous activity. We have added an example of such a sequence in Figure 2 —figure supplement 1B (mean rate 0.13 spk/s). This is a theoretical prediction specific to the turtle cortex that is experimentally testable. We now make this more explicit in our main text (lines 230, 232-240, 511, 644):

"In summary, our biologically-constrained model produces repeatable firing sequences across groups of neurons in response to single spikes in randomly chosen excitatory neurons with properties very similar to those observed in ex vivo experiments (low firing rates) that constrained the model network. Our simulations further provide an experimentally testable prediction: that sequences may occur under in vivo levels of spontaneous firing rate in the turtle cortex. In these conditions of higher spontaneous activity, the activation sequences are mainly composed of excitatory followers, whereas inhibitory followers produce less reliable and temporally jittered responses."

“Our model produces reliable sequences from single spikes as experimentally measured ex vivo and predicts their existence in vivo in the turtle cortex, where firing rates are higher, with marked differences between excitatory and inhibitory neurons.”

“Consequently, we predict that followers should be detectable in experiments in vivo in the turtle cortex but mainly within the pyramidal layer.”

“In sum, we identified gate neurons in different sub-networks whose activation is critical for the activation of the sub-network in a model based on the turtle cortex. External single spikes can control the state of gate neurons and thus halt or facilitate the activation of individual sub-networks. The effect of these spikes depends on their timing relative to the sequence trigger. Finally, the activation of a sub-network may influence other sub-networks via recurrent connectivity leading to complex and non-reciprocal interactions.”

In summary, our model operates within an electrophysiologically relevant regime of spontaneous activity in the turtle cortex and can provide an experimentally testable prediction about the presence of spiking sequences in vivo. In addition to the edits above, we include a short description of the firing rates of our model in a new subsection of our Methods section (lines 956-973):

"Simulation firing rates

We focused on capturing the mean firing rates from ex vivo and in vivo spontaneous activity ([0, 0.05] and [0.02, 0.09] spk/s, Hemberger et al., 2019) because they provide a clear baseline of uncorrelated variability for new incoming spikes. To compute the mean firing rate of each simulation, we took the average during the baseline periods (100 ms before each spike injection). Our averages include all neurons (excitatory and inhibitory) in a network equivalent to a 2x2 mm slab of tissue (Figure 1A). Since experimental estimates are limited to those high-rate neurons that can be accessed and spike-sorted in extracellular recordings (Fournier et al., 2018; Hemberger et al., 2019), comparing them to our model is not trivial. Along those lines, most excitatory neurons in our model produced none-to-few spikes (Figure 2 —figure supplement 1E), and excluding them increases mean rates 10-fold (to 0.22-1.04 spk/s).

In addition to low average firing rates, our model also produces fluctuations of higher instantaneous firing rates, reminiscent of the waves observed experimentally (Prechtl et al., 1997; Shew et al., 2015; Figure 2 —figure supplement 1CD). Convolving spike trains in our simulations with a 250 ms Gaussian window yields peak instantaneous firing rates one order of magnitude higher than their average (maximum firing rate per neuron, averaged across all neurons: 0.53-1.36 exc. spk/s, 1.7-3.7 inh. spk/s), comparable to those observed in vivo under visual stimulation (Fournier et al., 2018).”
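As a rough illustration of this smoothing step, here is a sketch of the convolution. The bin size, kernel truncation, and the example spike train are ours for illustration only; the actual analysis applies the same convolution to the full simulation spike trains:

```python
import numpy as np

def instantaneous_rate(spike_times_ms, duration_ms, sigma_ms=250.0, dt_ms=1.0):
    """Instantaneous firing rate (spk/s) from a spike train, estimated by
    convolving a binned spike train with a Gaussian kernel (truncated at
    +/- 3 sigma and normalized to unit area)."""
    binned = np.zeros(int(duration_ms / dt_ms))
    binned[(np.asarray(spike_times_ms) / dt_ms).astype(int)] = 1.0
    t = np.arange(-3 * sigma_ms, 3 * sigma_ms + dt_ms, dt_ms)
    kernel = np.exp(-t**2 / (2 * sigma_ms**2))
    kernel /= kernel.sum() * dt_ms          # unit area, in units of 1/ms
    return np.convolve(binned, kernel, mode="same") * 1000.0  # 1/ms -> spk/s

# Example: 5 spikes in 10 s -> mean rate of 0.5 spk/s, but the two spikes
# 500 ms apart produce a transient peak well above the mean.
rate = instantaneous_rate([1000, 2000, 2500, 6000, 9000], duration_ms=10_000)
```

This toy example reproduces the qualitative point above: the peak instantaneous rate sits well above the long-run mean.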

In vivo mammalian cortex

Next, we investigated average firing rates for mammalian cortices and clarified the limits of extrapolating our results to mammals.

First, while certainly lower than in mammals, spontaneous firing rates in the turtle cortex might not be as different as the reviewers propose. Architectonically and evolutionarily, the turtle cortex is most comparable to mammalian 3-layered paleo- and archi-cortices (piriform and hippocampal formation, Fournier et al., 2015). In hippocampus CA1, firing rates of pyramidal cells are known to span from 0.001 to 10 spk/s and follow heavily skewed distributions, with >70% of neurons showing mean spontaneous rates of <1 spk/s, even during active running (Mizuseki and Buzsáki, 2013). A recent re-analysis of data from several regions of the rodent forebrain suggests that neurons display a “ground state” firing mode characterized by irregular spiking at very low rates, with CA1 and piriform cortex often showing 0.1-1 spk/s (see Figures 2C and 4D of Levenstein et al., 2021).

Low spontaneous firing rates (<1 spk/s) have also been reported across neocortical areas (see reviews by Barth and Poulet, 2012; Shoham et al., 2006), sparking interest in theories of sparse-firing operation of the cortex (Wolfe et al., 2010). A recent estimate of firing rates based on multi-area Neuropixels data suggests that 40% of the neurons across a column of in vivo mouse sensory cortex fire at <1 spk/s (Levenstein and Okun, 2022). Finally, all the caveats regarding mean firing rate estimation described above for turtle studies apply equally to the mammalian literature, including differences between evoked and spontaneous activity and the effects of biased sampling that result from short recordings, electrode location, and spike sorting.

Second, our manuscript only makes predictions for the turtle cortex at the spontaneous firing rate regimes that have been experimentally reported for it. We discuss similarities with other cortices in the hope that our work may guide future research in those systems. Indeed, long-tailed distributions of strong connections and spiking sequences, as observed in the turtle cortex and our model, have also been reported for in vivo mammalian cortices, from mice to humans (Discussion, lines 756-768). However, our model is tightly constrained by turtle measurements, as we now clearly state in the manuscript. Making concrete quantitative predictions about sequences in the mammalian cortex would require different species-specific models, which are beyond the scope or intention of our manuscript. In addition to the text changes described in the previous section, we now address this explicitly in our discussion (lines 768-776):

“Thus, could strong connections underlie cortical sequences more generally, and are sequences an ancient feature of cortical computation? We based our model on data from the turtle cortex, yet other species may introduce variations that might be relevant, including differences in operating firing rates or neuronal populations. For instance, the reliability of sequence propagation may depend on connectivity features we did not explore here, such as the presence of different connectivity motifs or differences in the distance-, layer- and population-specific connection probabilities (Jiang et al., 2015; Song et al., 2005). Further modeling and experimental work on the cortical responses of diverse species will be needed to provide a comprehensive comparative perspective (Laurent, 2020).”

Sequences at spontaneous rates beyond turtle in vivo

Nonetheless, we have taken this opportunity to explore sequence propagation in simulations at higher rates. We describe these in Author response image 1 (below). This initial exploration suggests that the mechanism we describe applies at spontaneous rates much higher than the in vivo turtle estimates, but it also reveals limitations of follower detection in that regime; addressing these is beyond the scope of our manuscript.

We re-ran the example simulations from Figure 2 —figure supplement 1 under increasingly high mean currents (200-2000 pA, Author response image 1A). We found that mean inputs higher than these caused synchronous firing, while increasing the input variance did not strongly affect the firing rates. The resulting mean rate reached up to 2.28 spk/s, with inhibitory neurons firing at rates nearly two orders of magnitude higher than excitatory neurons (Author response image 1B, mean rate: 0.36 exc spk/s, 27.75 inh spk/s; mean peak rate: 2.51 exc. spk/s, 33.71 inh. spk/s). The rates of excitatory neurons follow a high-variance, skewed distribution, which is no longer well captured by our Poisson null model (Author response image 1C; compare to Figure 2 —figure supplement 1E) and which results in a wide distribution of firing rate modulation (∆FR, the difference in firing rate before and after spike induction, Author response image 1E; compare to Figure 2B). As a consequence, our statistical test could no longer differentiate followers from non-followers. Therefore, to observe sequence propagation in this extended firing rate regime, we tracked the neurons identified as followers in our original simulations (Author response image 1D). Followers continued to activate following the induced spike, even under the strongest drive, but did so very sparsely (the average follower activated in 15% of trials). Consequently, the original sequences never repeated in full but presented, across trials, the same type of sub-sequence failures we studied in Figures 4-6. These failures are consistent with our original analysis of recurrent interactions, where we estimated that spikes from 90% of the neurons in the network would produce lateral inhibition onto a given subsequence (Figure 6H). Given the large number of ongoing spontaneous excitatory spikes in the network (around 3,400 spikes in the 100 ms surrounding any induced spike), it is not surprising that individual subsequences should be frequently halted.

Our model suggests that finding sequences at high firing rates would be greatly aided by detecting followers at lower firing rates. This suggestion is relevant for studying sequences in experimental setups. For instance, the low spontaneous rates and strong adaptive features of the turtle cortex may make it especially suited to the study of spiking sequences. Similarly, in mammals, repeating sequences can be detected in the rat somatosensory cortex in vivo during the transition from low-firing (DOWN) to high-firing (UP) states, but not within the high-firing states (Luczak et al., 2007).

Author response image 1
Sequence evolution at super-high firing rates.

(A) Input-output curve of the network, including new simulations at higher firing rates. Original simulations explored the area in purple (same as Figure 2 —figure supplement 1). New simulations as colored dots. Input standard deviation for new simulations: 110 pA. (B) Spike raster of a simulation with 2.28 spk/s mean rate (red dot in A). All spikes for all neurons in the network for 1 second in the middle of the simulation. Arrowheads indicate the initiation of a trial. Purple circles highlight injected spikes. Blue: exc.; pink: inh. (C) Distribution of single-cell mean firing rate for an example simulation with mean network rate of 2.28 spk/s (red dot in A, mean exc. rate of 0.36 spk/s). Only excitatory neurons are shown. Dashed line indicates a Poisson fit. (D) Evolution of sequences as firing rates increase above turtle in vivo estimates. Each row corresponds to one of the three original simulations in Figure 2 —figure supplement 1 (s1, s2, s3). Left column: example trials from original simulations. Other columns: example trials of each simulation re-run at an increased mean firing rate (indicated at the top; colored dots as in A). Only spikes from excitatory followers are shown. Purple: trigger spike. Within each row, we show spikes from the same neurons with the same sorting. (E) Distribution of single-cell firing rate modulation (∆FR) for an example simulation with 2.28 spk/s mean rate (red dot in A). Yellow: ∆FR for excitatory followers as detected in original simulations. Blue: other excitatory neurons in the network.

On a secondary note, which relates to clarity, we will admit that there was somewhat of a split. Reviewer 2 found the paper "exceptionally clear and well-written", whereas Reviewer 1 found the paper often confusing. Here is a summary of what Reviewer 1 said:

My number one rule of writing is that the reader should never have to ask "why am I being told this?". However, I had to ask that question a lot while I was reading the paper. That's because I wasn't told what to expect up front. Instead, the style was "here's an observation; here's what it means". This sounds good on paper, but it means I have to absorb all sorts of information in isolation, and then put it together. There's a simple fix to that: give us the whole story at the beginning. We do get a pretty good summary, but not until lines 316-9:

"In summary, our model suggests that rare but strong and common but weak connections play different roles in the propagation of activity: the former promotes reliable responses to single spikes, while the latter amplify spontaneous network activity and drive recurrent inhibition, effectively modulating the reliability of those responses."

It would help a huge amount to get a summary like that -- but possibly expanded, because that's only part of the story -- upfront. That way, for each of the various manipulations you made -- and there were a lot of them! -- it would have been easy to tell why. As it was, for most of them I had a hard time figuring that out, and I ended up getting pretty lost.

For what it's worth, my take on your results is as follows. As you point out on lines 102-4 "each model excitatory neuron connects to other neurons with a majority of weak synapses and about two connections sufficiently strong to bring the postsynaptic neuron from rest to near firing threshold". This implies a rapid growth in the number of spikes (by a factor of 2 per relevant timestep), with saturation caused by inhibition. This is an intrinsic feature of E-I networks (London et al. (your ref 2) showed this in rat barrel cortex). But there are two differences in the turtle cortex relative to the mammalian cortex: 1) about two connections from each excitatory neuron are strong enough so that a single presynaptic spike can trigger a postsynaptic spike, and 2) the firing rate was very low (about 0.1 Hz). The low firing rate is important; as it increased, repeatability dropped, especially for inhibitory neurons (Figure 2F). So maybe we should think of the weak connectivity as controlling the level of noise.

I'm not 100% sure this is correct, but it is what I extracted!

Of course, my rule isn't everybody's rule, so the extent to which you implement this is up to you. But in my view, the easier a paper is to read, the more impact it has.

We thank the reviewers for their constructive suggestions to improve the clarity and impact of our manuscript. We have added short summary sentences at the beginning of each of our result sections to guide the reader, as suggested by Reviewer 1 (lines 278-280, 378-381, 454-458, 543-546):

“To better understand the mechanisms behind follower activation, we examined how spikes propagate through our model networks. We found that single strong sparse connections drive reliable excitatory responses, while convergent weak connections control the level of spontaneous network activity and drive inhibition.”

“To better understand the regulation of sequential neuronal activations, we examined when and how sequences fail to propagate. We found that sequences could fail partially, with different sections of the same sequence failing to propagate independently of one another.”

“We hypothesized that the connections between sub-networks could be critically important for sequence propagation. Indeed, the excitability and state of an entry point to a sub-network should affect sub-network activation, hence, we refer to these entry point neurons as “gates”. By injecting a single extra spike to these gates, we could control the activation of sub-sequences and thus the path of propagation of the sequence.”

“To examine how sequences interact, we studied a new set of simulations where multiple trigger neurons were activated simultaneously. We found that sequences from coactivated trigger neurons often interact, but they do so reliably, resulting in a combinatorial definition of followers.”

The take on our results by Reviewer 1, although mainly correct, lacks a key element: in our model, unlike in London et al., 2010, the network neurons showing a “rapid growth in spikes” in response to an induced spike are always the same ones (which we call followers). Thus, the growth of activity is channeled within the network and results in repeatable sequences of activations that can be reliably modified (which we interpret as routing).

Reviewer 1 also states that the number of strong connections is an important difference between the turtle and mammalian cortex. Long-tailed distributions of synaptic strengths are reported for mammalian cortices (Discussion, lines 756-768), but, to our knowledge, the exact number of strong connections per neuron is unknown. Even the definition of “strong” will vary when considering differences in network and single-cell properties (for example, adaptation currents, resting and threshold potentials). Thus, it is hard to establish how much of a difference there is between mammalian cortices and our model’s estimate for the turtle dorsal cortex.

Besides that, we have lots of comments, mainly about clarity and figures. They're more or less collated from the three reviewers, so they're not necessarily in a totally sensible order, although we did try. To reduce the trend toward longer and longer replies to reviewers, you do not need to reply to all of these. We'll leave it to your judgment which ones you do reply to. Maybe just the substantive ones you disagree with? They're all pretty minor, so as far as we're concerned you can just implement them (or at least the ones you agree with), and we'll be happy.

1. In Figure 1A, you should be clear about what the percentages mean. We _think_ they refer to connection probabilities, but that's not mentioned in the figure caption. Under that assumption, the connectivity (per neuron) is as follows:

E-E: 93,000 x 0.14 = 13,020

E-I: 7,000 x 0.46 = 3,220

I-E: 93,000 x 0.49 = 45,570

I-I: 7,000 x 0.26 = 1,820

These connectivity numbers should be in the paper since they're relevant. And 45,570 connections per neuron seems high -- is that consistent with experiments? Or are we doing something wrong?

The percentages are estimated local connection probabilities. Our connection probabilities are distance-dependent with a Gaussian decay profile (given in Figure 1C). From our main text:

“Because our estimates of connection probabilities (Figure 1A) are derived from paired-patch recordings of nearby neurons, we scaled the decay profile to match population-specific probabilities in any given disc of 200μm radius.”

Note that our connectivity degrees are not fixed. We updated our Methods section to make this more explicit (lines 920-924):

“The degrees of connectivity were not fixed, so inhomogeneities in neuron location and randomness of connectivity meant that neurons received a variable number of connections.

For instance, the distributions of numbers of E-E weak and strong connections are given in Figure 1E.”
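As a rough illustration, the reviewers' arithmetic corresponds to expected out-degrees under the simplifying assumption of uniform connection probabilities; in our model, probabilities decay with distance and realized degrees vary around these values:

```python
# Expected out-degrees under uniform connection probabilities (a
# simplification: the model uses distance-dependent probabilities,
# so realized degrees vary around these values).
N = {"E": 93_000, "I": 7_000}  # population sizes

# Connection probabilities (presynaptic -> postsynaptic), as read off Figure 1A
p = {("E", "E"): 0.14, ("E", "I"): 0.46, ("I", "E"): 0.49, ("I", "I"): 0.26}

# Expected outgoing connections per neuron = target population size x probability
out_deg = {
    f"{pre}->{post}": round(N[post] * prob) for (pre, post), prob in p.items()
}
print(out_deg)  # {'E->E': 13020, 'E->I': 3220, 'I->E': 45570, 'I->I': 1820}
```

These match the reviewers' numbers, with the large I->E value being the out-degree of a single inhibitory neuron onto the much larger excitatory population.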

We have adjusted our schematic Figure 1A to highlight the locality of our connectivity and changed its caption to include the average number of connections:

2. Figure 1D: a log scale would, we think, make the figure easier to read.

We believe the linear scale shows more clearly the difference between the very strong and sparse connections at the far end of the tail (above orange arrowhead) compared to the weaker and dense connections at the main body (below blue arrowhead). We add a log-scale version in Author response image 2.

Author response image 2
Log-scale distribution of connection strengths.

Top: histogram of conductances for all excitatory-to-excitatory connections in an example model network. Bottom: histogram of synaptic conductances estimated from EPSP amplitudes experimentally obtained from recorded pairs of connected neurons.

3. Figure 1H and I: we're guessing blue and red are E and I, respectively (since that was the color coding for the triangles in panel A). But to avoid any possibility of confusion, this should be stated in the figure caption (and/or on the plots).

The reviewers are correct. We have added this to Figure 1H-I captions.

4. Line 168-9: "In each simulation, we caused the trigger neuron to fire 100 action potentials at long, regular intervals". We suggest that you tell us the interval (which, from Methods, was 400 ms). We wondered what "long, regular intervals" meant when we read that.

We have updated this line to include the 400 ms period.

5. You use a Poisson process as a null model. Since E-E networks tend to oscillate, we believe (although we're not 100% sure), that a Poisson process will underestimate variability. You might want to address this (assuming it's relevant, which we're not sure it is).

Within the spontaneous firing rate regimes that we consider, we find that a Poisson process captures well the variance and shape of single-cell firing rates, even at very high rates (see new Figure 2—figure supplement 1E). The reviewers are correct that a Poisson approximation fails to capture the variance at much higher firing rates than those observed in the turtle cortex (Author response image 1C). We refer the reviewers to our response (above) to their main comment about mammalian firing rates.

6. In Figure 2C, what's the y-axis? Do slight displacements indicate different spikes? This should be clear.

Vertical displacement indicates different follower neurons. We have labeled the y-axis in Figure 2C and specified this in the caption.

7. It would be really nice to show spike rasters after a trigger spike. Presumably, they won't show much of anything, but it would be important to know that -- it's possible that one can see a slight increase in firing rate.

We now show spike rasters including all neurons for the two simulations shown in Figure 2C in our new Figure 2—figure supplement 1, which we include and describe in our response to the reviewers’ main comment about firing rates (above). The average rates in our simulations are low, but given the size of our network (100k neurons), they translate into thousands to millions of spikes per simulation. Note that there are fluctuations of network activity, but these are not aligned with the injected spikes in our rasters (Figure 2—figure supplement 1C) and instead reflect random spontaneous activity.

The reviewers are correct that there is, on average, an increase in the firing rate as a consequence of the injected spike, but this increase is not visible to the eye in the spike rasters. We measure the increase of single-cell firing rate (∆FR) to detect followers and find that it is centered around zero for the majority of the network (Figure 2B). The average ∆FR across all neurons (including followers) for all our simulations is between 0.003-0.005 spk/s.
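For concreteness, one way to compute such a firing-rate modulation is to compare a cell's rate inside fixed post-trigger windows against its rate elsewhere. This is a sketch only; the window length and the exact definition used in the paper may differ:

```python
import numpy as np

def delta_fr(spike_times, trigger_times, total_time, window=100.0):
    """Firing-rate modulation (sketch): rate inside post-trigger windows
    minus rate outside them. Times in ms; returns spk/s."""
    spike_times = np.asarray(spike_times, dtype=float)
    in_window = np.zeros(len(spike_times), dtype=bool)
    for t in trigger_times:
        in_window |= (spike_times >= t) & (spike_times < t + window)
    t_in = len(trigger_times) * window        # total time inside windows (ms)
    t_out = total_time - t_in                 # remaining time (ms)
    rate_in = 1000.0 * in_window.sum() / t_in
    rate_out = 1000.0 * (~in_window).sum() / t_out
    return rate_in - rate_out

# A cell firing only after triggers shows a positive ∆FR
print(delta_fr([10, 20, 1010], trigger_times=[0, 1000], total_time=2000.0))  # prints 15.0
```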

8. Please define normalized entropy (line 177) in the main text, at least qualitatively. And in Methods, it should be defined quantitatively (currently the reader is referred to a paper).

We have updated this line in the main text to read (lines 194-198):

“When ordering the followers in the model network by activation delay from the trigger neuron spike, we observed reliable spike sequences as seen in experiments (Figure 2C). We used an entropy-based measure to quantify the variability of follower identity per rank in a sequence and observed similar results as in the experiments, with follower ordering being most predictable in the first four ranks of the sequence (see Methods, Figure 2D).” And in our methods (lines 1018-1031):

“To compare the reliability of the order of follower activation to that seen in experiments, we used the same entropy-based measure as used first in experiments (5) (Figure 2D). For every trial, followers were ordered by the time of their first spike, and we calculated the entropy of follower identity for each rank:

H_k = −Σ_{i=1..n} p_ik log2(p_ik)

where n is the number of followers and p_ik is the frequency with which follower i fired in rank k. The entropy H_k was normalized by the entropy of a uniform distribution over n followers:

H_u = −Σ_{i=1..n} (1/n) log2(1/n) = log2(n)

A value of H_k = 0 indicates that the same neuron fired in rank k in all trials. Note that this measure is especially sensitive to activations or failures of single followers early in a trial since they affect the ranks of all subsequent followers in that same trial. Due to our exploration of different firing rate levels, we often observed sequence failure (Figure 4) and numbers of followers (Figure 2F) much higher than the number of trials (100), both factors that limited our capacity to estimate order entropy accurately. Consequently, we limited our estimates to simulations with at least as many trials as followers and to trials where at least 25% of followers were present.”
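For readers who prefer code, the rank-entropy measure can be sketched in a few lines. This is a simplified illustration; the paper's handling of missing followers and trial filtering is described above:

```python
import numpy as np

def rank_entropy(trials):
    """Normalized entropy of follower identity per rank (sketch).

    `trials` is a list of trials, each an ordered list of follower ids
    (sorted by first-spike time). Returns one normalized entropy per rank;
    0 means the same follower always occupied that rank.
    """
    n = len({f for trial in trials for f in trial})  # number of followers
    h_uniform = np.log2(n)                           # entropy of a uniform distribution
    n_ranks = max(len(t) for t in trials)
    entropies = []
    for k in range(n_ranks):
        ids = [t[k] for t in trials if len(t) > k]
        _, counts = np.unique(ids, return_counts=True)
        p = counts / counts.sum()                    # frequency p_ik over trials
        entropies.append(float(-(p * np.log2(p)).sum() / h_uniform))
    return entropies

# Follower 1 always fires first, so entropy at rank 0 is 0
print(rank_entropy([[1, 2, 3], [1, 3, 2], [1, 2, 3]]))
```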

9. We don't understand the left panel in Figure 2E.

We reproduced the analysis of the center-of-mass of follower activations as observed experimentally (Hemberger et al., 2019, Figure 5A). It indicates that follower activations spread away from the source neuron.

For clarity, we have updated our Figure 2E and its caption:

“Figure 2E. Left: Spatial evolution of the center-of-mass of follower activations during the first 100 ms of the first sequence in C. Trigger neuron in purple outline. Exc followers in blue outlines.”

And added to our Methods (lines 1032-1037):

“Spatial center of mass

To visualize the spatial propagation of sequences, we calculated the center-of-mass of follower activations over time, similar to the experimental data (see Figure 5A in Hemberger et al., 2019). We took the average follower XY location for those followers activating in a sliding window of 5 ms. The resulting X and Y traces were smoothed using a Gaussian window of 15 ms.”
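A minimal sketch of this center-of-mass analysis follows. Windows without spikes are bridged by linear interpolation, which is our assumption here; the original analysis may handle them differently:

```python
import numpy as np

def center_of_mass(times, xy, t_max=100.0, window=5.0, sigma=15.0, dt=1.0):
    """Sliding-window center-of-mass of follower activations (sketch).

    times: spike times (ms); xy: (n, 2) follower locations, one per spike.
    Averages XY positions of spikes in each `window`-ms window, then smooths
    the X and Y traces with a Gaussian window of width `sigma` ms.
    """
    times = np.asarray(times, dtype=float)
    xy = np.asarray(xy, dtype=float)
    t_grid = np.arange(0.0, t_max, dt)
    com = np.full((len(t_grid), 2), np.nan)
    for i, t in enumerate(t_grid):
        mask = (times >= t) & (times < t + window)
        if mask.any():
            com[i] = xy[mask].mean(axis=0)
    # bridge empty windows by linear interpolation (our assumption)
    for d in range(2):
        valid = ~np.isnan(com[:, d])
        com[:, d] = np.interp(t_grid, t_grid[valid], com[valid, d])
    # Gaussian smoothing of the X and Y traces
    k = np.exp(-0.5 * (np.arange(-3 * sigma, 3 * sigma + dt, dt) / sigma) ** 2)
    k /= k.sum()
    smoothed = np.column_stack(
        [np.convolve(com[:, d], k, mode="same") for d in range(2)]
    )
    return t_grid, smoothed
```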

10. Lines 257-60: "Interestingly, excitatory-to-inhibitory spike transfers are consistently shorter than their excitatory-to-excitatory counterparts, even at higher firing rates (Figure 3B inset), possibly reflecting the more depolarized state of inhibitory neurons (Figure 1H right)."

Did you mean Figure 1I, not Figure 1H?

The reviewer is correct. Apologies. This was a typo which we have now corrected.

11. It would be nice to expand on the implications of Figure 3E. If we understand things correctly (a big if), you're showing the connections between any two neurons in which the postsynaptic one fired within 100 ms of the presynaptic one. Is that correct? If so, it means there are a lot of random coincidences -- that could even happen for un-connected neurons. Which makes it hard for us to interpret what the plot means.

The reviewers understand Figure 3E correctly, with the caveat that we consider only neurons that share connections, so we do not quantify coincidences of un-connected neurons. See also our updated description of a spike transfer in our response to reviewers’ comment 24.

Figure 3E (weights of connections involved in spike transfers) can be understood as a sampling, with replacement, from Figure 1D (weights of all connections in the network). Deviations in the shape of the distribution thus indicate connection strengths that are traversed more frequently than one would expect from purely random sampling. We now write in the main text (lines 308-315):

“We found that the strength distribution of those connections underlying spike transfers matches the shape of the full connectivity, with a peak of very weak connections followed by a long tail (as in Figure 1D). Interestingly, the distribution of excitatory-to-excitatory spike transfers displays a secondary peak indicating an over-representation of strong connections (Figure 3E top inset). By contrast, the absence of this secondary peak and the much higher primary peak (~1.5M) for excitatory-to-inhibitory spike transfers suggest that inhibitory spikes are primarily the result of weak convergence (Figure 3E bottom).”

12. line 475: "(network and colors as in Figure 4A)". We couldn't figure out all the colors from Figure 4A.

The reference should be to Figure 4B. Apologies. This was a typo which we have now corrected.

13. Figures 6B and 6D seem inconsistent: Figure 6B shows that a&b together generate very few followers, whereas Figure 6D shows that a&b together generate a lot of followers. What are we missing?

In Figure 6B, “a&b” indicates neurons that are followers to both a in isolation and to b in isolation. It shows an almost empty intersection between the set of followers to a and the set of followers to b. In Figure 6D we consider followers to the coactivation of both neurons at once, which we denote by “ab”.

To clarify this, we have added text labels to our schematics in Figure 6A and C.

Extract from Figure 6

A. Schematic of follower classes as a function of trigger neuron (purple: follower to a; red: follower to b; blue: follower to both a and b).

C. Schematic of follower classes as a function of trigger: single trigger neuron (a or b) or simultaneous coactivation (ab). Blue, purple, and red, as in A.

14. lines 636-8: "The response to simultaneous activation of multiple triggers in our model suggests that sequences operate under excitatory/inhibitory balance, where local inhibition cancels excitatory signals (5) (Figure 6D, H)."

Why do Figures 6D and 6H imply that sentence? More explanation would be helpful.

We have rewritten this sentence and parts of its paragraph to (lines 690-700):

“Although a single spike can reliably trigger many others in the network, we never observed explosive amplification of activity in our model. On the contrary, multiple excitatory spikes result in a sub-linear summation of responses (Figure 6D), likely reflecting the frequent lateral inhibition onto sub-sequences (Figure 6H). Together, this suggests that sequences operate in a regime where local inhibition often balances excitatory signals. We did not explicitly construct our model to display an excitatory/inhibitory balance. Still, this form of feedback inhibition and the possibility of routing via the alteration of excitatory and inhibitory conductances in gate neurons are reminiscent of “precise balance”, a form of balance within short temporal windows (Bhatia et al., 2019). Hence, our work is consistent with and extends previous theoretical work proposing that the gating of firing rate signals could be arbitrated via the disruption of excitatory/inhibitory balance in local populations (Vogels and Abbott, 2009).”

15. In Methods, please include equations. At least somebody (including one of us) would want to know what they are, and the reader shouldn't have to reproduce them from the explanations (especially since the definition of the exponential integrate and fire neuron probably isn't standard). Especially important is the average number of connections/neurons.

We have updated our methods to include equations for the exponential integrate-and-fire neuron.

“We simulated neurons as adaptive exponential integrate-and-fire (see Table 1 for parameters) (Brette and Gerstner, 2005). We used the NEST model “aeif_cond_exp” (see Methods, Simulations), which implements a conductance-based exponential integrate-and-fire neuron model where the membrane potential is given by:

C dV/dt = −g_L(V − E_L) + g_L Δ_T exp((V − V_T)/Δ_T) − g_e(V − E_e) − g_i(V − E_i) − w + I_e

τ_w dw/dt = a(V − E_L) − w

At each firing time, the adaptation variable w is increased by an amount b, which accounts for spike-triggered adaptation. Inhibitory and excitatory synaptic conductances (g_i and g_e) are implemented as exponential decay kernels (see NEST documentation, Linssen et al., 2018).”
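As an illustration, the neuron dynamics can be integrated with a simple forward-Euler scheme. This sketch omits the synaptic conductances g_e and g_i for brevity, and the parameter values are generic placeholders, not the fitted turtle-cortex values from Table 1:

```python
import numpy as np

# Forward-Euler sketch of the AdEx dynamics, without synaptic conductances.
# Parameter values are generic placeholders, not those from Table 1.
C, g_L, E_L = 200.0, 10.0, -70.0     # capacitance (pF), leak (nS), rest (mV)
V_T, Delta_T = -50.0, 2.0            # threshold and slope factor (mV)
a, b, tau_w = 2.0, 60.0, 120.0       # adaptation coupling (nS), jump (pA), tau (ms)
V_peak, V_reset = 0.0, -58.0         # spike detection and reset (mV)

def simulate(I_e, t_max=500.0, dt=0.05):
    """Integrate V (mV) and w (pA) under a constant input current I_e (pA)."""
    V, w, spikes = E_L, 0.0, []
    for step in range(int(t_max / dt)):
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * np.exp((V - V_T) / Delta_T)
              - w + I_e) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:               # spike: reset V, add adaptation jump b
            spikes.append(step * dt)
            V, w = V_reset, w + b
    return spikes
```

With these placeholders, a constant 500 pA input elicits repetitive firing, while zero input leaves the neuron at rest.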

We also include equations for the normalized rank entropy in Methods (see reviewers’ comment 8) and the number of connections in the caption of Figure 1A (see reviewers’ comment 1).

16. In table 1 it says that the synaptic conductance time constant is 1.103681 ms. Is it really that short? That seems strange, given the long membrane time constant. And why so many significant figures?

It is difficult to establish which values are “strange” since we lack alternative sources against which to compare turtle cortex physiology. All of our model parameters take their values either from fitting procedures or from the literature. The fitting of membrane and synapse time constants was “bottom-up”; that is, they were determined from single- or paired-patch-clamp measurements, and we did not perform any parameter searches aimed at producing a particular network behavior. The values resulting from the fitting could certainly be rounded after a few decimals, but we report them as they were used in our simulations.

We next describe in more detail the fitting process that we used to obtain the membrane and synaptic time constants and expand our Methods sections (see updated text below). We first fitted the membrane leak conductance and the membrane capacitance (which together determine the membrane time constant) from current-clamp traces. The result of this fit and example traces are shown in Figure 1B. We then fitted the synaptic conductance time constant using EPSP rise times. Examples of the resulting model EPSPs are shown as an inset in Figure 1D. All data used here are publicly available through figshare (see Data and Software availability).

Extract from Figure 1

B. Top left: example of fit (black) of membrane potential responses to current injection in a recorded neuron (blue). Bottom left and right: distributions of fitted membrane capacitance (Cm) and leak conductance (gL) (n=3,886 traces).

D. Lognormal fit to peak excitatory synaptic conductances. Top: cumulative distribution function (cdf). Bottom: probability density function (pdf). Conductances were estimated from EPSP amplitudes experimentally obtained from recorded pairs of connected neurons (gray dots). Inset: example of modeled EPSPs for different synaptic weights (top arrowheads) in an isolated postsynaptic model neuron at resting potential.

We expand our Methods (lines):

“To estimate membrane properties, we used least squares to match the output of our neuron model to the membrane potential of real neurons in whole-cell clamp under different levels of current injection (Hemberger et al., 2019). An example of the fit can be seen in Figure 1B. We first fit the model membrane leak conductance (resistance) using the steady state of the experimentally measured membrane potential after current injection. We then fit the model membrane capacitance (which determines the membrane time constant) using the experimentally measured potential immediately after the onset of current injection. We fitted all traces independently (n=3,886), and we used the median of the distributions for all neurons in the model.

(…)

We fitted synaptic time constants to rise times of excitatory postsynaptic potentials (EPSPs, n=382) obtained in paired patch recordings using least-squares and used the median for our model synapses. We then converted experimentally measured EPSP amplitudes (same n=382) to synaptic conductances (Figure 1D) using least squares and fitted a lognormal distribution via maximum likelihood estimation.“
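For a lognormal distribution, maximum likelihood estimation reduces to taking the mean and standard deviation of the log-transformed data. A sketch on synthetic data (for illustration only, not the experimental conductances):

```python
import numpy as np

def fit_lognormal(samples):
    """Maximum-likelihood lognormal fit (sketch): for lognormal data, the MLE
    of the underlying normal parameters is the mean and (biased, 1/n)
    standard deviation of the log-transformed samples."""
    log_x = np.log(np.asarray(samples, dtype=float))
    return log_x.mean(), log_x.std()

# Sanity check: recover known parameters from synthetic data
rng = np.random.default_rng(0)
data = rng.lognormal(mean=-1.0, sigma=0.8, size=50_000)
mu_hat, sigma_hat = fit_lognormal(data)
print(mu_hat, sigma_hat)  # close to -1.0 and 0.8
```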

17. The result that strong sparse connections support sequential network activity is not particularly surprising. Perhaps more emphasis could be placed on the result that the weak, dense part of the network is necessary for supporting flexible sequential activation. Could a figure be added that shows the ideal ratio or range of ratios of strong to weak connections? Perhaps this is in the results already and just needs to be brought to the foreground.

We respectfully disagree with the statement that reliable sequential activity is self-evident, given the presence of strong sparse connections. In fact, even the reviewers’ main comment seems to attribute our results to the “exceptionally low firing rates of the turtle cortex,” not to the presence of strong sparse connections. Our networks are large and recurrent, with parameters constrained by data and subject to uncorrelated random activity, so it is non-trivial to conclude that a single factor supports sequence generation. Both strong-sparse and weak-dense connections are important for understanding how sequences can be generated and propagated.

We do not per se have two separate groups of strong and weak connections, but rather a continuum of strengths since the entire distribution of connection strengths is extracted from experimental data (Figure 1D). We refer to them as weak and strong based on thresholds that we selected to study the two modes of the distribution of traversed connections (bottom 90% or top 0.3%, lines 328-333, Figure 3E top). Therefore, it is not obvious how we could vary the ratio of strong to weak connectivity unless we completely change our model.

Nonetheless, we next clarify how weak connections are important for sequence flexibility. While propagation happens mainly through strong connections, the actual efficacy of those strong connections will depend on the conductance state of followers, as we showed through the manipulation of gate neurons (Figure 5). Our analysis shows that weak connections play two key roles in altering this conductance state:

1. Weak connections provide an input amplification effect, depolarizing the network and promoting or reducing the number of followers. In Figure 3H (updated in reviewers’ comment 28), we directly compared simulations with and without weak connections. We observed that weak connections could differentially modulate the number of reliably-responding neurons as a function of the input regime of the network. Since activity in the turtle cortex evolves spatially as traveling waves (Figure 2 —figure supplement D) (Prechtl et al., 1997), weak connections play a unique role in enabling or disabling propagation within different subcircuits.

2. Weak connections provide the main lateral inhibition between subsequences. In our model, inhibitory neurons are widely connected and driven by weak convergent excitatory connections (Figure 3DE). When a trigger neuron spikes and activates its followers, these together will cause lateral inhibition onto downstream sub-sequences in 90% of the cases (Figure 6H). This lateral inhibition between sub-sequences, mediated through weak connections, can reliably halt propagation, creating a class of followers that require both the activation of one trigger neuron and the silence of another (which we term combination-specific followers, Figure 6E, lines 565-570).

We introduce both roles of weak connections supporting flexible routing in our discussion of tuning (as untuned depolarization and feature competition, lines 828-834, 848-856):

“In mouse V1, stimulus presentation triggers broadly-tuned depolarization via a majority of weak connections, while sparse strong connections determine orientation-selective responses (Cossell et al., 2015). This untuned depolarization via weak connections is consistent with our finding of activity regimes where weak connections can modulate the reliability of propagation (Figure 3H). Overall, the dual roles in routing played by weak and strong connectivity may have corresponding roles in what we describe as tuning.

(…)

“Under this view, lateral inhibition between sub-networks, mediated through the convergence of weak connections (Figure 3DE), may perform nontrivial computations on the properties of stimuli in a sensory cortex. Indeed, lateral inhibition triggered by early responding principal neurons implements concentration-invariant coding in mouse olfactory cortex (Bolding and Franks, 2018). In line with our observation of common lateral inhibition between sub-networks (Figure 5H, Figure 6H), activations of single neurons in mouse V1 tend to inhibit neurons with overlapping tuning (feature competition) (Chettih and Harvey, 2019).”

18. The matching to the data from the turtle cortex is an excellent foundation for this work and grounds the model. Still, it might be nice to know how the results change as these parameters move around. In particular, it would be nice to know which aspects of the turtle's visual cortical network and cellular properties are necessary and sufficient parameters for the observed reliable sequence generation. All this is optional, so use your best judgment as to whether you want to include any of it. And some of it might simply be a point for the Results or Discussion.

a. If the ratio of E/I in the network is changed, do the results hold?

b. Are there other parameter changes that substantively affect the results? It seems like single-cell adaptation could be an important factor in pattern generation in the network.

c. The connection pattern in the turtle cortex might also be critical for these results. This isn't modeled explicitly in this work, but it could be an important feature for the experimental results that ground the paper. Does turtle cortex "look like" rodent cortex, or are some of the connections sparser, or do they fall off spatially faster or slower? It would be good to see a discussion of this in the paper.

d. In the model, connections are random but have a log-normal distribution. How does changing this affect the results and how well does that fit rodent cortical data?

We thank the reviewers for appreciating our efforts to ground the model. These are all interesting questions and particularly relevant for expanding our results to the mammalian cortex. We next address each suggestion individually.

Suggestion A: Ratio of E/I

Single-cell voltage clamp experiments suggest that inhibitory postsynaptic currents in the turtle cortex are 2-3 times stronger than excitatory currents. Consequently, we modeled the distribution of inhibitory synaptic strengths as a scaled-up version of the excitatory distribution, for which we had paired-patch measurements (Figure 1D, Methods). For the ratio of E/I neurons in the network, we took our best estimate of the interneuron population in the turtle dorsal cortex, which is 7% (see Hemberger et al., 2019, C.M. Müller, personal communication). Changing either the ratio of E/I strengths or E/I neurons can change the efficacy and expected number of strong excitatory connections and affect our conclusions. However, we believe that such changes should be, like our model, grounded on estimates that are species-specific. Thus, we have included in our discussion as part of our response to the reviewers’ main comment about extrapolations to mammals:

“Thus, could strong connections underlie cortical sequences more generally, and are sequences an ancient feature of cortical computation? We based our model on data from the turtle cortex, yet other species may introduce variations that might be relevant, including differences in operating firing rates or neuronal populations.”

Suggestion B: Single-cell adaptation

From whole-cell patch-clamp recordings, we estimate that the majority of excitatory neurons in the turtle cortex are adaptive, consistent with the strongly adaptive responses observed under visual stimulation (Fournier et al. 2018). On the other hand, inhibitory neurons show a diverse range in their adaptation/facilitation, likely reflecting inhibitory diversity in the turtle cortex, which is transcriptomically comparable to that in the mammalian cortex (Tosches et al. 2018). For simplicity, our model assumed all neurons, excitatory or inhibitory, to be adaptive. We have re-run a subset of our simulations where we disabled adaptive currents for all inhibitory neurons (a = 0, b = 0, see equations in reviewers’ comment 15). We provide the results in Author response image 3. Our conclusions regarding the relationship between firing rate and the number of detected followers remain unchanged (Author response image 3A; compare to Figure 2F). The resulting networks still produce reliable sequences with similar properties (Author response image 3B). Compared to our original simulations with adaptive inhibitory neurons, sequences without inhibitory adaptation display fewer excitatory followers (mean exc follower count without inh adaptation: 7; with: 20) and more inhibitory followers (without: 8; with: 4). Having explored these two extremes (all inhibitory neurons being adaptive or all non-adaptive), we expect that a mixed model that includes diverse inhibitory adaptation would still display sequences like the ones we present.

Author response image 3
Sequences without inhibitory neuron adaptation.

(A) Number of followers in a network with non-adaptive inhibitory neurons detected for each simulation as a function of the mean level of activity in the network. Blue: exc; red: inh; thick line: moving average; gray dots: sequences in B. (B) Top to bottom: sequences of follower spikes in two trials for three example simulations (gray dots in A). Y-axis: followers sorted by median spike-time. Same sorting in both trials. Spikes from non-followers are not shown.

Suggestion C: Connectivity patterns

We agree that specific patterns of structured connectivity in the turtle cortex could be important for the generation and propagation of sequences. Unfortunately, we lack data on the fine-grained connectivity in the turtle cortex that would be relevant for the recurrent interactions that we study in our model. This lack of data led us to choose random connectivity for our model, although we did impose spatially decaying connection probability based on experimental estimates. Even under the assumption of randomness, we demonstrate reliable propagation, establishing a baseline against which one could evaluate different theoretical connectivity patterns (such as synfire or polychronous chains, which we describe in our Discussion, lines 702-726).

As the reviewers suggest, rodent and turtle cortical connectivity display certain spatial similarities. For instance, both the turtle dorsal cortex and the rodent auditory cortex display Gaussian profiles of distance-dependent connectivity (Levy and Reyes, 2012), and sensory afferents produce a gradient of en-passant synapses in both the turtle dorsal cortex and the rodent piriform cortex (Fournier et al., 2015). However, data are limited, and detailed comparisons across different cortical areas and species are not trivial. Nonetheless, we discuss the issue of connectivity patterns and potential future directions in our updated Discussion section:

“For instance, the reliability of sequence propagation may depend on connectivity features we did not explore here, such as the presence of different connectivity motifs or differences in the distance-, layer- and population-specific connection probabilities (Jiang et al., 2015; Song et al., 2005). Further modeling and experimental work on the cortical responses of diverse species will be needed to provide a comprehensive comparative perspective (Laurent, 2020).

(…)

Certain nonrandom features may be expected in turtle cortical connectivity, such as axonal projection biases (Shein-Idelson et al., 2017), differences between apical and basal dendritic connectivity, gradients of connectivity across the cortical surface (Fournier et al., 2015), or plasticity-derived structures. Transcriptomic and morphological evidence suggests that the turtle cortex contains diverse neuronal types, as seen in the mammalian cortex (Nenadic et al., 2003; Tosches et al., 2018). Different neuronal types may play different roles in information routing.”

Suggestion D: Log-normal distributions

In our present results, we already explored changing the log-normal distribution by truncating its tail (Figure 3F, reproduced below). Although the truncated network shared 90% of its connectivity with the full model, we found that it could rarely produce any followers.

Extract from Figure 3

“F. Top: schematic of alternative network models and their truncated distribution of synaptic strengths. Bottom: Number of detected excitatory followers per simulation (n=2,000 each). Boxes: median and [25th, 75th] percentiles.”
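The truncation manipulation can be sketched as follows. This is a toy illustration, not the authors' code, and the log-normal parameters are arbitrary: the strongest 10% of weights are redrawn from the body of the distribution, so the truncated network keeps roughly 90% of its connection strengths identical to the full one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative log-normal weights (parameters are not the model's values).
w_full = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# Truncate the tail: redraw the strongest 10% of weights from below the
# 90th percentile, so ~90% of connections keep their original strength.
cutoff = np.quantile(w_full, 0.9)
tail = w_full > cutoff
w_trunc = w_full.copy()
w_trunc[tail] = rng.choice(w_full[~tail], size=tail.sum())
```

The resulting weight vector has no values above the 90th-percentile cutoff while the bulk of the distribution is untouched, mirroring the "similar in 90% of its connectivity" comparison above.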

Additionally, the reviewers ask how well a log-normal distribution fits rodent cortical data. Log-normal distributions of synaptic strengths are common across many mammalian species, and we include a short summary in our Discussion (below, lines 756-768). However, the specific lengths and thicknesses of the tails vary, and we cannot interpret them without the context of the particular circuit’s neuronal properties.

“Such sparse and strong connections are a common feature of brain connectivity. Indeed, long-tailed distributions of connection strengths are found in rat visual cortex (Song et al., 2005), mouse barrel and visual cortices (Cossell et al., 2015; Lefort et al., 2009), rat and guinea pig hippocampus (Ikegaya et al., 2013; Sayer et al., 1990), and human cortex (Shapson-Coe et al., 2021).”

19. The Results/Discussion seem to be missing a stronger take on what this kind of patterned activity might be useful for. A concrete example of a downstream computation that relies on the reliable combinatorial activation of sub-network sequences would be useful.

We agree this is a very exciting question, albeit one that is difficult to answer with our current model based on ex vivo activity in the turtle cortex. Our study proposes sequential propagation (or failure to propagate) as an indication of activity routing. This routing effectively produces reliable mappings from specific initial inputs (injected spikes) to specific activations (followers), i.e., a computation. Given that we study a visual cortex, we discuss the reliable routing of spikes as implementing visual tuning of cortical neurons, one of the core cortical computations underlying perception. We now write this more explicitly in our Discussion (below, lines 809-820). We also discuss some possible implications of sequence propagation inspired by the mammalian literature, which we have now extended:

“Evidence from rodents has often associated spiking sequences with memory or spatial tasks (Buzsáki and Tingley, 2018; Diba and Buzsáki, 2007; Dragoi and Tonegawa, 2011; Modi et al., 2014; Vaz et al., 2020) or reported sequences within the spontaneous or stimulus-evoked activity in sensory areas (Carrillo-Reid et al., 2015; Dechery and MacLean, 2017; Fellous et al., 2004; Luczak et al., 2015; Luczak and MacLean, 2012). More generally, spiking sequences have been proposed as parts of an information-coding scheme for communication within cortex (Luczak et al., 2015) or as content-free structures that can reference experiences distributed across multiple cortical modules (Buzsáki and Tingley, 2018). Although spiking sequences were observed in the ex vivo turtle cortex, we predict the existence of sequences under in vivo levels of spontaneous activity. We propose that reliable routing of sequential activity might implement the visual tuning of cortical neurons.

(…)

The presence of combination-specific followers in our model suggests that the tuning of a neuron may be expressed as the result of logical operations on the tuning properties of other neurons in the recurrent circuit.”

20. The gating results seem very exciting! It might be nice to highlight connections to a recent explosion in ML papers on gating and putting that part of the work more front and center in the abstract.

We thank the reviewers for their appreciation. We have added the following text to our discussion:

“We expect that complex computations can be built from this proposed form of gating. Previous theoretical work on signal propagation in recurrent networks with irregular background activity found that the combination of parallel pathways and gating can implement different logic statements, illustrating how gating can form the basis for more complex computations (Vogels and Abbott, 2005). Indeed, recent advances in machine learning result from leveraging gating to control the flow of information more explicitly. For instance, gating can implement robust working memory in recurrent networks (Chung et al., 2014), prevent catastrophic forgetting through context-dependent learning (Masse et al., 2018), and produce powerful attention mechanisms (Vaswani et al., 2017).”

21. Line 183: "the number of excitatory followers far exceeds the number of inhibitory followers". This seems obvious in a network with 93% excitatory neurons. Are we missing something?

Our model is based on experimental sequences that mostly consist of inhibitory followers (Hemberger et al., 2019). This was likely due to the experimental setup, where an MEA was placed under the bottom inhibitory layer of the three-layered turtle cortex, resulting in very few excitatory neurons being spike-sorted. However, the length of experimental sequences (up to 200 ms) suggested that propagation should be happening within the excitatory layer, even though it was not electrophysiologically accessible. Thus, one of the motivations for our computational study was to verify that excitatory followers are common and to study propagation within the excitatory layer (Introduction, lines 54-56). We have replaced this sentence to make this more explicit:

“While the electrical distance between the MEA and the pyramidal cell layer limited the experimental observation of excitatory followers in the turtle cortex, our model predicts that their number far exceeds that of inhibitory followers.”

22. Line 187: Hz is a unit for oscillations and spiking activity is not oscillatory. Might we suggest spikes/s?

We have changed all instances of Hz for “spk/s”, as in the experimental publication that our model is based on (Hemberger et al., 2019).

23. Figure 2E: The MEA square is an excellent example of the sort of thing that this reviewer cannot see when printed at the intended size. Please have mercy on older eyes.

We apologize for the lack of clarity. We have changed the MEA square color and line thickness to improve contrast.

We have also updated all of our main figures to ensure a consistent maximum width (A4 minus margins: 159.2 mm) and font size (8 pt), to improve the use of whitespace between panels, and to darken overly light shades (see reviewers’ comments 28 and 30).

24. Line 250-254: We struggled a bit to understand the concept of spike transfers. It seems like this is any link in any sequence where the presynaptic neuron is excitatory. Could you expand this explanation, please?

Thanks for this comment. Indeed, a spike transfer refers to any excitatory transmission within 100 ms. We studied spike transfers across the full network activity, not only within sequences. We have updated the text to read (lines 281-293):

“In a random subset of our 6,000 simulations (n=900), we searched for instances when a spike successfully traversed a connection, i.e., when the postsynaptic neuron fired within 100 ms of the presynaptic one. We call this successful activation of a connection a “spike transfer” from the pre- to the postsynaptic neuron. We thus combined spikes with recurrent connectivity to produce a directed acyclic graph of all spike transfers for each simulation (Figure 3A). Note that we studied spike transfers among all connected neurons in the network, allowing us to characterize the connections that are most frequently traversed. Most excitatory-to-excitatory spike transfers in low-activity simulations show a delay from pre- to postsynaptic spike of 6-8 ms (Figure 3B), matching the delays measured in turtle cortex (Hemberger et al., 2019). Interestingly, excitatory-to-inhibitory spike transfers display consistently shorter delays than their excitatory-to-excitatory counterparts, even at higher firing rates (Figure 3B inset), possibly reflecting the more depolarized state of inhibitory neurons (Figure 1I).”
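The spike-transfer extraction described in the quoted text can be sketched as follows. This is a toy re-implementation on made-up data, not the authors' analysis code:

```python
def spike_transfers(spike_times, connections, window=100.0):
    """Collect 'spike transfers': for each connection (pre, post), pair every
    presynaptic spike with any postsynaptic spike that follows it within
    `window` ms. Edges always point forward in time, so the result forms a
    directed acyclic graph over (neuron, spike_time) nodes."""
    edges = []
    for pre, post in connections:
        for t_pre in spike_times.get(pre, ()):
            for t_post in spike_times.get(post, ()):
                if 0.0 < t_post - t_pre <= window:
                    edges.append(((pre, t_pre), (post, t_post)))
    return edges

# Toy example: chain 0 -> 1 -> 2, spike times in ms.
spikes = {0: [10.0], 1: [16.0, 200.0], 2: [300.0]}
edges = spike_transfers(spikes, [(0, 1), (1, 2)])
# Only 0->1 (6 ms delay) and 1->2 (100 ms delay) fall within the window.
```

Because only time-ordered spike pairs on existing connections become edges, the graph is acyclic by construction, which is what allows the traversal statistics described above.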

Please see also our response to reviewers’ comment 11.

25. Line 265: "very few motifs lead to the activation of excitatory neurons". This confused us because in Line 183 it says "the number of excitatory followers far exceeds the number of inhibitory followers". We are clearly missing something important; please clarify.

We apologize for the lack of clarity. The first sentence means that the spike transfers that lead to an excitatory spike rarely form complex motifs but are rather one-to-one transfers. This does not mean that there are very few excitatory spikes. We have updated that line to read:

“In low-activity simulations, we found that excitatory spikes are rarely triggered by convergence or fan motifs (Figure 3C top, conv.), but they are rather the result of one-to-one spike transfers (Figure 3C top, single).”

26. Line 305: How strong do the connections have to be? We're guessing strong enough so a single presynaptic spike causes a postsynaptic spike, but this should be stated.

We have rewritten this line to:

“Hence, we conclude that sparse connections strong enough to trigger a postsynaptic spike are necessary and sufficient to produce repeatable sequences in our model.”

We note that the exact strength of these connections will depend on the network state. From our main text (lines 116-119):

“The actual efficacy of these rare but powerful connections depends on the conductance state of their postsynaptic neurons and, thus, on ongoing network activity and the state of adaptation currents.”

27. Line 316-319: We feel that the role for strong connections is more clearly demonstrated than the role for weak connections. Obviously, if there are only a very few strong connections then the neurons will have a lower mean membrane potential and will be less responsive to spiking input. Is it that simple, or is there more to it?

Figure 3H directly compares simulations with and without weak connections (see the updated figure and caption in reviewers’ comment 28 below).

If there are only very few strong connections for some ranges of input (approx. 55-70 pA), the membrane potential is too far from threshold for most of these connections to be effective. In that case, the presence of weak connections can promote reliable transmission.

In other ranges of input, weak connections might increase the mean membrane potential to be too close to threshold, resulting in less predictable firing. In that case, the presence of weak connections reduces the reliability of the responses.

This effect is similar to stochastic resonance, which we describe in our Discussion section (lines 745-754) (Teramae et al., 2012), and to untuned depolarization, as reported in vivo in mouse V1 (lines 828-830) (Cossell et al., 2015).

We also address the role of weak connections in our response to the reviewers’ comment 17.

28. Figure 3H: We lack the visual acuity to interpret this plot. Perhaps an alternative representation would be clearer?

We have simplified this plot and added explicit labels to the input regimes where the role of weak connections changes (see reviewers’ comments 17 and 27). We have also increased the size and updated its caption:

“H. Difference between the number of followers detected in full and strong-only models under the same input. Brackets indicate regimes where the presence of weak connections increases (full>str) or decreases (full<str) the follower count. 2,000 simulations.”

29. Line 366: two "due to"s in this sentence.

Thank you.

“Importantly, the few equally strong connections between sub-networks do not always guarantee the reliable propagation of activity between them due to unpredictable network effects resulting from recurrent interactions.”

30. Figure 4E-G: there are numbers on these plots that we definitely can't read, and the connectivity in 4G is not only tiny but also a very pale grey in places.

We apologize for the lack of clarity. As we describe in comment 23, we have now updated all of our main figures to ensure consistent maximum width (A4 minus margins: 159.2 mm) and font size (8pt), to improve whitespace use between panels, and remove shades that are too light.

31. Figure 4E, F: this is a measured frequency rather than a probability, right?

Correct. These are measured frequencies, which we take as our best estimate of the corresponding probabilities. We have edited the captions to specify this:

“E. Relationship between connectivity and follower clusters. Left: schematic illustrating connections between (purple) and within (blue) clusters. Right: Estimated probability (measured frequency) of strong (top 0.3%) excitatory-to-excitatory connection within or between clusters for 10,472 clusters of at least 5 followers pooled across all 6,000 simulations.

F. Estimated probability (measured frequency) of postsynaptic activation conditioned on presynaptic activation in the same trial for excitatory-to-excitatory connections, pooled across all 6,000 simulations. Boxes: median and [25th, 75th] percentiles.”

32. Line 618: landmark -> hallmark.

Thank you.

33. Line 651: it seems to us that your sequence structures have got a lot in common with those reported in your ref 34 (Polychronous groups, Izhikevich), which also depends on very strong synapses. Can you expand this section with some comparison to that study?

We have added the following to our Discussion section (lines 710-714):

“A related structure to synfire chains is that of polychronous chains, where synchrony is replaced by precise time-locked patterns that account for the distribution of synaptic delays (Egger et al., 2020; Izhikevich, 2006). These time-locked patterns, or polychronous groups, self-reinforce through spike-timing-dependent plasticity, resulting in stronger synapses (Izhikevich, 2006).”

34. Line 731: "may not be linked to behavior". Well, it's ex vivo, so we guess certainly not linked to behavior. Perhaps you can rephrase to make your meaning clearer?

We have simplified this sentence and edited the ones around it as part of our response to the reviewers’ comment 20. We refer the reviewers to that response for the updated text.

35. Line 818: "variable number of connections". It would be nice to know the statistics of this and how it compares with biology

We now give the average number of connections in the caption of Figure 1A (see our response to reviewers’ comment 1). We provide an example of the distribution of weak and strong E-E connections in Figure 1E.

36. Line 822: Nest -> NEST. (And we appreciate you stating and citing the version used!)

Thank you, and apologies for the misspelling.

https://doi.org/10.7554/eLife.79928.sa2

Article and author information

Author details

  1. Juan Luis Riquelme

    1. Max Planck Institute for Brain Research, Frankfurt am Main, Germany
    2. School of Life Sciences, Technical University of Munich, Freising, Germany
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing – review and editing
    Competing interests
    No competing interests declared
ORCID: 0000-0003-4604-7405
  2. Mike Hemberger

    Max Planck Institute for Brain Research, Frankfurt am Main, Germany
    Contribution
    Resources, Data curation
    Competing interests
    No competing interests declared
  3. Gilles Laurent

    Max Planck Institute for Brain Research, Frankfurt am Main, Germany
    Contribution
    Resources, Supervision, Writing – review and editing
    Competing interests
    No competing interests declared
ORCID: 0000-0002-2296-114X
  4. Julijana Gjorgjieva

    1. Max Planck Institute for Brain Research, Frankfurt am Main, Germany
    2. School of Life Sciences, Technical University of Munich, Freising, Germany
    Contribution
    Conceptualization, Resources, Supervision, Funding acquisition, Methodology, Project administration, Writing – review and editing
    For correspondence
    gjorgjieva@tum.de
    Competing interests
    No competing interests declared
ORCID: 0000-0001-7118-4079

Funding

Max-Planck-Gesellschaft

  • Juan Luis Riquelme
  • Mike Hemberger
  • Gilles Laurent
  • Julijana Gjorgjieva

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We thank all members of the Gjorgjieva lab for discussions, and Hiroshi Ito and Marion Silies for critical feedback on the manuscript. This work was supported by the Max Planck Society.

Senior Editor

  1. Laura L Colgin, University of Texas at Austin, United States

Reviewing Editor

  1. Peter Latham, University College London, United Kingdom

Reviewer

  1. Abigail Morrison, Forschungszentrum Jülich, Germany

Version history

  1. Preprint posted: December 23, 2021
  2. Received: May 3, 2022
  3. Accepted: December 19, 2022
  4. Version of Record published: February 13, 2023 (version 1)
  5. Version of Record updated: April 3, 2023 (version 2)

Copyright

© 2023, Riquelme et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Juan Luis Riquelme
  2. Mike Hemberger
  3. Gilles Laurent
  4. Julijana Gjorgjieva
(2023)
Single spikes drive sequential propagation and routing of activity in a cortical network
eLife 12:e79928.
https://doi.org/10.7554/eLife.79928



    Atypical deployment of social gaze is present early on in toddlers with autism spectrum disorders (ASDs). Yet, studies characterizing the developmental dynamic behind it are scarce. Here, we used a data-driven method to delineate the developmental change in visual exploration of social interaction over childhood years in autism. Longitudinal eye-tracking data were acquired as children with ASD and their typically developing (TD) peers freely explored a short cartoon movie. We found divergent moment-to-moment gaze patterns in children with ASD compared to their TD peers. This divergence was particularly evident in sequences that displayed social interactions between characters and even more so in children with lower developmental and functional levels. The basic visual properties of the animated scene did not account for the enhanced divergence. Over childhood years, these differences dramatically increased to become more idiosyncratic. These findings suggest that social attention should be targeted early in clinical treatments.