Hippocampal sharp wave-ripples and the associated sequence replay emerge from structured synaptic interactions in a network model of area CA3

  1. András Ecker (corresponding author)
  2. Bence Bagi
  3. Eszter Vértes
  4. Orsolya Steinbach-Németh
  5. Mária R Karlócai
  6. Orsolya I Papp
  7. István Miklós
  8. Norbert Hájos
  9. Tamás F Freund
  10. Attila I Gulyás
  11. Szabolcs Káli (corresponding author)
  1. Institute of Experimental Medicine, Eötvös Loránd Research Network, Hungary
  2. Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Hungary
  3. Alfréd Rényi Institute of Mathematics, Eötvös Loránd Research Network, Hungary
  4. Institute for Computer Science and Control, Eötvös Loránd Research Network, Hungary
8 figures, 3 tables and 1 additional file

Figures

Figure 1 with 2 supplements
Overview of learning and the spontaneous generation of sharp wave-ripples (SWRs) and sequence replay in the model.

(A) Tuning curves (Equation (1)) of exemplar place cells covering the whole 3-m-long linear track. (B) Broad, symmetric spike-timing-dependent plasticity (STDP) kernel used in the learning phase. …
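The caption refers to spatial tuning curves (Equation (1)) and a broad, symmetric spike-timing-dependent plasticity (STDP) kernel (Figure 1B). As a rough illustration only, a minimal sketch follows; the Gaussian tuning-curve shape and all parameter values (`sigma`, `r_max`, `A`, `tau`) are placeholder assumptions, not the paper's actual Equation (1) or fitted kernel:

```python
import numpy as np

def place_cell_rate(x, center, sigma=0.3, r_max=20.0):
    """Hypothetical Gaussian tuning curve (Hz) on a linear track.

    The paper's actual tuning curve is its Equation (1); the shape and
    parameters used here are placeholder assumptions.
    """
    return r_max * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))

def symmetric_stdp(delta_t, A=0.01, tau=0.02):
    """Broad, symmetric STDP kernel (cf. Figure 1B): potentiation for
    small |Δt| regardless of spike order. A and tau are assumed."""
    return A * np.exp(-np.abs(delta_t) / tau)

track = np.linspace(0.0, 3.0, 301)          # 3-m-long linear track
rates = place_cell_rate(track, center=1.5)  # exemplar place cell at 1.5 m
```

The key property of the symmetric kernel is that it depends only on |Δt|, so pre-before-post and post-before-pre pairings potentiate equally, which is what permits both forward and backward replay later in the paper.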

Figure 1—figure supplement 1
The generation of the spike trains of pyramidal cells (PCs) in the exploration phase.

(A) Firing rates of exemplar place cells covering the whole 3-m-long linear track. In contrast to the spatial tuning curves shown in Figure 1A (Equation (1)), these are time-dependent rates …
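The supplement describes converting spatial tuning curves into time-dependent rates as the simulated animal traverses the track, and the modeling assumptions (Table 1) state that spike trains are modeled as Poisson processes. A minimal sketch of generating such "teaching spike trains" via thinning of an inhomogeneous Poisson process; the constant running speed and all parameters are assumptions:

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, t_max, rate_max, rng):
    """Draw spike times on [0, t_max) from a time-varying rate (Hz) by
    thinning a homogeneous Poisson process of rate rate_max (Hz)."""
    spikes = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate_max)  # next candidate spike
        if t >= t_max:
            break
        if rng.random() < rate_fn(t) / rate_max:  # accept with prob r(t)/r_max
            spikes.append(t)
    return np.array(spikes)

# Example: a place field crossed at an assumed constant speed v = 0.3 m/s,
# so the spatial tuning curve becomes a time-dependent rate r(v*t)
v, center, sigma, r_max = 0.3, 1.5, 0.3, 20.0
rate = lambda t: r_max * np.exp(-(v * t - center) ** 2 / (2 * sigma ** 2))
rng = np.random.default_rng(1)
spike_times = inhomogeneous_poisson(rate, t_max=10.0, rate_max=r_max, rng=rng)
```

Spikes cluster around t = center/v = 5 s, when the animal passes through the place field.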

Figure 1—figure supplement 2
Single-cell models.

(A) Fitted AdExpIF pyramidal cell (PC) model (blue) and experimental traces (green) are shown in the top panel. The amplitudes of the 800-ms-long step current injections shown at the bottom were as …

Figure 2 with 2 supplements
Forward and backward replay events, accompanied by ripple oscillations, can occur spontaneously but can also be cued.

(A) Pyramidal cell (PC) raster plot of a 10-s-long simulation, with sequence replays initiating at random time points and positions and propagating either in forward or backward direction on the top …

Figure 2—figure supplement 1
The step-size distribution of the decoded paths is much wider than expected.

(A) Posterior matrix of the decoded positions from spikes within a selected high-activity state (first one from Figure 2A). Thick gray lines indicate the edges of the decoded, constant velocity …
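The posterior matrix in this supplement comes from decoding position from population spiking. The authors' exact estimator is not given in this excerpt; a standard memoryless Bayesian decoder with a Poisson spike-count likelihood and flat spatial prior is a common choice, sketched below with hypothetical tuning curves:

```python
import numpy as np

def decode_position(spike_counts, tuning, dt):
    """Memoryless Bayesian decoder with a flat spatial prior.

    spike_counts: (n_cells,) spikes observed in one time bin of length dt (s)
    tuning:       (n_cells, n_pos) expected firing rates (Hz) per position bin
    Returns the posterior over position bins (sums to 1).
    """
    # Poisson log-likelihood per position bin (count-factorial terms dropped,
    # as they do not depend on position)
    log_post = spike_counts @ np.log(tuning * dt + 1e-12) - tuning.sum(0) * dt
    log_post -= log_post.max()  # subtract max for numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy example: 3 hypothetical cells tuned to 5 position bins
tuning = np.array([[20., 5., 1., 1., 1.],
                   [1., 5., 20., 5., 1.],
                   [1., 1., 1., 5., 20.]])
posterior = decode_position(np.array([0., 4., 1.]), tuning, dt=0.02)
```

Stacking such posteriors over consecutive time bins yields a posterior matrix like the one shown in panel (A), to which a constant-velocity path can then be fitted.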

Figure 2—figure supplement 2
Single-cell characteristics during network simulations.

(A) Distributions of single pyramidal cell (PC) (A1) and parvalbumin-containing basket cell (PVBC) (A2) firing rates during the 10-s-long simulation shown in Figure 2. (B) Distributions of …

Figure 3 with 1 supplement
Sharp waves, ripple oscillations, and replay are robust with respect to scaling the recurrent excitatory weights.

(A) Pyramidal cell (PC) raster plots on top and PC population rates at the bottom for E-E scaling factors 0.9 (A1), 0.95 (A2), and 1.05 (A3). (Scaling factor of 1.0 is equivalent to Figure 2A.) …

Figure 3—figure supplement 1
Spectral analysis of network dynamics across different pyramidal cell-pyramidal cell (PC-PC) weight scaling factors.

(A) Power spectral densities (PSDs) of PC (A1) and parvalbumin-containing basket cell (PVBC) (A2) population rates and estimated local field potential (LFP) (A3). Gray lines correspond to individual …
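Power spectral densities of population rates, as analyzed in this supplement, are commonly estimated with Welch's method. A hedged sketch of computing the fraction of power in the ripple band; the 150–220 Hz band edges, sampling rate, and toy signal are assumptions, not the paper's analysis parameters:

```python
import numpy as np
from scipy.signal import welch

def ripple_band_power(rate, fs, band=(150.0, 220.0)):
    """Fraction of PSD power in the ripple band (band edges assumed)."""
    f, psd = welch(rate, fs=fs, nperseg=min(len(rate), 1024))
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].sum() / psd.sum()

# Toy population rate: baseline noise plus a 180 Hz ripple-like oscillation
fs = 2000.0                        # assumed sampling rate of the binned rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
rate = 5.0 + 2.0 * np.sin(2 * np.pi * 180.0 * t) + rng.normal(0, 0.5, t.size)
frac = ripple_band_power(rate, fs)
```

For the toy signal most of the (mean-subtracted) power falls in the ripple band, mimicking the concentration of spectral power during simulated SWRs.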

Figure 4
Two distinct environments can be learned and replayed by the network.

(A) Learned excitatory recurrent weight matrices. (A1) Weights after learning the first environment. Note that the matrix appears random because neurons are arranged according to their place field …

Figure 5
Learning with an asymmetric spike-timing-dependent plasticity (STDP) rule leads to the absence of backward replay.

(A) Asymmetric STDP kernel used in the learning phase. (B) Learned excitatory recurrent weight matrix. (C) Distribution of nonzero synaptic weights in the weight matrix shown in (B). (D) Pyramidal …

Figure 6
Altering the structure of recurrent excitatory interactions changes the network dynamics, but altering the weight statistics has little effect.

(A1) Binarized (largest 3% and remaining 97% nonzero weights averaged separately) recurrent excitatory weight matrix. (Derived from the baseline one shown in Figure 1C.) (A2) Distribution of nonzero …

Figure 7
Sequential replay requires firing rate adaptation in the pyramidal cell (PC) population.

(A) Voltage traces of fitted AdExpIF (blue) and ExpIF (gray) PC models and experimental traces (green) are shown in the top panel. Insets show the f–I curves of the in vitro and in silico cells. The …

Figure 8
Generation of ripple oscillations relies on recurrent connections within the parvalbumin-containing basket cell (PVBC) population.

(A) Significant ripple frequency (A1) and ripple power (A2) of a purely PVBC network, driven by (independent) spike trains mimicking pyramidal cell (PC) population activity. Gray color in (A1) means …

Tables

Table 1
List of modeling assumptions.
1. In the absence of unified datasets, it was assumed that published parameters from different animals (mouse/rat, strain, sex, age) can be used together to build a general model.
2. Connection probabilities were assumed to depend only on the presynaptic cell type and to be independent of distance.
3. Each pyramidal cell was assumed to have a place field in any given environment with a probability of 50%. For simplicity, multiple place fields were not allowed.
4. When constructing the ‘teaching spike trains’ during simulated exploration, place fields were assumed to have a uniform size, tuning curve shape, and maximum firing rate.
5. For simplicity, all synaptic interactions in the network were modeled as deterministic conductance changes. Short-term plasticity was not included, and long-term plasticity was assumed to operate only in the learning phase.
6. When considering the nonspecific drive to the network in the offline state, it was assumed that the external input can be modeled as uncorrelated random spike trains (one per cell) activating strong synapses (representing the mossy fibers) in the pyramidal cell population.
7. Some fundamental assumptions are inherited from common practices in computational neuroscience; these include modeling spike trains as Poisson processes, capturing weight changes with additive spike-timing-dependent plasticity, describing cells with single-compartmental AdExpIF models, modeling a neuronal population with replicas of a single model, and representing synapses with conductance-based models with biexponential kinetics.
8. When comparing our model to in vivo data, an implicit assumption was that the behavior of a simplified model based on slice constraints can generalize to the observed behavior of the full CA3 region in vivo, in the context of studying the link between activity-dependent plasticity and network dynamics.
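Assumption 7 above names additive spike-timing-dependent plasticity. In an additive rule, each spike pairing changes the weight by an amount that does not depend on the current weight (in contrast to multiplicative rules), with the weight clipped to a fixed range. A minimal sketch with a symmetric kernel, as in Figure 1B; the amplitude `A`, time constant `tau`, and bound `w_max` are placeholder values:

```python
import numpy as np

def additive_stdp(pre_spikes, post_spikes, w0=0.0, A=0.01, tau=0.02, w_max=1.0):
    """Additive pair-based STDP with a symmetric kernel: every pre/post
    spike pair moves the weight by A*exp(-|Δt|/tau), independent of the
    current weight, and the result is clipped to [0, w_max].
    A, tau, and w_max are placeholder values (times in seconds)."""
    w = w0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += A * np.exp(-abs(t_post - t_pre) / tau)
    return min(max(w, 0.0), w_max)

# Two cells with overlapping place fields fire close together in time,
# so their connection is strengthened in both directions
w = additive_stdp(pre_spikes=[0.10, 0.50], post_spikes=[0.11, 0.52])
```

Because the update ignores the current weight, additive STDP tends to produce the broad, heavy-tailed weight distributions analyzed in Figures 5 and 6.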
Table 2
Optimized parameters of pyramidal cell (PC) (AdExpIF and ExpIF) and parvalbumin-containing basket cell (PVBC) models.

Physical dimensions are as follows: Cm: pF; gL and a: nS; Vrest, ΔT, ϑ, θ, and Vreset: mV; tref and τw: ms; b: pA.

                Cm       gL     Vrest    ΔT      ϑ        θ        Vreset   tref   τw       a       b
PC (AdExpIF)    180.13   4.31   –75.19   4.23    –24.42   –3.25    –29.74   5.96   84.93    –0.27   206.84
PC (ExpIF)      344.18   4.88   –75.19   10.78   –28.77   25.13    –58.82   1.07   –        –       –
PVBC            118.52   7.51   –74.74   4.58    –57.71   –34.78   –64.99   1.15   178.58   3.05    0.91
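The parameters in Table 2 define adaptive exponential integrate-and-fire (AdExpIF) dynamics. A forward-Euler sketch using the optimized PC (AdExpIF) row follows; the interpretation of the two thresholds (ϑ driving the exponential term, θ detecting the spike), the integration scheme, and the 400 pA step amplitude are assumptions for illustration:

```python
import numpy as np

def simulate_adex(I, dt=0.1, C=180.13, gL=4.31, Vrest=-75.19, dT=4.23,
                  vth_exp=-24.42, theta=-3.25, Vreset=-29.74,
                  tref=5.96, tau_w=84.93, a=-0.27, b=206.84):
    """Forward-Euler sketch of AdExpIF dynamics with the optimized PC
    parameters from Table 2 (units: pF, nS, mV, ms, pA).
    Returns spike times in ms."""
    V, w = Vrest, 0.0
    ref_left = 0.0
    spikes = []
    for k, Ik in enumerate(I):
        if ref_left > 0.0:
            ref_left -= dt                # hold V at Vreset while refractory
        else:
            dV = (-gL * (V - Vrest) + gL * dT * np.exp((V - vth_exp) / dT)
                  - w + Ik) / C
            V += dt * dV
            if V >= theta:                # spike detected
                spikes.append(k * dt)
                V = Vreset
                w += b                    # spike-triggered adaptation
                ref_left = tref
        w += dt * (a * (V - Vrest) - w) / tau_w  # subthreshold adaptation
    return np.array(spikes)

# 800-ms-long step current injection, as in Figure 1—figure supplement 2
I = np.zeros(10000)       # dt = 0.1 ms → 1 s of simulated time
I[1000:9000] = 400.0      # 400 pA step (amplitude assumed)
spike_times = simulate_adex(I)
```

The adaptation variable `w` (incremented by `b` at each spike, relaxing with τw) is what produces the firing rate adaptation shown in Figure 7 to be necessary for sequential replay.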
Table 3
Synaptic parameters (taken from the literature or optimized).

Physical dimensions are as follows: ĝ: nS; τr, τd, and td (synaptic delay): ms; and connection probability pconn is dimensionless. GC stands for the granule cells of the dentate gyrus. (GC→PC synapses …

                ĝ                      τr     τd     td     pconn
                Sym.       Asym.
PC → PC         0.1–6.3    0–15       1.3    9.5    2.2    0.1
PC → PVBC       0.85                  1      4.1    0.9    0.1
PVBC → PC       0.65                  0.3    3.3    1.1    0.25
PVBC → PVBC     5                     0.25   1.2    0.6    0.25
GC → PC         19.15      21.5       0.65   5.4    –      –
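Table 3 lists peak conductances and rise/decay time constants, and the modeling assumptions (Table 1) specify conductance-based synapses with biexponential kinetics. A sketch of such a conductance waveform, normalized so its maximum equals the listed peak conductance; the normalization convention is a common choice and is an assumption here:

```python
import numpy as np

def biexp_conductance(t, g_peak, tau_r, tau_d, delay=0.0):
    """Biexponential synaptic conductance (nS) following a presynaptic
    spike at t = 0, starting after the synaptic delay and normalized so
    that its maximum equals g_peak. Times in ms."""
    tt = np.clip(np.asarray(t, dtype=float) - delay, 0.0, None)
    # time of the peak of the (unnormalized) difference of exponentials
    t_peak = tau_r * tau_d / (tau_d - tau_r) * np.log(tau_d / tau_r)
    norm = np.exp(-t_peak / tau_d) - np.exp(-t_peak / tau_r)
    return g_peak * (np.exp(-tt / tau_d) - np.exp(-tt / tau_r)) / norm

# PVBC → PC parameters from Table 3 (nS, ms)
t = np.arange(0.0, 20.0, 0.01)
g = biexp_conductance(t, g_peak=0.65, tau_r=0.3, tau_d=3.3, delay=1.1)
```

With τd > τr, the waveform rises quickly, peaks at t_peak after the delay, and decays with τd, giving the fast IPSCs that pace ripple-frequency oscillations in the model.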

Additional files
