Synaptic learning rules for sequence learning
Abstract
Remembering the temporal order of a sequence of events is a task easily performed by humans in everyday life, but the underlying neuronal mechanisms are unclear. This problem is particularly intriguing as human behavior often proceeds on a time scale of seconds, which is in stark contrast to the much faster millisecond timescale of neuronal processing in our brains. One long-held hypothesis in sequence learning suggests that a particular temporal fine structure of neuronal activity — termed ‘phase precession’ — enables the compression of slow behavioral sequences down to the fast time scale of the induction of synaptic plasticity. Using mathematical analysis and computer simulations, we find that — for short enough synaptic learning windows — phase precession can improve temporal-order learning tremendously and that the asymmetric part of the synaptic learning window is essential for temporal-order learning. To test these predictions, we suggest experiments that selectively alter phase precession or the learning window and evaluate memory of temporal order.
Introduction
It is a pivotal quality for animals to be able to store and recall the order of events (‘temporal-order learning’; Kahana, 1996; Fortin et al., 2002; Lehn et al., 2009; Bellmund et al., 2020), but there is little work on the neural mechanisms generating asymmetric memory associations across behavioral time intervals (Drew and Abbott, 2006). Putative mechanisms need to bridge the gap between the faster time scale of the induction of synaptic plasticity (typically milliseconds) and the slower time scale of behavioral events (seconds or slower). The slower time scale of behavioral events is mirrored, for example, in the time course of firing rates of hippocampal place cells (O'Keefe and Dostrovsky, 1971), which signal when an animal visits certain locations (‘place fields’) in the environment. The faster time scale is given by the temporal properties of the induction of synaptic plasticity (Markram et al., 1997; Bi and Poo, 1998) — and spike-timing-dependent plasticity (STDP) is a common form of synaptic plasticity that depends on the millisecond timing and temporal order of presynaptic and postsynaptic spiking. For STDP, the so-called ‘learning window’ describes the temporal intervals at which presynaptic and postsynaptic activity induce synaptic plasticity. Such precisely timed neural activity can be generated by phase precession, which is the successive across-cycle shift of spike phases from late to early with respect to a background oscillation (Figure 1). As an animal explores an environment, phase precession can be observed in the activity of hippocampal place cells with respect to the theta oscillation (O'Keefe and Recce, 1993; Buzsáki, 2002; Qasim et al., 2021). Phase precession is highly significant in single trials (Schmidt et al., 2009; Reifenstein et al., 2012) and occurs even in first traversals of a place field in a novel environment (Cheng and Frank, 2008).
Interestingly, phase precession allows for a temporal compression of a sequence of behavioral events from the time scale of seconds down to milliseconds (Figure 1; Skaggs et al., 1996; Tsodyks et al., 1996; Cheng and Frank, 2008), which matches the widths of generic STDP learning windows (Abbott and Nelson, 2000; Bi and Poo, 2001; Froemke et al., 2005; Wittenberg and Wang, 2006). This putative advantage of phase precession for temporal-order learning, however, has not yet been quantified. To assess the benefit of phase precession for temporal-order learning, we determine the synaptic weight change between pairs of cells whose activity represents two events of a sequence. Using both analytical methods and numerical simulations, we find that phase precession can dramatically facilitate temporal-order learning by increasing the synaptic weight change and the signal-to-noise ratio by up to an order of magnitude. We thus provide a mechanistic description of associative chaining models (Lewandowsky and Murdock, 1989) and extend these models to explain how to store serial order.
Results
To address the question of how behavioral sequences could be encoded in the brain, we study the change of synapses between neurons that represent events in a sequence. We assume that the temporal order of two events is encoded in the asymmetry of the efficacies of synapses that connect neurons representing the two events (Figure 1). After the successful encoding of a sequence, a neuron that was activated earlier in the sequence has a strengthened connection to a neuron that was activated later in the sequence, whereas the connection in the reverse direction may be unchanged or even weakened. As a result, when the first event is encountered and/or the first neuron is activated, the neuron representing the second event is activated. Consequently, the behavioral sequence could be replayed (as illustrated by simulations, for example, in Tsodyks et al., 1996; Sato and Yamaguchi, 2003; Leibold and Kempter, 2006; Shen et al., 2007; Cheng, 2013; Chenkov et al., 2017; Malerba and Bazhenov, 2019; Gillett et al., 2020) and the memory of the temporal order of events can be recalled (Diba and Buzsáki, 2007; Schuck and Niv, 2019). We note, however, that in what follows we do not simulate such a replay of sequences, which would also depend on a vast number of parameters that define the network; instead, we focus on the underlying change in connectivity, which is the very basis of replay, and draw connections to ‘replay’ in the Discussion.
Let us now illustrate key features of the encoding of the temporal order of sequences. To do so, we consider the weight change induced by the activity of two sequentially activated cells $i$ and $j$ that represent two behavioral events (dashed lines in Figure 2A). Classical Hebbian learning (Hebb, 1949), where weight changes $\mathrm{\Delta}{w}_{ij}$ depend on the product of the firing rates ${f}_{i}$ and ${f}_{j}$, is not suited for temporal-order learning because the weight change is independent of the order of cells:
Therefore, a classical Hebbian weight change is symmetric, that is, $\mathrm{\Delta}{w}_{ij}-\mathrm{\Delta}{w}_{ji}=0$. This result can be generalized to learning rules that are based on the product of two arbitrary functions of the firing rates. We note that, although not suited for temporal-order learning, Hebbian rules are able to achieve more general ‘sequence learning’, where an association between sequence elements is created — independent of the order of events. To become sensitive to temporal order, we use spike-timing-dependent plasticity (STDP; Markram et al., 1997; Bi and Poo, 1998). For STDP, average weight changes depend on the cross-correlation function of the firing rates (example in Figure 2C,D),
which is antisymmetric: ${C}_{ij}(t)={C}_{ji}(-t)$. Assuming additive STDP, that is, weight changes resulting from pairs of pre- and postsynaptic action potentials are added, the average synaptic weight change $\mathrm{\Delta}{w}_{ij}$ between the two cells in a sequence can then be calculated explicitly (Kempter et al., 1999):
where $W$ is the STDP learning window (example in Figure 2E). We aim to solve Equation 1 for given firing rates ${f}_{i}$ and ${f}_{j}$. To do so, we assume that the synaptic weight ${w}_{ij}$ is generally small and thus has only a weak impact on the cross-correlation of the cells during encoding; that is, for the ‘encoding’ of a sequence, the cross-correlation function is dominated by feedforward input, whereas recurrent inputs are neglected.
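As a concrete illustration, Equation 1 can be evaluated on a time grid. The following sketch is not part of the original analysis; it uses illustrative parameters (unit-amplitude Gaussian fields with $\sigma = T_{ij} = 0.3$ s, an odd exponential window with an assumed time constant of 50 ms) to verify the antisymmetry relation ${C}_{ij}(t)={C}_{ji}(-t)$ and to show that an odd window produces weight changes that encode temporal order:

```python
import numpy as np

dt = 1e-3
t = np.arange(-2.0, 2.5, dt)
sigma, T = 0.3, 0.3
f_i = np.exp(-t**2 / (2 * sigma**2))          # firing rate of cell i
f_j = np.exp(-(t - T)**2 / (2 * sigma**2))    # firing rate of cell j

def cross_corr(f_pre, f_post):
    """C(s) = integral f_pre(u) f_post(u+s) du, on a grid of lags."""
    C = np.convolve(f_post, f_pre[::-1]) * dt
    lags = (np.arange(C.size) - (f_pre.size - 1)) * dt
    return lags, C

lags, C_ij = cross_corr(f_i, f_j)
_, C_ji = cross_corr(f_j, f_i)
print(np.max(np.abs(C_ij - C_ji[::-1])))      # ~0: C_ij(t) = C_ji(-t)

# Odd STDP window with illustrative time constant and learning rate
mu_lr, tau = 1.0, 0.05
W = mu_lr * np.sign(lags) * np.exp(-np.abs(lags) / tau)

dw_ij = np.sum(W * C_ij) * dt                 # Equation 1
dw_ji = np.sum(W * C_ji) * dt
print(dw_ij, dw_ji)                           # dw_ij > 0 and dw_ji = -dw_ij
```

The positive $\mathrm{\Delta}{w}_{ij}$ and negative $\mathrm{\Delta}{w}_{ji}$ reflect the order of the two fields, as assumed in Figure 1.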
Next, let us show that the asymmetry of $W$ is essential for temporal-order learning. Any learning window $W$ can be split up into an even part ${W}^{\text{even}}$, with ${W}^{\text{even}}(-t)={W}^{\text{even}}(t)$, and an odd part ${W}^{\text{odd}}$, with ${W}^{\text{odd}}(-t)=-{W}^{\text{odd}}(t)$, such that $W={W}^{\text{even}}+{W}^{\text{odd}}$. For even learning windows, one can derive from Equation 1 and the antisymmetry of ${C}_{ij}$ that weight changes are symmetric, that is, $\mathrm{\Delta}{w}_{ij}=\mathrm{\Delta}{w}_{ji}$; therefore, only the odd part ${W}^{\text{odd}}$ of $W$ is useful for learning temporal order.
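This even/odd decomposition can be checked numerically. A minimal sketch with illustrative parameters; the even and odd windows below are assumed exponential shapes, not taken from the paper:

```python
import numpy as np

dt = 1e-3
t = np.arange(-2.0, 2.5, dt)
sigma, T = 0.3, 0.3
f_i = np.exp(-t**2 / (2 * sigma**2))          # cell i, active first
f_j = np.exp(-(t - T)**2 / (2 * sigma**2))    # cell j, active second

def weight_change(f_pre, f_post, window):
    """Equation 1: integral of window(lag) times the cross-correlation."""
    C = np.convolve(f_post, f_pre[::-1]) * dt
    lags = (np.arange(C.size) - (f_pre.size - 1)) * dt
    return np.sum(window(lags) * C) * dt

tau = 0.05                                               # illustrative width
W_even = lambda s: np.exp(-np.abs(s) / tau)              # even part
W_odd = lambda s: np.sign(s) * np.exp(-np.abs(s) / tau)  # odd part

# Even window: symmetric weight changes, no temporal-order information
print(weight_change(f_i, f_j, W_even) - weight_change(f_j, f_i, W_even))  # ~0
# Odd window: antisymmetric weight changes, encoding temporal order
print(weight_change(f_i, f_j, W_odd) + weight_change(f_j, f_i, W_odd))    # ~0
```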
To further explore requirements for encoding the temporal order of a sequence of events, we restrict our analysis to odd learning windows. We can then relate the weight change $\mathrm{\Delta}{w}_{ij}$ to the essential features of ${C}_{ij}(t)$. To do so, we integrate Equation 1 by parts (with $W$ replaced by ${W}^{\text{odd}}$),
with the primitive $\overline{{W}^{\text{odd}}}(t):={\int}_{-\mathrm{\infty}}^{t}\text{d}{t}^{\prime}\,{W}^{\text{odd}}({t}^{\prime})$ and the derivative ${C}_{ij}^{\prime}(t):=\frac{\text{d}}{\text{d}t}{C}_{ij}(t)$. Because $\overline{{W}^{\text{odd}}}(t)$ can be assumed to have finite support (note that ${\int}_{-\mathrm{\infty}}^{+\mathrm{\infty}}\text{d}t\,{W}^{\text{odd}}(t)=0$), the first term in Equation 2 vanishes. Also the learning window has finite support, and therefore we can restrict the integral in the second term in Equation 2 to a finite region of width $K$ around zero:
where $K$ describes the width of the learning window $W$ (gray region in Figure 2E). The integral in Equation 3 can be interpreted as the cross-correlation’s slope around zero, weighted by the symmetric function $\overline{{W}^{\text{odd}}}(t)$; interestingly, features of ${C}_{ij}$ for $t\gg K$, for example whether side lobes of the correlation function are decreasing or not, are irrelevant.
As a generic example of sequence learning, let us consider the activities of two cells $i$ and $j$ that encode two behavioral events, for example the traversal of two place fields of two hippocampal place cells. In general, the cells’ responses to these events are called ‘firing fields’. We model these firing fields as two Gaussian functions ${G}_{0,\sigma}$ and ${G}_{{T}_{ij},\sigma}$ that have the same width $\sigma $ but different mean values 0 and ${T}_{ij}$ (we note that ${T}_{ij}$ and $\sigma $ are measured in units of time, that is, seconds; Figure 2A, dashed curves). In this case of identical Gaussian shapes of the two firing fields, the cross-correlation ${C}_{ij}(t)$ is also a Gaussian function, denoted by ${G}_{{T}_{ij},\sqrt{2}\sigma}$, but with mean ${T}_{ij}$ and width $\sqrt{2}\sigma $ (dashed curve in Figure 2C). The value $\sigma =0.3$ s, which we use in the example of Figure 2, matches experimental findings on place cells (O'Keefe and Recce, 1993; Geisler et al., 2010).
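The $\sqrt{2}$-widening of the cross-correlation can be verified numerically; a short sketch with the paper's $\sigma = 0.3$ s and an illustrative separation ${T}_{ij} = 0.3$ s:

```python
import numpy as np

dt = 1e-3
t = np.arange(-2.0, 2.5, dt)
sigma, T = 0.3, 0.3

# Normalized Gaussian firing fields G_{0,sigma} and G_{T,sigma}
def G(mu, s):
    return np.exp(-(t - mu)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

f_i, f_j = G(0.0, sigma), G(T, sigma)

# Cross-correlation C_ij(s) = integral f_i(u) f_j(u+s) du
C = np.convolve(f_j, f_i[::-1]) * dt
lags = (np.arange(C.size) - (t.size - 1)) * dt

# Prediction: a Gaussian with mean T and width sqrt(2)*sigma
C_pred = np.exp(-(lags - T)**2 / (4 * sigma**2)) / (np.sqrt(4 * np.pi) * sigma)
print(np.max(np.abs(C - C_pred)))       # tiny numerical error
print(lags[np.argmax(C)])               # peak at lag T = 0.3 s
```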
It is widely assumed that phase precession facilitates temporal-order learning (Skaggs et al., 1996; Dragoi and Buzsáki, 2006; Schmidt et al., 2009), but this has never been shown quantitatively. To test this hypothesis and to calculate how much phase precession contributes to temporal-order learning, we consider Gaussian firing fields that exhibit oscillatory modulations with theta frequency $\omega $ (Figure 2A, solid curves). The time-dependent firing rate of cell $i$ is described by ${f}_{i}(t)\propto {G}_{{\mu}_{i},\sigma}(t)\left\{1+\mathrm{cos}[\omega (t-c{\mu}_{i})]\right\}$, that is, a Gaussian that is multiplied by a sinusoidal oscillation; see also Equation 11 in Materials and methods. Phase precession occurs with respect to the population theta, which oscillates at a frequency $(1-c)\omega $ that is slightly smaller than $\omega $, with a ‘compression factor’ $c$ that is usually small: $0\le c\ll 1$ (Dragoi and Buzsáki, 2006; Geisler et al., 2010). This compression factor $c$ describes the average advance of the firing phase — from theta cycle to theta cycle — in units of the fraction of a theta cycle; $c$ thus determines the slope $\omega c$ of phase precession (Figure 2B). A typical value is $c\approx \pi /(4\sigma \omega )$, which accounts for ‘slope-size matching’ of phase precession (Geisler et al., 2010); that is, $c$ is inversely proportional to the field size $L:=4\sigma $ of the firing field, and the total range of phase precession within the firing field is constant and equals $\pi \equiv {180}^{\circ}$. If there are multiple theta oscillation cycles within a firing field ($\omega \sigma \gg 1$), which is typical for place cells, the cross-correlation ${C}_{ij}(t)$ is a theta-modulated Gaussian (solid curve in Figure 2C; see also Equation 15 in Materials and methods).
The generic shape of the cross-correlation ${C}_{ij}$ in Figure 2C allows for an advanced interpretation of Equation 3, which critically depends on the width $K$ of the learning window $W$. We distinguish two limiting cases: narrow learning windows ($K\ll 1/\omega \ll \sigma $), in which the width $K$ of the learning window is much smaller than a theta cycle and the width of a firing field, and wide learning windows ($K\gg \sigma $), in which the width $K$ of the learning window exceeds the width of a firing field. Let us first consider narrow learning windows; we will turn to the case of wide learning windows later in this manuscript.
Dependence of temporal-order learning on the overlap of firing fields for narrow learning windows ($K\ll 1/\omega \ll \sigma $)
We first show formally that sequence learning with narrow learning windows requires that the two firing fields overlap, that is, their separation ${T}_{ij}$ should be less than or at least similar to the width $\sigma $ of the firing fields. In Equation 3, which was derived for odd learning windows, the weight change $\mathrm{\Delta}{w}_{ij}$ is determined by ${C}_{ij}^{\prime}(t)$ around $t=0$ in a region of width $K$. For narrow learning windows ($K\ll 1/\omega $), this region is small compared to a theta oscillation cycle and much smaller than the width $\sigma $ of a firing field. Because the envelope of the cross-correlation ${C}_{ij}(t)$ is a Gaussian with mean ${T}_{ij}$ and width $\sqrt{2}\sigma $, the slope ${C}_{ij}^{\prime}(t=0)$ scales with the Gaussian factor ${G}_{{T}_{ij},\sqrt{2}\sigma}(0)\propto \mathrm{exp}[-{T}_{ij}^{2}/(4{\sigma}^{2})]$. The weight change $\mathrm{\Delta}{w}_{ij}$ therefore strongly depends on the separation ${T}_{ij}$ of the firing fields. When the two firing fields do not overlap (${T}_{ij}\gg \sigma $), the factor $\mathrm{exp}[-{T}_{ij}^{2}/(4{\sigma}^{2})]$ quickly tends to zero, and sequence learning is not possible. On the other hand, when the two firing fields do have considerable overlap (${T}_{ij}\lesssim \sigma $) we have $\mathrm{exp}[-{T}_{ij}^{2}/(4{\sigma}^{2})]\lesssim 1$. In this case, sequence learning may be feasible with narrow learning windows. In this section, we will proceed with the mathematical analysis for overlapping fields, which allows us to assume $\mathrm{exp}[-{T}_{ij}^{2}/(4{\sigma}^{2})]\approx 1$.
For overlapping firing fields (${T}_{ij}\lesssim \sigma $), let us now consider the fine structure of the cross-correlation ${C}_{ij}(t)$ for $|t|<K$, as illustrated in Figure 2D. Importantly, phase precession causes the first positive peak (i.e. for $t>0$) of ${C}_{ij}$ to occur at time $c{T}_{ij}$ with $c\ll 1$ (Dragoi and Buzsáki, 2006; Geisler et al., 2010); phase precession also increases the slope ${C}_{ij}^{\prime}(t)$ around $t=0$, which could be beneficial for temporal-order learning according to Equation 3. To quantify this effect, we calculated the cross-correlation’s slope at $t=0$ (see also Equation 18 in Materials and methods):
How does ${C}_{ij}^{\prime}(0)$ depend on the temporal separation ${T}_{ij}$ of the firing fields? If the two fields overlap entirely (${T}_{ij}=0$), the sequence has no defined temporal order, and thus ${C}_{ij}^{\prime}(0)$ is zero. For at least partly overlapping firing fields (${T}_{ij}\lesssim \sigma $) and typical phase precession where $c=\pi /(4\omega \sigma )\ll 1$, we will show in the next paragraph (and explain in Materials and methods in the text below Equation 18) that the second addend in Equation 4 dominates the other two. In this case, ${C}_{ij}^{\prime}(0)$ is much higher than the cross-correlation slope in the absence of phase precession ($c=0$), leading to a clearly larger synaptic weight change for phase precession. The maximum of ${C}_{ij}^{\prime}(0)$ is mainly determined by this second addend (multiplied by ${G}_{{T}_{ij},\sqrt{2}\sigma}(0)$), and it can be shown (see Materials and methods) that this maximum is located near ${T}_{ij}\approx \sqrt{2}\sigma $.
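Both effects of phase precession, the compressed first correlation peak near $c{T}_{ij}$ and the steepened slope at $t=0$, can be reproduced numerically. The sketch below uses the firing-rate model quoted above with illustrative parameters ($\sigma = T_{ij} = 0.3$ s, 10 Hz theta, slope-size matching):

```python
import numpy as np

dt = 2e-4
t = np.arange(-1.2, 1.8, dt)
sigma, T = 0.3, 0.3
omega = 2 * np.pi * 10.0            # 10 Hz theta
c = np.pi / (4 * sigma * omega)     # slope-size matching

def rate(mu, comp):
    """Theta-modulated Gaussian firing field with compression factor comp."""
    env = np.exp(-(t - mu)**2 / (2 * sigma**2))
    return env * (1 + np.cos(omega * (t - comp * mu)))

def corr(f_pre, f_post):
    """C(s) = integral f_pre(u) f_post(u+s) du on a grid of lags."""
    C = np.convolve(f_post, f_pre[::-1]) * dt
    lags = (np.arange(C.size) - (t.size - 1)) * dt
    return lags, C

lags, C_pp = corr(rate(0.0, c), rate(T, c))       # phase precession
_, C_lock = corr(rate(0.0, 0.0), rate(T, 0.0))    # phase locking

# First positive peak of C_pp: near the compressed interval c*T
first_half_cycle = (lags > 0) & (lags < np.pi / omega)
t_peak = lags[first_half_cycle][np.argmax(C_pp[first_half_cycle])]
print(t_peak, c * T)              # about 0.014 s, close to 0.0125 s

# Phase precession steepens the slope of the cross-correlation at zero lag
i0 = t.size - 1                   # index of lag 0
slope_pp = (C_pp[i0 + 1] - C_pp[i0 - 1]) / (2 * dt)
slope_lock = (C_lock[i0 + 1] - C_lock[i0 - 1]) / (2 * dt)
print(slope_pp / slope_lock)      # close to an order of magnitude
```

With these parameters the peak sits within about 2 ms of $c{T}_{ij}\approx 12.5$ ms (slightly shifted by the Gaussian envelope), and the slope ratio is close to ten.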
The increase of ${C}_{ij}^{\prime}(0)$ induced by phase precession can be exploited by learning windows $W$ that are narrower than a theta cycle (e.g. gray regions in Figure 2C,D,E). To quantify this effect, let us consider a simple but generic shape of a learning window, for example, the odd STDP window $W(t)=\mu\,\mathrm{sign}(t)\,\mathrm{exp}(-|t|/\tau )$ with time constant $\tau $ and learning rate $\mu >0$ (Figure 2E); this STDP window is narrow for $\tau \ll 1/\omega $. Equations 3 and 4 then lead to (see Materials and methods, Equation 19) the average weight change
where $A$ denotes the number of spikes per field traversal. Note that, according to Equation 3, the weight change $\mathrm{\Delta}{w}_{ij}$ in Equation 5 can be interpreted as a time-averaged version of ${C}_{ij}^{\prime}(t)$ near $t=0$ from Equation 4. Thus, Equations 4 and 5 have a similar structure, but Equation 5 includes multiple incidences of the term ${\omega}^{2}{\tau}^{2}$ that account for this averaging. This term is small for narrow learning windows ($\tau \ll 1/\omega $) and can thus be neglected (${\omega}^{2}{\tau}^{2}\ll 1$) in this limiting case; however, for typical biological values of $\tau \ge 10$ ms and $\omega =2\pi \cdot 10$ Hz, the particular structure of the ${\omega}^{2}{\tau}^{2}$-containing factor in the third addend in the square brackets is the reason why this addend can be neglected compared to the first one; as a result, the cases of ‘phase locking’ ($c=0$) and ‘no theta’ (only the first addend remains) are basically indistinguishable. Moreover, for narrow odd learning windows, $\mathrm{\Delta}{w}_{ij}$ in Equation 5 inherits a number of properties from ${C}_{ij}^{\prime}(0)$ in Equation 4: the second addend remains the dominant one for ${T}_{ij}\lesssim \sigma $; inherited are also the absence of a weight change for fully overlapping fields ($\mathrm{\Delta}{w}_{ij}=0$ for ${T}_{ij}=0$), the maximum weight change for ${T}_{ij}\approx \sqrt{2}\sigma $, and $\mathrm{\Delta}{w}_{ij}\to 0$ for ${T}_{ij}\to \mathrm{\infty}$ (Figure 3A). Furthermore, the prefactor ${A}^{2}\mu {\tau}^{2}$ in Equation 5 suggests that the average weight change increases with increasing width $\tau $ of the learning window, but we emphasize that this increase is restricted to $\tau \ll 1/\omega $ (as we assumed for the derivation), which prohibits a generalization of the quadratic scaling to large $\tau $; the exact dependence on $\tau $ will be explained later.
To quantify how much better a sequence can be learned with phase precession as compared to phase locking, we use the ratio of the weight change $\mathrm{\Delta}{w}_{ij}$ with phase precession ($c>0$) to the weight change $\mathrm{\Delta}{w}_{ij}(c=0)$ without phase precession (Figure 3A), and define the benefit $B$ of phase precession as
By inserting Equation 5 in Equation 6, we can explicitly calculate the benefit $B$ of phase precession (see Equation 20 in Materials and methods and solid line in Figure 3B). For ${T}_{ij}\lesssim \sigma $ and ${\omega}^{4}{\tau}^{4}\ll 1$ (see Materials and methods) the benefit $B$ is well approximated by a Taylor expansion up to third order in ${T}_{ij}$ (dashed line in Figure 3B),
The maximum of $B$ as a function of ${T}_{ij}$ is obtained for ${T}_{ij}=0$ (fully overlapping fields), but the average weight change $\mathrm{\Delta}{w}_{ij}$ is zero at this point. We note, however, that $B$ decays slowly with increasing ${T}_{ij}$, so $B({T}_{ij}=0)$ can be used to approximate the benefit for small field separations ${T}_{ij}$ (i.e. largely overlapping fields). For narrow ($\omega \tau \ll 1$) odd STDP windows and slope-size matching ($\omega \sigma c=\pi /4$), we find the maximum ${B}_{\text{max}}\approx \omega \sigma /2$, which has an interesting interpretation: If we relate $\sigma $ to the field size $L$ of a Gaussian firing field through $L=4\sigma $ and if we relate the frequency $\omega $ to the period ${T}_{\theta}$ of a theta oscillation cycle through ${T}_{\theta}=2\pi /\omega $, we obtain ${B}_{\text{max}}\approx 0.82\,L/{T}_{\theta}$, that is, the maximum benefit of phase precession is about the number of theta oscillation cycles in a firing field. The example in Figure 3B (with firing fields in Figure 2A) has the maximum benefit ${B}_{\text{max}}\approx 10$, and the benefit remains in this range for partly overlapping firing fields ($0<{T}_{ij}\lesssim \sigma $). We thus conclude that phase precession can boost temporal-order learning by about an order of magnitude for typical cases in which learning windows are narrower than a theta oscillation cycle and overlapping firing fields are an order of magnitude wider than a theta oscillation cycle.
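Numerically, the benefit can be estimated directly from Equation 1. Since the displayed definition of $B$ is not reproduced in this excerpt, the sketch below assumes $B=\mathrm{\Delta}{w}_{ij}(c)/\mathrm{\Delta}{w}_{ij}(c=0)-1$, which is consistent with the benefit approaching zero for wide windows; parameters are illustrative ($\tau = 3$ ms, $\sigma = T_{ij} = 0.3$ s):

```python
import numpy as np

dt = 2e-4
t = np.arange(-1.2, 1.8, dt)
sigma, T = 0.3, 0.3
omega = 2 * np.pi * 10.0
c = np.pi / (4 * sigma * omega)   # slope-size matching
mu_lr, tau = 1.0, 0.003           # narrow odd window: omega * tau ~ 0.19

def rate(mu, comp):
    env = np.exp(-(t - mu)**2 / (2 * sigma**2))
    return env * (1 + np.cos(omega * (t - comp * mu)))

def weight_change(comp):
    """Equation 1 with the odd window W(s) = mu*sign(s)*exp(-|s|/tau)."""
    C = np.convolve(rate(T, comp), rate(0.0, comp)[::-1]) * dt
    lags = (np.arange(C.size) - (t.size - 1)) * dt
    W = mu_lr * np.sign(lags) * np.exp(-np.abs(lags) / tau)
    return np.sum(W * C) * dt

dw_pp = weight_change(c)       # with phase precession
dw_lock = weight_change(0.0)   # with phase locking
B = dw_pp / dw_lock - 1        # assumed definition of the benefit
print(B)                       # close to an order of magnitude
```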
So far, we have considered ‘average’ weight changes that resulted from neural activity that was described by a deterministic firing rate. However, neural activity often shows large variability, that is, different traversals of the same firing field typically lead to very different spike trains. To account for such variability, we have simulated neural activity as inhomogeneous Poisson processes (see Materials and methods for details). As a result, the change of the weight of a synapse, which depends on the correlation between spikes of the presynaptic and the postsynaptic cells, is a stochastic variable. It is important to consider the variability of the weight change (‘noise’) in order to assess the significance of the average weight change. For this reason, we utilize the signal-to-noise ratio (SNR), that is, the mean weight change divided by its standard deviation (see Materials and methods for details). To do so, we perform stochastic simulations of spiking neurons and calculate the average weight change and its variability across trials. This is done for phase-precessing as well as phase-locked activity. To connect this approach to our previous results, we confirm that the average weight changes estimated from many field traversals match the analytical predictions well (Figure 3A and B; see Materials and methods for details).
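A minimal version of such a stochastic simulation is sketched below. For brevity it omits the theta modulation (plain Gaussian fields, an assumed $\tau$ of 100 ms) and exploits that an inhomogeneous Poisson process with a Gaussian rate profile can be sampled by drawing a Poisson spike count and Gaussian spike times; it checks that the trial-averaged weight change matches Equation 1:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, T = 0.3, 0.3       # field width and separation (s)
A = 10                    # mean number of spikes per field traversal
mu_lr, tau = 1.0, 0.1     # odd window W(t) = mu*sign(t)*exp(-|t|/tau), tau assumed

def dw_one_traversal():
    """Poisson spikes from Gaussian fields, then additive pairwise STDP."""
    s_i = rng.normal(0.0, sigma, rng.poisson(A))   # spike times of cell i
    s_j = rng.normal(T, sigma, rng.poisson(A))     # spike times of cell j
    d = s_j[:, None] - s_i[None, :]                # post-minus-pre lags
    return mu_lr * np.sum(np.sign(d) * np.exp(-np.abs(d) / tau))

changes = np.array([dw_one_traversal() for _ in range(20000)])
mean, std = changes.mean(), changes.std()
snr = mean / std

# Compare with Equation 1, using C_ij = A^2 * G_{T, sqrt(2)*sigma}
lag = np.arange(-4.0, 4.0, 1e-3)
C = A**2 * np.exp(-(lag - T)**2 / (4 * sigma**2)) / (np.sqrt(4 * np.pi) * sigma)
W = mu_lr * np.sign(lag) * np.exp(-np.abs(lag) / tau)
dw_theory = np.sum(W * C) * 1e-3
print(mean, dw_theory, snr)   # empirical mean matches Equation 1; modest SNR
```

The large trial-to-trial standard deviation relative to the mean is exactly the noise problem discussed next.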
The SNR shown in Figure 3C summarizes how reliable the learning signal is in a single traversal of the two firing fields — for the assumed odd learning window. The SNR further depends on ${T}_{ij}$ and follows a shape similar to that of the weight changes in Figure 3A. For phase precession, there is a maximum SNR that is slightly shifted to larger ${T}_{ij}$; for phase locking, the SNR is always much lower. For the synapse connecting two cells with firing fields as in Figure 2A, where ${T}_{ij}=\sigma $, we find an SNR of 0.27, which is insufficient for a reliable representation of a sequence.
To allow reliable temporal-order learning, one possible solution is to increase the number of spikes per field traversal $A$ ($\text{SNR}\propto \sqrt{A}$, as shown in Appendix 1). Another possibility is to increase the number of synapses. In Materials and methods we show that $\text{SNR}\propto \sqrt{M}$, where $M$ is the number of identical and uncorrelated synapses. Therefore, to achieve $\text{SNR}\gtrsim 1$ for $A=10$, one needs $M\gtrsim 14$ synapses.
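The required number of synapses follows directly from the $\sqrt{M}$ scaling and the single-synapse SNR of 0.27 quoted above:

```python
import math

snr_single = 0.27   # single-synapse SNR at T_ij = sigma (value from the text)

# SNR grows as sqrt(M) for M identical, uncorrelated synapses, so
# snr_single * sqrt(M) >= 1 requires M >= (1 / snr_single)**2 ~= 13.7
M_min = math.ceil((1.0 / snr_single) ** 2)
print(M_min)        # 14
```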
In summary, for narrow, odd learning windows ($\tau \ll 1/\omega \ll \sigma $), temporal-order learning could benefit tremendously from phase precession as long as firing fields have some overlap. Average weight changes and the SNR are highest, however, for clearly distinct but still overlapping firing fields. It should be noted that any even component of the learning window would increase the noise and thus further decrease the SNR.
Dependence of temporal-order learning on the width of the learning window for overlapping firing fields
To investigate how temporal-order learning for an odd learning window depends on its width, we vary the parameter $\tau $ and quantify the average synaptic weight change $\mathrm{\Delta}{w}_{ij}$ and the SNR both analytically and numerically. We first study overlapping firing fields (Figure 4) and later consider non-overlapping firing fields (Figure 5).
For partly overlapping firing fields (e.g. ${T}_{ij}=\sigma $), we find numerically that the average synaptic weight change $\mathrm{\Delta}{w}_{ij}$ (the ‘learning signal’) increases monotonically for increasing $\tau $ and saturates (colored curves in Figure 4A). This is because for increasing $\tau $ the overlap between the learning window and the cross-correlation function grows, and this overlap begins to saturate as soon as the learning window is wider than ${T}_{ij}$, that is, the value at which the cross-correlation assumes its maximum (compare dashed curve in Figure 2C). To analytically calculate the saturation value of $\mathrm{\Delta}{w}_{ij}$ for large learning-window widths ($\tau \gg \sigma $), we can approximate the learning window as a step function (see Materials and methods for details) and find the maximum
that provides an upper bound to the weight change for overlapping firing fields (solid line in Figure 4A). For $\tau \lesssim 1/\omega $ (and actually well beyond this region), the analytical small-$\tau $ approximation of $\mathrm{\Delta}{w}_{ij}$ (Equation 5, dashed curves in Figure 4A) matches the numerical results well.
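The saturation can be illustrated numerically. The sketch below drops the theta modulation (which, as argued in the next paragraph, is irrelevant for $\tau \gg 1/\omega $) and checks that $\mathrm{\Delta}{w}_{ij}$ grows monotonically with $\tau $ toward the step-window limit: for $\tau\to\mathrm{\infty}$, $W\to\mu\,\mathrm{sign}(t)$, and integrating the Gaussian cross-correlation over positive minus negative lags gives $\mu {A}^{2}\,\mathrm{erf}[{T}_{ij}/(2\sigma)]$ (derived here from the Gaussian form of ${C}_{ij}$, not quoted from Equation 7):

```python
import numpy as np
from math import erf

sigma, T, A, mu_lr = 0.3, 0.3, 10, 1.0
lag = np.arange(-4.0, 4.0, 1e-3)

# Gaussian cross-correlation C_ij = A^2 * G_{T, sqrt(2)*sigma} (no theta)
C = A**2 * np.exp(-(lag - T)**2 / (4 * sigma**2)) / (np.sqrt(4 * np.pi) * sigma)

def dw(tau):
    W = mu_lr * np.sign(lag) * np.exp(-np.abs(lag) / tau)
    return np.sum(W * C) * 1e-3              # Equation 1

taus = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 100.0]
vals = [dw(tau) for tau in taus]             # monotonically increasing in tau

# Step-window limit: saturation at mu * A^2 * erf(T / (2*sigma))
dw_max = mu_lr * A**2 * erf(T / (2 * sigma))
print(vals[-1], dw_max)                      # both close to 52
```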
The results in Figure 4A confirm that $\mathrm{\Delta}{w}_{ij}$ is increased by phase precession for narrow learning windows but is independent of phase precession for $\tau \gg 1/\omega $. Thus, the benefit $B$ becomes small for large $\tau $ (Figure 4B) because, for large enough $\tau $, the theta oscillation completes multiple cycles within the width of the learning window. To better understand this behavior, let us return to Equation 1: if the product of a wide learning window and the cross-correlation ${C}_{ij}$ is integrated to obtain the weight change, the oscillatory modulation of the cross-correlation (e.g. as in Figure 2C) becomes irrelevant; similarly, according to Equation 3, the particular value of the derivative ${C}_{ij}^{\prime}(t)$ near $t=0$ can be neglected. Consequently, for $\tau \gg 1/\omega $, phase precession and phase locking as well as the scenario of firing fields that are not theta modulated yield the same weight change (Figure 4A), and the benefit approaches 0 (Figure 4B). Wide learning windows thus ignore the temporal (theta) fine structure of the cross-correlation.
How noisy is this learning signal $\mathrm{\Delta}{w}_{ij}$ across trials? Figure 4C shows that for odd learning windows the SNR increases with increasing $\tau $ and, for $\tau \gg \frac{1}{\omega}$, approaches a constant value. This constant value is the same for phase precession, phase locking, or no theta oscillations at all. Taken together, for large enough $\tau $, the advantage of phase precession vanishes. For small enough $\tau $, phase precession increases the SNR, which confirms and generalizes the results in Figure 3C. Remarkably, the SNR for ‘phase locking’ is lower than that for ‘no theta’, which means that theta oscillations without phase precession degrade temporal-order learning, even though theta oscillations as such have been emphasized to improve the modification of synaptic strength in many other cases (e.g. Buzsáki, 2002; D'Albis et al., 2015).
Figure 4C predicts that a large $\tau $ yields the largest SNR, and thus wide learning windows are the best choice for temporal-order learning; however, we note that this conclusion is restricted to odd (i.e. asymmetric) learning windows. An additional even (i.e. symmetric) component of a learning window would increase the noise without affecting the signal, and thus would decrease the SNR (dots in Figure 4C). It is remarkable that the only experimentally observed instance of a wide window (with $\tau \approx 1$ s in Bittner et al., 2017) has a strong symmetric component, which leads to a low SNR (dot marked ‘B’ in Figure 4C).
Taken together, we predict that temporalorder learning would strongly benefit from wide, asymmetric windows. However, to date, all experimentally observed (predominantly) asymmetric windows are narrow (e.g. Bi and Poo, 2001; Froemke et al., 2005; Wittenberg and Wang, 2006; see Abbott and Nelson, 2000; Bi and Poo, 2001 for reviews).
Temporal-order learning for wide learning windows ($K\gg \sigma $)
We finally restrict our analysis to wide learning windows, which then allows us to also consider non-overlapping firing fields (Figure 5A; we again use two Gaussians with width $\sigma $ and separation ${T}_{ij}$). To allow for temporal-order learning in this case, the spikes of two non-overlapping fields can only be ‘paired’ by a wide enough learning window. As already indicated in Figure 4, phase precession does not affect the weight change for such wide learning windows, where the width $\tau $ of the learning window obeys $\tau \gg 1/\omega $ (note that we always assumed many theta oscillation cycles within a firing field, that is, $1/\omega \ll \sigma $). Furthermore, Figure 4 indicated that only the asymmetric part of the learning window contributes to temporal-order learning. For the analysis of temporal-order learning with non-overlapping firing fields and wide learning windows, we thus ignore any theta modulation and phase precession and evaluate, again, only the odd STDP window $W(t)=\mu\,\mathrm{sign}(t)\,\mathrm{exp}(-|t|/\tau )$. In this case, the weight change (Equation 1) is still determined by the cross-correlation function and the learning window (examples in Figure 5B,C). The resulting weight change $\mathrm{\Delta}{w}_{ij}$ as a function of the temporal separation ${T}_{ij}$ of firing fields is shown in Figure 5D: with increasing ${T}_{ij}$, the weight change $\mathrm{\Delta}{w}_{ij}$ quickly increases, reaches a maximum, and slowly decreases. The initial increase is due to the increasing overlap of the Gaussian bump in ${C}_{ij}$ with the positive lobe of the learning window. The decrease, on the other hand, is dictated by the time course of the learning window. For $\tau \gg \sigma $, these two effects can be approximated by
in which the error function describes the overlap of the cross-correlation with the learning window and the exponential term describes the decay of the learning window (dashed black curve in Figure 5D, see also Equation 25 in Materials and methods for details).
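The accuracy of this approximation can be probed numerically. The sketch below assumes the approximate form $\mathrm{\Delta}{w}_{ij}\approx \mu {A}^{2}\,\mathrm{erf}[{T}_{ij}/(2\sigma)]\,\mathrm{exp}(-{T}_{ij}/\tau)$ (the exact expression is Equation 25 in Materials and methods) and compares it with a direct evaluation of Equation 1 for a wide window, $\tau = 3$ s $\gg \sigma = 0.3$ s:

```python
import numpy as np
from math import erf, exp

sigma, A, mu_lr = 0.3, 10, 1.0
tau = 3.0                                # wide window: tau >> sigma
lag = np.arange(-30.0, 30.0, 1e-3)
W = mu_lr * np.sign(lag) * np.exp(-np.abs(lag) / tau)

def dw(T):
    """Equation 1 for Gaussian fields separated by T (no theta modulation)."""
    C = A**2 * np.exp(-(lag - T)**2 / (4 * sigma**2)) / (np.sqrt(4 * np.pi) * sigma)
    return np.sum(W * C) * 1e-3

for T in [0.3, 0.6, 1.2, 2.4]:
    approx = mu_lr * A**2 * erf(T / (2 * sigma)) * exp(-T / tau)
    print(T, dw(T), approx)              # approximation within ~10 percent
```

The rise-then-decay shape of Figure 5D is reproduced: the erf factor dominates for small ${T}_{ij}$, the exponential decay for large ${T}_{ij}$.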
How does the SNR of the weight change depend on the separation ${T}_{ij}$ of firing fields? For ${T}_{ij}=0$, the signal is zero, and thus so is the SNR. As ${T}_{ij}$ increases, both signal and noise increase, but they quickly settle on a constant ratio. The height of this SNR plateau can be approximated by
(dashed line in Figure 5E), where $A$ is the number of spikes within a firing field (Equation 11). For $A=10$, we find $\text{SNR}\approx 2.2$, allowing for temporal-order learning with a single synapse. We note that this conclusion is limited to asymmetric STDP windows. A symmetric component (like in Bittner et al., 2017) decreases the SNR and makes temporal-order learning less efficient.
Taken together, temporal-order learning can be performed with wide STDP windows, and phase precession does not provide any benefit; but temporal-order learning requires a purely asymmetric plasticity window. For non-overlapping firing fields, wide learning windows are essential to bridge a temporal gap between the fields.
Discussion
In this report, we show that phase precession facilitates the learning of the temporal order of behavioral sequences for asymmetric learning windows that are shorter than a theta cycle. To quantify this improvement, we use additive, pairwise STDP and calculate the expected weight change for synapses between two activated cells in a sequence. We confirm the long-held hypothesis (Skaggs et al., 1996) that phase precession bridges the vastly different time scales of the slow sequence of behavioral events and the fast STDP rule. Synaptic weight changes can be an order of magnitude higher when phase precession organizes the spiking of multiple cells at the theta time scale, as compared to phase-locking cells.
Other mechanisms and models for sequence learning
As an alternative mechanism to bridge the time scales of behavioral events and the induction of synaptic plasticity, Drew and Abbott, 2006 suggested STDP and persistent activity of neurons that code for such events. The authors assume regularly firing neurons that slowly decrease their firing rate after the event and show that this leads to a temporal compression of the sequence of behavioral events. For stochastically firing neurons, this approach is similar to ours with two overlapping, unmodulated Gaussian firing fields. In this case, sequence learning is possible, but the efficiency can be improved considerably by phase precession.
Sato and Yamaguchi, 2003 as well as Shen et al., 2007 investigated the memory storage of behavioral sequences using phase precession and STDP in a network model. In computer simulations, they find that phase precession facilitates sequence learning, which is in line with our results. In contrast to these approaches, our study focuses on a minimal network (two cells), but this simplification allows us to (i) consider a biologically plausible implementation of STDP, firing fields, and phase precession and (ii) derive analytical results. These mathematical results predict parameter dependencies, which is difficult to achieve with only computer simulations.
Related to our work is also the approach by Masquelier et al., 2009, who showed that pattern detection can be performed by single neurons using STDP and phase coding, yet they did not include phase precession. They consider patterns in the input whereas, in our framework, it might be argued that patterns between input and output are detected instead.
Noisy activity of neurons and prediction of the minimum number of synapses for temporal-order learning
To account for stochastic spiking, we use Poisson neurons. We find that a single synapse is not sufficient to reliably encode a minimal two-neuron sequence in a single trial because the fluctuations of the weight change are too large. Fortunately, the SNR scales with $\sqrt{MA}$, that is, with the square root of the product of the number $M$ of identical, but independent, synapses and the number $A$ of spikes per field traversal of the neurons. For generic hippocampal place fields and typical STDP, we predict that about 14 synapses are sufficient to reliably encode temporal order in a single traversal. Interestingly, peak firing rates of place fields are remarkably high (up to 50 spikes/s; e.g. O'Keefe and Recce, 1993, Huxter et al., 2003). Taken together, in hippocampal networks, reliable encoding of the temporal order of a sequence is possible with a low number of synapses, which matches simulation results on memory replay (Chenkov et al., 2017).
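The $\sqrt{MA}$ scaling can be turned into a quick estimate of the required number of synapses. A sketch with purely illustrative numbers (the single-synapse SNR and the reliability criterion below are assumptions, not fitted values from our model):

```python
import math

def synapses_needed(snr_single, snr_target):
    """Smallest M such that sqrt(M) * snr_single >= snr_target, using the
    sqrt(M) scaling of the SNR across M identical, independent synapses."""
    return math.ceil((snr_target / snr_single) ** 2)

# Illustrative numbers: a single-synapse, single-trial SNR of 0.8
# and a reliability criterion of SNR >= 3 (both are assumptions)
print(synapses_needed(0.8, 3.0))
```

With these placeholder values the estimate lands in the low tens, consistent in order of magnitude with the prediction above.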
Width, shape, and symmetry of the STDP window are critical for temporal-order learning
Various widths have been observed for STDP learning windows (Abbott and Nelson, 2000; Bi and Poo, 2001). We show that for all experimentally found STDP time constants phase precession can improve temporal-order learning. However, for learning windows much wider than a theta oscillation cycle, the benefit of phase precession for temporal-order learning is small. Wide learning windows, whose width can be even on a behavioral time scale of $\approx 1$ s (Bittner et al., 2017) or larger, could, on the other hand, enable the association of non-overlapping firing fields. Alternatively, non-overlapping firing fields might also be associated by narrow learning windows if additional cells (with firing fields that fill the temporal gap) help to bridge a large temporal difference, much like 'time cells' in the hippocampal formation (reviewed in Eichenbaum, 2014).
STDP windows typically have symmetric and asymmetric components (Abbott and Nelson, 2000; Mishra et al., 2016). We find that only the asymmetric component supports the learning of temporal order. In contrast, the symmetric component strengthens both forward and backward synapses by the same amount and thus contributes to the association of behavioral events independent of their temporal order. For example, the learning window reported by Bittner et al., 2017 shows only a mild asymmetry and is thus unfavorable for storing the temporal order of behavioral events. Only long, predominantly asymmetric STDP windows would allow for effective temporal-order learning (Figure 4).
Generally, the shape of STDP windows is subject to neuromodulation; for example, cholinergic and adrenergic modulation can alter their polarity and symmetry (Hasselmo, 1999). Dopamine can also change the symmetry of the learning window (Zhang et al., 2009). Therefore, sequence learning could be modulated by the behavioral state (attention, reward, etc.) of the animal.
Key features of phase precession for temporal-order learning: generalization to non-periodic modulation of activity
For STDP windows narrower ($\lesssim 10$ ms) than a theta cycle ($\gtrsim 100$ ms), we argue that the slope of the cross-correlation function at zero offset controls the change of the weight of the synapse connecting two neurons; and we show that phase precession can substantially increase this slope. This result predicts that features of the cross-correlation at temporal offsets that are larger than the width of the learning window are irrelevant for temporal-order learning. It is thus conceivable to boost temporal-order learning even without phase precession, which is weak if theta oscillations are weak, as for example in bats (Ulanovsky and Moss, 2007) and humans (Herweg and Kahana, 2018; Qasim et al., 2021). In this case, temporal-order learning may instead benefit from two other phenomena that could create an appropriate shape of the cross-correlation: (i) Spiking of cells is locked to common (aperiodic) fluctuations of excitability. (ii) The longer ago a cell's firing field has been entered, the faster the cell responds to an increase in its excitability, which may be mediated by a progressive facilitation mechanism. Together, these phenomena can make the cross-correlation exhibit a steeper slope around zero and could even give rise to a local maximum at a positive offset. This temporal fine structure is superimposed on a slower modulation, which is related to the widths of the firing fields. In summary, a progressively decreasing delay of spiking with respect to non-rhythmic fluctuations in excitation generalizes the notion of phase precession. Interestingly, synaptic short-term facilitation, which could generate the described fine structure of the cross-correlation, has also been proposed as a mechanism underlying phase precession (Leibold et al., 2008).
Model assumptions
In our model, we assumed that recurrent synapses (e.g. between neurons representing a sequence) are plastic but weak during encoding, such that they have a negligible influence on the postsynaptic firing rate; and that the feedforward input dominates neuronal activity. These assumptions seem justified as Hasselmo, 1999 indicated that excitatory feedback connections may be suppressed during encoding to avoid interference from previously stored information (see also Haam et al., 2018). Furthermore, neuromodulators facilitate longterm plasticity (reviewed, e.g. by Rebola et al., 2017), which also supports our assumptions.
The assumption of weak recurrent connections implies that these connections do not affect the dynamics. Consequently (and in contrast to Tsodyks et al., 1996), we hypothesize that phase precession is not generated by the local, recurrent network (see also, e.g. Chadwick et al., 2016); instead, we assume that phase precession is inherited from upstream feedforward inputs (Chance, 2012; Jaramillo et al., 2014) or generated locally by a cellular/synaptic mechanism (Magee, 2001; Harris et al., 2002; Mehta et al., 2002; Thurley et al., 2008). After temporal-order learning has been successful, the resulting asymmetric connections could indeed also generate phase precession (as demonstrated by the simulations in Tsodyks et al., 1996), and this phase precession could then even be similar to the one that initially helped to shape the synaptic connections. Finally, inherited or locally (cellularly/synaptically) generated phase precession and network-generated phase precession could interact (as reviewed, for example, in Jaramillo and Kempter, 2017).
We assumed in our model that the widths of the two firing fields that represent two events in a sequence are identical (see, e.g. Figure 2A). But firing fields may have different widths, and in this case a slope-size-matched phase precession would fail to reproduce the timing of spikes required for the learning of the correct temporal order of the two events. For example, the learned temporal order of events (timed according to field entry) would even be reversed if two fields with different sizes are aligned at their ends. How could the correct temporal order nevertheless be learned in our framework? In the hippocampus, theta oscillations are a traveling wave (Lubenov and Siapas, 2009; Patel et al., 2012) such that there is a positive phase offset of theta oscillations for the wider firing fields in the more ventral parts of the hippocampus. This traveling-wave phenomenon could preserve the temporal order in the phase-precession-induced compressed spike timing, as also pointed out earlier (Leibold and Monsalve-Mercado, 2017; Muller et al., 2018).
Our results on learning rules for sequence learning rely on pairwise STDP, in which pairs of presynaptic and postsynaptic spikes are considered. In contrast, triplet STDP also considers motifs of three spikes (either two presynaptic and one postsynaptic spike, or two postsynaptic and one presynaptic spike) (Pfister and Gerstner, 2006). Triplet STDP models can reproduce a number of experimental findings that pairwise STDP could not, for example the dependence on the repetition frequency of spike pairs (Sjöström et al., 2001). To investigate the influence of triplet interactions on sequence learning, we implemented the generic triplet rule by Pfister and Gerstner, 2006. We used their ‘minimal’ model, which was regarded as the best model in terms of the number of free parameters and the fitting error; for the parameters they obtained from fitting the triplet STDP model to hippocampal data, we found only mild differences to our results (see, e.g. Figure 4C). Differences are small because the fitted time constant of the triplet term (40 ms) is smaller than typical interspike intervals ($\gtrsim 50$ ms, minimum in field centers) in our simulations.
Replay of sequences and storage of multiple and overlapping sequences
A sequence imprinted in recurrent synaptic weights can be replayed during rest or sleep (Wilson and McNaughton, 1994; Nádasdy et al., 1999; Diba and Buzsáki, 2007; Peyrache et al., 2009; Davidson et al., 2009), which was also observed in network-simulation studies (Matheus Gauy et al., 2020; Malerba and Bazhenov, 2019; Gillett et al., 2020). Replay could thus be a possible readout of the temporal-order learning mechanism. However, replay depends on the many parameters of the network, and a thorough investigation of replay is beyond the scope of this manuscript. Therefore, we focus on synaptic weight changes that represent the formation of sequences in the network, which underlies replay, and we do not simulate replay.
We have considered the minimal example of a sequence of two neurons. Sequences can contain many more neurons, and the question arises how two different sequences can be told apart if they both contain a certain neuron but proceed in different directions, as they might for sequences of spatial or nonspatial events (Wood et al., 2000). In this case, it may be beneficial to strengthen not only synapses that connect direct successors in the sequence but also synapses to the second-to-next neuron. In this way, the two crossing sequences could be disambiguated, and the wider context in which an event is embedded becomes associated, which is in line with retrieved-context theories of serial-order memory (Long and Kahana, 2019). More generally, how many sequences can be stored in a network of a given size is an interesting question. Gillett et al., 2020 analytically calculated the capacity for the storage of sequences in a Hebbian network.
In conclusion, our model predicts that phase precession enables efficient and robust temporalorder learning. To test this hypothesis, we suggest experiments that modulate the shape of the STDP window or selectively manipulate phase precession and evaluate memory of temporal order.
Materials and methods
Experimental design: model description
We model the time-dependent firing rate of a phase-precessing cell $i$ (two examples in Figure 2A) as
where the scaling factor $A$ determines the number of spikes per field traversal and ${G}_{{\mu}_{i},\sigma}(t)=1/(\sqrt{2\pi}\sigma )\cdot \mathrm{exp}[-{(t-{\mu}_{i})}^{2}/(2{\sigma}^{2})]$ is a Gaussian function that describes a firing field with center at ${\mu}_{i}$ and width $\sigma $. The firing field is sinusoidally modulated with theta frequency $\omega $ (but the sinusoidal modulation is not a critical assumption, see Discussion), with typically many oscillation cycles in a firing field ($\omega \sigma \gg 1$). The compression factor $c$ can be used to vary between phase precession ($c>0$), phase locking ($c=0$), and phase recession ($c<0$) because the average population activity of many such cells oscillates at the frequency $(1-c)\omega $ (Geisler et al., 2010; D'Albis et al., 2015), which provides a reference frame to assign theta phases (Figure 2A). Usually, $c\ll 1$ with typical values $c\lesssim 1/(\sigma \omega )$ (Geisler et al., 2010); for a pair of cells with overlapping firing fields (centers separated by ${T}_{ij}:={\mu}_{j}-{\mu}_{i}$), the phase delay is $\omega c{T}_{ij}$ (Figure 2B).
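A minimal Python sketch of this firing-rate model (cf. Equation 11); the phase offset $-c\omega \mu_i$ is an assumed parameterization, chosen to be consistent with the stated phase delay $\omega c T_{ij}$ between two cells, and the parameter values are illustrative:

```python
import numpy as np

A, sigma = 10.0, 0.25            # spikes per traversal, field width (s)
omega = 2 * np.pi * 8.0          # theta frequency (rad/s)
c = 0.05                         # compression factor (illustrative)

def firing_rate(t, mu_i):
    """Theta-modulated Gaussian firing field (cf. Equation 11); the phase
    offset -c*omega*mu_i is an assumed parameterization of phase precession."""
    g = np.exp(-(t - mu_i) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return A * g * (1.0 + np.cos(omega * t - c * omega * mu_i))

# because omega*sigma >> 1, the oscillatory part integrates out and the
# expected number of spikes per field traversal is ~A
t = np.linspace(-2.0, 2.0, 40001)
n_spikes = np.sum(firing_rate(t, 0.0)) * (t[1] - t[0])
print(n_spikes)                  # ~10 expected spikes per traversal
```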
To quantify temporalorder learning, we consider the average weight change $\mathrm{\Delta}{w}_{ij}$ of the synapse from cell $i$ to cell $j$, which is (Kempter et al., 1999)
where ${C}_{ij}(t)$ is the cross-correlation between the firing rates ${f}_{i}$ and ${f}_{j}$ of cells $i$ and $j$, respectively (Figure 2C,D):
$W(t)$ denotes the synaptic learning window, for example the asymmetric window
where $\tau $ is the time constant and $\mu >0$ is the learning rate (Figure 2E).
For the following calculations, we make two assumptions that are reasonable in the hippocampal formation (O'Keefe and Recce, 1993; Bi and Poo, 2001; Geisler et al., 2010):
The theta oscillation has multiple cycles within the Gaussian envelope of the firing field in Equation 11 ($1/\omega \ll \sigma $).
The window $W$ is short compared to the theta period ($\tau \ll 1/\omega $).
Analytical approximation of the crosscorrelation function
To explicitly calculate the cross-correlation ${C}_{ij}(t)$ as defined in Equation 13, we plug in the firing-rate functions (Equation 11) for the two neurons:
The first term (out of four) describes the cross-correlation of two Gaussians, which results in a Gaussian function centered at ${T}_{ij}$ and with width $\sigma \sqrt{2}$. For the second term, we note that the product of two Gaussians yields a function proportional to a Gaussian with width $\sigma /\sqrt{2}$, and then use assumption (i). When integrated, the second term’s contribution to ${C}_{ij}(t)$ is negligible because the cosine function oscillates multiple times within the Gaussian bump, that is, positive and negative contributions to the integral approximately cancel. The same argument applies to the third term. For the fourth term, we use the trigonometric identity $\mathrm{cos}(\alpha )\cdot \mathrm{cos}(\beta )=\frac{1}{2}\left[\mathrm{cos}(\alpha +\beta )+\mathrm{cos}(\alpha -\beta )\right]$. We set $\alpha =\omega {t}^{\prime}$, $\beta =\omega (t+{t}^{\prime}-c{T}_{ij})$ and find
Again, we use assumption (i) and neglect the first addend on the righthand side. Notably, the cosine function in the second addend is independent of the integration variable ${t}^{\prime}$. Taken together, we find
Thus, the cross-correlation can be approximated by a Gaussian function (center at ${T}_{ij}$, width $\sigma \sqrt{2}$) that is theta modulated with an amplitude scaled by the factor $\frac{1}{2}$.
To further simplify Equation 15, we note that the time constant $\tau $ of the STDP window is usually small compared to the theta period (assumption (ii), Figure 2C,D,E). Structures in ${C}_{ij}(t)$ for $t\gg \tau $ thus have a negligible effect on the synaptic weight change. Therefore, we can focus on the cross-correlation for small temporal lags. In this range, we approximate the (slow) Gaussian modulation of ${C}_{ij}(t)$ (Figure 2C,D, dashed red line) by a linear function, that is,
Inserting this result in Equation 15, we approximate the cross-correlation function ${C}_{ij}(t)$ for $t\lesssim \tau $ as (Figure 2D, dashed black line)
In the Results, we show that the slope of the cross-correlation function at $t=0$ is important for temporal-order learning. From Equation 17 we find
which has three addends within the square brackets. Let us estimate the relative size of the second and third terms with respect to the first one. The third term is at most of the order of 0.5 because $\mathrm{cos}(\omega c{T}_{ij})\le 1$. For the second addend, we note that $\mathrm{sin}(\omega c{T}_{ij})/(\omega c{T}_{ij})$ approaches 1 for ${T}_{ij}\to 0$ and remains in this range for $\omega c{T}_{ij}\lesssim \pi /4$. This condition is fulfilled for ${T}_{ij}\lesssim \sigma $ if we assume slopesize matching of phase precession (Geisler et al., 2010), that is, $\omega c\sigma \approx \frac{\pi}{4}\approx 0.79$. Then, the size of the second addend is dictated by the factor $\omega \sigma $, which is large according to assumption (i). In other words, for typical phase precession and ${T}_{ij}\lesssim \sigma $, the second addend is much larger than the other two.
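These size relations can be checked numerically. A sketch with illustrative values (theta at 8 Hz, $\sigma = 0.25$ s); the three bracketed addends are our reconstruction of the slope from the approximated cross-correlation (Gaussian at ${T}_{ij}$, width $\sigma\sqrt{2}$, theta modulation of relative amplitude $\frac{1}{2}$ and phase shift $c{T}_{ij}$):

```python
import numpy as np

omega, sigma = 2 * np.pi * 8.0, 0.25   # theta frequency (rad/s), field width (s)
c = np.pi / (4 * omega * sigma)        # slope-size matching: omega*c*sigma = pi/4
T_ij = sigma                           # field separation within the field width

# the three addends inside the square brackets of C'_ij(0), reconstructed
# from the approximated cross-correlation (our reading of Equations 17 and 18)
a1 = T_ij / (2 * sigma ** 2)                              # Gaussian-envelope slope
a2 = (omega / 2) * np.sin(omega * c * T_ij)               # phase-precession term
a3 = T_ij / (4 * sigma ** 2) * np.cos(omega * c * T_ij)   # modulated envelope slope

print(a2 / a1, a3 / a1)   # second addend dominates; third stays below 1/2
```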
To further understand the structure of ${C}_{ij}^{\prime}(0)$, which is also shaped by the prefactors in front of the square brackets, we first note that ${C}_{ij}^{\prime}(0)$ is zero for fully overlapping firing fields (${T}_{ij}\stackrel{}{\to}0$). On the other hand, for very large field separations (${T}_{ij}\gg \sigma $), the Gaussian term $G$ causes ${C}_{ij}^{\prime}(0)$ to become zero. The prefactors have a maximum at ${T}_{ij}=\sqrt{2}\sigma $. The maximum’s exact location is slightly shifted by the second addend but remains near $\sqrt{2}\sigma $. This peak will be important because it is inherited by the average weight change (Equation 3).
Average weight change
Having approximated the cross-correlation function and its slope at zero (Equations 17,18), we are now ready to calculate the average synaptic weight change (Equation 3) for the assumed STDP window (Equation 14). Standard integration methods yield
Because $\mathrm{\Delta}{w}_{ij}$ is a temporal average of ${C}_{ij}^{\prime}(t)$ for small $t$ (see interpretation of Equation 3), the weight change’s structure resembles the previously discussed structure of ${C}_{ij}^{\prime}(0)$. The averaging introduces additional factors proportional to $1\pm {\omega}^{2}{\tau}^{2}$, but for $\omega \tau \ll 1$ [assumption (ii)] those have only minor effects on the relative size of the three addends. The second term still dominates. Importantly, $\mathrm{\Delta}{w}_{ij}=0$ for ${T}_{ij}=0$ and the position of the peak at ${T}_{ij}\lesssim \sqrt{2}\sigma $ is inherited from ${C}_{ij}^{\prime}(0)$ (Figure 3A).
The benefit of phase precession
To quantify the benefit $B$ of phase precession, we consider the expression $\mathrm{\Delta}{w}_{ij}/\mathrm{\Delta}{w}_{ij}(c=0)-1$, because $\mathrm{\Delta}{w}_{ij}$ describes the overall weight change (including phase precession), and $\mathrm{\Delta}{w}_{ij}(c=0)$ serves as the baseline weight change due to the temporal separation of the firing fields (without phase precession). We subtract 1 to obtain $B=0$ when the weight changes are the same with and without phase precession. From Equation 19 we find
To better understand the structure of $B$, we Taylorexpand it in ${T}_{ij}$ up to the third order and assume ${\omega}^{4}{\tau}^{4}\ll 1$ [assumption (ii)]. The result is
Thus, $B$ assumes a maximum for ${T}_{ij}=0$ and slowly decays for small ${T}_{ij}$ (Figure 3B). Using slopesize matching ($\omega \sigma c=\pi /4$), the maximal benefit is
where $L=4\sigma $ denotes the total field size and ${T}_{\theta}=\frac{2\pi}{\omega}$ is the period of the theta oscillation. Thus, the number of theta cycles per firing field determines the benefit for small separations of the firing fields.
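The benefit can also be obtained by direct numerical integration of the average weight change, using the asymmetric window (Equation 14) and the approximated cross-correlation (Equation 15). The parameter values and the overall prefactor $A^2$ of the cross-correlation are illustrative assumptions (the prefactor cancels in $B$):

```python
import numpy as np

A, sigma = 10.0, 0.25                  # spikes per traversal, field width (s)
omega = 2 * np.pi * 8.0                # theta frequency (rad/s)
tau, mu_lr = 0.010, 1.0                # STDP time constant (s), learning rate
c = np.pi / (4 * omega * sigma)        # slope-size matching
T_ij = 0.1                             # field separation (s)

t = np.arange(-1.0, 1.0, 1e-5)
dt = t[1] - t[0]
W = mu_lr * np.sign(t) * np.exp(-np.abs(t) / tau)   # asymmetric window (Equation 14)

def cross_corr(t, c):
    """Approximated cross-correlation (cf. Equation 15): Gaussian centered at
    T_ij, width sigma*sqrt(2), theta-modulated with relative amplitude 1/2.
    The prefactor A**2 is our assumption; it cancels in the benefit B."""
    g = np.exp(-(t - T_ij) ** 2 / (4 * sigma ** 2)) / (2 * np.sqrt(np.pi) * sigma)
    return A ** 2 * g * (1.0 + 0.5 * np.cos(omega * (t - c * T_ij)))

dw_pp = np.sum(W * cross_corr(t, c)) * dt     # with phase precession
dw_pl = np.sum(W * cross_corr(t, 0.0)) * dt   # phase locking (c = 0)
print(dw_pp / dw_pl - 1.0)                    # benefit B, well above zero here
```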
Average weight change for wide learning windows
In this paragraph, we relax assumption (ii), that is, we consider wide asymmetric learning windows $W$ (Equation 14 with $\tau \gg \sigma $). Furthermore, we neglect any theta-oscillatory modulation of the firing fields in Equation 11 and, thus, of ${C}_{ij}$ in Equation 15.
First, for non-overlapping fields (${T}_{ij}\gg \sigma $), the learning window can be approximated as constant near the peak of the Gaussian bump of ${C}_{ij}$. We can thus rewrite Equation 1 as
Second, for overlapping fields ($0<{T}_{ij}\lesssim \sigma $), the Gaussian bump of ${C}_{ij}$ partly lies on the negative lobe of $W$. We can approximate $W(t)=\text{sign}(t)$, and the average weight change in Equation 1 then reads
Combining the two limiting cases in Equations 23 and 24 yields
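The two regimes can be sketched numerically for the wide-window case; all parameter values are illustrative, and the cross-correlation is the unmodulated Gaussian (center ${T}_{ij}$, width $\sigma\sqrt{2}$) used above:

```python
import numpy as np

A, sigma, tau, mu_lr = 10.0, 0.25, 1.0, 1.0   # wide window: tau >> sigma

t = np.linspace(-20.0, 20.0, 400001)          # symmetric time grid (s)
dt = t[1] - t[0]
W = mu_lr * np.sign(t) * np.exp(-np.abs(t) / tau)   # wide asymmetric window

def delta_w(T_ij):
    """Average weight change for unmodulated Gaussian firing fields; the
    cross-correlation is a Gaussian at T_ij with width sigma*sqrt(2)."""
    g = np.exp(-(t - T_ij) ** 2 / (4 * sigma ** 2)) / (2 * np.sqrt(np.pi) * sigma)
    return np.sum(W * A ** 2 * g) * dt

print(delta_w(0.0))                    # ~0 for fully overlapping fields
print(delta_w(sigma))                  # overlap: partial cancellation by W < 0
print(delta_w(5 * sigma), delta_w(10 * sigma))   # gap bridged, decays with T_ij
```

For non-overlapping fields, the weight change decays roughly as $e^{-{T}_{ij}/\tau}$, mirroring the exponential term described above.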
Signal-to-noise ratio
To correctly encode the temporal order of behavioral events, the average weight change $\mathrm{\Delta}{w}_{ij}$ of a forward synapse needs to be larger than the average weight change $\mathrm{\Delta}{w}_{ji}$ of the corresponding backward synapse. We thus define the signal-to-noise ratio as
where std() denotes the standard deviation and $\mathrm{\Delta}{w}_{ij}^{k}$, $\mathrm{\Delta}{w}_{ji}^{k}$ are the weight changes for trial $k\in [1,N]$, the averages across trials being $\mathrm{\Delta}{w}_{ij}={\langle \mathrm{\Delta}{w}_{ij}^{k}\rangle}_{k}$ and $\mathrm{\Delta}{w}_{ji}={\langle \mathrm{\Delta}{w}_{ji}^{k}\rangle}_{k}$. This expression for the SNR ‘punishes’ the non-sequence-specific strengthening of backward synapses. Specifically, $\text{SNR}=0$ for a symmetric (even) learning window, because the numerator (which represents the ‘signal’) is zero. On the other hand, a perfectly asymmetric learning window, like the one used throughout this study (Equation 14), yields $\text{SNR}=\frac{\mathrm{\Delta}{w}_{ij}}{\text{std}(\mathrm{\Delta}{w}_{ij}^{k})}$, because $\mathrm{\Delta}{w}_{ij}^{k}=-\mathrm{\Delta}{w}_{ji}^{k}$. Asymmetric learning windows thus recover the classical definition of the SNR as the ratio between the average weight change and the standard deviation of the weight change.
We note that the generalized definition above can be used to calculate the SNR for arbitrary windows, such as the learning window from Bittner et al., 2017, Figure 4C.
Assuming an asymmetric window and $M$ uncorrelated synapses with the same mean and variance of the weight change, we can write the signal-to-noise ratio as
because the variance of the sum can be decomposed into the sum of variances and covariances. All covariances are zero because synapses are uncorrelated. This leaves a sum of $M$ variances, which are identical. Therefore, the standard deviation, and consequently also the SNR, scale with $\sqrt{M}$.
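This decomposition can be illustrated with a seeded simulation; the per-synapse mean and standard deviation below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
m_signal, s_noise, n_trials = 1.0, 5.0, 200_000   # placeholder per-synapse values

def snr_of_sum(M):
    """Empirical SNR of the summed weight change of M independent synapses,
    each with identical per-trial mean m_signal and standard deviation s_noise."""
    dw_total = rng.normal(m_signal, s_noise, size=(n_trials, M)).sum(axis=1)
    return dw_total.mean() / dw_total.std()

print(snr_of_sum(1), snr_of_sum(16))   # the second is ~4 times the first
```

Going from 1 to 16 synapses scales the empirical SNR by about $\sqrt{16}=4$, as predicted.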
Numerical simulations
To numerically simulate the synaptic weight change, spikes were generated by inhomogeneous Poisson processes with rate functions according to Equation 11. For every spike pair, the contribution to the weight change was calculated according to Equation 14. We repeated the simulations for $N={10}^{4}$ trials and estimated the mean weight change as well as the standard deviation across trials and the SNR. All simulations were implemented in Python 3.8 using the packages NumPy (RRID:SCR_008633) and SciPy (RRID:SCR_008058). Matplotlib (RRID:SCR_008624) was used for plotting; Inkscape (RRID:SCR_014479) was used for final adjustments to the figures. The Python code is available at https://gitlab.com/e.reifenstein/synaptic-learning-rules-for-sequence-learning (Reifenstein and Kempter, 2021; copy archived at swh:1:rev:157c347a735a090f591a2b77a71b90d7de65bca5).
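A condensed sketch of this simulation procedure (inhomogeneous Poisson spikes via thinning, additive pairwise STDP); the phase parameterization and all parameter values are illustrative choices and not the exact settings of the archived code:

```python
import numpy as np

rng = np.random.default_rng(3)
A, sigma = 10.0, 0.25                 # spikes per traversal, field width (s)
omega = 2 * np.pi * 8.0               # theta frequency (rad/s)
c = np.pi / (4 * omega * sigma)       # slope-size matched phase precession
tau, mu_lr = 0.010, 1.0               # STDP time constant (s), learning rate
mu_i, mu_j = 0.0, sigma               # field centers, i.e. T_ij = sigma

def rate(t, mu):
    """Theta-modulated Gaussian firing field (cf. Equation 11); the phase
    offset -c*omega*mu is an assumed parameterization of phase precession."""
    g = np.exp(-(t - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return A * g * (1.0 + np.cos(omega * t - c * omega * mu))

def spikes(mu):
    """Inhomogeneous Poisson spike train via thinning of a homogeneous process."""
    t0, t1 = mu - 5 * sigma, mu + 5 * sigma
    lam = 2 * A / (np.sqrt(2 * np.pi) * sigma)       # upper bound on rate(t)
    cand = np.sort(rng.uniform(t0, t1, rng.poisson(lam * (t1 - t0))))
    return cand[rng.uniform(0.0, lam, cand.size) < rate(cand, mu)]

def delta_w(pre, post):
    """Additive pairwise STDP with the asymmetric window of Equation 14."""
    if pre.size == 0 or post.size == 0:
        return 0.0
    lag = post[None, :] - pre[:, None]               # t_post - t_pre, all pairs
    return mu_lr * np.sum(np.sign(lag) * np.exp(-np.abs(lag) / tau))

fw, bw = [], []
for _ in range(2000):                                # trials (field traversals)
    s_i, s_j = spikes(mu_i), spikes(mu_j)
    fw.append(delta_w(s_i, s_j))                     # forward synapse i -> j
    bw.append(delta_w(s_j, s_i))                     # backward synapse j -> i
print(np.mean(fw), np.mean(bw))   # forward strengthened, backward weakened
```

Averaged over trials, the forward weight change is positive and the backward one is its negative, as expected for a purely asymmetric window.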
Appendix 1
The signal-to-noise ratio of $\mathrm{\Delta}{w}_{ij}$
A synapse with weight ${w}_{ij}$ is assumed to connect neuron $i$ to neuron $j$. Here, we aim to derive the signal-to-noise ratio (SNR) of the weight changes $\mathrm{\Delta}{w}_{ij}$, which is defined as (Materials and methods)
where $\u27e8\mathrm{\Delta}{w}_{ij}\u27e9$ is the average signal. The noise is described by the standard deviation of the weight change,
Signal and noise are generated by additive STDP and spiking activity that is modeled by two inhomogeneous Poisson processes with rates ${f}_{i}(t)$ and ${f}_{j}(t)$ that have finite support. The average weight change is calculated by $\langle \mathrm{\Delta}{w}_{ij}\rangle =\int dt\,W(t){C}_{ij}(t)$ where $W(t)$ is the synaptic learning window and ${C}_{ij}(t)$ denotes the cross-correlation function ${C}_{ij}(t)=\int d{t}^{\prime}{f}_{i}({t}^{\prime}){f}_{j}({t}^{\prime}+t)$. From Kempter et al., 1999, we use
where ${S}_{i}(t)={\sum}_{n}\delta (t-{t}_{i}^{(n)})$ and ${S}_{j}(t)={\sum}_{n}\delta (t-{t}_{j}^{(n)})$ are the presynaptic and postsynaptic spike trains, respectively. To simplify, we set ${t}_{0}=-\mathrm{\infty}$ and $\mathrm{\Delta}{w}_{ij}({t}_{0})=0$. Furthermore, we are interested in paired STDP and thus set ${w}^{\text{in}}={w}^{\text{out}}=0$. For $\langle \mathrm{\Delta}{w}_{ij}^{2}\rangle ={\mathrm{lim}}_{t\to \mathrm{\infty}}\langle \mathrm{\Delta}{w}_{ij}^{2}\rangle (t)$, Equation A12 reduces to
Because both spike trains are drawn from different Poisson processes, ${S}_{i}$ and ${S}_{j}$ are statistically independent, and therefore we can simplify
Moreover, in a spike train the spikes at different times are uncorrelated,
and
As ${S}_{i}$ and ${S}_{j}$ are realizations of inhomogeneous Poisson processes with rates ${f}_{i}(t)$ and ${f}_{j}(t)$, respectively, we find
and
We insert these expressions into Equation A13:
where
To explicitly calculate the SNR, we parameterize the firing rates as
and
See main text for definitions of symbols. Furthermore, we assume $W(s)={W}^{\text{odd}}(s)+{W}^{\text{even}}(s)$ with
(see Equation 14 in Materials and methods) and
In what follows we consider a limiting case of wide learning windows, for which we can explicitly calculate the SNR. The results obtained in this case match well to the numerical simulations for wide learning windows (Figures 4 and 5 in the main text).
Wide learning windows
For wide windows (formally: $\tau \to \mathrm{\infty}$, $\kappa \to \mathrm{\infty}$), we can approximate ${W}^{\text{even}}=\lambda $ and ${W}^{\text{odd}}(t)=\mu \,\mathrm{sgn}(t)$ and neglect the sinusoidal modulations of ${f}_{i}$ and ${f}_{j}$ in Equations A15 and A16; phase precession does not affect the SNR in this case.
The following calculations are similar for odd and even windows. We elaborate the calculations in detail for odd windows and use ‘$\pm $’ and ‘$\mp $’ to include the similar calculations for even windows. The top symbol (‘$+$’ and ‘$-$’, respectively) corresponds to odd windows; the bottom symbol corresponds to even windows.
To start, we split the third and fourth integral in Equation A14 into positive and negative time lags $s$ and $v$, respectively:
We rewrite $F$ as
which has four addends and occurs in four integrals in Equation A17. Thus, there are 16 terms we need to evaluate. We label these terms (1.i) to (1.iv) for the first integral, (2.i) to (2.iv) for the second integral and so on until (4.iv).
For the term (1.i) we find
The first integral is ${\int}_{-\mathrm{\infty}}^{\mathrm{\infty}}d{t}^{\prime}{f}_{j}({t}^{\prime})=A$. The second integral can be solved by taking the derivative with respect to ${T}_{ij}$:
Term (1.i) (Equation A18) thus reads:
For (2.i) we find
Term (3.i) is symmetric to (2.i) and thus yields the same result. For (4.i) we find (in analogy to the term (1.i)):
We sum the contributions (1.i) to (4.i) for the odd learning window:
Let us continue with the second term of $F$, which is labeled by ‘(ii)’, and consider the first (of four) integrals in Equation A17, that is, we continue with contribution (1.ii):
with
which will be solved later for special cases. Note that $C$ depends on ${T}_{ij}$ because ${f}_{j}({t}^{\prime})$ depends on ${T}_{ij}$. For (2.ii) we find:
For (3.ii) we find the same:
For (4.ii) we find:
Summing contributions (1.ii) to (4.ii) for the odd window yields:
We continue with contribution (1.iii):
Contribution (1.iii) is nonzero if the argument ${t}^{\prime}+s-u-v$ of the delta function in the last integral (across $v$) is zero for some $v$, which varies from 0 to $\mathrm{\infty}$. The argument of the delta function is thus zero for some $v$ if $0\le {t}^{\prime}+s-u<\mathrm{\infty}$, which we can rewrite as $u\le {t}^{\prime}+s$ and then use in the integral across $u$, which leads to
with $D:={\int}_{-\mathrm{\infty}}^{\mathrm{\infty}}\mathrm{d}{t}^{\prime}\,{f}_{j}({t}^{\prime}){\int}_{0}^{\mathrm{\infty}}\mathrm{d}s\,{f}_{i}({t}^{\prime}+s)\,\mathrm{erf}\left(\frac{s+{t}^{\prime}-{T}_{ij}}{\sqrt{2}\sigma}\right)$. $D$ will be evaluated later for special cases.
Similarly to (1.iii), we treat (2.iii):
For (3.iii) we find:
with ${D}^{\prime}:={\int}_{-\mathrm{\infty}}^{\mathrm{\infty}}\mathrm{d}{t}^{\prime}\,{f}_{j}({t}^{\prime}){\int}_{-\mathrm{\infty}}^{0}\mathrm{d}s\,{f}_{i}({t}^{\prime}+s)\,\mathrm{erf}\left(\frac{s+{t}^{\prime}-{T}_{ij}}{\sqrt{2}\sigma}\right)$, which we will evaluate later for special cases.
Finally, for (4.iii) we find
To sum the four contributions (1.iii) to (4.iii) for the odd window, we note that the first terms (square brackets) of (1.iii) and (2.iii) cancel, as well as the first terms of (3.iii) and (4.iii). We thus obtain:
We continue with contribution (1.iv):
By similar arguments, (2.iv) yields:
(3.iv) yields
(4.iv) yields
We sum the contributions (1.iv) and (4.iv) and obtain ${A}^{2}$. We now collect all terms for the odd window:
So far, we have calculated the second moment of $\mathrm{\Delta}{w}_{ij}$. In order to determine the variance, we need to calculate the average weight change for the odd window:
The variance thus reads:
For the signal-to-noise ratio, we note that the definition from Equation A11, for odd learning windows, simplifies to
because $\mathrm{\Delta}{w}_{ij}=-\mathrm{\Delta}{w}_{ji}$ for odd learning windows.
We insert Equation A113 and A114 and find
To obtain the final result, we have to evaluate $C$, $D$, and ${D}^{\prime}$. We distinguish the two cases ${T}_{ij}\gg \sigma $ and ${T}_{ij}=\sigma $ to approximate these three terms:
1. ${T}_{ij}\gg \sigma $:
because the Gaussian function ${f}_{j}$ is shifted far into the positive lobe of the error function.
Thus,
This number (for $A=10$) is indicated as the analytical comparison in Figure 5E. For large $A$, the SNR (Equation A116) approaches $\sqrt{A/2}$.
2. ${T}_{ij}=\sigma $:
all of which we calculated numerically.
It follows:
This number is plotted as the largetau approximation in Figure 4C. For large $A$, we find $\text{SNR}\propto \sqrt{A}$.
Even windows
As argued in the main text, for even windows, the weight change $\mathrm{\Delta}{w}_{ij}$ contains no information about the order of events because $\mathrm{\Delta}{w}_{ij}=\mathrm{\Delta}{w}_{ji}$. This can be seen from Equation A11 of Appendix 1. The SNR is zero for purely even windows because the signal is zero. Nonetheless, we can calculate the variance of the weight change. To do so, we collect all terms of $\langle \mathrm{\Delta}{w}_{ij}^{2}\rangle $ for even windows (indicated by the bottom symbol of all occurrences of ‘$\pm $’ and ‘$\mp $’ in the previous section). Again, we assume wide windows ($\kappa \to \mathrm{\infty}$).
Collecting the terms (1.i) to (4.i) yields
Similarly, we sum the terms (1.ii) to (4.ii):
We continue to collect the contributions (1.iii) to (4.iii):
Finally, summing (1.iv) to (4.iv) yields the same result as for the odd window: ${A}^{2}$.
Overall,
Together with
the variance reads:
We now insert these variances in the denominator of Equation A1-1:
which, assuming $\mu =\lambda $, is twice the noise obtained for odd windows ($\sqrt{2{A}^{3}+{A}^{2}}$, Equation A1-16).
In summary, for a complex learning window with even and odd contributions, the signal solely depends on the odd part, whereas both parts, even and odd, contribute to the noise. Any even contribution thus only decreases the SNR.
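The identity $\mathrm{\Delta}{w}_{ij}=\mathrm{\Delta}{w}_{ji}$ for even windows is deterministic, not just a statement about averages, and is easy to verify in a small simulation. The Gaussian window and the spike statistics below are illustrative stand-ins, not the model's exact functions:

```python
import math
import random

def weight_change(pre, post, window):
    """Total weight change for one pre/post spike-train pair:
    the STDP window summed over all spike-time differences."""
    return sum(window(t_post - t_pre) for t_pre in pre for t_post in post)

def even_window(s, tau=0.02):
    """Illustrative even (symmetric) window: a Gaussian in the time lag."""
    return math.exp(-(s / tau) ** 2)

random.seed(1)
# Two Gaussian-enveloped spike trains whose firing fields are offset in time.
pre_spikes = [random.gauss(0.0, 0.1) for _ in range(40)]
post_spikes = [random.gauss(0.05, 0.1) for _ in range(40)]

dw_ij = weight_change(pre_spikes, post_spikes, even_window)  # forward weight
dw_ji = weight_change(post_spikes, pre_spikes, even_window)  # backward weight

# For an even window, W(s) = W(-s), so each spike pair contributes equally
# to both directions: the 'signal' dw_ij - dw_ji vanishes in every
# realization, not only on average.
assert abs(dw_ij - dw_ji) < 1e-9
```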
Appendix 2
Calculating SNR for learning windows of arbitrary width
We again consider odd learning windows of the shape
As in the case of wide learning windows, we again consider the second moment of the weight change (similar to Equation A1-4 of Appendix 1):
We write $F$ similarly to before, neglecting the theta modulation of the firing rate:
with
and
We label the addends of Equation A2-2 as $\{1,2,3,4\}$ and the addends of Equation A2-3 as $\{i,ii,iii,iv\}$. In evaluating the second moment of the weight change, we realize that many integrands have similar forms, that is, products of exponentials, error functions, and delta functions. Consequently, we will first state the integral identities we use, and will then explicitly derive the term $(2.iii)$ as an example. The other terms can be evaluated in a similar manner.
Integral identities
For the evaluation of the second moment of the weight change, many integrands consist of exponential functions containing linear and squared terms. To tackle these integrals, we use Albano et al., 2011
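The cited identity is not reproduced here, but the generic closed form for integrals of this type (a Gaussian with a linear term in the exponent, integrated over a half-line, which produces the error functions appearing throughout this appendix) can be checked numerically. The function names, parameters, and the specific form below are illustrative, not a verbatim copy of the cited equation:

```python
import math

def gaussian_tail_integral(a, b, c):
    """Closed form for  I(a, b, c) = ∫_c^∞ exp(-a x² + b x) dx  (a > 0),
    obtained by completing the square; integrals of this kind yield the
    erf/erfc terms that recur in the derivation."""
    return (math.sqrt(math.pi / a) / 2.0
            * math.exp(b * b / (4.0 * a))
            * math.erfc(math.sqrt(a) * c - b / (2.0 * math.sqrt(a))))

def riemann(a, b, c, upper=10.0, n=100_000):
    """Brute-force midpoint-rule check of the same integral."""
    dx = (upper - c) / n
    total = 0.0
    for k in range(n):
        x = c + (k + 0.5) * dx
        total += math.exp(-a * x * x + b * x)
    return total * dx

# Illustrative parameters.
a, b, c = 2.0, 1.0, -0.5
assert abs(gaussian_tail_integral(a, b, c) - riemann(a, b, c)) < 1e-6
```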
The second recurring form of integrals is
Substituting
we can rewrite
We now use an integral identity by Ng and Geller, 1969 (their section 4.3, eq. 13):
which yields the desired solution ($p=1,\phantom{\rule{thinmathspace}{0ex}}q=a\left(c-b/2\right)$):
Example: deriving the term (2.iii)
For the term ($2.iii$), we have
When applying the sifting property of the Dirac delta function, we note that the integral over $v$ is nonzero for $-\mathrm{\infty}<{t}^{\prime}+s-u\le 0$, that is, for ${t}^{\prime}+s\le u$. Thus we have:
The integral over $u$ can be evaluated by using Equation A2-6, which yields:
The second part of the integral over $s$ (involving the error function) will be solved numerically. For this purpose, we define ${D}_{2}^{\prime}$ as:
For the first part of the integral over $s$ in Equation A2-13, we again use Equation A2-6, which results in:
The first part of the integral over $t$ can be solved by applying Equation A2-6 in the limit of $a\to \mathrm{\infty}$. For the second part we use Equation A2-10.
We now observe that defining ${D}_{2}$ in the following way:
allows us to write $(2.iii)$ as:
Addends of the second moment
By similar logic, all four addends of Equation A2-2 (with four parts each) can be obtained. We list the results here:
First Addend
Second Addend
with the integral terms:
Third Addend
with the integral terms:
Fourth Addend
By collecting all 16 terms, we will obtain the average squared weight change. To calculate the variance, we also need the squared average weight change, which we will calculate in the next section.
Average weight change
The average weight change for odd learning windows is given by (cf. Equation 1 in the main text):
We again neglect the theta modulation of the firing fields. Evaluating the first addend yields:
The second addend can be similarly evaluated:
The average weight change thus reads:
Equation A2-44 might show numerical instabilities for small $\tau $. These instabilities can be fixed using the following approximation for the error function proposed by Abramowitz, 1974:
where $p=0.47047$, ${a}_{1}=0.3480242$, ${a}_{2}=-0.0958798$, ${a}_{3}=0.7478556$. Along with a new set of variables
the approximation yields
Note that the exponential with the $\frac{{\sigma}^{2}}{{\tau}^{2}}$ term vanishes when ${x}_{1}\ge 0$, and only one addend contains the term for ${x}_{1}<0$. Therefore, this approximation results in improved numerical stability for small $\tau $. Equation A2-46 is shown in Figure 4A.
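The stated coefficients define the classic Abramowitz–Stegun rational approximation of the error function; a quick numerical check confirms the advertised accuracy (about $2.5\times {10}^{-5}$ maximum absolute error for $x\ge 0$):

```python
import math

# Abramowitz & Stegun-style rational approximation of erf(x) for x >= 0:
#   erf(x) ≈ 1 - (a1*t + a2*t² + a3*t³) * exp(-x²),  with  t = 1/(1 + p*x).
p, a1, a2, a3 = 0.47047, 0.3480242, -0.0958798, 0.7478556

def erf_approx(x):
    t = 1.0 / (1.0 + p * x)
    return 1.0 - (a1 * t + a2 * t * t + a3 * t ** 3) * math.exp(-x * x)

# The maximum absolute error of this approximation is about 2.5e-5.
worst = max(abs(erf_approx(x / 100.0) - math.erf(x / 100.0))
            for x in range(0, 500))
assert worst < 3e-5
```

Note that the coefficients sum to exactly 1, so the approximation is exact at $x=0$.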
Variance and signal-to-noise ratio of the weight change
With all of the above results, we are now ready to state the variance and signal-to-noise ratio of the weight change:
The signaltonoise ratio is then given by:
Equation A2-48 (with the variance from Equation A2-47 and the mean from Equation A2-44) is shown in Figure 4C. We observe that the analytical solution fits the numerical solution well for $\tau \gtrsim 0.1$ s, but numerical instabilities cause it to diverge for $\tau \lesssim 0.1$ s.
The numerical instability for $\tau \lesssim 0.1$ s is likely due to a combination of two factors: the exponential terms $\mathrm{exp}\left[{\sigma}^{2}/{\tau}^{2}\right]$ become very large for small $\tau $, and large arguments in the error function cause the terms ($1\pm \text{erf}(.)$) to be very close to zero. The product of the two is numerically unstable for small $\tau $. Unfortunately, unlike in the case of the average weight change, we did not find an approximation which canceled out these exponential terms in the noise.
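A generic remedy for this kind of instability, sketched below for illustration (this is not the approach used in the paper), is to compute the scaled complementary error function $\text{erfcx}(x)=\mathrm{exp}({x}^{2})\,\text{erfc}(x)$ as a single quantity, for example via its large-$x$ asymptotic series, instead of multiplying the two unstable factors:

```python
import math

def naive_scaled_erfc(x):
    """exp(x²) * erfc(x) computed directly: the unstable product of a
    huge exponential and a tiny error-function term."""
    return math.exp(x * x) * math.erfc(x)

def stable_scaled_erfc(x, terms=10):
    """Asymptotic series for erfcx(x) = exp(x²) erfc(x), valid for large x:
    erfcx(x) ~ (1/(x sqrt(pi))) * sum_k (-1)^k (2k-1)!! / (2 x²)^k."""
    s, term = 1.0, 1.0
    for k in range(1, terms):
        term *= -(2 * k - 1) / (2.0 * x * x)
        s += term
    return s / (x * math.sqrt(math.pi))

# For moderate x, both routes agree ...
assert abs(naive_scaled_erfc(5.0) - stable_scaled_erfc(5.0)) < 1e-6
# ... but for x = 30 the naive route fails: math.erfc(30) underflows to 0
# and math.exp(900) overflows, whereas the stable form returns the correct
# finite value, close to 1/(x sqrt(pi)).
x = 30.0
assert abs(stable_scaled_erfc(x) - 1.0 / (x * math.sqrt(math.pi))) < 1e-4
```

Library implementations of this function exist (e.g. `scipy.special.erfcx`); the hand-rolled series above merely illustrates why computing the product as one quantity avoids the overflow/underflow cancellation described in the text.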
Data availability
Code and data are available at https://gitlab.com/e.reifenstein/synaptic-learning-rules-for-sequence-learning (copy archived at https://archive.softwareheritage.org/swh:1:rev:157c347a735a090f591a2b77a71b90d7de65bca5).
References

Synaptic plasticity: taming the beast. Nature Neuroscience 3 Suppl:1178–1183. https://doi.org/10.1038/81453
Book: Handbook of Mathematical Functions, With Formulas, Graphs, and Mathematical Tables. USA: Dover Publications, Inc.
The integrals in Gradshteyn and Ryzhik. Part 19: The error function. SCIENTIA Series A: Mathematical Sciences 21:25–42.
Sequence memory in the Hippocampal-Entorhinal region. Journal of Cognitive Neuroscience 32:2056–2070. https://doi.org/10.1162/jocn_a_01592
Synaptic modification by correlated activity: Hebb's postulate revisited. Annual Review of Neuroscience 24:139–166. https://doi.org/10.1146/annurev.neuro.24.1.139
Hippocampal phase precession from dual input components. Journal of Neuroscience 32:16693–16703. https://doi.org/10.1523/JNEUROSCI.2786-12.2012
The CRISP theory of hippocampal function in episodic memory. Frontiers in Neural Circuits 7:88. https://doi.org/10.3389/fncir.2013.00088
Memory replay in balanced recurrent networks. PLOS Computational Biology 13:e1005359. https://doi.org/10.1371/journal.pcbi.1005359
Forward and reverse hippocampal place-cell sequences during ripples. Nature Neuroscience 10:1241–1242. https://doi.org/10.1038/nn1961
Time cells in the hippocampus: a new dimension for mapping memories. Nature Reviews Neuroscience 15:732–744. https://doi.org/10.1038/nrn3827
Critical role of the hippocampus in memory for sequences of events. Nature Neuroscience 5:458–462. https://doi.org/10.1038/nn834
Neuromodulation: acetylcholine and memory consolidation. Trends in Cognitive Sciences 3:351–359. https://doi.org/10.1016/S1364-6613(99)01365-0
Spatial representations in the human brain. Frontiers in Human Neuroscience 12:297. https://doi.org/10.3389/fnhum.2018.00297
Modeling inheritance of phase precession in the hippocampal formation. Journal of Neuroscience 34:7715–7731. https://doi.org/10.1523/JNEUROSCI.5136-13.2014
Phase precession: a neural code underlying episodic memory? Current Opinion in Neurobiology 43:130–138. https://doi.org/10.1016/j.conb.2017.02.006
Associative retrieval processes in free recall. Memory & Cognition 24:103–109. https://doi.org/10.3758/BF03197276
Hebbian learning and spiking neurons. Physical Review E 59:4498–4514. https://doi.org/10.1103/PhysRevE.59.4498
Cellular and system biology of memory: timing, molecules, and beyond. Physiological Reviews 96:647–693. https://doi.org/10.1152/physrev.00010.2015
A specific role of the human hippocampus in recall of temporal sequences. Journal of Neuroscience 29:3475–3484. https://doi.org/10.1523/JNEUROSCI.5370-08.2009
Traveling Theta Waves and the Hippocampal Phase Code. Scientific Reports 7:7678. https://doi.org/10.1038/s41598-017-08053-3
Hippocampal contributions to serial-order memory. Hippocampus 29:252–259. https://doi.org/10.1002/hipo.23025
Dendritic mechanisms of phase precession in hippocampal CA1 pyramidal neurons. Journal of Neurophysiology 86:528–532. https://doi.org/10.1152/jn.2001.86.1.528
Circuit mechanisms of hippocampal reactivation during sleep. Neurobiology of Learning and Memory 160:98–107. https://doi.org/10.1016/j.nlm.2018.04.018
Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: an efficient learning scheme. Journal of Neuroscience 29:13484–13493. https://doi.org/10.1523/JNEUROSCI.2207-09.2009
Cortical travelling waves: mechanisms and computational principles. Nature Reviews Neuroscience 19:255–268. https://doi.org/10.1038/nrn.2018.20
Replay and time compression of recurring spike sequences in the hippocampus. The Journal of Neuroscience 19:9497–9507. https://doi.org/10.1523/JNEUROSCI.19-21-09497.1999
A table of integrals of the Error functions. Journal of Research of the National Bureau of Standards, Section B: Mathematical Sciences 73B:1. https://doi.org/10.6028/jres.073B.001
Replay of rule-learning related neural patterns in the prefrontal cortex during sleep. Nature Neuroscience 12:919–926. https://doi.org/10.1038/nn.2337
Triplets of spikes in a model of spike timing-dependent plasticity. Journal of Neuroscience 26:9673–9682. https://doi.org/10.1523/JNEUROSCI.1425-06.2006
Operation and plasticity of hippocampal CA3 circuits: implications for memory encoding. Nature Reviews Neuroscience 18:208–220. https://doi.org/10.1038/nrn.2017.10
Software: Synaptic learning rules for sequence learning, version swh:1:rev:157c347a735a090f591a2b77a71b90d7de65bca5. Software Heritage.
Memory encoding by theta phase precession in the hippocampal network. Neural Computation 15:2379–2397. https://doi.org/10.1162/089976603322362400
Single-trial phase precession in the hippocampus. Journal of Neuroscience 29:13232–13241. https://doi.org/10.1523/JNEUROSCI.2270-09.2009
Book: Theta Phase Precession Enhance Single Trial Learning in an STDP Network. In: Wang R, Shen E, Gu F, editors. Advances in Cognitive Neurodynamics ICCN 2007. Springer. pp. 109–114. https://doi.org/10.1007/978-1-4020-8387-7_21
Phase precession through synaptic facilitation. Neural Computation 20:1285–1324. https://doi.org/10.1162/neco.2008.07-06-292
Hippocampal cellular and network activity in freely moving echolocating bats. Nature Neuroscience 10:224–233. https://doi.org/10.1038/nn1829
Malleability of spike-timing-dependent plasticity at the CA3-CA1 synapse. Journal of Neuroscience 26:6610–6617. https://doi.org/10.1523/JNEUROSCI.5388-05.2006
Decision letter

Martin Vinck, Reviewing Editor; Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Germany

Joshua I Gold, Senior Editor; University of Pennsylvania, United States

Francesco P Battaglia, Reviewer; Donders Institute, Netherlands
In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.
Acceptance summary:
One of the major challenges of cortical circuits is to learn associations between events that are separated by long time periods, given that spike-timing-dependent plasticity operates on short time scales. In the hippocampus, a structure critical for memory formation, phase precession is known to compress the sequential activation of place fields to the theta-cycle (~8 Hz) time period. Reifenstein et al. describe a simple yet elegant mathematical principle through which theta phase precession contributes to learning the sequential order in which place fields are activated.
Decision letter after peer review:
[Editors’ note: the authors submitted for reconsideration following the decision after peer review. What follows is the decision letter after the first round of review.]
Thank you for submitting your work entitled "Synaptic learning rules for sequence learning" for consideration by eLife. Your article has been reviewed by 4 peer reviewers, including Martin Vinck as the Reviewing Editor and Reviewer #1, and the evaluation has been overseen by a Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Francesco P Battaglia (Reviewer #2); Frances Chance (Reviewer #4).
Our decision has been reached after consultation between the reviewers. Based on these discussions and the individual reviews below, we regret to inform you that your work will not be considered further for publication in eLife. However, eLife would welcome a substantially improved manuscript that addresses concerns raised; this would be treated as a new submission, but likely go to the same reviewers.
The reviewers acknowledged that the study addresses an important topic. They also applauded the rigor and elegance of the analytical approach. However, reviewers individually, and in subsequent discussion, expressed the concern that the physiological relevance of the findings is far from clear; this point would require a substantial amount of new simulations and models. They furthermore commented that the generation and storage of sequences remains unclear, again requiring substantial additions to the manuscript. Reviewers therefore recommended that, at present, the manuscript appears to be more suited for a more specialized journal.
Reviewer #1:
This paper develops a model of the way in which phase precession modulates synaptic plasticity. The idea and derivations are simple and easy to follow. The results, while not surprising, are overall interesting and important for researchers on phase precession and sequence learning. There are some useful analytical approximations in the paper. I have several comments:
1. The paper is all based on pairwise STDP.
How robust are these results when we consider perhaps more realistic STDP rules like triplet STDP? Perhaps this is something to discuss or explore, because it is not a priori obvious to me.
2. What are the widths reported in the literature for hippocampus? With all the recent literature on the dependence of STDP in vitro on Ca^{2+} levels, one has to take this with a grain of salt of course. I would think it's around 100 ms, which would make the benefit small?
3. The approximation of theta as an oscillator that shows no dampening is of course not realistic; in reality autocorrelation functions will show decreasing sidelobes. It's maybe not a problem, but could actually benefit your model.
4. To say that phase precession benefits sequence learning is maybe not the whole story. It seems that in general long STDP kernels benefit sequence learning for place fields, and they do this equally well for phase or no phase precession. If the STDP kernels are short, sequence learning is more difficult (and requires huge place field overlap), and phase precession is beneficial for that.
How does benefit interact with place field overlap? If the place fields are highly overlapping, then how does STDP kernel size regulate the sequence learning? Are longer STDP kernels invariantly better for sequence learning in the hippocampus? Or does this depend on place field separation? In other words, are there some scenarios where short STDP kernels have a clear benefit and where phase precession then gives a huge boost?
Reviewer #2:
Reifenstein and Kempter propose an analytical formulation for synaptic plasticity dynamics with STDP and phase precession as observed in hippocampal place cells.
The main result is that phase precession increases the slope of the two-cell cross-correlation around the origin, which is the key driver of plasticity under asymmetric STDP, therefore improving the encoding of sequences in the synaptic matrix.
While the overall concept of phase precession favoring time compression of sequences and plasticity (when combined with STDP) has been present in the literature since the seminal Skaggs and McNaughton, 1996 paper, the novel contribution of this study is the elegant analytical formulation of the effect, which can be very useful to embed this effect into a network model. As a suggestion of a further direction, one could look at models (e.g. Tsodyks et al., Hippocampus, 1996) where asymmetries in synaptic connections are the driver of phase precession. One could use this formulation, for example, to see how experience may induce phase-precessing place fields by shaping synaptic connections (maybe starting from a small symmetry-breaking term in the initial condition).
The analytical calculation seems crystal clear to me (and quite simple, once one finds the right framework).
Reviewer #3:
The study uses analytical and numerical approaches to quantify conditions in which spike-timing-dependent plasticity (STDP) and theta phase precession may promote sequence learning. The strengths of the study are that the question is of general interest and the analytical approach, insofar as it can be applied, is quite rigorous. The weaknesses are that the extent to which the conclusions would hold in more physiological scenarios is not considered, and that the study does not investigate sequences but rather the strength of synaptic connections between sequentially activated neurons.
1. While the stated focus is on sequences, the key results are based on measures of synaptic weight between sequentially activated neurons. Given the claims of the study, a more relevant readout might be generation of sequences by the trained network.
2. The target network appears very simple. Assuming it can generate sequences, it's unclear whether the training rule would function under physiologically relevant conditions. For example, can the network trained in this way store multiple sequences? To what extent do sequences interfere with one another?
3. In a behaving animal movement speed varies considerably, with the consequence that the time taken to cross a place field may vary by an order of magnitude. I think it's important to consider the implications that this might have for the results.
4. Phase precession, STDP and sequence learning have been considered in previous work (e.g. Sato and Yamaguchi, Neural Computation, 2003; Shen et al., Advances in Cognitive Neurodynamics ICCN, 2007; Masquelier et al., J. Neurosci. 2009; Chadwick et al., eLife 2016). These previous approaches differ to various degrees from the present work, but each offers alternative suggestions for how STDP and phase precession could interact during memory. It's not clear what the advantages are of the framework proposed here.
5. While theta sequences are the focus of the introduction, many of the same arguments could be applied to compressed representations during sharp wave ripple events. This may be worth considering. Also, given the model involves excitatory connections between neurons that represent sequences, the relevance here may be more to CA3, where such connectivity is more common, rather than CA1, which is the focus of many of the studies cited in the introduction.
Reviewer #4:
This manuscript argues that phase precession enhances sequence learning by compressing a slower behavioral sequence, for example movement of an animal through a sequence of place fields, into the faster time scales associated with synaptic plasticity. The authors examine the synaptic weight change between pairs of neurons encoding different events in the behavioral sequence and find that phase precession enhances sequence learning when the learning rule is asymmetric over a relatively narrow time window (assuming the behavioral events encoded by the two neurons overlap, i.e., the place fields of the neurons overlap). For wider time windows, however, phase precession does not appear to convey any advantage.
I thought the study was interesting – the idea that phase precession "compresses" sequences into theta cycles has been around for a bit, but this is the first study that I've seen that does analysis at this level. I think many researchers who are interested in temporal coding would find the work very interesting.
I did, however, have a little trouble understanding what conclusions the study draws about the brain (if we are supposed to draw any). The authors conclude that phase precession facilitates learning if the learning window is shorter than a theta cycle – that seems in line with published STDP rules from slice studies. However, Figure 4 seems to imply that the authors have recovered a 1 second learning window from Bittner's data – are they suggesting that phase precession is not an asset for the learning in that study (or did I miss something)? Are there predictions to be made about how place fields must be spaced for optimal sequence learning?
Also, I'd be curious to know how the authors analysis fits in with replay – is the assumption that neuromodulation is changing the time window or other learning dynamics?
https://doi.org/10.7554/eLife.67171.sa1
Author response
[Editors’ note: the authors resubmitted a revised version of the paper for consideration. What follows is the authors’ response to the first round of review.]
Reviewer #1:
This paper develops a model of the way in which phase precession modulates synaptic plasticity. The idea and derivations are simple and easy to follow. The results, while not surprising, are overall interesting and important for researchers on phase precession and sequence learning. There are some useful analytical approximations in the paper. I have several comments:
1. The paper is all based on pairwise STDP.
How robust are these results when we consider perhaps more realistic STDP rules like triplet STDP? Perhaps this is something to discuss or explore, because it is not a priori obvious to me.
In “pairwise STDP”, pairs of presynaptic and postsynaptic spikes are considered. Conversely, “triplet STDP” considers triplet motifs of spiking (either 2 presynaptic – 1 postsynaptic or 2 postsynaptic – 1 presynaptic). Triplet STDP models can account for a number of experimental findings that pairwise STDP fails to reproduce, for example the dependence on the repetition frequency of spike pairs. However, it is unclear whether our results on sequence learning still hold for generic triplet STDP rules.
To investigate the relative weight change (forward weight minus backward weight), we reproduced results like the ones shown in Figure 3 of our manuscript for generic versions of the triplet rule from Pfister and Gerstner (2006), who fitted triplet-rule models to data from the hippocampus. Their model consists of four terms: pairwise potentiation, pairwise depression, triplet potentiation, and triplet depression. To be able to compare triplet STDP models with pairwise STDP models, we first simulated pairwise potentiation and pairwise depression according to the learning rule from Bi and Poo (1998). The results closely resembled our Figure 3 (see Author response image 1) because the Bi-and-Poo rule is close to the perfectly odd learning rule used for Figures 3 and 4 (see also the new simulation results shown in Figure 4C, for example for the Bi-and-Poo rule). We then added triplet terms with the parameters of the minimal model described in Table 4 (“All-to-All”, “Minimal” model, Pfister and Gerstner, 2006). This “minimal” model, which adds only one triplet term (the “triplet potentiation term”, i.e., a 1-pre-2-post term) to pairwise STDP, was regarded as the best model in terms of number of free parameters and fitting error. We found that the results were very similar to the pairwise Bi-and-Poo rule (see Author response image 1).
The small difference between pairwise and triplet STDP is probably due to the fact that the time constant for the triplet potentiation term is only 40 ms, which is shorter than the average ISI in our simulations, with values typically > 50 ms (the minimum average ISI is 50 ms in the center of the firing field with a peak rate of 20 spikes/s; see, e.g., Figure 2A). This comparison therefore suggests that we can neglect triplet potentiation in our framework because the time constant of the triplet term is short enough.
We note, however, that we found larger differences (for weight changes, benefit, and SNR) when we used (instead of the “minimal” model) the “All-to-All”, “Full” model from Table 4 in Pfister and Gerstner (2006), which included triplet depression, i.e., a 2-pre-1-post term. This marked difference between “minimal” and “full” models in our simulations was surprising because Pfister and Gerstner (2006) observed very similar fitting errors. A closer inspection revealed the origin of this difference: the triplet depression term has a time constant τ_x = 946 ms, which leads in our simulations to a strong accumulation of the corresponding dynamic variable r2 that keeps track of presynaptic events (equation 2 in Pfister and Gerstner, 2006). This accumulation is particularly relevant in our simulations, in which we consider widths of firing fields and time lags between firing fields on the order of one second. On the other hand, the data to which Pfister and Gerstner (2006) fitted their model did not critically rely on such long delays; instead, the data were dominated by pairs and triplets with time differences on a 10 ms scale. Therefore, the long time constant τ_x = 946 ms of triplet depression should not be a critical parameter of their model. This fact was recognized by Pfister and Gerstner (2006), who showed that “minimal” triplet learning rules are almost as good as the “full” ones but have two fewer parameters. Therefore, the “minimal” model was regarded as the best model. Because this “minimal” model did not change the outcome in our sequence-learning paradigm, we conclude that our results obtained for pairwise STDP are robust with respect to effects originating from triplets of spikes.
To indicate in the manuscript that our results are robust when triplet STDP is considered, we have added simulation results of the “minimal” triplet rule from Pfister and Gerstner (2006) to Figure 4C and have included a paragraph on triplet rules in the Discussion.
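The structure of such a minimal triplet rule (pairwise potentiation and depression plus one 1-pre-2-post triplet potentiation term) can be sketched in a few lines of event-driven code. The amplitudes below are illustrative placeholders, not the fitted values of Pfister and Gerstner (2006); only the 40 ms triplet time constant is taken from the text above:

```python
import math

# Time constants (s); tau_y = 40 ms is the triplet potentiation time
# constant mentioned in the text, the pairwise constants are illustrative.
tau_plus, tau_minus, tau_y = 0.0168, 0.0337, 0.040
# Illustrative (hypothetical) amplitudes, not the published fits.
A2_plus, A2_minus, A3_plus = 5e-3, 7e-3, 6e-3

def run(pre_spikes, post_spikes):
    """Event-driven simulation of the minimal all-to-all triplet rule;
    returns the total weight change for the given spike trains."""
    events = sorted([(t, 'pre') for t in pre_spikes] +
                    [(t, 'post') for t in post_spikes])
    r1 = o1 = o2 = 0.0   # pre trace, post trace, slow post (triplet) trace
    t_last, dw = 0.0, 0.0
    for t, kind in events:
        gap = t - t_last
        r1 *= math.exp(-gap / tau_plus)
        o1 *= math.exp(-gap / tau_minus)
        o2 *= math.exp(-gap / tau_y)
        if kind == 'pre':
            dw -= o1 * A2_minus                   # pairwise depression
            r1 += 1.0
        else:
            dw += r1 * (A2_plus + A3_plus * o2)   # pair + triplet potentiation
            o2 += 1.0                             # o2 read before this update
            o1 += 1.0
        t_last = t
    return dw

# Pre-before-post pairings (10 ms lag, repeated at 10 Hz) potentiate ...
ltp = run([0.1 * k for k in range(20)], [0.1 * k + 0.01 for k in range(20)])
# ... and post-before-pre pairings depress.
ltd = run([0.1 * k + 0.01 for k in range(20)], [0.1 * k for k in range(20)])
assert ltp > 0 and ltd < 0
```

The triplet contribution enters only through the slow postsynaptic trace `o2`, which is why, with a 40 ms time constant and inter-spike intervals above 50 ms, its effect on the simulations discussed above remains small.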
2. What are the widths reported in the literature for hippocampus? With all the recent literature on the dependence of STDP in vitro on Ca^{2+} levels, one has to take this with a grain of salt of course. I would think it's around 100ms which would make the benefit small?
The time constants reported in the literature for hippocampus are on the order of 15–30 ms (e.g. Abbott and Nelson, 2000; Bi and Poo, 2001; Wittenberg and Wang, 2006; Inglebert et al., 2020). In this case, the benefit is large (Figure 4B), and the “width” of a learning window, i.e., the range of time intervals in which weights are affected, appears to be in the range of 100 ms (see e.g. Figure 1 in Bi and Poo, 2001).
We agree with the reviewer that STDP depends on the Ca^{2+} level, as recent studies show (e.g. Inglebert et al., 2020). However, the total width of the STDP kernels rarely exceeds 100 ms — corresponding to ~50 ms for each lobe (positive and negative time lags). These widths are in line with the reported time constants of 15–30 ms mentioned above.
To create a stronger connection to the STDP literature, we added and discussed the following references to the manuscript: Froemke et al., 2005; Wittenberg and Wang, 2006; Inglebert et al., 2020. Furthermore, we estimated the SNR of the synaptic weight change for a number of experimental STDP kernels and included the results in Figure 4C, as a comparison to our theoretical results.
3. The approximation of theta as an oscillator that shows no dampening is of course not realistic; in reality autocorrelation functions will show decreasing sidelobes. It's maybe not a problem, but could actually benefit your model.
The reviewer is right in that we do not explicitly model dampening sidelobes in the spiking autocorrelation function. For narrow STDP windows (K ≪ 1/ω ≪ σ), however, dampening sidelobes would have no effect on the synaptic weight change because only the slope of the cross-correlation function around zero time lag matters (Figure 2C,D and Equation 3 in the manuscript). Also for wide STDP windows (K ≫ σ), dampening sidelobes of the theta modulation would not make a difference because we show (e.g. in Figure 4 for τ > 0.3 s) that “phase precession” and “phase locking” are basically identical to the case of no theta. In the Discussion (section “Key features of phase precession for temporal-order learning: generalization to nonperiodic modulation of activity”), we even mention a scenario in which temporal-order learning could benefit from spike statistics similar to phase precession but in the absence of any periodic modulation.
Taken together, we think that the shape of the theta modulation is not a critical model assumption. To better emphasize this, we have added a brief note on irrelevant features of the autocorrelation already at the end of the paragraph following Equation 3.
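The narrow-window argument can be checked numerically: for an odd window much narrower than the correlation time scale, the weight change $\mathrm{\Delta}w=\int W(s)\,C(s)\,\mathrm{d}s$ depends only on the slope of the cross-correlation at zero lag, so structure far from zero lag (such as damped sidelobes) is irrelevant. The window shape and correlation function below are illustrative stand-ins, not the model's exact functions:

```python
import math

# Check that for a narrow odd window W (width tau << correlation scale
# sigma), the weight change  dw = ∫ W(s) C(s) ds  is well approximated by
# the slope of C at zero lag:  dw ≈ C'(0) * ∫ s W(s) ds.

tau, sigma, T = 0.02, 1.0, 0.5   # window width, field width, field offset (s)

def W(s):
    """Illustrative odd window: antisymmetric exponential."""
    return math.copysign(math.exp(-abs(s) / tau), s) if s != 0 else 0.0

def C(s):
    """Illustrative smooth cross-correlation, peaked at lag T."""
    return math.exp(-(s - T) ** 2 / (2 * sigma ** 2))

def C_prime(s):
    return -(s - T) / sigma ** 2 * C(s)

# Full integral (midpoint rule over the effective support of W).
n, lo, hi = 100_000, -0.5, 0.5
ds = (hi - lo) / n
dw_full = sum(W(lo + (k + 0.5) * ds) * C(lo + (k + 0.5) * ds)
              for k in range(n)) * ds

# Slope approximation; for this window, ∫ s W(s) ds = 2 tau².
dw_slope = C_prime(0.0) * 2 * tau ** 2

assert abs(dw_full - dw_slope) / abs(dw_slope) < 0.01
```

Because `C` enters only through its slope at zero lag, replacing it with any other correlation function having the same slope there (e.g. one with damped sidelobes) would leave `dw_full` essentially unchanged, which is the point made in the response above.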
4. To say that phase precession benefits sequence learning is maybe not the whole story. It seems that in general long STDP kernels benefit sequence learning for place fields, and they do this equally well for phase or no phase precession. If the STDP kernels are short, sequence learning is more difficult (and requires huge place field overlap), and phase precession is beneficial for that.
Indeed, phase precession can benefit temporal-order learning for short STDP kernels and overlapping firing fields, and this is one of our main findings. To make sure that this is not (erroneously) generalized to wide STDP kernels, we checked our wording throughout the manuscript to be clear about the “short STDP kernels” condition. As a result, we added the words “for short synaptic learning windows” to the abstract.
On the other hand, wide STDP kernels can also facilitate temporal-order learning, but only if they are sufficiently asymmetric. Any symmetric component of the STDP kernel disturbs temporal-order learning, as we exemplify in Figure 4C by applying the learning window from Bittner et al. (2017) to our framework. We further fully agree with the reviewer that the temporal fine structure of spiking (phase precession vs. phase locking) becomes irrelevant for wide STDP kernels. For short kernels, however, phase precession makes all the difference (Figures 3 and 4).
To further clarify what we mean by “sequence learning”, we defined the terms “sequence learning” (in the Results, below the first equation) and “temporalorder learning” (in the Introduction) and replaced the more general term “sequence learning” by “temporalorder learning” at many suitable places in the manuscript.
How does benefit interact with place field overlap?
The larger the overlap, the stronger the benefit. This is shown in Figure 3B and described analytically in Equation 7, where we relate the benefit to the field separation $T_{ij}$ (which varies inversely with the overlap). We improved the text below Equation 7 to more strongly emphasize the relationship between the field separation $T_{ij}$ and the field overlap. We additionally made sure to include the field overlap in the summary sentence of the paragraph describing Figure 3B.
If the place fields are highly overlapping, then how does STDP kernel size regulate the sequence learning?
For overlapping fields ($T_{ij} = \sigma$), we show in Figure 4 that the weight change (Figure 4A) and the SNR (Figure 4C) increase with increasing STDP kernel width. We note that these results crucially depend on the STDP kernel being perfectly asymmetric. For learning windows with a symmetric component (dots in Figure 4C), the SNR is much lower and decreases for large widths. To better emphasize this in the manuscript, we included further experimentally observed learning windows in Figure 4C (for more details, see also our response to the next question by the reviewer).
Are longer STDP kernels invariantly better for sequence learning in the hippocampus?
Long STDP kernels are not invariantly better for temporal-order learning because the outcome depends on the symmetry of the STDP kernel. A symmetric component of the STDP kernel reduces temporal-order learning — as we exemplify in Figure 4C by applying the very long (on the order of one second) and mostly symmetric learning window from Bittner et al. (2017) to our framework. Additionally, in the updated version of Figure 4C, we added several experimentally found learning windows (which also had symmetric components) from other studies (dots). SNRs were again below the blue line, which indicates the maximal SNR for purely asymmetric windows and phase precession. The most extreme case, i.e., a long (τ ≫ σ) and purely asymmetric STDP kernel, would yield the largest weight change and largest SNR, but, in this scenario, phase precession would not alter the result compared to phase locking or no theta (as we show in Figure 4A–C). However, long and predominantly asymmetric STDP windows have — to date and to the best of our knowledge — not been experimentally observed.
Or does this depend on place-field separation?
For longer, asymmetric STDP kernels, the dependence of the weight change on place-field separation is given by Equation 9 and illustrated in Figure 5D: with increasing Tij, the weight change quickly increases, reaches a maximum, and then slowly decreases. Figure 5E shows that the SNR increases with increasing Tij but quickly settles at a constant value. We again note that the weight change and SNR would be lower for learning windows with a symmetric component.
In other words, are there some scenarios where short STDP kernels have a clear benefit and where phase precession then gives a huge boost?
We assume that the reviewer means "benefit" as we use the term in the manuscript, i.e., the benefit of phase precession over phase locking (rather than of short vs. long STDP kernels, which we discussed above). In that case, Figure 4B shows that short (τ < 100 ms) STDP kernels generate a clear benefit, i.e., phase precession leads to clearly larger weight changes than phase locking (Figure 4A); this result is also supported by Equation 7 (solid black line in Figure 4B), which shows how the benefit depends on all parameters of the model. The smaller τ, the stronger the benefit. This effect is confirmed by the signal-to-noise ratio analysis in Figure 4C.
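The size of this effect can be made concrete with a minimal Monte Carlo sketch (not part of the manuscript; the place-field width σ = 200 ms, window width τ = 20 ms, 8 Hz theta, peak rate, and precession slope are all hypothetical choices), comparing the weight change under a short, purely asymmetric window with and without phase precession:

```python
import numpy as np

rng = np.random.default_rng(1)

def place_cell_spikes(center, sigma=0.2, f_theta=8.0, precession=True,
                      rate_max=40.0, dt=1e-3, t_max=4.0):
    """Inhomogeneous-Poisson spikes of a theta-modulated Gaussian place field.
    With precession, the cell oscillates slightly faster than theta, so its
    spikes drift to earlier theta phases as the field is crossed (hypothetical
    parametrization: one precession cycle across +-3 sigma)."""
    t = np.arange(0.0, t_max, dt)
    phase = 2.0 * np.pi * f_theta * t
    if precession:
        phase += np.pi * (t - center) / (3.0 * sigma)
    rate = (rate_max * np.exp(-(t - center) ** 2 / (2.0 * sigma ** 2))
            * 0.5 * (1.0 + np.cos(phase)))
    return t[rng.random(t.size) < rate * dt]

def weight_change(pre, post, tau=0.02):
    """Summed pairwise updates for a short, purely asymmetric STDP window:
    potentiation when pre leads post, depression for the reverse order."""
    if pre.size == 0 or post.size == 0:
        return 0.0
    lags = post[:, None] - pre[None, :]   # post-minus-pre spike-time lags
    return float(np.sum(np.sign(lags) * np.exp(-np.abs(lags) / tau)))
```

Averaging `weight_change` over many traversals of two fields one σ apart, the phase-precessing case yields a clearly larger net weight change than the phase-locked case (`precession=False`), because precession places pre-before-post spike pairs at lags of a few tens of milliseconds, where the short window is large.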
We believe that the reviewer's last set of questions suggests that we should clarify the distinction between symmetric/asymmetric (mathematically even/odd) STDP kernels in the manuscript. We did so in the Results, and we also chose more informative headings for the subsections of the Results.
Reviewer #2:
Reifenstein and Kempter propose an analytical formulation for synaptic plasticity dynamics with STDP and phase precession as observed in hippocampal place cells.
The main result is that phase precession increases the slope of the two-cell cross-correlation around the origin, which is the key driver of plasticity under asymmetric STDP, thereby improving the encoding of sequences in the synaptic matrix.
While the overall concept of phase precession favoring time compression of sequences and plasticity (when combined with STDP) has been present in the literature since the seminal Skaggs and McNaughton 1996 paper, the novel contribution of this study is the elegant analytical formulation of the effect, which can be very useful to embed this effect into a network model.
We thank the reviewer for this very positive assessment of our work and particularly for pointing out the appeal of the analytical approach.
As a suggestion for a further direction, one could look at models (e.g., Tsodyks et al., Hippocampus 1996) where asymmetries in synaptic connections are the driver of phase precession. One could use this formulation, e.g., to see how experience may induce phase-precessing place fields by shaping synaptic connections (maybe starting from a small symmetry-breaking term in the initial condition).
We thank the reviewer for this interesting direction to extend our work. In the current manuscript, we assume that asymmetries in synaptic connections do not generate phase precession (in contrast to Tsodyks et al., 1996). For simplicity of the analytical treatment, we even assume that recurrent connections do not affect the dynamics. We thus hypothesize that phase precession is not generated by the local, recurrent network; instead, phase precession is inherited or generated locally by a cellular/synaptic mechanism. After experience, the resulting asymmetric connections could indeed also generate phase precession (as demonstrated by the simulations of Tsodyks et al., 1996), and this phase precession could then even be similar to the one that initially helped to shape the synaptic connections. Finally, inherited or locally generated (cellular/synaptic) phase precession and network-generated phase precession could interact (as reviewed, for example, in Jaramillo and Kempter, 2017). We added this important line of thought to the Discussion (new section "Model assumptions") of our manuscript.
The analytical calculation seems crystal clear to me (and quite simple, once one finds the right framework).
We thank the reviewer for this encouraging comment.
Reviewer #3:
The study uses analytical and numerical approaches to quantify conditions in which spike timingdependent plasticity (STDP) and theta phase precession may promote sequence learning. The strengths of the study are that the question is of general interest and the analytical approach, in so far as it can be applied, is quite rigorous.
We thank the reviewer for these positive comments on general interest and thorough analysis.
The weaknesses are that the extent to which the conclusions would hold in more physiological scenarios is not considered, and that the study does not investigate sequences but rather the strength of synaptic connections between sequentially activated neurons.
We regret that we did not make sufficiently clear how the conclusions of this theoretical work could be applied to more physiological scenarios, and why the predicted synaptic changes have strong implications for the replay of sequences. We respond to this general critique in detail below (see points 1-5).
1. While the stated focus is on sequences, the key results are based on measures of synaptic weight between sequentially activated neurons. Given the claims of the study, a more relevant readout might be generation of sequences by the trained network.
We agree that an appropriate readout of the result of learning would be the generation of sequences by a trained network. However, the fundamental basis of replay is a specific connectivity, whereas the detailed characteristics of replay also depend on a variety of other parameters that define the neurons and the network. To illustrate the enormous complexity of the relation between the weights in even a very simple network and the properties of replay, we refer the reviewer, for example, to Cheng (2013) or Chenkov et al. (2017). The reviewer's suggestions below (Sato and Yamaguchi, 2003; Shen et al., 2007) also offer insights into the challenges of network simulations that include replay. To repeat such network simulations would be well beyond the scope of our manuscript, which aims to reveal the intricate relation between plasticity and weight changes. Because replay is indeed an important readout, we nevertheless thoroughly linked our work to the literature on the generation of sequences in trained networks; for example, we now also discuss the work by Gauy et al. (2018), Malerba and Bazhenov (2019), and Gillett et al. (2020). Additionally, we have extensively discussed models of replay in the Discussion.
To state our aims more clearly, to weaken our claims, and to scale down (possibly wrong) expectations, we added at the end of the first paragraph of the Results (where we first mention "replay") a remark on the scope of this work: "We note, however, that in what follows we do not simulate such a replay of sequences, which would depend also on a vast number of parameters that define the network; instead, we focus on the underlying changes in connectivity, which is the very basis of replay, and draw connections to replay in the Discussion."
On the other hand, we note that we have removed the paragraph on replay speed because we felt that the numbers used for its estimation were questionable: (i) the delay of 10 ms for the propagation of activity from one assembly to the next in Chenkov et al. (2017) might depend on the specific choice of parameters, and (ii) the estimated spatial width of a place field (here 0.09 m) is realistic but arbitrary; much larger place fields exist. Therefore, the two numbers that are the basis for the estimation of the replay speed are variable, and the replay speed (the ratio of the two) might vary strongly.
2. The target network appears very simple. Assuming it can generate sequences, it's unclear whether the training rule would function under physiologically relevant conditions. For example, can the network trained in this way store multiple sequences? To what extent do sequences interfere with one another?
We agree with the reviewer that the anatomy of the target network appears very simple. However, the problem of evaluating weight changes as a function of phase precession, place-field properties, and STDP parameters is quite complex. Though it is unclear whether physiologically relevant conditions can ever be fully achieved in a computational model (e.g., Almog and Korngreen, 2016, J Neurophysiol), our model attempts to carefully reflect the essence of biological reality, and thus we consider our parameter settings as physiologically realistic as possible. Other theoretical work has demonstrated that multiple sequences can be stored in a network and that the memory capacity for sequences can be large (e.g., Leibold and Kempter, 2006; Trengove et al., 2013; Chenkov et al., 2017). We note that the corresponding network simulations are usually quite involved and typically depend on a vast number of parameters. We thus think that these kinds of network simulations are well beyond the scope of the current study.
We nevertheless address the topics of replay in general, and multiple (and possibly overlapping) sequences in particular, in the Discussion (sections "Other mechanisms and models for sequence learning" and "Replay of sequences and storage of multiple and overlapping sequences"), and we have now added a note on storing multiple sequences and memory capacity.
3. In a behaving animal movement speed varies considerably, with the consequence that the time taken to cross a place field may vary by an order of magnitude. I think it's important to consider the implications that this might have for the results.
Running speed indeed affects field size and field separation (when measured in units of time). Because our theory investigates plasticity as a function of field size (which we quantify in units of time) and field separation (also in units of time; Figures 3 and 5; Equations 4, 5, and 7; as well as the derivation of the maximal benefit in the paragraph after Equation 7), our results include variations in running speed.
To make all this more explicit, we clarified in the manuscript (in the second paragraph after Equation 3, starting with "As a generic example…") that field width and field separation are measured in units of time (and not in units of length).
4. Phase precession, STDP and sequence learning have been considered in previous work (e.g. Sato and Yamaguchi, Neural Computation, 2003; Shen et al., Advances in Cognitive Neurodynamics ICCN, 2007; Masquelier et al., J. Neurosci. 2009; Chadwick et al., eLife 2016). These previous approaches differ to various degrees from the present work, but each offers alternative suggestions for how STDP and phase precession could interact during memory. It's not clear what the advantages are of the framework proposed here.
We thank the reviewer for the hint to these references and have included all of them in our manuscript. In particular, Sato and Yamaguchi (2003) add a valuable contribution, investigating phase precession and STDP in a network of coupled phase oscillators, a clear and rewarding approach, yet somewhat detached from biology.
We would like to point out that our formulation of phase precession and STDP intends to reflect biological reality as closely as possible. All individual components, like phase precession, STDP, and place fields, have been experimentally described. This is in contrast to, e.g., phase precession in interneurons, as assumed by Chadwick et al. (2016). Compared to the cited new references, a major advantage of our approach is its analytical tractability. Our mathematical treatment of the problem yields a clear description of parameter dependencies, in contrast to, e.g., Shen et al. (2007), who investigate only one example of a small-network simulation and thus cannot predict the dependence of learning on the various parameters.
Finally, Masquelier et al. (2009) offer an alternative approach to learning using phase coding and STDP, and Chadwick et al. (2016) nicely explain the generation of phase precession via recurrent networks.
5. While theta sequences are the focus of the introduction, many of the same arguments could be applied to compressed representations during sharp-wave ripple events. This may be worth considering. Also, given that the model involves excitatory connections between neurons that represent sequences, the relevance here may be more to CA3, where such connectivity is more common, rather than CA1, which is the focus of many of the studies cited in the introduction.
The place-field width is also given in units of time (seconds); we now point this out more clearly (in the second paragraph after Equation 3, starting with "As a generic example…").
Reviewer #4:
This manuscript argues that phase precession enhances sequence learning by compressing a slower behavioral sequence, for example the movement of an animal through a sequence of place fields, into the faster time scales associated with synaptic plasticity. The authors examine the synaptic weight change between pairs of neurons encoding different events in the behavioral sequence and find that phase precession enhances sequence learning when the learning rule is asymmetric over a relatively narrow time window (assuming the behavioral events encoded by the two neurons overlap, i.e., the place fields of the neurons overlap). For wider time windows, however, phase precession does not appear to convey any advantage.
I thought the study was interesting – the idea that phase precession "compresses" sequences into theta cycles has been around for a bit, but this is the first study that I've seen that does analysis at this level. I think many researchers who are interested in temporal coding would find the work very interesting.
We thank the reviewer for the very positive comments.
I did, however, have a little trouble understanding what conclusions the study draws about the brain (if we are supposed to draw any). The authors conclude that phase precession facilitates learning if the learning window is shorter than a theta cycle – that seems in line with published STDP rules from slice studies. However, Figure 4 seems to imply that the authors have recovered a 1-second learning window from Bittner's data – are they suggesting that phase precession is not an asset for the learning in that study (or did I miss something)?
Correct. In Figure 4C we compare the signal-to-noise ratio (SNR) of the weight change as a function of the width of the learning window. For asymmetric windows (colored solid lines), phase precession is helpful for temporal-order learning, but only for learning windows that are narrow compared, e.g., to the theta oscillation cycle. Interestingly, the SNR increases and saturates for larger widths of the learning window, which seems to suggest that very wide learning windows are optimal for temporal-order learning. To indicate that this behavior critically depends on the symmetry of the learning window, we included in this graph the SNR obtained with the learning window from Bittner's data (dot marked "Bittner et al. (2017)" in the updated version of Figure 4), which has a strong symmetric component. In this case, the SNR is much lower than the SNR of an asymmetric window of the same width. Even though the Bittner window has a low SNR for temporal-order learning, it may still be useful for other tasks, for example place-field formation. The second-long learning rule seems to serve this purpose well. For temporal-order memory, however, it is not suited because of its strong symmetric component.
To indicate that there are published STDP rules that are more useful for temporal-order learning, we now include in the same graph (new Figure 4C) the SNR for several other experimentally observed learning windows. These windows have strong asymmetric components and, depending on the proportion of symmetric and asymmetric parts, can reach SNR values close to the theoretical prediction for perfectly asymmetric windows. This graph (and many other results presented in our study) suggests that phase precession, in combination with experimentally determined narrow asymmetric learning windows, could be a mechanism supporting temporal-order learning in hippocampal networks. This conclusion is also summarized in a similar way in the abstract.
Are there predictions to be made about how place fields must be spaced for optimal sequence learning?
Yes; for example, Figure 3 predicts that for temporal-order learning with narrow learning windows there is an optimal overlap between firing fields. For asymmetric STDP windows, the maximum weight change (Figure 3A) and the maximum SNR (Figure 3C) are achieved for partially overlapping firing fields whose separation Tij is in the range of the field width σ. On the other hand, wide STDP windows support temporal-order learning only if they are largely asymmetric (Figures 4 and 5), but in this case phase precession is not beneficial.
Also, I'd be curious to know how the authors analysis fits in with replay – is the assumption that neuromodulation is changing the time window or other learning dynamics?
A key assumption underlying our work is that neuromodulation affects the plasticity and the strength of synapses (see, e.g., the sections "Model assumptions" and "Width, shape, and symmetry of the STDP window…" in the Discussion). For example, acetylcholine (among other neuromodulators) seems to play a particular role by differentially modulating the distinct phases of memory encoding and memory consolidation. In our work we follow the idea (proposed by Hasselmo, 1999, and supported by many later studies) that during encoding, excitatory feedback connections (and replay) are suppressed to avoid interference from previously stored information, but that the same synapses are highly plastic in this phase in order to store sequences; this may be mediated by high levels of acetylcholine. On the other hand, during slow-wave sleep, when acetylcholine levels are low and synapses are strong but less plastic, a sequence imprinted in recurrent synaptic weights can be replayed without strongly changing those weights.
We have mentioned all these ideas on differential modulation of synapses and replay at various points in the manuscript. To better outline and summarize these important points in our manuscript, we have thoroughly revised and extended the Discussion.
https://doi.org/10.7554/eLife.67171.sa2
Article and author information
Author details
Funding
German Federal Ministry for Education and Research (01GQ1705)
 Richard Kempter
Deutsche Forschungsgemeinschaft (GRK 1589/2)
 Richard Kempter
Deutsche Forschungsgemeinschaft (SPP 1665)
 Richard Kempter
Deutsche Forschungsgemeinschaft (SFB 1315)
 Richard Kempter
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation; Grants GRK 1589/2, SPP 1665, and SFB 1315, project ID 327654276) and the German Federal Ministry for Education and Research (BMBF; Grant 01GQ1705). We thank Lukas Kunz, Natalie Schieferstein, Tiziano D'Albis, Paul Pfeiffer, and Adam Wilkins for helpful discussions and feedback on the manuscript. ETR and RK designed the research. ETR, IBK, and RK performed the research, and wrote and discussed the manuscript. ETR, IBK, and RK declare no conflict of interest.
Senior Editor
 Joshua I Gold, University of Pennsylvania, United States
Reviewing Editor
 Martin Vinck, Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Germany
Reviewer
 Francesco P Battaglia, Donders Institute, Netherlands
Publication history
 Received: February 3, 2021
 Accepted: March 31, 2021
 Accepted Manuscript published: April 16, 2021 (version 1)
 Version of Record published: June 3, 2021 (version 2)
Copyright
© 2021, Reifenstein et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.