Neural oscillations as a signature of efficient coding in the presence of synaptic delays
Abstract
Cortical networks exhibit 'global oscillations', in which neural spike times are entrained to an underlying oscillatory rhythm, but where individual neurons fire irregularly, on only a fraction of cycles. While the network dynamics underlying global oscillations have been well characterised, their function is debated. Here, we show that such global oscillations are a direct consequence of optimal efficient coding in spiking networks with synaptic delays and noise. To avoid firing unnecessary spikes, neurons need to share information about the network state. Ideally, membrane potentials should be strongly correlated and reflect a 'prediction error' while the spikes themselves are uncorrelated and occur rarely. We show that the most efficient representation is when: (i) spike times are entrained to a global gamma rhythm (implying a consistent representation of the error); but (ii) few neurons fire on each cycle (implying high efficiency), while (iii) excitation and inhibition are tightly balanced. This suggests that cortical networks exhibiting such dynamics are tuned to achieve a maximally efficient population code.
Introduction
Oscillations are a prominent feature of cortical activity. In sensory areas, one typically observes 'global oscillations' in the gamma-band range (30–80 Hz), alongside single neuron responses that are irregular and sparse (Buzsáki and Wang, 2012; Yu and Ferster, 2010). The magnitude and frequency of gamma-band oscillations are modulated by changes to the sensory environment (e.g. visual stimulus contrast; Henrie and Shapley, 2005) and to the behavioural state of the animal (e.g. attention; Fries et al., 2001). This has led a number of authors to propose that neural oscillations play a fundamental role in cortical computation (Fries, 2009; Engel et al., 2001). Others argue that oscillations emerge as a consequence of interactions between populations of inhibitory and excitatory neurons, and do not perform a direct functional role in themselves (Ray and Maunsell, 2010).
A prevalent theory of sensory processing, the 'efficient coding hypothesis', posits that the role of early sensory processing is to communicate information about the environment using a minimal number of spikes (Barlow, 1961). This implies that the responses of individual neurons should be as asynchronous as possible, so that they do not communicate redundant information (Simoncelli and Olshausen, 2001). Thus, oscillations are generally seen as detrimental to efficient rate coding, as they tend to synchronise neural responses and thus introduce redundancy.
Here we propose that global oscillations are a necessary consequence of efficient rate coding in recurrent neural networks with synaptic delays.
In general, to avoid communicating redundant information, neurons driven by common inputs should actively decorrelate their spike trains. To illustrate this, consider a simple setup in which neurons encode a common sensory variable through their firing rates, with a constant value added to the sensory reconstruction each time a neuron fires a spike (Figure 1a).
With a constant input, the reconstruction error is minimized when the population fires spikes at regular intervals, with no two neurons firing at the same time (as in Figure 1a, left panel, in red). Achieving this ideal, however, requires extremely fast inhibitory connections, so that each time a neuron fires a spike it suppresses all other neurons from firing (Boerlin et al., 2013). In reality, inhibitory connections are subject to unavoidable delays (e.g. synaptic and transmission delays), and thus cannot always prevent neurons from firing together. Worse, in the presence of delays, inhibitory signals intended to prevent neurons from firing together can actually have the reverse effect of synchronising the network, so that many neurons fire together on each cycle (as in Figure 1a, middle and right panels). This is analogous to the so-called 'hipster effect', where a group of individuals strive to look different from each other but, due to delayed reaction times, end up making similar decisions and all looking alike (Touboul, 2014).
Spike synchrony generally degrades coding performance. For example, Figure 1c shows how, in the regular spiking network described above, coding error increases with the number of synchronous spikes per cycle (while the firing rate is held constant). It is thus tempting to conclude that neural networks should do everything possible to avoid synchronous firing. However, one also observes that a completely asynchronous network, in which neurons fire with independent Poisson variability (Figure 1b), performs far worse than the regular spiking network, even when multiple neurons fire together on each cycle (Figure 1c, horizontal dashed line).
Thus, to perform efficiently, neural networks face a trade-off (Figure 1d). On the one hand, recurrent connections should coordinate the activity of different neurons in the network, so as to achieve an efficient and accurate population code. On the other hand, in the presence of synaptic delays, it is important that these recurrent signals do not overly synchronize the network, as this will reduce coding performance.
Here, we show that, in a network of leaky integrate-and-fire (LIF) neurons optimized for efficient coding, this trade-off is best met by adding noise to the network (e.g. via random fluctuations in membrane potential, or synaptic failure) to ensure that: (i) neural spike trains are entrained to a global oscillatory rhythm (for a consistent representation of global information), but (ii) only a small fraction of cells fire on each oscillation cycle. In this regime, individual neurons fire irregularly (Shadlen and Newsome, 1998) and exhibit weak pairwise correlations (Schneidman et al., 2006), despite the presence of rhythmic population activity. Moreover, excitation and inhibition are tightly balanced on each oscillation cycle, with inhibition lagging excitation by a few milliseconds (Atallah and Scanziani, 2009; Wehr and Zador, 2003). Thus, 'global oscillations' come about as a direct consequence of efficient rate coding in a recurrent network with synaptic delays.
Results
Efficient coding in an idealized recurrent network
It is instructive to first consider the behaviour of an idealized network, with instantaneous synapses. For this, we consider a model proposed by Boerlin et al., in which a network of integrate-and-fire neurons is optimized to efficiently encode a time-varying input. As the model has already been described elsewhere (Boerlin et al., 2013), we restrict ourselves to outlining the basic principles, with a mathematical description reserved for the Materials and methods.
Underlying the model is the idea that downstream neurons should be able to reconstruct the input to the network by performing a linear summation of its output spike trains. To do this efficiently (i.e. with as few spikes as possible), the spiking output of the network is fed back and subtracted from its original input (Figure 2a). In consequence, the total input to each neuron is equal to a 'prediction error': the difference between the original input and the network reconstruction. This prediction error is also reflected in the neural membrane potentials. When a neuron's membrane potential exceeds a constant threshold, it fires a spike; recurrent feedback then reduces the prediction error encoded by other neurons, preventing them from firing further redundant spikes.
To illustrate the principles underlying the model, we consider a network of three identical neurons. Figure 2b shows how spikes fired by the network are 'read out' to obtain a continuous reconstruction. Each time a neuron fires a spike, it increases the reconstruction by a fixed amount, decreasing the difference between the input and the neural reconstruction. Immediately afterwards, feedback connections alter the membrane potentials of the other neurons, to ensure that they maintain a consistent representation of the error and do not fire further redundant spikes (Figure 2b, lower panel). As a result, membrane potentials are highly correlated, while spikes are asynchronous.
Figure 2c shows the behaviour of the network in response to a constant input. To optimally encode a constant input, the network generates a regular train of spikes (as in the left panel of Figure 1a), resulting in a narrow distribution of population inter-spike intervals (ISIs) (Figure 2d). Neural membrane potentials, which encode a common prediction error, fluctuate in synchrony, with a frequency dictated by the population firing rate (Figure 2c, lower panel). However, as only one neuron fires per cycle, the spike trains of individual neurons are irregular and sparse, resulting in a near-exponential distribution of single-cell ISIs (Figure 2e).
Efficient coding with synaptic delays
In real neural networks, recurrent inhibition is not instantaneous, but subject to synaptic and transmission delays. Far from being a biological detail, even very short synaptic delays can profoundly change the behaviour of the idealized efficient coding network, and pose fundamental limits on its performance.
To render the idealized model biologically feasible, we extended it in two ways. First, to comply with Dale's law, we introduced a population of inhibitory neurons, which mediates recurrent inhibition (Figure 3a). In our implementation, the excitatory and inhibitory populations encode two separate reconstructions of the target variable, an 'excitatory reconstruction' and an 'inhibitory reconstruction'. Inhibitory neurons, which receive input from excitatory neurons, fire spikes in order to predict and cancel the inputs to the excitatory population. Consequently, the inhibitory reconstruction closely tracks fluctuations in the excitatory reconstruction (see Materials and methods).
Second, and more importantly, we replaced the instantaneous synapses in the idealized model with continuous synaptic dynamics, as shown in Figure 3b (see Materials and methods). As stated previously, adding synaptic delays substantially alters the performance of the network. Without delays, recurrent inhibition prevents all but one cell from firing per oscillation cycle, resulting in an optimally efficient code (Figure 3c, left panel). With delays, however, inhibition cannot always act fast enough to prevent neurons from firing together. As a result, neural activity quickly becomes synchronised, and the sensory reconstruction is destroyed by large population-wide oscillations (Figure 3c, middle panel). To improve performance, one can increase the membrane potential noise and spike threshold, so as to reduce the chance of neurons firing together (Figure 3c, right panel). With too much noise, however, the firing of different neurons becomes uncoordinated, and network performance is diminished (see below).
We compared the performance of the efficient coding network (with excitatory/inhibitory populations and synaptic delays) to: (i) an ‘ideal’ model with no delays and (ii) a ‘rate model’, consisting of a population of independent Poisson units whose firing rate varies as a function of the feedforward input (see Materials and methods).
In the Poisson network, random fluctuations in firing rate degrade the reconstruction, resulting in a large coding error. Increasing the population firing rate increases the signal-to-noise ratio, and thus decreases the reconstruction error (Figure 3d, red). In contrast, in the 'ideal' efficient coding network, noise fluctuations are automatically corrected for by the recurrent connections (see Materials and methods). Thus, in this network, the only source of inaccuracy comes from the discreteness of the code (where each spike adds a fixed quantity to the readout), leading to a small error (Figure 3d, blue). Finally, while synaptic delays increase the coding error (relative to the ideal efficient coding network), the error remains significantly smaller than for a Poisson network with a matching firing rate (Figure 3d, black cross).
Figure 3e illustrates the ability of the efficient coding network to track a time-varying input signal. Zooming in on a 1 s period within the trial (lower inset), one observes rhythmic fluctuations in the excitatory and inhibitory neural reconstructions. These fluctuations are essentially the same phenomenon as observed for the ideal network, where the neural reconstruction fluctuated periodically around the target signal following the arrival of each new spike (Figure 2c). However, with synaptic delays, several neurons fire together before the arrival of recurrent inhibition. As a result, oscillations are slower and larger in magnitude than for the idealised network, where only one neuron fires on each cycle.
Oscillations and efficient coding
We sought to quantify the effect of oscillations on coding performance. To do this, we varied parameters of the model, so as to alter the degree of network synchrony, while keeping firing rates the same. In the main text we illustrate the effect of adding white noise to the membrane potentials (while simultaneously varying the spike threshold, to maintain constant firing rate; see Materials and methods).
Increasing the magnitude of the membrane potential noise desynchronized the network activity, resulting in a reduction in pairwise voltage correlations (Figure 4a). With increased noise, single neural spike trains also became more irregular, reflected by an increase in the spiking CV (Figure 4b).
Coding performance, however, varied non-monotonically with the noise amplitude. The reconstruction error followed a U-shaped curve, being minimised at an intermediate level of noise (Figure 4c, solid curve). For this intermediate noise level, coding performance was significantly better than that of a network of independent Poisson units with a matching firing rate (horizontal dashed line). Interestingly, deviations between excitation and inhibition followed a similar U-shaped curve, being minimised at the same intermediate noise level (Figure 4c, thick dashed curve). Thus, optimal coding was achieved when the balance between excitation and inhibition was tightest. Further, at the optimal level of noise, the spiking CV was near unity (Figure 4b), implying irregular (near-Poisson) single-cell responses.
To further understand the effect of varying noise amplitude, we plotted the network responses and neural reconstruction in three regimes: with low, intermediate, and high noise (indicated by green, blue and red circles respectively in Figure 4a–c).
With low noise, neural membrane potentials were highly correlated, leading many neurons to fire together on each oscillation cycle (Figure 4d, upper panel). As a result, the neural reconstruction exhibited large periodic fluctuations about the encoded input, leading to poor coding performance (Figure 4d, lower panel).
At the other extreme, when the injected noise was very high, the spike trains of different neurons were uncorrelated (Figure 4f, upper panel). As, in this regime, effectively no information was shared between neurons, inhibitory and excitatory reconstructions were decoupled, and coding performance was similar to a population of independent Poisson units (Figure 4f, lower panel).
In the intermediate noise regime, for which performance was optimal, spikes were aligned to rhythmic fluctuations in the prediction error, but few neurons fired on each cycle (Figure 4e, upper panel). These dynamics were reflected in a near-exponential distribution of inter-spike intervals (Figure 4e), coupled with a narrow peak in the population firing rate spectrum (Figure 4e). In this regime, rhythmic fluctuations in the neural reconstruction were small in magnitude, and there was a tight coupling between the inhibitory and excitatory reconstructions (Figure 4e, lower panel).
Manipulating synaptic failures and neural noise
To assess the generality of the results shown in Figure 4, we manipulated the degree of synchrony in the network in different ways. First, we varied the synaptic reliability, by varying the probability that a presynaptic spike led to a change in the postsynaptic membrane potential (Figure 5a–b). Note that the average recurrent input received by each neuron varies proportionally with the synaptic transmission probability. Thus, to correct for this, and retain balance in the network, the magnitude of the recurrent connections was scaled inversely with the transmission probability (see Materials and methods).
Next, we selected a subset of cells and replaced their output with random Poisson spike trains of matching firing rate (Figure 5c–d). For both manipulations, the added membrane potential noise was held constant (equal to the 'low-noise' condition indicated by the green open circle in Figure 4a–c).
We observed qualitatively similar changes in coding performance and network synchrony, regardless of how we manipulated the noise in the network. In both cases, the coding error was smallest at an intermediate level of noise, for which the balance between inhibition and excitation was tightest (Figure 5a and c).
Oscillatory neural dynamics
We investigated the behaviour of the network in the optimal noise regime, shown in Figure 4e. Figure 6a shows the network reconstruction and population firing rates in response to a low (blue), medium (red) and high (green) amplitude stimulus, with the optimal noise amplitude (i.e. 17 mV ${\mathrm{s}}^{-1/2}$). The population firing rate was characterised by a transient peak following stimulus onset, followed by decay to a constant value. A spectrogram of the population firing rate (Figure 6a, lower panel) reveals the presence of 30–50 Hz oscillations during the period of sustained activity. Figure 6b plots the spiking response and population firing rates during a 600 ms period of sustained activity. Here, one clearly sees correlated rhythmic fluctuations in excitatory (black) and inhibitory (red) activity. The strength of these oscillations increases with stimulus amplitude (Figure 6c). Nevertheless, for all input amplitudes, individual neurons fired irregularly, with a near-exponential distribution of inter-spike intervals (Figure 6d).
We next considered the dynamics of neural membrane potentials. Previously, Yu and Ferster (2010) reported that, in area V1, visual stimulation increases gamma-band correlations between pairs of neural membrane potentials (Figure 7a). Qualitatively similar results were obtained with our model (Figure 7b). Increasing the amplitude of the feedforward input led to increased correlations between neural membrane potentials (Figure 7c), with the strongest coherence observed in the gamma-band range (Figure 7d). This is because more neurons fire spikes on each cycle, leading to stronger oscillations.
Several studies have reported a tight balance between inhibition and excitation (Atallah and Scanziani, 2009; Wehr and Zador, 2003; Cafaro and Rieke, 2010). Recently, Atallah and Scanziani (2009) reported that inhibitory and excitatory currents are precisely balanced on individual cycles of an ongoing gamma oscillation (Figure 8a). In our model, efficient coding is achieved by maintaining such a tight balance between the inhibitory and excitatory reconstructions. Thus, inhibitory and excitatory currents closely track each other (Figure 8b), with a high correlation between the amplitudes of inhibitory and excitatory currents on each cycle (Figure 8c). In common with Atallah and Scanziani's data, inhibition lags behind excitation by a few milliseconds (Figure 8d). Fluctuations in the amplitude of inhibitory and excitatory currents instantaneously modulate the oscillation frequency, with a significant correlation observed between the peak amplitude on a given oscillation cycle and the period of the following cycle (Figure 8e).
Gamma oscillations and behavioural performance
In general, the optimal network parameters depend on the properties of the feedforward sensory input. For example, the higher the input amplitude, the more noise is required to achieve the optimal level of network synchrony (Figure 9a). While the network achieves reasonable coding accuracy for a large range of different inputs, adaptively tuning the dynamics (for example, changing the noise level) can be beneficial when the relevant input range is more limited. This would affect the level of population synchrony, and thus introduce a correlation between performance and the strength of gamma oscillations.
For example, if the task is to detect weak (low amplitude) inputs, performance would be higher if top-down modulations (such as attention) reduced the level of noise, and thus increased the degree of population synchrony (Figure 9b), without significantly changing firing rates. We can thus expect higher detection performance to correlate with stronger gamma oscillations (Figure 9c). This could account for certain attention-dependent increases in gamma power and its correlation with behavioural performance (see Discussion).
Note that a similar correlation between behavioural performance and gamma power could arise from purely bottom-up effects. In the presence of input noise causing trial-by-trial changes in input strength, trials with stronger input amplitude would yield higher detection rates but also exhibit stronger gamma oscillations. In that case, however, the increase in gamma power would be associated with a commensurate increase in population firing rate.
Sensitivity to network parameters
We investigated the degree to which our results depended on the network size, and on the ratio of inhibitory to excitatory neurons. With readout weights held constant, neural firing rates in our model are inversely proportional to the network size (such that the population firing rate stays constant; Figure 10a). The optimal coding performance was practically unaltered by increasing or decreasing the population size by a factor of two, although there was a small trend for the optimal noise magnitude to increase with population size (Figure 10b). The oscillatory dynamics were unaltered by varying the network size (Figure 10c). Varying the number of inhibitory neurons (while keeping the number of excitatory neurons constant) had a similar effect on inhibitory firing rates (Figure 10d), with little effect on coding performance or oscillatory dynamics (Figure 10e–f).
We next investigated the effect of varying the time constants of the network. We first rescaled all the time constants of the synaptic waveform. For plotting purposes, the effective 'synaptic delay' was defined as the time taken for the cumulative input due to a single synaptic event to reach half its maximum value. Increasing the synaptic delay meant that more neurons fired on each oscillation cycle before receiving recurrent inhibition, resulting in decreased coding performance (Figure 11b) and larger, lower-frequency oscillations (Figure 11c).
Because there are only two time constants in the model network, when feedforward and recurrent weights are rescaled appropriately, decreasing the readout time, $\tau $, is equivalent to increasing the synaptic delay. However, when recurrent weights are held constant, decreasing $\tau $ increases firing rates (which are inversely proportional to $\tau $), in contrast to varying the synaptic delay, which has no effect on firing rates (compare Figure 11a and e). Decreasing $\tau $ also increases the oscillation magnitude, with a resulting decrease in coding quality (Figure 11f–g). However, unlike the synaptic delay, $\tau $ has a relatively weak effect on the oscillation frequency (Figure 11h). Intuitively, this is because varying $\tau $ causes two changes in the network dynamics that have opposite effects on the oscillation frequency. On the one hand, decreasing $\tau $ results in a faster integration time, speeding up the network dynamics (and thus increasing the oscillation frequency). On the other hand, decreasing $\tau $ increases the oscillation magnitude, leading to stronger inhibition on each oscillation cycle, and thus decreasing the oscillation frequency.
Discussion
We present a novel hypothesis for the role of neural oscillations, as a consequence of efficient coding in recurrent neural networks with noise and synaptic delays. In order to efficiently code incoming information, neural networks must trade off two competing demands. On the one hand, to ensure that each spike conveys new information, neurons should actively desynchronise their spike trains. On the other hand, to do this optimally, neural membrane potentials should encode shared global information about what has already been coded by the network, which will tend to synchronise neural activity.
In a network of LIF neurons with dynamics and connectivity tuned for efficient coding, we found that this trade-off is best met when neural spike trains are entrained to a global oscillatory rhythm (implying a consistent representation of the prediction error), but where few neurons fire spikes on each cycle (implying high efficiency). This also corresponds to the regime in which inhibition and excitation are most tightly balanced. Our results provide a functional explanation for why cortical networks operate in a regime in which: (i) global oscillations in population firing rates occur alongside individual neurons with low, irregular firing rates (Brunel and Hakim, 1999); and (ii) there is a tight balance between excitation and inhibition (Atallah and Scanziani, 2009; Wehr and Zador, 2003; Cafaro and Rieke, 2010).
For simplicity, we considered a homogeneous network with a one-dimensional feedforward input. However, the results presented here can be generalised to networks with heterogeneous connections, as well as to networks that encode high-dimensional dynamical variables.
Relation to balanced network models
Previously, Brunel and colleagues derived the conditions under which a recurrent network of integrate-and-fire neurons with sparse, irregular firing exhibits fast global oscillations (Brunel and Hakim, 1999; Renart et al., 2010; Brunel and Wang, 2003; Mazzoni et al., 2008). This behaviour is qualitatively similar to the network dynamics observed in our model. However, these previous models differ in several ways from the model presented here. For example, they assume sparse connections (and/or weak connectivity), in which the probability of connections (and/or connection strengths) scales inversely with the number of neurons. In contrast, the connections in our network are non-sparse and finite. Thus, our network achieves a tighter balance between inhibitory and excitatory currents, and smaller fluctuations in membrane potentials (they scale as $1/N$ in the absence of delays, rather than $1/\sqrt{N}$).
Previous models of balanced neural networks typically consider medium to large neural populations, for which finite size effects are limited. In contrast, the recurrent network proposed here is particularly relevant to networks with a relatively small number of neurons (i.e. hundreds, rather than thousands, of neurons per input dimension). This is because in cases where a very large number of neurons encode a lowdimensional input, fluctuations in firing rate due to Poisson noise can be averaged out, and thus are not a limiting factor for coding accuracy. Note that the population size at which our proposed recurrent coding scheme ceases to be advantageous depends on several factors, including the input dimensionality and average neural firing rates.
The most important distinction between our work and previous mean-field models lies in the way the network is constructed. In our work, the network connectivity and dynamics are derived from functional principles, in order to minimise a specific loss function (i.e. the squared difference between the neural reconstruction and the input signal). This 'top-down' modelling approach allows us to directly ask questions about the network dynamics that subserve optimal efficient coding. For example, balanced inhibition and excitation are not imposed in our model, but rather, required for efficient coding. Further, while previous models showed mechanistically how fast oscillations can emerge in a network with slow irregular firing rates (Brunel and Hakim, 1999), our work goes further, showing that these dynamics are in fact required for optimal efficient coding.
Finally, it is important to realise that, while efficient coding in a recurrent network leads to global oscillations, the reverse is not true: just because a network oscillates does not mean that it is performing efficiently. To demonstrate this point, we repeated our simulations in a network with heterogeneous readout weights (Figure 12a–b). Both the coding performance and spiking dynamics of this network were indistinguishable from the homogeneous network described in the main text. In contrast, when we randomised the recurrent connection strengths (Figure 12c–d; see Materials and methods), the coding performance of the network was greatly reduced (Figure 12e), despite the fact that the network dynamics and firing rate power spectrum were virtually unchanged (Figure 12f).
Indeed, the coherence between excitatory and inhibitory current oscillations (i.e. the level of balance) is a much more reliable signature of efficient coding than global population synchrony. While population synchrony can occur in globally balanced networks as well, only networks with a detailed, intracellular balance between excitatory and inhibitory currents achieve high coding performance.
Relation to previous efficient coding models
Previous work on efficient coding has mostly concentrated on using information theory to ask what information 'should' be represented by sensory systems (Simoncelli and Olshausen, 2001). Recently, however, researchers have begun to ask, mechanistically, how neural networks should be set up in order to operate as efficiently as possible (Boerlin et al., 2013; Boerlin and Denève, 2011; Deneve, 2008). This approach can provide certain insights not obtainable from information theory alone.
For example, according to information theory, in the low-noise limit, the most efficient spiking representation is one in which the spike trains of different neurons are statistically independent, and thus, there are no oscillations. In practice, however, neural networks must operate in the face of biological constraints, such as synaptic delays. Considering these constraints changes our conclusions. Specifically, contrary to what one might expect from a purely information-theoretic analysis, we find that oscillations emerge (even in a regime with low input noise) as a consequence of neurons performing as efficiently as possible given synaptic delays, and should not be suppressed at all costs.
Relation to previous predictive spiking models
Previously, Deneve, Machens and colleagues described how a population of spiking neurons can efficiently encode a dynamic input variable (Boerlin et al., 2013; Boerlin and Denève, 2011; Deneve, 2008; Bourdoukan et al., 2013). In that work, we showed that a recurrent population of integrate-and-fire neurons with dynamics and connectivity tuned for efficient coding maintains a balance between excitation and inhibition, exhibits Poisson-like spike statistics (Boerlin et al., 2013), and is highly robust against perturbations such as neuronal cell death. However, we did not previously demonstrate a relation between efficient coding and neural oscillations. The main reason for this is that we had always considered an idealised network, with instantaneous synapses. In this idealised network, only one cell fires at a time. As a result, oscillations are generally extremely weak and fast (with frequency equal to the population firing rate), and thus completely washed out in a large network with added noise and/or heterogeneous readout weights. In contrast, in a network with finite synaptic delays, more than one neuron may fire per oscillation cycle, before the arrival of inhibition. As a result, oscillations are generally much stronger, even with significant added noise and/or heterogeneous readout weights.
Attentional modulation
Directing attention to a particular stimulus feature/location has been shown to increase the gamma-band synchronisation of visual neurons that respond to the attended feature/location (Fries et al., 2001). Figure 9 illustrates how such an effect could come about. Here, we show that attentional modulations that increase the strength of gamma-band oscillations will serve to increase perceptual discrimination of low contrast stimuli. Such attentional modulation could be achieved in a number of different ways, for example, by decreasing noise fluctuations, or by modulating the effective gain of feedforward or recurrent connections (Reynolds and Heeger, 2009; Mitchell et al., 2007).
In general, the way in which attention should modulate the network dynamics will depend on the stimulus statistics and task set-up. Future work considering higher-dimensional sensory inputs (as well as competing 'distractor' stimuli) will allow us to investigate this question further.
The benefits of noise
An interesting aspect of our work is that it suggests that various sources of noise (such as synaptic failure), often thought of as a 'problem' for neural coding, may in fact help neural networks achieve higher coding performance than would otherwise be possible.
With low noise, neural membrane potentials in our model are highly correlated (Figure 4a), and inhibition is not able to prevent multiple neurons from firing together. To avoid this, we needed to add noise to the neural membrane potentials (while simultaneously increasing spiking thresholds). With the right level of noise, fewer neurons fired on each oscillation cycle, resulting in increased coding performance. Too much noise, however, led to inconsistent information being encoded by different neurons, decreasing coding performance (Figure 4c). This phenomenon, where noise fluctuations increase the signal processing performance of a system, is often referred to as 'stochastic resonance' (Benzi et al., 1981; Faisal et al., 2008), and has been observed in multiple sensory systems, including cat visual neurons (Longtin et al., 1991; Levin and Miller, 1996). Previously, however, stochastic resonance has usually been seen as a mechanism for amplifying subthreshold sensory signals that would not normally drive neurons to spike. Here, in contrast, noise desynchronises neurons that receive similar recurrent inputs, increasing the coding efficiency of the population.
Biological limitations
While it is interesting that, starting from a pure top-down coding rule, one can arrive at a network of recurrently coupled effective integrate-and-fire (IF) neurons, we emphasize that this derived network is still far from being 'biologically realistic'. In the current paper, we address a fundamental limitation of the idealized model that emerges from the derivation, in which neural feedback is both noiseless and instantaneous. We show how efficient neural coding can be performed in a network where feedback is noisy and delayed, with resulting oscillatory dynamics that resemble what is observed experimentally.
Nonetheless, significant challenges remain in order to draw a closer connection between the top-down neural model and biology. For example, neurons in our model have voltage-driven synapses, while real synapses are better approximated by conductance-based models. In another study, we showed how to adapt the framework to more realistic Hodgkin-Huxley neurons that included synapses with finite rise time, but no delays (Schwemmer et al., 2015). Further, the slow time constant (100 ms), required in our simulations to achieve high coding performance, contrasts with the relatively fast (∼10–20 ms) membrane time constants observed experimentally. It will be important in the future to investigate how extensions to the model (such as higher-dimensional inputs and/or alternative implementations with sparse recurrent connectivity) can help recover coding performance given fast, biologically realistic, membrane time constants.
Alternative functional roles for oscillations
Neural oscillations have been hypothesised to fulfill a number of different functional roles, including feature binding (Singer, 1999), gating communication between different neural assemblies (Fries, 2005; Womelsdorf et al., 2007; Akam and Kullmann, 2010), encoding feedforward and feedback prediction errors (Arnal et al., 2011; Arnal and Giraud, 2012; Bastos et al., 2012) and facilitating ‘phase codes’ in which information is communicated via the timing of spikes relative to the ongoing oscillation cycle (Buzsáki and Chrobak, 1995).
Many of these theories propose new ways in which oscillations encode incoming sensory information. In contrast, in our work, network oscillations do not directly code for anything, but rather, are predicted as a consequence of efficient rate coding, an idea whose origins go back more than 50 years (Barlow, 1961).
Materials and methods
Efficient spiking network
We consider a dynamical variable that evolves in time according to:

$$\dot{x}\left(t\right)=-\frac{x\left(t\right)}{\tau}+c\left(t\right)\qquad (1)$$

where $c\left(t\right)$ is a time-varying external input or command variable, and $\tau $ is a fixed time constant. Our goal is to build a network of $N$ neurons that takes $c\left(t\right)$ as input, and reproduces the trajectory of $x\left(t\right)$. Specifically, we want to be able to read an estimate $\widehat{x}\left(t\right)\approx x\left(t\right)$ of the dynamical variable from the network's spike trains $o\left(t\right)=({o}_{1}\left(t\right),{o}_{2}\left(t\right),\mathrm{\dots},{o}_{N}\left(t\right))$. These output spike trains are given by ${o}_{i}\left(t\right)={\sum}_{k}\delta \left(t-{t}_{i}^{k}\right)$, where ${t}_{i}^{k}$ is the time of the ${k}^{th}$ spike of neuron $i$.
We first assume that the estimate, $\widehat{x}\left(t\right)$, can be read out by a weighted leaky integration of the spike trains:

$$\dot{\widehat{x}}\left(t\right)=-\frac{\widehat{x}\left(t\right)}{\tau}+\sum_{i=1}^{N}{w}_{i}{o}_{i}\left(t\right)\qquad (2)$$

where ${w}_{i}$ is a constant readout weight associated with the ${i}^{th}$ neuron. For simplicity, we set the readout time constant, $\tau $, equal to the timescale of the input, $x$.
We next assume that the network minimises the distance between $x\left(t\right)$ and $\widehat{x}\left(t\right)$ by optimising over spike times ${t}_{i}^{k}$. The network minimises the loss function:

$$E\left(t\right)={\left(x\left(t\right)-\widehat{x}\left(t\right)\right)}^{2}+\alpha \sum_{i}{r}_{i}\left(t\right)+\beta \sum_{i}{r}_{i}^{2}\left(t\right)\qquad (3)$$

The first term in the loss function is the squared distance between $x\left(t\right)$ and $\widehat{x}\left(t\right)$. The second and third terms represent L1 and L2 penalties on the firing rates, respectively; $\alpha $ and $\beta $ are constants that determine the size of these penalties. The time-varying firing rate of the ${i}^{th}$ neuron is defined by the differential equation:

$$\dot{r}_{i}\left(t\right)=-\frac{{r}_{i}\left(t\right)}{\tau}+{o}_{i}\left(t\right)\qquad (4)$$
A neuron fires a spike at time $t$ whenever doing so reduces the instantaneous error $E\left(t\right)$ (i.e. when $E\left(t\mid \text{neuron }i\text{ spikes}\right)<E\left(t\mid \text{neuron }i\text{ doesn't spike}\right)$). This results in a spiking rule:

$${V}_{i}\left(t\right)>{T}_{i}\qquad (5)$$

where:

$${V}_{i}\left(t\right)={w}_{i}\left(x\left(t\right)-\widehat{x}\left(t\right)\right)-\beta {r}_{i}\left(t\right)\qquad (6)$$

$${T}_{i}=\frac{{w}_{i}^{2}+\alpha +\beta}{2}\qquad (7)$$

Since the left-hand side is a time-varying quantity, whereas the right-hand side is a constant, we identify the former with the ${i}^{th}$ neuron's membrane potential, ${V}_{i}\left(t\right)$, and the latter with its firing threshold, ${T}_{i}$.
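To make the origin of this rule explicit, note that a spike of neuron $i$ increments $\widehat{x}$ by ${w}_{i}$ and ${r}_{i}$ by one. A short derivation (consistent with the definitions above) then gives:

$$E\left(t\mid \text{spike}\right)-E\left(t\mid \text{no spike}\right)={\left(x-\widehat{x}-{w}_{i}\right)}^{2}-{\left(x-\widehat{x}\right)}^{2}+\alpha +\beta \left({\left({r}_{i}+1\right)}^{2}-{r}_{i}^{2}\right)=-2{w}_{i}\left(x-\widehat{x}\right)+{w}_{i}^{2}+\alpha +\beta \left(2{r}_{i}+1\right)$$

Requiring this difference to be negative recovers the spiking rule ${V}_{i}\left(t\right)>{T}_{i}$, with ${V}_{i}$ and ${T}_{i}$ as defined in equations 6 and 7.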
To obtain the network dynamics, we take the derivative of each neuron's membrane potential, giving:

$${\dot{V}}_{i}\left(t\right)=-\frac{{V}_{i}\left(t\right)}{\tau}+{w}_{i}c\left(t\right)-{w}_{i}\sum_{k}{w}_{k}{o}_{k}\left(t\right)-\beta {o}_{i}\left(t\right)+\sigma {\nu}_{i}\left(t\right)\qquad (8)$$

where ${\nu}_{i}\left(t\right)$ corresponds to white 'background noise' with unit variance (added for biological realism), and $\sigma $ scales its magnitude. Thus, the resultant dynamics are equivalent to a recurrent network of leaky integrate-and-fire (LIF) neurons, with leak, $-{V}_{i}\left(t\right)/\tau $, feedforward input, ${w}_{i}c\left(t\right)$, recurrent input, $-{w}_{i}{\sum}_{k}{w}_{k}{o}_{k}\left(t\right)$, and self-inhibition (or reset), $-\beta {o}_{i}\left(t\right)$. Note that in the case considered here, where the decoding timescale is equal to the membrane time constant, the 'leak term' emerges directly from the derivation.
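As an illustration, the following sketch simulates these dynamics with the Euler scheme described under 'Algorithm' below (at most one spike per time bin). The parameter values follow the Figure 2 toy example where stated; other choices (e.g. the simulation length) are arbitrary illustrations, not the published configuration.

```matlab
% Minimal sketch of the idealized network (equation 8, instantaneous synapses).
N     = 3;                       % number of neurons (Figure 2 toy example)
dt    = 5e-4;                    % Euler time step (0.5 ms)
nT    = 4000;                    % 2 s of simulation
tau   = 0.1;                     % readout time constant (s)
w     = ones(N,1);               % readout weights
alpha = 0;  beta = 0.04;         % L1 and L2 spike costs
sigma = 0.02;                    % membrane noise magnitude
Thr   = (w.^2 + alpha + beta)/2; % spike thresholds (equation 7)

c    = (4/tau)*ones(1,nT);       % constant command, so that x -> 4
V    = zeros(N,1);               % membrane potentials
xhat = 0;                        % linear readout
o    = false(N,nT);              % spike raster

for t = 1:nT
    % leak, feedforward drive and background noise (equation 8)
    V = V + dt*(-V/tau + w*c(t)) + sigma*sqrt(dt)*randn(N,1);
    % at most one spike per bin: the neuron furthest above threshold fires
    [vmax, i] = max(V - Thr);
    if vmax > 0
        o(i,t) = true;
        V    = V - w*w(i);       % instantaneous recurrent inhibition
        V(i) = V(i) - beta;      % self-reset
        xhat = xhat + w(i);      % each spike increments the readout by w_i
    end
    xhat = xhat - dt*xhat/tau;   % leaky decay of the readout (equation 2)
end
```

Run as-is, the readout should hover around the target value of 4, with the three neurons taking turns to fire, as in Figure 2c.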
Balanced network of inhibitory and excitatory neurons
To construct a network that respects Dale's law, we introduce a population of inhibitory neurons, which tracks the estimate encoded by the excitatory neurons and provides recurrent feedback. For simplicity, we consider a network in which all readout weights are positive. In our framework, this results in a particularly simple network architecture, in which a single population of excitatory neurons is recurrently connected to a population of inhibitory neurons (Figure 3a). For further discussion of different network architectures, see Boerlin et al. (2013).
We first introduce a population of inhibitory neurons that receive input from the excitatory cells. The objective of the inhibitory population is to minimise the squared distance between the excitatory and inhibitory neural reconstructions (${\widehat{x}}_{E}$ and ${\widehat{x}}_{I}$, respectively), by optimising over spike times ${t}_{i}^{k}$. Thus, an inhibitory neuron spikes whenever doing so reduces the loss function:

$${E}_{I}\left(t\right)={\left({\widehat{x}}_{E}\left(t\right)-{\widehat{x}}_{I}\left(t\right)\right)}^{2}+\alpha \sum_{i}{r}_{i}^{I}\left(t\right)+\beta \sum_{i}{\left({r}_{i}^{I}\left(t\right)\right)}^{2}\qquad (9)$$
Following the same prescription as before, we obtain the following dynamics for the inhibitory neurons:

$${\dot{V}}_{i}^{I}\left(t\right)=-\frac{{V}_{i}^{I}\left(t\right)}{\tau}+{w}_{i}^{I}\sum_{k}{w}_{k}^{E}{o}_{k}^{E}\left(t\right)-{w}_{i}^{I}\sum_{k}{w}_{k}^{I}{o}_{k}^{I}\left(t\right)-\beta {o}_{i}^{I}\left(t\right)+\sigma {\nu}_{i}\left(t\right)\qquad (10)$$

Thus, inhibitory neurons receive input from excitatory neurons (second term), and recurrent inhibition from other inhibitory neurons (third term).
Now, as the inhibitory reconstruction tracks the excitatory reconstruction, we can make the simplifying assumption of replacing ${\widehat{x}}_{E}$ with ${\widehat{x}}_{I}$ in our earlier expression for the excitatory membrane potential (equation 6), giving:

$${V}_{i}^{E}\left(t\right)={w}_{i}^{E}\left(x\left(t\right)-{\widehat{x}}_{I}\left(t\right)\right)-\beta {r}_{i}^{E}\left(t\right)\qquad (11)$$
Taking the derivative of this expression as before, we obtain the following dynamics for the excitatory neurons:

$${\dot{V}}_{i}^{E}\left(t\right)=-\frac{{V}_{i}^{E}\left(t\right)}{\tau}+{w}_{i}^{E}c\left(t\right)-{w}_{i}^{E}\sum_{k}{w}_{k}^{I}{o}_{k}^{I}\left(t\right)-\beta {o}_{i}^{E}\left(t\right)+\sigma {\nu}_{i}\left(t\right)\qquad (12)$$

Thus, excitatory neurons receive excitatory feedforward input (second term) and recurrent inhibitory input (third term).
Synaptic dynamics
To account for transmission delays and continuous synaptic dynamics, we assume that each spike generates a continuous current input to other neurons, with dynamics described by the synaptic waveform, $h\left(t-{t}_{i}^{l}\right)$. The shape of this waveform is given by:

$$h\left(t\right)=\begin{cases}\frac{1}{{\tau}_{d}-{\tau}_{r}}\left({e}^{-\left(t-{\tau}_{tr}\right)/{\tau}_{d}}-{e}^{-\left(t-{\tau}_{tr}\right)/{\tau}_{r}}\right)&t\ge {\tau}_{tr}\\ 0&t<{\tau}_{tr}\end{cases}\qquad (13)$$

where ${\tau}_{r}$ is the synaptic rise time, ${\tau}_{d}$ is the decay time and ${\tau}_{tr}$ is the transmission delay. The normalisation constant, ${\tau}_{d}-{\tau}_{r}$, ensures that ${\int}_{{\tau}_{tr}}^{\mathrm{\infty}}h\left(t\right)dt=1$. This profile is plotted in Figure 3b, with ${\tau}_{r}=1\,\mathrm{ms}$, ${\tau}_{d}=3\,\mathrm{ms}$ and ${\tau}_{tr}=1\,\mathrm{ms}$.
To incorporate continuous synaptic currents into the model, we alter equations 10 and 12 by replacing each of the recurrent spiking inputs, ${o}_{k}\left(t\right)$, with the convolution of the spiking input and the current waveform, ${o}_{k}\left(t\right)\to h\left(t\right)\star {o}_{k}\left(t\right)$.
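A sketch of this waveform and of the convolution step is given below; the 50 ms kernel support is an arbitrary implementation choice (it is ample for the time constants of Figure 3b).

```matlab
% Synaptic waveform h(t) (equation 13) and its application to a spike train.
dt     = 5e-4;                          % 0.5 ms time step
tau_r  = 1e-3;  tau_d = 3e-3;  tau_tr = 1e-3;   % rise, decay, delay (s)
t      = (0:dt:0.05)';                  % kernel support (50 ms)
h      = zeros(size(t));
s      = t(t >= tau_tr) - tau_tr;
h(t >= tau_tr) = (exp(-s/tau_d) - exp(-s/tau_r)) / (tau_d - tau_r);
fprintf('integral of h: %.3f\n', sum(h)*dt);   % should be close to 1

% Replace a spike train o_k (delta spikes carry weight 1/dt per bin)
% by the filtered input h * o_k.
o_k  = zeros(2000,1);  o_k([200 900 1400]) = 1/dt;  % three example spikes
filt = conv(o_k, h) * dt;               % causal convolution
filt = filt(1:numel(o_k));              % truncate to the original length
```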
Simulation parameters
For the simulations shown in Figure 2, we considered a toy network of 3 neurons with equal readout weights, ${w}_{i}=1$. The L1 spike cost was $\alpha =0$ and the L2 spike cost was set to $\beta =0.04$. The readout time constant was set to $\tau =0.1s$. The magnitude of injected membrane potential noise was set to $\sigma =0.02$. In each case, network dynamics were computed from equation 8.
For the simulations shown in Figures 3–12, we considered a larger network of 50 excitatory neurons, and 50 inhibitory neurons. All neurons had equal readout weights, equal to ${\gamma}_{0}=1.2\,{\mathrm{mV}}^{1/2}$. The L1 spike cost was set to 0. The L2 spike cost was set to $\beta =8.5$ mV. If we assume a spike threshold of −55 mV, this corresponds to a resting potential of −60 mV (${V}_{rest}={V}_{thresh}-\frac{1}{2}\left({L}_{1}+{L}_{2}+{\gamma}_{0}^{2}\right)$), a reset of −65 mV (${V}_{reset}={V}_{thresh}-{L}_{2}-{\gamma}_{0}^{2}$), and postsynaptic potentials of 1.45 mV (${V}_{PSP}={\gamma}_{0}^{2}$; see Boerlin et al., 2013 for details of the scaling to biological parameters).
For Figures 3, 6–8, 10 and 12, the magnitude of injected membrane potential noise was set to its 'optimal' value, $\sigma =17$ mV ${\mathrm{s}}^{-1/2}$. For Figure 5, the membrane potential noise was set lower, at $\sigma =8$ mV ${\mathrm{s}}^{-1/2}$.
Network dynamics were computed from equations 10 and 12 (with the exception that recurrent inputs were convolved with the synaptic current waveform, $h\left(t\right)$, described in equation 13).
Algorithm
Simulations were run using the Euler method, with discrete time steps of 0.5 ms. For the 'ideal' network (i.e. with instantaneous synapses), only one neuron (the one with the highest membrane potential) was allowed to fire a spike within each bin. Changing the temporal discretisation did not qualitatively alter our results.
Stimulus details
In Figure 2b, the encoded variable, $x$, was obtained by low-pass filtering white noise with a first-order Butterworth filter, with a cutoff frequency of 4 Hz (the low-pass Butterworth filter, whose squared magnitude response is proportional to $1/\left(1+{\left(\omega /{\omega}_{c}\right)}^{2n}\right)$, where $n$ is the filter order and ${\omega}_{c}$ is the cutoff frequency, was implemented using Matlab's 'butter' command).
After filtering, $x$ is rescaled to have a mean of 3 and a standard deviation of 1. In Figure 2c, the encoded variable is constant, $x=4$. In Figure 3, the encoded variable is obtained by low-pass filtering a white-noise input in the same way as for Figure 2b, this time with a cutoff frequency of 2 Hz. After filtering, $x$ is rescaled to have a mean of 50 and a standard deviation of 10. In Figures 4–5, 7–8 and 10–12, the encoded variable was held constant at $x=50$. In Figures 6 and 9, the 'low', 'medium' and 'high' amplitude inputs are $x=$ 35, 50, and 65, respectively.
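As a concrete sketch (the sampling rate below is our assumption, matching the 0.5 ms simulation step; the paper specifies only the filter and the rescaling):

```matlab
% Low-pass filtered white-noise stimulus, as used in Figure 2b.
fs     = 2000;                          % Hz; 1/dt for a 0.5 ms step (assumed)
wn     = randn(10*fs, 1);               % 10 s of white noise
[b, a] = butter(1, 4/(fs/2));           % first-order low-pass, 4 Hz cutoff
x      = filter(b, a, wn);
x      = (x - mean(x)) / std(x);        % zero mean, unit standard deviation
x      = 3 + x;                         % rescale: mean 3, s.d. 1
```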
Poisson model
In Figure 3d–e, we compare the efficient coding model to a rate model, in which neural firing rates vary as a function of the feedforward input, $c\left(t\right)$. Firing rates for the Poisson model were equal to the mean firing rate in the recurrent model (when excitatory readout weights are all equal, firing rates are proportional to the feedforward input, $c(t)$). Spiking responses were obtained by drawing from a Poisson distribution.
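A minimal sketch of this control model follows; the Bernoulli draw per bin approximates Poisson spiking when the per-bin probability is small, and the rate constant here is a placeholder (in the paper it was matched to the recurrent network).

```matlab
% Independent Poisson units with rates proportional to the input c(t).
dt    = 5e-4;  N = 50;
c     = 500*ones(1, 4000);              % feedforward input (placeholder)
r0    = 10;                             % target mean rate in Hz (placeholder)
rates = r0 * c / mean(c);               % firing rate tracks the input
spk   = rand(N, numel(c)) < repmat(rates*dt, N, 1);  % Bernoulli approximation
```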
Varying the noise
For the simulations shown in Figure 4, we varied the injected membrane potential noise. In general, varying the noise amplitude changes neural firing rates, leading to systematic estimation biases. To compensate for this, we adjusted the L2 spike cost for the inhibitory and excitatory neurons, so as to maintain zero estimation bias (or equivalently, to keep firing rates constant). For each noise level, we ran an initial simulation, modifying excitatory and inhibitory costs, ${\beta}_{E}$ and ${\beta}_{I}$, in real time (via a stochastic gradient descent algorithm), until both the excitatory and inhibitory estimation biases converged to zero.
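The paper does not give the update rule explicitly; the following is one plausible sketch, in which the spike costs are nudged after each simulation run until the mean reconstruction errors vanish. The learning rate, iteration count and the `run_network` helper (a wrapper around the simulation, returning the reconstructions for the current costs, with $x$ the encoded target) are all hypothetical.

```matlab
% Online bias-nulling: adjust the spike costs until estimation biases vanish.
eta   = 0.01;                    % learning rate (assumed)
betaE = 8.5;  betaI = 8.5;       % initial L2 costs (mV)
for it = 1:200                   % number of iterations (assumed)
    [xhatE, xhatI] = run_network(betaE, betaI);  % hypothetical simulator
    betaE = betaE + eta*mean(xhatE - x);         % positive bias -> raise cost
    betaI = betaI + eta*mean(xhatI - xhatE);     % keep I tracking E
end
```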
To generate Figure 5a–b, we varied the synaptic reliability, by altering the probability that a presynaptic spike produces a change in the postsynaptic membrane potential. To keep the mean synaptic input to each neuron constant, we divided the recurrent connection strengths by the synaptic transmission probability.
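A sketch of this manipulation for a single time bin (the rank-one weight matrix is that of the homogeneous network; failures are drawn independently per synapse):

```matlab
% Unreliable synapses with compensatory weight rescaling.
N  = 50;  p = 0.5;                      % transmission probability
w  = 1.2*ones(N,1);                     % readout weights (mV^(1/2))
W  = (w*w') / p;                        % recurrent weights, rescaled by 1/p
spikes = rand(N,1) < 0.01;              % example presynaptic spikes, one bin
mask   = rand(N,N) < p;                 % independent failures per synapse
recurrent_input = (W .* mask) * double(spikes);  % mean equals W0 * spikes
```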
To generate Figure 5c–d, we chose a fraction of inhibitory and excitatory neurons. The selected neurons fired random Poisson spike trains, with their mean firing rates unchanged.
Population firing rates
To plot the population firing rate (Figure 6a and b), we low-pass filtered neural spike trains using a first-order Butterworth filter (with cutoff frequencies of 5.5 Hz and 66 Hz for panels b and d, respectively), before averaging over neurons.
Spectral analysis
The spectrogram of the population firing rate, shown in Figure 6a (lower panel), was computed using a short-time Fourier transform, with a 60 ms Hamming time window (Matlab's 'spectrogram' function). Finally, the instantaneous power spectrum was low-pass filtered with a first-order Butterworth filter, with a cutoff frequency of 3 Hz. The power spectra of the population firing rate and neural membrane potentials (Figures 4h, 5b and d, 6c) were computed using the multitaper method (Matlab's 'pmtm' function), with the bandwidth chosen empirically to achieve a spectrum that varied smoothly with frequency.
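A sketch of these calls (Signal Processing Toolbox); the window overlap and the multitaper time-bandwidth product below are our assumptions, since the paper states only that the bandwidth was chosen empirically:

```matlab
% Spectrogram and multitaper spectrum of the population firing rate.
fs      = 2000;                         % sampling rate, 1/dt (assumed)
popRate = randn(10*fs, 1);              % placeholder for the simulated rate
win     = round(0.06*fs);               % 60 ms Hamming window
[S, f, tt] = spectrogram(popRate, hamming(win), round(0.9*win), [], fs);
nw = 4;                                 % time-bandwidth product (assumed)
[Pxx, fmt] = pmtm(popRate - mean(popRate), nw, [], fs);
```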
Excitatory and inhibitory currents
To plot the currents shown in Figure 8b, we divided the total excitatory and inhibitory input to each cell by a presumed membrane resistance of ${R}_{m}=5\,\text{M}\mathrm{\Omega}$ (changing this value rescales the y-axis). We then defined peaks in the excitatory and inhibitory currents as local maxima separated by an 80% drop in the current magnitude. Further, we only included peaks in inhibitory and excitatory currents that occurred within 15 ms of each other.
Discrimination threshold
For the simulation shown in Figure 9, we considered the performance of the network in discriminating between two 0.1 s long stimulus segments, spaced equally around the 'low' amplitude input ($x=48$). From signal detection theory, a subject's probability of correctly selecting between two stimuli is given by: ${P}_{correct}({x}_{1},{x}_{2})=\frac{1}{2}\mathrm{erfc}\left(-\frac{1}{2}D({x}_{1},{x}_{2})\right)$, where $\mathrm{erfc}(x)$ is the complementary error function, and $D({x}_{1},{x}_{2})$ is the normalized distance (or d-prime) between the distributions of estimates: $D({x}_{1},{x}_{2})=\frac{\mu ({x}_{2})-\mu ({x}_{1})}{\sqrt{\frac{1}{2}\left({\sigma}^{2}({x}_{1})+{\sigma}^{2}({x}_{2})\right)}}$. These quantities can be directly computed from the network output.
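For concreteness, these quantities can be computed from repeated decoding trials as follows (the estimate samples below are placeholders for the network's outputs):

```matlab
% d-prime and percent correct from two sets of network estimates.
est1 = 47 + randn(200,1);               % placeholder estimates, stimulus 1
est2 = 49 + randn(200,1);               % placeholder estimates, stimulus 2
D    = (mean(est2) - mean(est1)) / sqrt(0.5*(var(est1) + var(est2)));
Pc   = 0.5 * erfc(-D/2);                % probability of a correct choice
```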
‘Random’ versus ‘precisely’ balanced network
The network used to generate Figure 12a–b was the same as before, with the exception that the readout weights (and thus, the connection strengths) varied across neurons. Readout weights were sampled from a gamma distribution, with mean $1.2\,{\mathrm{mV}}^{1/2}$ and standard deviation $0.5\,{\mathrm{mV}}^{1/2}$. For the optimal 'precisely balanced' network (Figure 12a–b), the reciprocal connection strengths between each inhibitory and excitatory neuron are equal in magnitude. For the 'randomly balanced' network (Figure 12c–d), we disrupted this symmetry by randomly permuting the inputs to each excitatory/inhibitory neuron (while ensuring that the summed input to each neuron remained unchanged). Recurrent inhibitory connections were left unchanged.
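A sketch of the permutation step (gamrnd requires the Statistics Toolbox; the shape and scale below reproduce the stated mean and standard deviation):

```matlab
% 'Precisely' vs 'randomly' balanced connectivity.
N     = 50;
k     = 5.76;  theta = 0.2083;          % gamma shape/scale: mean 1.2, s.d. 0.5
gam   = gamrnd(k, theta, N, 1);         % heterogeneous readout weights
W     = gam * gam';                     % precisely balanced weights
Wrand = W;
for i = 1:N
    Wrand(i,:) = W(i, randperm(N));     % shuffle the inputs to neuron i
end
% The total input to each neuron is preserved by the permutation:
assert(max(abs(sum(Wrand,2) - sum(W,2))) < 1e-9);
```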
References
1
2. Cortical oscillations and sensory predictions. Trends in Cognitive Sciences 16:390–398. https://doi.org/10.1016/j.tics.2012.05.003
3
4
5
6
7. The mechanism of stochastic resonance. Journal of Physics A: Mathematical and General 14:L453–L457. https://doi.org/10.1088/0305-4470/14/11/006
8. Spike-based population coding and working memory. PLoS Computational Biology 7:e1001080. https://doi.org/10.1371/journal.pcbi.1001080
9. Predictive coding of dynamical variables in balanced spiking networks. PLoS Computational Biology 9:e1003258. https://doi.org/10.1371/journal.pcbi.1003258
10. Learning optimal spike-based representations. Advances in Neural Information Processing Systems 14:2979–3010.
11
12
13. Temporal structure in spatially organized neuronal ensembles: a role for interneuronal networks. Current Opinion in Neurobiology 5:504–510. https://doi.org/10.1016/0959-4388(95)80012-3
14. Mechanisms of gamma oscillations. Annual Review of Neuroscience 35:203–225. https://doi.org/10.1146/annurev-neuro-062111-150444
15
16. Bayesian spiking neurons I: inference. Neural Computation 20:91–117. https://doi.org/10.1162/neco.2008.20.1.91
17. Dynamic predictions: oscillations and synchrony in top-down processing. Nature Reviews Neuroscience 2:704–716. https://doi.org/10.1038/35094565
18
19
20
21. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. Trends in Cognitive Sciences 9:474–480. https://doi.org/10.1016/j.tics.2005.08.011
22. Neuronal gamma-band synchronization as a fundamental process in cortical computation. Annual Review of Neuroscience 32:209–224. https://doi.org/10.1146/annurev.neuro.051508.135603
23. History-dependent multiple-timescale dynamics in a single-neuron model. Journal of Neuroscience 25:6479–6489. https://doi.org/10.1523/JNEUROSCI.0763-05.2005
24. LFP power spectra in V1 cortex: the graded effect of stimulus contrast. Journal of Neurophysiology 94:479–490. https://doi.org/10.1152/jn.00919.2004
25
26. Dendritic computation. Annual Review of Neuroscience 28:503–532. https://doi.org/10.1146/annurev.neuro.28.061604.135703
27
28
29
30
31
32
33
34. Constructing precisely computing networks with biophysical spiking neurons. Journal of Neuroscience 35:10112–10134. https://doi.org/10.1523/JNEUROSCI.4951-14.2015
35. The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. Journal of Neuroscience 18:3870–3896.
36. Natural image statistics and neural representation. Annual Review of Neuroscience 24:1193–1216. https://doi.org/10.1146/annurev.neuro.24.1.1193
37
38. The hipster effect: when anticonformists all look the same. arXiv, http://arxiv.org/abs/1410.8001.
39
40
41
Decision letter

Peter Latham, Reviewing Editor; University College London, United Kingdom
In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.
Thank you for submitting your work entitled "Neural oscillations as a signature of efficient coding in the presence of synaptic delays" for consideration by eLife. Your article has been favorably evaluated by Timothy Behrens as the Senior editor and three reviewers, one of whom is a member of our Board of Reviewing Editors.
The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.
Summary:
The authors have previously shown that an optimal error correcting code requires a balanced excitatory (E) and inhibitory (I) network, where a spike occurs only to reduce the error between a network estimate and the true stimulus. In that previous work, feedback was instantaneous. Here the authors show that if feedback is delayed (as it must be in realistic networks), oscillations develop. However, excessive synchronous network oscillations degrade coding. The central result of the manuscript is that noise mitigates some of the deleterious aspects of network-wide oscillatory synchrony on neural coding.
Essential revisions:
1) Some features of the result will naturally depend on the readout time constant τ. The only place I see its value mentioned is in connection with Figure 2 (subsection "Simulation parameters", first paragraph). There it says that τ was 100 ms. Was this value also used in the later simulations? It seems rather a large value to want to associate with a membrane time constant, so I think it would be useful if the authors said something about how this filtering might be implemented biologically. And what would the results look like if τ had a value more like a typical membrane time constant?
2) How sensitive are the results to the parameters of the network? In particular, how do they scale with network size, and with the ratio of excitatory to inhibitory neurons? In particular, when the network is scaled up and the ratio of excitatory to inhibitory neurons is set to a more realistic value, like 4, what happens to the following:
A) Does the optimal noise (Figure 4c) stay at 15 mV? And does the ratio of the optimal RMS error to the Poisson RMS error stay the same?
B) Does the optimal failure probability stay at about 0.5? And does the ratio of the optimal RMS error to the Poisson RMS error (which should be shown in that figure) stay the same?
C) Do the oscillation frequencies stay in the 30–50 Hz range?
D) Do the oscillation frequencies depend most strongly on the delay, or on other network parameters?
3) In the text referring to Figure 3(e), it would be good to explain why the rate in the performancematched case is so high.
4) The fact that failures improves network performance may be one of the most interesting results in the paper, as it implies that failures are a feature, not a bug. We suggest that the paper would have more impact on the community if you emphasized this point, although we will leave that up to you.
Author response
Essential revisions:
1) Some features of the result will naturally depend on the readout time constant τ. The only place I see its value mentioned is in connection with Figure 2 (subsection "Simulation parameters", first paragraph). There it says that τ was 100 ms. Was this value also used in the later simulations? It seems rather a large value to want to associate with a membrane time constant, so I think it would be useful if the authors said something about how this filtering might be implemented biologically. And what would the results look like if τ had a value more like a typical membrane time constant?
There are two time constants in our model: the readout time, τ, and the synaptic delay. Thus (with other parameters rescaled accordingly), reducing the readout time constant is equivalent to increasing the delay. We conducted additional simulations (Figure 11) showing that decreasing τ and increasing the synaptic delay have a similar effect: both reduce coding performance. These simulations are described in more detail later, in our response to the reviewers' second comment.
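To make the rescaling argument explicit (a schematic in our notation, with population spike train $o(t)$, readout weight $w$ and delay $d$):

\[
\tau \frac{d\hat{x}}{dt} = -\hat{x}(t) + \tau w \, o(t - d)
\quad \xrightarrow{\; s = t/\tau \;} \quad
\frac{d\hat{x}}{ds} = -\hat{x}(s) + \tilde{w}\, \tilde{o}(s - d/\tau),
\]

where the rescaled weight $\tilde{w}$ absorbs the change of time units. The dynamics therefore depend on $\tau$ and $d$ only through the dimensionless ratio $d/\tau$: halving $\tau$ (with weights rescaled) is equivalent to doubling $d$.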
Retaining coding performance with a smaller readout time constant requires changes to the ideal network that prevent multiple neurons from firing synchronously and damp the resulting excessive oscillations. In the main text we illustrated how encoding performance can be recovered by adding noise (e.g. synaptic failure, additive membrane potential noise, or Poisson spiking). We note that coding performance can also be improved by altering other aspects of the network, such as the recurrent connectivity. For example, while beyond the scope of the current work, we found that a locally connected network consisting of several overlapping inhibitory subpopulations, each of which encodes its own 'version' of the input, can exhibit weaker oscillations and better performance than the 'all-to-all' network presented in our work.
The reviewers raise the fair point that the decoding time constant of 100 ms used in the paper is considerably longer than typical membrane time constants. Indeed, the value of 100 ms was chosen primarily to ensure consistency with our previous theoretical work (Boerlin et al., 2013, PLoS Comput Biol), rather than for explicit biological realism. It is worth noting, however, that in contrast to the simple effective integrate-and-fire (IF) model that results from deriving our framework, real neurons exhibit dynamics on multiple timescales, including slow adaptation timescales (Fairhall et al., 2001, Nature), slow inactivation dynamics of intrinsic conductances (Gilboa et al., 2015, J Neurosci), slow integration due to voltage-dependent potassium currents (Storm et al., 1988, Nature), and slower dendritic integration timescales due to NMDA synaptic currents (London et al., 2005, Ann. Rev. Neurosci). It is possible that these slower dynamics could account for the slow readout time required by the network to achieve a high degree of coding precision.
Finally, while it is interesting to show how, starting from a pure 'top-down' coding rule, one can arrive at a network of recurrently coupled effective integrate-and-fire (IF) neurons, we emphasize that this derived network is still far from being 'biologically realistic'. In the current paper, we addressed a major inconsistency between our previous work, where synapses were noiseless and instantaneous, and biology, where synapses are noisy and delayed. Significantly, we believe that the principles that emerge extend beyond the model network, to many other recurrent systems where interacting subunits perform a global optimization. Nonetheless, we concede that significant challenges remain in order to draw a closer connection between our top-down neural model and biology: not least the introduction of conductance-based synapses and/or understanding how multiple cellular mechanisms may lead to slow decoding timescales in spite of fast membrane time constants.
We have added a section to the Discussion (“Biological limitations”), with the above arguments.
2) How sensitive are the results to the parameters of the network? In particular, how do they scale with network size, and with the ratio of excitatory to inhibitory neurons? In particular, when the network is scaled up and the ratio of excitatory to inhibitory neurons is set to a more realistic value, like 4, what happens to the following:
A) Does the optimal noise (Figure 4c) stay at 1–5 mV? And does the ratio of the optimal RMS error to the Poisson RMS error stay the same?
B) Does the optimal failure probability stay at about 0.5? And does the ratio of the optimal RMS error to the Poisson RMS error (which should be shown in that figure) stay the same?
C) Do the oscillation frequencies stay in the 30–50 Hz range?
D) Do the oscillation frequencies depend most strongly on the delay, or on other network parameters?
As suggested by the reviewers, we investigated the behavior of the model network while varying: (i) the population size, (ii) the inhibitory population size only, (iii) the synaptic delay, and (iv) the decoding timescale. These results are presented in two new figures (Figures 10 and 11) and described in a new Results section ('Sensitivity to network parameters').
With all other parameters held constant, increasing the population size results in a lower firing rate for each neuron (such that the summed firing rate of all neurons is constant; Figure 10a). When only the inhibitory population size was altered, the inhibitory firing rate varied while the excitatory firing rate remained constant (Figure 10d).
The coding performance and oscillatory dynamics, on the other hand, remain relatively unchanged when we vary the population size or the excitatory/inhibitory ratio. For example, neither the 'optimal' noise level nor the oscillation frequency was greatly changed by increasing or decreasing the population size or the excitatory/inhibitory ratio by a factor of two (Figure 10b–c and e–f).
Note that, although in our simulations varying the ratio of excitatory to inhibitory neurons leads to unequal firing rates for the two populations (Figure 10d), this does not have to be the case: one could rescale the inhibitory/excitatory readout weights so that both populations have equal rates. Nonetheless, whatever the manipulation, the total E→I currents should equal the total I→E currents, so that balance in the network is preserved.
Finally, we emphasize that our efficient coding model is particularly applicable to small- to medium-sized neural ensembles (with correspondingly low population firing rates), where the noise fluctuations resulting from Poisson spiking would otherwise lead to decreased coding performance (see Figure 3d). Thus, we did not find it relevant to scale our model to represent very large networks (i.e. with thousands of neurons).
In contrast to variations in the population size, varying the synaptic delay had a significant effect on both network dynamics and coding performance. Increasing the synaptic delay resulted in larger, lower-frequency oscillations, with a concomitant decrease in coding accuracy (Figure 11a–d).
There are only two time constants in the network: the delay and the decoding time constant, τ. Therefore, with other parameters (e.g. feedforward/recurrent weights) scaled appropriately, decreasing τ is equivalent to increasing the synaptic delay. When all other parameters are instead held constant, decreasing τ also increases firing rates (which are inversely proportional to τ), unlike varying the synaptic delay, which leaves rates unchanged (compare Figure 11a and e).
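The inverse dependence of the firing rate on τ follows directly from the steady state of the leaky readout for a constant signal (schematic, in our notation):

\[
0 = -\frac{\hat{x}}{\tau} + w F
\quad \Longrightarrow \quad
F = \frac{\hat{x}}{w \tau},
\]

so, with the readout weight $w$ held fixed, halving $\tau$ doubles the population rate $F$ needed to represent the same signal.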
In common with increasing the delay, decreasing τ increases the magnitude of network oscillations while decreasing coding quality (Figure 11f–g). However, unlike the delay, varying τ has a relatively weak effect on the oscillation frequency (Figure 11g–h). Intuitively, this is because varying τ causes two changes in the network dynamics that push in opposite directions. On the one hand, decreasing τ speeds up integration, and thus the network dynamics (tending to increase the oscillation frequency). On the other hand, decreasing τ increases the oscillation magnitude, leading to stronger inhibition on each cycle (tending to slow the oscillations down).
3) In the text referring to Figure 3(e), it would be good to explain why the rate in the performance-matched case is so high.
We added a new figure panel (Figure 3d) to show the relation between firing rate and coding performance in each of the model networks. Instead of showing bar plots of performance at a given fixed rate (or conversely, the rate required to achieve a given level of performance), we plot the full error/rate curves for the 'ideal recurrent' and Poisson models. The non-ideal recurrent network is shown as a black cross on this plot.
In the Poisson network, random fluctuations in firing rate cause the reconstruction to deviate from its true value, decreasing coding performance. These noise fluctuations become less important as the population firing rate increases, with a corresponding decrease in the reconstruction error (which scales as ~1/√F, where F is the population firing rate).
In contrast, in the ideal efficient coding network, noise fluctuations are automatically 'corrected for' by the recurrent connections. Thus, the only source of inaccuracy is the discreteness of the code (each spike adds a fixed quantity to the readout), leading to a much smaller reconstruction error (which scales as ~1/F).
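These two scaling regimes can be checked with a toy simulation (a sketch under assumed parameters, not the paper's code):

```python
import numpy as np

# Compare the RMS reconstruction error of a constant signal decoded by a
# leaky readout, for Poisson versus perfectly regular spike trains. All
# parameter values here are illustrative assumptions.
rng = np.random.default_rng(0)
dt, tau, x, T = 1e-4, 0.1, 1.0, 20.0   # step (s), readout tau (s), signal, duration (s)
n_steps = round(T / dt)
burn = round(5 * tau / dt)             # discard the initial transient

def rms_error(spike_train, F):
    w = 1.0 / (F * tau)                # spike weight chosen so that <x_hat> = x
    x_hat, err2, n = 0.0, 0.0, 0
    for i in range(n_steps):
        x_hat += dt * (-x_hat / tau) + w * spike_train[i]
        if i >= burn:
            err2 += (x - x_hat) ** 2
            n += 1
    return np.sqrt(err2 / n)

for F in (50, 100, 200, 400):          # population firing rate (Hz)
    poisson = rng.random(n_steps) < F * dt                   # independent Poisson spikes
    regular = np.arange(n_steps) % round(1 / (F * dt)) == 0  # evenly spaced spikes
    print(f"F = {F:3d} Hz: Poisson RMS = {rms_error(poisson, F):.4f}, "
          f"regular RMS = {rms_error(regular, F):.4f}")

# Doubling F should shrink the Poisson error by ~1/sqrt(2) (the ~1/sqrt(F)
# regime) and the regular-spiking error by ~1/2 (the ~1/F regime).
```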
With the addition of synaptic delays, it is no longer possible to achieve the performance of the ideal network. Nonetheless, by desynchronizing the network with an appropriate level of noise, this problem can be minimized, leading to a reconstruction error significantly smaller than for the Poisson network.
We have added text to the Results (“Efficient coding with synaptic delays”) to clarify these concepts.
4) The fact that failures improve network performance may be one of the most interesting results in the paper, as it implies that failures are a feature, not a bug. We suggest that the paper would have more impact on the community if you emphasized this point, although we will leave that up to you.
We thank the reviewers for this suggestion. We also think that this is an interesting point to make. For simplicity, we chose to continue using additive noise on the membrane potential for the majority of the simulations (we could redo all of them with synaptic failures without qualitatively changing the results). However, we have added text to the Abstract, Introduction (see final paragraph) and Discussion ('The benefits of noise') to emphasize how our work suggests that synaptic failures (and noise in general) may be a feature, not a bug.
https://doi.org/10.7554/eLife.13824.015
Article and author information
Author details
Funding
Agence Nationale de la Recherche (ANR-10-LABX-0087)
 Matthew Chalk
 Boris Gutkin
 Sophie Denève
European Research Council (Predispike)
 Matthew Chalk
 Sophie Denève
James S. McDonnell Foundation
 Sophie Denève
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
Boris Gutkin acknowledges funding by the Russian Academic Excellence Project '5-100'.
Reviewing Editor
 Peter Latham, Reviewing Editor, University College London, United Kingdom
Publication history
 Received: December 17, 2015
 Accepted: July 5, 2016
 Accepted Manuscript published: July 7, 2016 (version 1)
 Version of Record published: July 25, 2016 (version 2)
 Version of Record updated: November 3, 2016 (version 3)
Copyright
© 2016, Chalk et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.