Are single-peaked tuning curves tuned for speed rather than accuracy?

  1. Movitz Lenninger (corresponding author)
  2. Mikael Skoglund
  3. Pawel Andrzej Herman
  4. Arvind Kumar (corresponding author)

  Affiliations:
  1. Division of Information Science and Engineering, KTH Royal Institute of Technology, Sweden
  2. Division of Computational Science and Technology, KTH Royal Institute of Technology, Sweden

Abstract

According to the efficient coding hypothesis, sensory neurons are adapted to provide maximal information about the environment, given some biophysical constraints. In early visual areas, stimulus-induced modulations of neural activity (or tunings) are predominantly single-peaked. However, periodic tuning, as exhibited by grid cells, has been linked to a significant increase in decoding performance. Does this imply that the tuning curves in early visual areas are sub-optimal? We argue that the time scale at which neurons encode information is imperative to understand the advantages of single-peaked and periodic tuning curves, respectively. Here, we show that the possibility of catastrophic (large) errors creates a trade-off between decoding time and decoding ability. We investigate how decoding time and stimulus dimensionality affect the optimal shape of tuning curves for removing catastrophic errors. In particular, we focus on the spatial periods of the tuning curves for a class of circular tuning curves. We show an overall trend for minimal decoding time to increase with increasing Fisher information, implying a trade-off between accuracy and speed. This trade-off is reinforced whenever the stimulus dimensionality is high, or there is ongoing activity. Thus, given constraints on processing speed, we present normative arguments for the existence of the single-peaked tuning organization observed in early visual areas.

Editor's evaluation

This fundamental study provides important insight into coding strategies in sensory areas. The study was well done, and the analysis and simulations were highly convincing. This study should be of particular interest to anybody who cares about efficient coding.

https://doi.org/10.7554/eLife.84531.sa0

Introduction

One of the fundamental problems in systems neuroscience is understanding how sensory information can be represented in the spiking activity of an ensemble of neurons. The problem is exacerbated by the fact that individual neurons are highly noisy and variable in their responses, even to identical stimuli (Arieli et al., 1996). A common feature of early sensory representation is that the neocortical neurons in primary sensory areas change their average responses only to a small range of features of the sensory stimulus. For instance, some neurons in the primary visual cortex respond to moving bars oriented at specific angles (Hubel and Wiesel, 1962). This observation has led to the notion of tuning curves. Together, a collection of tuning curves provides a possible basis for a neural code.

A considerable emphasis has been put on understanding how the structure of noise and correlations affect stimulus representation given a set of tuning curves (Shamir and Sompolinsky, 2004; Averbeck and Lee, 2006; Franke et al., 2016; Zylberberg et al., 2016; Moreno-Bote et al., 2014; Kohn et al., 2016). More recently, the issue of local and catastrophic errors, dating back to the work of Shannon (Shannon, 1949), has been raised in the context of neuroscience (e.g. Xie, 2002; Sreenivasan and Fiete, 2011). Intuitively, local errors are small estimation errors that depend on the trial-by-trial variability of the neural responses and the local shapes of the tuning curves surrounding the true stimulus condition (Figure 1a bottom plot, see s1). On the other hand, catastrophic errors are very large estimation errors that depend on the trial-by-trial variability and the global shape of the tuning curves (Figure 1a bottom plot, see s2). While a significant effort has been put into studying how stimulus tuning and different noise structures affect local errors, less is known about the interactions with catastrophic errors. For example, Fisher information is a common measure of the accuracy of a neural code (Brunel and Nadal, 1998; Abbott and Dayan, 1999; Guigon, 2003; Moreno-Bote et al., 2014; Benichoux et al., 2017). The Cramér-Rao bound states that a lower limit of the minimal mean squared error (MSE) for any unbiased estimator is given by the inverse of Fisher information (Lehmann and Casella, 1998). Thus, increasing Fisher information reduces the lower bound on MSE. However, because Fisher information can only capture local errors, the true MSE might be considerably larger in the presence of catastrophic errors (Xie, 2002; Kostal et al., 2015; Malerba et al., 2022), especially if the available decoding time is short (Bethge et al., 2002; Finkelstein et al., 2018).

Illustrations of local and catastrophic errors.

(a) Top: A two-neuron system encoding a single variable using single-peaked tuning curves (λ=1). Bottom: The tuning curves create a one-dimensional activity trajectory embedded in a two-dimensional neural activity space (black trajectory). Decoding the two stimulus conditions, $s_1$ and $s_2$, illustrates the two types of estimation errors that can occur due to trial-by-trial variability: local ($\hat{s}_1$) and catastrophic ($\hat{s}_2$). (b) Same as in (a) but for periodic tuning curves (λ=0.5). Notice that the stimulus conditions are intermingled and that the stimulus cannot be determined from the firing rates. (c) Time evolution of the root mean squared error (RMSE) using maximum likelihood estimation (solid line) and the Cramér-Rao bound (dashed line) for a population of single-peaked tuning curves (N=600, w=0.3, average evoked firing rate $\bar{f}_{\text{stim}} = 20\exp(-1/w)B_0(1/w)$ sp/s, and b=2 sp/s). For about 50 ms, the RMSE is significantly larger than the predicted lower bound. (d) The empirical error distributions for the time point indicated in (c), where the RMSE strongly deviates from the predicted lower bound. Inset: A non-zero empirical error probability spans the entire stimulus domain. (e) Same as in (d) when the RMSE roughly converges to the Cramér-Rao bound. Notice the absence of large estimation errors.

A curious observation is that tuning curves in early visual areas predominantly have single-peaked firing fields, whereas grid cells in the entorhinal cortex are known for their periodically distributed firing fields (Hafting et al., 2005). It has been shown that the multiple firing locations of grid cells increase the precision of the neural code compared to single-peaked tuning curves (Sreenivasan and Fiete, 2011; Mathis et al., 2012; Wei et al., 2015). This raises the question: why are periodic firing fields not a prominent organization of early visual processing, too?

The theoretical arguments in favor of periodic tuning curves have mostly focused on local errors under the assumption that catastrophic errors are negligible (Sreenivasan and Fiete, 2011). However, given the response variability, it takes a finite amount of time to accumulate a sufficient number of spikes to decode the stimulus. Given that fast processing speed is a common feature of visual processing (Thorpe et al., 1996; Fabre-Thorpe et al., 2001; Rolls and Tovee, 1994; Resulaj et al., 2018), it is crucial that each neural population in the processing chain can quickly produce a reliable stimulus-evoked signal. Therefore, the time required to produce signals without catastrophic errors will likely put fundamental constraints on any neural code, especially in early visual areas.

Here, we contrast Fisher information with the minimal decoding time required to remove catastrophic errors (i.e. the time until Fisher information becomes a reasonable descriptor of the MSE). We base the results on the maximum likelihood estimator for uniformly distributed stimuli (i.e., the maximum a posteriori estimator) using populations of tuning curves with different numbers of peaks. We show that the minimal decoding time tends to increase with increasing Fisher information in the case of independent Poissonian noise to each neuron. This suggests a trade-off between the decoding accuracy of a neural population and the speed by which it can produce a reliable signal. Furthermore, we show that the difference in minimal decoding time grows with the number of jointly encoded stimulus features (stimulus dimensionality) and in the presence of ongoing (non-specific) activity. Thus, single-peaked tuning curves require shorter decoding times and are more robust to ongoing activity than periodic tuning curves. Finally, we illustrate the issue of large estimation errors and periodic tuning in a simple spiking neural network model tracking either a step-like stimulus change or a continuously time-varying stimulus.

Results

Shapes of tuning curves, Fisher information, and catastrophic errors

To enable a comparison between single-peaked and periodic (multi-peaked) tuning curves, we consider circular tuning curves responding to a D-dimensional stimulus, s, according to

(1) $f_i(\mathbf{s}) = a_i \prod_{j=1}^{D} \exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_i}\left(s_j - s_{i,j}\right)\right) - 1\right)\right) + b$

where $a_i$ is the peak amplitude of the stimulus-related tuning curve $i$, $w$ is a width scaling parameter, $\lambda_i$ defines the spatial period of the tuning curve, $s_{i,j}$ determines the location of the peak(s) in the j:th stimulus dimension, and $b$ determines the amount of ongoing activity (see Figure 1a–b, top panels). The parameters are kept fixed for each neuron, thus ignoring any effect of learning or plasticity. In the following, the stimulus domain is set to $\mathbf{s} \in [0,1)^D$ for simplicity. To avoid boundary effects, we assume that the stimulus has periodic boundaries (i.e. $s_j = 0$ and $s_j = 1$ are the same stimulus condition) and adjust any decoded value to lie within the stimulus domain, for example,

(2) $\hat{s}_{ML} = 1 + 0.1 \;(\mathrm{mod}\ 1) = 0.1,$

see Materials and methods - ’Implementation of maximum likelihood estimator’ for details.
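
For concreteness, the tuning model in Equation 1 is straightforward to evaluate in code. The following Python sketch (the function and variable names are ours, for illustration only) computes $f_i(\mathbf{s})$ for a D-dimensional stimulus:

```python
import numpy as np

def tuning_curve(s, a, w, lam, s_pref, b=0.0):
    """Evaluate Equation 1 for one neuron.

    s      : stimulus, array-like of shape (D,)
    a      : peak amplitude a_i (sp/s)
    w      : width scaling parameter
    lam    : relative spatial period lambda_i
    s_pref : preferred location(s) s_{i,j}, shape (D,)
    b      : ongoing (non-specific) activity (sp/s)
    """
    s = np.atleast_1d(np.asarray(s, dtype=float))
    phase = 2.0 * np.pi / lam * (s - np.atleast_1d(s_pref))
    # Product over stimulus dimensions = exp of the summed exponents
    return a * np.exp(np.sum(np.cos(phase) - 1.0) / w) + b

# Example: a single-peaked curve (lam = 1) peaks at s_pref with rate a + b
f_peak = tuning_curve(0.5, a=20.0, w=0.3, lam=1.0, s_pref=0.5, b=2.0)  # = 22.0
```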

We assume that the stimulus is uniformly distributed across its domain and that its dimensions are independent. This can be seen as a worst-case scenario as it maximizes the entropy of the stimulus. In a single trial, we assume that the number of emitted spikes for each neuron is conditionally independent and follows a Poisson distribution with stimulus-dependent rate $f_i(\mathbf{s})$. Thus, the probability of observing a particular activity pattern, r, in a population of N neurons given the stimulus-dependent rates and decoding time, T, is

(3) $p(\mathbf{r}|\mathbf{s},T) = \prod_{i=1}^{N} p\left(r_i \mid T f_i(\mathbf{s})\right) = \prod_{i=1}^{N} \frac{\left(T f_i(\mathbf{s})\right)^{r_i} \exp\left(-T f_i(\mathbf{s})\right)}{r_i!}.$
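
A single simulated trial then amounts to drawing one Poisson sample per neuron. A minimal sketch (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def sample_spike_counts(rates, T, rng=rng):
    """One trial of conditionally independent Poisson counts (Equation 3).

    rates : stimulus-dependent rates f_i(s) for all N neurons (sp/s)
    T     : decoding time (s)
    """
    return rng.poisson(np.asarray(rates) * T)

# e.g. N = 600 neurons at 20 sp/s observed for T = 20 ms:
counts = sample_spike_counts(np.full(600, 20.0), T=0.020)  # ~0.4 spikes/neuron
```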

Given a model of neural responses, the Cramér-Rao bound provides a lower bound on the error with which the population can communicate a signal, given by the inverse of the Fisher information. For sufficiently large populations, using the population and spike count models in Equation 1 and Equation 3, Fisher information is given by (for $a_i = a$ and $b = 0$ for all neurons, see Sreenivasan and Fiete, 2011 or Appendix 2 - 'Fisher information and the Cramér-Rao bound' for details)

(4) $J \approx (2\pi)^2 \frac{a T N}{w} \frac{B_0(1/w)^{D-1} B_1(1/w)}{\exp(D/w)}\, \overline{\lambda^{-2}}$

where $\overline{\lambda^{-2}}$ denotes the sample average of the squared inverse of the (relative) spatial periods across the population, and $B_\alpha(\cdot)$ denotes the modified Bessel functions of the first kind. Equation 4 (and similar expressions) suggests that populations consisting of periodic tuning curves, for which $\overline{\lambda^{-2}} \gg 1$, are superior at communicating a stimulus signal compared to a population using tuning curves with only single peaks, where $\overline{\lambda^{-2}} = 1$. However, (inverse) Fisher information only predicts the amount of local errors for an efficient estimator. Hence, the presence of catastrophic errors (Figure 1a, bottom) can be identified by large deviations from the predicted MSE for an asymptotically efficient estimator (Figure 1c–d). Therefore, we define minimal decoding time as the shortest time required to approach the Cramér-Rao bound (Figure 1c and e).
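
As an illustration of why periodic codes score higher on this measure, Equation 4 can be evaluated numerically; in the sketch below (our own helper, with example parameter values), SciPy's iv is the modified Bessel function of the first kind:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def fisher_info_eq4(a, T, N, w, D, lams):
    """Approximate Fisher information per Equation 4 (b = 0, equal amplitudes)."""
    lam_m2 = np.mean(1.0 / np.asarray(lams) ** 2)  # sample mean of lambda^-2
    return ((2.0 * np.pi) ** 2 * a * T * N / w
            * iv(0, 1.0 / w) ** (D - 1) * iv(1, 1.0 / w) / np.exp(D / w)
            * lam_m2)

# A geometric progression of periods (c = 0.7, L = 5) vs. single-peaked (all 1):
J_periodic = fisher_info_eq4(a=20, T=0.1, N=600, w=0.3, D=1,
                             lams=[0.7 ** k for k in range(5)])
J_single = fisher_info_eq4(a=20, T=0.1, N=600, w=0.3, D=1, lams=[1.0] * 5)
# J_periodic > J_single, since mean(lambda^-2) > 1 for the periodic population
```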

Periodic tuning curves and stimulus ambiguity

To understand why the amount of catastrophic errors can differ with different spatial periods, consider first the problem of stimulus ambiguity that can arise with periodic tuning curves. If all tuning curves in the population share the same relative spatial period, λ, then the stimulus-evoked responses can only provide unambiguous information about the stimulus in the range [0,λ). Beyond this range, the response distributions are no longer unique. Thus, single-peaked tuning curves (λ=1) provide unambiguous information about the stimulus. Periodic tuning curves (λ<1), on the other hand, require the use of tuning curves with two or more distinct spatial periods to resolve the stimulus ambiguity (Fiete et al., 2008; Mathis et al., 2012; Wei et al., 2015). In the following, we assume the tuning curves are organized into discrete modules, where all tuning curves within a module share the same spatial period (Figure 1b), mimicking the organization of grid cells (Stensola et al., 2012). For convenience, assume that $\lambda_1 > \lambda_2 > \ldots > \lambda_L$, where L is the number of modules. Thus, the first module provides the most coarse-grained resolution of the stimulus interval, and each successive module provides an increasingly fine-grained resolution. It has been suggested that a geometric progression of spatial periods, such that $\lambda_i = c\lambda_{i-1}$ for some scale factor $0 < c < 1$, may be optimal for maximizing the resolution of the stimulus while reducing the required number of neurons (Mathis et al., 2012; Wei et al., 2015). However, trial-by-trial variability can still cause stimulus ambiguity and catastrophic errors - at least for short decoding times, as we show later, even when using multiple modules with different spatial periods.

(Very) Short decoding times - when both Fisher information and MSE fail

While it is known that Fisher information is not an accurate predictor of the MSE when the decoding time is short (Bethge et al., 2002), the shortcomings of the MSE itself have received less attention. Although MSE is often interpreted as a measure of accuracy, its insensitivity to rare outliers makes it a poor measure of reliability. Therefore, comparing MSE directly between populations can be a misleading measure of reliability if the distributions of errors are qualitatively different. If the amounts of local errors differ, lower MSE does not necessarily imply fewer catastrophic errors. This is exemplified in Figure 2, comparing a single-peaked and a periodic population encoding a two-dimensional stimulus using the suggested optimal scale factor, $c = 1/1.44$ (Wei et al., 2015). During the first 30 ms, the single-peaked population has the lowest MSE of the two populations despite having lower Fisher information (Figure 2a). Furthermore, comparing the error distribution after the periodic population achieves a lower MSE (the black circle in Figure 2a) shows that the periodic population still suffers from rare errors that span the entire stimulus range (Figure 2b–c, insets). As we will show, a comparison of MSE, as a measure of reliability, only becomes valid once catastrophic errors are removed. Here we assume that catastrophic errors should strongly affect the usability of a neural code. Therefore, we argue that the first criterion for any rate-based neural code should be to satisfy its constraint on decoding time to avoid catastrophic errors.

(Very) Short decoding times when both Fisher information and MSE fail.

(a) Time evolution of root mean squared error (RMSE), averaged across trials and stimulus dimensions, using maximum likelihood estimation (solid lines) for two populations (blue: $\lambda_1 = 1$, $c = 1$; red: $\lambda_1 = 1$, $c = 1/1.44$). Dashed lines indicate the lower bound predicted by Cramér-Rao. The black circle indicates the point where the periodic population has become optimal in terms of MSE. (b) The empirical distribution of errors for the time indicated by the black circle in (a). The single-peaked population (blue) has a wider distribution of errors centered around 0 compared to the periodic population (red), as suggested by having a higher MSE. Inset: Zooming in on rare error events reveals that while the periodic population has a narrower distribution of errors around 0, it also has occasional errors across large parts of the stimulus domain. (c) The empirical CDF of the errors for the same two populations as in (b). Inset: a zoomed-in version (last 1%) of the empirical CDF highlights the heavy-tailed distribution of errors for the periodic population. Parameters used in the simulations: stimulus dimensionality D=2, number of modules L=5, number of neurons N=600, average evoked firing rate $\bar{f}_{\text{stim}} = 20\exp(-1/w)B_0(1/w)$ sp/s, ongoing activity b=2 sp/s, and width parameter w=0.3. Note that the estimation errors for the two stimulus dimensions are pooled together.

Minimal decoding times in populations with two modules

How does the choice of spatial periods impact the required decoding time to remove catastrophic errors? To get some intuition, we first consider the case of populations encoding a one-dimensional stimulus using only two different spatial scales, λ1 and λ2. From the perspective of a probabilistic decoder (Seung and Sompolinsky, 1993; Deneve et al., 1999; Ma et al., 2006), assuming that the stimulus is uniformly distributed, the maximum likelihood (ML) estimator is Bayesian optimal (and asymptotically efficient). The maximum likelihood estimator aims at finding the stimulus condition which is the most likely cause of the observed activity, r, or

(5) $\hat{s}_{ML} = \underset{s}{\operatorname{arg\,max}}\; p(\mathbf{r}|s),$

where $p(\mathbf{r}|s)$ is called the likelihood function. The likelihood function equals the probability of the observed neural activity, r, under the assumption that the stimulus condition was s. In the case of independent Poisson spike counts (or at least independence across modules), each module contributes to the joint likelihood function $p(\mathbf{r}|s)$ with an individual likelihood function, $Q_1$ and $Q_2$ (Wei et al., 2015). Thus, the joint likelihood function can be seen as the product of the two individual likelihood functions, where each likelihood is $\lambda_i$-periodic

(6) $p(\mathbf{r}|s) = Q_1(\mathbf{r}|s)\, Q_2(\mathbf{r}|s).$

In this sense, each module provides its own ML-estimate of the stimulus, $s_{ML}^{(1)} = \arg\max_s Q_1(\mathbf{r}|s)$ and $s_{ML}^{(2)} = \arg\max_s Q_2(\mathbf{r}|s)$. Because of the periodicity of the tuning curves, there can be multiple modes for each of the likelihoods (e.g. Figure 3a and b, top panels). For the largest mode of the joint likelihood function to also be centered close to the true stimulus condition, the distance $\delta$ between $s_{ML}^{(1)}$ and $s_{ML}^{(2)}$ must be smaller than between any other pair of modes of $Q_1$ and $Q_2$. Thus, to avoid catastrophic errors, $\delta$ must be smaller than some largest allowed distance $\delta^*$ which guarantees this relation (see Equations 25–30 for the calculation of $\delta^*$ assuming the stimulus is in the middle of the domain). As $\delta$ varies from trial to trial, we limit the probability of the decoder experiencing catastrophic errors to some small error probability, $p_{\text{error}}$, by imposing that

(7) $\Pr(|\delta| > \delta^*) < p_{\text{error}}.$

Assuming that the estimation of each module becomes efficient before the joint estimation, Equation 7 can be reinterpreted as a lower bound on the required decoding time before the estimation based on the joint likelihood function becomes efficient

(8) $T_{th} > 2\left(\frac{\operatorname{erfinv}(1 - p_{\text{error}})}{\delta^*}\right)^2 \left(\frac{1}{J_{1,\text{norm}}} + \frac{1}{J_{2,\text{norm}}}\right),$

where erfinv() is the inverse of the error function and Jk,norm refers to the time-normalized Fisher information of module k (see Materials and methods for derivation). Thus, the spatial periods of the modules influence the minimal decoding time by determining: (1) the largest allowed distance δ* between the estimates of the modules and (2) the variances of the estimations given by the inverse of their respective Fisher information.
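
The bound in Equation 8 is simple to evaluate numerically. A sketch (our own helper; $\delta^*$ and the time-normalized Fisher informations are assumed to be precomputed as described above):

```python
import numpy as np
from scipy.special import erfinv

def t_min_two_modules(delta_star, J1_norm, J2_norm, p_error=1e-4):
    """Lower bound on decoding time from Equation 8.

    delta_star       : largest allowed distance between the module estimates
    J1_norm, J2_norm : time-normalized Fisher information of modules 1 and 2
    """
    return (2.0 * (erfinv(1.0 - p_error) / delta_star) ** 2
            * (1.0 / J1_norm + 1.0 / J2_norm))
```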

Figure 3 with 2 supplements see all
Catastrophic errors and minimal decoding times in populations with two modules.

(a) Top: Sampled individual likelihood functions of two modules with very different spatial periods. Bottom: The sampled joint likelihood function for the individual likelihood functions in the top panel. (b–c) Same as in (a) but for spatial periods that are similar but not identical and for a single-peaked population, respectively. (d) Bottom: The dependence of the minimal decoding time on the scale factor c for $\lambda_1 = 1$. Blue circles indicate the simulated minimal decoding times, and the black line indicates the estimation of the minimal decoding times according to Equation 8, with $p_{\text{error}} = 10^{-4}$. Top left: The predicted value of $1/\delta^*$. Top right: The inverse of the Fisher information. (e) Same as (d) but for $\lambda_1 = 1/2$. (f) RMSE (lines), the 99.8th percentile (filled circles), and the maximal error (open circles) of the error distribution for several choices of scale factor, c, and decoding time. The color code is the same as in panels (d-e). The parameters used in (d-f) are: population size N=600, number of modules L=2, scale factors c=0.05–1, width parameter w=0.3, average evoked firing rate $\bar{f}_{\text{stim}} = 20\exp(-1/w)B_0(1/w)$ sp/s, ongoing activity b=0 sp/s, and threshold factor α=2.

To give some intuition for the approximation: if the spatial periods of the modules are very different, $\lambda_2 \ll \lambda_1$, then there exist many peaks of $Q_2$ around the peak of $Q_1$ (Figure 3a). Additionally, modes of $Q_1$ and $Q_2$ that lie far away from the true stimulus can be close together. Thus, $\lambda_2 \ll \lambda_1$ can create a highly multi-modal joint likelihood function where small deviations in $s_{ML}^{(1)}$ and $s_{ML}^{(2)}$ can cause a shift, or a change, of the maximal mode of the joint likelihood. To avoid this, $\delta^*$ must be small, leading to longer decoding times by Equation 8. Furthermore, suppose the two modules have similar spatial periods, $\lambda_2 \approx \lambda_1$, or $\lambda_1$ is close to a multiple of $\lambda_2$. In that case, the distance between the peaks a few periods away is also small, again leading to longer decoding times (Figure 3b). In other words, periodic tuning suffers from the dilemma that small shifts in the individual stimulus estimates can cause catastrophic shifts in the joint likelihood function. Although these might be rare events, the possibility of such errors increases the probability of catastrophic errors. Thus, assuming $\lambda_1 < 1$, both small and large scale factors c can lead to long decoding times. When $\lambda_1 = 1$, however, only small scale factors c pose such problems, at least unless the stimulus is close to the periodic edge (i.e. $s \approx 0$ or $s \approx 1$, see Figure 3—figure supplement 1). On the other hand, compared to single-peaked tuning curves, periodic tuning generally leads to sharper likelihood functions, increasing the accuracy of the estimates once catastrophic errors are removed (e.g., compare the widths of the joint likelihood functions in Figure 3a–c).
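
The module-wise factorization in Equation 6 is easy to reproduce numerically: log-likelihoods add across modules, so the multi-modality of the joint likelihood can be inspected directly. A minimal sketch (the spike counts `r1`, `r2` and rate functions `module1_rates`, `module2_rates` are placeholders for a two-module population):

```python
import numpy as np

def log_likelihood(counts, rates_fn, s_grid, T):
    """Poisson log-likelihood (up to constants) on a grid of candidate stimuli."""
    ll = np.empty(len(s_grid))
    for k, s in enumerate(s_grid):
        mean_counts = np.clip(T * rates_fn(s), 1e-12, None)
        ll[k] = np.sum(counts * np.log(mean_counts) - mean_counts)
    return ll

s_grid = np.linspace(0.0, 1.0, 1000, endpoint=False)
# Log-likelihoods add across modules (Equation 6 in log-space):
# joint_ll = (log_likelihood(r1, module1_rates, s_grid, T)
#             + log_likelihood(r2, module2_rates, s_grid, T))
# Counting local maxima of joint_ll exposes the ambiguity structure of the code.
```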

To test the approximation in Equation 8, we simulated a set of populations (N=600 neurons) with different spatial periods. The populations were created using identical tuning parameters except for the spatial periods, whose distribution varied across the populations, and the amplitudes, which were adjusted to ensure an equal average firing rate (across all stimulus conditions) for all neurons (see Materials and methods for details on simulations). As described above, the spatial periods were related by a scale factor c. Different values of c were tested for the largest period being either $\lambda_1 = 1$ or $\lambda_1 = 1/2$. Furthermore, only populations with unambiguous codes over the stimulus interval were included (i.e. $c \neq 1/2, 1/3, 1/4, \ldots$ for $\lambda_1 = 1/2$; Mathis et al., 2012). Note, however, that there is no restriction on the periodicity of the tuning curves to align with the periodicity of the stimulus (i.e. $1/\lambda_i$ does not need to be an integer). For each population, the minimal decoding time was found by gradually increasing the decoding time until the empirical MSE was lower than twice the predicted lower bound (i.e. α=2, see Equation 10 and Materials and methods for details). Limiting the probability of catastrophic errors to $p_{\text{error}} = 10^{-4}$, Equation 8 is a good predictor of the minimal decoding time (Figure 3d–e, bottom panels, coefficient of determination $R^2 \approx 0.92$ and $R^2 \approx 0.95$ for $\lambda_1 = 1$ and $\lambda_1 = 1/2$, respectively). For both $\lambda_1 = 1$ and $\lambda_1 = 1/2$, the minimal decoding time increases overall with decreasing scale factor, c (see Figure 3d–e). However, especially for $\lambda_1 = 1/2$, the trend is interrupted by large peaks (Figure 3e). For $\lambda_1 = 1$, there are deviations from the predicted minimal decoding time for small scale factors, c. They occur whenever $\lambda_2$ is slightly below a multiple of $\lambda_1 = 1$, and get more pronounced when the sensitivity is increased by lowering the threshold factor to α=1.2 (see Figure 3—figure supplement 2). We believe one cause of these deviations is the additional shifts across the periodic boundary (as in Figure 3—figure supplement 1) that can occur when c is just below 1/2, 1/3, 1/4, etc.

To confirm that the estimated minimal decoding times have some predictive power on the error distributions, we re-simulated a subset of the populations for various decoding times, T, using 15,000 randomly sampled stimulus conditions (Figure 3f). Both the RMSE and outlier errors (99.8th percentile and the maximal error, that is, 100th percentile) agree with the shape of minimal decoding times, suggesting that a single-peaked population is good at removing large errors at very short time scales.

Minimal decoding times for populations with more than two modules

From the two-module case above, it is clear that the choice of scale factor influences the minimal decoding time. However, Equation 8 is difficult to interpret and is only valid for two-module systems (L=2). To approximate how the minimal decoding time scales with the distribution of spatial periods in populations with more than two modules, we extended the approximation method first introduced by Xie (Xie, 2002). The method was originally used to assess the number of neurons required to reach the Cramér-Rao bound for single-peaked tuning curves with additive Gaussian noise for the ML estimator. In addition, it only considered encoding a one-dimensional stimulus variable. We adapted this method to approximate the required decoding time for stimuli with arbitrary dimensions, Poisson-distributed spike counts, and tuning curves with arbitrary spatial periods. In this setting, the scaling of minimum decoding time with the spatial periods, $\lambda_1, \ldots, \lambda_L$, can be approximated as (see Materials and methods for derivation)

(9) $T_{th} \gtrsim A(w)\, \frac{1}{aN}\, \frac{\exp(D/w)}{B_0(1/w)^{D-1}}\, \frac{\overline{\lambda^{-3}}^{\,2}}{\overline{\lambda^{-2}}^{\,3}} \approx A'(w)\, \frac{1}{N\, \overline{f^{(D)}_{\text{stim}}}}\, \frac{\overline{\lambda^{-3}}^{\,2}}{\overline{\lambda^{-2}}^{\,3}},$

where $\overline{\lambda^{-2}}$ and $\overline{\lambda^{-3}}$ indicate the sample averages of the inverse spatial periods (squared or cubed, respectively) in the population, $\overline{f^{(D)}_{\text{stim}}}$ is the average evoked firing rate across the stimulus domain, and $A(w)$ (or $A'(w)$) is a function of w (see Materials and methods for the detailed expression). The last approximation holds with equality whenever all tuning curves have an integer number of peaks. The derivation was carried out assuming the absence of ongoing activity and that the amplitudes within each population are similar, $a_1 \approx \ldots \approx a_N$. Importantly, the approximation also assumes the existence of a unique solution to the maximum likelihood equations. Therefore, it is ill-equipped to predict the issues of stimulus ambiguity. Thus, going back to the two-module cases, Equation 9 cannot capture the additional effects of $\lambda_2 \ll \lambda_1$ or of $\lambda_1$ being close to a multiple of $\lambda_2$, as in Figure 3d–e. On the other hand, complementing the theory presented in Equation 8, Equation 9 provides a more interpretable expression for the scaling of minimal decoding time. For $c \leq 1$, the minimal decoding time, $T_{th}$, is expected to increase with decreasing scale factor, c (see Equation 47). The scaling should also be similar for different choices of $\lambda_1$. Furthermore, assuming all other parameters are constant, the minimal decoding time should grow roughly exponentially with the number of stimulus dimensions.
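
The period-dependent factor in Equation 9 is simple to compute, and doing so illustrates the predicted growth of $T_{th}$ as the scale factor shrinks. A sketch using geometric progressions of L=5 periods, as in the simulations below:

```python
import numpy as np

def period_scaling_factor(lams):
    """The factor (mean lambda^-3)^2 / (mean lambda^-2)^3 from Equation 9."""
    inv = 1.0 / np.asarray(lams)
    return np.mean(inv ** 3) ** 2 / np.mean(inv ** 2) ** 3

# T_th is predicted to grow as the scale factor c shrinks (lambda_1 = 1, L = 5):
for c in (1.0, 0.7, 0.5, 0.3):
    print(c, period_scaling_factor([c ** k for k in range(5)]))
# c = 1 gives factor 1 (single-peaked); smaller c gives increasingly larger factors
```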

To confirm the validity of Equation 9, we simulated populations of N=600 tuning curves across L=5 modules. Again, the spatial periods across the modules were related by a scale factor, c (Figure 4a). To avoid the effects of $c \ll 1$, we limited the range of the scale factor to $0.3 \leq c \leq 1$. The upper bound on c was kept (for $\lambda_1 = 1$) to include entirely single-peaked populations. Again, the assumption of homogeneous amplitudes in Equation 9 was dropped in the simulations (Figure 4b, left column) to ensure that the average firing rate across the stimulus domain is equal for all neurons (see Figure 4b, right column, for the empirical average firing rates). This had little effect on Fisher information, where the theoretical prediction was based on the average amplitudes across all populations with the same $\lambda_1$ and stimulus dimensionality D (see Figure 4c, inset). As before, Fisher information grows with decreasing scale factor, c, and with decreasing spatial period $\lambda_1$. As expected, increasing the stimulus dimensionality decreases Fisher information if all other parameters are kept constant. On the other hand, the minimal decoding time increases with decreasing spatial periods and increases with stimulus dimensionality (Figure 4c). The increase in decoding time between D=1 and D=2 is also very well predicted by Equation 9, at least for c>0.5 (Figure 4—figure supplement 1a). In these simulations, the choice of width parameter is compatible with experimental data (Ringach et al., 2002), but similar trends were found for a range of different width parameters (although the differences become smaller for small w, see Figure 4—figure supplement 1b–d).

Figure 4 with 6 supplements see all
Minimal decoding times for populations with five modules.

(a) Illustration of the likelihood functions of a population with L=5 modules using scale factor c=0.7. (b) The peak stimulus-evoked amplitudes of each neuron (left column) were selected such that all neurons shared the same expected firing rate for a given stimulus condition (right column). (c) Inset: Plot of average Fisher information as a function of the scale factor c (colored lines: estimations from simulation data, black lines: theoretical approximations). Main plot: Plot of minimal decoding time as a function of scale factor c. Minimal decoding time tends to increase with decreasing grid scales (colored lines: estimated minimal decoding time from simulations, black lines: fitted theoretical predictions using Equation 47). The gray color corresponds to points with large discrepancies between the predicted and the simulated minimal decoding times. (d) Plot of the average Fisher information against the minimal decoding time. Points colored in gray are the same as in panel (c). (e) RMSE (lines), the 99.8th percentile (filled circles), and the maximal error (open circles) of the error distribution when decoding a 1-dimensional stimulus for several choices of decoding time. The color code is the same as above. (f) Same as (e) but for a two-dimensional stimulus. Note that the error distributions across stimulus dimensions are pooled together. Parameters used in panels (a-d): population size N=600, number of modules L=5, scale factors c=0.3–1, width parameter w=0.3, average evoked firing rate $\bar{f}_{\text{stim}} = 20\exp(-D/w)B_0(1/w)^{D}$ sp/s, ongoing activity b=0 sp/s, and threshold factor α=2.

From Equation 9, we fitted two constants, $K_1$ (regressor) and $K_2$ (intercept), using least squares regression across populations sharing the same largest period, $\lambda_1$, and stimulus dimensionality, D (see Equation 47). Within the simulated range of scale factors, the regressions provide reasonable fits for the populations with $\lambda_1 = 1$ (Figure 4c, coefficient of determination $R^2 \approx 0.89$ and $R^2 \approx 0.90$ for D=1 and D=2, respectively). For the populations with $\lambda_1 = 1/2$, Equation 9 becomes increasingly unable to predict the behavior of the minimal decoding time as c approaches 1 (see the red and yellow lines in Figure 4c–d). On the other hand, as was suggested above, the scaling of the minimal decoding time with c is, in fact, similar for $\lambda_1 = 1$ and $\lambda_1 = 1/2$ whenever c is less than 0.9. As suggested by Figure 4d, there is also a strong correlation between Fisher information and minimal decoding time, again indicating a speed-accuracy trade-off. Furthermore, similar results are obtained when either decreasing the threshold factor to α=1.2 (Figure 4—figure supplement 2) or changing the minimal decoding time criterion to a one-sided Kolmogorov–Smirnov test (KS-test) between the empirical distribution of errors and the Gaussian error distribution predicted by the Cramér-Rao bound (Figure 4—figure supplement 3, using an ad-hoc Bonferroni-type correction for multiple sequential testing, α/j, where j is the j-th time comparison and α=0.05 is the significance level).

To further illustrate the relationship between minimum decoding time and the distribution of catastrophic errors, we re-simulated the same populations using fixed decoding times, evaluating the RMSE together with the 99.8th and 100th (maximal error) percentiles of the root squared error distributions across 15,000 new uniformly sampled stimuli (Figure 4e–f). As suggested by the minimal decoding times, there is a clear trade-off between minimizing RMSE over longer decoding times and removing outliers, especially the maximal error, over shorter decoding times. Figure 4—figure supplement 4 shows the time evolution for a few of these populations.

Additionally, to verify that the minimal decoding times are good predictors of the decoding time necessary to suppress large estimation errors, we compared the same error percentiles as in Figure 4e–f (i.e. the 99.8th and 100th percentiles) against the minimal decoding times, Tth, estimated in Figure 4c. For each population, we expect a strong reduction in the magnitude of the largest errors when the decoding time, T, is larger than the minimal decoding time, Tth. Figure 5 shows a clear difference in large estimation errors between populations for which Tth<T and populations with Tth>T (circles to the left and right of the magenta lines in Figure 5, respectively). Thus, although only using the difference between MSE and Fisher information, our criterion on minimal decoding time still carries important information about the presence of large estimation errors.

Minimal decoding time predicts the removal of large estimation errors.

(a) The 99.8th percentile (filled circles) and the maximal error (i.e., 100th percentile, open circles) of the root squared error distributions for D=1 against the estimated minimal decoding time for the corresponding populations (α=2) for various choices of decoding time, T (indicated by the vertical magenta lines). (b) Same as (a) but for D=2. (c-d) Same as for (a-b) but for α=1.2. Note that the plots (a) and (c), or (b) and (d), illustrate the same percentile data only remapped on the x-axis by the different minimal decoding times from the different threshold factors α. Color code: same as in Figure 4.

To summarize, while periodic tuning curves provide lower estimation errors for long decoding times by minimizing local errors (Figure 4c, inset), a population of single-peaked tuning curves is faster at producing a statistically reliable signal by removing catastrophic errors (Equation 9 and Figure 4c). Generalizing minimal decoding times to an arbitrary number of stimulus dimensions reveals that the minimal decoding time also depends on the stimulus dimensionality (Figure 4c, compare lines for D=1 and D=2). Interestingly, however, the approximation predicts that although minimal decoding time grows with increasing stimulus dimensionality, the minimal required spike count might be independent of stimulus dimensionality, at least for populations with integer spatial frequencies, that is, an integer number of peaks (see Equation A5.4). The populations simulated here have non-integer spatial frequencies. However, the scaling trend of the mean spike count is still just slightly below 1 (indicating that slightly fewer spikes across the population are needed with increasing D, see Figure 4—figure supplement 5). Thus, as the average firing rate decreases with the number of encoded features D (Figure 4b, right column), the increase in minimal decoding time for stimuli of higher dimensionality can be primarily explained by requiring a longer time to accumulate a sufficient number of spikes across the population. Lastly, to rule out the possibility that the differences in minimal decoding time are merely due to the periodicity of the tuning curves not aligning with that of the stimulus, we also simulated populations with different combinations of integer peaks (Figure 4—figure supplement 6). Again, the same phenomenon is observed: periodic tuning curves increase the required decoding time to remove catastrophic errors. This also highlights that the approximation of minimal decoding time does not require the spatial periods to be related by a scale factor, c.

Effect of ongoing activity

Many cortical areas exhibit ongoing activity, that is, activity that is not stimulus-specific (Snodderly and Gur, 1995; Barth and Poulet, 2012). Thus, it is important to understand the impact of ongoing activity on the minimal decoding time, too. Unfortunately, because our approximation of the minimal decoding times did not include ongoing activity, we relied on simulations to study the effect of such non-specific activity.

When including independent ongoing (background) activity at 2 spikes/s to all neurons for the same populations as in Figure 4, minimal decoding times were elevated across all populations (Figure 6). Furthermore, the minimal decoding time increased faster with decreasing c in the presence of ongoing activity compared to the case without ongoing activity (ratios of fitted regressors $K_1(b=2)/K_1(b=0)$ using Equation 47 were approximately 1.69 and 1.72 for D=1 and D=2, respectively). Similar results are found using α=1.2 (Figure 6—figure supplement 1) or the alternative criterion on minimal decoding time based on one-sided KS-tests described earlier (Figure 6—figure supplement 2). Thus, ongoing activity can have a substantial impact on the time required to produce reliable signals. Figure 6 suggests that areas with ongoing activity are less suited for periodic tuning curves. In particular, the combination of multidimensional stimuli and ongoing activity leads to much longer minimal decoding times for tuning curves with small spatial periods ($c \ll 1$). For example, when encoding a two-dimensional stimulus, only the populations with $\lambda_1 = 1$, c=1 and $\lambda_1 = 1$, c=0.95 could remove catastrophic errors in less than 40 ms when ongoing activity at 2 sp/s was present. Thus, the ability to produce reliable signals at high speeds severely deteriorates for periodic tuning curves in the presence of non-specific ongoing activity.

Figure 6 with 2 supplements see all
Ongoing activity increases minimal decoding time.

(a) The case of encoding a one-dimensional stimulus (D=1) with or without ongoing activity at 2 sp/s (diamond and circle shapes, respectively). (b) The case of a two-dimensional stimulus (D=2) under the same conditions as for (a). In both conditions, ongoing activity increases the time required for all populations to produce reliable signals, but the effect is strongest for $c \ll 1$. The parameters used in the simulations are: population size N=600, number of modules L=5, scale factors c=0.3–1, width parameter w=0.3, average evoked firing rate $\bar{f}_{\text{stim}} = 20\exp(-D/w)B_0(1/w)^{D}$ sp/s, ongoing activity b=2 sp/s, and threshold factor α=2.

This result has an intuitive explanation. The amount of catastrophic errors depends on the probability that the trial variability reshapes the neural activity to resemble the possible activities for a distinct stimulus condition (see Figure 1a). From the analysis presented above, periodic tuning curves have been suggested to be more susceptible to such errors. Adding ongoing activity does not reshape the stimulus-evoked parts of the tuning curves but only increases the trial-by-trial variability. Thus, by this reasoning, it is not surprising that the systems that are already more susceptible are also more strongly affected by the increased variability induced by ongoing activity. The importance of Figure 6 is that even ongoing activity as low as 2 sp/s can have a clearly visible effect on minimal decoding time.

Implications for a simple spiking neural network with sub-optimal readout

Until this point, the arguments about minimal decoding time have relied on rate-based tuning curves encoding static stimuli. To extend beyond static stimuli and to exemplify the role of decoding time for spiking neurons, we simulated simple two-layer feed-forward spiking neural networks to decode time-varying stimulus signals. The first layer (N1=500) corresponds to the tuning curves (without connections between the simulated neurons). The stimulus-specific tuning of the Poissonian inputs to these neurons is either fully single-peaked, creating a population of single-peaked tuning curves, or periodic with different spatial periods, creating a population of periodic tuning curves (Figure 7a, see Materials and methods for details). The second layer instead acts as a readout layer (N2=400, allowing a weak convergence of inputs from the first layer). This layer receives both stimulus-specific excitatory input from the first layer and external non-specific Poissonian excitation (corresponding to background activity). The connection strength between the first and second layers depends on the difference in preferred stimulus conditions between the pre- and post-synaptic neurons. Such connectivity could, for example, be obtained by unsupervised Hebbian learning. Because the tuning curves in the first layer can be periodic, they can also connect strongly to several readout neurons. We introduced lateral inhibition among the readout neurons (without explicitly modeling inhibitory neurons) to create a winner-take-all style of dynamics. The readout neurons with large differences in preferred stimulus inhibit each other more strongly. Decoding is assumed to be instantaneous and based on the preferred stimulus condition of the spiking neuron in the readout layer. However, to compare the readouts, we averaged the stimulus estimates in sliding windows.

Implications for a simple spiking neural network with suboptimal readout.

(a) Illustration of the spiking neural networks (SNNs). (b) Example of single trials. Top row: Two example trials for a step-like change in stimulus (green line). The left and right plots show the readout activity for the single-peaked (blue) and periodic SNNs (orange), respectively. Note that the variance around the true stimulus is larger for the single-peaked SNN (i.e. larger local errors) but that there are fewer very large errors than for the periodic SNN. Bottom row: Same as for the top row but with a continuously time-varying stimulus. (c) Bottom: The median RMSE (thick lines) over all trials in a sliding window (length 50 ms) for the single-peaked (blue) and periodic (orange) SNNs. The shadings correspond to the regions between the 5th and 95th percentiles. Top: The instantaneous population firing rates of the readout layers and the standard deviations (same color code as in the bottom panel). (d) Bottom left: The median estimated stimulus across trials in a sliding window (length 10 ms) for the single-peaked (blue) and periodic (orange) SNNs. Shaded areas again correspond to the regions between the 5th and 95th percentiles. The true stimulus is shown in green. Bottom right: the average firing rate of each neuron, arranged according to the preferred stimulus condition. Top: The instantaneous population firing rates of the readout layers and the standard deviations. See Materials and methods for simulation details and Table 1, Table 2, Table 3 for all parameters used in the simulation.

We tested two different types of time-varying stimuli: (1) a step-like change from s=0.25 to s=0.75 (Figure 7b top row, green trace) and (2) a continuously time-varying stimulus drawn from an Ornstein–Uhlenbeck process (Figure 7b bottom row, green trace; see Materials and methods). In the case of a step-like stimulus change, the readout layer for the single-peaked population required a shorter time to switch states than the periodic network (Figure 7c). The shorter switching time is consistent with the hypothesis that single-peaked tuning curves have shorter minimal decoding times than periodic tuning curves. In these simulations, the difference is mainly due to some neurons in the first layer of the periodic network responding both before and after the step change. Thus, the correct readout neurons (after the change) must compensate for the hyper-polarization built up before the change and the continuing inhibitory input from the previously correct readout neurons (which still get excitatory inputs). Note that there are only minor differences in the population firing rates between the readout layers, confirming that this is not a consequence of different excitation levels but rather of the structures of excitation.

The continuously time-varying stimulus could be tracked well by both networks. However, averaging across trials shows that SNNs with periodic tuning curves have larger sporadic fluctuations (Figure 7d). This suggests that decoding with periodic tuning curves has difficulties in accurately estimating the stimulus without causing sudden, brief periods of large errors. To make a statistical comparison between the populations, we investigated the distributions of root mean squared error (RMSE) across trials. In both stimulus cases, there is a clear difference between the network with single-peaked tuning curves and the network with periodic ones. For the step-like change in stimulus condition, a significant difference in RMSE arises roughly 100 ms after the stimulus change (Figure 8a). For the time-varying stimulus, using single-peaked tuning curves also results in significantly lower RMSE compared to a population of periodic tuning curves (Figure 8b, RMSE calculated across the entire trial).

Statistical comparison of the SNN models.

(a) Step-like change: Comparison between the distributions of accumulated RMSEs at different decoding times ($p = 0.4$, $9.0 \times 10^{-4}$, and $8.7 \times 10^{-5}$, respectively). (b) OU-stimulus: The distributions of RMSE across trials for the two SNNs ($p = 4.3 \times 10^{-8}$). All statistical comparisons in (a) and (b) were based on two-sample Kolmogorov–Smirnov (KS) tests using 30 trials per network.

Discussion

Several studies have suggested that periodic tuning creates an unparalleled precise neural code by minimizing local errors (Sreenivasan and Fiete, 2011; Mathis et al., 2012; Wei et al., 2015; Malerba et al., 2022). Nevertheless, despite this advantage of periodic tuning, single-peaked tuning curves are widespread in early sensory areas and especially in the early visual system. There is a long history of studying information representation using rate-based tuning curves. Still, the effect of spatial periodicity and catastrophic errors on the required decoding time has not been addressed. Here, we showed that the possibility of catastrophic estimation errors (Figure 1a) introduces the possibility that different shapes of tuning curves can have different minimal decoding times.

The emerging question is whether there is a trade-off between the accuracy of a neural code and the minimal required decoding time for single-peaked and periodic tuning. The answer is yes. We found that minimal decoding time increased with decreasing spatial periods of the tuning curves (Figure 4c), suggesting a trade-off between accuracy and speed for populations of tuning curves. The differences in minimal decoding time cannot be explained by the periodicity of the tuning curves not aligning with the stimulus domain, as the same holds when comparing populations with integer numbers of peaks (Figure 4—figure supplement 6). Furthermore, our results remained unchanged when we discarded any decoded stimuli which needed the mod 1 operation to lie within the stimulus domain $[0,1)^D$, thus ruling out any possible distortion effect of the periodic stimulus and decoding approach. In addition, we show that our results are valid for a range of population sizes (Figure 9a-b), ongoing (Figure 9a-b) and evoked activities (Figure 9c), and stimulus dimensions (Figure 9d). We used the more conservative threshold factor on MSE, α=1.2, to capture all the nuances w.r.t. the level of ongoing activity even for large population sizes. In simulated networks with spiking neurons, we showed that the use of periodic tuning curves increased the chances of large estimation errors, leading to longer times before switching ‘states’ (Figure 7c) and difficulties tracking a time-varying, one-dimensional stimulus (Figure 7d).

Minimal decoding time for various tuning and stimulus parameters.

(a-b) Minimal decoding time for different combinations of population sizes (N) and levels of ongoing background activity (b) for the single-peaked population (a) and the periodic population (b). (c) Minimal decoding time as a function of average stimulus-evoked firing rate (x-axis re-scaled to the corresponding peak amplitude, a, for single-peaked tuning curves for easier interpretation). The corresponding amplitudes are a=8,16, and 32 sp/s, respectively. (d) Minimal decoding time as a function of stimulus dimensionality. Unless indicated on the axes, the parameters are set according to the orange circles and rectangles in (a-d). Auxiliary parameters: number of modules L=5, width parameter w=0.3, and threshold factor α=1.2.

Experimental data suggest that decoding times can be very short, of the order of tens of milliseconds, reflecting that a considerable part of the information contained in firing rates over long periods is also present in short sample periods (Tovée et al., 1993). Additionally, the first few spikes have been shown to carry significant amounts of task information in visual (Resulaj et al., 2018), olfactory (Resulaj and Rinberg, 2015), and somatosensory areas (Panzeri et al., 2001; Petersen et al., 2001). As the tuning curves in this study all have equal average firing rates, we can reinterpret the minimal decoding time in terms of the prominence of the first spikes. In our simulations, tens of spikes carry enough information to produce a reliable stimulus estimate free of catastrophic errors (Figure 4—figure supplement 5). As with decoding time, single-peaked tuning curves also need fewer spikes to produce reliable signals. Thus, the speed-accuracy trade-off can be reinterpreted as a trade-off between being accurate and being efficient.

The notion of a speed-accuracy trade-off is further strengthened when considering high-dimensional stimuli that demand longer minimal decoding times. Natural stimuli typically have higher dimensionality than those used in animal experiments. Many sensory neurons are tuned to multiple features of the external stimulus, creating mixed selectivity of features (e.g. Garg et al., 2019). For neurons responding to task-related variables, mixed selectivity has been shown to enable linear separability and to improve discriminability (Rigotti et al., 2013; Fusi et al., 2016; Johnston et al., 2020). For continuous stimulus estimations, mixed selectivity has also been proposed to decrease MSE when decoding time is limited (Finkelstein et al., 2018). However, to remove catastrophic errors, which, as we have argued, is not necessarily synonymous with lower MSE, the exponential increase in minimal decoding time could easily lead to very long decoding times. Thus, minimal decoding time should set a bound on the number of features a population can jointly encode reliably. In addition, neurons in sensory areas often exhibit a degree of non-specific activity (Snodderly and Gur, 1995; Barth and Poulet, 2012). Introducing ongoing activity to the populations in our simulations further amplified the differences in minimal decoding times (Figure 6). Thus, for jointly encoded stimuli, especially in areas with high degrees of ongoing activity, a population of single-peaked tuning curves might be the optimal encoding strategy for rapid and reliable communication.

We note that these results might extend beyond the visual areas, too. Although this study focused on tuning curves encoding continuous variables, catastrophic errors can also occur in systems with discrete tuning curves. Sensory stimuli can be fast-varying (or even discontinuous), and large errors can potentially harm the animal. Therefore, constraints on decoding time are likely important for any early sensory area. In addition, hippocampal place cells involved in spatial navigation (O’Keefe and Dostrovsky, 1971; Wilson and McNaughton, 1993) are known for their single-peaked tuning. The interesting observation in this context is that place cells produce reliable signals faster than their input signals from the medial entorhinal cortex with a combination of single- and multi-peaked tuning (Cholvin et al., 2021). On the other hand, for sufficiently slow-varying stimuli, a periodic population can be used together with error correction to remove catastrophic errors (Sreenivasan and Fiete, 2011). Furthermore, for non-periodic stimuli with large domains, the combinatorial nature of periodic tuning curves can create unique stimulus representations far exceeding the spatial periods of the tuning curves (Fiete et al., 2008). Thus, periodic tuning curves are ideal for representing space, where the stimulus domain can be vast, and the change in position is constrained by the speed of movement. Interestingly, when faced with very large arenas, place cells can also exhibit multi-peaked tuning (Eliav et al., 2021).

To summarize, we provide normative arguments for the single-peaked tuning of early visual areas. Rapid decoding of stimulus is crucial for the survival of the animals. Consistent with this, animals and humans can process sensory information at impressive speeds. For example, the human brain can generate differentiating event-related potentials to go/no-go categorization tasks using novel complex visual stimuli in as little as 150 ms (Thorpe et al., 1996). These ‘decoding’ times do not decrease for highly familiar objects, suggesting that the speed of visual processing cannot be reduced by learning (Fabre-Thorpe et al., 2001). Given constraints on low latency communication, it is crucial that each population can quickly produce a reliable signal. In this regard, single-peaked tuning curves are superior to periodic ones. The fact that early visual areas exhibit ongoing activity and encode multi-dimensional stimuli further strengthens the relevance of the differences in minimal decoding time.

To conclude, our work highlights that minimum decoding time is an important attribute and should be considered while evaluating candidate neural codes. Our analysis suggests that decoding of high-dimensional stimuli can be prohibitively slow with rate-based tuning curves. Experimental data on the representation of high-dimensional stimuli is rather scant as relatively low-dimensional stimuli are typically used in experiments (e.g. oriented bars). Our work gives a compelling reason to understand whether and how biological brains can reliably encode high-dimensional stimuli at behaviorally relevant time scales.

Materials and methods

Minimal decoding times - simulation protocols

Request a detailed protocol

To study the dependence of decoding time T on MSE for populations with different distributions of spatial frequencies, we simulated populations of synthetic tuning curves (Equation 1). The stimulus was circular with a $[0,1)^D$ range. The preferred stimulus conditions $s_{i,j}$ were sampled independently and uniformly over $[0,1)^D$ for each neuron and stimulus dimension. To ensure an equal comparison, the preferred locations were shared across all populations. Each neuron’s amplitude, $a_i$, was tuned according to Equation A1.5 to ensure an equal average firing rate across the stimulus domain for all neurons. In each trial, a stimulus $\mathbf{s} \in [0,1)^D$ was also independently sampled from a uniform distribution over $[0,1)^D$. The spike count for each neuron was then sampled according to Equation 3.

Minimal decoding time was defined as the shortest time for which the neural population approximately reaches the Cramér-Rao bound. To estimate the minimal decoding time in simulations, we incrementally increased the decoding time T (using 1 ms increments, starting at T=1 ms) until

(10) $\overline{\mathrm{MSE}(T, \boldsymbol{\lambda})} \leq \alpha\, \overline{\mathrm{diag}\!\left(J(T, \boldsymbol{\lambda})^{-1}\right)}.$

As the ML estimator is asymptotically efficient (attaining the Cramér-Rao bound in the limit of infinite data), the threshold factor, α, in Equation 10 was added as a relaxation (see figure captions for choices of α). Note that the mean bars on the left- and right-hand sides of Equation 10 refer to the means across stimulus dimensions (for multi-dimensional stimuli) and that $\mathrm{diag}(\cdot)$ refers to taking the diagonal elements of the inverse of the Fisher information matrix, $J(T, \boldsymbol{\lambda})^{-1}$. For a given decoding time T, the estimation of MSE was done by repeatedly sampling random stimulus conditions (from a uniform distribution), sampling a noisy response to the stimulus (Poisson distributed spike counts), and then applying maximum likelihood estimation (see the next section, ’Implementation of maximum likelihood estimator’, for details on implementation). In Figures 3 and 9, 15,000 stimulus conditions were drawn for each T, and in Figure 4, stimulus conditions were repeatedly drawn until the first two non-zero digits of the MSE were stable for 1000 consecutive trials. However, in controls not presented here, we could not see any significant difference between these two sampling approaches. Because the Fisher information matrix J was analytically estimated only in the special case without ongoing activity, it was approximated in simulations by the element-wise average across 10,000 randomly sampled stimulus conditions (also uniformly distributed), where each element was calculated according to Equation A2.12 or Equation A2.13 given a random stimulus trial.
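
In pseudocode form, the search for the minimal decoding time is a simple incremental loop over T. A sketch (the callables `mse_at` and `crb_at` stand in for the empirical MSE estimation and the averaged inverse Fisher information described above):

```python
def minimal_decoding_time(mse_at, crb_at, alpha=2.0, T_max=500):
    """Search for the smallest T (ms) satisfying Equation 10.

    mse_at(T) : empirical MSE from repeated ML decoding at decoding time T
    crb_at(T) : mean of diag(J(T)^-1) across stimulus dimensions
    """
    for T in range(1, T_max + 1):  # 1 ms increments, as in the protocol
        if mse_at(T) <= alpha * crb_at(T):
            return T
    return None  # bound not reached within T_max
```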

Implementation of maximum likelihood estimator

Given noisy neural responses, $\mathbf{r}$, the maximum likelihood estimator (MLE) selects the stimulus condition that maximizes the likelihood function, $\hat{\mathbf{s}}_{\mathrm{ML}} = \arg\max_{\mathbf{s}} \mathcal{L}(\mathbf{r},\mathbf{s}) = \arg\max_{\mathbf{s}} \prod_{i=1}^{N} p(r_i|\mathbf{s})$. A common approach is to instead search for the maximum of the log-likelihood function (the logarithm is monotonic and therefore preserves any maxima/minima). The stimulus-dependent terms of the log-likelihood can then be expressed as

(11) $\log p(\mathbf{r}|\mathbf{s}) \sim V(\mathbf{r};\mathbf{s}) = \sum_{i=1}^{N} r_i \log\left(T f_i(\mathbf{s})\right) - T f_i(\mathbf{s}).$

Unfortunately, the log-likelihood function is not guaranteed to be concave, and finding the stimulus condition $\hat{\mathbf{s}}_{\mathrm{ML}}$ that maximizes it is non-trivial (a non-convex optimization problem). To overcome this difficulty, we combined grid search with the Nelder–Mead method, an unconstrained non-linear solver (implemented using MATLAB’s built-in function fminsearch, https://www.mathworks.com/help/matlab/ref/fminsearch.html).

Grid search was used to find a small set of starting points with large log-likelihood values. To do so, we sampled 100 random stimulus conditions within the stimulus interval $[0,1)^D$ and selected the four stimulus conditions with the largest log-likelihood values. The true stimulus condition, $\mathbf{s}_{\mathrm{true}}$, was always added to the set of starting points regardless of its log-likelihood value (yielding a total of 5 starting points).

The Nelder–Mead method was then run from each of these starting points, producing a set of 5 (possibly local) maxima, and the stimulus was decoded as the estimate $\hat{\mathbf{s}}$ yielding the largest log-likelihood among them. As we always included the true stimulus condition in the Nelder–Mead search, this approach should not overestimate the amount of threshold distortion but can potentially miss some global estimation errors instead. Finally, as the Nelder–Mead method is unconstrained but the stimulus domain is periodic, the output of the maximum likelihood decoder was mapped back into the stimulus interval $[0,1)^D$ by applying the mod 1 operation along each stimulus dimension,

(12) $\hat{\mathbf{s}}_{\mathrm{ML}} = \hat{\mathbf{s}} \pmod{1}.$

Given an estimated stimulus, $\hat{\mathbf{s}}_{\mathrm{ML}}$, the error was then evaluated along each stimulus dimension independently, taking the periodic boundary into account,

(13) $\epsilon_i^2 = \min\left[\left(s_i - \hat{s}_{\mathrm{ML},i}\right)^2,\ \left(s_i - \hat{s}_{\mathrm{ML},i} + 1\right)^2,\ \left(s_i - \hat{s}_{\mathrm{ML},i} - 1\right)^2\right]$

for $i \in \{1,\dots,D\}$.
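
In Python, the same procedure could be sketched as follows, with SciPy's Nelder–Mead implementation standing in for MATLAB's fminsearch (the `rates` function and all names are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def ml_decode(r, T, rates, s_true, D=1, n_grid=100, rng=None):
    """Grid search + Nelder-Mead maximum likelihood decoding (Equations 11-13).

    r      : observed spike counts (length N)
    rates  : rates(s) -> f_i(s), the tuning curves evaluated at stimulus s
    s_true : true stimulus condition, always included as a starting point
    """
    rng = rng or np.random.default_rng()
    s_true = np.atleast_1d(np.asarray(s_true, dtype=float))

    def neg_log_lik(s):
        f = T * rates(s)
        return -np.sum(r * np.log(f) - f)          # -V(r;s), Equation 11

    # Grid search: 100 random candidates, keep the four best, plus s_true.
    candidates = rng.uniform(0.0, 1.0, size=(n_grid, D))
    order = np.argsort([neg_log_lik(c) for c in candidates])
    starts = [candidates[i] for i in order[:4]] + [s_true]

    # Unconstrained Nelder-Mead from each starting point; keep the best optimum.
    results = [minimize(neg_log_lik, s0, method="Nelder-Mead") for s0 in starts]
    s_hat = min(results, key=lambda res: res.fun).x % 1.0   # fold onto [0,1)^D, Eq. 12

    err2 = np.minimum.reduce([(s_true - s_hat) ** 2,
                              (s_true - s_hat + 1.0) ** 2,
                              (s_true - s_hat - 1.0) ** 2])  # circular errors, Eq. 13
    return s_hat, err2
```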

Lastly, to rule out that estimates $\hat{\mathbf{s}}$ falling outside the stimulus domain $[0,1)^D$ before the mod operation influenced the results, we also ran the analysis discarding these samples; this produced similar results.

Spiking network model

Stimuli

As in the previous simulations, we assumed a circular stimulus domain defined on [0,1). We simulated the responses to two different types of stimuli: (1) a step-like change in stimulus condition from s = 0.25 to s = 0.75, and (2) a stimulus drawn from a modified Ornstein–Uhlenbeck process,

(14) $\frac{ds_t}{dt} = -\frac{s_t}{\tau_s} + \sqrt{\frac{2\sigma_s^2}{\tau_s}}\,\xi_t \pmod{1}.$

where $\xi_t$ denotes a Gaussian white-noise process. For parameter values, see Table 1.

Table 1
Parameters and parameter values for the O-U stimulus.

  Parameter      Value
  τs (s)         0.5
  σs             0.1
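
Using the parameters in Table 1, the O-U stimulus can be simulated with a simple Euler–Maruyama scheme; a minimal sketch (the step size dt and initial condition s0 are our illustrative choices):

```python
import numpy as np

def ou_stimulus(T_total, dt=1e-4, tau_s=0.5, sigma_s=0.1, s0=0.5, rng=None):
    """Euler-Maruyama simulation of the modified O-U stimulus (Equation 14),
    wrapped onto the circular domain [0,1)."""
    rng = rng or np.random.default_rng()
    n_steps = int(T_total / dt)
    s = np.empty(n_steps)
    s[0] = s0
    for t in range(1, n_steps):
        drift = -s[t - 1] / tau_s * dt
        noise = np.sqrt(2.0 * sigma_s ** 2 / tau_s) * np.sqrt(dt) * rng.standard_normal()
        s[t] = (s[t - 1] + drift + noise) % 1.0    # wrap onto the circle
    return s
```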

Network model

The spiking networks were implemented as two-layer, feed-forward networks using LIF neurons with (Dirac) delta synapses. The dynamics of the membrane potential for the neurons in the first layer were described by

(15) $\frac{dV_i}{dt} = -\frac{V_i - V_{\mathrm{rest}}}{\tau_{\mathrm{mem}}} + \sum_k J_E\,\delta(t - t_k),$

where $V_i$ is the membrane potential of neuron i, $\tau_{\mathrm{mem}}$ the membrane time constant, $t_k$ the timing of the k:th input spike to neuron i, and $J_E$ the induced EPSP. The neurons in the first layer were constructed to correspond to either single-peaked or periodic tuning curves. Two networks were tested: one in which the first layer corresponds to single-peaked tuning curves and one corresponding to periodic tuning curves (with L = 4 modules). For each neuron i in module j of the first layer, the input was drawn from independent Poisson point processes with stimulus-dependent rates $f_i^{(j)}(s(t))$,

(16) $f_i^{(j)}(s(t)) = a \exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_j}\left(s(t) - s_i^{(j)}\right)\right) - 1\right)\right) + b.$

Here, the constants a and b were chosen such that the baseline firing rate was slightly above zero and the maximal firing rate slightly below 20 sp/s (see Table 3 for all network-related parameter values). Because of the choice of the $\lambda_j$, the modulation strengths of the inputs were such that the average input to each neuron was equal. For each module in the first layer, the preferred locations $s_i^{(j)}$ were placed equidistantly across $[0, \lambda_j)$.
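
A minimal sketch of these stimulus-dependent input rates (Equation 16), assuming the spatial periods listed in Table 3 and M = N1/L neurons per module; all names and defaults are illustrative:

```python
import numpy as np

def layer1_input_rates(s, lambdas=(1.0, 1/2, 1/3, 1/4), M=125,
                       a=750.0, b=4250.0, w=0.3):
    """Stimulus-dependent Poisson input rates of the first layer (Equation 16).
    Module j contains M neurons whose preferred locations tile [0, lambda_j);
    for the single-peaked network, use lambdas=(1.0,) with M = N1."""
    rates = []
    for lam in lambdas:
        s_pref = np.arange(M) * lam / M    # equidistant preferred locations
        rates.append(a * np.exp((np.cos(2.0 * np.pi / lam * (s - s_pref)) - 1.0) / w) + b)
    return np.concatenate(rates)
```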

Similarly, for the second layer, the membrane potential was described by

(17) $\frac{dV_i}{dt} = -\frac{V_i - V_{\mathrm{rest}}}{\tau_{\mathrm{mem}}} + \sum_{j \in [1,\dots,N_1]}\sum_k J_{EE}(\Delta_{i,j})\,\delta\left(t - t_k^{(j)} - d\right) + \sum_{j \in [1,\dots,N_2]}\sum_k J_{I}(\Delta_{i,j})\,\delta\left(t - t_k^{(j)} - d\right) + \sum_k J_E\,\delta(t - t_k),$

where $J_{EE}(\Delta_{i,j})$ and $J_{I}(\Delta_{i,j})$ are synapse-specific EPSPs/IPSPs that depend on the difference in preferred tuning $\Delta_{i,j}$ between the pre- and post-synaptic neurons (see Equation 18), $t_k^{(j)}$ is the timing of the k:th spike of pre-synaptic neuron j, and d is the synaptic delay (see Table 2 and Table 3 for parameter values). The neurons in the second layer were each tuned to a single preferred stimulus location, placed equidistantly across [0,1). Whenever a spike occurred in the first layer, it elicited EPSPs with a delay of 1.5 ms in all neurons in the second layer. The size of the EPSPs depended on the difference in preferred tuning, $\Delta_{i,j}$, between the pre- and post-synaptic neurons,

(18) $J_{EE}(\Delta_{i,j}) = \exp\left(\frac{1}{w_{\mathrm{ro}}}\left(\cos\left(2\pi \Delta_{i,j}\right) - 1\right)\right) J_{EE}.$

Here, $J_{EE}$ determines the maximal EPSP (mV), and the constant $w_{\mathrm{ro}}$ was chosen such that the full widths at half maximum of the EPSP kernels tiled the stimulus domain without overlap. Note that for periodically tuned neurons in the first layer (i.e. neurons with multiple preferred locations), $\Delta_{i,j}$ was determined by the smallest difference in preferred tuning across the multiple preferred locations.

Similarly to the spikes arriving from the first layer, whenever a spike occurred in the second layer, it elicited IPSPs with a delay of 1.5 ms in all other neurons in the second layer. Again, the size of the IPSPs depended on the difference in preferred tuning, $\Delta_{i,j}$, between the two neurons, but this time according to

(19) $J_{I}(\Delta_{i,j}) = \left|\sin\left(\pi \Delta_{i,j}\right)\right| J_{I}.$

Thus, the range of the inhibition was much broader than that of the excitation.
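
A sketch of the two connectivity kernels (Equations 18 and 19), where Δ is the smallest circular difference in preferred tuning and the defaults follow Table 3:

```python
import numpy as np

N2 = 400
W_RO = (np.pi / N2) ** 2 / (2.0 * np.log(2.0))   # readout width parameter (Table 3)

def delta_pref(theta_pre, theta_post):
    """Smallest circular difference in preferred tuning, in [0, 0.5]."""
    d = np.abs(theta_pre - theta_post) % 1.0
    return np.minimum(d, 1.0 - d)

def epsp_weight(delta, J_EE=2.0):
    """Distance-dependent EPSP from layer 1 onto layer 2 (Equation 18), in mV."""
    return np.exp((np.cos(2.0 * np.pi * delta) - 1.0) / W_RO) * J_EE

def ipsp_weight(delta, J_I=2.0):
    """Distance-dependent recurrent IPSP within layer 2 (Equation 19); returned
    as a positive magnitude (mV), applied with inhibitory sign in Equation 17."""
    return np.abs(np.sin(np.pi * delta)) * J_I
```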

Table 2
Parameters and parameter values for LIF neurons.

  Parameter                                 Value
  Membrane time constant, τmem (ms)         20
  Threshold membrane potential, Vth (mV)    20
  Reset membrane potential (mV)             10
  Resting potential, V0 (mV)                0
  Refractory period, τrp (ms)               2
Table 3
Spiking network parameters and parameter values.

  Parameter                                        Value
  Number of neurons, 1st layer, N1                 500
  Number of neurons, 2nd layer, N2                 400
  Maximal stimulus-evoked input rate, a (sp/s)     750
  Baseline input rate, b (sp/s)                    4250
  Spatial periods, λj                              [1] or [1, 1/2, 1/3, 1/4]
  Width parameter, w                               0.3
  Width parameter (readout layer), wro             (π/N2)² / (2 log 2)
  Input EPSP (1st layer), JE (mV)                  0.2
  Maximal EPSP (2nd layer), JEE (mV)               2
  Maximal IPSP (2nd layer), JI (mV)                2
  Synaptic delay, d (ms)                           1.5

Evaluating decoding performance

We assumed an instantaneous decoder based on the index of the firing neuron in the readout layer. Let $\Phi(t_k)$ denote a function that provides the index of the neuron firing at time $t_k$. Given the equidistant distribution of preferred locations of the readout neurons, the stimulus is instantaneously decoded by mapping the neuron identity onto the interval [0,1],

(20) $\hat{s}(t_k) = \frac{\Phi(t_k)}{N_2},$

where $N_2$ is the number of neurons in the readout layer. For both stimulus types, the decoding performance was evaluated using (1) the distribution of RMSEs (Figure 7d) or estimated stimulus conditions (Figure 7e) in a sliding window, or (2) the distribution of accumulated RMSEs (Figure 8).
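
A sketch of this instantaneous decoding and the sliding-window RMSE evaluation (the window length and all names are illustrative; `s_of_t` is assumed to return the true stimulus at the given times):

```python
import numpy as np

def decode_readout(spike_times, neuron_ids, N2=400):
    """Instantaneous decoding from readout spikes (Equation 20)."""
    return np.asarray(spike_times), np.asarray(neuron_ids) / N2

def sliding_rmse(t_spk, s_hat, s_of_t, t_eval, window=0.1):
    """RMSE of the instantaneous estimates within a sliding window of the
    given length (s), respecting the circular stimulus domain."""
    rmse = np.full(len(t_eval), np.nan)
    for k, t in enumerate(t_eval):
        in_win = (t_spk > t - window) & (t_spk <= t)
        if in_win.any():
            err = (s_hat[in_win] - s_of_t(t_spk[in_win]) + 0.5) % 1.0 - 0.5
            rmse[k] = np.sqrt(np.mean(err ** 2))
    return rmse
```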

Simulation tools

All simulations were performed using code written in MATLAB and Python (using the Brian2 simulator; Stimberg et al., 2019). The simulation code is available at https://github.com/movitzle/Short_Decoding_Time (copy archived at Lenninger, 2023).

Approximating minimal decoding time in two-module systems

To gain an understanding of the interaction between two modules with different spatial periods, consider the likelihood function as a product of the likelihood functions of the two modules individually,

(21) $p(\mathbf{r}|s) = Q_1(s)\,Q_2(s).$

Using the Laplace approximation, each of these functions can be approximated as a periodic sum of Gaussians (Wei et al., 2015). Assuming that each module becomes efficient before the joint likelihood does, we focus only on the largest, periodically occurring, peaks,

(22) $Q_i(s) \approx \hat{Q}_i\left(\mathbf{r}^{(i)}\middle|s\right) = A_i \sum_{n_i=-K_i}^{K_i} \exp\left(-\frac{\Sigma_i}{2}\left(s - \left(s_{\mathrm{ML}}^{(i)} + n_i\lambda_i\right)\right)^2\right),$

where $\mathbf{r}^{(i)}$ denotes the activity pattern of module i, $s_{\mathrm{ML}}^{(i)}$ the peak closest to the true stimulus condition, $s_0$, and $K_i$ is large enough for $\hat{Q}_i(\mathbf{r}^{(i)}|s)$ to cover the entire stimulus range [0,1). The approximation can be seen as ‘rolling out’ the stimulus domain from [0,1) to $\mathbb{R}$. Therefore, to neglect the impact of the stimulus boundary, we assume that the stimulus lies in the middle of the stimulus domain and set $K_1 = \frac{1}{2\lambda_1}$ and $K_2 = \frac{1}{2\lambda_2}$. Furthermore, assuming that each module is efficient, the widths of the Gaussians can be approximated as

(23) $\Sigma_i \approx -\frac{d^2}{ds^2}\log Q_i(s) \approx J_i(s),$

where $J_i(s) \approx J_i$ is the Fisher information of module i. The joint likelihood function can thus be approximated as

(24) $p(\mathbf{r}|s) \approx \hat{Q}_1\left(\mathbf{r}^{(1)}\middle|s\right)\hat{Q}_2\left(\mathbf{r}^{(2)}\middle|s\right) = A_1A_2\sum_{n_1=-K_1}^{K_1}\exp\left(-\frac{J_1}{2}\left(s - \left(s_{\mathrm{ML}}^{(1)} + n_1\lambda_1\right)\right)^2\right)\sum_{n_2=-K_2}^{K_2}\exp\left(-\frac{J_2}{2}\left(s - \left(s_{\mathrm{ML}}^{(2)} + n_2\lambda_2\right)\right)^2\right).$

As the likelihood functions depend on the particular realization of the spike counts, the distance between the modes of the respective likelihoods closest to the true stimulus condition $s_0$, $\delta_{0,0} = s_{\mathrm{ML}}^{(1)} - s_{\mathrm{ML}}^{(2)}$, is a random variable. Note that in the Results section, $\delta_{0,0}$ is simply referred to as $\delta$ for clarity.

The joint likelihood distribution p(r|s) has its maximal peak close to the true stimulus condition s0 if δ0,0 is the smallest distance between any pairs of peaks of Q1 and Q2 (see Equation A3.7 for details). Assuming that both modules provide efficient estimates, the distance δ0,0 can be approximated as a normally distributed random variable

(25) $\delta_{0,0} = s_{\mathrm{ML}}^{(1)} - s_{\mathrm{ML}}^{(2)} = \left(s_{\mathrm{ML}}^{(1)} - s_0\right) - \left(s_{\mathrm{ML}}^{(2)} - s_0\right) \sim \mathcal{N}\left(0,\ \frac{1}{T}\left(J_{1,\mathrm{norm}}^{-1} + J_{2,\mathrm{norm}}^{-1}\right)\right),$

where $J_{k,\mathrm{norm}}$ refers to the time-normalized Fisher information of module k. Thus, as the decoding time T increases, the variance of $\delta_{0,0}$ decreases. Hence, the decoding time T must be large enough that it is rare for $\delta_{0,0}$ not to be the smallest distance between any pair of peaks. Similarly, the distance between any other pair of peaks of $Q_1$ and $Q_2$ within the stimulus range becomes

(26) $\delta_{n_1,n_2} = \left(s_{\mathrm{ML}}^{(1)} + n_1\lambda_1\right) - \left(s_{\mathrm{ML}}^{(2)} + n_2\lambda_2\right) = \delta_{0,0} + \left(n_1\lambda_1 - n_2\lambda_2\right),$

where $n_1 \in \{-K_1,\dots,K_1\}$ and $n_2 \in \{-K_2,\dots,K_2\}$ index the different Gaussians as before. Thus, the threshold for a catastrophic error is reached when another pair of modes lies at the same distance, that is,

(27) $\left|\delta_{0,0}\right| = \left|\delta_{n_1,n_2}\right| = \left|\delta_{0,0} + \left(n_1\lambda_1 - n_2\lambda_2\right)\right|,$

for some n1 and n2 belonging to the index sets as above. Thus, to avoid catastrophic errors, it is necessary that

(28) $\left|\delta_{0,0}\right| \leq \left|\delta_{0,0} + \left(n_1\lambda_1 - n_2\lambda_2\right)\right|,$

for all n1{-K1,,K1} and n2{-K2,,K2}. By solving Equation 28, and taking into account that δ0,0 can be either positive or negative, we get the maximally allowed displacement

(29) $\delta^* = \min_{\substack{(n_1,n_2)\neq(0,0),\\ n_1\in\{-K_1,\dots,K_1\},\ n_2\in\{-K_2,\dots,K_2\}}} \frac{1}{2}\left|n_1\lambda_1 - n_2\lambda_2\right|.$

Note that for $\lambda_1 = 1$, all $n_1$ represent the same mode (shifted by full rotations). Thus, we limit the search such that $\lambda_1|n_1| < 1$ and $\lambda_2|n_2| < 1$. Assuming that the period of the second module is a scaling of the first, $\lambda_2 = c\lambda_1$, the above equation becomes

(30) $\delta^* = \min_{(n_1,n_2)\neq(0,0)} \frac{1}{2}\left|\lambda_1\left(n_1 - n_2 c\right)\right|.$

Note that stimulus ambiguity can never be resolved if $\delta_{n_1,n_2} = \delta_{0,0}$ for some pair $(n_1,n_2) \neq (0,0)$, which is analogous to the condition in Mathis et al., 2012. To limit the probability of catastrophic estimation errors from the joint distribution to some small error probability $p_{\mathrm{error}}$, the following should hold:

(31) $\Pr\left(\left|\delta_{0,0}\right| > \delta^*\right) < p_{\mathrm{error}}.$

Because $\delta_{0,0} \sim \mathcal{N}\left(0,\ J_1^{-1} + J_2^{-1}\right)$, we have

(32) $\Pr\left(\left|\delta_{0,0}\right| > \delta^*\right) = 1 - \mathrm{erf}\left(\frac{\delta^*}{\sqrt{2}\,\sigma}\right) < p_{\mathrm{error}},$

where $\mathrm{erf}(\cdot)$ is the error function and $\sigma = \sqrt{J_1^{-1} + J_2^{-1}}$. By rearranging the terms and using Equation A2.8, we can obtain a lower bound on the required decoding time,

(33) $T_{\mathrm{th}} > 2\left(\frac{\mathrm{erf}^{-1}\left(1 - p_{\mathrm{error}}\right)}{\delta^*}\right)^2\left(\frac{1}{J_{1,\mathrm{norm}}} + \frac{1}{J_{2,\mathrm{norm}}}\right),$

where $J_{i,\mathrm{norm}}$ is the time-normalized Fisher information of module i. Note that $\delta^*$ can easily be found using an exhaustive search according to Equation 29 or Equation 30, as sketched below.
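
A sketch of the exhaustive search and the resulting bound (the error-probability default is illustrative; the restriction $\lambda_i|n_i| < 1$ follows the note above):

```python
import numpy as np
from scipy.special import erfinv

def delta_star(lam1, lam2):
    """Exhaustive search for the maximally allowed displacement (Equation 29),
    restricted to lambda_i * |n_i| < 1 as noted above."""
    K1, K2 = int(np.ceil(1.0 / lam1)), int(np.ceil(1.0 / lam2))
    best = np.inf
    for n1 in range(-K1, K1 + 1):
        for n2 in range(-K2, K2 + 1):
            if (n1, n2) == (0, 0):
                continue
            if abs(n1) * lam1 >= 1.0 or abs(n2) * lam2 >= 1.0:
                continue
            best = min(best, 0.5 * abs(n1 * lam1 - n2 * lam2))
    return best

def decoding_time_bound(lam1, lam2, J1_norm, J2_norm, p_error=1e-3):
    """Lower bound on the decoding time needed to keep the probability of a
    catastrophic error below p_error (Equation 33)."""
    d_star = delta_star(lam1, lam2)
    return 2.0 * (erfinv(1.0 - p_error) / d_star) ** 2 * (1.0 / J1_norm + 1.0 / J2_norm)
```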

Approximating minimal decoding time

To approximate the order by which the population reaction time scales with the distribution of spatial periods and the stimulus dimensionality, we extended the approximation method introduced by Xie, 2002. The key idea is to use a Taylor series to determine which conditions must hold for the distribution of errors to be normally distributed with covariance equal to the inverse of the Fisher information matrix. Note that this approximation assumes the existence of a unique solution to the maximum likelihood equations; thus, it does not apply to ambiguous neural codes (e.g. $c = 1/2, 1/3, 1/4,\dots$, etc.).

First, let us recall the Taylor expansion with Lagrange remainder for a general function g,

(34) $g(x + \delta) = g(x) + g'(x)\,\delta + \frac{1}{2}g''(x^*)\,\delta^2,$

where $x^*$ lies somewhere in the interval $[x, x+\delta)$. Thus, in the multivariate case, the derivative in the k:th direction of the log-likelihood function, evaluated at the stimulus condition $\hat{\mathbf{s}}_{\mathrm{ML}} = \hat{\mathbf{s}}$, can be rewritten using a Taylor expansion with Lagrange remainder as

(35) $\left.\frac{\partial}{\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\hat{\mathbf{s}}} = \left.\frac{\partial}{\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}} + \sum_{l=1}^{D}\left.\frac{\partial^2}{\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}}\left(\hat{s}_l - s_l\right) + \frac{1}{2}\sum_{l=1}^{D}\sum_{m=1}^{D}\left.\frac{\partial^3}{\partial s_m\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}^*}\left(\hat{s}_l - s_l\right)\left(\hat{s}_m - s_m\right),$

for all $k \in \{1,\dots,D\}$, where $\mathbf{s}$ is the true stimulus condition and $\mathbf{s}^*$ is a stimulus point between $\mathbf{s}$ and $\hat{\mathbf{s}}$.

If the estimated stimulus is close to the true stimulus, then the quadratic-order terms are small. If so, the distribution of $(\hat{\mathbf{s}} - \mathbf{s})$ converges to $\mathcal{N}(0, J^{-1})$ (in distribution), where J is the Fisher information matrix (Lehmann and Casella, 1998). However, if the estimated stimulus is not close to the true stimulus, then the quadratic terms are not negligible. Therefore, when T is sufficiently large, and the variance of the estimation follows the Cramér-Rao bound, the following should hold for all $k \in \{1,\dots,D\}$:

(36) $\left|\sum_{l=1}^{D}\left.\frac{\partial^2}{\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}}\left(\hat{s}_l - s_l\right)\right| \gg \left|\frac{1}{2}\sum_{l=1}^{D}\sum_{m=1}^{D}\left.\frac{\partial^3}{\partial s_m\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}^*}\left(\hat{s}_l - s_l\right)\left(\hat{s}_m - s_m\right)\right|.$

In this regime, we make the following term-wise approximations

(37) $\left.\frac{\partial^2}{\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}} \approx \mathbb{E}\left[\left.\frac{\partial^2}{\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}}\right] = -J_{k,l}(\mathbf{s}) = -J_{k,l},$

and

(38) $\left.\frac{\partial^3}{\partial s_m\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}^*} \approx \mathbb{E}\left[\left.\frac{\partial^3}{\partial s_m\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}^*}\right] = M_{k,l,m}(\mathbf{s}^*),$

which gives

(39) $\left|\sum_{l=1}^{D} J_{k,l}\left(\hat{s}_l - s_l\right)\right| \gg \left|\frac{1}{2}\sum_{l=1}^{D}\sum_{m=1}^{D} M_{k,l,m}(\mathbf{s}^*)\left(\hat{s}_l - s_l\right)\left(\hat{s}_m - s_m\right)\right|.$

Because $M_{k,l,m} \approx 0$ unless $k = l = m$ (see Equation A4.3, Equation A4.4, Equation A4.5), Equation 39 simplifies to

(40) $\left|\sum_{l=1}^{D} J_{k,l}\left(\hat{s}_l - s_l\right)\right| \gg \left|\frac{1}{2} M_{k,k,k}(\mathbf{s}^*)\left(\hat{s}_k - s_k\right)^2\right|.$

Furthermore, because J(s) is a diagonal matrix (see Equation A2.18), we have

(41) $\left|J_{k,k}\left(\hat{s}_k - s_k\right)\right| \gg \left|\frac{1}{2} M_{k,k,k}(\mathbf{s}^*)\left(\hat{s}_k - s_k\right)^2\right|.$

Next, by squaring both sides, we obtain

(42) $J_{k,k}^2\left(\hat{s}_k - s_k\right)^2 \gg \frac{1}{4} M_{k,k,k}^2(\mathbf{s}^*)\left(\left(\hat{s}_k - s_k\right)^2\right)^2.$

Because we assumed that N and T are sufficiently large to meet the Cramér-Rao bound, we have that

(43) $\left(\hat{s}_k - s_k\right)\left(\hat{s}_l - s_l\right) \approx \left\{J^{-1}\right\}_{k,l}.$

Inserting Equation 43 into Equation 42 gives

(44) $J_{k,k}^2\left\{J^{-1}\right\}_{k,k} \gg \frac{1}{4} M_{k,k,k}^2(\mathbf{s}^*)\left(\left\{J^{-1}\right\}_{k,k}\right)^2,$

or, equivalently,

(45) $1 \gg \frac{1}{4} M_{k,k,k}^2(\mathbf{s}^*)\left(\left\{J^{-1}\right\}_{k,k}\right)^3 = \frac{1}{4} M_{k,k,k}^2(\mathbf{s}^*)\left\{J\right\}_{k,k}^{-3}.$

By approximating the term $M_{k,k,k}(\mathbf{s}^*)$ with an upper bound $M^*$ (see Equation A4.10) and using the expression for Fisher information (Equation A2.8), the scaling of the population reaction time can be obtained as

(46) $T_{\mathrm{th}} \gtrsim A(w)\,\frac{1}{aN}\,B_0\!\left(\frac{1}{w}\right)^{-(D-1)}\exp\!\left(\frac{D}{w}\right)\frac{\overline{\lambda^{-3}}^{\,2}}{\overline{\lambda^{-2}}^{\,3}},$

where A(w) is a function of w. Lastly, by casting Equation 46 in terms of the scale factor c and fitting using (for example) least-squares regression, we obtain

(47) $T_{\mathrm{th}} \approx K_1\,A(w)\,\frac{1}{aM}\,\frac{\exp(D/w)}{B_0(1/w)^{D-1}}\,\frac{\left(\sum_{j=0}^{L-1} c^{-3j}\right)^2}{\left(\sum_{j=0}^{L-1} c^{-2j}\right)^3} + K_2,$

where M is the number of neurons per module, and K1 and K2 are constants. Note that in the simulations, w is fixed and A(w) can therefore be incorporated into K1.
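
A sketch of how the right-hand side of Equation 47 can be evaluated numerically ($B_0$ corresponds to scipy.special.iv; K1, K2, and the other defaults are placeholders for the fitted constants):

```python
import numpy as np
from scipy.special import iv   # iv(0, x) is the document's B_0(x)

def predicted_T_th(c, L, D, w=0.3, a=20.0, M=100, K1=1.0, K2=0.0):
    """Predicted scaling of the minimal decoding time (Equation 47) with the
    scale factor c, number of modules L, and stimulus dimensionality D.
    K1 and K2 are free regression constants; A(w) is absorbed into K1."""
    j = np.arange(L)
    ratio = np.sum(c ** (-3.0 * j)) ** 2 / np.sum(c ** (-2.0 * j)) ** 3
    return K1 * np.exp(D / w) / (a * M * iv(0, 1.0 / w) ** (D - 1)) * ratio + K2
```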

Appendix 1

Tuning curves and spike count model

In this paper, we study the representation of a multidimensional stimulus $\mathbf{s} = (s_1,\dots,s_D)$. For simplicity, it is assumed that the range of the stimulus in each dimension is equal, such that $s_j \in [0,R)$ for all $j \in \{1,\dots,D\}$ for some stimulus range R (in the main text, we consider R = 1 for simplicity). Note that this assumption does not qualitatively change the results. Furthermore, we assume circular (von Mises) tuning curves,

(A1.1) $f_i(\mathbf{s}) = a_i\prod_{j=1}^{D}\exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_i R}\left(s_j - s_{i,j}\right)\right) - 1\right)\right) + b = a_i\prod_{j=1}^{D} q_{i,j}(s_j) + b,$

where $a_i$ is the peak amplitude of the stimulus-related tuning curve of neuron i, w is a width-scaling parameter, $\lambda_i$ defines the spatial period of the tuning, $s_{i,j}$ determines the location of the firing field(s) in the j:th dimension, and b determines the amount of background activity. The amplitude parameters $a_i$ were tuned such that all tuning curves had the same firing rate when averaged across all stimulus conditions (see Supplementary Equation A1.5).

It is possible to reparametrize the stimulus into phase variables, $\phi_j = s_j/R$. In this article, calculations and numerical simulations are based on the phase variables $\boldsymbol{\phi}$. This only changes the MSE and the Fisher information by constant scalings ($1/R^2$ and $R^2$, respectively). As we are interested in comparing minimal decoding times, not the absolute values of the MSE, we can drop the "unnormalized" stimulus $\mathbf{s}$. The tuning curves in Supplementary Equation A1.1 can thus be rewritten using the phase variables $\boldsymbol{\phi}$ as

(A1.2) $f_i(\boldsymbol{\phi}) = a_i\prod_{j=1}^{D}\exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_i}\left(\phi_j - \phi_{i,j}\right)\right) - 1\right)\right) + b = a_i\prod_{j=1}^{D} q_{i,j}(\phi_j) + b.$

Given a stimulus condition $\mathbf{s}$ (or $\boldsymbol{\phi}$) and decoding time T, the spike count of each neuron was independently sampled from a Poisson distribution with rate $Tf_i(\mathbf{s})$. Thus, the probability of observing a particular spike count pattern $\mathbf{r} = (r_1,\dots,r_N)$ given $\mathbf{s}$ is

(A1.3) $p(\mathbf{r}|\mathbf{s}) = \prod_{i=1}^{N} p(r_i|\mathbf{s}) = \prod_{i=1}^{N}\frac{\left(Tf_i(\mathbf{s})\right)^{r_i}\exp\left(-Tf_i(\mathbf{s})\right)}{r_i!}.$

Adjusting amplitudes

In order to make a fair comparison of decoding times across populations, we constrain each neuron to have the same average firing rate across the stimulus domain, $\bar{f}$. The average firing rate over the stimulus domain is

(A1.4) $\bar{f}_i = b + a_i\,\frac{1}{R^D}\int_0^R\!\!\cdots\!\!\int_0^R\prod_{j=1}^{D} q_{i,j}(\phi_j)\,d\phi_1\cdots d\phi_D.$

Thus, given a desired stimulus-evoked firing rate, $\overline{f_{\mathrm{stim}}}$, over a normalized stimulus range (R = 1), the amplitudes will be set to

(A1.5) $a_i = \frac{\overline{f_{\mathrm{stim}}}}{\int_0^1\!\cdots\!\int_0^1\prod_{j=1}^{D} q_{i,j}(\phi_j)\,d\phi_1\cdots d\phi_D}.$

Note that the integrals in Equation A1.5 are analytically solvable whenever the relative spatial frequency $\xi_i = 1/\lambda_i$ is a positive integer, in which case we have

(A1.6) $\int_0^1\!\!\cdots\!\!\int_0^1\prod_{j=1}^{D}\exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_i}\left(\phi_j - \phi_{i,j}\right)\right) - 1\right)\right)d\phi_1\cdots d\phi_D = B_0\!\left(\frac{1}{w}\right)^{D}\exp\!\left(-\frac{D}{w}\right)$

regardless of $\phi_{i,j}$; here $B_0(\cdot)$ is the modified Bessel function of the first kind. In simulations, $\overline{f_{\mathrm{stim},i}} = \overline{f_{\mathrm{stim}}}$ was set such that tuning curves with integer spatial frequencies ($1/\lambda$) have amplitudes of 20 sp/s, that is,

(A1.7) $\overline{f_{\mathrm{stim}}(D)} = 20\,B_0\!\left(\frac{1}{w}\right)^{D}\exp\!\left(-\frac{D}{w}\right).$
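
The amplitude normalization (Equation A1.5) can also be evaluated numerically; a sketch using a midpoint quadrature, where for integer frequencies the per-dimension mean of q equals $B_0(1/w)\exp(-1/w)$ (Equation A1.6):

```python
import numpy as np
from scipy.special import iv   # iv(0, x) is the document's B_0(x)

def amplitude(lam, w, D, f_stim_bar, phi_pref=0.0, n_quad=2000):
    """Amplitude a_i enforcing a fixed stimulus-averaged rate (Equation A1.5),
    assuming for simplicity the same preferred location in every dimension."""
    phi = (np.arange(n_quad) + 0.5) / n_quad    # midpoint quadrature nodes on [0,1)
    q = np.exp((np.cos(2.0 * np.pi / lam * (phi - phi_pref)) - 1.0) / w)
    return f_stim_bar / np.mean(q) ** D         # the D-dimensional integral factorizes

# Consistency check for an integer spatial frequency, e.g. lam = 0.5, w = 0.3:
# np.mean(q) ** D  is approximately  (iv(0, 1/w) * np.exp(-1/w)) ** D
```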

Appendix 2

Fisher information and the Cramér-Rao bound

Assuming a one-dimensional variable, the Cramér-Rao bound gives a lower bound on the MSE of any estimator G

(A2.1) $\mathbb{E}\left[\left(G(\mathbf{r}) - s\right)^2\right] \geq \frac{\left[1 + b_G'(s)\right]^2}{J(s)} + b_G(s)^2,$

where $b_G(s) = \mathbb{E}\left[G(\mathbf{r}) - s\right]$ is the bias of the estimator G and $J(s)$ is the Fisher information, defined as

(A2.2) $J(s) = \mathbb{E}\left[\left(\frac{\partial}{\partial s}\log p(\mathbf{r}|s)\right)^2\right] = -\mathbb{E}\left[\frac{\partial^2}{\partial s^2}\log p(\mathbf{r}|s)\right],$

where the last equality holds if p(r|s) is twice differentiable and the neural responses are conditionally independent (Lehmann and Casella, 1998). Assuming an unbiased estimator, the bound can be simplified to

(A2.3) $\mathbb{E}\left[\left(G(\mathbf{r}) - s\right)^2\right] = \mathrm{Var}\left(G(\mathbf{r})\right) \geq \frac{1}{J(s)}.$

For multi-parameter estimation, let J(s) denote the Fisher information matrix, with elements defined analogously to Supplementary Equation A2.2

(A2.4) $J_{k,l}(\mathbf{s}) = -\mathbb{E}\left[\frac{\partial^2}{\partial s_k\partial s_l}\log p(\mathbf{r}|\mathbf{s})\right],$

then (for unbiased estimators) the Cramér-Rao bound is instead stated as the following matrix inequality (Lehmann and Casella, 1998)

(A2.5) $\mathrm{Cov}(G) = \Sigma \succeq J^{-1}(\mathbf{s}),$

in the sense that the difference $\Sigma - J^{-1}(\mathbf{s})$ is a positive semi-definite matrix. This implies the following lower bound on the MSE of the k:th stimulus dimension,

(A2.6) $\mathrm{Var}\left(G(s_k)\right) = \Sigma_{k,k} \geq \left\{J^{-1}(\mathbf{s})\right\}_{k,k},$

where $G(s_k) = \hat{s}_k$, that is, the estimate of $s_k$ using estimator G. Note that if $J(\mathbf{s})$ is a diagonal matrix, that is, $\{J(\mathbf{s})\}_{j,k} = 0$ for all $j \neq k$, then the following also holds:

(A2.7) $\left\{J^{-1}(\mathbf{s})\right\}_{k,k} = \left\{J_{k,k}(\mathbf{s})\right\}^{-1}.$

For the tuning curves defined in Supplementary Equation A1.1, the diagonal elements of the Fisher information matrix can be solved analytically assuming $a_i \approx a$ within each module (see Supplementary Equation A2.19),

(A2.8) $J_{k,k}(\mathbf{s}) \approx (2\pi)^2\,\frac{TNa}{R^2 w}\,B_0\!\left(\frac{1}{w}\right)^{D-1}\exp\!\left(-\frac{D}{w}\right)B_1\!\left(\frac{1}{w}\right)\overline{\lambda^{-2}},$

where the bar indicates the sample average across modules (i.e. $\overline{\lambda^{-2}}$ is the average of $\lambda_j^{-2}$) and R is the stimulus range (note that in the main text, we assume R = 1). The off-diagonal elements, on the other hand, can be shown to be 0 (see below). Thus, $\{J^{-1}(\mathbf{s})\}_{k,k} = J_{k,k}(\mathbf{s})^{-1}$ (Supplementary Equation A2.7), and the MSE for each stimulus dimension is lower bounded by the inverse of Supplementary Equation A2.8.

Approximating Fisher information

To analytically approximate the Fisher information of a given neural population, we neglect the impact of the ongoing activity b. The tuning curves in Supplementary Equation A1.1 then factorize as $f_i(\mathbf{s}) = a\,q_{1,i}(s_1)\cdots q_{D,i}(s_D)$, and the log-likelihood of N neurons with conditionally independent spike counts becomes

(A2.9) $\log p(\mathbf{r}|\mathbf{s}) = \sum_{i=1}^{N} r_i\log\left(Tf_i(\mathbf{s})\right) - Tf_i(\mathbf{s}) - \log r_i!\,.$

By taking the second derivatives with respect to the stimulus dimensions, we get for $k = l$:

(A2.10) $\frac{\partial^2}{\partial s_k^2}\log p(\mathbf{r}|\mathbf{s}) = \cdots = \sum_{i}^{N} r_i\left(\frac{q_{k,i}''}{q_{k,i}} - \left(\frac{q_{k,i}'}{q_{k,i}}\right)^2\right) - Tf_i\,\frac{q_{k,i}''}{q_{k,i}}$

and for $k \neq l$:

(A2.11) $\frac{\partial^2}{\partial s_k\partial s_l}\log p(\mathbf{r}|\mathbf{s}) = \cdots = -\sum_{i}^{N} Tf_i\,\frac{q_{k,i}'\,q_{l,i}'}{q_{k,i}\,q_{l,i}}.$

Consequently, the elements of the Fisher information matrix are given by

(A2.12) $J_{k,k}(\mathbf{s}) = -\mathbb{E}\left[\frac{\partial^2}{\partial s_k^2}\log p(\mathbf{r}|\mathbf{s})\right] = \sum_{i=1}^{N} Tf_i(\mathbf{s})\left(\frac{q_{k,i}'(s_k)}{q_{k,i}(s_k)}\right)^2$

and for $k \neq l$,

(A2.13) $J_{k,l}(\mathbf{s}) = -\mathbb{E}\left[\frac{\partial^2}{\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right] = \sum_{i=1}^{N} Tf_i(\mathbf{s})\,\frac{q_{k,i}'(s_k)\,q_{l,i}'(s_l)}{q_{k,i}(s_k)\,q_{l,i}(s_l)}.$

To simplify calculations, it is possible to reparametrize the stimulus as in Supplementary Equation A1.2 using the formula for Fisher information under reparametrization (Lehmann and Casella, 1998),

(A2.14) $J_{k,l}(\boldsymbol{\phi}) = \sum_m\sum_n \frac{\partial s_m}{\partial\phi_k}\frac{\partial s_n}{\partial\phi_l} J_{m,n}(\mathbf{s}) = R^2 J_{k,l}(\mathbf{s}),$

to obtain

(A2.15) $J_{k,l}(\mathbf{s}) = \frac{1}{R^2} J_{k,l}(\boldsymbol{\phi}).$

We can approximate the elements of the Fisher information matrix J(ϕ) in the limit of large N by replacing the sums with integrals, for example,

(A2.16) $J_{k,k}(\boldsymbol{\phi}) = \sum_{i=1}^{N} Tf_i(\boldsymbol{\phi})\left(\frac{q_{k,i}'(\phi_k)}{q_{k,i}(\phi_k)}\right)^2 \approx T\sum_{j=1}^{L}\frac{Ma}{\lambda_j^{D}}\int_{\phi_1-\frac{\lambda_j}{2}}^{\phi_1+\frac{\lambda_j}{2}}\!\!\cdots\!\!\int_{\phi_D-\frac{\lambda_j}{2}}^{\phi_D+\frac{\lambda_j}{2}}\left[\prod_{p=1}^{D}\exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_j}\left(\phi_p' - \phi_p\right)\right) - 1\right)\right)\right]\frac{(2\pi)^2\sin^2\left(\frac{2\pi}{\lambda_j}\left(\phi_k' - \phi_k\right)\right)}{\lambda_j^2 w^2}\,d\boldsymbol{\phi}',$

where L is the number of distinct modules, M is the number of neurons in each module, $\boldsymbol{\phi}'$ denotes the preferred locations being integrated over with $d\boldsymbol{\phi}' = d\phi_1'\cdots d\phi_D'$, and the D-dimensional integral is taken over the interval $\left[\phi_p - \frac{1}{2}\lambda_j,\ \phi_p + \frac{1}{2}\lambda_j\right)$ along each dimension. Making the variable substitution $\theta_p = \frac{2\pi}{\lambda_j}\left(\phi_p' - \phi_p\right)$ for $p \in \{1,\dots,D\}$, we have

(A2.17) $J_{k,k}(\boldsymbol{\phi}) \approx MTa\sum_{j=1}^{L}\frac{1}{\lambda_j^{D}}\int_{-\pi}^{\pi}\!\!\cdots\!\!\int_{-\pi}^{\pi}\left[\prod_{p=1}^{D}\exp\left(\frac{1}{w}\left(\cos\theta_p - 1\right)\right)\right]\frac{(2\pi)^2\sin^2\theta_k}{\lambda_j^2 w^2}\left(\frac{\lambda_j}{2\pi}\right)^{D}d\boldsymbol{\theta} = \cdots = \frac{(2\pi)^2 NTa}{w}\,B_0\!\left(\frac{1}{w}\right)^{D-1}B_1\!\left(\frac{1}{w}\right)\exp\!\left(-\frac{D}{w}\right)\overline{\lambda^{-2}},$

where the sample average $\overline{\lambda^{-2}}$ is taken over the population’s distribution of spatial frequencies, and $B_\alpha(\cdot)$ is the modified Bessel function of the first kind. Similar calculations for the case $k \neq l$ yield

(A2.18) $J_{k,l}(\boldsymbol{\phi}) = \cdots \approx MT\sum_{j=1}^{L}\frac{a}{w^2}\exp\!\left(-\frac{D}{w}\right)B_0\!\left(\frac{1}{w}\right)^{D-2}\int_{-\pi}^{\pi}\frac{\sin\theta_k}{\lambda_j}\exp\!\left(\frac{1}{w}\cos\theta_k\right)d\theta_k\int_{-\pi}^{\pi}\frac{\sin\theta_l}{\lambda_j}\exp\!\left(\frac{1}{w}\cos\theta_l\right)d\theta_l = 0,$

as each of the remaining integrands is odd. Thus, the stimulus parameters are asymptotically orthogonal for all of the populations considered in this paper; that is, the Fisher information matrix (and hence the asymptotic covariance matrix) is diagonal. The per-neuron average contribution to each diagonal element of the Fisher information matrix, as reported in the main text, is therefore

(A2.19) $\bar{J}_{k,k}(\mathbf{s}) \approx T\,(2\pi)^2\frac{a}{R^2 w}\,B_0\!\left(\frac{1}{w}\right)^{D-1}B_1\!\left(\frac{1}{w}\right)\exp\!\left(-\frac{D}{w}\right)\overline{\lambda^{-2}}.$
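
Supplementary Equation A2.19 is straightforward to evaluate numerically; a sketch (scipy.special.iv is the modified Bessel function, the document's $B_\alpha$; the parameter defaults are illustrative):

```python
import numpy as np
from scipy.special import iv   # iv(alpha, x) is the document's B_alpha(x)

def fisher_per_neuron(lambdas, w=0.3, a=20.0, T=1.0, R=1.0, D=1):
    """Per-neuron average diagonal Fisher information (Equation A2.19)."""
    lam_m2 = np.mean(np.asarray(lambdas, dtype=float) ** -2.0)   # mean of lambda^-2
    return (T * (2.0 * np.pi) ** 2 * a / (R ** 2 * w)
            * iv(0, 1.0 / w) ** (D - 1) * iv(1, 1.0 / w)
            * np.exp(-D / w) * lam_m2)
```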

Appendix 3

Maximum of the joint likelihood function (2 module case)

Assuming that the responses of the two modules are independent, the joint likelihood function $p(\mathbf{r}|s)$ can be decomposed into a product of the likelihood functions of the two modules. Using the approximation of $Q_1(\mathbf{r}|s)$ and $Q_2(\mathbf{r}|s)$ as Gaussian sums (see Materials and methods), we have the following:

(A3.1) $p(\mathbf{r}|s) = Q_1(s)Q_2(s) \approx \hat{Q}_1(s)\hat{Q}_2(s) = A_1\sum_{n_1=-K_1}^{K_1}\exp\left(-\frac{J_1}{2}\left(s - \left(s_{\mathrm{ML}}^{(1)} + n_1\lambda_1 R\right)\right)^2\right)\cdot A_2\sum_{n_2=-K_2}^{K_2}\exp\left(-\frac{J_2}{2}\left(s - \left(s_{\mathrm{ML}}^{(2)} + n_2\lambda_2 R\right)\right)^2\right).$

Thus, the contribution of the p:th and q:th modes of $\hat{Q}_1$ and $\hat{Q}_2$ to the joint likelihood function is

(A3.2) $\hat{Q}_1^{p}\hat{Q}_2^{q} = A_1A_2\exp\left(-\frac{\Sigma_1}{2}\left(s - \left(s_{\mathrm{ML}}^{(1)} + p\lambda_1 R\right)\right)^2\right)\exp\left(-\frac{\Sigma_2}{2}\left(s - \left(s_{\mathrm{ML}}^{(2)} + q\lambda_2 R\right)\right)^2\right) = A_1A_2\exp\left(-\frac{\Sigma_1}{2}\left(s - s_p\right)^2 - \frac{\Sigma_2}{2}\left(s - s_q\right)^2\right),$

where, in the last step, we renamed $s_{\mathrm{ML}}^{(1)} + p\lambda_1 R$ and $s_{\mathrm{ML}}^{(2)} + q\lambda_2 R$ to $s_p$ and $s_q$, respectively. Unless the width w of the tuning curves or the range R is very large, all the modes of $\hat{Q}_1$ and $\hat{Q}_2$, respectively, are well separated (see the end of this section). Thus, it is a reasonable approximation that the maximum of $p(\mathbf{r}|s) = Q_1(s)Q_2(s)$ is given by the maximum of $\hat{Q}_1^p(s)\hat{Q}_2^q(s)$ across all combinations of p and q. Each combination $\hat{Q}_1^p(s)\hat{Q}_2^q(s)$ reaches its maximum at some stimulus $s_{p,q}^*$:

(A3.3) $s_{p,q}^* = \arg\max_s \hat{Q}_1^p(s)\hat{Q}_2^q(s) = \arg\min_s \frac{\Sigma_1}{2}\left(s - s_p\right)^2 + \frac{\Sigma_2}{2}\left(s - s_q\right)^2.$

Taking the derivative with respect to s of the rightmost terms and setting it to zero gives

(A3.4) $s_{p,q}^* = \frac{\Sigma_1 s_p + \Sigma_2 s_q}{\Sigma_1 + \Sigma_2}.$

Thus, using $\delta_{p,q} = \left(s_p - s_q\right)$, the maximal value of each pair $\hat{Q}_1^p(s)\hat{Q}_2^q(s)$ is

(A3.5) $\hat{Q}_1^p\left(s_{p,q}^*\right)\hat{Q}_2^q\left(s_{p,q}^*\right) = A_1A_2\exp\left(-\frac{J_1}{2}\left(s_{p,q}^* - s_p\right)^2 - \frac{J_2}{2}\left(s_{p,q}^* - s_q\right)^2\right) = \cdots = A_1A_2\exp\left(-\frac{J_1}{2}\left(\frac{J_2\,\delta_{p,q}}{J_1 + J_2}\right)^2 - \frac{J_2}{2}\left(\frac{J_1\,\delta_{p,q}}{J_1 + J_2}\right)^2\right) = \cdots = A_1A_2\exp\left(-\frac{1}{2}\frac{J_1 J_2}{J_1 + J_2}\,\delta_{p,q}^2\right).$

Thus, the maximum likelihood estimate will approximately be $s_{p,q}^*$ for the pair of modes p and q with the smallest $\delta_{p,q}^2$, that is, the smallest distance between the modes. Lastly, all modes of $\hat{Q}_1$ and $\hat{Q}_2$, respectively, need to be sufficiently separated such that no two pairs of p and q reinforce each other. It is well known that the full width at half maximum of a Gaussian function is $\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma_i$, where for our functions $\hat{Q}_1^p$ and $\hat{Q}_2^q$, $\sigma_1 = 1/\sqrt{J_1}$ and $\sigma_2 = 1/\sqrt{J_2}$. Thus, given the expression for Fisher information in Equation A2.8, the FWHM can be expressed as

(A3.6) $\mathrm{FWHM} = \frac{2\lambda_i}{\pi}\sqrt{\frac{\ln(2)\,w\,R^2}{2\,a\,T\,M\,B_0(1/w)^{D-1}\,B_1(1/w)\,\exp(-D/w)}}.$

Thus, for the modes to be separated, it is reasonable to require that the FWHM is no longer than one period length of the module, that is, $\lambda_i$. Hence, we require that

(A3.7) $\frac{2\lambda_i}{\pi}\sqrt{\frac{\ln(2)\,w\,R^2}{2\,a\,T\,M\,B_0(1/w)^{D-1}\,B_1(1/w)\,\exp(-D/w)}} < \lambda_i.$

Rewriting this as a bound on the time T needed for the separation assumption to hold, we get

(A3.8) $T > \left(\frac{2}{\pi}\right)^2\frac{\ln(2)\,w\,R^2}{2\,a\,M\,B_0(1/w)^{D-1}\,B_1(1/w)\,\exp(-D/w)}.$

For the parameters used in our simulations, this is satisfied very quickly, on the order of tens of microseconds. However, note that the assumption that each module provides efficient estimates, which is a prerequisite for these approximations, requires significantly longer time scales. Thus, the assumption that the individual modes of $\hat{Q}_1$ and $\hat{Q}_2$ are well separated is, in our case, not likely to be restrictive.

Appendix 4

Calculating $M_{k,l,m}$

To approximate the minimal decoding time, we need to calculate (see Equations 38 and 39 in the main text)

(A4.1) $M_{k,l,m}(\mathbf{s}^*) = \left.\frac{\partial^3}{\partial s_m\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}^*} \approx \mathbb{E}\left[\left.\frac{\partial^3}{\partial s_m\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}^*}\right].$

For k, l, m all distinct, using Supplementary Equations A1.1, A1.2, and A1.3, we have

(A4.2) $\frac{\partial^3}{\partial s_m\partial s_l\partial s_k}\log p(\mathbf{r}|\mathbf{s}) = -\sum_{i=1}^{N} Tf_i(\mathbf{s})\,\frac{q_{k,i}'(s_k)\,q_{l,i}'(s_l)\,q_{m,i}'(s_m)}{q_{k,i}(s_k)\,q_{l,i}(s_l)\,q_{m,i}(s_m)}.$

Thus, $M_{k,l,m}$ for distinct k, l, m becomes

(A4.3) $M_{k,l,m}(\mathbf{s}^*) = -\sum_{i=1}^{N} Tf_i(\mathbf{s}^*)\,\frac{q_{k,i}'(s_k^*)\,q_{l,i}'(s_l^*)\,q_{m,i}'(s_m^*)}{q_{k,i}(s_k^*)\,q_{l,i}(s_l^*)\,q_{m,i}(s_m^*)} \approx -\sum_{j=1}^{L}\frac{M}{\left(\lambda_j R\right)^{D}}\int_{s_1^*-\frac{\lambda_j R}{2}}^{s_1^*+\frac{\lambda_j R}{2}}\!\!\cdots\!\!\int_{s_D^*-\frac{\lambda_j R}{2}}^{s_D^*+\frac{\lambda_j R}{2}} Ta\left[\prod_{p=1}^{D}\exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_j R}\left(s_p^* - s_p'\right)\right) - 1\right)\right)\right]\frac{(2\pi)^3}{\lambda_j^3 R^3 w^3}\sin\left(\frac{2\pi}{\lambda_j R}\left(s_k^* - s_k'\right)\right)\sin\left(\frac{2\pi}{\lambda_j R}\left(s_l^* - s_l'\right)\right)\sin\left(\frac{2\pi}{\lambda_j R}\left(s_m^* - s_m'\right)\right)d\mathbf{s}' = 0,$

as odd functions integrated over symmetric intervals vanish. For $k \neq l = m$ (the cases $k = l \neq m$ and $k = m \neq l$ follow by symmetry), we have

(A4.4) $\frac{\partial^3}{\partial s_l^2\partial s_k}\log p(\mathbf{r}|\mathbf{s}) = -\sum_{i=1}^{N} Tf_i(\mathbf{s})\,\frac{q_{k,i}'(s_k)\,q_{l,i}''(s_l)}{q_{k,i}(s_k)\,q_{l,i}(s_l)},$

and hence,

(A4.5) $M_{k,l,l}(\mathbf{s}^*) = -\sum_{i=1}^{N} Tf_i(\mathbf{s}^*)\,\frac{q_{k,i}'(s_k^*)\,q_{l,i}''(s_l^*)}{q_{k,i}(s_k^*)\,q_{l,i}(s_l^*)} \approx -\sum_{j=1}^{L}\frac{M}{\left(\lambda_j R\right)^{D}}\int_{s_1^*-\frac{\lambda_j R}{2}}^{s_1^*+\frac{\lambda_j R}{2}}\!\!\cdots\!\!\int_{s_D^*-\frac{\lambda_j R}{2}}^{s_D^*+\frac{\lambda_j R}{2}} Ta\left[\prod_{p=1}^{D}\exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_j R}\left(s_p^* - s_p'\right)\right) - 1\right)\right)\right]\frac{(2\pi)^3}{\lambda_j^3 R^3 w^3}\sin\left(\frac{2\pi}{\lambda_j R}\left(s_k^* - s_k'\right)\right)\left(w\cos\left(\frac{2\pi}{\lambda_j R}\left(s_l^* - s_l'\right)\right) - \sin^2\left(\frac{2\pi}{\lambda_j R}\left(s_l^* - s_l'\right)\right)\right)d\mathbf{s}' = 0.$

Lastly, for k=l=m we have,

(A4.6) $\frac{d^3}{ds_k^3}\log p(\mathbf{r}|\mathbf{s}) = \sum_{i=1}^{N}\left(r_i - Tf_i(\mathbf{s})\right)\frac{q_{k,i}'''(s_k)}{q_{k,i}(s_k)} - 3 r_i\,\frac{q_{k,i}'(s_k)\,q_{k,i}''(s_k)}{q_{k,i}(s_k)^2} + 2 r_i\left(\frac{q_{k,i}'(s_k)}{q_{k,i}(s_k)}\right)^3.$

Thus, $M_{k,k,k}(\mathbf{s}^*)$ becomes

(A4.7) $M_{k,k,k}(\mathbf{s}^*) = \mathbb{E}_{\mathbf{r}\sim\mathrm{Poiss}(\mathbf{f}(\mathbf{s}))}\left[\left.\frac{d^3}{ds_k^3}\log p(\mathbf{r}|\mathbf{s})\right|_{\mathbf{s}=\mathbf{s}^*}\right] = \sum_{i=1}^{N}\left(Tf_i(\mathbf{s}) - Tf_i(\mathbf{s}^*)\right)\frac{q_{k,i}'''(s_k^*)}{q_{k,i}(s_k^*)} - 3 Tf_i(\mathbf{s})\,\frac{q_{k,i}'(s_k^*)\,q_{k,i}''(s_k^*)}{q_{k,i}(s_k^*)^2} + 2 Tf_i(\mathbf{s})\left(\frac{q_{k,i}'(s_k^*)}{q_{k,i}(s_k^*)}\right)^3.$

Each term above depends on $\sin\left(\frac{2\pi}{\lambda_i R}\left(s_k^* - s_{k,i}\right)\right)$ raised to an odd power. Therefore, when multiplied by $f(\mathbf{s}^*)$ and integrated as above, these terms vanish. Hence, we can focus only on the terms including $f(\mathbf{s})$. After some calculus manipulations, the expression can be reduced to terms including only $T\sin\left(\frac{2\pi}{\lambda_i R}\left(s_k^* - s_{k,i}\right)\right)f_i(\mathbf{s})$ (for all i):

(A4.8) $M_{k,k,k}(\mathbf{s}^*) \approx T\sum_{j=1}^{L}\frac{aM}{\left(\lambda_j R\right)^{D}}\frac{(2\pi)^3}{\lambda_j^3 R^3 w}\int_{s_1-\frac{\lambda_j R}{2}}^{s_1+\frac{\lambda_j R}{2}}\!\!\cdots\!\!\int_{s_D-\frac{\lambda_j R}{2}}^{s_D+\frac{\lambda_j R}{2}}\sin\left(\frac{2\pi\left(s_k^* - s_k'\right)}{\lambda_j R}\right)\prod_{p=1}^{D}\exp\left(\frac{1}{w}\left(\cos\left(\frac{2\pi}{\lambda_j R}\left(s_p - s_p'\right)\right) - 1\right)\right)d\mathbf{s}' = T\sum_{j=1}^{L}\frac{aM}{\lambda_j R}\frac{(2\pi)^3}{\lambda_j^3 R^3 w}\exp\!\left(-\frac{D}{w}\right)B_0\!\left(\frac{1}{w}\right)^{D-1}\int_{s_k-\frac{\lambda_j R}{2}}^{s_k+\frac{\lambda_j R}{2}}\sin\left(\frac{2\pi\left(s_k^* - s_k'\right)}{\lambda_j R}\right)\exp\left(\frac{1}{w}\cos\left(\frac{2\pi}{\lambda_j R}\left(s_k - s_k'\right)\right)\right)ds_k'.$

Unfortunately, as this integral involves both $s_k^*$ and $s_k$, no simple closed-form expression can be obtained. Using the variable substitution $\theta_k^* = \frac{2\pi}{\lambda_j R}\left(s_k^* - s_k'\right)$, we can simplify it slightly to

(A4.9) $M_{k,k,k}(\mathbf{s}^*) \approx T\,aM\,\frac{(2\pi)^2}{R^3 w}\exp\!\left(-\frac{D}{w}\right)B_0\!\left(\frac{1}{w}\right)^{D-1}\sum_{j=1}^{L}\frac{1}{\lambda_j^3}\int_{-\pi}^{\pi}\sin\left(\theta_k^*\right)\exp\left(\frac{1}{w}\cos\left(\theta_k^* + \frac{2\pi}{\lambda_j R}\left(s_k - s_k^*\right)\right)\right)d\theta_k^*.$

Instead, we consider, for each module, the difference $\phi_j^* = \phi_k(\lambda_j) = s_k - s_k^*$ that maximizes the above integral. All integrals can then be upper bounded by a constant $C^*$, yielding the upper bound

(A4.10) $M_{k,k,k}(\mathbf{s}^*) \leq M^* = (2\pi)^2\,C^*\,\frac{TaN}{R^3 w}\exp\!\left(-\frac{D}{w}\right)B_0\!\left(\frac{1}{w}\right)^{D-1}\overline{\lambda^{-3}}.$

Note that the constant $C^*$ can be incorporated into the regression coefficient $K_1$ in Equation 47, and that the stimulus range is assumed to be R = 1 in the main text.

Appendix 5

Approximating minimal required spike count

Given the approximation of the minimal decoding time in Equation 9 (main text), we can reformulate it in terms of the required total spike count instead. The average total spike count for a given population and stimulus condition is

(A5.1) $\mu(\mathbf{s}) = \mathbb{E}_{\mathbf{r}}\left[\sum_{i=1}^{N} r_i\,\middle|\,\mathbf{s}\right] = \sum_{i=1}^{N} Tf_i(\mathbf{s}),$

where T is the decoding time. Thus, averaging over both stimulus conditions (assuming a uniformly distributed stimulus and integer spatial frequencies) and trials, the expected total spike count of the population is

(A5.2) $\mu = \mathbb{E}_{\mathbf{s}}\left[\mathbb{E}_{\mathbf{r}}\left[\sum_{i=1}^{N} r_i\,\middle|\,\mathbf{s}\right]\right] = \frac{1}{R^D}\int_0^R\!\!\cdots\!\!\int_0^R\sum_{i=1}^{N} Tf_i(\mathbf{s})\,d\mathbf{s} = NT\left(a\,B_0(1/w)^D\exp(-D/w) + b\right).$

Consequently, the number of spikes evoked by the stimulus-related tuning of the population is

(A5.3) $\mu_{\mathrm{stim}} = NTa\,B_0(1/w)^D\exp(-D/w).$

Inserting Supplementary Equation A5.3 into Equation 46 (main text) reveals the number of stimulus-evoked spikes, $\mu_{\mathrm{stim}}^*$, the population must produce before reaching the predicted lower bound,

(A5.4) $\mu_{\mathrm{stim}}^* \gtrsim A(w)\,B_0\!\left(\frac{1}{w}\right)\frac{\overline{\lambda^{-3}}^{\,2}}{\overline{\lambda^{-2}}^{\,3}}.$
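
A sketch evaluating this spike-count bound up to the unknown constant A(w), which is treated as a free parameter here:

```python
import numpy as np
from scipy.special import iv   # iv(0, x) is the document's B_0(x)

def min_stim_spikes(lambdas, w, A_w=1.0):
    """Required number of stimulus-evoked spikes (Equation A5.4), up to the
    w-dependent constant A(w)."""
    lam = np.asarray(lambdas, dtype=float)
    ratio = np.mean(lam ** -3.0) ** 2 / np.mean(lam ** -2.0) ** 3
    return A_w * iv(0, 1.0 / w) * ratio
```

Note that for a single-peaked population (all $\lambda_i = 1$) the moment ratio equals one, whereas adding higher-frequency modules increases the ratio and, hence, the required number of stimulus-evoked spikes.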

Data availability

Code has been made publicly available on GitHub (https://github.com/movitzle/Short_Decoding_Time, copy archived at Lenninger, 2023).

Article and author information

Author details

  1. Movitz Lenninger

    Division of Information Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing – review and editing
    For correspondence
    movitzle@kth.se
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-6165-4900
  2. Mikael Skoglund

    Division of Information Science and Engineering, KTH Royal Institute of Technology, Stockholm, Sweden
    Contribution
    Supervision, Funding acquisition, Writing – review and editing
    Competing interests
    No competing interests declared
  3. Pawel Andrzej Herman

    Division of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
    Contribution
    Resources, Supervision, Funding acquisition, Writing – review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-6553-823X
  4. Arvind Kumar

    Division of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
    Contribution
    Conceptualization, Supervision, Methodology, Writing - original draft, Writing – review and editing
    For correspondence
    arvkumar@kth.se
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-8044-9195

Funding

Digital Futures

  • Movitz Lenninger
  • Mikael Skoglund
  • Pawel Andrzej Herman
  • Arvind Kumar

Vetenskapsrådet

  • Arvind Kumar

Institute of Advanced Studies (Fellowship)

  • Arvind Kumar

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We thank the reviewers and editors for their helpful comments on improving the manuscript and Dr. Pascal Helson for proofreading the manuscript.

Version history

  1. Preprint posted: September 10, 2022
  2. Received: October 28, 2022
  3. Accepted: May 11, 2023
  4. Accepted Manuscript published: May 16, 2023 (version 1)
  5. Version of Record published: June 9, 2023 (version 2)

Copyright

© 2023, Lenninger et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
