Are single-peaked tuning curves tuned for speed rather than accuracy?

  1. Movitz Lenninger (corresponding author)
  2. Mikael Skoglund
  3. Pawel Andrzej Herman
  4. Arvind Kumar (corresponding author)
  1. Division of Information Science and Engineering, KTH Royal Institute of Technology, Sweden
  2. Division of Computational Science and Technology, KTH Royal Institute of Technology, Sweden
  3. Science for Life Laboratory, Sweden
9 figures, 3 tables and 1 additional file

Figures

Figure 1
Illustrations of local and catastrophic errors.

(a) Top: A two-neuron system encoding a single variable using single-peaked tuning curves (λ=1). Bottom: The tuning curves create a one-dimensional activity trajectory embedded in a two-dimensional …

Figure 2
(Very) Short decoding times when both Fisher information and MSE fail.

(a) Time evolution of root mean squared error (RMSE), averaged across trials and stimulus dimensions, using maximum likelihood estimation (solid lines) for two populations (blue: λ1=1, c=1, red: λ1=1, c=1/1.44)…
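As a hedged sketch of the decoding setup described in this caption, the snippet below draws Poisson spike counts from a periodic tuning-curve population and recovers the stimulus by maximum likelihood on a grid. The population size, peak rate, tuning width, and grid resolution are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population: N periodic tuning curves over a unit stimulus range.
# Peak rate a, width w, and spatial period lam are assumptions.
N, T, a, w = 50, 1.0, 50.0, 0.3
centers = np.arange(N) / N

def rates(s, lam=1.0):
    # lam = 1 gives single-peaked tuning curves on the unit interval.
    return a * np.exp((np.cos(2 * np.pi * (s - centers) / lam) - 1.0) / w)

s_true = 0.37
counts = rng.poisson(rates(s_true) * T)   # Poisson spike counts in window T

# Maximum-likelihood estimate: maximize the Poisson log-likelihood on a grid.
grid = np.linspace(0.0, 1.0, 2001)
R = rates(grid[:, None]) * T                               # (grid, neuron) means
logL = (counts * np.log(R + 1e-12)).sum(axis=1) - R.sum(axis=1)
s_hat = grid[np.argmax(logL)]
```

Shrinking T in this sketch reproduces the qualitative point of the figure: with too few spikes, the likelihood can peak far from the true stimulus.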

Figure 3 with 2 supplements
Catastrophic errors and minimal decoding times in populations with two modules.

(a) Top: Sampled individual likelihood functions of two modules with very different spatial periods. Bottom: The sampled joint likelihood function for the individual likelihood functions in the top …

Figure 3—figure supplement 1
Example of decoding a stimulus close to the periodic edge.

Top: Sampled likelihood functions of two modules with λ1=1 and λ2=1/2.3. Bottom: The joint likelihood function is shifted across the periodic boundary. Such shifts across the periodic boundary can become …

Figure 3—figure supplement 2
Same as Figure 2d–e, but using threshold factor α=1.2.

Notice the stronger deviations from the predicted minimal decoding time for λ1=1 when c is slightly below 1/2, 1/3, 1/4, etc.

Figure 4 with 6 supplements
Minimal decoding times for populations with five modules.

(a) Illustration of the likelihood functions of a population with L=5 modules using scale factor c=0.7. (b) The peak stimulus-evoked amplitudes of each neuron (left column) were selected such that all …

Figure 4—figure supplement 1
Scaling of minimal decoding time with stimulus dimensionality and tuning width.

(a) The predicted minimal decoding time from D=1 scaled by exp(1/w)B0(1/w) provides a reasonable prediction of the minimal decoding time for D=2. The data used is the same as in Figure 4c (solid lines with …

Figure 4—figure supplement 2
Minimal decoding times using a different threshold factor.

Same as Figure 4a–d in the main text, but simulated using a different threshold factor. For each T, the MSE is evaluated based on 15,000 random stimulus samples.

Figure 4—figure supplement 3
Minimal decoding times based on one-sided KS-tests.

Same as Figure 4a–d in the main text, but simulated using the one-sided KS-test for determining minimal decoding time (see main text for details). For each T, the MSE is evaluated based on 15,000 …
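The exact one-sided KS-test criterion is specified in the main text; as a generic illustration, the one-sided KS statistic D⁺ compares an empirical distribution against a reference CDF from above. The standard-normal null and sample here are hypothetical.

```python
import math
import random

def ks_dplus(samples, cdf):
    """One-sided KS statistic: D+ = max_i (i/n - F(x_(i)))."""
    xs = sorted(samples)
    n = len(xs)
    return max((i + 1) / n - cdf(x) for i, x in enumerate(xs))

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(2)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
d_plus = ks_dplus(data, std_normal_cdf)   # small when the null holds
```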

Figure 4—figure supplement 4
Time evolution of MSE and outlier errors.

(a) Time evolution of the RMSE (non-transparent lines) for periodic and single-peaked populations and the lower bound set by the Cramér-Rao bound (transparent lines) when decoding a D=1 stimulus. (b) …

Figure 4—figure supplement 5
Spike counts required for removing catastrophic errors.

Plot of the mean spike counts (summed over the population) required to remove catastrophic errors for the populations in Figure 4. Each circle indicates the minimal spike count for a single …

Figure 4—figure supplement 6
Minimal decoding times for populations without scale factors.

Same as Figure 4c, e and f, but only using tuning curves with an integer number of peaks (i.e., the inverse spatial periods are all integers). Thus, there is no common scale factor relating the spatial frequencies of the …

Figure 5
Minimal decoding time predicts the removal of large estimation errors.

(a) The 99.8th percentile (filled circles) and the maximal error (i.e., 100th percentile, open circles) of the root squared error distributions for D=1 against the estimated minimal decoding time for …

Figure 6 with 2 supplements
Ongoing activity increases minimal decoding time.

(a) The case of encoding a one-dimensional stimulus (D=1) with or without ongoing activity at 2 sp/s (diamond and circle shapes, respectively). (b) The case of a two-dimensional stimulus (D=2) under …

Figure 6—figure supplement 1
Minimal decoding times using a different threshold factor.

Same as Figure 6 in the main text, but using α=1.2.

Figure 6—figure supplement 2
Minimal decoding times based on one-sided KS-test.

Same as Figure 6 in the main text, but using the one-sided KS-test criterion described before (see Figure 4—figure supplement 4).

Figure 7
Implications for a simple spiking neural network with suboptimal readout.

(a) Illustration of the spiking neural networks (SNNs). (b) Example of single trials. Top row: Two example trials for step-like change in stimulus (green line). The left and right plots show the …

Figure 8
Statistical comparison of the SNN models.

(a) Step-like change: Comparison between the distributions of accumulated RMSEs at different decoding times (p=0.4, 9.0 × 10⁻⁴, and 8.7 × 10⁻⁵, respectively). (b) OU-stimulus: The distributions of RMSE across trials …

Figure 9
Minimal decoding time for various tuning and stimulus parameters.

(a-b) Minimal decoding time for different combinations of population sizes (N) and levels of ongoing background activity (b) for the single-peaked population (a) and the periodic population (b). (c) …

Tables

Table 1
Parameters and parameter values for O-U stimulus.
Parameter     Parameter value
τs (s)        0.5
σs            0.1
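With the Table 1 parameters, the O-U (Ornstein–Uhlenbeck) stimulus can be sketched with Euler–Maruyama integration. The noise scaling below, chosen so that the stationary standard deviation equals σs, and the step size are assumptions about the paper's convention.

```python
import numpy as np

tau_s, sigma_s = 0.5, 0.1     # Table 1 values: time constant (s), stationary std
dt, n_steps = 1e-3, 5000      # illustrative integration settings
rng = np.random.default_rng(1)

s = np.empty(n_steps)
s[0] = 0.0
for t in range(1, n_steps):
    drift = -s[t - 1] * dt / tau_s                              # relaxation to 0
    diffusion = sigma_s * np.sqrt(2.0 * dt / tau_s) * rng.standard_normal()
    s[t] = s[t - 1] + drift + diffusion
```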
Table 2
Parameters and parameter values for LIF neurons.
Parameter                                 Parameter value
Membrane time constant, τmemb (ms)        20
Threshold membrane potential, Vth (mV)    20
Reset membrane potential (mV)             10
Resting potential, V0 (mV)                0
Refractory period, τrp (ms)               2
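A minimal forward-Euler sketch of a leaky integrate-and-fire neuron with the Table 2 parameters; the constant suprathreshold drive is hypothetical (in the model, input arrives as synaptic events), and the step size is illustrative.

```python
# LIF parameters from Table 2 (ms and mV).
tau_m, V_th, V_reset, V0, t_rp = 20.0, 20.0, 10.0, 0.0, 2.0
dt = 0.1          # integration step (ms); illustrative
I = 25.0          # constant suprathreshold drive (mV); hypothetical

V, refrac, spike_times = V0, 0.0, []
for step in range(int(200.0 / dt)):           # simulate 200 ms
    t = step * dt
    if refrac > 0.0:
        refrac -= dt                          # absolute refractory period
        continue
    V += (dt / tau_m) * (-(V - V0) + I)       # leaky integration
    if V >= V_th:                             # threshold crossing: spike
        spike_times.append(t)
        V = V_reset
        refrac = t_rp
```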
Table 3
Spiking network parameters and parameter values.
Parameter                                      Parameter value
Number of neurons, 1st layer, N1               500
Number of neurons, 2nd layer, N2               400
Maximal stimulus-evoked input rate, a (sp/s)   750
Baseline input rate, b (sp/s)                  4250
Spatial periods, λj                            [1] or [1, 2, 3, 4]
Width parameter, w                             0.3
Width parameter (readout layer), wro           (π/N2)² · 2 log(2)
Input EPSP (1st layer), JE (mV)                0.2
Maximal EPSP (2nd layer), JEE (mV)             2
Maximal IPSP (2nd layer), JII (mV)             2
Synaptic delays, d (ms)                        1.5

Additional files
