Learning accurate path integration in ring attractor models of the head direction system

  1. Pantelis Vafidis (corresponding author)
  2. David Owald
  3. Tiziano D'Albis
  4. Richard Kempter (corresponding author)
  1. Computation and Neural Systems, California Institute of Technology, United States
  2. Bernstein Center for Computational Neuroscience, Germany
  3. Institute for Theoretical Biology, Department of Biology, Humboldt-Universität zu Berlin, Germany
  4. Institute of Neurophysiology, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, and Berlin Institute of Health, Germany
  5. NeuroCure, Charité - Universitätsmedizin Berlin, Germany
  6. Einstein Center for Neurosciences, Germany
15 figures, 2 tables and 1 additional file

Figures

Figure 1 with 2 supplements
Network architecture.

(A) The ring of HD cells projects to two wings of HR cells, a leftward (Left HR cells, abbreviated as L-HR) and a rightward (Right HR cells, or R-HR), so that each wing receives selective …

Figure 1—figure supplement 1
Separation of axon-proximal and axon-distal inputs to HD (E-PG) neurons in the Drosophila EB.

(A) Synaptic locations in the EB where visual (R2 and R4d) and recurrent and HR-to-HD (P-EN1 and P-EN2) inputs arrive, for a total of 16 HD neurons tested (Neuron ID above each panel). Similarly to …

Figure 1—video 1
A three-dimensional rotating video of the synapse locations in Figure 1E.

Figure 2
Path integration (PI) performance of the network.

(A) Example activity profiles of HD, L-HR, and R-HR neurons (firing rates gray-scale coded). Activities are visually guided (yellow overbars) or are the result of PI in the absence of visual input …
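
To make the notion of PI performance concrete, here is a minimal sketch of decoding the network's head-direction estimate from the HD-ring activity with a population-vector average. The uniformly tiled preferred directions and the Gaussian-shaped example bump are illustrative assumptions, not the decoder used in the paper.

```python
import numpy as np

def decode_head_direction(rates):
    """Population-vector estimate of the bump position on the HD ring.

    `rates` is a length-N array of HD-cell firing rates; preferred
    directions are assumed to tile [0, 360) degrees uniformly.
    """
    n = len(rates)
    prefs = np.deg2rad(np.linspace(0.0, 360.0, n, endpoint=False))
    vec = np.sum(np.asarray(rates) * np.exp(1j * prefs))  # complex population vector
    return np.rad2deg(np.angle(vec)) % 360.0

# Hypothetical example: a Gaussian-shaped bump centered at 90 deg
prefs = np.linspace(0.0, 360.0, 60, endpoint=False)
dist = (prefs - 90.0 + 180.0) % 360.0 - 180.0  # wrapped angular distance
rates = np.exp(-dist**2 / (2 * 30.0**2))
print(decode_head_direction(rates))  # ~90.0
```

Tracking the decoded angle over time and differentiating it yields the neural angular velocity that later figures compare against the true head angular velocity.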

Figure 3 with 3 supplements
The network connectivity during and after learning.

(A), (B) The learned weight matrices (color coded) of recurrent connections in the HD ring, Wrec, and of HR-to-HD connections, WHR, respectively. Note the circular symmetry in both matrices. (C) …

Figure 3—figure supplement 1
Removal of long-range excitatory projections impairs PI for high angular velocities.

(A) Profiles of the HR-to-HD weight matrix WHR from Figure 3C (dashed lines), and the same profiles after the long-range excitatory projections have been removed (solid lines). (B) PI in the …

Figure 3—figure supplement 2
Details of learning.

(A) Learning errors (Equation 18) in the converged network in light conditions (yellow overbar) or during PI in darkness (purple overbar). Note the difference in scale. In light conditions, the …

Figure 3—figure supplement 3
PI performance of a perturbed network.

After learning, the synaptic connections in Figure 3A and B have been perturbed with Gaussian noise with standard deviation ∼1.5. (A), (B) Synaptic weight matrices after noise addition. (C) Example …
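
For readers who want to reproduce this kind of perturbation, the sketch below adds i.i.d. Gaussian noise of standard deviation 1.5 to a weight matrix, as in the caption; the stand-in matrix, seed, and function name are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def perturb(w, noise_std=1.5):
    """Add i.i.d. Gaussian noise (std ~1.5, as stated above) to a learned
    weight matrix; `w` stands in for Wrec or WHR."""
    return w + rng.normal(0.0, noise_std, size=w.shape)

# Random stand-in for a learned 60x60 connectivity matrix (not the
# actual learned weights from Figure 3A and B)
w_rec = rng.normal(0.0, 1.0, size=(60, 60))
w_rec_noisy = perturb(w_rec)
```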

Figure 4 with 1 supplement
The network adapts rapidly to new gains.

Starting from the converged network in Figure 3, we change the gain g between visual and self-motion inputs, akin to experiments conducted in VR in flies and rodents (Seelig and Jayaraman, 2015; Jay…

Figure 4—figure supplement 1
Limits of PI gain adaptation.

(A) Normalized root mean square error (NRMSE) between neural and head angular velocity, for gain-1 networks that subsequently have been rewired to learn different gains. To compute the NRMSE, we …
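
A minimal sketch of an NRMSE computation under one common convention is given below; because the caption's description of the normalization is truncated, dividing by the RMS of the reference (head) velocity is an assumption, not necessarily the paper's definition.

```python
import numpy as np

def nrmse(v_neural, v_head):
    """RMSE between neural and head angular velocity, normalized by the
    RMS of the head (reference) velocity -- an assumed convention."""
    v_neural, v_head = np.asarray(v_neural), np.asarray(v_head)
    rmse = np.sqrt(np.mean((v_neural - v_head) ** 2))
    return rmse / np.sqrt(np.mean(v_head ** 2))
```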

Appendix 1—figure 1
Robustness to injected noise.

(A) PI example in a network trained with noise (SNR ≈ 2, train noise σn=0.7). Panels are organized as in Figure 2A, which shows the activity in a network trained without noise (SNR = ∞, σn=0). (B) Profiles of …

Appendix 2—figure 1
Limits of network performance when varying synaptic delays.

(A) Maximum neural angular velocity learned is inversely proportional to the synaptic delay τs in the network, with constant b = 75 deg in Equation 23 (blue dot-dashed line). Green dots: point estimate of …
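
The stated scaling can be checked with a one-line computation. The explicit form v_max = b/τs is inferred from "inversely proportional" together with the constant b = 75 deg; Equation 23 itself is not reproduced here.

```python
# Assumed form of the scaling stated in the caption: v_max = b / tau_s
b = 75.0            # deg
tau_s = 0.065       # s; default synaptic time constant from Table 1 (65 ms)
v_max = b / tau_s   # ~1154 deg/s for the default network
print(f"v_max = {v_max:.0f} deg/s")
```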

Appendix 3—figure 1
Performance of a network where HD-to-HR connection weights are allowed to vary randomly, and HD neurons also project to HR neurons adjacent to the ones they correspond to, respecting the topography of the protocerebral bridge (PB).

(A) The HD-to-HR connectivity matrix, WHD. Note that, compared to what is described in the Materials and methods (final paragraph of ‘Neuronal Model’), the order of HD neurons is rearranged: we have …

Appendix 3—figure 2
PI performance in a network with random HD-to-HR connection strengths and learned weights from the network in Figure 3.

Here we vary the magnitude of the main diagonal HD-to-HR connections but preserve the 1-to-1 nature of the connections. We assume that Wrec and WHR are passed down genetically (i.e. there is no further …

Appendix 3—figure 3
PI performance of a network where HD-to-HR connection weights are completely random.

(A) The HD-to-HR weights are drawn from a folded normal distribution, originating from a normal distribution with zero mean and variance π(w^HD)²/200. (B) As a result, the learned HR-to-HD connections have also …
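
Sampling such weights is straightforward; the sketch below draws from the folded normal with the variance stated in the caption, using the Table 1 value of w^HD as the scale (seed and array size are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(seed=2)
w_hd = 13.33  # nominal HD-to-HR weight from Table 1 (ms)
n_hd = 60

# Folded normal draw: absolute values of samples from a zero-mean normal
# with variance pi * w_hd**2 / 200, as stated in the caption
sigma = np.sqrt(np.pi * w_hd**2 / 200.0)
w_hd_random = np.abs(rng.normal(0.0, sigma, size=n_hd))

# With this variance, the folded draws have mean sigma*sqrt(2/pi) = w_hd/10
print(w_hd_random.mean())
```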

Appendix 5—figure 1
Left: assumed distribution of head-turning speeds (black) and discrete approximation used for the simulations.

The colored vertical lines indicate speeds for which the filter h+ is plotted in the right panel. Right: temporal filter h+(θ) for several example speeds (see vertical lines in the left panel). Note …

Appendix 5—figure 2
Evolution of the reduced model.

The figure shows from top to bottom: (A) the HD-cells’ firing rate f(v_a^+); (B) the error ϵ; (C) the average absolute error; (D) the recurrent weights w; (E–F) the rotation weights wR and wL. The HD …

Appendix 5—figure 3
Development of the recurrent weights.

The figure provides an intuition for the shape of the recurrent-weights profiles that emerge during learning. Each column refers to a different time step (see also dashed lines in Appendix 5—figure 2)…

Appendix 5—figure 4
Development of the rotation weights.

The figure provides an intuition for the shape of the rotation-weights profiles that emerge during learning. Each column refers to a different time step (see also dashed lines in Appendix 5—figure 2)…

Author response image 1
PI performance of a network that was initialized with randomly shuffled weights from Figure 3A and B.

(A), (B) Resulting weight matrices after ~22 hours of training. The weight matrices look very similar to the ones in the main text in Figure 3A and B, albeit connectivity remains noisy and weights …

Author response image 2
PI performance for networks with more neurons.

(A) For NHD = NHR = 120 and training time ~4.5 hours, PI performance is excellent, and the flat area for small angular velocities observed in Figure 2C is no longer present. (B) To confirm that …

Tables

Table 1
Parameter values.
Parameter | Value | Unit | Explanation
N_HD | 60 | – | Number of head direction (HD) neurons
N_HR | 60 | – | Number of head rotation (HR) neurons
Δϕ | 12 | deg | Angular resolution of network
τ_s | 65 | ms | Synaptic time constant
I_inh^HD | −1 | – | Global inhibition to HD neurons
τ_l | 10 | ms | Leak time constant of axon-distal compartment of HD neurons
C | 1 | ms | Capacitance of axon-proximal compartment of HD neurons
g_L | 1 | – | Leak conductance of axon-proximal compartment of HD neurons
g_D | 2 | – | Conductance from axon-distal to axon-proximal compartment
I_exc^HD | 4 | – | Excitatory input to axon-proximal compartment in light conditions
σ_n | 0 | – | Synaptic input noise level
M | 4 | – | Visual input amplitude
M_stim | 16 | – | Optogenetic stimulation amplitude
σ | 0.15 | – | Visual receptive field width
σ_stim | 0.25 | – | Optogenetic stimulation width
I_0^vis | −5 | – | Visual input baseline
f_max | 150 | spikes/s | Maximum firing rate
β | 2.5 | – | Steepness of activation function
x_1/2 | 1 | – | Input level for 50% of the maximum firing rate
I_inh^HR | −1.5 | – | Global inhibition to HR neurons
k | 1/360 | s/deg | Constant ratio of velocity input and head angular velocity
A_active | 2 | – | Input range for which f has not saturated
w^HD | 13.3̄ | ms | Constant weight from HD to HR neurons
τ_δ | 100 | ms | Plasticity time constant
Δt | 0.5 | ms | Euler integration step size
τ_v | 0.5 | s | Time constant of velocity decay
σ_v | 450 | deg/s | Standard deviation of angular velocity noise
η | 0.05 | 1/s | Learning rate
  1. Parameter values, in the order they appear in the Methods section. These values apply to all simulations, unless otherwise stated. Note that voltages, currents, and conductances are assumed unitless in the text; therefore capacitances have the same units as time constants.
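
For convenience, Table 1 can be transcribed into a plain Python dictionary; the ASCII key names below are stand-ins for the paper's symbols, with units encoded in the key suffixes.

```python
# Transcription of Table 1 (values as listed; unitless entries carry no
# unit suffix). Key names are ASCII stand-ins for the paper's symbols.
PARAMS = {
    "N_HD": 60,             # number of head direction (HD) neurons
    "N_HR": 60,             # number of head rotation (HR) neurons
    "dphi_deg": 12,         # angular resolution of the network
    "tau_s_ms": 65,         # synaptic time constant
    "I_inh_HD": -1,         # global inhibition to HD neurons
    "tau_l_ms": 10,         # leak time constant, axon-distal compartment
    "C_ms": 1,              # capacitance, axon-proximal compartment
    "g_L": 1,               # leak conductance, axon-proximal compartment
    "g_D": 2,               # distal-to-proximal conductance
    "I_exc_HD": 4,          # excitatory input in light conditions
    "sigma_n": 0,           # synaptic input noise level
    "M": 4,                 # visual input amplitude
    "M_stim": 16,           # optogenetic stimulation amplitude
    "sigma": 0.15,          # visual receptive field width
    "sigma_stim": 0.25,     # optogenetic stimulation width
    "I_0_vis": -5,          # visual input baseline
    "f_max_hz": 150,        # maximum firing rate (spikes/s)
    "beta": 2.5,            # steepness of activation function
    "x_half": 1,            # input for 50% of maximum firing rate
    "I_inh_HR": -1.5,       # global inhibition to HR neurons
    "k_s_per_deg": 1/360,   # velocity input per deg/s of head turn
    "A_active": 2,          # input range where f is not saturated
    "w_HD_ms": 13.33,       # constant HD-to-HR weight (13.3 recurring)
    "tau_delta_ms": 100,    # plasticity time constant
    "dt_ms": 0.5,           # Euler integration step size
    "tau_v_s": 0.5,         # velocity decay time constant
    "sigma_v_deg_s": 450,   # std of angular velocity noise
    "eta_per_s": 0.05,      # learning rate
}
```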

Appendix 4—table 1
Default values for time scales in the model, ordered by magnitude.
Time scale | Expression | Value | Unit
Membrane time constant of axon-proximal compartment | C/g_L | 1 | ms
Membrane time constant of axon-distal compartment | τ_l | 10 | ms
Synaptic time constant | τ_s | 65 | ms
Weight update filtering time constant | τ_δ | 100 | ms
Velocity decay time constant | τ_v | 0.5 | s
Learning time scale | 1/η | 20 | s

Additional files
