Learning recurrent dynamics in spiking networks
Figures
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig1-v2.tif/full/617,/0/default.jpg)
Synaptic drive and spiking rate of neurons in a recurrent network can learn complex patterns.
(a) Schematic of network training. Blue square represents the external stimulus that elicits the desired response. Black curves represent the target output for each neuron. Red arrows represent recurrent connectivity that is trained to produce the desired target patterns. (b) Synaptic drive of 10 sample neurons before, during and after training. Pre-training is followed by multiple training trials. An external stimulus (blue) is applied prior to training for 100 ms. Synaptic drive (black) is trained to follow the target (red). If the training is successful, the same external stimulus can elicit the desired response. Bottom shows the spike raster of 100 neurons. (c) Top, The Pearson correlation between the actual synaptic drive and the target output during training trials. Bottom, The matrix (Frobenius) norm of changes in recurrent connectivity normalized to the initial connectivity during training. (d) Filtered spike train of 10 neurons before, during and after training. As in (b), the external stimulus (blue) is applied immediately before training trials. The filtered spike train (black) learns to follow the target spiking rate (red), with large errors during the early trials. Applying the stimulus to a successfully trained network elicits the desired spiking rate patterns in every neuron. (e) Top, Same as in (c) but measures the correlation between filtered spike trains and target outputs. Bottom, Same as in (c).
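The two learning-progress measures in panels (c) and (e) are simple to compute from the network state: a Pearson correlation between actual and target activity, and the Frobenius norm of the connectivity change normalized by the initial connectivity. A minimal sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def learning_metrics(drive, target, W, W0):
    """Learning-progress metrics as in Figure 1c:
    - Pearson correlation between actual and target synaptic drive
      (arrays of shape neurons x time, flattened before correlating);
    - Frobenius norm of the connectivity change W - W0, normalized by
      the Frobenius norm of the initial connectivity W0."""
    r = np.corrcoef(drive.ravel(), target.ravel())[0, 1]
    # np.linalg.norm defaults to the Frobenius norm for 2-D arrays
    dW = np.linalg.norm(W - W0) / np.linalg.norm(W0)
    return r, dW
```

A perfectly trained network gives a correlation of 1, while the normalized connectivity change quantifies how far training moved the weights from their initial values.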
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig1-figsupp1-v2.tif/full/617,/0/default.jpg)
Learning arbitrarily complex target patterns in a network of rate-based neurons.
The network dynamics obey where . The synaptic current to every neuron in the network was trained to follow complex periodic functions where the initial phase and frequencies were selected randomly. The elements of the initial connectivity matrix were drawn from a Gaussian distribution with mean zero and standard deviation where was strong enough to induce chaotic dynamics; Network size , connection probability between neurons , and time constant ms. External input with constant random amplitude was applied to each neuron for 50 ms (blue) and was set to zero elsewhere. (a) Before training, the network is in a chaotic regime and the synaptic current (black) of individual neurons fluctuates irregularly. (b) After learning to follow the target trajectories (red), the synaptic current tracks the target pattern closely in response to the external stimulus.
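A minimal simulation of a chaotic rate network of this kind can be sketched as follows, assuming the standard form τ dxᵢ/dt = −xᵢ + Σⱼ Jᵢⱼ tanh(xⱼ) with Gaussian weights of standard deviation g/√(pN). The specific parameter values used in the caption did not survive extraction, so the values below are illustrative only:

```python
import numpy as np

def simulate_rate_network(N=200, p=0.3, g=1.8, tau=10.0, T=500.0, dt=0.1, seed=0):
    """Euler simulation of tau * dx/dt = -x + J @ tanh(x).
    J has Gaussian entries with std g / sqrt(p*N), sparsified with
    connection probability p; g > 1 typically yields chaotic dynamics.
    All parameter values are illustrative, not the paper's."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(p * N), (N, N)) * (rng.random((N, N)) < p)
    x = rng.normal(0.0, 1.0, N)
    n_steps = int(round(T / dt))
    traj = np.empty((n_steps, N))
    for t in range(n_steps):
        x = x + dt / tau * (-x + J @ np.tanh(x))
        traj[t] = x
    return traj
```

Before training, trajectories from such a network fluctuate irregularly, as in panel (a).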
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig1-figsupp2-v2.tif/full/617,/0/default.jpg)
Training a network that has no initial connections.
The coupling strength of the initial recurrent connectivity is zero, and, prior to training, no synaptic or spiking activity appears beyond the first few hundred milliseconds. (a) Training synaptic drive patterns using the RLS algorithm. Black curves show the actual synaptic drive of 10 neurons and red curves show the target outputs. Blue shows the 100 ms external stimulus. (b) Correlation between synaptic drive and target function (top) and the Frobenius norm of changes in recurrent connectivity normalized to initial connectivity during training (bottom). (c and d) Same as in (a) and (b), but spiking rate patterns are trained.
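The RLS algorithm referenced here updates each neuron's incoming weights using a running estimate of the inverse correlation matrix of presynaptic activity, FORCE-style. A minimal per-step sketch for one postsynaptic neuron (names and the initialization scale are ours):

```python
import numpy as np

def rls_step(w, P, r, e):
    """One recursive least squares (RLS) update of a weight vector w,
    given the presynaptic activity vector r and the readout error
    e = actual - target. P tracks the inverse correlation matrix of r
    and is updated in place alongside w."""
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)   # Kalman-like gain
    P = P - np.outer(k, Pr)   # rank-1 downdate of the inverse correlation
    w = w - e * k             # error-proportional weight correction
    return w, P
```

In the network setting, one such update runs for every trained neuron at each training step, with P initialized to a scaled identity matrix.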
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig2-v2.tif/full/617,/0/default.jpg)
Learning multiple target patterns.
(a) The synaptic drive of neurons learns two different target outputs. The blue stimulus evokes the first set of target outputs (red) and the green stimulus evokes the second set of target outputs (red). (b) The spiking rate of individual neurons learns two different target outputs.
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig3-v2.tif/full/617,/0/default.jpg)
Quasi-static and heterogeneous patterns can be learned.
Example target patterns include complex periodic functions (products of sines with random frequencies), chaotic rate units (obtained from a random network of rate units), and OU noise (obtained by low-pass filtering white noise with a time constant of 100 ms). (a) Target patterns (red) overlaid with the actual synaptic drive (black) of a trained network. Quasi-static prediction (Equation 1) of the synaptic drive (blue). (b) Spike trains of trained neurons elicited over multiple trials, trial-averaged spiking rate calculated as the average number of spikes in 50 ms time bins (black), and predicted spiking rate (blue). (c) Performance of the trained network as a function of the fraction of randomly selected targets. (d) Network response of a trained network after removing all the synaptic connections from 5%, 10% and 40% of randomly selected neurons in the network.
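Two ingredients of this figure are easy to reproduce: OU-style targets obtained by low-pass filtering white noise, and trial-averaged rates from spike counts in 50 ms bins. A minimal sketch, with illustrative parameter values (only the 100 ms filter constant and 50 ms bin width come from the caption):

```python
import numpy as np

def ou_noise(T, dt=1.0, tau=100.0, sigma=1.0, seed=0):
    """OU-style target: low-pass filter white noise with time
    constant tau (ms), as in the caption's third target class."""
    rng = np.random.default_rng(seed)
    x = np.zeros(int(round(T / dt)))
    for t in range(1, x.size):
        x[t] = x[t - 1] - dt / tau * x[t - 1] \
               + sigma * np.sqrt(dt / tau) * rng.normal()
    return x

def binned_rate(spike_times, T, bin_ms=50.0):
    """Spiking rate from spike times (ms): counts in bin_ms-wide bins,
    converted to Hz. Averaging such histograms over trials gives the
    trial-averaged rate shown in panel (b)."""
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, T + bin_ms, bin_ms))
    return counts * (1000.0 / bin_ms)
```

For example, three spikes at 10, 20 and 60 ms within a 100 ms window yield binned rates of 40 Hz and 20 Hz.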
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig3-figsupp1-v2.tif/full/617,/0/default.jpg)
Learning target patterns with a low population spiking rate.
The synaptic drive of networks consisting of 500 neurons was trained to learn complex periodic functions where the initial phase and frequencies were selected randomly from [500 ms, 1000 ms]. (a) The amplitude , resulting in a population spiking rate of 2.8 Hz in the trained window. (b) The amplitude , resulting in a population spiking rate of 1.5 Hz in the trained window. (c) The amplitude , resulting in a population spiking rate of 0.01 Hz in the trained window, and learning fails.
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig3-figsupp2-v2.tif/full/617,/0/default.jpg)
Learning recurrent dynamics with leaky integrate-and-fire and Izhikevich neuron models.
The synaptic drive of a network of spiking neurons was trained to follow 1000 ms long targets where and were selected uniformly from the interval [500 ms, 1000 ms]. (a) The network consisted of leaky integrate-and-fire neurons, whose membrane potential obeys with a time constant ms; the neuron spikes when exceeds the spike threshold mV and is then reset to mV. Red curves show the target pattern and black curves show the voltage trace and synaptic drive of a trained network. (b) Spike rastergram of a trained leaky integrate-and-fire network generating the synaptic drive patterns. (c) The network consisted of Izhikevich neurons, whose dynamics are described by two equations and ; the neuron spikes when exceeds 30 mV, then is reset to and is reset to . Neuron parameters and were selected as in the original study (Izhikevich, 2003) so that there were equal numbers of regular spiking, intrinsic bursting, chattering, fast spiking and low threshold spiking neurons. The synaptic current is modeled as in Equation 6 for all neuron models, with synaptic decay time ms. Red curves show the target patterns and black curves show the voltage trace and synaptic drive of a trained network. (d) Spike rastergram of a trained Izhikevich network showing the trained response of different cell types.
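The single-neuron dynamics in panel (c) follow the standard Izhikevich (2003) form, dv/dt = 0.04v² + 5v + 140 − u + I and du/dt = a(bv − u), with v reset to c and u incremented by d at a spike. A minimal Euler sketch of one neuron (the regular-spiking parameter set and the input value are illustrative defaults, not the paper's):

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """Izhikevich neuron (2003):
      dv/dt = 0.04 v^2 + 5 v + 140 - u + I
      du/dt = a (b v - u)
    When v >= 30 mV: record a spike, reset v to c, increment u by d.
    I is a sequence of input currents, one per time step of dt ms.
    Defaults (a, b, c, d) are the standard regular-spiking set.
    Returns the list of spike times in ms."""
    v, u = c, b * c
    spikes = []
    for step, It in enumerate(I):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + It)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(step * dt)
            v = c
            u += d
    return spikes
```

Varying (a, b, c, d) as in Izhikevich (2003) produces the bursting, chattering, fast-spiking and low-threshold-spiking cell types seen in the rastergram of panel (d).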
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig3-figsupp3-v2.tif/full/617,/0/default.jpg)
The synaptic drive of a network of neurons is trained to learn an identical sine wave while external noise, generated independently from an OU process, is injected into individual neurons.
The same external noise (gray curves) is applied repeatedly during and after training. (a)-(c) The amplitude of the external noise is varied from low (a) to medium (b) to high (c). The target sine wave is shown in red and the synaptic drive of neurons is shown in black. The raster plot in (c) shows the ensemble of spike trains of a successfully trained network with strong external noise.
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig4-v2.tif/full/617,/0/default.jpg)
Learning innate activity in a network of excitatory and inhibitory neurons that respects Dale’s Law.
(a) Synaptic drive of sample neurons starting at random initial conditions in response to external stimulus prior to training. (b) Spike raster of sample neurons evoked by the same stimulus over multiple trials with random initial conditions. (c) Single spike perturbation of an untrained network. (d)-(f) Synaptic drive, multi-trial spiking response and single spike perturbation in a trained network. (g) The average phase deviation of theta neurons due to single spike perturbation. (h) Left, distribution of eigenvalues of the recurrent connectivity before and after training as a function of their absolute values. Right, Eigenvalue spectrum of the recurrent connectivity; gray circle has unit radius. (i) The accuracy of quasi-static approximation in untrained networks and the performance of trained networks as a function of coupling strength J and synaptic time constant τs. Color bar shows the Pearson correlation between predicted and actual synaptic drive in untrained networks (left) and innate and actual synaptic drive in trained networks (right).
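The unit-radius gray circle in panel (h) reflects the circular law: a random matrix with i.i.d. Gaussian entries of variance 1/N has eigenvalues that fill the unit disk. A minimal check (the network size here is illustrative):

```python
import numpy as np

# Eigenvalue spectrum of a dense random Gaussian connectivity matrix.
# With entries of standard deviation 1/sqrt(N), the spectrum fills the
# unit disk in the complex plane for large N (circular law), which is
# what the gray unit circle in panel (h) is compared against.
N = 500
rng = np.random.default_rng(1)
J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
ev = np.linalg.eigvals(J)  # complex eigenvalues, |ev| ≈ within the unit disk
```

Training reshapes this spectrum, which is why the before/after comparison in panel (h) is informative.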
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig5-v2.tif/full/617,/0/default.jpg)
Generating in vivo spiking activity in a subnetwork of a recurrent network.
(a) Network schematic showing cortical (black) and auxiliary (white) neuron models trained to follow the spiking rate patterns of cortical neurons and target patterns derived from OU noise, respectively. Multi-trial spike sequences of sample cortical and auxiliary neurons in a successfully trained network. (b) Trial-averaged spiking rate of cortical neurons (red) and neuron models (black) when no auxiliary neurons are included. (c) Trial-averaged spiking rate of cortical and auxiliary neuron models when . (d) Spiking rate of all the cortical neurons from the data (left) and the recurrent network model (right) trained with . (e) The fit to cortical dynamics improves as the number of auxiliary neurons increases. (f) Random shuffling of synaptic connections between cortical neuron models degrades the fit to cortical data. Error bars show the standard deviation of results from 10 trials.
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig6-v2.tif/full/617,/0/default.jpg)
Sampling and tracking errors.
Synaptic drive was trained to learn 1 s long trajectories generated from OU noise with decay time . (a) Performance of networks of size as a function of synaptic decay time and target decay time . (b) Examples of trained networks whose responses show sampling error, tracking error, and successful learning. The target trajectories are identical and ms. (c) Inverted ‘U’-shaped curve as a function of synaptic decay time. Error bars show the s.d. of five trained networks of size . (d) Inverted ‘U’-shaped curve for networks of sizes and for ms. (e) Network performance shown as a function of where the range of is from 30 ms to 500 ms, the range of is from 1 ms to 500 ms, and . (f) Network performance shown as a function of where the range of is from 1 ms to 30 ms, the range of is from 500 to 1000, and ms.
![](https://iiif.elifesciences.org/lax/37124%2Felife-37124-fig7-v2.tif/full/617,/0/default.jpg)
Capacity as a function of network size.
(a) Performance of trained networks as a function of target length for networks of size and . Target patterns were generated from OU noise with decay time ms. (b) Networks of fixed sizes trained on a range of target length and correlations. Color bar shows the Pearson correlation between target and actual synaptic drive. The black lines show the function where was fitted to minimize the least square error between the linear function and maximal target length that can be successfully learned at each . (c) Learning capacity shown as a function of network size.
Additional files
- Transparent reporting form: https://doi.org/10.7554/eLife.37124.014