Learning recurrent dynamics in spiking networks
Abstract
Spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While recurrent network models can be trained to perform various tasks, the range of recurrent dynamics that can emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for the synaptic and spiking-rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, to stabilize irregular spiking activity in a network of excitatory and inhibitory neurons that respects Dale's law, and to reproduce the heterogeneous spiking-rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.
Data availability
Example computer code that trains recurrent spiking networks is available at http://github.com/chrismkkim/SpikeLearning
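For readers unfamiliar with the training procedure named in the abstract, the following is a minimal sketch of a recursive least squares (RLS) update applied to recurrent weights, in the spirit of FORCE-style training. This is not the authors' published code; it uses a simplified rate network, and the network size, target patterns, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000                 # neurons, time steps (assumed)
dt, tau = 1e-3, 10e-3            # integration step and synaptic time constant (assumed)
W = rng.normal(0, 1.5 / np.sqrt(N), (N, N))  # initial random recurrent weights
P = np.eye(N)                    # running estimate of the inverse rate-correlation matrix
x = rng.normal(0, 0.1, N)        # synaptic drive of each neuron
# Hypothetical per-neuron target: sinusoids with random amplitudes
target = np.sin(np.linspace(0, 8 * np.pi, T))[:, None] * rng.normal(0, 0.5, N)

for t in range(T):
    r = np.tanh(x)               # firing rates
    x += dt / tau * (-x + W @ r) # leaky synaptic dynamics
    if t % 2 == 0:               # apply RLS every other step
        err = x - target[t]      # error of each neuron's synaptic drive
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)  # RLS gain vector
        P -= np.outer(k, Pr)     # rank-1 update of the inverse correlation estimate
        W -= np.outer(err, k)    # move each neuron's weights to reduce its error
```

The key point the sketch illustrates is that RLS updates every row of the recurrent weight matrix independently, using a shared gain vector computed from the running inverse correlation of the presynaptic rates.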
Article and author information
Author details
Funding
National Institute of Diabetes and Digestive and Kidney Diseases (Intramural Research Program)
- Christopher M Kim
- Carson C Chow
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Copyright
This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
Metrics
- 3,900 views
- 679 downloads
- 45 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.