Local online learning in recurrent networks with random feedback
Abstract
Recurrent neural networks (RNNs) enable the production and processing of time-dependent signals such as those involved in movement and working memory. Classic gradient-based algorithms for training RNNs have been available for decades, but they are inconsistent with biological features of the brain, such as causality and locality. We derive an approximation to gradient-based learning that comports with these constraints by requiring synaptic weight updates to depend only on local information about pre- and postsynaptic activities, together with a random feedback projection of the RNN output error. Beyond providing mathematical arguments for the effectiveness of the new learning rule, we show through simulations that it can be used to train an RNN to perform a variety of tasks. Finally, to overcome the difficulty of training over very large numbers of timesteps, we propose an augmented circuit architecture that allows the RNN to concatenate short-duration patterns into longer sequences.
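As a rough illustration of the rule summarized above, the NumPy sketch below implements an RFLO-style update for a leaky rate RNN: each recurrent synapse maintains a local eligibility trace built from its own pre- and postsynaptic signals, and the output error reaches the recurrent weights through a fixed random feedback matrix rather than through backpropagated gradients. The network size, task, dynamics details, and all variable names here are illustrative assumptions, not the paper's released code; the source code file accompanying the manuscript (see Data availability) is the authoritative implementation.

```python
# Minimal sketch of a random-feedback, local, online (RFLO-style) update
# for a leaky rate RNN. All sizes, the toy task, and variable names are
# illustrative assumptions, not the implementation released with the paper.
import numpy as np

rng = np.random.default_rng(0)

N, Nin, Nout = 50, 2, 1   # network, input, and output dimensions (assumed)
tau, eta, T = 10.0, 0.05, 200  # time constant, learning rate, trial length

W    = rng.normal(0, 1 / np.sqrt(N), (N, N))       # recurrent weights (trained)
Win  = rng.normal(0, 1 / np.sqrt(Nin), (N, Nin))   # input weights
Wout = np.zeros((Nout, N))                         # readout weights (trained)
B    = rng.normal(0, 1 / np.sqrt(N), (N, Nout))    # fixed random feedback matrix

x = rng.normal(size=(T, Nin))                                    # placeholder input
y_target = np.sin(np.linspace(0, 4 * np.pi, T)).reshape(T, Nout) # placeholder target

h = np.zeros(N)        # unit firing rates
p = np.zeros((N, N))   # eligibility trace, one entry per recurrent synapse

for t in range(T):
    u = W @ h + Win @ x[t]          # summed input to each unit
    phi = np.tanh(u)
    dphi = 1.0 - phi**2             # derivative of the nonlinearity
    h_prev = h.copy()
    h = h + (-h + phi) / tau        # leaky rate dynamics with time constant tau
    y = Wout @ h
    eps = y_target[t] - y           # output error at this timestep

    # Eligibility trace: low-pass-filtered product of postsynaptic gain and
    # presynaptic rate -- information available locally at each synapse.
    p = (1 - 1 / tau) * p + (1 / tau) * np.outer(dphi, h_prev)

    # Online updates: the error reaches the recurrent weights through the
    # fixed random matrix B instead of the transpose of Wout, so no weight
    # transport or backpropagation through time is required.
    W    += eta * (B @ eps)[:, None] * p
    Wout += eta * np.outer(eps, h)
```

The key design choice this sketch is meant to convey is that every update uses only quantities available at the synapse at the current timestep, plus a randomly projected copy of the output error, so learning is both causal and local.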
Data availability
Code implementing the RFLO learning algorithm for the example shown in Figure 2 has been included as a source code file accompanying this manuscript.
Article and author information
Author details
Funding
National Institutes of Health (DP5 OD019897)
- James M Murray
National Science Foundation (DBI-1707398)
- James M Murray
Gatsby Charitable Foundation
- James M Murray
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Copyright
© 2019, Murray
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.