Local online learning in recurrent networks with random feedback

James M Murray (corresponding author)
Columbia University, United States

Abstract

Recurrent neural networks (RNNs) enable the production and processing of time-dependent signals such as those involved in movement and working memory. Classic gradient-based algorithms for training RNNs have been available for decades, but are inconsistent with biological features of the brain, such as causality and locality. We derive an approximation to gradient-based learning that comports with these constraints by requiring synaptic weight updates to depend only on local information about pre- and postsynaptic activities, in addition to a random feedback projection of the RNN output error. In addition to providing mathematical arguments for the effectiveness of the new learning rule, we show through simulations that it can be used to train an RNN to perform a variety of tasks. Finally, to overcome the difficulty of training over very large numbers of timesteps, we propose an augmented circuit architecture that allows the RNN to concatenate short-duration patterns into longer sequences.
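As a rough illustration of the kind of update rule summarized above, the sketch below trains a small leaky-tanh rate RNN on a sinusoidal target using only local eligibility traces together with an output error projected back through a fixed random matrix. The network equations, hyperparameters, variable names, and task here are illustrative assumptions, not the source code released with the article (see Data availability).

    # Minimal sketch of a local, online learning rule with random error feedback
    # for a leaky-tanh rate RNN, trained here to produce a sinusoidal output.
    # Dynamics, hyperparameters, and task are illustrative assumptions and are
    # not the source code released with the article.
    import numpy as np

    rng = np.random.default_rng(0)

    N, N_in, N_out = 100, 1, 1        # recurrent, input, and output dimensions
    tau = 10.0                        # membrane time constant (in timesteps)
    T = 200                           # trial duration
    eta = 0.05                        # learning rate

    # Recurrent, input, and readout weights, plus a *fixed* random feedback
    # matrix B that carries the output error back to the recurrent units in
    # place of the transposed readout weights used by exact gradient methods.
    W     = rng.normal(0.0, 1.0 / np.sqrt(N),     (N, N))
    W_in  = rng.normal(0.0, 1.0 / np.sqrt(N_in),  (N, N_in))
    W_out = rng.normal(0.0, 1.0 / np.sqrt(N),     (N_out, N))
    B     = rng.normal(0.0, 1.0 / np.sqrt(N_out), (N, N_out))

    x = np.zeros((T, N_in)); x[:5, 0] = 1.0                      # brief input pulse
    y_star = np.sin(2 * np.pi * np.arange(T) / 50.0)[:, None]    # target output

    for trial in range(500):
        h = np.zeros(N)               # firing rates
        p = np.zeros((N, N))          # eligibility trace for recurrent weights
        q = np.zeros((N, N_in))       # eligibility trace for input weights
        loss = 0.0
        for t in range(T):
            u = W @ h + W_in @ x[t]                 # total synaptic drive
            dphi = 1.0 - np.tanh(u) ** 2            # derivative of tanh
            # Eligibility traces: low-pass-filtered products of a postsynaptic
            # factor (dphi) and the presynaptic activity -- purely local terms.
            p = (1 - 1 / tau) * p + (1 / tau) * np.outer(dphi, h)
            q = (1 - 1 / tau) * q + (1 / tau) * np.outer(dphi, x[t])
            # Leaky rate dynamics and linear readout.
            h = (1 - 1 / tau) * h + (1 / tau) * np.tanh(u)
            err = y_star[t] - W_out @ h             # output error
            # Online updates: the error reaches each recurrent synapse only
            # through the fixed random projection B.
            fb = B @ err
            W     += (eta / T) * fb[:, None] * p
            W_in  += (eta / T) * fb[:, None] * q
            W_out += (eta / T) * np.outer(err, h)
            loss  += 0.5 * np.sum(err ** 2) / T
        if trial % 100 == 0:
            print(f"trial {trial:4d}   loss {loss:.4f}")

Because each update depends only on the traces p and q (local pre- and postsynaptic quantities) and on the broadcast feedback signal B @ err, the rule is causal and online; the trade-off is that it approximates, rather than exactly follows, the true gradient.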

Data availability

Code implementing the RFLO learning algorithm for the example shown in Figure 2 has been included as a source code file accompanying this manuscript.

Article and author information

Author details

  1. James M Murray

    Zuckerman Mind, Brain, and Behavior Institute, Columbia University, New York, United States
    For correspondence
    jm4347@columbia.edu
    Competing interests
The author declares that no competing interests exist.
ORCID iD: 0000-0003-3706-4895

Funding

National Institutes of Health (DP5 OD019897)

  • James M Murray

National Science Foundation (DBI-1707398)

  • James M Murray

Gatsby Charitable Foundation

  • James M Murray

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2019, Murray

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 6,157 views
  • 962 downloads
  • 51 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

James M Murray (2019) Local online learning in recurrent networks with random feedback. eLife 8:e43299. https://doi.org/10.7554/eLife.43299
