1. Neuroscience

Recurrent network model for learning goal-directed sequences through reverse replay

  1. Tatsuya Haga (corresponding author)
  2. Tomoki Fukai (corresponding author)
  1. RIKEN, Japan
Research Article
  • Cited 11
  • Views 2,164
Cite this article as: eLife 2018;7:e34171 doi: 10.7554/eLife.34171

Abstract

Reverse replay of hippocampal place cells occurs frequently at rewarded locations, suggesting its contribution to goal-directed path learning. Symmetric spike-timing-dependent plasticity (STDP) in CA3 likely potentiates recurrent synapses for both forward (start-to-goal) and reverse (goal-to-start) replays during sequential activation of place cells. However, how reverse replay selectively strengthens the forward synaptic pathway is unclear. Here, we show computationally that firing sequences bias synaptic transmission in the direction opposite to their propagation under symmetric STDP when short-term synaptic depression or afterdepolarization is also present. We demonstrate that significant biases arise in biologically realistic simulation settings, and that this bias enables reverse replay to enhance goal-directed spatial memory on a W-maze. Further, we show that essentially the same mechanism works in a two-dimensional open field. Our model provides the first mechanistic account of how reverse replay contributes to hippocampal sequence learning for reward-seeking spatial navigation.
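The core mechanism described above can be illustrated with a minimal sketch (this is not the authors' published model; the Gaussian symmetric STDP kernel, the burst parameters, and the simplified no-recovery depression rule are illustrative assumptions). Because short-term depression attenuates the later spikes of each burst, the presynaptic drive of the earlier-firing cell is already weakened by the time the later cell fires, so a symmetric STDP rule potentiates the reverse synapse more than the forward one:

```python
import numpy as np

def stdp_kernel(dt, tau=20.0, amp=1.0):
    # Symmetric STDP: potentiation depends only on |pre - post| timing.
    return amp * np.exp(-dt**2 / (2 * tau**2))

def depression(spike_index, U=0.5):
    # Crude short-term depression (hypothetical, no recovery):
    # presynaptic efficacy shrinks with each successive spike in a burst.
    return (1.0 - U) ** spike_index

def total_potentiation(pre_times, post_times):
    # Sum the symmetric STDP kernel over all spike pairs, scaling each
    # pre spike's contribution by its depressed presynaptic efficacy.
    dw = 0.0
    for k, tp in enumerate(pre_times):
        eff = depression(k)
        for tq in post_times:
            dw += eff * stdp_kernel(tq - tp)
    return dw

# Two place cells activated sequentially, each firing a short burst (ms).
isi, lag = 10.0, 30.0
burst_a = np.arange(5) * isi            # earlier-firing cell A
burst_b = lag + np.arange(5) * isi      # later-firing cell B

forward = total_potentiation(burst_a, burst_b)  # A -> B (direction of propagation)
reverse = total_potentiation(burst_b, burst_a)  # B -> A (opposite direction)
print(forward, reverse)
```

Because the kernel is symmetric, the two synapses see identical timing differences; the asymmetry comes entirely from which spikes are depressed when the temporally closest pairings occur, so `reverse` exceeds `forward` in this sketch.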

Data availability

Our study is based on computer simulations; the simulation code is shared on GitHub.

Article and author information

Author details

  1. Tatsuya Haga

    Center for Brain Science, RIKEN, Wako, Japan
    For correspondence
    tatsuya.haga@riken.jp
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-3145-709X
  2. Tomoki Fukai

    Center for Brain Science, RIKEN, Wako, Japan
    For correspondence
    tfukai@riken.jp
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-6977-5638

Funding

Ministry of Education, Culture, Sports, Science, and Technology (15H04265)

  • Tomoki Fukai

Core Research for Evolutional Science and Technology (JPMJCR13W1)

  • Tomoki Fukai

Ministry of Education, Culture, Sports, Science, and Technology (16H01289)

  • Tomoki Fukai

Ministry of Education, Culture, Sports, Science, and Technology (17H06036)

  • Tomoki Fukai

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Upinder Singh Bhalla, National Centre for Biological Sciences, Tata Institute of Fundamental Research, India

Publication history

  1. Received: December 7, 2017
  2. Accepted: July 2, 2018
  3. Accepted Manuscript published: July 3, 2018 (version 1)
  4. Version of Record published: July 25, 2018 (version 2)

Copyright

© 2018, Haga & Fukai

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 2,164
    Page views
  • 380
    Downloads
  • 11
    Citations

Article citation count generated by polling the highest count across the following sources: Crossref, Scopus, PubMed Central.

Download links


Downloads (link to download the article as PDF)

Download citations (links to download the citations from this article in formats compatible with various reference manager tools)

Open citations (links to open the citations from this article in various online reference manager services)

Further reading

    1. Neuroscience
    Eva On-Chai Lau et al.
    Research Article Updated

    Diaphanous (DIAPH) three (DIAPH3) is a member of the formin proteins that have the capacity to nucleate and elongate actin filaments and, therefore, to remodel the cytoskeleton. DIAPH3 is essential for cytokinesis as its dysfunction impairs the contractile ring and produces multinucleated cells. Here, we report that DIAPH3 localizes at the centrosome during mitosis and regulates the assembly and bipolarity of the mitotic spindle. DIAPH3-deficient cells display disorganized cytoskeleton and multipolar spindles. DIAPH3 deficiency disrupts the expression and/or stability of several proteins including the kinetochore-associated protein SPAG5. DIAPH3 and SPAG5 have similar expression patterns in the developing brain and overlapping subcellular localization during mitosis. Knockdown of SPAG5 phenocopies DIAPH3 deficiency, whereas its overexpression rescues the DIAPH3 knockdown phenotype. Conditional inactivation of Diaph3 in mouse cerebral cortex profoundly disrupts neurogenesis, depleting cortical progenitors and neurons, leading to cortical malformation and autistic-like behavior. Our data uncover the uncharacterized functions of DIAPH3 and provide evidence that this protein belongs to a molecular toolbox that links microtubule dynamics during mitosis to aneuploidy, cell death, fate determination defects, and cortical malformation.

    1. Neuroscience
    Alexander Fengler et al.
    Tools and Resources Updated

    In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.