VTA neurons coordinate with the hippocampal reactivation of spatial experience
Abstract
Spatial learning requires the hippocampus, and the replay of spatial sequences during hippocampal sharp wave-ripple (SPW-R) events of quiet wakefulness and sleep is believed to play a crucial role. To test whether the coordination of ventral tegmental area (VTA) reward prediction error signals with these replayed spatial sequences could contribute to this process, we recorded from neuronal ensembles of the hippocampus and VTA as rats performed appetitive spatial tasks and subsequently slept. We found that many reward-responsive (RR) VTA neurons coordinated with hippocampal SPW-R events of quiet wakefulness that replayed recent experience. In contrast, coordination between RR neurons and SPW-R events in subsequent slow wave sleep (SWS) was diminished. Together, these results indicate distinct contributions of VTA reinforcement activity, coordinated with hippocampal spatial replay, to the processing of wake- and SWS-associated spatial memory.
Article and author information
Ethics
Animal experimentation: All procedures were approved by the Committee on Animal Care of Massachusetts Institute of Technology (Protocol Number 0505-032-08) and followed the ethical guidelines of the US National Institutes of Health.
Reviewing Editor
- Howard Eichenbaum, Boston University, United States
Publication history
- Received: October 28, 2014
- Accepted: October 13, 2015
- Accepted Manuscript published: October 14, 2015 (version 1)
- Version of Record published: December 17, 2015 (version 2)
Copyright
© 2015, Gomperts et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- Page views: 4,971
- Downloads: 1,339
- Citations: 82
Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.