A bidirectional corticoamygdala circuit for the encoding and retrieval of detailed reward memories
Abstract
Adaptive reward-related decision making often requires an accurate and detailed representation of the potential rewards available. Environmental reward-predictive stimuli can facilitate these representations, allowing one to infer which specific rewards might be available and choose accordingly. This process relies on encoded relationships between the cues and the sensory-specific details of the rewards they predict. Here we interrogated the function of the basolateral amygdala (BLA) and its interaction with the lateral orbitofrontal cortex (lOFC) in the ability to learn such stimulus-outcome associations and use these memories to guide decision making. Using optical recording and inhibition approaches, Pavlovian cue-reward conditioning, and the outcome-selective Pavlovian-to-instrumental transfer (PIT) test in male rats, we found that the BLA is robustly activated at the time of stimulus-outcome learning and that this activity is necessary for sensory-specific stimulus-outcome memories to be encoded, so they can subsequently influence reward choices. Direct input from the lOFC was found to support the BLA in this function. Prior work has shown that activity in BLA projections back to the lOFC supports the use of stimulus-outcome memories to influence decision making. By multiplexing optogenetic and chemogenetic inhibition, we performed a serial circuit disconnection and found that the lOFC→BLA and BLA→lOFC pathways form a functional circuit regulating the encoding (lOFC→BLA) and subsequent use (BLA→lOFC) of the stimulus-dependent, sensory-specific reward memories that are critical for adaptive, appetitive decision making.
Data availability
All data and code supporting the findings of this study are available from the corresponding author upon request and via Dryad (doi:10.5068/D1109S).
- A bidirectional corticoamygdala circuit for the encoding and retrieval of detailed reward memories. Dryad Digital Repository, doi:10.5068/dryad.D1109S.
Article and author information
Funding
- National Institutes of Health (DA035443): Kate M Wassum
- National Science Foundation: Ana C Sias
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: All procedures were conducted in accordance with the NIH Guide for the Care and Use of Laboratory Animals and were approved by the UCLA Institutional Animal Care and Use Committee.
Copyright
© 2021, Sias et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 4,697 views
- 543 downloads
- 40 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
- Neuroscience
Sequenced reactivations of hippocampal neurons called replays, concomitant with sharp-wave ripples in the local field potential, are critical for the consolidation of episodic memory, but whether replays depend on the brain's reward or novelty signals is unknown. Here, we combined chemogenetic silencing of dopamine neurons in the ventral tegmental area (VTA) with simultaneous electrophysiological recordings in dorsal hippocampal CA1 in freely behaving male rats experiencing changes to reward magnitude and environmental novelty. Surprisingly, VTA silencing did not prevent ripple increases where reward was increased, but caused dramatic, aberrant ripple increases where reward was unchanged. These increases were associated with increased reverse-ordered replays. On familiar tracks this effect disappeared, and ripples tracked reward prediction error (RPE), indicating that non-VTA reward signals were sufficient to direct replay. Our results reveal a novel dependence of hippocampal replay on dopamine and a role for a VTA-independent RPE signal that is reliable only in familiar environments.
- Neuroscience
Active inference integrates perception, decision-making, and learning into a unified theoretical framework, providing an efficient way to trade off exploration and exploitation by minimizing (expected) free energy. In this study, we asked how the brain represents values and uncertainties (novelty and variability), and how it resolves these uncertainties in the exploration-exploitation trade-off under the active inference framework. Twenty-five participants performed a contextual two-armed bandit task with electroencephalogram (EEG) recordings. By comparing the model evidence for active inference and reinforcement learning models of choice behavior, we show that active inference better explains human decision-making under novelty and variability, which entails exploration or information seeking. The EEG sensor-level results show that activity in the frontal, central, and parietal regions is associated with novelty, while activity in the frontal and central brain regions is associated with variability. The EEG source-level results indicate that expected free energy is encoded in the frontal pole and middle frontal gyrus, and that the uncertainties are encoded in distinct but partially overlapping brain regions. Our study dissociates expected free energy and uncertainties in active inference theory and identifies their neural correlates, speaking to the construct validity of active inference in characterizing the cognitive processes underlying human decisions. It provides behavioral and neural evidence of active inference in decision processes and insights into the neural mechanisms of human decisions under uncertainty.