Reinforcement biases subsequent perceptual decisions when confidence is low: a widespread behavioral phenomenon
Abstract
Learning from successes and failures often improves the quality of subsequent decisions. Past outcomes, however, should not influence purely perceptual decisions after task acquisition is complete, since such tasks are designed so that only sensory evidence determines the correct choice. Yet numerous studies report that outcomes can bias perceptual decisions, causing spurious changes in choice behavior without improving accuracy. Here we show that the effects of reward on perceptual decisions are principled: past rewards bias future choices specifically when the previous choice was difficult and hence decision confidence was low. We identified this phenomenon in six datasets from four laboratories, across mice, rats, and humans, and across sensory modalities from olfaction and audition to vision. We show that this choice-updating strategy can be explained by reinforcement learning models incorporating statistical decision confidence into their teaching signals. Thus, despite being suboptimal from the experimenter’s perspective, confidence-guided reinforcement learning optimizes behavior in uncertain, real-world situations.
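The abstract's core mechanism can be illustrated with a minimal sketch. In a confidence-guided reinforcement learning model, the expected reward on a perceptual trial is the statistical decision confidence (the probability that the choice was correct given the noisy percept), so the teaching signal is reward minus confidence. The Gaussian noise model, the learning rate, and the function names below are illustrative assumptions, not the paper's exact implementation:

```python
from statistics import NormalDist

def confidence(percept, sigma=1.0):
    """P(choice is correct | percept) under an assumed Gaussian
    noise model with a decision boundary at zero."""
    return NormalDist().cdf(abs(percept) / sigma)

def update(value, reward, conf, alpha=0.2):
    """Confidence-guided update (sketch): the teaching signal is
    reward minus confidence, scaled by an assumed learning rate."""
    delta = reward - conf
    return value + alpha * delta, delta

# Difficult trial: percept near the boundary, so confidence is low.
_, delta_hard = update(0.0, reward=1.0, conf=confidence(0.1))
# Easy trial: strong percept, so confidence is high.
_, delta_easy = update(0.0, reward=1.0, conf=confidence(2.0))

# The same reward produces a larger teaching signal after a difficult
# (low-confidence) choice, hence a larger bias on subsequent trials.
assert delta_hard > delta_easy
```

This reproduces the qualitative pattern in the abstract: outcomes shift future choices most when confidence was low, and barely at all when the decision was easy.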
Data availability
The data used in this study are available at http://dx.doi.org/10.6084/m9.figshare.4300043
Article and author information
Funding
Wellcome (106101)
- Armin Lak
Wellcome (213465)
- Armin Lak
National Institutes of Health (R01 MH110404)
- Naoshige Uchida
National Institutes of Health (R01MH097061 and R01DA038209)
- Naoshige Uchida
Wellcome (205093)
- Matteo Carandini
Deutsche Forschungsgemeinschaft (DO 1240/2-1 and DO 1240/3-1)
- Tobias H Donner
RIKEN-CBS
- Emily Hueske
- Susumu Tonegawa
JPB Foundation
- Emily Hueske
- Susumu Tonegawa
Howard Hughes Medical Institute
- Emily Hueske
- Susumu Tonegawa
German Academic Exchange Service
- Anne E Urai
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Animal experimentation: The experimental procedures were approved by institutional committees at Cold Spring Harbor Laboratory (for experiments on rats) and at MIT and Harvard University (for mouse auditory experiments), and were in accordance with National Institutes of Health standards (project ID: 18-14-11-08-1). Experiments on mouse visual decisions were approved by the Home Office of the United Kingdom (license 70/8021). Experiments in humans were approved by the ethics committee at the University of Amsterdam (project ID: 2014-BC-3376).
Human subjects: The ethics committee at the University of Amsterdam approved the study (project ID: 2014-BC-3376), and all observers gave their informed consent.
Copyright
© 2020, Lak et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 7,673 views
- 1,160 downloads
- 91 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.