The functional form of value normalization in human reinforcement learning
Abstract
Reinforcement learning research in humans and other species indicates that rewards are represented in a context-dependent manner. More specifically, reward representations seem to be normalized as a function of the value of the alternative options. The dominant view postulates that value context-dependence is achieved via a divisive normalization rule, inspired by perceptual decision-making research. However, behavioral and neural evidence points to another plausible mechanism: range normalization. Critically, previous experimental designs were ill-suited to disentangle the divisive and the range normalization accounts, which generate similar behavioral predictions in many circumstances. To address this question, we designed a new learning task where we manipulated, across learning contexts, the number of options and the value ranges. Behavioral and computational analyses falsify the divisive normalization account and instead support the range normalization rule. Together, these results shed new light on the computational mechanisms underlying context-dependence in learning and decision-making.
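The two candidate mechanisms contrasted in the abstract can be sketched in a few lines. This is a hypothetical illustration of the standard formulations (divisive normalization divides each value by the sum over options; range normalization rescales by the context's minimum and maximum), not the paper's implementation; the function names and the `sigma` offset parameter are assumptions:

```python
import numpy as np

def divisive_norm(values, sigma=0.0):
    """Divisive normalization: each value divided by the
    (optionally offset) sum over all options in the context."""
    values = np.asarray(values, dtype=float)
    return values / (sigma + values.sum())

def range_norm(values):
    """Range normalization: rescale values to [0, 1] using the
    context's minimum and maximum."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    return (values - lo) / (hi - lo)

# The key manipulation in the task: same value range (10..50),
# different numbers of options.
two_opts = [10, 50]
three_opts = [10, 30, 50]

# Under range normalization, the extreme options keep the same
# normalized values (0 and 1) regardless of set size.
print(range_norm(two_opts), range_norm(three_opts))

# Under divisive normalization, adding a mid-value option grows the
# denominator, so every normalized value shrinks with set size.
print(divisive_norm(two_opts), divisive_norm(three_opts))
```

This is why varying the number of options while holding the value range fixed can, in principle, tell the two accounts apart: they make diverging predictions about how the representation of the extreme options changes with set size.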
Data availability
Data and code are available at https://github.com/hrl-team/3options
Article and author information
Author details
Funding
European Research Council (101043804)
- Stefano Palminteri
Agence Nationale de la Recherche (ANR-21-CE23-0002-02)
- Stefano Palminteri
Agence Nationale de la Recherche (ANR-21-CE37-0008-01)
- Stefano Palminteri
Agence Nationale de la Recherche (ANR-21-CE28-0024-01)
- Stefano Palminteri
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Reviewing Editor
- Thorsten Kahnt, National Institute on Drug Abuse Intramural Research Program, United States
Ethics
Human subjects: The research was carried out following the principles and guidelines for experiments including human participants provided in the Declaration of Helsinki (1964, revised in 2013). The INSERM Ethical Review Committee (IRB00003888) approved the study, and participants provided written informed consent prior to their inclusion.
Version history
- Preprint posted: July 16, 2022 (view preprint)
- Received: October 2, 2022
- Accepted: July 9, 2023
- Accepted Manuscript published: July 10, 2023 (version 1)
- Version of Record published: August 1, 2023 (version 2)
Copyright
© 2023, Bavard & Palminteri
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 1,185 views
- 250 downloads
- 1 citation
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
The gain-of-function mutation in the TALK-1 K+ channel (p.L114P) is associated with maturity-onset diabetes of the young (MODY). TALK-1 is a key regulator of β-cell electrical activity and glucose-stimulated insulin secretion. The KCNK16 gene encoding TALK-1 is the most abundant and β-cell-restricted K+ channel transcript. To investigate the impact of KCNK16 L114P on glucose homeostasis and confirm its association with MODY, a mouse model containing the Kcnk16 L114P mutation was generated. Heterozygous and homozygous Kcnk16 L114P mice exhibit increased neonatal lethality in the C57BL/6J and the CD-1 (ICR) genetic background, respectively. Lethality is likely a result of severe hyperglycemia observed in the homozygous Kcnk16 L114P neonates due to lack of glucose-stimulated insulin secretion and can be reduced with insulin treatment. Kcnk16 L114P increased whole-cell β-cell K+ currents resulting in blunted glucose-stimulated Ca2+ entry and loss of glucose-induced Ca2+ oscillations. Thus, adult Kcnk16 L114P mice have reduced glucose-stimulated insulin secretion and plasma insulin levels, which significantly impairs glucose homeostasis. Taken together, this study shows that the MODY-associated Kcnk16 L114P mutation disrupts glucose homeostasis in adult mice resembling a MODY phenotype and causes neonatal lethality by inhibiting islet insulin secretion during development. These data suggest that TALK-1 is an islet-restricted target for the treatment for diabetes.