The functional form of value normalization in human reinforcement learning

  1. Sophie Bavard, Universität Hamburg, Germany (corresponding author)
  2. Stefano Palminteri, Ecole Normale Supérieure, France (corresponding author)

Abstract

Reinforcement learning research in humans and other species indicates that rewards are represented in a context-dependent manner. More specifically, reward representations seem to be normalized as a function of the value of the alternative options. The dominant view postulates that value context-dependence is achieved via a divisive normalization rule, inspired by perceptual decision-making research. However, behavioral and neural evidence points to another plausible mechanism: range normalization. Critically, previous experimental designs were ill-suited to disentangle the divisive and the range normalization accounts, which generate similar behavioral predictions in many circumstances. To address this question, we designed a new learning task in which we manipulated, across learning contexts, the number of options and the value ranges. Behavioral and computational analyses falsify the divisive normalization account and instead support the range normalization rule. Together, these results shed new light on the computational mechanisms underlying context-dependence in learning and decision-making.
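To make the contrast concrete, the two candidate rules can be sketched as follows. This is a minimal formalization using their standard textbook forms; the symbols σ, N, V_min, and V_max are illustrative notation and may not match the exact parameterization fitted in the paper. Divisive normalization scales each option value V_i by the summed value of all N options in the choice context (σ is an optional semi-saturation constant), whereas range normalization rescales V_i by the spread between the best and worst options in the context:

    v_i(divisive) = V_i / (σ + Σ_j V_j),  j = 1, ..., N
    v_i(range)    = (V_i - V_min) / (V_max - V_min)

The divisive prediction therefore changes with the number of options and their summed value, while the range prediction depends only on the context's maximum and minimum; independently manipulating the number of options and the value ranges, as in the task described above, is what allows the two rules to be told apart.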

Data availability

Data and code are available at https://github.com/hrl-team/3options.

Article and author information

Author details

  1. Sophie Bavard

    Department of Psychology, Universität Hamburg, Hamburg, Germany
    For correspondence: sophie.bavard@gmail.com
    Competing interests: The authors declare that no competing interests exist.
    ORCID: 0000-0002-9283-2976
  2. Stefano Palminteri

    Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
    For correspondence: stefano.palminteri@ens.fr
    Competing interests: The authors declare that no competing interests exist.
    ORCID: 0000-0001-5768-6646

Funding

  • European Research Council (101043804): Stefano Palminteri
  • Agence Nationale de la Recherche (ANR-21-CE23-0002-02): Stefano Palminteri
  • Agence Nationale de la Recherche (ANR-21-CE37-0008-01): Stefano Palminteri
  • Agence Nationale de la Recherche (ANR-21-CE28-0024-01): Stefano Palminteri

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: The research was carried out in accordance with the principles and guidelines for experiments involving human participants set out in the Declaration of Helsinki (1964, revised in 2013). The study was approved by the INSERM Ethical Review Committee (IRB00003888), and participants provided written informed consent prior to their inclusion.

Copyright

© 2023, Bavard & Palminteri

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,789 views
  • 340 downloads
  • 11 citations

Views, downloads, and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Bavard S, Palminteri S (2023) The functional form of value normalization in human reinforcement learning. eLife 12:e83891. https://doi.org/10.7554/eLife.83891
