Single caudate neurons encode temporally discounted value for formulating motivation for action

  1. Yukiko Hori
  2. Koki Mimura
  3. Yuji Nagai
  4. Atsushi Fujimoto
  5. Kei Oyama
  6. Erika Kikuchi
  7. Ken-ichi Inoue
  8. Masahiko Takada
  9. Tetsuya Suhara
  10. Barry J Richmond
  11. Takafumi Minamimoto (corresponding author)
  1. National Institutes for Quantum and Radiological Science and Technology, Japan
  2. Kyoto University, Japan
  3. NIMH/NIH/DHHS, Bethesda, United States

Abstract

The term ‘temporal discounting’ describes both choice preferences and motivation for delayed rewards. Here we show that neuronal activity in the dorsal part of the primate caudate head (dCDh) signals the temporally discounted value needed to compute the motivation for delayed rewards. Macaque monkeys performed an instrumental task in which visual cues indicated the forthcoming reward size and the delay before its delivery. Single dCDh neurons represented the temporally discounted value without reflecting changes in the animal’s physiological state. Bilateral pharmacological or chemogenetic inactivation of dCDh markedly distorted the normal task performance based on the integration of reward size and delay, but did not affect performance when reward sizes varied without delay. These results suggest that dCDh is involved in encoding the integrated multidimensional information critical for motivation.
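The temporally discounted value described in the abstract is commonly modeled with hyperbolic discounting, in which a reward's subjective value falls with delay as V = R / (1 + kD). The sketch below is illustrative only, not the paper's released analysis code, and the discount rate k is a hypothetical example value.

```python
def discounted_value(reward_size: float, delay_s: float, k: float = 0.1) -> float:
    """Hyperbolic temporal discounting: V = R / (1 + k * D).

    reward_size -- objective reward magnitude R
    delay_s     -- delay D before reward delivery (seconds)
    k           -- discount rate (larger k = steeper devaluation)
    """
    return reward_size / (1.0 + k * delay_s)

# Integration of reward size and delay: the same reward is worth
# progressively less the longer the animal must wait for it.
print(discounted_value(4.0, 0.0))   # immediate reward: V = 4.0
print(discounted_value(4.0, 30.0))  # 30 s delay with k=0.1: V = 1.0
```

Under this form, a large delayed reward can have a lower discounted value than a smaller immediate one, which is the size-by-delay integration the task was designed to probe.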

Data availability

We provide source data to reproduce the main results of the paper presented in Figures 1, 5, 7 and 8.

Article and author information

Author details

  1. Yukiko Hori

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0003-1023-9587
  2. Koki Mimura

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    Competing interests
    The authors declare that no competing interests exist.
  3. Yuji Nagai

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0001-7005-0749
  4. Atsushi Fujimoto

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-1621-2003
  5. Kei Oyama

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    Competing interests
    The authors declare that no competing interests exist.
  6. Erika Kikuchi

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    Competing interests
    The authors declare that no competing interests exist.
  7. Ken-ichi Inoue

    Systems Neuroscience Section, Primate Research Institute, Kyoto University, Inuyama, Japan
    Competing interests
    The authors declare that no competing interests exist.
  8. Masahiko Takada

    Systems Neuroscience Section, Primate Research Institute, Kyoto University, Inuyama, Japan
    Competing interests
    The authors declare that no competing interests exist.
  9. Tetsuya Suhara

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    Competing interests
    The authors declare that no competing interests exist.
  10. Barry J Richmond

    Laboratory of Neuropsychology, NIMH/NIH/DHHS, Bethesda, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-8234-1540
  11. Takafumi Minamimoto

    Department of Functional Brain Imaging, National Institute of Radiological Sciences, National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
    For correspondence
    minamimoto.takafumi@qst.go.jp
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0003-4305-0174

Funding

Japan Society for the Promotion of Science (JP18H04037, JP20H05955)

  • Takafumi Minamimoto

Japan Agency for Medical Research and Development (JP20dm0107146)

  • Takafumi Minamimoto

Japan Agency for Medical Research and Development (JP20dm0207077)

  • Masahiko Takada

Japan Agency for Medical Research and Development (JP20dm0307021)

  • Ken-ichi Inoue

National Institute of Mental Health (Annual Report ZIAMH002619)

  • Barry J Richmond

Primate Research Institute, Kyoto University (2020-A-6)

  • Takafumi Minamimoto

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Alicia Izquierdo, University of California, Los Angeles, United States

Ethics

Animal experimentation: All surgical and experimental procedures were approved by the National Institutes for Quantum and Radiological Science and Technology (11-1038-11) and by the Animal Care and Use Committee of the National Institute of Mental Health (Annual Report ZIAMH002619), and were in accordance with the Institute of Laboratory Animal Research Guide for the Care and Use of Laboratory Animals.

Version history

  1. Preprint posted: May 18, 2020 (view preprint)
  2. Received: July 20, 2020
  3. Accepted: July 29, 2021
  4. Accepted Manuscript published: July 30, 2021 (version 1)
  5. Version of Record published: August 9, 2021 (version 2)

Copyright

This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

Metrics

  • 907 views
  • 139 downloads
  • 13 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Yukiko Hori
  2. Koki Mimura
  3. Yuji Nagai
  4. Atsushi Fujimoto
  5. Kei Oyama
  6. Erika Kikuchi
  7. Ken-ichi Inoue
  8. Masahiko Takada
  9. Tetsuya Suhara
  10. Barry J Richmond
  11. Takafumi Minamimoto
(2021)
Single caudate neurons encode temporally discounted value for formulating motivation for action
eLife 10:e61248.
https://doi.org/10.7554/eLife.61248


Further reading

    1. Neuroscience
    Jack W Lindsey, Elias B Issa
    Research Article

    Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network models (DNNs) of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether (‘invariance’), represented in non-interfering subspaces of population activity (‘factorization’) or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters – lighting, background, camera viewpoint, and object pose – in a diverse library of DNN models of the visual system. Models which best matched neural, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and DNN models thereof.

    1. Neuroscience
    Zhaoran Zhang, Huijun Wang ... Kunlin Wei
    Research Article

    The sensorimotor system can recalibrate itself without our conscious awareness, a type of procedural learning whose computational mechanism remains undefined. Recent findings on implicit motor adaptation, such as over-learning from small perturbations and fast saturation for increasing perturbation size, challenge existing theories based on sensory errors. We argue that perceptual error, arising from the optimal combination of movement-related cues, is the primary driver of implicit adaptation. Central to our theory is the increasing sensory uncertainty of visual cues with increasing perturbations, which was validated through perceptual psychophysics (Experiment 1). Our theory predicts the learning dynamics of implicit adaptation across a spectrum of perturbation sizes on a trial-by-trial basis (Experiment 2). It explains proprioception changes and their relation to visual perturbation (Experiment 3). By modulating visual uncertainty in perturbation, we induced unique adaptation responses in line with our model predictions (Experiment 4). Overall, our perceptual error framework outperforms existing models based on sensory errors, suggesting that perceptual error in locating one’s effector, supported by Bayesian cue integration, underpins the sensorimotor system’s implicit adaptation.