Visual attention modulates the integration of goal-relevant evidence and not value

  1. Pradyumna Sepulveda (corresponding author)
  2. Marius Usher
  3. Ned Davies
  4. Amy A Benson
  5. Pietro Ortoleva
  6. Benedetto De Martino (corresponding author)
  1. University College London, United Kingdom
  2. Tel Aviv University, Israel
  3. Princeton University, United States

Abstract

When choosing between options, such as food items presented in plain view, people tend to choose the option they spend longer looking at. The prevailing interpretation is that visual attention increases value. However, in previous studies 'value' was coupled to a behavioural goal, since subjects had to choose the item they preferred. This makes it impossible to discern whether visual attention acts on value itself or instead modulates whichever information is most relevant to the decision-maker's goal. Here we present the results of two independent studies (a perceptual and a value-based task) that decouple value from goal-relevant information through specific task framing. Combining psychophysics with computational modelling, we show that, contrary to the current interpretation, attention does not boost value but instead modulates goal-relevant information. This work provides a novel and more general mechanism by which attention interacts with choice.
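The core logic of this dissociation can be illustrated with a small simulation. The sketch below is not the authors' code: the item values, the gaze-discount parameter theta, and the softmax readout are illustrative assumptions in the spirit of gaze-weighted accumulator models (the aDDM/GLAM family). It contrasts the two competing accounts: if attention boosts value, an item fixated for longer should be chosen less often when the goal is to pick the least-liked item; if attention boosts goal-relevant evidence, the fixated item should be favoured under both framings.

```python
import numpy as np

def goal_transform(values, goal):
    """Map raw value to goal-relevant evidence (0-10 value scale assumed)."""
    return values if goal == "like" else 10.0 - values

def gaze_weighted_evidence(values, gaze, goal, hypothesis, theta=0.3):
    """Mean gaze-weighted evidence per item, in the spirit of aDDM/GLAM.

    gaze  : proportion of the trial each item was fixated (sums to 1).
    theta : multiplicative discount on whichever item is not fixated.
    hypothesis = "value"         -> gaze discounts the value signal, then the
                                    goal transform is applied (attention
                                    boosts value).
    hypothesis = "goal_evidence" -> the goal transform is applied first and
                                    gaze discounts goal-relevant evidence
                                    (the account argued for here).
    """
    values = np.asarray(values, dtype=float)
    gaze = np.asarray(gaze, dtype=float)
    weight = gaze + theta * (1.0 - gaze)  # average fixation weight per item
    if hypothesis == "value":
        return goal_transform(weight * values, goal)
    return weight * goal_transform(values, goal)

def p_choose_left(evidence, beta=1.0):
    """Softmax readout standing in for the full accumulation-to-bound race."""
    z = np.exp(beta * (evidence - np.max(evidence)))
    return (z / z.sum())[0]

# Two equally valued items; the left one is fixated 70% of the time.
values, gaze = [5.0, 5.0], [0.7, 0.3]
for goal in ("like", "dislike"):
    for hypothesis in ("value", "goal_evidence"):
        p = p_choose_left(gaze_weighted_evidence(values, gaze, goal, hypothesis))
        print(f"{goal:>7} frame | attention boosts {hypothesis:>13}: "
              f"P(choose fixated item) = {p:.2f}")
```

With these illustrative parameters, the two accounts agree in the 'like' frame (the fixated item is favoured, P ≈ 0.80) but make opposite predictions in the 'dislike' frame (P ≈ 0.20 if attention boosts value, P ≈ 0.80 if it boosts goal-relevant evidence), which is exactly the dissociation the task framing exploits.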

Data availability

The data and code used in this study have been deposited at the Brain Decision Modelling Lab GitHub repository (https://github.com/BDMLab).


Article and author information

Author details

  1. Pradyumna Sepulveda

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    For correspondence
    p.sepulveda@ucl.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-0159-6777
  2. Marius Usher

    School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-8041-9060
  3. Ned Davies

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  4. Amy A Benson

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-8239-5266
  5. Pietro Ortoleva

    Department of Economics and Woodrow Wilson School, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  6. Benedetto De Martino

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    For correspondence
    benedettodemartino@gmail.com
    Competing interests
    The authors declare that no competing interests exist.

Funding

Chilean National Agency for Research and Development (Graduate student scholarship - DOCTORADO BECAS CHILE/2017 - 72180193)

  • Pradyumna Sepulveda

Wellcome Trust (Sir Henry Dale Fellowship, 102612/A/13/Z)

  • Benedetto De Martino

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: All participants gave written informed consent, and both studies were approved by the University College London Division of Psychology and Language Sciences ethics committee (project ID 1825/003).

Copyright

© 2020, Sepulveda et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 2,726 views
  • 358 downloads
  • 54 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Pradyumna Sepulveda, Marius Usher, Ned Davies, Amy A Benson, Pietro Ortoleva, Benedetto De Martino (2020) Visual attention modulates the integration of goal-relevant evidence and not value. eLife 9:e60705. https://doi.org/10.7554/eLife.60705

