Visual attention modulates the integration of goal-relevant evidence and not value

  1. Pradyumna Sepulveda (corresponding author)
  2. Marius Usher
  3. Ned Davies
  4. Amy A Benson
  5. Pietro Ortoleva
  6. Benedetto De Martino (corresponding author)
  1. University College London, United Kingdom
  2. Tel Aviv University, Israel
  3. Princeton University, United States

Abstract

When choosing between options, such as food items presented in plain view, people tend to choose the option they spend longer looking at. The prevailing interpretation is that visual attention increases value. However, in previous studies 'value' was coupled to a behavioural goal, since subjects had to choose the item they preferred. This makes it impossible to discern whether visual attention acts on value or whether, instead, it modulates the information most relevant to the goal of the decision-maker. Here we present the results of two independent studies, a perceptual and a value-based task, that allow us to decouple value from goal-relevant information using specific task framing. Combining psychophysics with computational modelling, we show that, contrary to the current interpretation, attention does not boost value but instead modulates goal-relevant information. This work provides a novel and more general mechanism by which attention interacts with choice.
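To make the distinction concrete, the sketch below contrasts the two accounts in a toy gaze-weighted accumulation model. It is an illustrative simplification, not the authors' fitted model: the race architecture, the 0-10 rating scale, the evidence transform used for the 'dislike' frame, and the parameter values (theta, drift, noise, threshold) are all assumptions of this sketch. The point it illustrates is the one stated above: if attention amplifies goal-relevant evidence rather than value, then when the goal is to pick the least valued item, looking longer at the worse item should make it more likely to be chosen.

```python
import numpy as np

rng = np.random.default_rng(1)


def p_choose_left(v_left, v_right, gaze_left, goal="like",
                  theta=0.3, drift=0.01, noise=0.03,
                  threshold=1.0, max_steps=2000, n_sims=500):
    """Probability of choosing the left item in a toy gaze-weighted race.

    Evidence is defined by the goal rather than by value itself:
      goal == "like":    evidence = value (pick the best item)
      goal == "dislike": evidence = 10 - value (pick the worst item),
                         assuming ratings on a 0-10 scale.
    The momentarily unattended item's evidence is discounted by theta.
    All parameter values are illustrative, not fitted to any data.
    """
    if goal == "like":
        e = np.array([v_left, v_right], dtype=float)
    else:
        e = np.array([10.0 - v_left, 10.0 - v_right])

    wins_left = 0
    for _ in range(n_sims):
        acc = np.zeros(2)
        for _ in range(max_steps):
            # Stochastic fixation: attend left with probability gaze_left.
            look_left = rng.random() < gaze_left
            w = np.array([1.0, theta]) if look_left else np.array([theta, 1.0])
            acc += drift * w * e + noise * rng.normal(size=2)
            if acc.max() >= threshold:  # first accumulator to threshold wins
                break
        wins_left += int(np.argmax(acc) == 0)
    return wins_left / n_sims


# "Dislike" frame: the left item is worse (value 4 vs 5), so it carries more
# goal-relevant evidence. Looking at it longer should raise its choice
# probability -- the opposite of what a pure value-boost account would predict.
print(p_choose_left(4, 5, gaze_left=0.7, goal="dislike"))
print(p_choose_left(4, 5, gaze_left=0.3, goal="dislike"))
```

In this toy setup, raising gaze on the lower-valued item from 30% to 70% of fixation time moves it from being chosen on a minority of simulated trials to being chosen almost always under the 'dislike' goal, which is the qualitative signature of attention weighting goal-relevant evidence rather than value.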

Data availability

Data and code used for this study have been deposited in the Brain Decision Modelling Lab GitHub repository (https://github.com/BDMLab).


Article and author information

Author details

  1. Pradyumna Sepulveda

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    For correspondence
    p.sepulveda@ucl.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-0159-6777
  2. Marius Usher

    School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-8041-9060
  3. Ned Davies

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  4. Amy A Benson

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-8239-5266
  5. Pietro Ortoleva

    Department of Economics and Woodrow Wilson School, Princeton University, Princeton, United States
    Competing interests
    The authors declare that no competing interests exist.
  6. Benedetto De Martino

    Institute of Cognitive Neuroscience, University College London, London, United Kingdom
    For correspondence
    benedettodemartino@gmail.com
    Competing interests
    The authors declare that no competing interests exist.

Funding

Chilean National Agency for Research and Development (Graduate student scholarship - DOCTORADO BECAS CHILE/2017 - 72180193)

  • Pradyumna Sepulveda

Wellcome Trust (Sir Henry Dale Fellowship 102612/A/13/Z)

  • Benedetto De Martino

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: All participants signed a consent form, and both studies were approved by the University College London Division of Psychology and Language Sciences ethics committee (project ID 1825/003).

Copyright

© 2020, Sepulveda et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.



Cite this article

Sepulveda P, Usher M, Davies N, Benson AA, Ortoleva P, De Martino B (2020) Visual attention modulates the integration of goal-relevant evidence and not value. eLife 9:e60705. https://doi.org/10.7554/eLife.60705
