Using the past to estimate sensory uncertainty

  1. Ulrik Beierholm (corresponding author)
  2. Tim Rohe
  3. Ambra Ferrari
  4. Oliver Stegle
  5. Uta Noppeney
  1. Durham University, United Kingdom
  2. Friedrich Alexander University Erlangen-Nuernberg, Germany
  3. University of Birmingham, United Kingdom
  4. DKFZ, Germany

Abstract

To form a more reliable percept of the environment, the brain needs to estimate its own sensory uncertainty. Current theories of perceptual inference assume that the brain computes sensory uncertainty instantaneously and independently for each stimulus. We evaluated this assumption in four psychophysical experiments, in which human observers localized auditory signals that were presented synchronously with spatially disparate visual signals. Critically, the visual noise changed dynamically over time, either continuously or with intermittent jumps. Our results show that observers integrate audiovisual inputs weighted by sensory uncertainty estimates that combine information from past and current signals, consistent with an optimal Bayesian learner that can be approximated by exponential discounting. Our results challenge leading models of perceptual inference in which sensory uncertainty estimates depend only on the current stimulus. They demonstrate that the brain capitalizes on the temporal dynamics of the external world and estimates sensory uncertainty by combining past experiences with new incoming sensory signals.
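The two computations described in the abstract can be illustrated with a minimal sketch: an exponentially discounted running estimate of visual noise (combining past and current evidence), and standard reliability-weighted audiovisual fusion. This is an illustration of the general idea only, not the authors' fitted model; the function names and the discount factor `gamma` are hypothetical.

```python
import numpy as np

def discounted_var(noise_samples, gamma=0.8):
    """Exponentially discounted estimate of sensory variance.

    Each new sample is blended with the running estimate; gamma is a
    hypothetical discount factor (gamma=0 recovers the purely
    instantaneous estimate assumed by classical models).
    """
    est = noise_samples[0]
    history = []
    for s in noise_samples:
        est = gamma * est + (1 - gamma) * s
        history.append(est)
    return np.array(history)

def fuse(mu_aud, var_aud, mu_vis, var_vis):
    """Reliability-weighted audiovisual integration: each cue is
    weighted by its inverse variance (its reliability)."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    return w_vis * mu_vis + (1 - w_vis) * mu_aud

# A sudden jump in visual noise: the discounted estimate adapts
# gradually rather than instantaneously, so uncertainty estimates
# carry information from past signals.
noise = np.array([1.0] * 5 + [4.0] * 5)
est = discounted_var(noise, gamma=0.8)
```

After the jump, `est` climbs toward the new noise level over several trials instead of tracking it immediately, which is the qualitative signature the experiments test for.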

Data availability

The raw human behavioral data and computational model predictions, as well as the code for computational modelling and the analysis scripts, are available in an OSF repository: https://osf.io/gt4jb/


Article and author information

Author details

  1. Ulrik Beierholm

    Psychology Department, Durham University, Durham, United Kingdom
    For correspondence
    ulrik.beierholm@durham.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-7296-7996
  2. Tim Rohe

    Institute of Psychology, Friedrich Alexander University Erlangen-Nuernberg, Erlangen, Germany
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0001-9713-3712
  3. Ambra Ferrari

    Centre for Computational Neuroscience and Cognitive Robotics, University of Birmingham, Birmingham, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0003-1946-3884
  4. Oliver Stegle

    DKFZ, Heidelberg, Germany
    Competing interests
    The authors declare that no competing interests exist.
  5. Uta Noppeney

    Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.

Funding

H2020 European Research Council (ERC-multsens,309349)

  • Uta Noppeney

Max Planck Society

  • Tim Rohe
  • Uta Noppeney

Deutsche Forschungsgemeinschaft (DFG RO 5587/1-1)

  • Tim Rohe

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: All volunteers participated in the study after giving written informed consent. The study was approved by the human research review committee of the University of Tuebingen (approval number 432 2007 BO1) and the research review committee of the University of Birmingham (approval number ERN_15-1458AP1).

Reviewing Editor

  1. Tobias Reichenbach, Imperial College London, United Kingdom

Version history

  1. Received: December 4, 2019
  2. Accepted: December 13, 2020
  3. Accepted Manuscript published: December 15, 2020 (version 1)
  4. Version of Record published: January 13, 2021 (version 2)

Copyright

© 2020, Beierholm et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.



Cite this article

Ulrik Beierholm, Tim Rohe, Ambra Ferrari, Oliver Stegle, Uta Noppeney (2020) Using the past to estimate sensory uncertainty. eLife 9:e54172. https://doi.org/10.7554/eLife.54172

