Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories

  1. Itsaso Olasagasti (corresponding author)
  2. Anne-Lise Giraud
  1. University of Geneva, Switzerland

Abstract

Speech perception presumably arises from internal models of how specific sensory features are associated with speech sounds. These features change constantly (e.g., different speakers or articulation modes), and listeners need to recalibrate their internal models by appropriately weighting new against old evidence. Models of speech recalibration classically ignore this volatility. The effect of volatility in tasks where sensory cues were associated with arbitrary, experimenter-defined categories was well described by models that continuously adapt the learning rate while keeping a single representation of the category. Using neurocomputational modelling, we show that recalibration of natural speech sound categories is better described by representing these categories at different time scales. We illustrate our proposal by modelling fast recalibration of speech sounds after experiencing the McGurk effect. We propose that working representations of speech categories are driven both by their current environment and by their long-term memory representations.
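The core idea of the abstract can be sketched with a toy update rule. The following Python snippet is an illustrative sketch only, not the authors' MATLAB implementation: a speech category mean is maintained at two time scales, where a fast "working" representation recalibrates quickly to new evidence while a slow "long-term" representation barely moves and anchors the working one. All parameter values (`lr_fast`, `lr_slow`, `anchor`) are hypothetical.

```python
def recalibrate(working, long_term, observation,
                lr_fast=0.3, lr_slow=0.01, anchor=0.1):
    """One update step integrating prediction errors at two time scales.

    Illustrative sketch; parameter values are arbitrary, not from the paper.
    """
    fast_error = observation - working       # error against the working representation
    slow_error = observation - long_term     # error against the long-term representation
    # Working representation: pulled toward current evidence, but also
    # anchored back toward the long-term memory representation.
    working = working + lr_fast * fast_error + anchor * (long_term - working)
    # Long-term representation: updated only slowly.
    long_term = long_term + lr_slow * slow_error
    return working, long_term

# Example: repeated exposure to a shifted token (as in McGurk-type
# audiovisual conflict) rapidly shifts the working category while the
# long-term representation stays almost fixed.
working, long_term = 0.0, 0.0
for _ in range(10):
    working, long_term = recalibrate(working, long_term, observation=1.0)
```

After a few exposures, `working` has moved most of the way toward the new evidence while `long_term` has barely changed, so removing the biased input lets the anchor term pull the working representation back, capturing the rapid-but-reversible character of recalibration.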

Data availability

The original MATLAB scripts used to run the simulations are available online (https://gitlab.unige.ch/Miren.Olasagasti/recalibration-of-speech-categories).

Article and author information

Author details

  1. Itsaso Olasagasti

    Basic Neurosciences, University of Geneva, Geneva, Switzerland
    For correspondence
    itsaso.olasagasti@gmail.com
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-5172-5373
  2. Anne-Lise Giraud

    Department of Neuroscience, University of Geneva, Geneva, Switzerland
    Competing interests
    The authors declare that no competing interests exist.

Funding

Swiss National Science Foundation (320030B_182855)

  • Anne-Lise Giraud

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2020, Olasagasti & Giraud

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 798 views
  • 153 downloads
  • 5 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Itsaso Olasagasti, Anne-Lise Giraud (2020) Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories. eLife 9:e44516. https://doi.org/10.7554/eLife.44516
