Humans parsimoniously represent auditory sequences by pruning and completing the underlying network structure

Abstract

Successive auditory inputs are rarely independent; their relationships range from local transitions between elements to hierarchical and nested representations. In many situations, humans retrieve these dependencies even from limited data. However, this learning at multiple scales is poorly understood. Here, we used the formalism of network science to study the representation of local and higher-order structures, and their interaction, in auditory sequences. We show that human adults exhibited biases in their perception of local transitions between elements, which made them sensitive to higher-order network structures such as communities. This behavior is consistent with the creation of a parsimonious simplified model from the evidence they receive, achieved by pruning and completing relationships between network elements. This observation suggests that the brain does not rely on exact memories but on a parsimonious representation of the world. Moreover, this bias can be analytically modeled as a memory/efficiency trade-off. The model correctly accounts for previous findings, including local transition probabilities as well as higher-order network structures, unifying sequence learning across scales. Finally, we propose putative brain implementations of such a bias.

Data availability

All data and analyses are publicly available at https://osf.io/e8u7f/.

Article and author information

Author details

  1. Lucas Benjamin

    Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, Paris-Saclay, France
    For correspondence
    lucas.benjamin@cea.fr
    Competing interests
    The authors declare that no competing interests exist.
ORCID: 0000-0002-9578-6039
  2. Ana Fló

    Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, Paris-Saclay, France
    Competing interests
    The authors declare that no competing interests exist.
ORCID: 0000-0002-3260-0559
  3. Fosca Al Roumi

    Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, Paris-Saclay, France
    Competing interests
    The authors declare that no competing interests exist.
ORCID: 0000-0001-9590-080X
  4. Ghislaine Dehaene-Lambertz

    Cognitive Neuroimaging Unit, CNRS ERL 9003, INSERM U992, Paris-Saclay, France
    Competing interests
    The authors declare that no competing interests exist.
ORCID: 0000-0003-2221-9081

Funding

European Research Council (695710)

  • Ghislaine Dehaene-Lambertz

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: All participants gave their informed consent for participation and publication, and this research was approved by the ethical research committee of Paris-Saclay University under the reference CER-Paris-Saclay-2019-063.

Copyright

© 2023, Benjamin et al.

This article is distributed under the terms of the Creative Commons Attribution License, permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 891 views
  • 144 downloads
  • 14 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

  1. Lucas Benjamin
  2. Ana Fló
  3. Fosca Al Roumi
  4. Ghislaine Dehaene-Lambertz
(2023)
Humans parsimoniously represent auditory sequences by pruning and completing the underlying network structure
eLife 12:e86430.
https://doi.org/10.7554/eLife.86430
