Categorical representation from sound and sight in the ventral occipito-temporal cortex of sighted and blind

  1. Stefania Mattioni (corresponding author)
  2. Mohamed Rezk
  3. Ceren Battal
  4. Roberto Bottini
  5. Karen E Cuculiza Mendoza
  6. Nikolaas N Oosterhof
  7. Olivier Collignon (corresponding author)

Affiliations:

  1. Université catholique de Louvain, Belgium
  2. University of Trento, Italy

Abstract

Is vision necessary for the development of the categorical organization of the ventral occipito-temporal cortex (VOTC)? We used fMRI to characterize VOTC responses to eight categories presented acoustically in sighted and early blind individuals, and visually in a separate sighted group. VOTC reliably encoded sound categories in both sighted and blind people, using a representational structure and connectivity partially similar to those found in vision. Sound categories were, however, more reliably encoded in the blind group than in the sighted group, and in a representational format closer to the one found in vision. Crucially, VOTC in blind people represents the categorical membership of sounds rather than their acoustic features. Our results suggest that sounds trigger categorical responses in the VOTC of congenitally blind and sighted people that partially match the topography and functional profile of the visual response, despite qualitative nuances in the categorical organization of VOTC between modalities and groups.
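
The cross-modal comparison of representational structure described above follows the logic of representational similarity analysis (RSA). As a minimal sketch of that logic, the Python snippet below builds a representational dissimilarity matrix (RDM) for each modality from simulated category-level response patterns and correlates the two; the variable names and simulated data are illustrative, not the authors' actual pipeline.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    n_categories, n_voxels = 8, 200  # eight categories, as in the study

    # Simulated category-level VOTC response patterns (category x voxel),
    # standing in for beta estimates from the auditory and visual sessions.
    auditory = rng.standard_normal((n_categories, n_voxels))
    visual = auditory + rng.standard_normal((n_categories, n_voxels))

    # First-order step: one RDM per modality, here 1 - Pearson correlation
    # between every pair of category patterns (condensed upper triangle).
    rdm_auditory = pdist(auditory, metric="correlation")
    rdm_visual = pdist(visual, metric="correlation")

    # Second-order step: correlate the RDMs to quantify how closely the
    # auditory representational geometry matches the visual one.
    rho, p = spearmanr(rdm_auditory, rdm_visual)
    print(f"Auditory-visual RDM similarity: rho = {rho:.2f} (p = {p:.3g})")

A higher rank correlation between the two RDMs indicates a more similar categorical geometry across modalities, which is the sense in which the blind group's auditory format can be described as closer to the one found in vision.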

Data availability

Processed data are available on OSF at https://osf.io/erdxz/. To preserve participant anonymity, and due to restrictions on data sharing in our ethical approval, fully anonymised raw data can only be shared upon request to the corresponding author.
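
For readers who want to fetch the processed data programmatically rather than through the web interface, the sketch below lists the files in the project's default OSF storage. It assumes the public OSF v2 API layout documented by OSF (the endpoint and JSON field names come from those docs, not from this article); the osfclient command-line tool (osf -p erdxz clone) is an alternative.

    import requests

    # Page through the file listing of OSF project "erdxz" (the project id
    # from the data availability statement) via the public OSF v2 API.
    url = "https://api.osf.io/v2/nodes/erdxz/files/osfstorage/"
    while url:
        page = requests.get(url, timeout=30).json()
        for item in page["data"]:
            attrs = item["attributes"]
            if attrs["kind"] == "file":  # listings also contain folders
                print(attrs["name"], item["links"]["download"])
        url = page["links"].get("next")  # None on the last page

Each printed download link can then be fetched with any HTTP client; raw data, as noted above, are available only on request.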

Article and author information

Author details

  1. Stefania Mattioni

    IPSY, Université catholique de Louvain, Louvain-la-Neuve, Belgium
    For correspondence
    stefania.mattioni@uclouvain.be
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0001-8279-6118
  2. Mohamed Rezk

    IPSY, Université catholique de Louvain, Louvain-la-Neuve, Belgium
    Competing interests
    The authors declare that no competing interests exist.
  3. Ceren Battal

    IPSY, Université catholique de Louvain, Louvain-la-Neuve, Belgium
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0002-9844-7630
  4. Roberto Bottini

    CIMeC, University of Trento, Trento, Italy
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0001-7941-7762
  5. Karen E Cuculiza Mendoza

    CIMeC, University of Trento, Trento, Italy
    Competing interests
    The authors declare that no competing interests exist.
  6. Nikolaas N Oosterhof

    CIMeC, University of Trento, Trento, Italy
    Competing interests
    The authors declare that no competing interests exist.
  7. Olivier Collignon

    IPSY - IONS, Université catholique de Louvain, Louvain-la-Neuve, Belgium
    For correspondence
    olivier.collignon@uclouvain.be
    Competing interests
    The authors declare that no competing interests exist.
    ORCID: 0000-0003-1882-3550

Funding

European Commission (Starting Grant MADVIS: 337573)

  • Olivier Collignon

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: The ethical committee of the University of Trento approved this study (protocol 2014-007) and participants gave their informed consent before participation.

Copyright

© 2020, Mattioni et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Stefania Mattioni, Mohamed Rezk, Ceren Battal, Roberto Bottini, Karen E Cuculiza Mendoza, Nikolaas N Oosterhof, Olivier Collignon (2020) Categorical representation from sound and sight in the ventral occipito-temporal cortex of sighted and blind. eLife 9:e50732. https://doi.org/10.7554/eLife.50732
