Functional brain alterations following mild-to-moderate sensorineural hearing loss in children

  1. Axelle Calcus (corresponding author)
  2. Outi Tuomainen
  3. Ana Campos
  4. Stuart Rosen
  5. Lorna F Halliday
  1. Ecole Normale Supérieure, France
  2. University College London, United Kingdom

Abstract

Auditory deprivation in the form of deafness during development leads to lasting changes in central auditory system function. However, less is known about the effects of mild-to-moderate sensorineural hearing loss (MMHL) during development. Here, we used a longitudinal design to examine late auditory evoked responses and mismatch responses to nonspeech and speech sounds in children with MMHL. At Time 1, younger children with MMHL (8-12 years; n = 23) showed age-appropriate mismatch negativities (MMNs) to sounds, but older children (12-16 years; n = 23) did not. Six years later, we re-tested a subset of the younger (now older) children with MMHL (n = 13). Children who had shown significant MMNs at Time 1 showed MMNs that were reduced and, for nonspeech, absent at Time 2. Our findings demonstrate that even a mild-to-moderate hearing loss during early-to-mid childhood can lead to changes in the neural processing of sounds in late childhood/adolescence.

Data availability

Unidentifiable data, stimuli, and statistical analysis scripts are available at https://github.com/acalcus/MMHL.git
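
The mismatch responses reported above are conventionally derived as deviant-minus-standard difference waves. As a minimal, hypothetical sketch (not a description of the repository's actual scripts), the following Python example shows how such a difference wave and its mean amplitude in a typical MMN latency window could be computed from single-trial epochs; the sampling rate, trial counts, electrode, and latency window are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only: computing a mismatch negativity (MMN) difference
# wave from single-trial EEG epochs. All parameters below are assumptions,
# not the authors' actual pipeline.

fs = 500                                  # sampling rate in Hz (assumed)
times = np.arange(-0.1, 0.5, 1 / fs)      # epoch window: -100 to +500 ms

# Simulated single-trial epochs at one fronto-central electrode
# (rows = trials, columns = time points); real data would replace these.
rng = np.random.default_rng(0)
standard_epochs = rng.normal(0.0, 2.0, (200, times.size))  # frequent sounds
deviant_epochs = rng.normal(0.0, 2.0, (50, times.size))    # rare sounds

# Average across trials to obtain the evoked response to each sound type.
standard_erp = standard_epochs.mean(axis=0)
deviant_erp = deviant_epochs.mean(axis=0)

# The mismatch response is the deviant-minus-standard difference wave;
# a reliable negative deflection in this wave is the MMN.
difference_wave = deviant_erp - standard_erp

# Quantify the MMN as the mean amplitude in a typical latency window
# (100-250 ms post-stimulus here, chosen for illustration).
window = (times >= 0.100) & (times <= 0.250)
mmn_amplitude = difference_wave[window].mean()
print(f"Mean difference-wave amplitude, 100-250 ms: {mmn_amplitude:.2f} µV")
```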


Article and author information

Author details

  1. Axelle Calcus

    Département d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
    For correspondence
    axelle.calcus@ens.fr
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-1240-1122
  2. Outi Tuomainen

    Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  3. Ana Campos

    Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  4. Stuart Rosen

    Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  5. Lorna F Halliday

    Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.

Funding

H2020 Marie Skłodowska-Curie Actions (FP7-607139)

  • Axelle Calcus

ESRC National Centre for Research Methods, University of Southampton (RES-061-25-0440)

  • Lorna F Halliday

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: Informed consent, and consent to publish, were obtained from the parents/guardians of the children included in this study. Ethical approval for this study was provided by the UCL Research Ethics Committee (Project ID number: 2109/004).

Copyright

© 2019, Calcus et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,892 views
  • 432 downloads
  • 15 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.


Cite this article

  1. Axelle Calcus
  2. Outi Tuomainen
  3. Ana Campos
  4. Stuart Rosen
  5. Lorna F Halliday
(2019)
Functional brain alterations following mild-to-moderate sensorineural hearing loss in children
eLife 8:e46965.
https://doi.org/10.7554/eLife.46965


Further reading

    1. Neuroscience
    Gaqi Tu, Peiying Wen ... Kaori Takehara-Nishiuchi
    Research Article

    Outcomes can vary even when choices are repeated. Such ambiguity necessitates adjusting how much to learn from each outcome by tracking its variability. The medial prefrontal cortex (mPFC) has been reported to signal the expected outcome and its discrepancy from the actual outcome (prediction error), two variables essential for controlling the learning rate. However, the source of signals that shape these coding properties remains unknown. Here, we investigated the contribution of cholinergic projections from the basal forebrain because they carry precisely timed signals about outcomes. One-photon calcium imaging revealed that as mice learned different probabilities of threat occurrence on two paths, some mPFC cells responded to threats on one of the paths, while other cells gained responses to threat omission. These threat- and omission-evoked responses were scaled to the unexpectedness of outcomes, some exhibiting a reversal in response direction when encountering surprising threats as opposed to surprising omissions. This selectivity for signed prediction errors was enhanced by optogenetic stimulation of local cholinergic terminals during threats. The enhanced threat-evoked cholinergic signals also made mice erroneously abandon the correct choice after a single threat that violated expectations, thereby decoupling their path choice from the history of threat occurrence on each path. Thus, acetylcholine modulates the encoding of surprising outcomes in the mPFC to control how much they dictate future decisions.

    1. Neuroscience
    Philipp S O'Neill, Martín Baccino-Calace ... Igor Delvendahl
    Tools and Resources

    Quantitative information about synaptic transmission is key to our understanding of neural function. Spontaneously occurring synaptic events carry fundamental information about synaptic function and plasticity. However, their stochastic nature and low signal-to-noise ratio present major challenges for reliable and consistent analysis. Here, we introduce miniML, a supervised deep learning-based method for accurate classification and automated detection of spontaneous synaptic events. Comparative analysis using simulated ground-truth data shows that miniML outperforms existing event analysis methods in terms of both precision and recall. miniML enables precise detection and quantification of synaptic events in electrophysiological recordings. We demonstrate that the deep learning approach generalizes easily to diverse synaptic preparations, different electrophysiological and optical recording techniques, and across animal species. miniML not only provides a comprehensive and robust framework for automated, reliable, and standardized analysis of synaptic events, but also opens new avenues for high-throughput investigations of neural function and dysfunction.