Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss

  1. Shievanie Sabesan
  2. Andreas Fragner
  3. Ciaran Bench
  4. Fotios Drakopoulos
  5. Nicholas A Lesica (corresponding author)

Affiliations:
  1. University College London, United Kingdom
  2. Perceptual Technologies Ltd, United Kingdom

Abstract

Listeners with hearing loss often struggle to understand speech in noise, even with a hearing aid. To better understand the auditory processing deficits that underlie this problem, we made large-scale brain recordings from gerbils, a common animal model for human hearing, while presenting a large database of speech and noise sounds. We first used manifold learning to identify the neural subspace in which speech is encoded and found that it is low-dimensional and that the dynamics within it are profoundly distorted by hearing loss. We then trained a deep neural network (DNN) to replicate the neural coding of speech with and without hearing loss and analyzed the underlying network dynamics. We found that hearing loss primarily impacts spectral processing, creating nonlinear distortions in cross-frequency interactions that result in a hypersensitivity to background noise that persists even after amplification with a hearing aid. Our results identify a new focus for efforts to design improved hearing aids and demonstrate the power of DNNs as a tool for the study of central brain structures.
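The abstract names the two analysis tools (manifold learning and a DNN replica of neural coding) without methodological detail. As a rough illustration of the first step only, the sketch below estimates the dimensionality of a simulated population response using PCA as a stand-in for the paper's manifold-learning method; the synthetic data, array shapes, and 95% variance criterion are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: estimating the dimensionality of the neural
# subspace ("manifold") in which speech is encoded. PCA is used here
# as a simple stand-in for the manifold-learning analysis; all shapes
# and thresholds are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated recording: n_units neurons x n_timebins of activity,
# generated from a few latent signals plus noise so that the true
# dimensionality is low.
n_units, n_timebins, n_latents = 512, 10_000, 8
latents = rng.standard_normal((n_latents, n_timebins))
mixing = rng.standard_normal((n_units, n_latents))
responses = mixing @ latents + 0.5 * rng.standard_normal((n_units, n_timebins))

# Project onto principal components and count how many are needed to
# capture most of the population variance.
pca = PCA().fit(responses.T)
cumvar = np.cumsum(pca.explained_variance_ratio_)
dim = int(np.searchsorted(cumvar, 0.95)) + 1
print(f"~{dim} components capture 95% of the variance")
```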

Data availability

The metadata, ABR recordings, and a subset of the IC recordings analyzed in this study are available on figshare (DOI: 10.6084/m9.figshare.845654). We have made only a subset of the IC recordings available because they are also being used for commercial purposes. These purposes (to develop improved assistive listening technologies) are distinct from the purpose for which the recordings are used in this manuscript (to better understand the fundamentals of hearing loss). Researchers seeking access to the full set of neural recordings for research purposes should contact the corresponding author via e-mail to set up a material transfer agreement. The custom code used for training the deep neural network models for this study is available at github.com/nicklesica/dnn.
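The architecture of the models in github.com/nicklesica/dnn is not described in this section. As a hedged sketch only, the PyTorch code below shows the general form of a DNN encoding model that maps a sound spectrogram to the firing rates of a recorded neural population; the layer choices, kernel sizes, and dimensions are all hypothetical, not the repository's actual implementation.

```python
# Illustrative sketch of a DNN encoding model: spectrogram in,
# predicted population firing rates out. Architecture details are
# assumptions for illustration, not the authors' model.
import torch
import torch.nn as nn

class EncodingModel(nn.Module):
    def __init__(self, n_freq_bins: int = 64, n_units: int = 512):
        super().__init__()
        # 1-D convolutions over time, with frequency bins as channels,
        # so the model can learn cross-frequency interactions.
        self.net = nn.Sequential(
            nn.Conv1d(n_freq_bins, 128, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(128, n_units, kernel_size=1),
            nn.Softplus(),  # firing rates must be non-negative
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        # spectrogram: (batch, n_freq_bins, n_timebins)
        # returns predicted rates: (batch, n_units, n_timebins)
        return self.net(spectrogram)

model = EncodingModel()
rates = model(torch.randn(2, 64, 1000))
print(rates.shape)  # torch.Size([2, 512, 1000])
```

Such a model would typically be fit by minimizing a loss (e.g. Poisson or mean squared error) between predicted and recorded rates, with separate models trained on recordings from normal-hearing and hearing-impaired animals.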

Article and author information

Author details

  1. Shievanie Sabesan

    Ear Institute, University College London, London, United Kingdom
    Competing interests
    No competing interests declared.
  2. Andreas Fragner

    Perceptual Technologies Ltd, London, United Kingdom
    Competing interests
    No competing interests declared.
  3. Ciaran Bench

    Ear Institute, University College London, London, United Kingdom
    Competing interests
    No competing interests declared.
  4. Fotios Drakopoulos

    Ear Institute, University College London, London, United Kingdom
    Competing interests
    No competing interests declared.
  5. Nicholas A Lesica

    Ear Institute, University College London, London, United Kingdom
    For correspondence
    n.lesica@ucl.ac.uk
    Competing interests
Nicholas A Lesica is a co-founder of Perceptual Technologies Ltd.
ORCID iD: 0000-0001-5238-4462

Funding

Wellcome Trust (200942/Z/16/Z)

  • Shievanie Sabesan
  • Nicholas A Lesica

Engineering and Physical Sciences Research Council (EP/W004275/1)

  • Ciaran Bench
  • Fotios Drakopoulos
  • Nicholas A Lesica

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Animal experimentation: All experimental protocols were approved by the UK Home Office (PPL P56840C21). Every effort was made to minimize suffering.

Copyright

© 2023, Sabesan et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

Shievanie Sabesan, Andreas Fragner, Ciaran Bench, Fotios Drakopoulos, Nicholas A Lesica (2023) Large-scale electrophysiology and deep learning reveal distorted neural signal dynamics after hearing loss. eLife 12:e85108. https://doi.org/10.7554/eLife.85108
