Lip movements entrain the observers' low-frequency brain oscillations to facilitate speech intelligibility

  1. Hyojin Park (corresponding author)
  2. Christoph Kayser
  3. Gregor Thut
  4. Joachim Gross
  1. University of Glasgow, United Kingdom

Abstract

During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using MEG, we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we measured coherence between oscillatory brain activity and the speaker's lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove the contribution of the coherent auditory speech signal from this lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending to visual speech enhances the coherence between activity in visual cortex and the speaker's lips. Furthermore, we identified significant partial coherence between left motor cortex and lip movements, and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing.
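The partial-coherence logic described in the abstract can be sketched in a few lines: ordinary coherence quantifies frequency-specific coupling between two signals, and partial coherence re-computes it after the linear contribution of a third signal has been regressed out of the cross- and auto-spectra. The sketch below is a minimal illustration using Welch cross-spectral estimates from SciPy; it is not the authors' MEG analysis pipeline, and the function name, signals, and parameters are hypothetical stand-ins (e.g. x for a brain signal, y for the lip-aperture signal, z for the auditory speech envelope).

```python
import numpy as np
from scipy.signal import csd


def coherence_spectra(x, y, z, fs, nperseg=256):
    """Welch-based coherence between x and y, plus partial coherence
    between x and y after removing the linear contribution of z.

    Illustrative sketch only -- not the published analysis code.
    """
    # Cross- and auto-spectral densities for all signal pairs
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxx = csd(x, x, fs=fs, nperseg=nperseg)
    _, Syy = csd(y, y, fs=fs, nperseg=nperseg)
    _, Sxz = csd(x, z, fs=fs, nperseg=nperseg)
    _, Syz = csd(y, z, fs=fs, nperseg=nperseg)
    _, Szz = csd(z, z, fs=fs, nperseg=nperseg)

    # Ordinary magnitude-squared coherence: |Sxy|^2 / (Sxx * Syy)
    coh = np.abs(Sxy) ** 2 / (Sxx.real * Syy.real)

    # Partial spectra: regress z out of each term
    # (Sxy|z = Sxy - Sxz * Szz^-1 * Szy, and likewise for the auto-spectra)
    Sxy_z = Sxy - Sxz * np.conj(Syz) / Szz.real
    Sxx_z = Sxx.real - np.abs(Sxz) ** 2 / Szz.real
    Syy_z = Syy.real - np.abs(Syz) ** 2 / Szz.real

    # Partial coherence: coherence of the z-residual processes
    pcoh = np.abs(Sxy_z) ** 2 / (Sxx_z * Syy_z)
    return f, coh, pcoh
```

At frequencies where the x-y coupling is entirely inherited from the shared signal z, the partial coherence drops toward zero while the ordinary coherence remains high, which is the contrast the paper exploits to separate lip-driven from auditory-envelope-driven entrainment.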

Article and author information

Author details

  1. Hyojin Park

    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
    For correspondence
    Hyojin.Park@glasgow.ac.uk
    Competing interests
    The authors declare that no competing interests exist.
  2. Christoph Kayser

    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  3. Gregor Thut

    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.
  4. Joachim Gross

    Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
    Competing interests
    The authors declare that no competing interests exist.

Ethics

Human subjects: This study was approved by the local ethics committee (CSE01321; University of Glasgow, Faculty of Information and Mathematical Sciences) and conducted in conformity with the Declaration of Helsinki. All participants provided informed written consent before participating in the experiment and received monetary compensation for their participation.

Copyright

© 2016, Park et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Hyojin Park, Christoph Kayser, Gregor Thut, Joachim Gross (2016) Lip movements entrain the observers' low-frequency brain oscillations to facilitate speech intelligibility. eLife 5:e14521. https://doi.org/10.7554/eLife.14521
