Lip movements entrain the observers' low-frequency brain oscillations to facilitate speech intelligibility
Abstract
During continuous speech, lip movements provide visual temporal signals that facilitate speech processing. Here, using magnetoencephalography (MEG), we directly investigated how these visual signals interact with rhythmic brain activity in participants listening to and seeing the speaker. First, we investigated coherence between oscillatory brain activity and the speaker's lip movements and demonstrated significant entrainment in visual cortex. We then used partial coherence to remove contributions of the coherent auditory speech signal from the lip-brain coherence. Comparing this synchronization between different attention conditions revealed that attending to visual speech enhances the coherence between activity in visual cortex and the speaker's lips. Further, we identified significant partial coherence between left motor cortex and lip movements, and this partial coherence directly predicted comprehension accuracy. Our results emphasize the importance of visually entrained and attention-modulated rhythmic brain activity for the enhancement of audiovisual speech processing.
Article and author information
Author details
Ethics
Human subjects: This study was approved by the local ethics committee (CSE01321; University of Glasgow, Faculty of Information and Mathematical Sciences) and conducted in conformity with the Declaration of Helsinki. All participants provided informed written consent before participating in the experiment and received monetary compensation for their participation.
Copyright
© 2016, Park et al.
This article is distributed under the terms of the Creative Commons Attribution License, permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 4,838 views
- 848 downloads
- 136 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
- Neuroscience
Sleep-associated memory consolidation and reactivation play an important role in language acquisition and the learning of new words. However, it is unclear to what extent properties of word learning difficulty impact sleep-associated memory reactivation. To address this gap, we investigated in 22 young healthy adults the effectiveness of auditory targeted memory reactivation (TMR) during non-rapid eye movement sleep of artificial words with easy- and difficult-to-learn phonotactic properties. Here, we found that TMR of the easy words improved their overnight memory performance, whereas TMR of the difficult words had no effect. By comparing EEG activity after TMR presentations, we found an increase in slow wave density independent of word difficulty, whereas spindle-band power nested during the slow wave up-states – an assumed underlying activity of memory reactivation – was significantly higher in the easy/effective compared to the difficult/ineffective condition. Our findings indicate that word learning difficulty by phonotactics impacts the effectiveness of TMR and further emphasize the critical role of prior encoding depth in sleep-associated memory reactivation.
- Neuroscience
The neural mechanisms that willfully direct attention to specific locations in space are closely related to those for generating targeting eye movements (saccades). However, the degree to which the voluntary deployment of attention to a location necessarily activates a corresponding saccade plan remains unclear. One problem is that attention and saccades are both automatically driven by salient sensory events; another is that the underlying processes unfold within only tens of milliseconds. Here, we use an urgent task design to resolve the evolution of a visuomotor choice on a moment-by-moment basis while independently controlling the endogenous (goal-driven) and exogenous (salience-driven) contributions to performance. Human participants saw a peripheral cue and, depending on its color, either looked at it (prosaccade) or looked at a diametrically opposite, uninformative non-cue (antisaccade). By varying the luminance of the stimuli, the exogenous contributions could be cleanly dissociated from the endogenous process guiding the choice over time. According to the measured time courses, generating a correct antisaccade requires about 30 ms more processing time than generating a correct prosaccade based on the same perceptual signal. The results indicate that saccade plans elaborated during fixation are biased toward the location where attention is endogenously deployed, but the coupling is weak and can be willfully overridden very rapidly.