Language: I see what you are saying
In the mid-1940s, the psychologist Alvin Liberman went to work with Franklin Cooper at the Haskins Laboratories in New Haven, Connecticut. He initially set out to create a device to turn printed letters into sounds so that blind people could ‘hear’ written texts (Liberman, 1996). His first foray involved shining a light through a slit onto the page to convert the lines of each letter into patterns of light, which were then converted into frequencies of sound. Liberman and colleagues reasoned that, with enough training, blind users would be able to learn these arbitrary letter-sound pairs and so be able to understand the text.
The device was a spectacular failure: the users performed slowly and inaccurately. This led Liberman and colleagues to the realization that speech is not an arbitrary sequence of sounds, but a specific human code. They argued that the key to this code was the link between the speech sounds a person hears and the motor actions they make in order to speak. This important work led to decades of further research and helped lay the foundation for the psychological and neuroscientific study of speech.
When we watch and listen to someone speak, our brain combines the visual information of the movement of the speaker’s mouth with the speech sounds that are produced by this movement (McGurk and MacDonald, 1976). One of the core problems that researchers in this field are investigating is how these different sets of information are integrated to allow us to understand speech. Now, in eLife, Hyojin Park, Christoph Kayser, Gregor Thut and Joachim Gross of the University of Glasgow report that they have studied this integration by using a technique called magnetoencephalography to record the magnetic fields that are generated by the electrical currents of the brain (Park et al., 2016).
Park et al. presented volunteers with audio-visual clips of naturalistic speech and then asked them to complete a short questionnaire about the speech they heard and saw. In some cases, these clips were manipulated so that the audio did not match the video. In other cases, Park et al. presented a different speech signal to each ear and asked the volunteers to pay attention to just one signal. By analyzing these combinations, they could separate the brain activity that is associated with watching someone speak from the activity that processes the speech sounds themselves.
Park et al. found that a part of the continuous speech stream called the envelope, which is the slow rise and fall in the amplitude of the speech, was tracked in auditory areas of the brain (Figure 1). Conversely, the visual cortex tracked mouth movements. These results replicate and extend previous data recorded from both the auditory domain (Cogan and Poeppel, 2011; Gross et al., 2013; Luo and Poeppel, 2007) and the visual domain (Luo et al., 2010; Zion Golumbic et al., 2013). Park et al. then went further by asking: what role does tracking the lip movements of a speaker play in speech perception?
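The envelope here is simply the slow amplitude modulation of the acoustic waveform. As a rough illustration of how such a signal can be obtained, the Python sketch below takes the magnitude of the Hilbert transform of a waveform and low-pass filters it; the sampling rate, cut-off frequency and variable names are assumptions for illustration, not the pipeline used by Park et al.

```python
# Minimal sketch: estimate the slow amplitude envelope of a speech waveform.
# Assumes `audio` is a 1-D NumPy array of a mono recording sampled at `fs` Hz;
# this is one common approach, not the exact analysis used by Park et al.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(audio, fs, cutoff_hz=10.0):
    """Return the low-frequency amplitude envelope of `audio`."""
    analytic = hilbert(audio)                 # analytic signal via Hilbert transform
    amplitude = np.abs(analytic)              # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2.0))  # 4th-order low-pass at ~10 Hz
    return filtfilt(b, a, amplitude)          # zero-phase filtering preserves timing

# Example with a synthetic amplitude-modulated tone standing in for speech.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
audio = (1.0 + 0.5 * np.sin(2 * np.pi * 3 * t)) * np.sin(2 * np.pi * 220 * t)
env = speech_envelope(audio, fs)
```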

Figure 1. A proposed model for the role of the motor system in speech perception. A person produces speech by the coordinated movement of their articulatory system. The listener hears the sound (black line) and sees the mouth of the speaker open and close (blue line). Some of the information in the sound is contained within the speech envelope (green line). The auditory regions of the brain (green circle) track the speech envelope, while the visual system (blue circle) tracks the visual movements of the mouth. The motor system (red circle) then decodes the intended mouth movement and integrates this with the response of the auditory regions to the incoming sounds.
To learn more about which parts of the brain track the lip movements of the speaker, Park et al. performed a partial regression on the lip movement, envelope and brain activity data to remove the response to sound and focus on just the effect of tracking the lip movements. This revealed two areas of the brain that actively track lip movements during speech. The first area, as found by previous researchers, was the visual cortex. This presumably tracks the lips as a visual signal. The second area was the left motor cortex.
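The logic of this step can be illustrated with a simple partial-correlation sketch: regress the acoustic envelope out of both the brain signal and the lip signal, then correlate the residuals. The Python sketch below is only a schematic of that idea applied to already-extracted time courses; the variable names and the ordinary least-squares step are assumptions, not the source-space regression actually performed by Park et al.

```python
# Schematic of partialling out the speech envelope before relating brain
# activity to lip movement. `brain`, `lips` and `envelope` are assumed to be
# 1-D NumPy arrays sampled on the same time axis; this illustrates the idea
# of a partial correlation, not Park et al.'s exact MEG analysis.
import numpy as np

def residualize(y, x):
    """Remove the best least-squares fit of x (plus an intercept) from y."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

def partial_correlation(brain, lips, envelope):
    """Correlate brain activity with lip movement after removing the envelope."""
    brain_res = residualize(brain, envelope)
    lips_res = residualize(lips, envelope)
    return np.corrcoef(brain_res, lips_res)[0, 1]
```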
To further establish the role of the motor cortex during speech perception, Park et al. examined the comprehension scores from the questionnaire. These scores could be predicted from the extent to which neural activity in the motor cortex synchronized with the lip movements observed by the participant: higher scores correlated with a higher degree of synchronization. This indicates that the ability of the motor cortex to track lip movements is important for understanding audiovisual speech, and it points to a new role for the motor system in speech perception. Park et al. interpret this finding as evidence that the motor system helps to predict the upcoming sound signal by simulating the speaker’s intended mouth movement (Arnal and Giraud, 2012; Figure 1).
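To make concrete what it means for synchronization to predict comprehension, the sketch below quantifies per-participant lip tracking as low-frequency coherence between a motor-cortex time course and the lip signal, then correlates this measure with comprehension scores across participants. The frequency band, the choice of coherence and the Spearman test are illustrative assumptions rather than the statistics reported by Park et al.

```python
# Illustrative sketch: per-participant "lip tracking" as low-frequency
# coherence between motor-cortex activity and lip movement, then a rank
# correlation with comprehension scores. All names and inputs are
# hypothetical; this is not the analysis reported by Park et al.
import numpy as np
from scipy.signal import coherence
from scipy.stats import spearmanr

def lip_tracking(motor_signal, lip_signal, fs, band=(1.0, 7.0)):
    """Mean motor-lip coherence within a low-frequency band."""
    freqs, coh = coherence(motor_signal, lip_signal, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return coh[mask].mean()

def tracking_predicts_comprehension(motor_signals, lip_signals, comprehension, fs):
    """Spearman correlation between per-participant tracking and comprehension."""
    tracking = [lip_tracking(m, l, fs) for m, l in zip(motor_signals, lip_signals)]
    rho, p = spearmanr(tracking, comprehension)
    return rho, p
```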
While this is an important first step, it is still not clear how the lip movement tracked by the motor cortex is integrated with the response of auditory regions of the brain to speech sounds. Are mouth movements tracked specifically for ambiguous or difficult stimuli (Du et al., 2014) or is this tracking necessary for perceiving speech generally? Future work will hopefully clarify the specifics of this mechanism.
It is interesting and somewhat ironic that the motor cortex tracks the visual signals of mouth movement, given the early (and unsuccessful) efforts of Liberman and colleagues to help the blind ‘hear’ written texts. Indeed, just as these early researchers proposed, it seems that the link between the motor and auditory system is a key to understanding how speech is represented in the brain.
References
- Arnal and Giraud (2012) Cortical oscillations and sensory predictions. Trends in Cognitive Sciences 16:390–398. https://doi.org/10.1016/j.tics.2012.05.003
- Cogan and Poeppel (2011) A mutual information analysis of neural coding of speech by low-frequency MEG phase information. Journal of Neurophysiology 106:554–563. https://doi.org/10.1152/jn.00075.2011
- Du et al. (2014) Noise differentially impacts phoneme representations in the auditory and speech motor systems. Proceedings of the National Academy of Sciences of the United States of America 111:7126–7131. https://doi.org/10.1073/pnas.1318738111
- Zion Golumbic et al. (2013) Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party". Journal of Neuroscience 33:1417–1426. https://doi.org/10.1523/JNEUROSCI.3675-12.2013
Article and author information
Publication history: Version of Record published June 9, 2016 (version 1).
© 2016, Cogan. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.