Neuroscience

Brain activity patterns can predict spoken words and syllables

Neurons in the ‘hand knob’ area of the motor cortex become active during speech and could hold the key to restoring speech to people who have lost the ability.
Press Pack

Neurons in the brain’s motor cortex, previously thought to be active mainly during hand and arm movements, also fire during speech in patterns similar to those seen during these movements, suggest new findings published today in eLife.

By demonstrating that it is possible to identify different syllables or words from patterns of neural activity, the study provides insights that could potentially be used to restore the voice in people who have lost the ability to speak.

Speaking involves some of the most precise and coordinated movements humans make. Studying it is fascinating but challenging, because there are few opportunities to make measurements from inside someone’s brain while they speak. This study took place as part of the BrainGate2 Brain-Computer Interface pilot clinical trial*, which is testing a computer device that can ‘communicate’ with the brain, helping to restore communication and provide control of prosthetics such as robotic arms.

The researchers studied speech by recording brain activity from multi-electrode arrays previously placed in the motor cortex of two people taking part in the BrainGate2 study. This allowed them to study the timing and location of firing across a large population of neurons activated during speech, rather than just a few at a time.

“We first asked if neurons in the so-called ‘hand knob’ area of the brain’s motor cortex are active during speaking,” explains lead author Sergey Stavisky, Postdoctoral Research Fellow in the Department of Neurosurgery and the Wu Tsai Neurosciences Institute at Stanford University, US. “This seemed unlikely because this is an area known to control hand and arm movements, not speech. But clues in the scientific literature suggested there might be an overlap.”

To test this, the team recorded neural activity during a speaking task in which participants heard one of 10 different syllables, or one of 10 different short words, and then spoke the prompted sound after a ‘go’ cue. Neurons’ firing rates changed strongly as participants spoke the words and syllables, and the active neurons were spread throughout the part of the motor cortex the researchers were recording from. Moreover, firing rates changed far less when participants merely heard a sound than when they spoke it. These changes in neural activity also corresponded to groups of similar sounds, called phonemes, and to face and mouth movements. This suggests that although control of different body parts is broadly separated across the motor cortex, the activity of individual neurons overlaps.

Next, the team performed a ‘decoding’ analysis to see whether the neuron-firing patterns could reveal information about the specific sound being spoken. They found that by analysing neural activity across nearly 200 electrodes, they could identify which of several syllables or words the participant was saying. In fact, certain patterns of neuron activity could correctly predict the sound, or lack of sound, in more than 80% of cases for one of the participants, and between 55% and 61% of cases for the other.
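To illustrate the idea behind such a ‘decoding’ analysis, here is a minimal, hypothetical sketch: a nearest-centroid classifier that labels each trial with one of several candidate sounds based on firing rates across an electrode array. The data below are synthetic and the electrode count, trial counts and classifier are illustrative assumptions, not the study’s actual method or results.

```python
import numpy as np

# Hypothetical sketch of a decoding analysis: classify which sound was
# spoken from firing rates across an electrode array. All data here are
# synthetic; the study's actual decoder is described in the paper.
rng = np.random.default_rng(0)

n_electrodes = 192        # roughly "nearly 200 electrodes"
n_sounds = 10             # e.g. 10 syllables or 10 short words
trials_per_sound = 20

# Each sound gets its own mean firing-rate pattern, plus per-trial noise.
templates = rng.normal(0.0, 1.0, size=(n_sounds, n_electrodes))
X = np.repeat(templates, trials_per_sound, axis=0)
X += rng.normal(0.0, 0.8, size=X.shape)
y = np.repeat(np.arange(n_sounds), trials_per_sound)

# Split trials into training and test sets.
idx = rng.permutation(len(y))
split = len(y) // 2
train, test = idx[:split], idx[split:]

# Nearest-centroid decoder: average the training trials for each sound,
# then label each test trial with the closest centroid.
centroids = np.stack([X[train][y[train] == s].mean(axis=0)
                      for s in range(n_sounds)])
dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)

accuracy = (pred == y[test]).mean()
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_sounds:.2f})")
```

On this synthetic data the decoder performs far above the 10% chance level, which is the same kind of comparison the researchers use to show that neural activity carries information about the spoken sound.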

“This suggests it might be possible to use this brain activity to understand what words people who cannot speak are trying to say,” says co-senior author Krishna Shenoy, Howard Hughes Medical Institute Investigator and Hong Seh and Vivian W. M. Lim Professor in the School of Engineering, and Co-Director of the Neural Prosthetics Translational Laboratory (NPTL), at Stanford University.

“With this study we have shown that we can identify syllables and words people say based on their brain activity, which lays the groundwork for synthesising, or producing, text and speech from the neural activity measured when a patient tries to speak,” concludes co-senior author Jaimie Henderson, John and Jene Blume - Robert and Ruth Halperin Professor in the Department of Neurosurgery and Co-Director of NPTL at Stanford University. “Further work is now needed to synthesise continuous speech for people who cannot provide example data by actually speaking.”

*More information about the BrainGate2 study is available at https://www.braingate.org.

Media contacts

  1. Emily Packer
    eLife
    e.packer@elifesciences.org
    +441223855373

  2. Bruce Goldman
    Stanford Health Care | School of Medicine
    goldmanb@stanford.edu
    +16507252106

About

eLife is a non-profit organisation inspired by research funders and led by scientists. Our mission is to help scientists accelerate discovery by operating a platform for research communication that encourages and recognises the most responsible behaviours in science. We publish important research in all areas of the life and biomedical sciences, including Human Biology and Medicine, and Neuroscience, which is selected and evaluated by working scientists and made freely available online without delay. eLife also invests in innovation through open-source tool development to accelerate research communication and discovery. Our work is guided by the communities we serve. eLife is supported by the Howard Hughes Medical Institute, the Max Planck Society, the Wellcome Trust and the Knut and Alice Wallenberg Foundation. Learn more at https://elifesciences.org/about.

To read the latest Human Biology and Medicine research published in eLife, visit https://elifesciences.org/subjects/human-biology-medicine.

And for the latest in Neuroscience, see https://elifesciences.org/subjects/neuroscience.

Neuroscience

Neural ensemble dynamics in dorsal motor cortex during speech in people with paralysis

Sergey D Stavisky, Francis R Willett ... Jaimie M Henderson