Cortical encoding of acoustic and linguistic rhythms in spoken narratives

  1. Cheng Luo
  2. Nai Ding (corresponding author)
  1. Zhejiang University, China

Abstract

Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity synchronizes to the rhythms of perceived linguistic units, e.g., words and phrases, on top of basic acoustic features, e.g., the speech envelope. It remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information when listeners hear natural speech. Here, we investigate the neural encoding of words using electroencephalography and observe neural activity synchronized to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is observed only during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge contribute separately to the cortical encoding of linguistic units in spoken narratives.

Data availability

The EEG data and analysis code (in MATLAB) were uploaded as Source data files.

Article and author information

Author details

  1. Cheng Luo

    College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
    Competing interests
    The authors declare that no competing interests exist.
  2. Nai Ding

    Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
    For correspondence
    ding_nai@zju.edu.cn
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0003-3428-2723

Funding

National Natural Science Foundation of China (31771248)

  • Nai Ding

Major Scientific Research Project of Zhejiang Lab (2019KB0AC02)

  • Nai Ding

National Key R & D Program of China (2019YFC0118200)

  • Nai Ding

Zhejiang Provincial Natural Science Foundation of China (LGF19H090020)

  • Cheng Luo

Fundamental Research Funds for the Central Universities (2020FZZX001-05)

  • Nai Ding

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: The experimental procedures were approved by the Research Ethics Committee of the College of Medicine, Zhejiang University (2019-047). All participants provided written informed consent prior to the experiment and were paid.

Copyright

© 2020, Luo & Ding

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,845 views
  • 339 downloads
  • 21 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.



Cite this article

Cheng Luo, Nai Ding (2020) Cortical encoding of acoustic and linguistic rhythms in spoken narratives. eLife 9:e60433. https://doi.org/10.7554/eLife.60433
