Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions

  1. Christian Brodbeck (corresponding author)
  2. Proloy Das
  3. Marlies Gillis
  4. Joshua P Kulasingham
  5. Shohini Bhattasali
  6. Phoebe Gaston
  7. Philip Resnik
  8. Jonathan Z Simon
  1. University of Connecticut, United States
  2. Stanford University, United States
  3. KU Leuven, Belgium
  4. Linköping University, Sweden
  5. University of Toronto, Canada
  6. University of Maryland, College Park, United States

Abstract

Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use with continuous speech as a sample paradigm, analyzing a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
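In the time-lagged regression underlying this approach, the measured response y(t) is modeled as the sum of each predictor variable x_i convolved with its TRF h_i over a range of time-lags τ, plus noise: y(t) = Σ_i Σ_τ h_i(τ) · x_i(t − τ) + ε(t). To make this concrete, the sketch below shows how such a model might be estimated with Eelbrain's boosting function. It is illustrative only, not the paper's actual pipeline: `eeg`, `envelope`, and `word_onsets` are hypothetical placeholders standing for preprocessed NDVar objects (such as those produced by the companion repository's scripts), and the lag range and number of cross-validation partitions are arbitrary example values.

```python
# Minimal illustrative sketch of a TRF analysis with Eelbrain (not the
# paper's actual pipeline; variables and parameters are placeholders).
import eelbrain

# Assumed inputs, prepared beforehand:
# eeg:      continuous EEG as an NDVar with (time, sensor) dimensions
# envelope: acoustic envelope of the speech stimulus as an NDVar sharing
#           the same time dimension (one time-continuous predictor)

# Estimate a TRF at lags from 0 to 500 ms; `partitions` enables
# cross-validated evaluation of the model's predictive power.
result = eelbrain.boosting(eeg, envelope, tstart=0, tstop=0.500, partitions=4)

result.h  # the TRF: the response as a function of time-lag, per sensor
result.r  # cross-validated correlation between predicted and measured EEG

# A multivariate TRF (mTRF) is estimated by supplying several predictors,
# e.g. eelbrain.boosting(eeg, [envelope, word_onsets], 0, 0.500, partitions=4);
# comparing models with and without a predictor then addresses whether that
# predictor has a significant neural representation.
```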

Data availability

The data analyzed here were originally released with DOI: 10.7302/Z29C6VNH and can be retrieved from https://deepblue.lib.umich.edu/data/concern/data_sets/bg257f92t. For the purposes of this tutorial, the data were restructured and re-released with DOI: 10.13016/pulf-lndn at http://hdl.handle.net/1903/27591. The companion GitHub repository (https://github.com/Eelbrain/Alice) contains code and instructions for replicating all analyses presented in the paper.


Article and author information

Author details

  1. Christian Brodbeck

     University of Connecticut, Storrs, United States
     For correspondence: christian.brodbeck@uconn.edu
     Competing interests: The authors declare that no competing interests exist.
     ORCID: 0000-0001-8380-639X
  2. Proloy Das

     Stanford University, Stanford, United States
     Competing interests: The authors declare that no competing interests exist.
     ORCID: 0000-0002-8807-042X
  3. Marlies Gillis

     KU Leuven, Leuven, Belgium
     Competing interests: The authors declare that no competing interests exist.
     ORCID: 0000-0002-3967-2950
  4. Joshua P Kulasingham

     Linköping University, Linköping, Sweden
     Competing interests: The authors declare that no competing interests exist.
  5. Shohini Bhattasali

     University of Toronto, Toronto, Canada
     Competing interests: The authors declare that no competing interests exist.
     ORCID: 0000-0002-6767-6529
  6. Phoebe Gaston

     University of Connecticut, Storrs, United States
     Competing interests: The authors declare that no competing interests exist.
  7. Philip Resnik

     University of Maryland, College Park, College Park, United States
     Competing interests: The authors declare that no competing interests exist.
  8. Jonathan Z Simon

     University of Maryland, College Park, College Park, United States
     Competing interests: The authors declare that no competing interests exist.
     ORCID: 0000-0003-0858-0698

Funding

National Science Foundation (BCS 1754284)

  • Christian Brodbeck

National Science Foundation (BCS 2043903)

  • Christian Brodbeck

National Science Foundation (IIS 2207770)

  • Christian Brodbeck

National Science Foundation (SMA 1734892)

  • Joshua P Kulasingham
  • Jonathan Z Simon

National Institutes of Health (R01 DC014085)

  • Joshua P Kulasingham
  • Jonathan Z Simon

National Institutes of Health (R01 DC019394)

  • Jonathan Z Simon

Fonds Wetenschappelijk Onderzoek (SB 1SA0620N)

  • Marlies Gillis

Office of Naval Research (MURI N00014-18-1-2670)

  • Shohini Bhattasali
  • Philip Resnik

National Institutes of Health (T32 DC017703)

  • Phoebe Gaston

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Andrea E Martin, Max Planck Institute for Psycholinguistics, Netherlands

Version history

  1. Preprint posted: August 3, 2021
  2. Received: November 18, 2022
  3. Accepted: November 24, 2023
  4. Accepted Manuscript published: November 29, 2023 (version 1)
  5. Version of Record published: January 11, 2024 (version 2)

Copyright

© 2023, Brodbeck et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

Christian Brodbeck, Proloy Das, Marlies Gillis, Joshua P Kulasingham, Shohini Bhattasali, Phoebe Gaston, Philip Resnik, Jonathan Z Simon (2023) Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions. eLife 12:e85012. https://doi.org/10.7554/eLife.85012

