Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions
Abstract
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use with a freely available EEG dataset of audiobook listening, using continuous speech as a sample paradigm. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time lag. This allows asking two questions about each predictor variable: 1) Is there a significant neural representation corresponding to this predictor variable? And if so, 2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
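To make the approach concrete, the following is a minimal sketch of how a single-predictor TRF can be estimated with Eelbrain's boosting function. It uses synthetic data purely to illustrate the shape of the analysis; the variable names and the synthetic signals are placeholders, and the actual pipeline from raw EEG to group-level statistics is the one in the companion repository.

# Minimal TRF sketch (assumes Eelbrain is installed; data are synthetic).
import numpy as np
from eelbrain import NDVar, UTS, boosting

# Synthetic one-minute recording sampled at 100 Hz. In a real analysis,
# `eeg` would be a preprocessed EEG signal and `envelope` the acoustic
# envelope of the speech stimulus, aligned on the same time axis.
time = UTS(0, 0.01, 6000)
envelope = NDVar(np.random.rand(6000), (time,), name='envelope')
eeg = NDVar(np.random.randn(6000), (time,), name='eeg')

# Estimate a TRF at time lags from 0 to 500 ms, with cross-validation
# based on 4 partitions of the data.
res = boosting(eeg, envelope, 0, 0.500, partitions=4)
trf = res.h  # the TRF: response weight as a function of time lag
fit = res.r  # cross-validated predictive power (correlation)

Passing a list of predictors instead of a single one, e.g. boosting(eeg, [envelope, onsets], ...), estimates a multivariate TRF (mTRF) that jointly attributes the response to the competing predictor variables, which is the basis for the model comparisons described in the abstract.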
Data availability
The data analyzed here were originally released with DOI: 10.7302/Z29C6VNH and can be retrieved from https://deepblue.lib.umich.edu/data/concern/data_sets/bg257f92t. For the purpose of this tutorial, the data were restructured and re-released with DOI: 10.13016/pulf-lndn at http://hdl.handle.net/1903/27591. The companion GitHub repository contains code and instructions for replicating all analyses presented in the paper (https://github.com/Eelbrain/Alice).
- EEG Datasets for Naturalistic Listening to "Alice in Wonderland". Deep Blue Data. DOI: 10.7302/Z29C6VNH.
Article and author information
Funding
- National Science Foundation (BCS 1754284): Christian Brodbeck
- National Science Foundation (BCS 2043903): Christian Brodbeck
- National Science Foundation (IIS 2207770): Christian Brodbeck
- National Science Foundation (SMA 1734892): Joshua P Kulasingham, Jonathan Z Simon
- National Institutes of Health (R01 DC014085): Joshua P Kulasingham, Jonathan Z Simon
- National Institutes of Health (R01 DC019394): Jonathan Z Simon
- Fonds Wetenschappelijk Onderzoek (SB 1SA0620N): Marlies Gillis
- Office of Naval Research (MURI N00014-18-1-2670): Shohini Bhattasali, Philip Resnik
- National Institutes of Health (T32 DC017703): Phoebe Gaston
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Copyright
© 2023, Brodbeck et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.