Anticipation of temporally structured events in the brain
Abstract
Learning about temporal structure is adaptive because it enables the generation of expectations. We examined how the brain uses experience in structured environments to anticipate upcoming events. During fMRI, individuals watched a 90-second movie clip six times. Using a Hidden Markov Model applied to searchlights across the whole brain, we identified temporal shifts between activity patterns evoked by the first vs. repeated viewings of the movie clip. In many regions throughout the cortex, neural activity patterns for repeated viewings shifted to precede those of initial viewing by up to 15 seconds. This anticipation varied hierarchically in a posterior (less anticipation) to anterior (more anticipation) fashion. We also identified specific regions in which the timing of the brain's event boundaries was related to that of human-labeled event boundaries, with the timing of this relationship shifting on repeated viewings. With repeated viewing, the brain's event boundaries came to precede human-annotated boundaries by 1-4 seconds on average. Together, these results demonstrate a hierarchy of anticipatory signals in the human brain and link them to subjective experiences of events.
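The core measurement here is a temporal shift: how much earlier the same pattern sequence appears on repeated viewings. The paper does this with a Hidden Markov Model over searchlight patterns; the sketch below is a deliberately simplified stand-in (plain lagged pattern correlation on synthetic data, not the authors' HMM method) that illustrates the idea of recovering a lead in TRs between two viewings. All names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic voxel-pattern timecourses (time x voxels) for two viewings
# of the same clip. The repeated viewing is constructed to lead the
# first viewing by a known shift, standing in for anticipation.
n_tr, n_vox, true_shift = 60, 50, 5
base = rng.standard_normal((n_tr + true_shift, n_vox))
first_viewing = base[:n_tr]                       # original timing
repeated_viewing = base[true_shift:true_shift + n_tr]  # patterns arrive earlier

def best_lag(first, repeated, max_lag=10):
    """Lag (in TRs) at which repeated[t] best matches first[t + lag];
    a positive result means the repeated viewing precedes the first."""
    scores = []
    for lag in range(max_lag + 1):
        n = len(first) - lag
        # mean spatial correlation of matching timepoint patterns
        r = np.mean([np.corrcoef(first[t + lag], repeated[t])[0, 1]
                     for t in range(n)])
        scores.append(r)
    return int(np.argmax(scores))

print(best_lag(first_viewing, repeated_viewing))  # recovers the 5-TR lead
```

In the actual study, an HMM fit to the first viewing defines event-specific patterns, and the shift is read off from when those events are entered on later viewings; the lag-correlation above only captures the simplest version of that logic.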
Data availability
We used a publicly available dataset from https://openneuro.org/datasets/ds001545/versions/1.1.1
Article and author information
Author details
Funding
The authors declare that there was no funding for this work.
Reviewing Editor
- Marius V Peelen, Radboud University, Netherlands
Version history
- Received: November 17, 2020
- Accepted: April 21, 2021
- Accepted Manuscript published: April 22, 2021 (version 1)
- Version of Record published: June 1, 2021 (version 2)
Copyright
© 2021, Lee et al.
This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 4,042 page views
- 478 downloads
- 23 citations
Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.
Further reading
- Neuroscience
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time(-lags). This allows asking two questions about the predictor variables: 1) Is there a significant neural representation corresponding to this predictor variable? And if so, 2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. 
We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
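The abstract above frames mTRF estimation as multiple regression with an added time dimension: each predictor is expanded into a set of lagged copies, and the fitted weights over lags form the response function. The following is a minimal NumPy sketch of that idea on synthetic data — it is not Eelbrain's API, and real analyses would use regularized (e.g. ridge) estimation; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground truth: the response y is the stimulus x convolved
# with an unknown temporal response function (TRF), plus noise.
n, n_lags = 2000, 8
true_trf = np.array([0.0, 0.5, 1.0, 0.7, 0.3, 0.1, 0.0, 0.0])
x = rng.standard_normal(n)
y = np.convolve(x, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# Time-lagged design matrix: column k holds x delayed by k samples,
# so each row contains the recent stimulus history at one timepoint.
X = np.zeros((n, n_lags))
for k in range(n_lags):
    X[k:, k] = x[:n - k]

# Ordinary least squares over the lagged columns recovers the TRF.
trf_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

A multivariate TRF (mTRF) follows the same pattern: lagged design matrices for several predictors are stacked column-wise and fit jointly, which is what lets the regression decompose the brain signal into responses attributable to each predictor.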
- Neuroscience
The functional complementarity of the vestibulo-ocular reflex (VOR) and optokinetic reflex (OKR) allows for optimal combined gaze stabilization responses (CGR) in light. While sensory substitution has been reported following complete vestibular loss, the capacity of the central vestibular system to compensate for partial peripheral vestibular loss remains to be determined. Here, we first demonstrate the efficacy of a 6-week subchronic ototoxic protocol in inducing transient and partial vestibular loss which equally affects the canal- and otolith-dependent VORs. Immunostaining of hair cells in the vestibular sensory epithelia revealed that organ-specific alteration of type I, but not type II, hair cells correlates with functional impairments. The decrease in VOR performance is paralleled with an increase in the gain of the OKR occurring in a specific range of frequencies where VOR normally dominates gaze stabilization, compatible with a sensory substitution process. Comparison of unimodal OKR or VOR versus bimodal CGR revealed that visuo-vestibular interactions remain reduced despite a significant recovery in the VOR. Modeling and sweep-based analysis revealed that the differential capacity to optimally combine OKR and VOR correlates with the reproducibility of the VOR responses. Overall, these results shed light on the multisensory reweighting occurring in pathologies with fluctuating peripheral vestibular malfunction.