Anticipation of temporally structured events in the brain

  1. Caroline S Lee
  2. Mariam Aly
  3. Christopher Baldassano (corresponding author)
  1. Columbia University, United States

Abstract

Learning about temporal structure is adaptive because it enables the generation of expectations. We examined how the brain uses experience in structured environments to anticipate upcoming events. During fMRI, individuals watched a 90-second movie clip six times. Using a Hidden Markov Model applied to searchlights across the whole brain, we identified temporal shifts between activity patterns evoked by the first vs. repeated viewings of the movie clip. In many regions throughout the cortex, neural activity patterns for repeated viewings shifted to precede those of initial viewing by up to 15 seconds. This anticipation varied hierarchically, from posterior regions (less anticipation) to anterior regions (more anticipation). We also identified specific regions in which the timing of the brain's event boundaries was related to that of human-labeled event boundaries, with the timing of this relationship shifting on repeated viewings. With repeated viewing, the brain's event boundaries came to precede human-annotated boundaries by 1-4 seconds on average. Together, these results demonstrate a hierarchy of anticipatory signals in the human brain and link them to subjective experiences of events.
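The core idea behind the temporal-shift analysis can be illustrated without the authors' full pipeline. The paper fit a Hidden Markov Model to searchlight activity; the sketch below is a simplified stand-in that simulates event-structured voxel patterns for a first and a repeated viewing, shifts the repeated viewing earlier in time, and recovers that shift by lagged pattern correlation. All variable names, sizes, and parameter values here are invented for the example and do not reflect the actual dataset or model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_vox, n_events = 60, 50, 6
shift = 3  # ground-truth anticipation, in TRs

# Each event evokes a stable spatial pattern across voxels.
event_patterns = rng.standard_normal((n_events, n_vox))
labels_first = np.repeat(np.arange(n_events), n_time // n_events)

# The repeated viewing replays the same event sequence 'shift' TRs earlier.
labels_repeat = np.concatenate([labels_first[shift:],
                                np.full(shift, n_events - 1)])

noise = 0.1
first = event_patterns[labels_first] + noise * rng.standard_normal((n_time, n_vox))
repeat = event_patterns[labels_repeat] + noise * rng.standard_normal((n_time, n_vox))

def mean_pattern_corr(a, b):
    """Mean Pearson correlation between corresponding timepoint patterns."""
    az = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    bz = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return (az * bz).mean()

# Slide the first viewing backward in time and find the lag at which
# the repeated viewing best matches it.
lags = np.arange(8)
scores = [mean_pattern_corr(repeat[:n_time - lag], first[lag:]) for lag in lags]
best_lag = int(lags[np.argmax(scores)])
print(f"estimated anticipation: {best_lag} TRs")
```

In the actual study, the shift was estimated per searchlight from HMM event probabilities rather than raw lagged correlation, which is what allowed anticipation to vary continuously across the cortical hierarchy.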

Data availability

We used a publicly available dataset: https://openneuro.org/datasets/ds001545/versions/1.1.1


Article and author information

Author details

  1. Caroline S Lee

     Department of Psychology, Columbia University, New York, United States
  2. Mariam Aly

     Department of Psychology, Columbia University, New York, United States
     ORCID iD: 0000-0003-4033-6134
  3. Christopher Baldassano

     Department of Psychology, Columbia University, New York, United States
     For correspondence: c.baldassano@columbia.edu
     ORCID iD: 0000-0003-3540-5019

Competing interests

The authors declare that no competing interests exist.

Funding

The authors declare that there was no funding for this work.

Copyright

© 2021, Lee et al.

This article is distributed under the terms of the Creative Commons Attribution License permitting unrestricted use and redistribution provided that the original author and source are credited.


Cite this article

  1. Caroline S Lee
  2. Mariam Aly
  3. Christopher Baldassano
(2021)
Anticipation of temporally structured events in the brain
eLife 10:e64972.
https://doi.org/10.7554/eLife.64972
