Neuroscience

Temporally delayed linear modelling (TDLM) measures replay in both animals and humans

  1. Yunzhe Liu (corresponding author)
  2. Raymond J Dolan
  3. Cameron Higgins
  4. Hector Penagos
  5. Mark W Woolrich
  6. H Freyja Ólafsdóttir
  7. Caswell Barry
  8. Zeb Kurth-Nelson
  9. Timothy E Behrens
  1. University College London, United Kingdom
  2. University of Oxford, United Kingdom
  3. Massachusetts Institute of Technology, United States
  4. Donders Institute for Brain Cognition and Behaviour, Radboud University, Netherlands
  5. DeepMind, United Kingdom
Research Article
Cite this article as: eLife 2021;10:e66917 doi: 10.7554/eLife.66917

Abstract

There are rich structures in off-task neural activity that are hypothesised to reflect fundamental computations across a broad spectrum of cognitive functions. Here, we develop an analysis toolkit, Temporally Delayed Linear Modelling (TDLM), for analysing such activity. TDLM is a domain-general method for finding neural sequences that respect a pre-specified transition graph. It combines nonlinear classification and linear temporal modelling to test for statistical regularities in sequences of task-related reactivations. TDLM was developed on non-invasive neuroimaging data and is designed to control for confounds and maximise sequence detection ability. Notably, as a linear framework, TDLM can be easily extended, without loss of generality, to capture rodent replay in electrophysiology, including in continuous spaces, and to address second-order inference questions, for example, how sequences vary over time and space. We hope TDLM will advance a deeper understanding of neural computation and promote a richer convergence between animal and human neuroscience.
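For readers wanting a concrete picture of the core computation, the following is a minimal Python/NumPy sketch of the two-level linear modelling idea described in the abstract, assuming a decoded reactivation matrix X (time x states, e.g. classifier probabilities) and a hypothesised transition matrix T. It is an illustration only, not the authors' released implementation (the reference code is at https://github.com/YunzheLiu/TDLM); the function name sequenceness, the default lag range, and the exact regressor set are assumptions made for this sketch.

import numpy as np

def sequenceness(X, T, max_lag=60):
    """Forward and backward sequenceness of graph T at each time lag (in samples).

    Illustrative sketch of TDLM's two-level GLM, not the released toolkit.
    X: (time x states) decoded reactivation probabilities.
    T: (states x states) hypothesised transition matrix.
    """
    n_t, n_states = X.shape
    fwd = np.zeros(max_lag + 1)
    bwd = np.zeros(max_lag + 1)
    for lag in range(1, max_lag + 1):
        # First-level GLM: predict each state's reactivation from all states
        # `lag` samples earlier; the betas form an empirical transition matrix.
        X_past = X[:-lag]                          # predictors at time t - lag
        X_now = X[lag:]                            # outcomes at time t
        design = np.column_stack([X_past, np.ones(n_t - lag)])
        betas = np.linalg.lstsq(design, X_now, rcond=None)[0]
        emp = betas[:n_states, :]                  # (from state) x (to state)
        # Second-level GLM: regress the empirical transitions onto the
        # hypothesised forward and backward transitions, plus self-transition
        # and constant regressors to absorb autocorrelation confounds.
        regs = np.column_stack([T.ravel(), T.T.ravel(),
                                np.eye(n_states).ravel(),
                                np.ones(n_states ** 2)])
        coefs = np.linalg.lstsq(regs, emp.ravel(), rcond=None)[0]
        fwd[lag], bwd[lag] = coefs[0], coefs[1]
    return fwd, bwd

As a usage example under these assumptions, with states decoded at 100 Hz, max_lag=60 scans state-to-state lags up to 600 ms; evidence for forward replay of the hypothesised graph would appear as a peak in fwd at the characteristic lag, with significance typically assessed against sequenceness computed from permuted state assignments.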

Data availability

No new data were used or generated in the current paper. Data relevant to the current paper are available at https://github.com/YunzheLiu/TDLM. This dataset is from Ólafsdóttir HF, Carpenter F, Barry C (2016). Coordinated grid and place cell replay during rest. Nature Neuroscience 19(6):792-794.

Article and author information

Author details

  1. Yunzhe Liu

    Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
    For correspondence
    yunzhe.liu.16@ucl.ac.uk
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0003-0836-9403
  2. Raymond J Dolan

    The Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0001-9356-761X
  3. Cameron Higgins

    Department of Psychiatry, University of Oxford, Oxford, United Kingdom
    Competing interests
    No competing interests declared.
  4. Hector Penagos

    Picower Institute for Learning and Memory; Center for Brains, Minds and Machines; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
    Competing interests
    No competing interests declared.
  5. Mark W Woolrich

    Oxford Centre for Human Brain Activity (OHBA), Wellcome Centre for Integrative NeuroImaging, Department of Psychiatry, University of Oxford, Oxford, United Kingdom
    Competing interests
    No competing interests declared.
  6. H Freyja Ólafsdóttir

    Donders Institute for Brain Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
    Competing interests
    No competing interests declared.
  7. Caswell Barry

    Cell and Developmental Biology, University College London, London, United Kingdom
    Competing interests
    No competing interests declared.
  8. Zeb Kurth-Nelson

    Neuroscience Team, DeepMind, London, United Kingdom
    Competing interests
    Zeb Kurth-Nelson is affiliated with DeepMind. The author has no other competing interests to declare.
  9. Timothy E Behrens

    Wellcome Trust Centre for Neuroimaging, University of Oxford, Oxford, United Kingdom
    Competing interests
    Timothy E Behrens is a Senior editor at eLife.
    ORCID iD: 0000-0003-0048-1177

Funding

Wellcome (098362/Z/12/Z)

  • Raymond J Dolan

Wellcome (104765/Z/14/Z)

  • Timothy E Behrens

Wellcome (219525/Z/19/Z)

  • Timothy E Behrens

James S. McDonnell Foundation (JSMF220020372)

  • Timothy E Behrens

Wellcome (212281/Z/18/Z)

  • Caswell Barry

Max Planck

  • Yunzhe Liu

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: The human dataset used in this study was reported in Liu et al., 2019. All participants were recruited from the UCL Institute of Cognitive Neuroscience subject pool, had normal or corrected-to-normal vision, had no history of psychiatric or neurological disorders, and provided written informed consent prior to the start of the experiment, which was approved by the Research Ethics Committee at University College London (UK), under ethics number 9929/002.

Reviewing Editor

  1. Caleb Kemere, Rice University, United States

Publication history

  1. Received: January 26, 2021
  2. Accepted: June 6, 2021
  3. Accepted Manuscript published: June 7, 2021 (version 1)

Copyright

© 2021, Liu et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


