Large-scale neural dynamics in a shared low-dimensional state space reflect cognitive and attentional dynamics

  1. Hayoung Song (corresponding author)
  2. Won Mok Shim (corresponding author)
  3. Monica D Rosenberg (corresponding author)
  1. University of Chicago, United States
  2. Sungkyunkwan University, Republic of Korea

Abstract

Cognition and attention arise from the adaptive coordination of neural systems in response to external and internal demands. The low-dimensional latent subspace that underlies large-scale neural dynamics and the relationships of these dynamics to cognitive and attentional states, however, are unknown. We conducted functional magnetic resonance imaging as human participants performed attention tasks, watched comedy sitcom episodes and an educational documentary, and rested. Whole-brain dynamics traversed a common set of latent states that spanned canonical gradients of functional brain organization, with global desynchronization among functional networks modulating state transitions. Neural state dynamics were synchronized across people during engaging movie watching and aligned to narrative event structures. Neural state dynamics reflected attention fluctuations such that different states indicated engaged attention in task and naturalistic contexts whereas a common state indicated attention lapses in both contexts. Together, these results demonstrate that traversals along large-scale gradients of human brain organization reflect cognitive and attentional dynamics.
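The abstract does not describe the implementation, but the kind of analysis it summarizes can be illustrated with a minimal sketch, assuming (as one plausible approach, not necessarily the authors' pipeline) that latent neural states are recovered by fitting a hidden Markov model to BOLD time series projected onto a low-dimensional space. The synthetic data, dimensionalities, and parameter values below are placeholders; the published analysis scripts are linked in the Data availability section.

```python
# Hypothetical sketch: identify recurring whole-brain "states" from parcel-wise
# BOLD time series by (1) projecting onto a few latent dimensions and
# (2) fitting a hidden Markov model (HMM) whose discrete states and transition
# structure stand in for the latent neural states described in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
bold = rng.standard_normal((600, 200))    # placeholder: 600 TRs x 200 parcels

# Low-dimensional projection. The paper relates states to gradients of
# functional brain organization; plain PCA stands in here for illustration.
latent = PCA(n_components=3).fit_transform(bold)

# Fit an HMM with a small, assumed number of states and decode one state
# label per time point (TR).
hmm = GaussianHMM(n_components=4, covariance_type="full",
                  n_iter=200, random_state=0)
hmm.fit(latent)
states = hmm.predict(latent)

print(np.bincount(states))        # time spent in each state
print(hmm.transmat_.round(2))     # between-state transition probabilities
```

Given decoded state sequences of this kind for each participant, quantities discussed in the abstract, such as cross-participant synchronization of state dynamics during movie watching or the alignment of state transitions to narrative events, could be computed by comparing state time courses across individuals and against event boundaries.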

Data availability

Raw fMRI data from the SitcOm, Nature documentary, Gradual-onset continuous performance task (SONG) dataset are available on OpenNeuro: https://openneuro.org/datasets/ds004592/versions/1.0.1. Behavioral data, processed fMRI output, and main analysis scripts are published on GitHub: https://github.com/hyssong/neuraldynamics.
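For convenience, the raw dataset can also be fetched programmatically. The snippet below uses the openneuro-py package as one option (an assumption about tooling, not a requirement; the dataset can equally be downloaded from the OpenNeuro page above). The target directory name is illustrative.

```python
# Hypothetical download sketch using openneuro-py (pip install openneuro-py).
# The accession number and version tag come from the OpenNeuro URL above.
import openneuro

openneuro.download(
    dataset="ds004592",   # SONG dataset accession number
    tag="1.0.1",          # version cited in the data availability statement
    target_dir="SONG",    # local output directory (assumed name)
)
```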

The following data sets were generated
    1. Song H
    2. Shim WM
    3. Rosenberg MD
    (2023) SONG dataset
    https://doi.org/10.18112/openneuro.ds004592.v1.0.1

Article and author information

Author details

  1. Hayoung Song

    Department of Psychology, University of Chicago, Chicago, United States
    For correspondence
    hyssong@uchicago.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-5970-8076
  2. Won Mok Shim

    Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea
    For correspondence
    wonmokshim@skku.edu
    Competing interests
    The authors declare that no competing interests exist.
  3. Monica D Rosenberg

    Department of Psychology, University of Chicago, Chicago, United States
    For correspondence
    mdrosenberg@uchicago.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-6179-4025

Funding

Institute for Basic Science (R015-D1)

  • Won Mok Shim

National Research Foundation of Korea (NRF-2019M3E5D2A01060299)

  • Won Mok Shim

National Research Foundation of Korea (NRF-2019R1A2C1085566)

  • Won Mok Shim

National Science Foundation (BCS-2043740)

  • Monica D Rosenberg

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Shella Keilholz, Emory University and Georgia Institute of Technology, United States

Ethics

Human subjects: Informed consent and consent to publish were obtained from the participants prior to the experiments, and the possible consequences of the study were explained. The study was approved by the Institutional Review Board of Sungkyunkwan University.

Version history

  1. Preprint posted: November 5, 2022
  2. Received: December 10, 2022
  3. Accepted: June 16, 2023
  4. Accepted Manuscript published: July 3, 2023 (version 1)
  5. Version of Record published: August 3, 2023 (version 2)

Copyright

© 2023, Song et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 3,080 views
  • 412 downloads
  • 6 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.


Cite this article

  1. Hayoung Song
  2. Won Mok Shim
  3. Monica D Rosenberg
(2023)
Large-scale neural dynamics in a shared low-dimensional state space reflect cognitive and attentional dynamics
eLife 12:e85487.
https://doi.org/10.7554/eLife.85487

