Neuroscout, a unified platform for generalizable and reproducible fMRI research

  1. Alejandro de la Vega (corresponding author)
  2. Roberta Rocca
  3. Ross W Blair
  4. Christopher J Markiewicz
  5. Jeff Mentch
  6. James D Kent
  7. Peer Herholz
  8. Satrajit S Ghosh
  9. Russell A Poldrack
  10. Tal Yarkoni
  1. The University of Texas at Austin, United States
  2. Aarhus University, Denmark
  3. Stanford University, United States
  4. Massachusetts Institute of Technology, United States
  5. McGill University, Canada

Abstract

Functional magnetic resonance imaging (fMRI) has revolutionized cognitive neuroscience, but methodological barriers limit the generalizability of findings from the lab to the real world. Here, we present Neuroscout, an end-to-end platform for analysis of naturalistic fMRI data designed to facilitate the adoption of robust and generalizable research practices. Neuroscout leverages state-of-the-art machine learning models to automatically annotate stimuli from dozens of fMRI studies using naturalistic stimuli, such as movies and narratives, allowing researchers to easily test neuroscientific hypotheses across multiple ecologically valid datasets. In addition, Neuroscout builds on a robust ecosystem of open tools and standards to provide an easy-to-use analysis builder and a fully automated execution engine that reduce the burden of reproducible research. Through a series of meta-analytic case studies, we validate the automatic feature extraction approach and demonstrate its potential to support more robust fMRI research. Owing to its ease of use and high degree of automation, Neuroscout makes it possible to overcome modeling challenges commonly arising in naturalistic analysis and to easily scale analyses within and across datasets, democratizing generalizable fMRI research.

Data availability

All code from our processing pipeline and core infrastructure is available online (https://www.github.com/neuroscout/neuroscout). An online supplement, including all analysis code and resulting images, is available as a public GitHub repository (https://github.com/neuroscout/neuroscout-paper).

The following previously published data sets were used
    1. Hanke M, et al. (2014) studyforrest. OpenNeuro, doi:10.18112/openneuro.ds000113.v1.3.0
    2. Nastase SA, et al. (2021) Narratives. OpenNeuro, doi:10.18112/openneuro.ds002345.v1.1.4

Article and author information

Author details

  1. Alejandro de la Vega

    Department of Psychology, The University of Texas at Austin, Austin, United States
    For correspondence
    delavega@utexas.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-9062-3778
  2. Roberta Rocca

    Interacting Minds Centre, Aarhus University, Aarhus, Denmark
    Competing interests
    The authors declare that no competing interests exist.
  3. Ross W Blair

    Department of Psychology, Stanford University, Stanford, United States
    Competing interests
    The authors declare that no competing interests exist.
  4. Christopher J Markiewicz

    Department of Psychology, Stanford University, Stanford, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-6533-164X
  5. Jeff Mentch

    McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
    Competing interests
    The authors declare that no competing interests exist.
  6. James D Kent

    Department of Psychology, The University of Texas at Austin, Austin, United States
    Competing interests
    The authors declare that no competing interests exist.
  7. Peer Herholz

    Montreal Neurological Institute, McGill University, Montreal, Canada
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-9840-6257
  8. Satrajit S Ghosh

    McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0002-5312-6729
  9. Russell A Poldrack

    Department of Psychology, Stanford University, Stanford, United States
    Competing interests
    The authors declare that no competing interests exist.
  10. Tal Yarkoni

    Department of Psychology, The University of Texas at Austin, Austin, United States
    Competing interests
    The authors declare that no competing interests exist.

Funding

National Institute of Mental Health (R01MH109682)

  • Alejandro de la Vega
  • Roberta Rocca
  • Ross W Blair
  • Christopher J Markiewicz
  • Jeff Mentch
  • James D Kent
  • Peer Herholz
  • Satrajit S Ghosh
  • Russell A Poldrack
  • Tal Yarkoni

National Institute of Mental Health (R01MH096906)

  • Alejandro de la Vega
  • James D Kent
  • Tal Yarkoni

National Institute of Mental Health (R24MH117179)

  • Ross W Blair
  • Christopher J Markiewicz
  • Peer Herholz
  • Satrajit S Ghosh
  • Russell A Poldrack

Canada First Research Excellence Fund

  • Peer Herholz

Brain Canada Foundation

  • Peer Herholz

Unifying Neuroscience and Artificial Intelligence - Québec

  • Peer Herholz

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Copyright

© 2022, de la Vega et al.

This article is distributed under the terms of the Creative Commons Attribution License, permitting unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 1,375 views
  • 236 downloads
  • 6 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

de la Vega A, Rocca R, Blair RW, Markiewicz CJ, Mentch J, Kent JD, Herholz P, Ghosh SS, Poldrack RA, Yarkoni T (2022) Neuroscout, a unified platform for generalizable and reproducible fMRI research. eLife 11:e79277. https://doi.org/10.7554/eLife.79277


Further reading

    1. Neuroscience
    Song Chang, Beilin Zheng ... Liping Yu
    Research Article

    Multisensory object discrimination is essential in everyday life, yet the neural mechanisms underlying this process remain unclear. In this study, we trained rats to perform a two-alternative forced-choice task using both auditory and visual cues. Our findings reveal that multisensory perceptual learning actively engages auditory cortex (AC) neurons in both visual and audiovisual processing. Importantly, many audiovisual neurons in the AC exhibited experience-dependent associations between their visual and auditory preferences, displaying a unique integration model. This model employed selective multisensory enhancement for the auditory-visual pairing guiding the contralateral choice, which correlated with improved multisensory discrimination. Furthermore, AC neurons effectively distinguished whether a preferred auditory stimulus was paired with its associated visual stimulus using this distinct integrative mechanism. Our results highlight the capability of sensory cortices to develop sophisticated integrative strategies, adapting to task demands to enhance multisensory discrimination abilities.

    1. Computational and Systems Biology
    2. Neuroscience
    Brian DePasquale, Carlos D Brody, Jonathan W Pillow
    Research Article Updated

    Accumulating evidence to make decisions is a core cognitive function. Previous studies have tended to estimate accumulation using either neural or behavioral data alone. Here, we develop a unified framework for modeling stimulus-driven behavior and multi-neuron activity simultaneously. We applied our method to choices and neural recordings from three rat brain regions—the posterior parietal cortex (PPC), the frontal orienting fields (FOF), and the anterior-dorsal striatum (ADS)—while subjects performed a pulse-based accumulation task. Each region was best described by a distinct accumulation model, which all differed from the model that best described the animal’s choices. FOF activity was consistent with an accumulator where early evidence was favored while the ADS reflected near perfect accumulation. Neural responses within an accumulation framework unveiled a distinct association between each brain region and choice. Choices were better predicted from all regions using a comprehensive, accumulation-based framework and different brain regions were found to differentially reflect choice-related accumulation signals: FOF and ADS both reflected choice but ADS showed more instances of decision vacillation. Previous studies relating neural data to behaviorally inferred accumulation dynamics have implicitly assumed that individual brain regions reflect the whole-animal level accumulator. Our results suggest that different brain regions represent accumulated evidence in dramatically different ways and that accumulation at the whole-animal level may be constructed from a variety of neural-level accumulators.