Neuroscout, a unified platform for generalizable and reproducible fMRI research
Abstract
Functional magnetic resonance imaging (fMRI) has revolutionized cognitive neuroscience, but methodological barriers limit the generalizability of findings from the lab to the real world. Here, we present Neuroscout, an end-to-end platform for analysis of naturalistic fMRI data designed to facilitate the adoption of robust and generalizable research practices. Neuroscout leverages state-of-the-art machine learning models to automatically annotate stimuli from dozens of fMRI studies using naturalistic stimuli, such as movies and narratives, allowing researchers to easily test neuroscientific hypotheses across multiple ecologically valid datasets. In addition, Neuroscout builds on a robust ecosystem of open tools and standards to provide an easy-to-use analysis builder and a fully automated execution engine that reduce the burden of reproducible research. Through a series of meta-analytic case studies, we validate the automatic feature extraction approach and demonstrate its potential to support more robust fMRI research. Owing to its ease of use and high degree of automation, Neuroscout makes it possible to overcome modeling challenges commonly arising in naturalistic analysis and to easily scale analyses within and across datasets, democratizing generalizable fMRI research.
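The analysis builder and execution engine described above are also exposed programmatically. As a brief illustration, here is a minimal sketch of building an analysis with pyNS, the Neuroscout Python API client (https://github.com/neuroscout/pyns); the specific method and keyword names (`create_analysis`, `hrf_variables`, `compile`) follow the client's documented interface but are assumptions insofar as this article does not reproduce them, and the dataset and predictor names are illustrative.

```python
# Minimal sketch of programmatic access to Neuroscout via the pyNS client
# (https://github.com/neuroscout/pyns). Method and argument names follow
# the client's documented interface but may differ across versions.
from pyns import Neuroscout

api = Neuroscout()  # anonymous, read-only access to the public API

# Discover available naturalistic datasets and their extracted predictors
datasets = api.datasets.get()
print([d['name'] for d in datasets])

# Build a simple model: the automatically extracted 'speech' feature,
# convolved with an HRF, on the Budapest dataset (names illustrative)
analysis = api.analyses.create_analysis(
    name='speech-example',
    dataset_name='Budapest',
    predictor_names=['speech'],
    hrf_variables=['speech'],
)
analysis.compile()  # submit the model for validation and bundle creation
```

Compiled analyses can then be executed end to end with the neuroscout-cli runner, distributed as a container image, so the same model specification can be re-run identically on any machine.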
Data availability
All code from our processing pipeline and core infrastructure is available online (https://www.github.com/neuroscout/neuroscout). An online supplement including all analysis code and resulting images is available as a public GitHub repository (https://github.com/neuroscout/neuroscout-paper). The following previously published datasets were used (a programmatic download sketch follows the list):
- studyforrest. OpenNeuro, doi:10.18112/openneuro.ds000113.v1.3.0.
- Learning Temporal Structure. OpenNeuro, doi:10.18112/openneuro.ds001545.v1.1.1.
- Sherlock. OpenNeuro, doi:10.18112/openneuro.ds001132.v1.0.0.
- Schematic Narrative. OpenNeuro, doi:10.18112/openneuro.ds001510.v2.0.2.
- ParanoiaStory. OpenNeuro, doi:10.18112/openneuro.ds001338.v1.0.0.
- Budapest. OpenNeuro, doi:10.18112/openneuro.ds003017.v1.0.3.
- Naturalistic Neuroimaging Database. OpenNeuro, doi:10.18112/openneuro.ds002837.v2.0.0.
- Narratives. OpenNeuro, doi:10.18112/openneuro.ds002345.v1.1.4.
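All of the datasets above are archived on OpenNeuro, which mirrors each accession as a DataLad repository under https://github.com/OpenNeuroDatasets/. Below is a minimal sketch of fetching one of them with DataLad's Python API; the accession ds000113 corresponds to the studyforrest DOI above, and the subject path is illustrative.

```python
# Minimal sketch: fetch an OpenNeuro dataset listed above using DataLad's
# Python API. OpenNeuro mirrors each accession as a DataLad/git-annex
# repository under https://github.com/OpenNeuroDatasets/.
import datalad.api as dl

# Clone the lightweight dataset skeleton (metadata only, no imaging data yet)
ds = dl.install(
    source='https://github.com/OpenNeuroDatasets/ds000113.git',
    path='ds000113',
)

# Download file content on demand, e.g. a single subject's data
ds.get('sub-01')
```

Because DataLad retrieves file content lazily, `ds.get` can be scoped to individual files or directories, which keeps analyses spanning multiple large datasets tractable.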
Article and author information
Funding
National Institute of Mental Health (R01MH109682)
- Alejandro de la Vega
- Roberta Rocca
- Ross W Blair
- Christopher J Markiewicz
- Jeff Mentch
- James D Kent
- Peer Herholz
- Satrajit S Ghosh
- Russell A Poldrack
- Tal Yarkoni
National Institute of Mental Health (R01MH096906)
- Alejandro de la Vega
- James D Kent
- Tal Yarkoni
National Institute of Mental Health (R24MH117179)
- Peer Herholz
- Satrajit S Ghosh
- Ross W Blair
- Christopher J Markiewicz
- Russell A Poldrack
Canada First Research Excellence Fund
- Peer Herholz
Brain Canada Foundation
- Peer Herholz
Unifying Neuroscience and Artificial Intelligence - Québec
- Peer Herholz
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Copyright
© 2022, de la Vega et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.