THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior
Abstract
Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely-sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly-annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and advancing cognitive neuroscience.
Data availability
All parts of the THINGS-data collection are freely available on scientific data repositories. We provide the raw MRI (https://openneuro.org/datasets/ds004192) and raw MEG (https://openneuro.org/datasets/ds004212) datasets in BIDS format [98] on OpenNeuro [109]. In addition to these raw datasets, we provide the raw and preprocessed MEG data as well as the raw and derivative MRI data on Figshare [110] (https://doi.org/10.25452/figshare.plus.c.6161151). The MEG data derivatives include preprocessed and epoched data that are compatible with MNE-Python and with CoSMoMVPA in MATLAB. The MRI data derivatives include single-trial response estimates, category-selective and retinotopic regions of interest, cortical flatmaps, independent-component-based noise regressors, voxel-wise noise ceilings, and estimates of subject-specific retinotopic parameters. In addition, we included the preprocessed and epoched eye-tracking data that were recorded during the MEG experiment in the OpenNeuro repository. The behavioral triplet odd-one-out dataset can be accessed on OSF (https://osf.io/f5rn6/, https://doi.org/10.17605/OSF.IO/F5RN6).
- THINGS-fMRI: OpenNeuro, doi:10.18112/openneuro.ds004192.v1.0.5
- THINGS-MEG: OpenNeuro, doi:10.18112/openneuro.ds004212.v2.0.0
- THINGS-odd-one-out: Open Science Framework, doi:10.17605/OSF.IO/F5RN6
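For readers who want to work with these repositories programmatically, the following is a minimal sketch of one possible access route, assuming the openneuro-py and MNE-Python packages are installed. The dataset accession (ds004192) is taken from the links above, but the `include` filter and the epochs file name are hypothetical placeholders; the actual file layout is documented in the respective repositories.

```python
# Minimal sketch: accessing THINGS-data programmatically.
# Assumes `pip install openneuro-py mne`; file paths below are placeholders.

import openneuro
import mne

# Fetch a subset of the raw THINGS-fMRI BIDS dataset from OpenNeuro (ds004192).
# The `include` pattern is only an example; adjust it to the files you need.
openneuro.download(
    dataset="ds004192",
    target_dir="data/things-fmri",
    include=["sub-01/ses-things01/*"],
)

# Read preprocessed, epoched MEG data (from the Figshare derivatives) with MNE-Python.
# The file name is a hypothetical placeholder for an epochs file in .fif format.
epochs = mne.read_epochs("data/things-meg/preprocessed_P1-epo.fif", preload=False)
print(epochs.info)
```

Alternatively, OpenNeuro datasets are typically also mirrored as DataLad repositories, which retrieve file content on demand rather than downloading everything up front.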
Article and author information
Funding
National Institutes of Health (ZIA-MH-002909)
- Martin N Hebart
- Lina Teichmann
- Adam H Rockter
- Alexis Kidder
- Anna Corriveau
- Maryam Vaziri-Pashkam
- Chris I Baker
National Institutes of Health (ZIC-MH002968)
- Charles Y Zheng
Max-Planck-Gesellschaft (Max Planck Research Group M.TN.A.NEPF0009)
- Martin N Hebart
- Oliver Contier
European Research Council (Starting Grant StG-2021-101039712)
- Martin N Hebart
Hessisches Ministerium für Wissenschaft und Kunst (LOEWE Start Professorship)
- Martin N Hebart
Max Planck School of Cognition
- Oliver Contier
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: All research participants for the fMRI and MEG studies provided informed consent for participation and data sharing, and they received financial compensation for taking part in the respective studies. The research was approved by the NIH Institutional Review Board as part of the study protocol 93-M-0170 (NCT00001360). All research participants taking part in the online behavioral study provided informed consent for participation in the study. The online study was conducted in accordance with all relevant ethical regulations and approved by the NIH Office of Human Research Subject Protection (OHSRP).
Copyright
This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
Metrics
- 9,459 views
- 1,243 downloads
- 35 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.
Further reading
- Neuroscience
Studying infant minds with movies is a promising way to increase engagement relative to traditional tasks. However, the spatial specificity and functional significance of movie-evoked activity in infants remain unclear. Here, we investigated what movies can reveal about the organization of the infant visual system. We collected fMRI data from 15 awake infants and toddlers aged 5–23 months who attentively watched a movie. The activity evoked by the movie reflected the functional profile of visual areas. Namely, homotopic areas from the two hemispheres responded similarly to the movie, whereas distinct areas responded dissimilarly, especially across dorsal and ventral visual cortex. Moreover, visual maps that typically require time-intensive and complicated retinotopic mapping could be predicted, albeit imprecisely, from movie-evoked activity both in data-driven analyses (i.e., independent component analysis) at the individual level and by using functional alignment into a common low-dimensional embedding to generalize across participants. These results suggest that the infant visual system is already structured to process dynamic, naturalistic information and that fine-grained cortical organization can be discovered from movie data.
- Neuroscience
Outcomes can vary even when choices are repeated. Such ambiguity necessitates adjusting how much to learn from each outcome by tracking its variability. The medial prefrontal cortex (mPFC) has been reported to signal the expected outcome and its discrepancy from the actual outcome (prediction error), two variables essential for controlling the learning rate. However, the source of signals that shape these coding properties remains unknown. Here, we investigated the contribution of cholinergic projections from the basal forebrain because they carry precisely timed signals about outcomes. One-photon calcium imaging revealed that as mice learned different probabilities of threat occurrence on two paths, some mPFC cells responded to threats on one of the paths, while other cells gained responses to threat omission. These threat- and omission-evoked responses were scaled to the unexpectedness of outcomes, some exhibiting a reversal in response direction when encountering surprising threats as opposed to surprising omissions. This selectivity for signed prediction errors was enhanced by optogenetic stimulation of local cholinergic terminals during threats. The enhanced threat-evoked cholinergic signals also made mice erroneously abandon the correct choice after a single threat that violated expectations, thereby decoupling their path choice from the history of threat occurrence on each path. Thus, acetylcholine modulates the encoding of surprising outcomes in the mPFC to control how much they dictate future decisions.