THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior

  1. Martin N Hebart (corresponding author)
  2. Oliver Contier
  3. Lina Teichmann
  4. Adam H Rockter
  5. Charles Y Zheng
  6. Alexis Kidder
  7. Anna Corriveau
  8. Maryam Vaziri-Pashkam
  9. Chris I Baker
  1. Max Planck Institute for Human Cognitive and Brain Sciences, Germany
  2. National Institute of Mental Health, United States

Abstract

Understanding object representations requires a broad, comprehensive sampling of the objects in our visual world with dense measurements of brain activity and behavior. Here we present THINGS-data, a multimodal collection of large-scale neuroimaging and behavioral datasets in humans, comprising densely sampled functional MRI and magnetoencephalographic recordings, as well as 4.70 million similarity judgments in response to thousands of photographic images for up to 1,854 object concepts. THINGS-data is unique in its breadth of richly annotated objects, allowing for testing countless hypotheses at scale while assessing the reproducibility of previous findings. Beyond the unique insights promised by each individual dataset, the multimodality of THINGS-data allows combining datasets for a much broader view into object processing than previously possible. Our analyses demonstrate the high quality of the datasets and provide five examples of hypothesis-driven and data-driven applications. THINGS-data constitutes the core public release of the THINGS initiative (https://things-initiative.org) for bridging the gap between disciplines and advancing cognitive neuroscience.

Data availability

All parts of the THINGS-data collection are freely available on scientific data repositories. We provide the raw MRI (https://openneuro.org/datasets/ds004192) and raw MEG (https://openneuro.org/datasets/ds004212) datasets in BIDS format [98] on OpenNeuro [109]. In addition to these raw datasets, we provide the raw and preprocessed MEG data as well as the raw and derivative MRI data on Figshare [110] (https://doi.org/10.25452/figshare.plus.c.6161151). The MEG data derivatives include preprocessed and epoched data that are compatible with MNE-Python and with CoSMoMVPA in MATLAB. The MRI data derivatives include single-trial response estimates, category-selective and retinotopic regions of interest, cortical flatmaps, independent-component-based noise regressors, voxel-wise noise ceilings, and estimates of subject-specific retinotopic parameters. The OpenNeuro repository additionally includes the preprocessed and epoched eye-tracking data that were recorded during the MEG experiment. The behavioral triplet odd-one-out dataset can be accessed on OSF (https://osf.io/f5rn6/, https://doi.org/10.17605/OSF.IO/F5RN6).
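For readers who want to work with these resources programmatically, the sketch below fetches the raw fMRI dataset from OpenNeuro and loads the MEG and behavioral derivatives. The accession numbers and repository links are those given above; the openneuro-py package and all file and directory names in the sketch are assumptions to be checked against the documentation that accompanies each repository.

```python
# Minimal sketch of programmatic access to THINGS-data. Accession numbers come
# from the Data availability statement; openneuro-py and every file/directory
# name below are assumptions -- consult the repository documentation.
import mne
import pandas as pd
from openneuro import download  # pip install openneuro-py

# Fetch the raw fMRI dataset from OpenNeuro by its accession number.
download(dataset="ds004192", target_dir="THINGS-fMRI")

# The preprocessed, epoched MEG derivatives (Figshare) are MNE-Python
# compatible; "sub-01-epo.fif" is a hypothetical file name.
epochs = mne.read_epochs("THINGS-MEG/derivatives/sub-01-epo.fif")
print(epochs)

# The triplet odd-one-out judgments (OSF) are tabular; hypothetically, one row
# per trial listing the three objects shown and the one chosen as odd one out.
triplets = pd.read_csv("THINGS-odd-one-out/triplets.csv")
print(triplets.head())
```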

Article and author information

Author details

  1. Martin N Hebart

    Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
    For correspondence
    hebart@cbs.mpg.de
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0001-7257-428X
  2. Oliver Contier

    Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0002-2983-4709
  3. Lina Teichmann

    Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
    Competing interests
    No competing interests declared.
  4. Adam H Rockter

    Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0002-2446-717X
  5. Charles Y Zheng

    Machine Learning Team, National Institute of Mental Health, Bethesda, United States
    Competing interests
    No competing interests declared.
  6. Alexis Kidder

    Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
    Competing interests
    No competing interests declared.
  7. Anna Corriveau

    Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
    Competing interests
    No competing interests declared.
  8. Maryam Vaziri-Pashkam

    Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
    Competing interests
    No competing interests declared.
    ORCID iD: 0000-0003-1830-2501
  9. Chris I Baker

    Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, United States
    Competing interests
    Chris I Baker, Senior editor, eLife.
    ORCID iD: 0000-0001-6861-8964

Funding

National Institutes of Health (ZIA-MH-002909)

  • Martin N Hebart
  • Lina Teichmann
  • Adam H Rockter
  • Alexis Kidder
  • Anna Corriveau
  • Maryam Vaziri-Pashkam
  • Chris I Baker

National Institutes of Health (ZIC-MH002968)

  • Charles Y Zheng

Max-Planck-Gesellschaft (Max Planck Research Group M.TN.A.NEPF0009)

  • Martin N Hebart
  • Oliver Contier

European Research Council (Starting Grant StG-2021-101039712)

  • Martin N Hebart

Hessisches Ministerium für Wissenschaft und Kunst (LOEWE Start Professorship)

  • Martin N Hebart

Max Planck School of Cognition

  • Oliver Contier

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Reviewing Editor

  1. Morgan Barense, University of Toronto, Canada

Ethics

Human subjects: All research participants in the fMRI and MEG studies provided informed consent to participate and to share their data, and they received financial compensation for taking part in the respective studies. The research was approved by the NIH Institutional Review Board as part of the study protocol 93-M-0170 (NCT00001360). All research participants taking part in the online behavioral study provided informed consent to participate in the study. The online study was conducted in accordance with all relevant ethical regulations and approved by the NIH Office of Human Subjects Research Protections (OHSRP).

Version history

  1. Preprint posted: July 23, 2022 (view preprint)
  2. Received: August 9, 2022
  3. Accepted: February 25, 2023
  4. Accepted Manuscript published: February 27, 2023 (version 1)
  5. Version of Record published: March 24, 2023 (version 2)
  6. Version of Record updated: March 31, 2023 (version 3)
  7. Version of Record updated: April 11, 2023 (version 4)

Copyright

This is an open-access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.

Metrics

  • 6,095 views
  • 899 downloads
  • 21 citations

Views, downloads and citations are aggregated across all versions of this paper published by eLife.

Cite this article

Martin N Hebart, Oliver Contier, Lina Teichmann, Adam H Rockter, Charles Y Zheng, Alexis Kidder, Anna Corriveau, Maryam Vaziri-Pashkam, Chris I Baker (2023) THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior. eLife 12:e82580. https://doi.org/10.7554/eLife.82580

Further reading

    1. Neuroscience
    Zahid Padamsey, Danai Katsanevaki ... Nathalie L Rochefort
    Research Article

    Mammals have evolved sex-specific adaptations to reduce energy usage in times of food scarcity. These adaptations are well described for peripheral tissue, though much less is known about how the energy-expensive brain adapts to food restriction, and how such adaptations differ across the sexes. Here, we examined how food restriction impacts energy usage and function in the primary visual cortex (V1) of adult male and female mice. Molecular analysis and RNA sequencing in V1 revealed that in males, but not in females, food restriction significantly modulated canonical, energy-regulating pathways, including pathways associated with AMP-activated protein kinase, peroxisome proliferator-activated receptor alpha, mammalian target of rapamycin, and oxidative phosphorylation. Moreover, we found that in contrast to males, food restriction in females did not significantly affect V1 ATP usage or visual coding precision (assessed by orientation selectivity). Decreased serum leptin is known to be necessary for triggering energy-saving changes in V1 during food restriction. Consistent with this, we found significantly decreased serum leptin in food-restricted males but no significant change in food-restricted females. Collectively, our findings demonstrate that cortical function and energy usage in female mice are more resilient to food restriction than in males. The neocortex, therefore, contributes to sex-specific, energy-saving adaptations in response to food restriction.

    1. Neuroscience
    Jack W Lindsey, Elias B Issa
    Research Article

    Object classification has been proposed as a principal objective of the primate ventral visual stream and has been used as an optimization target for deep neural network (DNN) models of the visual system. However, visual brain areas represent many different types of information, and optimizing for classification of object identity alone does not constrain how other information may be encoded in visual representations. Information about different scene parameters may be discarded altogether (‘invariance’), represented in non-interfering subspaces of population activity (‘factorization’) or encoded in an entangled fashion. In this work, we provide evidence that factorization is a normative principle of biological visual representations. In the monkey ventral visual hierarchy, we found that factorization of object pose and background information from object identity increased in higher-level regions and strongly contributed to improving object identity decoding performance. We then conducted a large-scale analysis of factorization of individual scene parameters – lighting, background, camera viewpoint, and object pose – in a diverse library of DNN models of the visual system. Models which best matched neural, fMRI, and behavioral data from both monkeys and humans across 12 datasets tended to be those which factorized scene parameters most strongly. Notably, invariance to these parameters was not as consistently associated with matches to neural and behavioral data, suggesting that maintaining non-class information in factorized activity subspaces is often preferred to dropping it altogether. Thus, we propose that factorization of visual scene information is a widely used strategy in brains and DNN models thereof.
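    To make the notion of factorization concrete, here is a small self-contained toy (an illustrative metric of our own devising, not necessarily the analysis used in the article above): simulate a population in which identity and pose drive activity along separate directions, estimate the identity-coding subspace, and measure how much pose-driven variance falls outside it. A score near 1 means pose information occupies a non-interfering subspace; a score near 0 means it is entangled with the identity code.

```python
# Toy factorization score: fraction of pose-driven variance that lies outside
# the identity-coding subspace. Illustrative only; not the article's metric.
import numpy as np

rng = np.random.default_rng(0)
n_units = 100

# Toy population: identity and pose each drive activity along their own random
# (hence nearly orthogonal in high dimensions) directions, plus a little noise.
identity_axes = rng.standard_normal((5, n_units))   # 5 object identities
pose_axes = rng.standard_normal((4, n_units))       # 4 poses
responses = np.stack([
    np.stack([identity_axes[i] + pose_axes[p] + 0.1 * rng.standard_normal(n_units)
              for p in range(4)])
    for i in range(5)
])                                                   # shape (5, 4, n_units)

# Identity subspace: principal directions of the pose-averaged responses.
identity_means = responses.mean(axis=1)              # (5, n_units)
centered = identity_means - identity_means.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
identity_subspace = vt[:4]                           # rank of the centered means

# Pose-driven fluctuations, split into parts inside vs outside that subspace.
pose_fluct = responses - identity_means[:, None, :]  # remove each identity's mean
inside = pose_fluct @ identity_subspace.T @ identity_subspace
factorization = 1.0 - (inside ** 2).sum() / (pose_fluct ** 2).sum()
print(f"toy factorization score: {factorization:.2f}")  # close to 1 => factorized
```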