THINGS-data, a multimodal collection of large-scale datasets for investigating object representations in human brain and behavior

  1. Martin N Hebart (corresponding author)
  2. Oliver Contier
  3. Lina Teichmann
  4. Adam H Rockter
  5. Charles Y Zheng
  6. Alexis Kidder
  7. Anna Corriveau
  8. Maryam Vaziri-Pashkam
  9. Chris I Baker
  1. Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, United States
  2. Vision and Computational Cognition Group, Max Planck Institute for Human Cognitive and Brain Sciences, Germany
  3. Department of Medicine, Justus Liebig University Giessen, Germany
  4. Max Planck School of Cognition, Max Planck Institute for Human Cognitive and Brain Sciences, Germany
  5. Machine Learning Core, National Institute of Mental Health, National Institutes of Health, United States
9 figures and 2 additional files

Figures

Figure 1 with 1 supplement
Overview of the datasets.

(A) THINGS-data comprises MEG, fMRI and behavioral responses to large samples of object images taken from the THINGS database. (B) In the fMRI and MEG experiments, participants viewed object images …

Figure 1—figure supplement 1
Effects of ICA denoising on fMRI noise ceiling estimates, for all three fMRI participants.

Each data point represents a voxel in a visual mask determined based on the localizer experiment. The x-axis shows the test data noise ceiling in % explainable variance after standard preprocessing. …
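The noise ceiling here quantifies the percentage of response variance that is explainable given trial-to-trial noise. Below is a minimal sketch of one common estimator of this quantity, assuming a `(n_reps, n_stimuli)` response array per voxel; the input layout and estimator details are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def noise_ceiling_percent(responses, n_avg=1):
    """Percent explainable variance for one voxel or sensor.

    responses : (n_reps, n_stimuli) array of responses to repeated
                presentations of the same stimuli (assumed layout).
    n_avg     : number of trial repeats averaged at test time.
    """
    # Noise variance: variance across repeats, averaged over stimuli.
    noise_var = responses.var(axis=0, ddof=1).mean()
    # Signal variance: variance of the trial-averaged response,
    # corrected for the residual noise it still contains.
    n_reps = responses.shape[0]
    signal_var = responses.mean(axis=0).var(ddof=1) - noise_var / n_reps
    signal_var = max(signal_var, 0.0)
    # Averaging n_avg repeats shrinks the noise variance by 1/n_avg.
    return 100.0 * signal_var / (signal_var + noise_var / n_avg)
```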

Figure 2 with 5 supplements
Quality metrics for fMRI and MEG datasets.

fMRI participants are labeled F1-F3 and MEG participants M1-M4. (A) Head motion in the fMRI experiment as measured by the mean framewise displacement in each functional run of each …
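Framewise displacement is typically computed following Power et al. (2012): the sum of absolute frame-to-frame changes in the six realignment parameters, with rotations converted to millimeters on a 50 mm sphere. A hedged sketch; the input layout, units, and sphere radius are conventional assumptions rather than details taken from the paper.

```python
import numpy as np

def mean_framewise_displacement(motion_params, radius=50.0):
    """Mean framewise displacement for one functional run.

    motion_params : (n_volumes, 6) array of three translations (mm)
                    and three rotations (radians) from realignment
                    (assumed layout).
    """
    params = motion_params.astype(float).copy()
    params[:, 3:] *= radius                      # rotations -> mm of arc
    fd = np.abs(np.diff(params, axis=0)).sum(axis=1)
    return fd.mean()
```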

Figure 2—figure supplement 1
Event-related fields for occipital, temporal, and parietal sensors.

After preprocessing, event-related fields were calculated for each participant (columns 1–4). Every row shows a different sensor group, as depicted in column 5. Thin lines correspond to the average …

Figure 2—figure supplement 2
MEG noise ceilings for all sensors.

Every column shows the noise ceiling for a given participant. The last column highlights which sensors were considered for each sensor group (row). Noise ceilings were calculated for each sensor …

Figure 2—figure supplement 3
fMRI voxel-wise noise ceilings per participant projected onto the flattened cortical surface.

(A) The noise ceiling estimate on the level of single trial responses. (B) Noise ceiling estimate in the test dataset where responses from 12 trial repetitions can be averaged. Note that the range …
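The different ranges in (A) and (B) follow directly from trial averaging: averaging n repeats leaves the signal variance unchanged but divides the noise variance by n. A small worked example (the variances are illustrative, not values from the dataset):

```python
# Noise ceiling in % explainable variance for signal variance s2 and
# single-trial noise variance n2, when n trial repeats are averaged.
def nc_percent(s2, n2, n=1):
    return 100.0 * s2 / (s2 + n2 / n)

print(nc_percent(1.0, 4.0, n=1))   # single trial: 20.0
print(nc_percent(1.0, 4.0, n=12))  # 12-repeat average: 75.0
```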

Figure 2—figure supplement 4
Head coil positioning across runs in the MEG experiment.

Head position was recorded with three marker coils attached at the nasion and the left and right preauricular points. The coil positions were recorded before and after each run. To calculate the …
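Given pre- and post-run positions, per-run head movement can be summarized as the Euclidean displacement of each marker coil. A minimal sketch, assuming (3, 3) arrays of x/y/z coordinates with one row per coil; the exact layout and units are assumptions.

```python
import numpy as np

def coil_displacement(pre, post):
    """Euclidean within-run movement of each marker coil.

    pre, post : (3, 3) arrays of x/y/z positions for the nasion and
                the left/right preauricular coils, measured before
                and after a run (assumed layout).
    """
    return np.linalg.norm(post - pre, axis=1)   # one value per coil
```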

Figure 2—figure supplement 5
Example visualization used for the manual labeling of independent components.

For the ICA-based denoising, two raters manually labeled a subset of all independent components as signal or noise based on these visualizations. For the depicted example component, both raters …

Figure 3 with 1 supplement
Behavioral similarity dataset.

(A) How much data is required to capture the core representational dimensions underlying human similarity judgments? Based on the original dataset of 1.46 million triplets (Hebart et al., 2020), it …
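The triplet task underlying this dataset asks which of three objects is the odd one out, and embeddings of the kind analyzed in (A) predict these choices from pairwise dot-product similarities passed through a softmax. A sketch of that choice rule for a SPoSE-style non-negative embedding; the training objective itself is not reproduced here.

```python
import numpy as np

def odd_one_out_probs(X, i, j, k):
    """Choice probabilities for one triplet under a SPoSE-style model.

    X : (n_objects, n_dims) non-negative embedding matrix
        (hypothetical input; e.g. a learned 66-dimensional embedding).
    Returns probabilities that (i, j), (i, k), or (j, k) are judged
    most similar, i.e. that k, j, or i is the odd one out.
    """
    s = X @ X.T                                    # dot-product similarity
    logits = np.array([s[i, j], s[i, k], s[j, k]])
    p = np.exp(logits - logits.max())              # numerically stable softmax
    return p / p.sum()
```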

Figure 3—figure supplement 1
Changes in embedding dimensions between original embedding (49 dimensions) and the new embedding (66 dimensions) based on the full dataset.

Lines correspond to Pearson correlations between old and new dimensions, only showing cases with r>0.3 for dimensions that already have a strong pairing (e.g. ‘artificial/hard’ with …

Figure 4
Object image decoding in fMRI and MEG.

(A) Decoding accuracies in the fMRI data from a searchlight-based pairwise classification analysis visualized on the cortical surface. (B) Analogous decoding accuracies in the MEG data plotted over …
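Pairwise classification of this kind averages cross-validated two-class decoding accuracies over all stimulus pairs, applied per searchlight in fMRI and per time point in MEG. A generic scikit-learn sketch; the classifier and cross-validation scheme here are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from itertools import combinations
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def pairwise_decoding_accuracy(patterns, labels, cv=5):
    """Mean cross-validated pairwise classification accuracy.

    patterns : (n_trials, n_features) response patterns, e.g. voxels
               in one searchlight or MEG sensors at one time point.
    labels   : (n_trials,) stimulus identities.
    """
    accs = []
    for a, b in combinations(np.unique(labels), 2):
        mask = np.isin(labels, [a, b])
        clf = LinearDiscriminantAnalysis()
        accs.append(cross_val_score(clf, patterns[mask], labels[mask],
                                    cv=cv).mean())
    return float(np.mean(accs))
```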

Figure 5
Object category decoding and multidimensional scaling of object categories in fMRI and MEG.

(A) Decoding accuracies in the fMRI data from a searchlight-based pairwise classification analysis visualized on the cortical surface. (B) Multidimensional scaling of fMRI response patterns in …
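Multidimensional scaling as in (B) projects a precomputed dissimilarity matrix into two dimensions for visualization. A minimal scikit-learn sketch with a random placeholder RDM; 1 - correlation would be one common choice of distance, but the exact metric is not assumed here.

```python
import numpy as np
from sklearn.manifold import MDS

# Placeholder RDM standing in for pairwise dissimilarities between
# category-level fMRI response patterns (illustrative values only).
rng = np.random.default_rng(0)
rdm = rng.random((20, 20))
rdm = (rdm + rdm.T) / 2
np.fill_diagonal(rdm, 0)

mds = MDS(n_components=2, dissimilarity='precomputed', random_state=0)
coords = mds.fit_transform(rdm)   # (20, 2) layout for plotting
```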

Figure 6 with 2 supplements
Functional topography and temporal dynamics of object animacy and size.

(A) Voxel-wise regression weights for object animacy and size as predictors of trial-wise fMRI responses. The results replicate the characteristic spoke-like topography of functional tuning to …

Figure 6—figure supplement 1
Functional topography of object animacy.

fMRI single-trial responses averaged per object concept were predicted with animacy and size ratings obtained from human observers using ordinary least squares linear regression. Voxel-wise …

Figure 6—figure supplement 2
Functional topography of object size.

fMRI single-trial responses averaged per object concept were predicted with animacy and size ratings obtained from human observers using ordinary least squares linear regression. Voxel-wise …
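Both supplements fit the same model: concept-averaged single-trial responses are regressed on behavioral animacy and size ratings with ordinary least squares, yielding one voxel-wise weight map per predictor. A minimal sketch; z-scoring the ratings is an assumption made here for illustration.

```python
import numpy as np

def voxelwise_animacy_size_weights(Y, animacy, size):
    """Voxel-wise OLS weights for animacy and size.

    Y             : (n_concepts, n_voxels) responses averaged per
                    object concept.
    animacy, size : (n_concepts,) behavioral ratings.
    """
    def z(v):
        return (v - v.mean()) / v.std()

    X = np.column_stack([np.ones(len(animacy)), z(animacy), z(size)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1], beta[2]        # animacy map, size map
```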

Figure 7
Identifying shared representations between brain and behavior.

(A) Pearson correlation between perceived similarity in behavior and local fMRI activity patterns using searchlight representational similarity analysis. Similarity patterns are confined mostly to …
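Searchlight RSA of this kind correlates a local brain RDM with the behavioral similarity RDM at every searchlight center, conventionally using only the lower triangle of each matrix. A sketch of the per-searchlight step:

```python
import numpy as np
from scipy.stats import pearsonr

def rsa_correlation(rdm_brain, rdm_behavior):
    """Pearson correlation between two representational
    dissimilarity matrices (both (n, n) and symmetric), computed
    over the lower triangle excluding the diagonal."""
    tri = np.tril_indices_from(rdm_brain, k=-1)
    return pearsonr(rdm_brain[tri], rdm_behavior[tri])[0]
```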

Figure 8
Predicting fMRI regional activity with MEG responses.

(A) Pearson correlation between predicted and true regression labels, using the mean FFA and V1 responses as the dependent variable and the multivariate MEG sensor activation pattern as the independent variable. Shaded areas …
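This fusion analysis fits, at each MEG time point, a regression from the multivariate sensor pattern to the mean ROI response and scores it by the Pearson correlation between predicted and true values. A generic sketch using cross-validated ridge regression; the regularization and cross-validation scheme are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict

def meg_to_roi_timecourse(meg, roi, cv=5):
    """Predict a mean ROI response (e.g. FFA or V1) from MEG patterns.

    meg : (n_trials, n_sensors, n_times) sensor data.
    roi : (n_trials,) mean fMRI response per trial or condition.
    Returns one Pearson r per MEG time point.
    """
    rs = []
    for t in range(meg.shape[2]):
        pred = cross_val_predict(RidgeCV(), meg[:, :, t], roi, cv=cv)
        rs.append(pearsonr(pred, roi)[0])
    return np.array(rs)
```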

Appendix 2—figure 1
Eye-tracking preprocessing and results.

(A) Visual illustration of the eye-tracking preprocessing routine. Raw data for one example run (top row) and preprocessed data for the same run (bottom row). (B) Amount of eye-tracking data removed …
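A typical first step of the kind illustrated in (A) is to discard gaze samples during and around blinks before any further analysis. A minimal sketch assuming EyeLink-style recordings in which blinks appear as zero pupil size; the data format and padding window are assumptions.

```python
import numpy as np

def drop_blink_samples(gaze, pupil, pad=10):
    """Mark gaze samples around blinks as missing.

    gaze  : (n_samples, 2) x/y gaze positions.
    pupil : (n_samples,) pupil size, zero during blinks (assumed).
    pad   : samples to discard on either side of each blink.
    """
    bad = pupil <= 0
    for i in np.flatnonzero(bad):
        bad[max(i - pad, 0):i + pad + 1] = True
    cleaned = gaze.astype(float)   # copy before masking
    cleaned[bad] = np.nan
    return cleaned
```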
