Neuroscience

Invariant representations of mass in the human brain

  1. Sarah Schwettmann (corresponding author)
  2. Joshua B Tenenbaum
  3. Nancy Kanwisher
  1. Massachusetts Institute of Technology, United States
Research Article
Cite this article as: eLife 2019;8:e46619 doi: 10.7554/eLife.46619

Abstract

An intuitive understanding of physical objects and events is critical for successfully interacting with the world. Does the brain achieve this understanding by running simulations in a mental physics engine, which represents variables such as force and mass, or by analyzing patterns of motion without encoding underlying physical quantities? To investigate, we scanned participants with fMRI while they viewed videos of objects interacting in scenarios indicating their mass. Decoding analyses in brain regions previously implicated in intuitive physical inference revealed mass representations that generalized across variations in scenario, material, friction, and motion energy. These invariant representations were found during tasks that did not involve action planning and during tasks focused on an orthogonal dimension (object color). Our results support an account of physical reasoning in which abstract physical variables serve as inputs to a forward model of dynamics, akin to a physics engine, in parietal and frontal cortex.
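The key analysis described above is cross-condition decoding: a classifier trained to distinguish mass levels from brain activity in one scenario is tested on activity from a different scenario, so above-chance accuracy implies a representation of mass that is invariant to scenario. The sketch below illustrates that logic on simulated data; the array sizes, the simulated "mass axis", and the nearest-centroid decoder are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 100

# Hypothetical shared "mass-coding" direction in voxel space.
mass_axis = rng.standard_normal(n_voxels)

def simulate_scenario(labels, noise=1.0):
    # Each trial: mass signal (heavy = 1, light = 0) along the shared
    # axis, plus scenario-independent Gaussian noise.
    return np.outer(labels, mass_axis) + noise * rng.standard_normal(
        (len(labels), n_voxels)
    )

train_labels = rng.integers(0, 2, n_trials)
test_labels = rng.integers(0, 2, n_trials)
train_X = simulate_scenario(train_labels)  # e.g., a "splash" scenario
test_X = simulate_scenario(test_labels)    # e.g., a "collision" scenario

# Nearest-centroid decoder: fit per-class mean patterns in the training
# scenario, then classify held-out trials from the other scenario.
c0 = train_X[train_labels == 0].mean(axis=0)
c1 = train_X[train_labels == 1].mean(axis=0)
d0 = np.linalg.norm(test_X - c0, axis=1)
d1 = np.linalg.norm(test_X - c1, axis=1)
pred = (d1 < d0).astype(int)

acc = (pred == test_labels).mean()
print(f"cross-scenario decoding accuracy: {acc:.2f}")
```

Because the simulated mass signal lies along the same direction in both scenarios, the decoder generalizes; if each scenario encoded mass along an unrelated direction, cross-scenario accuracy would fall to chance.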

Data availability

All data collected in this study is available on OpenNeuro under accession number ds002355 (doi:10.18112/openneuro.ds002355.v1.0.0).


Article and author information

Author details

  1. Sarah Schwettmann

    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
    For correspondence
    schwett@mit.edu
    Competing interests
    The authors declare that no competing interests exist.
    ORCID iD: 0000-0001-6385-1396
  2. Joshua B Tenenbaum

    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
    Competing interests
    The authors declare that no competing interests exist.
  3. Nancy Kanwisher

    Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
    Competing interests
    The authors declare that no competing interests exist.

Funding

National Institutes of Health (Grant DP1HD091947)

  • Nancy Kanwisher

National Science Foundation (Science and Technology Center for Brains, Minds and Machines)

  • Sarah Schwettmann

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Ethics

Human subjects: All participants provided informed consent before participation. The Massachusetts Institute of Technology Institutional Review Board approved all experimental protocols (protocol number: 0403000096).

Reviewing Editor

  1. Thomas Yeo, National University of Singapore, Singapore

Publication history

  1. Received: March 6, 2019
  2. Accepted: December 10, 2019
  3. Accepted Manuscript published: December 17, 2019 (version 1)
  4. Version of Record published: February 7, 2020 (version 2)

Copyright

© 2019, Schwettmann et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Metrics

  • 2,376
    Page views
  • 373
    Downloads
  • 3
    Citations

Article citation count generated by polling the highest count across the following sources: Crossref, PubMed Central, Scopus.


Further reading

    1. Neuroscience
    Laifu Li et al.
    Research Article Updated

    Consolation is a common response to the distress of others in humans and some social animals, but the neural mechanisms underlying this behavior are not well characterized. Using socially monogamous mandarin voles, we found that optogenetic or chemogenetic inhibition of 5-HTergic neurons in the dorsal raphe nucleus (DR), or optogenetic inhibition of serotonin (5-HT) terminals in the anterior cingulate cortex (ACC), significantly decreased allogrooming time in the consolation test and reduced sociability in the three-chamber test. The release of 5-HT within the ACC and the activity of DR neurons were significantly increased during allogrooming, sniffing, and social approaching. Finally, we found that activation of 5-HT1A receptors in the ACC was sufficient to reverse the consolation and sociability deficits induced by chemogenetic inhibition of 5-HTergic neurons in the DR. Our study provides the first direct evidence that the DR-ACC 5-HTergic neural circuit is implicated in consolation-like behaviors and sociability.

    1. Computational and Systems Biology
    2. Neuroscience
    Jack Goffinet et al.
    Research Article Updated

    Increases in the scale and complexity of behavioral data pose an increasing challenge for data analysis. A common strategy involves replacing entire behaviors with small numbers of handpicked, domain-specific features, but this approach suffers from several crucial limitations. For example, handpicked features may miss important dimensions of variability, and correlations among them complicate statistical testing. Here, by contrast, we apply the variational autoencoder (VAE), an unsupervised learning method, to learn features directly from data and quantify the vocal behavior of two model species: the laboratory mouse and the zebra finch. The VAE converges on a parsimonious representation that outperforms handpicked features on a variety of common analysis tasks, enables the measurement of moment-by-moment vocal variability on the timescale of tens of milliseconds in the zebra finch, provides strong evidence that mouse ultrasonic vocalizations do not cluster as is commonly believed, and captures the similarity of tutor and pupil birdsong with qualitatively higher fidelity than previous approaches. In all, we demonstrate the utility of modern unsupervised learning approaches to the quantification of complex and high-dimensional vocal behavior.