Physical Inference: How the brain represents mass

New fMRI experiments and machine learning are helping to identify how the mass of objects is processed in the brain.
  1. Grant Fairchild
  2. Jacqueline C Snow (corresponding author)
  1. University of Nevada Reno, United States

Imagine that you are driving to work along an icy road when a deer suddenly jumps into your path. Depending on the distance, you may have time to apply the brakes, or you may consider swerving to avoid a collision. Your intuitive ability to reason about the physics of objects in your environment, such as their mass, could mean the difference between a fatal crash and a safe arrival at your workplace. However, the way that the brain computes the mass of an object remains a matter of debate. Specifically, we do not know if object mass is primarily processed in dorsal fronto-parietal areas of the cortex (regions involved in action planning), or if this information is first represented in ventral areas of the cortex (which are engaged in object perception).

In 2014 it was reported that activation patterns in ventral visual areas predicted the weight of an object about to be lifted (Gallivan et al., 2014). Conversely, in 2018 one of the present authors (JCS) and co-workers found that a patient with bilateral brain lesions that included the ventral visual cortex was, nevertheless, sensitive to object weight (Buckingham et al., 2018). Now, in eLife, Sarah Schwettmann, Joshua Tenenbaum and Nancy Kanwisher from the Massachusetts Institute of Technology report having characterized the human brain regions and computations involved in intuitive physical reasoning about mass (Schwettmann et al., 2019).

Schwettmann et al. focused on the areas of the fronto-parietal cortex that were identified in a previous study (Fischer et al., 2016). They applied machine learning to fMRI data to characterize how the mass of objects is represented in these brain areas. If an algorithm can be trained to correctly predict whether someone is looking at a heavy or a light object simply based on the patterns of activation in a specific brain region, then it indicates that this brain area actively represents mass. Furthermore, if the algorithm can predict the weight of the object the observer is viewing even when other elements in the stimulus are changed, such as composition or speed, then the representation is said to remain ‘invariant’, or stable. And indeed, Schwettmann et al. show that such invariant representations of object mass exist in the dorsal fronto-parietal cortex across three experiments (Figure 1).
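
To make this decoding logic concrete, here is a minimal sketch of how such an analysis is typically set up, written in Python with scikit-learn. The voxel patterns are simulated stand-ins for trial-wise activation estimates from a region of interest, and the trial counts, region size and choice of a linear support vector machine are illustrative assumptions rather than details of the actual study.

# Minimal sketch (not the authors' pipeline): decoding "heavy" vs. "light"
# from voxel activation patterns in a single region of interest (ROI).
# The patterns are simulated with random numbers purely as placeholders
# for trial-wise fMRI estimates extracted from a dorsal ROI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n_trials_per_class = 40   # hypothetical trial count per mass condition
n_voxels = 200            # hypothetical ROI size

# Placeholder voxel patterns: "heavy" trials get a small added signal so the
# example is decodable; real data would come from a GLM, not a simulation.
light = rng.normal(0.0, 1.0, size=(n_trials_per_class, n_voxels))
heavy = rng.normal(0.3, 1.0, size=(n_trials_per_class, n_voxels))

X = np.vstack([light, heavy])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)  # 0 = light, 1 = heavy

# Linear classifier with stratified cross-validation: accuracy reliably above
# the 50% chance level indicates that the ROI's activation patterns carry
# information about object mass.
clf = LinearSVC(C=1.0, max_iter=10000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)

print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")

Reliably above-chance accuracy indicates that the region's activation patterns carry information about mass; testing invariance then amounts to training and testing the classifier on different stimulus conditions, as in the experiments described below.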

Figure 1. Investigating dorsal representation of object mass.

Schematics showing the three experiments performed by Schwettmann et al.: in the first experiment, participants watched brief movies depicting basic geometric shapes of low or high mass (left, top) and were asked to judge the mass of the object shown in each movie. The second experiment used the same set of movies, except that the participants were required to judge the color of the object in half of the trials. In the final experiment, the geometric solids depicted in the movies were made of four different surface materials (Lego, aluminum, cardboard, cork) that moved differently when the object slid down a ramp because of differences in mass and friction (left, bottom). Together, the experiments identified dorsal regions that consistently represent object mass, and showed that these representations are both automatic and invariant.

In the first experiment, the participants were asked to judge the weight of basic geometric solids presented in dynamic movie clips in which the objects splashed into water, fell onto a pillow, and were blown across a surface. The algorithm was ‘trained’ on the data obtained from two of these movies — that is, it received both the fMRI data and the information about whether the viewer was observing a heavy or light object. The team then found that the algorithm could predict the weight of the object the volunteer observed in the third movie based solely on the fMRI data from the dorsal brain areas. The second experiment showed that these brain regions also appeared to process mass when the observers were asked to pay attention to the color of the objects rather than their weight. In the last experiment, Schwettmann et al. demonstrated that representations of mass in the dorsal cortex remained invariant even as the surface materials and the amount of motion of the objects changed. Finally, follow-up analyses revealed that the algorithm could reliably use data from the dorsal cortex to predict object mass, but could not do so for data from areas along the ventral cortex.
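
The generalization test at the core of the first experiment follows a leave-one-scenario-out logic: train on two of the movie scenarios and test on the held-out third. The sketch below illustrates this scheme with simulated data; the scenario labels simply stand for the splash, pillow and blowing movies described above, and the classifier and trial counts are placeholders rather than the authors' actual pipeline.

# Minimal sketch (not the authors' code) of the leave-one-scenario-out logic:
# train a classifier on heavy/light patterns from two movie scenarios and test
# it on the held-out third scenario. Above-chance transfer would indicate a
# mass representation that generalizes across scenarios ("invariance").
# Voxel patterns are simulated placeholders; scenario names are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_voxels = 30, 200
scenarios = ["splash", "pillow", "blow"]   # stand-ins for the three movie types

def simulate_scenario():
    """Placeholder for trial-wise ROI patterns (half light, half heavy)."""
    light = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))
    heavy = rng.normal(0.3, 1.0, size=(n_trials, n_voxels))
    X = np.vstack([light, heavy])
    y = np.array([0] * n_trials + [1] * n_trials)  # 0 = light, 1 = heavy
    return X, y

data = {name: simulate_scenario() for name in scenarios}

for held_out in scenarios:
    # Train on the two remaining scenarios, test on the held-out one.
    train_names = [s for s in scenarios if s != held_out]
    X_train = np.vstack([data[s][0] for s in train_names])
    y_train = np.concatenate([data[s][1] for s in train_names])
    X_test, y_test = data[held_out]

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    acc = clf.score(X_test, y_test)
    print(f"Train on {train_names}, test on '{held_out}': accuracy = {acc:.2f}")

If accuracy on the held-out scenario stays above chance for every split, the decoded mass signal cannot be explained by the visual particulars of any single scenario, which is what is meant by an invariant representation.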

Taken together, these results reveal that some areas in the fronto-parietal cortex compute physical variables and anticipate the dynamics of objects. The finding that, during a perceptual task, object mass is represented in the dorsal cortex but not in ventral areas suggests that information about weight may first be processed in the dorsal cortex, although ventral regions may later receive these signals during action planning.

The results also fit with a growing body of evidence that the dorsal cortex is involved in visual perception as well as space and action computations (Erlikhman et al., 2018; Freud et al., 2016). Exactly how invariant representations of physical parameters, such as object mass, are integrated with the computations required for goal-directed actions remains a tantalizing next step for future research.

Mass representations in the fronto-parietal cortex remain surprisingly invariant across changes in stimuli, environments and tasks. Such invariance is presumably advantageous because mass can be extracted from different visual cues and generalized to new scenarios. That the dorsal cortex computes mass automatically, whether or not it is the focus of someone’s attention, suggests that information about the physical parameters of the environment is sufficiently important for the brain to keep track of it all the time. Future studies will be required to examine whether dorsal brain areas also represent other potentially important physical variables, such as force. It is likely that active, invariant representations of environmental physics can help to quickly guide action, and that they may therefore be a key adaptation for survival.

Article and author information

Author details

  1. Grant Fairchild

    Grant Fairchild is in the Department of Psychology, University of Nevada Reno, Reno, United States

    Competing interests
    No competing interests declared
  2. Jacqueline C Snow

    Jacqueline C Snow is in the Department of Psychology, University of Nevada Reno, Reno, United States

    For correspondence
    snow@unr.edu
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-8284-8704

Copyright

© 2020, Fairchild and Snow

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.

Cite this article

Grant Fairchild, Jacqueline C Snow (2020) Physical Inference: How the brain represents mass. eLife 9:e54373. https://doi.org/10.7554/eLife.54373