Physical Inference: How the brain represents mass
Imagine that you are driving to work along an icy road when a deer suddenly jumps into your path. Depending on the distance, you may have time to apply the brakes, or you may consider swerving to avoid a collision. Your intuitive ability to reason about the physics of objects in your environment, for instance their mass, could mean the difference between a fatal crash and a safe arrival at your workplace. However, the way that the brain computes the mass of an object remains a matter of debate. Specifically, we do not know whether object mass is primarily processed in dorsal fronto-parietal areas of the cortex (a region involved in action planning), or whether this information is first represented in ventral areas of the cortex (which are engaged in object perception).
In 2014 it was reported that activation patterns in ventral visual areas predicted the weight of an object about to be lifted (Gallivan et al., 2014). Conversely, in 2018 one of the present authors (JCS) and co-workers found that a patient with bilateral brain lesions that included the ventral visual cortex was, nevertheless, sensitive to object weight (Buckingham et al., 2018). Now, in eLife, Sarah Schwettmann, Joshua Tenenbaum and Nancy Kanwisher from the Massachusetts Institute of Technology report having characterized the human brain regions and computations involved in intuitive physical reasoning about mass (Schwettmann et al., 2019).
Schwettmann et al. focused on the areas of the fronto-parietal cortex that were identified in a previous study (Fischer et al., 2016). They applied machine learning to fMRI data to characterize how the mass of objects is represented in these brain areas. If an algorithm can be trained to correctly predict whether someone is looking at a heavy or a light object simply based on the patterns of activation in a specific brain region, then it indicates that this brain area actively represents mass. Furthermore, if the algorithm can predict the weight of the object the observer is viewing even when other elements in the stimulus are changed, such as composition or speed, then the representation is said to remain ‘invariant’, or stable. And indeed, Schwettmann et al. show that such invariant representations of object mass exist in the dorsal fronto-parietal cortex across three experiments (Figure 1).
In the first experiment, the participants were asked to judge the weight of basic geometric solids presented in dynamic movie clips in which the objects splashed into water, fell onto a pillow, and were blown across a surface. The algorithm was ‘trained’ on the data obtained from two of these movies — that is, it received both the fMRI data and the information about whether the viewer was observing a heavy or light object. The team then found that the algorithm could predict the weight of the object the volunteer observed in the third movie based solely on the fMRI data from the dorsal brain areas. The second experiment showed that these brain regions also appeared to process mass when the observers were asked to pay attention to the color of the objects rather than their weight. In the last experiment, Schwettmann et al. demonstrated that representations of mass in the dorsal cortex remained invariant even as the surface materials and the amount of motion of the objects changed. Finally, follow-up analyses revealed that the algorithm could reliably use data from the dorsal cortex to predict object mass, but could not do so for data from areas along the ventral cortex.
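To make the logic of this cross-decoding analysis concrete, the sketch below shows how training a classifier on two scenarios and testing it on a held-out one could be implemented. This is not the authors' actual pipeline: the data arrays, labels and the choice of a logistic-regression classifier in scikit-learn are illustrative assumptions, standing in for preprocessed voxel patterns extracted from the fronto-parietal region of interest and for the appropriate cross-validation and permutation statistics.

```python
# Minimal sketch of cross-condition ("invariance") decoding.
# Not the authors' pipeline: data are random stand-ins and all names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def fake_patterns(n_trials=40, n_voxels=200):
    """Stand-in for real fMRI data: a trials-by-voxels matrix plus heavy/light labels."""
    X = rng.normal(size=(n_trials, n_voxels))
    y = rng.integers(0, 2, size=n_trials)  # 0 = light object, 1 = heavy object
    return X, y

# One (patterns, labels) pair per movie scenario: splashing into water,
# falling onto a pillow, being blown across a surface.
scenarios = {name: fake_patterns() for name in ["splash", "pillow", "blow"]}

# Train on two scenarios, test on the held-out one. Above-chance accuracy on the
# held-out scenario is the signature of a representation of mass that generalizes
# across changes in the surrounding context.
for held_out in scenarios:
    train_names = [s for s in scenarios if s != held_out]
    X_train = np.vstack([scenarios[s][0] for s in train_names])
    y_train = np.concatenate([scenarios[s][1] for s in train_names])
    X_test, y_test = scenarios[held_out]

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    print(f"held-out scenario '{held_out}': accuracy = {clf.score(X_test, y_test):.2f}")
```

With the random stand-in data above, accuracy hovers around chance (0.5); in the study, reliably above-chance decoding of heavy versus light objects from dorsal fronto-parietal activity in the held-out scenario is what licenses the claim of an invariant representation of mass.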
Taken together, these results reveal that some areas in the fronto-parietal cortex compute physical variables and anticipate the dynamics of objects. The finding that, during a perceptual task, object mass is represented in the dorsal cortex but not in ventral areas suggests that information about weight may be processed first in the dorsal cortex, with the ventral regions receiving these signals only later, during action planning.
The results also fit with a growing body of evidence that the dorsal cortex is involved in visual perception as well as space and action computations (Erlikhman et al., 2018; Freud et al., 2016). Exactly how invariant representations of physical parameters, such as object mass, are integrated with the computations required for goal-directed actions remains a tantalizing next step for future research.
Mass representations in the fronto-parietal cortex remain surprisingly invariant across changes in stimuli, environments and tasks. Such invariance is presumably advantageous because mass can be extracted from different visual cues and generalized to new scenarios. That the dorsal cortex computes mass automatically, whether or not it is the focus of someone’s attention, suggests that information about the physical parameters of the environment is sufficiently important for the brain to keep track of it all the time. Future studies will be required to examine whether dorsal brain areas also represent other potentially important physical variables, such as force. It is likely that active, invariant representations of environmental physics can help to quickly guide action, and that they may therefore be a key adaptation for survival.
References
- Buckingham et al. (2018) Preserved object weight processing after bilateral lateral occipital complex lesions. Journal of Cognitive Neuroscience 30:1683–1690. https://doi.org/10.1162/jocn_a_01314
- Erlikhman et al. (2018) Towards a unified perspective of object shape and motion processing in human dorsal cortex. Consciousness and Cognition 64:106–120. https://doi.org/10.1016/j.concog.2018.04.016
- Freud et al. (2016) 'What' is happening in the dorsal visual pathway. Trends in Cognitive Sciences 20:773–784. https://doi.org/10.1016/j.tics.2016.08.003
- Gallivan et al. (2014) Representation of object weight in human ventral visual cortex. Current Biology 24:1866–1873. https://doi.org/10.1016/j.cub.2014.06.046
Copyright
© 2020, Fairchild and Snow
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.