Efficient recognition of facial expressions does not require motor simulation
Abstract
What mechanisms underlie facial expression recognition? A popular hypothesis holds that efficient facial expression recognition cannot be achieved by visual analysis alone but additionally requires a mechanism of motor simulation — an unconscious, covert imitation of the observed facial postures and movements. Here, we first discuss why this hypothesis does not necessarily follow from extant empirical evidence. Next, we report experimental evidence against the central premise of this view: we demonstrate that individuals can achieve normotypical efficient facial expression recognition despite a congenital absence of relevant facial motor representations and, therefore, unaided by motor simulation. This underscores the need to reconsider the role of motor simulation in facial expression recognition.
Data availability
Data and stimulus materials are publicly available on the Open Science Framework (https://osf.io/8t4fv/?view_only=85c15cafe5d94bb6a5cff2f09a6ef56d).
- Data from: Efficient recognition of facial expressions does not require motor simulation. Open Science Framework, DOI 10.17605/OSF.IO/8T4FV.
Article and author information
Funding
- Harvard University's Mind, Brain and Behavior Interfaculty Initiative (Alfonso Caramazza)
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Ethics
Human subjects: The study was approved by the local Ethics Committee at UCLouvain (registration # B403201629166). Written informed consent was obtained from all participants prior to the study, after the nature and possible consequences of the study had been explained.
Copyright
© 2020, Vannuscorps et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
Metrics
- 3,648 views
- 279 downloads
- 15 citations
Views, downloads and citations are aggregated across all versions of this paper published by eLife.